[ { "msg_contents": "Hi,\n\nwe faced a performance issue when joining 2 partitioned tables \n(declarative partitioning). The planner chooses nested loop while we \nexpect hash join.\n\nThe query and the plan are available here: https://explain.depesz.com/s/23r9\n\ntable_1 and table_2 are hash partitioned using volume_id column. Usually \nwe make analyze on partitions. We do not make analyze on the partitioned \ntable (parent).\nHowever, if we run 'analyze' on the partitioned table then planner \nstarts choosing hash join. As a comparison, the execution using nested \nloop takes about 15 minutes and if it is done using hash join then the \nquery lasts for about 1 minute. When running 'analyze' for the \npartitioned table, postgres inserts statistics for the partitioned table \ninto pg_stats (pg_statistics). Before that, there are only statistics \nfor partitions. We suspect that this is the reason for selecting bad \nquery plan.\n\nThe query is executed with cursor thus, in order to avoid parallel \nquery, I set max_parallel_workers_per_gather to 0 during tests.\n\nWe found that a similar issue was discussed in the context of \ninheritance: \nhttps://www.postgresql.org/message-id/Pine.BSO.4.64.0904161836540.11937%40leary.csoft.net \nand the conclusion was to add the following paragraph to the 'analyze' doc:\n\n > If the table being analyzed has one or more children, ANALYZE will \ngather statistics twice: once on the rows of the parent table only, and \na second time on the rows of the parent table with all of its children. \nThis second set of statistics is needed when planning queries that \ntraverse the entire inheritance tree. The autovacuum daemon, however, \nwill only consider inserts or updates on the parent table itself when \ndeciding whether to trigger an automatic analyze for that table. If that \ntable is rarely inserted into or updated, the inheritance statistics \nwill not be up to date unless you run ANALYZE manually.\n(https://www.postgresql.org/docs/13/sql-analyze.html)\n\nI would appreciate if anyone could shed some light on the following \nquestions:\n1) Is this above paragraph from docs still valid in PG 13 and does it \napply to declarative partitioning as well? Is running analyze manually \non a partitioned table needed to get proper plans for queries on \npartitioned tables? Partitioned table (in the declarative way) is \n”virtual” and does not keep any data so it seems that there are no \nstatistics that can be gathered from the table itself and statistics \nfrom partitions should be sufficient.\n2) Why does the planner need these statistics since they seem to be \nunused in the query plan. 
The query plan uses only partitions, not the \npartitioned table.\n\nPostgreSQL version number:\n version\n---------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 13.3 (Ubuntu 13.3-1.pgdg16.04+1) on x86_64-pc-linux-gnu, \ncompiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit\n(1 row)\n\n\nHow you installed PostgreSQL: From Ubuntu 16 repositories.\n\nChanges made to the settings in the postgresql.conf file:\n name | current_setting \n | source\n-------------------------------------+-----------------------------------------+----------------------\n application_name | psql \n | client\n auto_explain.log_analyze | on \n | configuration file\n auto_explain.log_min_duration | 30s \n | configuration file\n auto_explain.log_nested_statements | on \n | configuration file\n auto_explain.log_timing | off \n | configuration file\n autovacuum_freeze_max_age | 1000000000 \n | configuration file\n autovacuum_max_workers | 6 \n | configuration file\n autovacuum_vacuum_cost_delay | 20ms \n | configuration file\n autovacuum_vacuum_cost_limit | 2000 \n | configuration file\n checkpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout | 15min \n | configuration file\n cluster_name | 13/main \n | configuration file\n cpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost | 0.0005 \n | configuration file\n cursor_tuple_fraction | 1 \n | configuration file\n DateStyle | ISO, MDY \n | configuration file\n default_statistics_target | 200 \n | configuration file\n default_text_search_config | pg_catalog.english \n | configuration file\n dynamic_shared_memory_type | posix \n | configuration file\n effective_cache_size | 193385MB \n | configuration file\n effective_io_concurrency | 1000 \n | configuration file\n external_pid_file | /var/run/postgresql/13-main.pid \n | configuration file\n from_collapse_limit | 15 \n | configuration file\n geqo_threshold | 15 \n | configuration file\n idle_in_transaction_session_timeout | 1h \n | configuration file\n jit_above_cost | -1 \n | configuration file\n jit_inline_above_cost | -1 \n | configuration file\n jit_optimize_above_cost | -1 \n | configuration file\n join_collapse_limit | 15 \n | configuration file\n lc_messages | en_US.UTF-8 \n | configuration file\n lc_monetary | en_US.UTF-8 \n | configuration file\n lc_numeric | en_US.UTF-8 \n | configuration file\n lc_time | en_US.UTF-8 \n | configuration file\n log_autovacuum_min_duration | 1min \n | configuration file\n log_checkpoints | on \n | configuration file\n log_connections | on \n | configuration file\n log_destination | stderr \n | configuration file\n log_directory | pg_log \n | configuration file\n log_disconnections | on \n | configuration file\n log_filename | postgresql-%Y-%m-%d_%H%M%S.log \n | configuration file\n log_line_prefix | %t [%p-%l] app=%a %q%u@%d \n | configuration file\n log_lock_waits | on \n | configuration file\n log_min_duration_statement | 3s \n | configuration file\n log_rotation_age | 1d \n | configuration file\n log_rotation_size | 1GB \n | configuration file\n log_temp_files | 0 \n | configuration file\n log_timezone | America/New_York \n | configuration file\n logging_collector | on \n | configuration file\n maintenance_work_mem | 2GB \n | configuration file\n max_connections | 1000 \n | configuration file\n max_locks_per_transaction | 1280 \n | configuration file\n max_parallel_workers_per_gather | 6 \n | configuration file\n 
max_stack_depth | 2MB \n | environment variable\n max_wal_size | 10GB \n | configuration file\n max_worker_processes | 26 \n | configuration file\n min_wal_size | 1GB \n | configuration file\n pg_stat_statements.max | 2000 \n | configuration file\n pg_stat_statements.track | all \n | configuration file\n pg_stat_statements.track_planning | off \n | configuration file\n port | 5433 \n | configuration file\n random_page_cost | 1.5 \n | configuration file\n shared_buffers | 8GB \n | configuration file\n shared_preload_libraries | pg_stat_statements,auto_explain \n | configuration file\n ssl | on \n | configuration file\n ssl_cert_file | \n/etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n ssl_key_file | \n/etc/ssl/private/ssl-cert-snakeoil.key | configuration file\n stats_temp_directory | \n/var/run/postgresql/13-main.pg_stat_tmp | configuration file\n temp_buffers | 2GB \n | configuration file\n TimeZone | America/New_York \n | configuration file\n track_commit_timestamp | on \n | configuration file\n track_io_timing | on \n | configuration file\n unix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_freeze_table_age | 1000000000 \n | configuration file\n wal_buffers | 128MB \n | configuration file\n work_mem | 758MB \n | configuration file\n(75 rows)\n\n\nOperating system and version: Linux r730server 4.15.0-142-generic \n#146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021 x86_64 x86_64 \nx86_64 GNU/Linux\nWhat program you're using to connect to PostgreSQL: psql\nIs there anything relevant or unusual in the PostgreSQL server logs?: No\n\n-- \nBest Regards\nKamil Frydel\n\n\n", "msg_date": "Thu, 22 Jul 2021 13:32:51 +0200", "msg_from": "Kamil Frydel <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned table statistics vs autoanalyze" }, { "msg_contents": "On Thu, Jul 22, 2021 at 01:32:51PM +0200, Kamil Frydel wrote:\n> table_1 and table_2 are hash partitioned using volume_id column. Usually we\n> make analyze on partitions. We do not make analyze on the partitioned table\n> (parent).\n> However, if we run 'analyze' on the partitioned table then planner starts\n> choosing hash join. As a comparison, the execution using nested loop takes\n> about 15 minutes and if it is done using hash join then the query lasts for\n> about 1 minute. When running 'analyze' for the partitioned table, postgres\n> inserts statistics for the partitioned table into pg_stats (pg_statistics).\n> Before that, there are only statistics for partitions. We suspect that this\n> is the reason for selecting bad query plan.\n\n> updated, the inheritance statistics will not be up to date unless you run\n> ANALYZE manually.\n> (https://www.postgresql.org/docs/13/sql-analyze.html)\n> \n> I would appreciate if anyone could shed some light on the following\n> questions:\n> 1) Is this above paragraph from docs still valid in PG 13 and does it apply\n> to declarative partitioning as well? Is running analyze manually on a\n> partitioned table needed to get proper plans for queries on partitioned\n> tables? Partitioned table (in the declarative way) is ”virtual” and does not\n> keep any data so it seems that there are no statistics that can be gathered\n> from the table itself and statistics from partitions should be sufficient.\n\nUp through v13, autoanalyze doesn't collect stats on parent tables (neither\ndeclarative nor inheritence). I agree that this doesn't seem to be well\ndocumented. 
I think it should also be mentioned here:\nhttps://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-STATISTICS\n\nIn v14 (which is currently in beta), autoanalyze will process the partitioned\ntable automatically:\nhttps://www.postgresql.org/docs/14/release-14.html\n|Autovacuum now analyzes partitioned tables (Yuzuko Hosoya, Álvaro Herrera)\n|Insert, update, and delete tuple counts from partitions are now propagated to their parent tables so autovacuum knows when to process them.\n\n> 2) Why does the planner need these statistics since they seem to be unused\n> in the query plan. The query plan uses only partitions, not the partitioned\n> table.\n\nThe \"inherited\" stats are used when you SELECT FROM table. The stats for the\nindividual table would be needed when you SELECT FROM ONLY table (which makes\nno sense for a partitioned table).\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Jul 2021 07:15:59 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table statistics vs autoanalyze" }, { "msg_contents": "> In v14 (which is currently in beta), autoanalyze will process the partitioned\n> table automatically:\n> https://www.postgresql.org/docs/14/release-14.html\n> |Autovacuum now analyzes partitioned tables (Yuzuko Hosoya, Álvaro Herrera)\n> |Insert, update, and delete tuple counts from partitions are now propagated to their parent tables so autovacuum knows when to process them.\n> \n\nThank you for the prompt reply! Changes in v14 sound promising.\n\n\n-- \nBest regards\nKamil Frydel\n\n\n", "msg_date": "Thu, 22 Jul 2021 16:20:16 +0200", "msg_from": "Kamil Frydel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned table statistics vs autoanalyze" } ]
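A minimal sketch of the manual-ANALYZE workaround discussed in the thread above, for v13 and older (the table names are the ones from the report; running this from cron or another scheduler is an assumption):

    -- Gather inherited statistics for the partitioned parents; autovacuum will
    -- not do this on v13 and older, so it must be run manually whenever the
    -- partitions have changed enough to matter.
    ANALYZE table_1;
    ANALYZE table_2;

    -- Verify that parent-level statistics now exist: the rows with
    -- inherited = true are what the planner uses for joins on the parents.
    SELECT tablename, attname, inherited, n_distinct
    FROM pg_stats
    WHERE tablename IN ('table_1', 'table_2') AND inherited;

On v14 and later, as noted in the reply, autovacuum takes care of this automatically.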
[ { "msg_contents": "Dear Team,\n\nRecently we have noticed that in one of our DB instances there is a potential delay in querying a table from java code. could you please check the attached log and help understand what is the problem and which direction should be look into solving this delay of 4 odd mins ?\n\nThe table definition is as below, it contains around 2 billion rows.\n\ncreate table \"TAFJ_HASHLOCKS\" (recid integer);\nalter table \"TAFJ_HASHLOCKS\" add constraint \"TAFJ_HASHLOCKS_PK\" PRIMARY KEY (recid);\n\n\n\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl execute\nFINEST: simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@13e344d, maxRows=0, fetchSize=0, flags=1\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendSimpleQuery\nFINEST: FE=> SimpleQuery(query=\"SAVEPOINT PGJDBC_AUTOSAVE\")\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendParse\nFINEST: FE=> Parse(stmt=null,query=\"SELECT RECID FROM TAFJ_HASHLOCKS WHERE RECID = $1 FOR UPDATE NOWAIT \",oids={1043})\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendBind\nFINEST: FE=> Bind(stmt=null,portal=null,$1=<'256292129'>,type=VARCHAR)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendDescribePortal\nFINEST: FE=> Describe(portal=null)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendExecute\nFINEST: FE=> Execute(portal=null,limit=0)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl sendSync\nFINEST: FE=> Sync\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl receiveCommandStatus\nFINEST: <=BE CommandStatus(RELEASE)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl receiveRFQ\nFINEST: <=BE ReadyForQuery(T)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl receiveCommandStatus\nFINEST: <=BE CommandStatus(SAVEPOINT)\nJul 22, 2021 4:25:00 PM org.postgresql.core.v3.QueryExecutorImpl receiveRFQ\nFINEST: <=BE ReadyForQuery(T)\nJul 22, 2021 4:29:20 PM org.postgresql.core.v3.QueryExecutorImpl processResults\nFINEST: <=BE ParseComplete [null]\nJul 22, 2021 4:29:20 PM org.postgresql.core.v3.QueryExecutorImpl processResults\nFINEST: <=BE BindComplete [unnamed]\nJul 22, 2021 4:29:20 PM org.postgresql.core.v3.QueryExecutorImpl receiveFields\nFINEST: <=BE RowDescription(1)\n\n\nThanks\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Thu, 22 Jul 2021 13:54:25 +0000", "msg_from": "Manoj Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Issue on a table" }, { "msg_contents": "On Thu, Jul 22, 2021 at 01:54:25PM +0000, Manoj Kumar wrote:\n> Recently we have noticed that in one of our DB instances there is a potential delay in querying a table from java code. 
could you please check the attached log and help understand what is the problem and which direction should be look into solving this delay of 4 odd mins ?\n\nI'm not familiar with the log, but it looks like the delay is in query parsing\n(ParseComplete). Which seems weird. You might try running wireshark to verify\nthat. Or check postgres logs, and make sure the query isn't being blocked by\nDDL commands. Make sure these are enabled:\n\nlog_lock_waits = 'on'\ndeadlock_timeout = '1s'\n\n> 4:25:00 PM ... execute FINEST: simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@13e344d, maxRows=0, fetchSize=0, flags=1\n> 4:25:00 PM ... sendSimpleQuery FINEST: FE=> SimpleQuery(query=\"SAVEPOINT PGJDBC_AUTOSAVE\")\n> 4:25:00 PM ... sendParse FINEST: FE=> Parse(stmt=null,query=\"SELECT RECID FROM TAFJ_HASHLOCKS WHERE RECID = $1 FOR UPDATE NOWAIT \",oids={1043})\n> 4:25:00 PM ... sendBind FINEST: FE=> Bind(stmt=null,portal=null,$1=<'256292129'>,type=VARCHAR)\n> 4:25:00 PM ... sendDescribePortal FINEST: FE=> Describe(portal=null)\n> 4:25:00 PM ... sendExecute FINEST: FE=> Execute(portal=null,limit=0)\n> 4:25:00 PM ... sendSync FINEST: FE=> Sync\n> 4:25:00 PM ... receiveCommandStatus FINEST: <=BE CommandStatus(RELEASE)\n> 4:25:00 PM ... receiveRFQ FINEST: <=BE ReadyForQuery(T)\n> 4:25:00 PM ... receiveCommandStatus FINEST: <=BE CommandStatus(SAVEPOINT)\n> 4:25:00 PM ... receiveRFQ FINEST: <=BE ReadyForQuery(T)\n> 4:29:20 PM ... processResults FINEST: <=BE ParseComplete [null]\n> 4:29:20 PM ... processResults FINEST: <=BE BindComplete [unnamed]\n> 4:29:20 PM ... receiveFields FINEST: <=BE RowDescription(1)\n\n\n", "msg_date": "Fri, 23 Jul 2021 13:18:17 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issue on a table" } ]
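A sketch of how the settings suggested above could be enabled, and how a blocking session could be spotted while the JDBC call is stalled (using ALTER SYSTEM is an assumption; editing postgresql.conf and reloading works just as well):

    ALTER SYSTEM SET log_lock_waits = 'on';
    ALTER SYSTEM SET deadlock_timeout = '1s';
    SELECT pg_reload_conf();

    -- While the statement hangs, anything stuck behind a lock (for example a
    -- parse blocked by DDL on TAFJ_HASHLOCKS) should show up here:
    SELECT pid, state, wait_event_type, wait_event,
           pg_blocking_pids(pid) AS blocked_by, query
    FROM pg_stat_activity
    WHERE wait_event_type = 'Lock';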
[ { "msg_contents": "Hi, first time posting, hope I have included the relevant information.\r\n\r\nI am trying to understand the performance of a query which is intended to retrieve a subset of the following table:\r\n\r\n\tTable \"contracts.bis_person_alle_endringer\"\r\n\t\t Column | Type | Collation | Nullable | Default \r\n\t----------------------------------+--------------------------+-----------+----------+---------\r\n\t person_id | uuid | | not null | \r\n\t dpd_gyldig_fra_dato | date | | not null | \r\n\t dpd_i_kraft_fra_dato | date | | not null | \r\n\t dpd_i_kraft_til_dato | date | | not null | \r\n\t dpd_endret_tidspunkt | timestamp with time zone | | not null | \r\n\t dpd_bis_foedselsnummer | text | | | \r\n\t dpd_bis_treffkilde_id | text | | | \r\n\t... [omitted for brevity] ...\r\n\t dpd_endret_av | text | | | \r\n\t dpd_bis_kjoenn_id | text | | | \r\n\tIndexes:\r\n\t \"bis_person_alle_endringer_by_person_id\" btree (person_id)\r\n\t \"bis_person_alle_endringer_unique_descending\" UNIQUE, btree (dpd_bis_foedselsnummer, dpd_gyldig_fra_dato DESC, dpd_endret_tidspunkt DESC)\r\n\r\n\r\n\r\n\tdpd=> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='bis_person_alle_endringer';\r\n\t\t relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\r\n\t---------------------------+----------+-------------+---------------+---------+----------+----------------+------------+---------------\r\n\t bis_person_alle_endringer | 9367584 | 1.09584e+08 | 6392129 | r | 106 | f | | 76760489984\r\n\t(1 row)\r\n\r\nI have ommitted most of the columns, as there are 106 columns in total. The ommitted columns have data types text, numeric or date, all are nullable.\r\n\r\nTo create the subsets, I (or rather my application) will receive lists of records which should be matched according to some business logic. Each of these lists will be read into a temporary table:\r\n\r\n\t\t Table \"pg_temp_9.records_to_filter_on\"\r\n\t Column | Type | Collation | Nullable | Default\r\n\t---------------------+------+-----------+----------+---------\r\n\t foedselsnummer | text | | |\r\n\t tariff_dato | date | | |\r\n\t versjons_dato | date | | |\r\n\t kjent_i_system_dato | date | | |\r\n\r\nThe subset is then created by the following query, which finds the records in contracts.bis_person_alle_endringer which satisfies the business logic (if any).\r\n\r\n select * from records_to_filter_on r\r\n left join lateral (\r\n select * from contracts.bis_person_alle_endringer b\r\n where b.dpd_bis_foedselsnummer = r.foedselsnummer AND\r\n r.kjent_i_system_dato >= b.dpd_endret_tidspunkt AND\r\n r.tariff_dato > b.dpd_gyldig_fra_dato \r\n order by b.dpd_gyldig_fra_dato desc, b.dpd_endret_tidspunkt desc\r\n limit 1\r\n ) c on true\r\n where person_id is not null and\r\n r.versjons_dato < c.dpd_i_kraft_til_dato\r\n\r\nThe temporary table records_to_filter_on and the result of the above query will typically contain 1-5 million rows (the returned subsets are used for training machine learning models).\r\n\r\nI've created a sample data set with 3.75 million rows and run EXPLAIN (ANALYZE, BUFFERS) on the query, https://explain.dalibo.com/plan/U41 (and also attached). Running the full EXPLAIN (ANALYZE, BUFFERS) takes about 30 minutes, which seems quite slow. 
However, as I am new to postgres, I find it difficult to interpret the output of the EXPLAIN (ANALYZE, BUFFERS) - most of the time is spent during an index scan, which to my understanding is \"good\". However, I don't think I understand postgres well enough to judge whether this is the best I can achieve (or at last close enough) or if the query should be rewritten. Alternatively, is it not realistic to expect faster performance given the size of the table and the hardware of the database instance?\r\n\r\nI am running PostgreSQL 11.9 on x86_64-pc-linux-gnu using AWS Aurora on a db.t3.large instance (https://aws.amazon.com/rds/instance-types/). The output of \r\n\r\n\tSELECT name, current_setting(name), source\r\n\t FROM pg_settings\r\n\t WHERE source NOT IN ('default', 'override');\r\n\r\nis attached in pg_settings.conf.\r\n\r\nI realize that these questions are a little vague, but any guidance would be much appreciated.\r\n\r\nThanks, Simen Lønsethagen\r\n\r\n", "msg_date": "Mon, 26 Jul 2021 13:56:54 +0000", "msg_from": "=?utf-8?B?U2ltZW4gQW5kcmVhcyBBbmRyZWFzc2VuIEzDuG5zZXRoYWdlbg==?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Performance of lateral join" }, { "msg_contents": "On Mon, Jul 26, 2021 at 01:56:54PM +0000, Simen Andreas Andreassen L�nsethagen wrote:\n> To create the subsets, I (or rather my application) will receive lists of records which should be matched according to some business logic. Each of these lists will be read into a temporary table:\n\nEasy first question: is the temp table analyzed before being used in a join ?\n(This is unrelated to \"explain analyze\").\n\n> I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu using AWS Aurora on a db.t3.large instance (https://aws.amazon.com/rds/instance-types/). The output of \n> \t FROM pg_settings\n> is attached in pg_settings.conf.\n\nI think the attachment is missing.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:17:45 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of lateral join" }, { "msg_contents": "> Easy first question: is the temp table analyzed before being used in a join ?\r\n\r\nNo, I haven't done that. Today, I tried to run \r\n\r\n\tANALYZE records_to_filter_on;\r\n\r\non the same sample data set (3.75 million rows) before the join, and it did not seem to make much of a difference in terms of time (new output of EXPLAIN ANALYZE at https://explain.dalibo.com/plan/YZu - it seems very similar to me). \r\n\r\nNot sure if it is relevant, but I did some experimentation with smaller samples, and for those, there was a significant speedup. Could there be some size threshold on the temp table after which running ANALYZE does not yield any speedup?\r\n \r\n> I think the attachment is missing.\r\n\r\nAdded now.\r\n\r\nSimen", "msg_date": "Tue, 27 Jul 2021 09:08:49 +0000", "msg_from": "=?utf-8?B?U2ltZW4gQW5kcmVhcyBBbmRyZWFzc2VuIEzDuG5zZXRoYWdlbg==?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of lateral join" }, { "msg_contents": "On Tue, Jul 27, 2021 at 09:08:49AM +0000, Simen Andreas Andreassen L�nsethagen wrote:\n> > Easy first question: is the temp table analyzed before being used in a join ?\n> \n> No, I haven't done that. 
Today, I tried to run \n> \n> \tANALYZE records_to_filter_on;\n> \n> on the same sample data set (3.75 million rows) before the join, and it did not seem to make much of a difference in terms of time (new output of EXPLAIN ANALYZE at https://explain.dalibo.com/plan/YZu - it seems very similar to me). \n\nIf the \"shape\" of the plan didn't change, then ANALYZE had no effect.\n\nI think you'd see an improvement if both tables were ordered by foedselsnummer.\nIt might be that that's already somewhat/partially true (?)\n\nI suggest to create an index on the temp table's r.foedselsnummer, CLUSTER on\nthat index, and then ANALYZE the table. The index won't be useful for this\nquery, it's just for clustering (unless you can instead populate the temp table\nin order).\n\nCheck if there's already high correlation of dpd_bis_foedselsnummer (over 0.9):\n| SELECT tablename, attname, inherited, null_frac, n_distinct, correlation FROM pg_stats WHERE attname='dpd_bis_foedselsnummer' AND tablename='...';\n\nIf not, consider clustering on the existing \"unique_descending\" index and then\nanalyzing that table, too.\n\nThis would also affect performance of other queries - hopefully improving\nseveral things at once.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 27 Jul 2021 21:27:33 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of lateral join" } ]
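In concrete form, the clustering suggestion above might look like the following (identifiers are taken from the thread; the index name is the default PostgreSQL generates, and the CLUSTER of the ~76 GB base table is left commented out because it rewrites the table under an exclusive lock):

    -- Order the temp table by the join key, then refresh its statistics.
    CREATE INDEX ON records_to_filter_on (foedselsnummer);
    CLUSTER records_to_filter_on USING records_to_filter_on_foedselsnummer_idx;
    ANALYZE records_to_filter_on;

    -- Check the physical ordering of the big table first; a correlation close
    -- to 1 means the heap is already roughly in index order.
    SELECT tablename, attname, inherited, null_frac, n_distinct, correlation
    FROM pg_stats
    WHERE attname = 'dpd_bis_foedselsnummer'
      AND tablename = 'bis_person_alle_endringer';

    -- Only if correlation is well below 0.9:
    -- CLUSTER contracts.bis_person_alle_endringer
    --     USING bis_person_alle_endringer_unique_descending;
    -- ANALYZE contracts.bis_person_alle_endringer;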
[ { "msg_contents": "Hi Experts,\n\nThe attached query is performing slow, this needs to be optimized to\nimprove the performance.\n\nCould you help me with query rewrite (or) on new indexes to be created to\nimprove the performance?\n\nThanks a ton in advance for your support.", "msg_date": "Tue, 27 Jul 2021 05:29:19 +0530", "msg_from": "kenny a <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance !" }, { "msg_contents": ">\n> Hi Experts,\n>\n> The attached query is performing slow, this needs to be optimized to\n> improve the performance.\n>\n> Could you help me with query rewrite (or) on new indexes to be created to\n> improve the performance?\n>\n> Thanks a ton in advance for your support.\n>\n\nHi Experts,The attached query is performing slow, this needs to be optimized to improve the performance.Could you help me with query rewrite (or) on new indexes to be created to improve the performance?Thanks a ton in advance for your support.", "msg_date": "Tue, 27 Jul 2021 22:44:03 +0530", "msg_from": "kenny a <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance !" }, { "msg_contents": "On Tue, Jul 27, 2021 at 10:44:03PM +0530, kenny a wrote:\n> Hi Experts,\n> \n> The attached query is performing slow, this needs to be optimized to\n> improve the performance.\n> \n> Could you help me with query rewrite (or) on new indexes to be created to\n> improve the performance?\n> \n> Thanks a ton in advance for your support. \n\nUh, there is no query, and I think you should read this:\n\n\thttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 27 Jul 2021 13:18:15 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance !" }, { "msg_contents": "Please don't cross post to multiple lists like this.\n\nCc: [email protected], [email protected],\n\[email protected],\n\[email protected]\n\nIf you're hoping for help on the -performance list, see this page and send the\n\"explain analyze\" for this query.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOn Tue, Jul 27, 2021 at 05:29:19AM +0530, kenny a wrote:\n> Hi Experts,\n> \n> The attached query is performing slow, this needs to be optimized to\n> improve the performance.\n> \n> Could you help me with query rewrite (or) on new indexes to be created to\n> improve the performance?\n> \n> Thanks a ton in advance for your support.\n\n\n", "msg_date": "Thu, 29 Jul 2021 10:10:24 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance !" } ]
[ { "msg_contents": "Hi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates. \n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 07:12:17 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "Hi Daniel,\n\nside note:\n\nMaybe you can tune the \"function\" with some special query optimizer\nattributes:\n IMMUTABLE | STABLE | VOLATILE | PARALLEL SAFE\n\nso in your example:\n create or replace function f1(int) returns double precision as\n\n$$\ndeclare\nbegin\n return 1;\nend;\n$$ language plpgsql *IMMUTABLE PARALLEL SAFE*;\n\n\n\"\"\" : https://www.postgresql.org/docs/13/sql-createfunction.html\nPARALLEL SAFE :\n* indicates that the function is safe to run in parallel mode without\nrestriction.*\nIMMUTABLE *: indicates that the function cannot modify the database and\nalways returns the same result when given the same argument values; that\nis, it does not do database lookups or otherwise use information not\ndirectly present in its argument list. If this option is given, any call of\nthe function with all-constant arguments can be immediately replaced with\nthe function value.*\n\"\"\"\n\nRegards,\n Imre\n\nDaniel Westermann (DWE) <[email protected]> ezt írta\n(időpont: 2021. júl. 30., P, 9:12):\n\n> Hi,\n>\n> we have a customer which was migrated from Oracle to PostgreSQL 12.5 (I\n> know, the latest version is 12.7). The migration included a lot of PL/SQL\n> code. Attached a very simplified test case. As you can see there are\n> thousands, even nested calls to procedures and functions. 
The test case\n> does not even touch any relation, in reality these functions and procedures\n> perform selects, insert and updates.\n>\n> I've tested this on my local sandbox (Debian 11) and here are the results\n> (three runs each):\n>\n> Head:\n> Time: 97275.109 ms (01:37.275)\n> Time: 103241.352 ms (01:43.241)\n> Time: 104246.961 ms (01:44.247)\n>\n> 13.3:\n> Time: 122179.311 ms (02:02.179)\n> Time: 122622.859 ms (02:02.623)\n> Time: 125469.711 ms (02:05.470)\n>\n> 12.7:\n> Time: 182131.565 ms (03:02.132)\n> Time: 177393.980 ms (02:57.394)\n> Time: 177550.204 ms (02:57.550)\n>\n>\n> It seems there are some optimizations in head, but 13.3 and 12.7 are\n> noticeable slower.\n>\n> Question: Is it expected that this takes minutes sitting on the CPU or is\n> there a performance issue? Doing the same in Oracle takes around 30\n> seconds. I am not saying that this implementation is brilliant, but for the\n> moment it is like it is.\n>\n> Thanks for any inputs\n> Regards\n> Daniel\n>\n>\n\nHi Daniel,side note: Maybe you can tune the \"function\" with some special query optimizer attributes:      IMMUTABLE | STABLE | VOLATILE |  PARALLEL SAFEso in your example:      create or replace function f1(int) returns double precision as$$declarebegin  return 1;end;$$ language plpgsql IMMUTABLE PARALLEL SAFE;\"\"\"  : https://www.postgresql.org/docs/13/sql-createfunction.html PARALLEL SAFE : indicates that the function is safe to run in parallel mode without restriction.IMMUTABLE : indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list. If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.\"\"\"Regards,  ImreDaniel Westermann (DWE) <[email protected]> ezt írta (időpont: 2021. júl. 30., P, 9:12):Hi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates. \n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 10:01:42 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "Hi\n\npá 30. 7. 
2021 v 10:02 odesílatel Imre Samu <[email protected]> napsal:\n\n> Hi Daniel,\n>\n> side note:\n>\n> Maybe you can tune the \"function\" with some special query optimizer\n> attributes:\n> IMMUTABLE | STABLE | VOLATILE | PARALLEL SAFE\n>\n> so in your example:\n> create or replace function f1(int) returns double precision as\n>\n> $$\n> declare\n> begin\n> return 1;\n> end;\n> $$ language plpgsql *IMMUTABLE PARALLEL SAFE*;\n>\n>\nIt cannot help in this case. PL/pgSQL routine (and expression calculations)\nis one CPU every time.\n\nRegards\n\nPavel\n\n\n>\n> \"\"\" : https://www.postgresql.org/docs/13/sql-createfunction.html\n> PARALLEL SAFE :\n> * indicates that the function is safe to run in parallel mode without\n> restriction.*\n> IMMUTABLE *: indicates that the function cannot modify the database and\n> always returns the same result when given the same argument values; that\n> is, it does not do database lookups or otherwise use information not\n> directly present in its argument list. If this option is given, any call of\n> the function with all-constant arguments can be immediately replaced with\n> the function value.*\n> \"\"\"\n>\n> Regards,\n> Imre\n>\n> Daniel Westermann (DWE) <[email protected]> ezt írta\n> (időpont: 2021. júl. 30., P, 9:12):\n>\n>> Hi,\n>>\n>> we have a customer which was migrated from Oracle to PostgreSQL 12.5 (I\n>> know, the latest version is 12.7). The migration included a lot of PL/SQL\n>> code. Attached a very simplified test case. As you can see there are\n>> thousands, even nested calls to procedures and functions. The test case\n>> does not even touch any relation, in reality these functions and procedures\n>> perform selects, insert and updates.\n>>\n>> I've tested this on my local sandbox (Debian 11) and here are the results\n>> (three runs each):\n>>\n>> Head:\n>> Time: 97275.109 ms (01:37.275)\n>> Time: 103241.352 ms (01:43.241)\n>> Time: 104246.961 ms (01:44.247)\n>>\n>> 13.3:\n>> Time: 122179.311 ms (02:02.179)\n>> Time: 122622.859 ms (02:02.623)\n>> Time: 125469.711 ms (02:05.470)\n>>\n>> 12.7:\n>> Time: 182131.565 ms (03:02.132)\n>> Time: 177393.980 ms (02:57.394)\n>> Time: 177550.204 ms (02:57.550)\n>>\n>>\n>> It seems there are some optimizations in head, but 13.3 and 12.7 are\n>> noticeable slower.\n>>\n>> Question: Is it expected that this takes minutes sitting on the CPU or is\n>> there a performance issue? Doing the same in Oracle takes around 30\n>> seconds. I am not saying that this implementation is brilliant, but for the\n>> moment it is like it is.\n>>\n>> Thanks for any inputs\n>> Regards\n>> Daniel\n>>\n>>\n\nHipá 30. 7. 2021 v 10:02 odesílatel Imre Samu <[email protected]> napsal:Hi Daniel,side note: Maybe you can tune the \"function\" with some special query optimizer attributes:      IMMUTABLE | STABLE | VOLATILE |  PARALLEL SAFEso in your example:      create or replace function f1(int) returns double precision as$$declarebegin  return 1;end;$$ language plpgsql IMMUTABLE PARALLEL SAFE;It cannot help in this case. PL/pgSQL routine (and expression calculations) is one CPU every time.RegardsPavel \"\"\"  : https://www.postgresql.org/docs/13/sql-createfunction.html PARALLEL SAFE : indicates that the function is safe to run in parallel mode without restriction.IMMUTABLE : indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list. 
If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.\"\"\"Regards,  ImreDaniel Westermann (DWE) <[email protected]> ezt írta (időpont: 2021. júl. 30., P, 9:12):Hi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates. \n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 10:04:16 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "Hi\n\npá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <\[email protected]> napsal:\n\n> Hi,\n>\n> we have a customer which was migrated from Oracle to PostgreSQL 12.5 (I\n> know, the latest version is 12.7). The migration included a lot of PL/SQL\n> code. Attached a very simplified test case. As you can see there are\n> thousands, even nested calls to procedures and functions. The test case\n> does not even touch any relation, in reality these functions and procedures\n> perform selects, insert and updates.\n>\n> I've tested this on my local sandbox (Debian 11) and here are the results\n> (three runs each):\n>\n> Head:\n> Time: 97275.109 ms (01:37.275)\n> Time: 103241.352 ms (01:43.241)\n> Time: 104246.961 ms (01:44.247)\n>\n> 13.3:\n> Time: 122179.311 ms (02:02.179)\n> Time: 122622.859 ms (02:02.623)\n> Time: 125469.711 ms (02:05.470)\n>\n> 12.7:\n> Time: 182131.565 ms (03:02.132)\n> Time: 177393.980 ms (02:57.394)\n> Time: 177550.204 ms (02:57.550)\n>\n>\n> It seems there are some optimizations in head, but 13.3 and 12.7 are\n> noticeable slower.\n>\n\n> Question: Is it expected that this takes minutes sitting on the CPU or is\n> there a performance issue? Doing the same in Oracle takes around 30\n> seconds. I am not saying that this implementation is brilliant, but for the\n> moment it is like it is.\n>\n\nUnfortunately yes, it is possible. PL/pgSQL is interpreted language without\n**any** compiler optimization. PL/SQL is now a fully compiled language with\na lot of compiler optimization. There is main overhead with repeated\nfunction's initialization and variable's initialization. Your example is\nthe worst case for PL/pgSQL - and I am surprised so the difference is only\n3-4x.\n\nMaybe (probably) Oracle does inlining of f1 function. You can get the same\neffect if you use SQL language for this function. 
PL/pgSQL is bad language\nfor one line functions. When I did it, then then I got 34 sec (on my comp\nagainst 272 sec)\n\nand mark this function as immutable helps a lot of too - it takes 34 sec on\nmy computer.\n\nRegards\n\nPavel\n\n\n\n\n\n\n> Thanks for any inputs\n> Regards\n> Daniel\n>\n>\n\nHipá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <[email protected]> napsal:Hi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates. \n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower. \n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.Unfortunately yes, it is possible. PL/pgSQL is interpreted language without **any** compiler optimization. PL/SQL is now a fully compiled language with a lot of compiler optimization. There is main overhead with repeated function's initialization and variable's initialization. Your example is the worst case for PL/pgSQL - and I am surprised so the difference is only 3-4x. Maybe (probably) Oracle does inlining of f1 function. You can get the same effect if you use SQL language for this function. PL/pgSQL is bad language for one line functions. When I did it, then then I got 34 sec (on my comp against 272 sec)and mark this function as immutable helps a lot of too - it takes 34 sec on my computer.RegardsPavel \n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 10:07:24 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "pá 30. 7. 2021 v 10:04 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> pá 30. 7. 2021 v 10:02 odesílatel Imre Samu <[email protected]> napsal:\n>\n>> Hi Daniel,\n>>\n>> side note:\n>>\n>> Maybe you can tune the \"function\" with some special query optimizer\n>> attributes:\n>> IMMUTABLE | STABLE | VOLATILE | PARALLEL SAFE\n>>\n>> so in your example:\n>> create or replace function f1(int) returns double precision as\n>>\n>> $$\n>> declare\n>> begin\n>> return 1;\n>> end;\n>> $$ language plpgsql *IMMUTABLE PARALLEL SAFE*;\n>>\n>>\n> It cannot help in this case. 
PL/pgSQL routine (and expression\n> calculations) is one CPU every time.\n>\n\nIMMUTABLE helps, surely, because it is translated to constant in this case.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> \"\"\" : https://www.postgresql.org/docs/13/sql-createfunction.html\n>> PARALLEL SAFE :\n>> * indicates that the function is safe to run in parallel mode without\n>> restriction.*\n>> IMMUTABLE *: indicates that the function cannot modify the database and\n>> always returns the same result when given the same argument values; that\n>> is, it does not do database lookups or otherwise use information not\n>> directly present in its argument list. If this option is given, any call of\n>> the function with all-constant arguments can be immediately replaced with\n>> the function value.*\n>> \"\"\"\n>>\n>> Regards,\n>> Imre\n>>\n>> Daniel Westermann (DWE) <[email protected]> ezt írta\n>> (időpont: 2021. júl. 30., P, 9:12):\n>>\n>>> Hi,\n>>>\n>>> we have a customer which was migrated from Oracle to PostgreSQL 12.5 (I\n>>> know, the latest version is 12.7). The migration included a lot of PL/SQL\n>>> code. Attached a very simplified test case. As you can see there are\n>>> thousands, even nested calls to procedures and functions. The test case\n>>> does not even touch any relation, in reality these functions and procedures\n>>> perform selects, insert and updates.\n>>>\n>>> I've tested this on my local sandbox (Debian 11) and here are the\n>>> results (three runs each):\n>>>\n>>> Head:\n>>> Time: 97275.109 ms (01:37.275)\n>>> Time: 103241.352 ms (01:43.241)\n>>> Time: 104246.961 ms (01:44.247)\n>>>\n>>> 13.3:\n>>> Time: 122179.311 ms (02:02.179)\n>>> Time: 122622.859 ms (02:02.623)\n>>> Time: 125469.711 ms (02:05.470)\n>>>\n>>> 12.7:\n>>> Time: 182131.565 ms (03:02.132)\n>>> Time: 177393.980 ms (02:57.394)\n>>> Time: 177550.204 ms (02:57.550)\n>>>\n>>>\n>>> It seems there are some optimizations in head, but 13.3 and 12.7 are\n>>> noticeable slower.\n>>>\n>>> Question: Is it expected that this takes minutes sitting on the CPU or\n>>> is there a performance issue? Doing the same in Oracle takes around 30\n>>> seconds. I am not saying that this implementation is brilliant, but for the\n>>> moment it is like it is.\n>>>\n>>> Thanks for any inputs\n>>> Regards\n>>> Daniel\n>>>\n>>>\n\npá 30. 7. 2021 v 10:04 odesílatel Pavel Stehule <[email protected]> napsal:Hipá 30. 7. 2021 v 10:02 odesílatel Imre Samu <[email protected]> napsal:Hi Daniel,side note: Maybe you can tune the \"function\" with some special query optimizer attributes:      IMMUTABLE | STABLE | VOLATILE |  PARALLEL SAFEso in your example:      create or replace function f1(int) returns double precision as$$declarebegin  return 1;end;$$ language plpgsql IMMUTABLE PARALLEL SAFE;It cannot help in this case. PL/pgSQL routine (and expression calculations) is one CPU every time.IMMUTABLE helps, surely, because it is translated to constant in this case.RegardsPavelRegardsPavel \"\"\"  : https://www.postgresql.org/docs/13/sql-createfunction.html PARALLEL SAFE : indicates that the function is safe to run in parallel mode without restriction.IMMUTABLE : indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list. 
If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.\"\"\"Regards,  ImreDaniel Westermann (DWE) <[email protected]> ezt írta (időpont: 2021. júl. 30., P, 9:12):Hi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates. \n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 10:11:14 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "pá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <[email protected]<mailto:[email protected]>> napsal:\nHi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates.\n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\n>Unfortunately yes, it is possible. PL/pgSQL is interpreted language without **any** compiler optimization. PL/SQL is now a fully compiled >language with a lot of compiler optimization. There is main overhead with repeated function's initialization and variable's initialization. Your >example is the worst case for PL/pgSQL - and I am surprised so the difference is only 3-4x.\n\n>Maybe (probably) Oracle does inlining of f1 function. You can get the same effect if you use SQL language for this function. PL/pgSQL is >bad language for one line functions. 
When I did it, then then I got 34 sec (on my comp against 272 sec)\n\n>and mark this function as immutable helps a lot of too - it takes 34 sec on my computer.\n\nThank you, Pavel. As far as I understand the docs, I cannot use immutable as the \"real\" functions and procedures do database lookups.\n\nRegards\nDaniel\n\n\n\nThanks for any inputs\nRegards\nDaniel\n\n\n\n\n\n\n\n\n\n\n\n\npá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <[email protected]> napsal:\n\n\nHi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures\n and functions. The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates.\n\n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\n\n\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\n\n\n>Unfortunately yes, it is possible. PL/pgSQL is interpreted language without **any** compiler optimization. PL/SQL is now a fully compiled >language with a lot of compiler optimization. There is main overhead with repeated function's initialization and\n variable's initialization. Your >example is the worst case for PL/pgSQL - and I am surprised so the difference is only 3-4x.\n\n\n\n\n>Maybe (probably) Oracle does inlining of f1 function. You can get the same effect if you use SQL language for this function. PL/pgSQL is >bad language for one line functions. When I did it, then then I got 34 sec (on my comp against 272 sec)\n\n\n\n>and mark this function as immutable helps a lot of too - it takes 34 sec on my computer.\n\n\n\nThank you, Pavel. As far as I understand the docs, I cannot use immutable as the \"real\" functions and procedures do database lookups.\n\n\nRegards\nDaniel\n\n\n\n\n\n\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 08:12:03 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": "pá 30. 7. 2021 v 10:12 odesílatel Daniel Westermann (DWE) <\[email protected]> napsal:\n\n>\n> pá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <\n> [email protected]> napsal:\n>\n> Hi,\n>\n> we have a customer which was migrated from Oracle to PostgreSQL 12.5 (I\n> know, the latest version is 12.7). The migration included a lot of PL/SQL\n> code. Attached a very simplified test case. As you can see there are\n> thousands, even nested calls to procedures and functions. 
The test case\n> does not even touch any relation, in reality these functions and procedures\n> perform selects, insert and updates.\n>\n> I've tested this on my local sandbox (Debian 11) and here are the results\n> (three runs each):\n>\n> Head:\n> Time: 97275.109 ms (01:37.275)\n> Time: 103241.352 ms (01:43.241)\n> Time: 104246.961 ms (01:44.247)\n>\n> 13.3:\n> Time: 122179.311 ms (02:02.179)\n> Time: 122622.859 ms (02:02.623)\n> Time: 125469.711 ms (02:05.470)\n>\n> 12.7:\n> Time: 182131.565 ms (03:02.132)\n> Time: 177393.980 ms (02:57.394)\n> Time: 177550.204 ms (02:57.550)\n>\n>\n> It seems there are some optimizations in head, but 13.3 and 12.7 are\n> noticeable slower.\n>\n>\n> Question: Is it expected that this takes minutes sitting on the CPU or is\n> there a performance issue? Doing the same in Oracle takes around 30\n> seconds. I am not saying that this implementation is brilliant, but for the\n> moment it is like it is.\n>\n>\n> >Unfortunately yes, it is possible. PL/pgSQL is interpreted language\n> without **any** compiler optimization. PL/SQL is now a fully compiled\n> >language with a lot of compiler optimization. There is main overhead with\n> repeated function's initialization and variable's initialization. Your\n> >example is the worst case for PL/pgSQL - and I am surprised so the\n> difference is only 3-4x.\n>\n> >Maybe (probably) Oracle does inlining of f1 function. You can get the\n> same effect if you use SQL language for this function. PL/pgSQL is >bad\n> language for one line functions. When I did it, then then I got 34 sec (on\n> my comp against 272 sec)\n>\n> >and mark this function as immutable helps a lot of too - it takes 34 sec\n> on my computer.\n>\n> Thank you, Pavel. As far as I understand the docs, I cannot use immutable\n> as the \"real\" functions and procedures do database lookups.\n>\n\nIn your example, the bottleneck is calling the function f1. So you need to\ncheck only this function. It is not important if other functions or\nprocedures do database lookups.\n\nOr if it does just one database lookup, then you can use SQL language. I\nrepeat, PL/pgSQL is not good for ultra very frequent calls (where there is\nminimal other overhead).\n\nGenerally, start of function or start of query are more expensive on\nPostgres than on Oracle. Postgres is much more dynamic, and it needs to do\nsome rechecks. The overhead is in nanoseconds, but nanoseconds x billions\nare lot of seconds\n\n\n> Regards\n> Daniel\n>\n>\n>\n> Thanks for any inputs\n> Regards\n> Daniel\n>\n>\n\npá 30. 7. 2021 v 10:12 odesílatel Daniel Westermann (DWE) <[email protected]> napsal:\n\n\n\n\n\npá 30. 7. 2021 v 9:12 odesílatel Daniel Westermann (DWE) <[email protected]> napsal:\n\n\nHi,\n\nwe have a customer which was migrated from Oracle to PostgreSQL 12.5 (I know, the latest version is 12.7). The migration included a lot of PL/SQL code. Attached a very simplified test case. As you can see there are thousands, even nested calls to procedures\n and functions. 
The test case does not even touch any relation, in reality these functions and procedures perform selects, insert and updates.\n\n\nI've tested this on my local sandbox (Debian 11) and here are the results (three runs each):\n\nHead:\nTime: 97275.109 ms (01:37.275)\nTime: 103241.352 ms (01:43.241)\nTime: 104246.961 ms (01:44.247)\n\n13.3:\nTime: 122179.311 ms (02:02.179)\nTime: 122622.859 ms (02:02.623)\nTime: 125469.711 ms (02:05.470)\n\n12.7:\nTime: 182131.565 ms (03:02.132)\nTime: 177393.980 ms (02:57.394)\nTime: 177550.204 ms (02:57.550)\n\n\nIt seems there are some optimizations in head, but 13.3 and 12.7 are noticeable slower.\n\n\n\n\nQuestion: Is it expected that this takes minutes sitting on the CPU or is there a performance issue? Doing the same in Oracle takes around 30 seconds. I am not saying that this implementation is brilliant, but for the moment it is like it is.\n\n\n\n>Unfortunately yes, it is possible. PL/pgSQL is interpreted language without **any** compiler optimization. PL/SQL is now a fully compiled >language with a lot of compiler optimization. There is main overhead with repeated function's initialization and\n variable's initialization. Your >example is the worst case for PL/pgSQL - and I am surprised so the difference is only 3-4x.\n\n\n\n\n>Maybe (probably) Oracle does inlining of f1 function. You can get the same effect if you use SQL language for this function. PL/pgSQL is >bad language for one line functions. When I did it, then then I got 34 sec (on my comp against 272 sec)\n\n\n\n>and mark this function as immutable helps a lot of too - it takes 34 sec on my computer.\n\n\n\nThank you, Pavel. As far as I understand the docs, I cannot use immutable as the \"real\" functions and procedures do database lookups.In your example, the bottleneck is calling the function f1. So you need to check only this function. It is not important if other functions or procedures do database lookups.Or if it does just one database lookup, then you can use SQL language. I repeat, PL/pgSQL is not good for ultra very frequent calls (where there is minimal other overhead).Generally, start of function or start of query are more expensive on Postgres than on Oracle. Postgres is much more dynamic, and it needs to do some rechecks. The overhead is in nanoseconds, but nanoseconds x billions are lot of seconds \n\n\nRegards\nDaniel\n\n\n\n\n\n\n\nThanks for any inputs\nRegards\nDaniel", "msg_date": "Fri, 30 Jul 2021 10:21:53 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" }, { "msg_contents": ">In your example, the bottleneck is calling the function f1. So you need to check only this function. It is not important if other functions or >procedures do database lookups.\n\n>Or if it does just one database lookup, then you can use SQL language. I repeat, PL/pgSQL is not good for ultra very frequent calls (where >there is minimal other overhead).\n\n>Generally, start of function or start of query are more expensive on Postgres than on Oracle. Postgres is much more dynamic, and it needs >to do some rechecks. The overhead is in nanoseconds, but nanoseconds x billions are lot of seconds\n\nThank you Pavel, for all the information. 
That was very helpful.\n\nRegards\nDaniel\n\n", "msg_date": "Fri, 30 Jul 2021 11:15:19 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue with thousands of calls to procedures and\n functions?" } ]
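To make Pavel's suggestion from the thread above concrete: a one-line PL/pgSQL function can be rewritten as a SQL-language function, which the planner may inline into the caller and so avoid the per-call PL/pgSQL setup cost. The real test case is only attached to the thread, so the body of f1 below is a hypothetical stand-in rather than the customer's actual code:

-- PL/pgSQL version: pays interpreter and call overhead on every invocation
CREATE OR REPLACE FUNCTION f1(p integer) RETURNS integer AS $$
BEGIN
    RETURN p + 1;
END;
$$ LANGUAGE plpgsql;

-- SQL-language version of the same function: a simple single-SELECT body
-- like this is eligible for inlining by the planner
CREATE OR REPLACE FUNCTION f1(p integer) RETURNS integer AS $$
    SELECT p + 1;
$$ LANGUAGE sql;

Marking such a function IMMUTABLE can help further, but as Daniel notes above that is only safe when the function does not read from the database.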
[ { "msg_contents": "Hi Team,\n\nWe have a highly transactional system as the source of logical replication\nand the database size is 500GB+. We are replicating all tables from source\nusing logical replication.\n\nFor two tables the initial data load is very slow and it never completes\neven after 24hrs+\nTable size is under 100GB and index size is around 400GB.\n\nHow can we increase the speed of the initial data load without dropping the\nindexes on destination?\n\nWe increased max_sync_workers_per_subscription to 3 but it didn't help much\nfor single tables\n\nThanks,\nNikhil\n\nHi Team,We have a highly transactional system as the source of logical replication and the database size is 500GB+. We are replicating all tables from source using logical replication. For two tables the initial data load is very slow and it never completes even after 24hrs+ Table size is under 100GB and index size is around 400GB. How can we increase the speed of the initial data load without dropping the indexes on destination?We increased max_sync_workers_per_subscription to 3 but it didn't help much for single tablesThanks,Nikhil", "msg_date": "Wed, 4 Aug 2021 20:36:46 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Logical Replication speed-up initial data" }, { "msg_contents": "Hello,\n\nI also faced a similar issue. Try removing the indexes on the destination\nfirst if possible. After that, you can add the indexes.\n\nRegards.\n\n\nNikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde\nşunu yazdı:\n\n> Hi Team,\n>\n> We have a highly transactional system as the source of logical replication\n> and the database size is 500GB+. We are replicating all tables from source\n> using logical replication.\n>\n> For two tables the initial data load is very slow and it never completes\n> even after 24hrs+\n> Table size is under 100GB and index size is around 400GB.\n>\n> How can we increase the speed of the initial data load without dropping\n> the indexes on destination?\n>\n> We increased max_sync_workers_per_subscription to 3 but it didn't help\n> much for single tables\n>\n> Thanks,\n> Nikhil\n>\n\n\n-- \nHüseyin Demir\n\nSenior Database Platform Engineer\n\nTwitter: https://twitter.com/d3rh5n\nLinkedin: hseyindemir\n<https://www.linkedin.com/in/h%C3%BCseyin-demir-4020699b/>\nGithub: https://github.com/hseyindemir\nGitlab: https://gitlab.com/demirhuseyinn.94\nMedium: https://demirhuseyinn-94.medium.com/\n\nHello,I also faced a similar issue. Try removing the indexes on the destination first if possible. After that, you can add the indexes. Regards.Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde şunu yazdı:Hi Team,We have a highly transactional system as the source of logical replication and the database size is 500GB+. We are replicating all tables from source using logical replication. For two tables the initial data load is very slow and it never completes even after 24hrs+ Table size is under 100GB and index size is around 400GB. 
How can we increase the speed of the initial data load without dropping the indexes on destination?We increased max_sync_workers_per_subscription to 3 but it didn't help much for single tablesThanks,Nikhil\n-- Hüseyin DemirSenior Database Platform EngineerTwitter:  https://twitter.com/d3rh5nLinkedin: hseyindemirGithub: https://github.com/hseyindemirGitlab: https://gitlab.com/demirhuseyinn.94Medium: https://demirhuseyinn-94.medium.com/", "msg_date": "Wed, 4 Aug 2021 18:24:39 +0300", "msg_from": "=?UTF-8?Q?H=C3=BCseyin_Demir?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hello,\nin my experience, to speed up the initial load, I had to drop UKs and FKs.\nUnfortunately, the initial load doesn't work in parallel and, for each\ntable, there is only one sync worker.\n\nRegards\n\nStefano Amoroso\n\nIl giorno mer 4 ago 2021 alle ore 17:24 Hüseyin Demir <\[email protected]> ha scritto:\n\n> Hello,\n>\n> I also faced a similar issue. Try removing the indexes on the destination\n> first if possible. After that, you can add the indexes.\n>\n> Regards.\n>\n>\n> Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde\n> şunu yazdı:\n>\n>> Hi Team,\n>>\n>> We have a highly transactional system as the source of logical\n>> replication and the database size is 500GB+. We are replicating all tables\n>> from source using logical replication.\n>>\n>> For two tables the initial data load is very slow and it never completes\n>> even after 24hrs+\n>> Table size is under 100GB and index size is around 400GB.\n>>\n>> How can we increase the speed of the initial data load without dropping\n>> the indexes on destination?\n>>\n>> We increased max_sync_workers_per_subscription to 3 but it didn't help\n>> much for single tables\n>>\n>> Thanks,\n>> Nikhil\n>>\n>\n>\n> --\n> Hüseyin Demir\n>\n> Senior Database Platform Engineer\n>\n> Twitter: https://twitter.com/d3rh5n\n> Linkedin: hseyindemir\n> <https://www.linkedin.com/in/h%C3%BCseyin-demir-4020699b/>\n> Github: https://github.com/hseyindemir\n> Gitlab: https://gitlab.com/demirhuseyinn.94\n> Medium: https://demirhuseyinn-94.medium.com/\n>\n\nHello,in my experience, to speed up the initial load, I had to drop UKs and FKs.Unfortunately, the initial load doesn't work in parallel and, for each table, there is only one sync worker.RegardsStefano AmorosoIl giorno mer 4 ago 2021 alle ore 17:24 Hüseyin Demir <[email protected]> ha scritto:Hello,I also faced a similar issue. Try removing the indexes on the destination first if possible. After that, you can add the indexes. Regards.Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde şunu yazdı:Hi Team,We have a highly transactional system as the source of logical replication and the database size is 500GB+. We are replicating all tables from source using logical replication. For two tables the initial data load is very slow and it never completes even after 24hrs+ Table size is under 100GB and index size is around 400GB. 
How can we increase the speed of the initial data load without dropping the indexes on destination?We increased max_sync_workers_per_subscription to 3 but it didn't help much for single tablesThanks,Nikhil\n-- Hüseyin DemirSenior Database Platform EngineerTwitter:  https://twitter.com/d3rh5nLinkedin: hseyindemirGithub: https://github.com/hseyindemirGitlab: https://gitlab.com/demirhuseyinn.94Medium: https://demirhuseyinn-94.medium.com/", "msg_date": "Wed, 4 Aug 2021 17:55:04 +0200", "msg_from": "Stefano Amoroso <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "\n\n> On Aug 4, 2021, at 08:06, Nikhil Shetty <[email protected]> wrote:\n> \n> How can we increase the speed of the initial data load without dropping the indexes on destination?\n\nYou can do the usual steps of increasing checkpoint_timeout and max_wal_size (since incoming logical replication changes are WAL logged) and setting synchronous_commit = off, but those will be modest improvements. You will get an enormous benefit from dropping indexes and foreign key constraints, and those aren't much use during the initial sync anyway.\n\n", "msg_date": "Wed, 4 Aug 2021 09:28:06 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hi,\n\nThank you for the suggestion.\n\nWe tried by dropping indexes and it worked faster compared to what we saw\nearlier. We wanted to know if anybody has done any other changes that helps\nspeed-up initial data load without dropping indexes.\n\nThanks,\nNikhil\n\nOn Wed, Aug 4, 2021 at 8:54 PM Hüseyin Demir <[email protected]>\nwrote:\n\n> Hello,\n>\n> I also faced a similar issue. Try removing the indexes on the destination\n> first if possible. After that, you can add the indexes.\n>\n> Regards.\n>\n>\n> Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde\n> şunu yazdı:\n>\n>> Hi Team,\n>>\n>> We have a highly transactional system as the source of logical\n>> replication and the database size is 500GB+. We are replicating all tables\n>> from source using logical replication.\n>>\n>> For two tables the initial data load is very slow and it never completes\n>> even after 24hrs+\n>> Table size is under 100GB and index size is around 400GB.\n>>\n>> How can we increase the speed of the initial data load without dropping\n>> the indexes on destination?\n>>\n>> We increased max_sync_workers_per_subscription to 3 but it didn't help\n>> much for single tables\n>>\n>> Thanks,\n>> Nikhil\n>>\n>\n>\n> --\n> Hüseyin Demir\n>\n> Senior Database Platform Engineer\n>\n> Twitter: https://twitter.com/d3rh5n\n> Linkedin: hseyindemir\n> <https://www.linkedin.com/in/h%C3%BCseyin-demir-4020699b/>\n> Github: https://github.com/hseyindemir\n> Gitlab: https://gitlab.com/demirhuseyinn.94\n> Medium: https://demirhuseyinn-94.medium.com/\n>\n\nHi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.Thanks,NikhilOn Wed, Aug 4, 2021 at 8:54 PM Hüseyin Demir <[email protected]> wrote:Hello,I also faced a similar issue. Try removing the indexes on the destination first if possible. After that, you can add the indexes. 
Regards.Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde şunu yazdı:Hi Team,We have a highly transactional system as the source of logical replication and the database size is 500GB+. We are replicating all tables from source using logical replication. For two tables the initial data load is very slow and it never completes even after 24hrs+ Table size is under 100GB and index size is around 400GB. How can we increase the speed of the initial data load without dropping the indexes on destination?We increased max_sync_workers_per_subscription to 3 but it didn't help much for single tablesThanks,Nikhil\n-- Hüseyin DemirSenior Database Platform EngineerTwitter:  https://twitter.com/d3rh5nLinkedin: hseyindemirGithub: https://github.com/hseyindemirGitlab: https://gitlab.com/demirhuseyinn.94Medium: https://demirhuseyinn-94.medium.com/", "msg_date": "Thu, 5 Aug 2021 10:26:59 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hi Stefano,\n\nThank you for the information.\n\nRegards,\nNikhil\n\nOn Wed, Aug 4, 2021 at 9:25 PM Stefano Amoroso <[email protected]>\nwrote:\n\n> Hello,\n> in my experience, to speed up the initial load, I had to drop UKs and FKs.\n> Unfortunately, the initial load doesn't work in parallel and, for each\n> table, there is only one sync worker.\n>\n> Regards\n>\n> Stefano Amoroso\n>\n> Il giorno mer 4 ago 2021 alle ore 17:24 Hüseyin Demir <\n> [email protected]> ha scritto:\n>\n>> Hello,\n>>\n>> I also faced a similar issue. Try removing the indexes on the destination\n>> first if possible. After that, you can add the indexes.\n>>\n>> Regards.\n>>\n>>\n>> Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde\n>> şunu yazdı:\n>>\n>>> Hi Team,\n>>>\n>>> We have a highly transactional system as the source of logical\n>>> replication and the database size is 500GB+. We are replicating all tables\n>>> from source using logical replication.\n>>>\n>>> For two tables the initial data load is very slow and it never completes\n>>> even after 24hrs+\n>>> Table size is under 100GB and index size is around 400GB.\n>>>\n>>> How can we increase the speed of the initial data load without dropping\n>>> the indexes on destination?\n>>>\n>>> We increased max_sync_workers_per_subscription to 3 but it didn't help\n>>> much for single tables\n>>>\n>>> Thanks,\n>>> Nikhil\n>>>\n>>\n>>\n>> --\n>> Hüseyin Demir\n>>\n>> Senior Database Platform Engineer\n>>\n>> Twitter: https://twitter.com/d3rh5n\n>> Linkedin: hseyindemir\n>> <https://www.linkedin.com/in/h%C3%BCseyin-demir-4020699b/>\n>> Github: https://github.com/hseyindemir\n>> Gitlab: https://gitlab.com/demirhuseyinn.94\n>> Medium: https://demirhuseyinn-94.medium.com/\n>>\n>\n\nHi Stefano,Thank you for the information.Regards,NikhilOn Wed, Aug 4, 2021 at 9:25 PM Stefano Amoroso <[email protected]> wrote:Hello,in my experience, to speed up the initial load, I had to drop UKs and FKs.Unfortunately, the initial load doesn't work in parallel and, for each table, there is only one sync worker.RegardsStefano AmorosoIl giorno mer 4 ago 2021 alle ore 17:24 Hüseyin Demir <[email protected]> ha scritto:Hello,I also faced a similar issue. Try removing the indexes on the destination first if possible. After that, you can add the indexes. 
Regards.Nikhil Shetty <[email protected]>, 4 Ağu 2021 Çar, 18:07 tarihinde şunu yazdı:Hi Team,We have a highly transactional system as the source of logical replication and the database size is 500GB+. We are replicating all tables from source using logical replication. For two tables the initial data load is very slow and it never completes even after 24hrs+ Table size is under 100GB and index size is around 400GB. How can we increase the speed of the initial data load without dropping the indexes on destination?We increased max_sync_workers_per_subscription to 3 but it didn't help much for single tablesThanks,Nikhil\n-- Hüseyin DemirSenior Database Platform EngineerTwitter:  https://twitter.com/d3rh5nLinkedin: hseyindemirGithub: https://github.com/hseyindemirGitlab: https://gitlab.com/demirhuseyinn.94Medium: https://demirhuseyinn-94.medium.com/", "msg_date": "Thu, 5 Aug 2021 10:27:50 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "On Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi,\n>\n> Thank you for the suggestion.\n>\n> We tried by dropping indexes and it worked faster compared to what we saw\n> earlier. We wanted to know if anybody has done any other changes that helps\n> speed-up initial data load without dropping indexes.\n>\n\nIt would be kind of cool if the database could just \"know\" that it was an\ninitial load and automatically suppress FK checks and index updates until\nthe load is done. Once complete it would go back and concurrently rebuild\nthe indexes and validate the FK's. Then you wouldn't have to manually\ndrop all of your indexes and add them back and hope you got them all, and\ngot them right.\n\nOn Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.It would be kind of cool if the database could just \"know\" that it was an initial load and automatically suppress FK checks and index updates until the load is done.  Once complete it would go back and concurrently rebuild the indexes and validate the FK's.   Then you wouldn't have to manually drop all of your indexes and add them back and hope you got them all, and got them right.", "msg_date": "Thu, 5 Aug 2021 09:25:57 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:\n\n> Hi,\n>\n> Thank you for the suggestion.\n>\n> We tried by dropping indexes and it worked faster compared to what we saw\n> earlier. We wanted to know if anybody has done any other changes that helps\n> speed-up initial data load without dropping indexes.\n>\n>\nPS: i have not tested this in production level loads, it was just some exp\ni did on my laptop.\n\none option would be to use pglogical extension (this was shared by\nDharmendra in one the previous mails, sharing the same),\nand then use pglogical_create_subscriber cli to create the initial copy via\npgbasebackup and then carry on from there.\nI ran the test case similar to one below in my local env, and it seems to\nwork fine. 
of course i do not have TB worth of load to test, but it looks\npromising,\nespecially since they introduced it to the core.\npglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE ·\n2ndQuadrant/pglogical (github.com)\n<https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/t/010_pglogical_create_subscriber.pl>\nOnce you attain some reasonable sync state, you can drop the pglogical\nextension, and check if things continue fine.\nI have done something similar when upgrading from 9.6 to 11 using pglogical\nand then dropping the extension and it was smooth,\nmaybe you need to try this out and share if things works fine.\nand\nThe 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot -\nPercona Database Performance Blog\n<https://www.percona.com/blog/postgresql-logical-replication-using-an-rds-snapshot/>\n\nOn Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.PS: i have not tested this in production level loads, it was just some exp i did on my laptop.one option would be to use pglogical extension (this was shared by Dharmendra in one the previous mails, sharing the same),and then use pglogical_create_subscriber cli to create the initial copy via pgbasebackup and then carry on from there.I ran the test case similar to one below in my local env, and it seems to work fine. of course i do not have TB worth of load to test, but it looks promising,especially since they introduced it to the core.pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE · 2ndQuadrant/pglogical (github.com)Once you attain some reasonable sync state, you can drop the pglogical extension, and check if things continue fine.I have done something similar when upgrading from 9.6 to 11 using pglogical and then dropping the extension and it was smooth,maybe you need to try this out and share if things works fine.and The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot - Percona Database Performance Blog", "msg_date": "Thu, 5 Aug 2021 19:57:42 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 5, 2021 at 11:28 AM Vijaykumar Jain <\[email protected]> wrote:\n\n> On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> Thank you for the suggestion.\n>>\n>> We tried by dropping indexes and it worked faster compared to what we saw\n>> earlier. We wanted to know if anybody has done any other changes that helps\n>> speed-up initial data load without dropping indexes.\n>>\n>>\n> You could leverage pg_basbeackup or pg_dump with parallel jobs\ntaken from a Standby (preferably replication paused if pg_dump, anyways\npg_basebackup should be straight-forward) or taken even from\nPrimary, for the purpose of initial data load.\n\nAs you are able to drop indexes and make some schema changes, I would\nassume that you could pause your app temporarily. 
If that's the case\nyou may look into the simple steps i am posting here that demonstrates\npg_dump/pg_restore instead.\n\nIf you cannot pause the app, then, you could look into how you\ncould use pg_replication_origin_advance\n<https://www.postgresql.org/docs/13/functions-admin.html#PG-REPLICATION-ORIGIN-ADVANCE>\n\n\nStep 1 : Pause App\nStep 2 : Create Publication on the Primary CREATE PUBLICATION\n<some_pub_name> FOR ALL TABLES;\nStep 3 : Create Logical Replication Slot on the Primary SELECT * FROM\npg_create_logical_replication_slot('<some_slot_name>', 'pgoutput'); Step 4\n: Create Subscription but do not enable the Subscription\nCREATE SUBSCRIPTION <some_sub_name> CONNECTION\n'host=<some_host> dbname=<some_db> user=postgres\npassword=secret port=5432' PUBLICATION <some_pub_name>\nWITH (copy_data = false, create_slot=false, enabled=false,\nslot_name=<some_slot_name>);\n\nStep 5 : Initiate pg_dump. We can take a parallel backup for a faster\nrestore.\n\n$ pg_dump -d <some_db> -Fd -j 4 -n <some_schema> -f <some_unique_directory>\n-- If its several hundreds of GBs or TBs, you may rather utilize one of\nyour Standby that has been paused from replication using -> select\npg_wal_replay_pause();\n\nStep 6 : Don't need to wait until pg_dump completes, you may start the App.\n-- Hope the app does not perform changes that impact the pg_dump or\ngets blocked due to pg_dump.\nStep 7 : Restore the dump if you used pg_dump.\npg_restore -d <some_db> -j <some_numer_of_parallel_jobs> <some_directory> Step\n8 : Enable subscription.\nALTER SUBSCRIPTION <some_sub_name> ENABLE;\n\nIf you have not stopped your app then you must advance the lsn using\npg_replication_origin_advance\n<https://www.postgresql.org/docs/13/functions-admin.html#PG-REPLICATION-ORIGIN-ADVANCE>\n\n\nThese are all hand-written steps while drafting this email, so,\nplease test it on your end as some typos or adjustments are definitely\nexpected.\n\nPS: i have not tested this in production level loads, it was just some exp\n> i did on my laptop.\n>\n> one option would be to use pglogical extension (this was shared by\n> Dharmendra in one the previous mails, sharing the same),\n> and then use pglogical_create_subscriber cli to create the initial copy\n> via pgbasebackup and then carry on from there.\n> I ran the test case similar to one below in my local env, and it seems to\n> work fine. 
of course i do not have TB worth of load to test, but it looks\n> promising,\n> especially since they introduced it to the core.\n> pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE ·\n> 2ndQuadrant/pglogical (github.com)\n> <https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/t/010_pglogical_create_subscriber.pl>\n> Once you attain some reasonable sync state, you can drop the pglogical\n> extension, and check if things continue fine.\n> I have done something similar when upgrading from 9.6 to 11 using\n> pglogical and then dropping the extension and it was smooth,\n> maybe you need to try this out and share if things works fine.\n> and\n> The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot -\n> Percona Database Performance Blog\n> <https://www.percona.com/blog/postgresql-logical-replication-using-an-rds-snapshot/>\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu (Avi)\nCEO,\nMigOps, Inc.\n\nHi, On Thu, Aug 5, 2021 at 11:28 AM Vijaykumar Jain <[email protected]> wrote:On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.You could leverage pg_basbeackup or pg_dump with parallel jobs taken from a Standby (preferably replication paused if pg_dump, anyways pg_basebackup should be straight-forward) or taken even from Primary, for the purpose of initial data load. As you are able to drop indexes and make some schema changes, I would assume that you could pause your app temporarily. If that's the caseyou may look into the simple steps i am posting here that demonstrates pg_dump/pg_restore instead. If you cannot pause the app, then, you could look into how you  could use pg_replication_origin_advance Step 1 : Pause App Step 2 : Create Publication on the Primary\nCREATE PUBLICATION <some_pub_name> FOR ALL TABLES;\nStep 3 : Create Logical Replication Slot on the Primary\nSELECT * FROM pg_create_logical_replication_slot('<some_slot_name>', 'pgoutput');\nStep 4 : Create Subscription but do not enable the SubscriptionCREATE SUBSCRIPTION <some_sub_name> CONNECTION 'host=<some_host> dbname=<some_db> user=postgres password=secret port=5432' PUBLICATION <some_pub_name> WITH (copy_data = false, create_slot=false, enabled=false, slot_name=<some_slot_name>);Step 5 : Initiate pg_dump. We can take a parallel backup for a faster restore.$ pg_dump -d <some_db> -Fd -j 4 -n <some_schema> -f <some_unique_directory>\n-- If its several hundreds of GBs or TBs, you may rather utilize one of your \nStandby that has been paused from replication using \n-> select pg_wal_replay_pause();\nStep 6 : Don't need to wait until pg_dump completes, you may start the App. -- Hope the app does not perform changes that impact the pg_dump or gets blocked due to pg_dump. Step 7 : Restore the dump if you used pg_dump. 
pg_restore -d <some_db> -j <some_numer_of_parallel_jobs> <some_directory>\nStep 8 : Enable subscription.ALTER SUBSCRIPTION <some_sub_name> ENABLE; If you have not stopped your app then you must advance the lsn using pg_replication_origin_advance These are all hand-written steps while drafting this email, so, please test it on your end as some typos or adjustments are definitely expected.PS: i have not tested this in production level loads, it was just some exp i did on my laptop.one option would be to use pglogical extension (this was shared by Dharmendra in one the previous mails, sharing the same),and then use pglogical_create_subscriber cli to create the initial copy via pgbasebackup and then carry on from there.I ran the test case similar to one below in my local env, and it seems to work fine. of course i do not have TB worth of load to test, but it looks promising,especially since they introduced it to the core.pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE · 2ndQuadrant/pglogical (github.com)Once you attain some reasonable sync state, you can drop the pglogical extension, and check if things continue fine.I have done something similar when upgrading from 9.6 to 11 using pglogical and then dropping the extension and it was smooth,maybe you need to try this out and share if things works fine.and The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot - Percona Database Performance Blog\n-- Regards,Avinash Vallarapu (Avi)CEO,MigOps, Inc.", "msg_date": "Thu, 5 Aug 2021 12:11:50 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hi Avinash,\n\nThank you for the detailed explanation.\n\nIndexes were dropped on the destination to increase initial data load\nspeed. We cannot stop the App on source and it is highly transactional.\nI had thought about this method but I am not sure after the pg_restore from\nwhere the logical replication will be started, we cannot afford to lose any\ndata.\n\nI will give this method a test though and check how it works.\n\nThanks,\nNikhil\n\nOn Thu, Aug 5, 2021 at 8:42 PM Avinash Kumar <[email protected]>\nwrote:\n\n> Hi,\n>\n> On Thu, Aug 5, 2021 at 11:28 AM Vijaykumar Jain <\n> [email protected]> wrote:\n>\n>> On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]>\n>> wrote:\n>>\n>>> Hi,\n>>>\n>>> Thank you for the suggestion.\n>>>\n>>> We tried by dropping indexes and it worked faster compared to what we\n>>> saw earlier. We wanted to know if anybody has done any other changes that\n>>> helps speed-up initial data load without dropping indexes.\n>>>\n>>>\n>> You could leverage pg_basbeackup or pg_dump with parallel jobs\n> taken from a Standby (preferably replication paused if pg_dump, anyways\n> pg_basebackup should be straight-forward) or taken even from\n> Primary, for the purpose of initial data load.\n>\n> As you are able to drop indexes and make some schema changes, I would\n> assume that you could pause your app temporarily. 
If that's the case\n> you may look into the simple steps i am posting here that demonstrates\n> pg_dump/pg_restore instead.\n>\n> If you cannot pause the app, then, you could look into how you\n> could use pg_replication_origin_advance\n> <https://www.postgresql.org/docs/13/functions-admin.html#PG-REPLICATION-ORIGIN-ADVANCE>\n>\n>\n> Step 1 : Pause App\n> Step 2 : Create Publication on the Primary CREATE PUBLICATION\n> <some_pub_name> FOR ALL TABLES;\n> Step 3 : Create Logical Replication Slot on the Primary SELECT * FROM\n> pg_create_logical_replication_slot('<some_slot_name>', 'pgoutput'); Step\n> 4 : Create Subscription but do not enable the Subscription\n> CREATE SUBSCRIPTION <some_sub_name> CONNECTION\n> 'host=<some_host> dbname=<some_db> user=postgres\n> password=secret port=5432' PUBLICATION <some_pub_name>\n> WITH (copy_data = false, create_slot=false, enabled=false,\n> slot_name=<some_slot_name>);\n>\n> Step 5 : Initiate pg_dump. We can take a parallel backup for a faster\n> restore.\n>\n> $ pg_dump -d <some_db> -Fd -j 4 -n <some_schema> -f <some_unique_directory>\n> -- If its several hundreds of GBs or TBs, you may rather utilize one of\n> your Standby that has been paused from replication using -> select pg_wal_replay_pause();\n>\n> Step 6 : Don't need to wait until pg_dump completes, you may start the\n> App.\n> -- Hope the app does not perform changes that impact the pg_dump or\n> gets blocked due to pg_dump.\n> Step 7 : Restore the dump if you used pg_dump.\n> pg_restore -d <some_db> -j <some_numer_of_parallel_jobs> <some_directory> Step\n> 8 : Enable subscription.\n> ALTER SUBSCRIPTION <some_sub_name> ENABLE;\n>\n> If you have not stopped your app then you must advance the lsn using\n> pg_replication_origin_advance\n> <https://www.postgresql.org/docs/13/functions-admin.html#PG-REPLICATION-ORIGIN-ADVANCE>\n>\n>\n> These are all hand-written steps while drafting this email, so,\n> please test it on your end as some typos or adjustments are definitely\n> expected.\n>\n> PS: i have not tested this in production level loads, it was just some exp\n>> i did on my laptop.\n>>\n>> one option would be to use pglogical extension (this was shared by\n>> Dharmendra in one the previous mails, sharing the same),\n>> and then use pglogical_create_subscriber cli to create the initial copy\n>> via pgbasebackup and then carry on from there.\n>> I ran the test case similar to one below in my local env, and it seems to\n>> work fine. 
of course i do not have TB worth of load to test, but it looks\n>> promising,\n>> especially since they introduced it to the core.\n>> pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE ·\n>> 2ndQuadrant/pglogical (github.com)\n>> <https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/t/010_pglogical_create_subscriber.pl>\n>> Once you attain some reasonable sync state, you can drop the pglogical\n>> extension, and check if things continue fine.\n>> I have done something similar when upgrading from 9.6 to 11 using\n>> pglogical and then dropping the extension and it was smooth,\n>> maybe you need to try this out and share if things works fine.\n>> and\n>> The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot -\n>> Percona Database Performance Blog\n>> <https://www.percona.com/blog/postgresql-logical-replication-using-an-rds-snapshot/>\n>>\n>>\n>\n> --\n> Regards,\n> Avinash Vallarapu (Avi)\n> CEO,\n> MigOps, Inc.\n>\n\nHi Avinash,Thank you for the detailed explanation.Indexes were dropped on the destination to increase initial data load speed. We cannot stop the App on source and it is highly transactional. I had thought about this method but I am not sure after the pg_restore from where the logical replication will be started, we cannot afford to lose any data. I will give this method a test though and check how it works. Thanks,NikhilOn Thu, Aug 5, 2021 at 8:42 PM Avinash Kumar <[email protected]> wrote:Hi, On Thu, Aug 5, 2021 at 11:28 AM Vijaykumar Jain <[email protected]> wrote:On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.You could leverage pg_basbeackup or pg_dump with parallel jobs taken from a Standby (preferably replication paused if pg_dump, anyways pg_basebackup should be straight-forward) or taken even from Primary, for the purpose of initial data load. As you are able to drop indexes and make some schema changes, I would assume that you could pause your app temporarily. If that's the caseyou may look into the simple steps i am posting here that demonstrates pg_dump/pg_restore instead. If you cannot pause the app, then, you could look into how you  could use pg_replication_origin_advance Step 1 : Pause App Step 2 : Create Publication on the Primary\nCREATE PUBLICATION <some_pub_name> FOR ALL TABLES;\nStep 3 : Create Logical Replication Slot on the Primary\nSELECT * FROM pg_create_logical_replication_slot('<some_slot_name>', 'pgoutput');\nStep 4 : Create Subscription but do not enable the SubscriptionCREATE SUBSCRIPTION <some_sub_name> CONNECTION 'host=<some_host> dbname=<some_db> user=postgres password=secret port=5432' PUBLICATION <some_pub_name> WITH (copy_data = false, create_slot=false, enabled=false, slot_name=<some_slot_name>);Step 5 : Initiate pg_dump. We can take a parallel backup for a faster restore.$ pg_dump -d <some_db> -Fd -j 4 -n <some_schema> -f <some_unique_directory>\n-- If its several hundreds of GBs or TBs, you may rather utilize one of your \nStandby that has been paused from replication using \n-> select pg_wal_replay_pause();\nStep 6 : Don't need to wait until pg_dump completes, you may start the App. -- Hope the app does not perform changes that impact the pg_dump or gets blocked due to pg_dump. Step 7 : Restore the dump if you used pg_dump. 
pg_restore -d <some_db> -j <some_numer_of_parallel_jobs> <some_directory>\nStep 8 : Enable subscription.ALTER SUBSCRIPTION <some_sub_name> ENABLE; If you have not stopped your app then you must advance the lsn using pg_replication_origin_advance These are all hand-written steps while drafting this email, so, please test it on your end as some typos or adjustments are definitely expected.PS: i have not tested this in production level loads, it was just some exp i did on my laptop.one option would be to use pglogical extension (this was shared by Dharmendra in one the previous mails, sharing the same),and then use pglogical_create_subscriber cli to create the initial copy via pgbasebackup and then carry on from there.I ran the test case similar to one below in my local env, and it seems to work fine. of course i do not have TB worth of load to test, but it looks promising,especially since they introduced it to the core.pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE · 2ndQuadrant/pglogical (github.com)Once you attain some reasonable sync state, you can drop the pglogical extension, and check if things continue fine.I have done something similar when upgrading from 9.6 to 11 using pglogical and then dropping the extension and it was smooth,maybe you need to try this out and share if things works fine.and The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot - Percona Database Performance Blog\n-- Regards,Avinash Vallarapu (Avi)CEO,MigOps, Inc.", "msg_date": "Fri, 6 Aug 2021 00:11:20 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "Hi Vijaykumar,\n\nThanks for the details.\nIn this method you are saying the pg_basebackup will make the initial load\nfaster ?\nWe intend to bring only a few tables. Using pg_basebackup will clone an\nentire instance.\n\nThanks,\nNikhil\n\n\n\nOn Thu, Aug 5, 2021 at 7:57 PM Vijaykumar Jain <\[email protected]> wrote:\n\n> On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> Thank you for the suggestion.\n>>\n>> We tried by dropping indexes and it worked faster compared to what we saw\n>> earlier. We wanted to know if anybody has done any other changes that helps\n>> speed-up initial data load without dropping indexes.\n>>\n>>\n> PS: i have not tested this in production level loads, it was just some exp\n> i did on my laptop.\n>\n> one option would be to use pglogical extension (this was shared by\n> Dharmendra in one the previous mails, sharing the same),\n> and then use pglogical_create_subscriber cli to create the initial copy\n> via pgbasebackup and then carry on from there.\n> I ran the test case similar to one below in my local env, and it seems to\n> work fine. 
of course i do not have TB worth of load to test, but it looks\n> promising,\n> especially since they introduced it to the core.\n> pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE ·\n> 2ndQuadrant/pglogical (github.com)\n> <https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/t/010_pglogical_create_subscriber.pl>\n> Once you attain some reasonable sync state, you can drop the pglogical\n> extension, and check if things continue fine.\n> I have done something similar when upgrading from 9.6 to 11 using\n> pglogical and then dropping the extension and it was smooth,\n> maybe you need to try this out and share if things works fine.\n> and\n> The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot -\n> Percona Database Performance Blog\n> <https://www.percona.com/blog/postgresql-logical-replication-using-an-rds-snapshot/>\n>\n>\n\nHi Vijaykumar,Thanks for the details.In this method you are saying the pg_basebackup will make the initial load faster ? We intend to bring only a few tables. Using pg_basebackup will clone an entire instance. Thanks,NikhilOn Thu, Aug 5, 2021 at 7:57 PM Vijaykumar Jain <[email protected]> wrote:On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.PS: i have not tested this in production level loads, it was just some exp i did on my laptop.one option would be to use pglogical extension (this was shared by Dharmendra in one the previous mails, sharing the same),and then use pglogical_create_subscriber cli to create the initial copy via pgbasebackup and then carry on from there.I ran the test case similar to one below in my local env, and it seems to work fine. of course i do not have TB worth of load to test, but it looks promising,especially since they introduced it to the core.pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE · 2ndQuadrant/pglogical (github.com)Once you attain some reasonable sync state, you can drop the pglogical extension, and check if things continue fine.I have done something similar when upgrading from 9.6 to 11 using pglogical and then dropping the extension and it was smooth,maybe you need to try this out and share if things works fine.and The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot - Percona Database Performance Blog", "msg_date": "Fri, 6 Aug 2021 00:15:15 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "On Fri, 6 Aug 2021 at 00:15, Nikhil Shetty <[email protected]> wrote:\n\n> Hi Vijaykumar,\n>\n> Thanks for the details.\n> In this method you are saying the pg_basebackup will make the initial load\n> faster ?\n>\nWe intend to bring only a few tables. Using pg_basebackup will clone an\n> entire instance.\n>\n\nyeah. In that case, this will not be useful. I assumed you wanted all\ntables.\npglogical/pglogical_create_subscriber.c at REL2_x_STABLE ·\n2ndQuadrant/pglogical (github.com)\n<https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_create_subscriber.c#L704>\n\nOn Fri, 6 Aug 2021 at 00:15, Nikhil Shetty <[email protected]> wrote:Hi Vijaykumar,Thanks for the details.In this method you are saying the pg_basebackup will make the initial load faster ?  We intend to bring only a few tables. 
Using pg_basebackup will clone an entire instance.  yeah. In that case, this will not be useful. I assumed you wanted all tables.pglogical/pglogical_create_subscriber.c at REL2_x_STABLE · 2ndQuadrant/pglogical (github.com)", "msg_date": "Fri, 6 Aug 2021 00:50:39 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" }, { "msg_contents": "On Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi,\n>\n> Thank you for the suggestion.\n>\n> We tried by dropping indexes and it worked faster compared to what we saw\n> earlier. We wanted to know if anybody has done any other changes that helps\n> speed-up initial data load without dropping indexes.\n>\n\nIf index maintenance is the bottleneck, nothing but dropping the indexes is\nlikely to be very effective. Just make sure not to drop the replica\nidentity index. If you do that, then the entire sync will abort and\nrollback once it gets to the end, if the master had had any UPDATE or\nDELETE activity on that table during the sync period. (v14 will remove\nthat problem--replication still won't proceed until you have the index, but\nprevious synced work will not be lost while it waits for you to build the\nindex.)\n\nSyncing with the index still in place might go faster if shared_buffers is\nlarge enough to hold the entire incipient index(es) simultaneously. It\nmight be worthwhile to make shared_buffers be a large fraction of RAM (like\n90%) if doing so will enable the entire index to fit into shared_buffers\nand if nothing else significant is running on the server. You probably\nwouldn't want that as a permanent setting though.\n\nCheers,\n\nJeff\n\nOn Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty <[email protected]> wrote:Hi,Thank you for the suggestion.We tried by dropping indexes and it worked faster compared to what we saw earlier. We wanted to know if anybody has done any other changes that helps speed-up initial data load without dropping indexes.If index maintenance is the bottleneck, nothing but dropping the indexes is likely to be very effective.  Just make sure not to drop the replica identity index.  If you do that, then the entire sync will abort and rollback once it gets to the end, if the master had had any UPDATE or DELETE activity on that table during the sync period.  (v14 will remove that problem--replication still won't proceed until you have the index, but previous synced work will not be lost while it waits for you to build the index.)Syncing with the index still in place might go faster if shared_buffers is large enough to hold the entire incipient index(es) simultaneously.  It might be worthwhile to make shared_buffers be a large fraction of RAM (like 90%) if doing so will enable the entire index to fit into shared_buffers and if nothing else significant is running on the server.  You probably wouldn't want that as a permanent setting though.Cheers,Jeff", "msg_date": "Thu, 5 Aug 2021 16:03:00 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication speed-up initial data" } ]
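Pulling together the advice from the thread above, a minimal sketch of the index handling around the initial table sync could look like this. All object names are invented for illustration; the two key points are to keep the replica identity (primary key) index in place, as Jeff Janes points out, and to rebuild the remaining indexes only once the table sync has finished:

-- on the subscriber, before creating the subscription: drop secondary indexes only,
-- never the replica identity / primary key index
DROP INDEX IF EXISTS big_table_col_a_idx;
DROP INDEX IF EXISTS big_table_col_b_idx;

CREATE SUBSCRIPTION big_sub
    CONNECTION 'host=source-host dbname=appdb user=replicator'
    PUBLICATION big_pub;
-- the initial COPY of big_table now runs without index maintenance

-- wait until pg_subscription_rel reports srsubstate = 'r' (ready) for the table,
-- then rebuild the secondary indexes without blocking replicated writes
CREATE INDEX CONCURRENTLY big_table_col_a_idx ON big_table (col_a);
CREATE INDEX CONCURRENTLY big_table_col_b_idx ON big_table (col_b);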
[ { "msg_contents": "Table \"product\" has a GIN index on \"lexeme\" column (tsvector) that is not used.\n\nQuery that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks needed\n\nQuery forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed (x4 less)\nTable metdata:\n         relname          | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\n--------------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n product_property_default |     8992 |    622969 |          8992 | r       |       16 | f              |            |      73719808\n product                  |    49686 |    413840 |         49686 | r       |       14 | f              |            |     493314048\nTable stats:\n   frac_mcv    |        tablename         | attname | inherited | null_frac | n_distinct  | n_mcv | n_hist | correlation\n---------------+--------------------------+---------+-----------+-----------+-------------+-------+--------+-------------\n               | product                  | lexeme  | f         |         0 |          -1 |       |        |\n    0.99773335 | product_property_default | meaning | f         |         0 |          63 |    39 |     24 |  0.19444875\n     0.6416333 | product_property_default | first   | f         |         0 |        2193 |   100 |    101 | -0.09763639\n 0.00023333334 | product_property_default | product | f         |         0 | -0.15221785 |     1 |    101 |  0.08643274\n\n\nUsing windows docker with wsl2.Both cases are run with cold cache.All database memory is limited to 1GB by using .wslconfig file with memory=1GB, also the docker container is limited to 1GB. 
\nMy requirement is to optimize disk access with this limited memory\n\n\nPostgres 12.4\n\n\n\n\n\n\n\n\n\nTable \"product\" has a GIN index on \"lexeme\" column (tsvector) that is not used.Query that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks neededQuery forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed (x4 less)Table metdata:         relname          | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size--------------------------+----------+-----------+---------------+---------+----------+----------------+------------+--------------- product_property_default |     8992 |    622969 |          8992 | r       |       16 | f              |            |      73719808 product                  |    49686 |    413840 |         49686 | r       |       14 | f              |            |     493314048Table stats:   frac_mcv    |        tablename         | attname | inherited | null_frac | n_distinct  | n_mcv | n_hist | correlation---------------+--------------------------+---------+-----------+-----------+-------------+-------+--------+-------------               | product                  | lexeme  | f         |         0 |          -1 |       |        |    0.99773335 | product_property_default | meaning | f         |         0 |          63 |    39 |     24 |  0.19444875     0.6416333 | product_property_default | first   | f         |         0 |        2193 |   100 |    101 | -0.09763639 0.00023333334 | product_property_default | product | f         |         0 | -0.15221785 |     1 |    101 |  0.08643274Using windows docker with wsl2.\nBoth cases are run with cold cache.All database memory is limited to 1GB by using .wslconfig file with memory=1GB, also the docker container is limited to 1GB. My requirement is to optimize disk access with this limited memoryPostgres 12.4", "msg_date": "Sat, 7 Aug 2021 19:35:25 +0000 (UTC)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query because lexeme index not used" }, { "msg_contents": "On Sat, Aug 07, 2021 at 07:35:25PM +0000, Alex wrote:\n> Table \"product\" has a GIN index on \"lexeme\" column (tsvector) that is not used.\n> \n> Query that doesn't use lexeme idx:��https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks needed\n> \n> Query forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed (x4 less)\n\nCould you show the table stats for product.id ? In particular its\n\"correlation\".\n\nI guess the correlation is ~1, and the 10,659 index scans on product.id are\nconsidered to be cheaper than scannning the lexeme index - since there are no\ncorrelation stats for tsvector.\n\nHow large is shared_buffers ?\n\nDoes the query plan improve if you increase work_mem ?\n\nMaybe you could encourage scanning in order of product_property.product.\nYou could CLUSTER product_property_default on an index on \"product\" and then\nANALYZE. 
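A rough sketch of those two steps (the index name here is an assumption):

CREATE INDEX IF NOT EXISTS product_property_default_product_idx
    ON product_property_default (product);
CLUSTER product_property_default USING product_property_default_product_idx;
ANALYZE product_property_default;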
Or you could write the query with a temp table:\n\nCREATE TEMP TABLE product_ids AS\nSELECT product\nFROM product_property\nWHERE \"meaning\" = 'B' AND \"first\" = 1.7179869184E10\nGROUP BY 1 -- or DISTINCT, because the table is only used for EXISTS\nORDER BY 1; -- to scan product in order of id\nANALYZE product_ids;\n\nThe index scans on product.id should be faster when you use\nEXISTS(SELECT 1 FROM product_ids ...), even though it didn't use the lexeme index.\n\nMaybe it would help to create stats on \"first\" and \"meaning\"; the rowcount is\nunderestimated by 3x, which means it did several times more index scans into\n\"product\" than planned.\n| Bitmap Heap Scan on product_property_default product_property_default (cost=2,748.6..8,823.4 rows=6,318 width=4) (actual time=43.945..211.621 rows=21,061 loops=1) \n\nCREATE STATISTICS first_meaning ON first,meaning FROM product_property;\nANALYZE product_property;\n\n> Table metdata:\n> �������� relname��������� | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\n> --------------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n> �product_property_default |���� 8992 |��� 622969 |��������� 8992 | r������ |������ 16 | f������������� |����������� |����� 73719808\n> �product����������������� |��� 49686 |��� 413840 |�������� 49686 | r������ |������ 14 | f������������� |����������� |���� 493314048\n>\n> Table stats:\n> �� frac_mcv��� |������� tablename�������� | attname | inherited | null_frac | n_distinct� | n_mcv | n_hist | correlation\n> ---------------+--------------------------+---------+-----------+-----------+-------------+-------+--------+-------------\n> �������������� | product����������������� | lexeme� | f�������� |�������� 0 |��������� -1 |������ |������� |\n> ��� 0.99773335 | product_property_default | meaning | f�������� |�������� 0 |��������� 63 |��� 39 |���� 24 |� 0.19444875\n> ���� 0.6416333 | product_property_default | first�� | f�������� |�������� 0 |������� 2193 |�� 100 |��� 101 | -0.09763639\n> �0.00023333334 | product_property_default | product | f�������� |�������� 0 | -0.15221785 |���� 1 |��� 101 |� 0.08643274\n> \n> \n> Using windows docker with wsl2.Both cases are run with cold cache.All database memory is limited to 1GB by using .wslconfig file with memory=1GB, also the docker container is limited to 1GB. \n> My requirement is to optimize disk access with this limited memory\n\n\n", "msg_date": "Sat, 7 Aug 2021 19:35:28 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query because lexeme index not used" }, { "msg_contents": "> Could you show the table stats for product.id ?  
In particular its\n\"correlation\".\n frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n----------+-----------+---------+-----------+-----------+------------+-------+--------+-------------\n          | product   | id      | f         |         0 |         -1 |       |    101 |   0.3857521\n\n> How large is shared_buffers ?256MB\n\n> Does the query plan improve if you increase work_mem ?No, same plan.\n> Maybe you could encourage scanning in order of product_property.product...Clustering \"product_property_default\" on \"product_property_default_product_idx\" followed by analyze, does not change the plan.\n\n> Or you could write the query with a temp table:Creating the temp table \"product_ids\" changes the plan to use lexeme_idx: https://explain.dalibo.com/plan/19h\nBut I prefer not to use an extra step for my query.\n\n> Maybe it would help to create stats on \"first\" and \"meaning\"...\nI've played around with statistics, and increasing column stats with the extended statistics improve the planner estimation (nested loop 69x to 12x), but the same ineffective plan is issued, without lexeme_idx:https://explain.dalibo.com/plan/B7d#plan (has query with stats)\n\n\n\n\n On Sunday, August 8, 2021, 3:35:31 AM GMT+3, Justin Pryzby <[email protected]> wrote: \n \n On Sat, Aug 07, 2021 at 07:35:25PM +0000, Alex wrote:\n> Table \"product\" has a GIN index on \"lexeme\" column (tsvector) that is not used.\n> \n> Query that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks needed\n> \n> Query forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed (x4 less)\n\nCould you show the table stats for product.id ?  In particular its\n\"correlation\".\n\nI guess the correlation is ~1, and the 10,659 index scans on product.id are\nconsidered to be cheaper than scannning the lexeme index - since there are no\ncorrelation stats for tsvector.\n\nHow large is shared_buffers ?\n\nDoes the query plan improve if you increase work_mem ?\n\nMaybe you could encourage scanning in order of product_property.product.\nYou could CLUSTER product_property_default on an index on \"product\" and then\nANALYZE.  
Or you could write the query with a temp table:\n\nCREATE TEMP TABLE product_ids AS\nSELECT product\nFROM product_property\nWHERE \"meaning\" = 'B' AND \"first\" = 1.7179869184E10\nGROUP BY 1 -- or DISTINCT, because the table is only used for EXISTS\nORDER BY 1; -- to scan product in order of id\nANALYZE product_ids;\n\nThe index scans on product.id should be faster when you use\nEXISTS(SELECT 1 FROM product_ids ...), even though it didn't use the lexeme index.\n\nMaybe it would help to create stats on \"first\" and \"meaning\"; the rowcount is\nunderestimated by 3x, which means it did several times more index scans into\n\"product\" than planned.\n| Bitmap Heap Scan on product_property_default product_property_default (cost=2,748.6..8,823.4 rows=6,318 width=4) (actual time=43.945..211.621 rows=21,061 loops=1) \n\nCREATE STATISTICS first_meaning ON first,meaning FROM product_property;\nANALYZE product_property;\n\n> Table metdata:\n>          relname          | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\n> --------------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n>  product_property_default |     8992 |    622969 |          8992 | r       |       16 | f              |            |      73719808\n>  product                  |    49686 |    413840 |         49686 | r       |       14 | f              |            |     493314048\n>\n> Table stats:\n>    frac_mcv    |        tablename         | attname | inherited | null_frac | n_distinct  | n_mcv | n_hist | correlation\n> ---------------+--------------------------+---------+-----------+-----------+-------------+-------+--------+-------------\n>                | product                  | lexeme  | f         |         0 |          -1 |       |        |\n>     0.99773335 | product_property_default | meaning | f         |         0 |          63 |    39 |     24 |  0.19444875\n>      0.6416333 | product_property_default | first   | f         |         0 |        2193 |   100 |    101 | -0.09763639\n>  0.00023333334 | product_property_default | product | f         |         0 | -0.15221785 |     1 |    101 |  0.08643274\n> \n> \n> Using windows docker with wsl2.Both cases are run with cold cache.All database memory is limited to 1GB by using .wslconfig file with memory=1GB, also the docker container is limited to 1GB. \n> My requirement is to optimize disk access with this limited memory\n \n\n > Could you show the table stats for product.id ?  In particular its\"correlation\". 
frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation----------+-----------+---------+-----------+-----------+------------+-------+--------+-------------          | product   | id      | f         |         0 |         -1 |       |    101 |   0.3857521> How large is shared_buffers ?256MB> Does the query plan improve if you increase work_mem ?No, same plan.> Maybe you could encourage scanning in order of product_property.product...Clustering \"product_property_default\" on \"product_property_default_product_idx\" followed by analyze, does not change the plan.> Or you could write the query with a temp table:Creating the temp table \"product_ids\" changes the plan to use lexeme_idx: https://explain.dalibo.com/plan/19hBut I prefer not to use an extra step for my query.> Maybe it would help to create stats on \"first\" and \"meaning\"...I've played around with statistics, and increasing column stats with the extended statistics improve the planner estimation (nested loop 69x to 12x), but the same ineffective plan is issued, without lexeme_idx:https://explain.dalibo.com/plan/B7d#plan (has query with stats)\n\n\n\n On Sunday, August 8, 2021, 3:35:31 AM GMT+3, Justin Pryzby <[email protected]> wrote:\n \n\n\nOn Sat, Aug 07, 2021 at 07:35:25PM +0000, Alex wrote:> Table \"product\" has a GIN index on \"lexeme\" column (tsvector) that is not used.> > Query that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks needed> > Query forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed (x4 less)Could you show the table stats for product.id ?  In particular its\"correlation\".I guess the correlation is ~1, and the 10,659 index scans on product.id areconsidered to be cheaper than scannning the lexeme index - since there are nocorrelation stats for tsvector.How large is shared_buffers ?Does the query plan improve if you increase work_mem ?Maybe you could encourage scanning in order of product_property.product.You could CLUSTER product_property_default on an index on \"product\" and thenANALYZE.  
Or you could write the query with a temp table:CREATE TEMP TABLE product_ids ASSELECT productFROM product_propertyWHERE \"meaning\" = 'B' AND \"first\" = 1.7179869184E10GROUP BY 1 -- or DISTINCT, because the table is only used for EXISTSORDER BY 1; -- to scan product in order of idANALYZE product_ids;The index scans on product.id should be faster when you useEXISTS(SELECT 1 FROM product_ids ...), even though it didn't use the lexeme index.Maybe it would help to create stats on \"first\" and \"meaning\"; the rowcount isunderestimated by 3x, which means it did several times more index scans into\"product\" than planned.| Bitmap Heap Scan on product_property_default product_property_default (cost=2,748.6..8,823.4 rows=6,318 width=4) (actual time=43.945..211.621 rows=21,061 loops=1) CREATE STATISTICS first_meaning ON first,meaning FROM product_property;ANALYZE product_property;> Table metdata:>          relname          | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size> --------------------------+----------+-----------+---------------+---------+----------+----------------+------------+--------------->  product_property_default |     8992 |    622969 |          8992 | r       |       16 | f              |            |      73719808>  product                  |    49686 |    413840 |         49686 | r       |       14 | f              |            |     493314048>> Table stats:>    frac_mcv    |        tablename         | attname | inherited | null_frac | n_distinct  | n_mcv | n_hist | correlation> ---------------+--------------------------+---------+-----------+-----------+-------------+-------+--------+------------->                | product                  | lexeme  | f         |         0 |          -1 |       |        |>     0.99773335 | product_property_default | meaning | f         |         0 |          63 |    39 |     24 |  0.19444875>      0.6416333 | product_property_default | first   | f         |         0 |        2193 |   100 |    101 | -0.09763639>  0.00023333334 | product_property_default | product | f         |         0 | -0.15221785 |     1 |    101 |  0.08643274> > > Using windows docker with wsl2.Both cases are run with cold cache.All database memory is limited to 1GB by using .wslconfig file with memory=1GB, also the docker container is limited to 1GB. > My requirement is to optimize disk access with this limited memory", "msg_date": "Sun, 8 Aug 2021 09:43:40 +0000 (UTC)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query because lexeme index not used" } ]
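A minimal sketch (not from the thread itself) of how the temp-table workaround above could be folded into a single statement on PostgreSQL 12+, where AS MATERIALIZED is available. The tsquery condition and the search term are placeholders standing in for the original full-text filter; unlike the temp table, a CTE carries no statistics of its own, so the planner may or may not switch to the lexeme index:

WITH product_ids AS MATERIALIZED (
    -- same filter as the CREATE TEMP TABLE step suggested earlier
    SELECT DISTINCT product
    FROM product_property
    WHERE "meaning" = 'B' AND "first" = 1.7179869184E10
)
SELECT p.id
FROM product p
WHERE p.lexeme @@ to_tsquery('simple', 'placeholder_term')  -- placeholder for the real tsquery
  AND EXISTS (SELECT 1 FROM product_ids i WHERE i.product = p.id);

If the plan still prefers repeated index scans on product.id, the explicit temp table plus ANALYZE remains the more reliable option, since it gives the planner real row counts to work with.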
[ { "msg_contents": "I've created a partial index that I expect the query planner to use in\nexecuting a query, but it's using another index instead. Using this other\npartial index results in a slower query. I'd really appreciate some help\nunderstanding why this is occurring. Thanks in advance!\n\n*Postgres Version*\n\nPostgreSQL 12.7 (Ubuntu 12.7-1.pgdg20.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n\n*Problem Description*\n\nHere's the index I expect the planner to use:\n\nCREATE INDEX other_events_1004175222_pim_evdef_67951aef14bc_idx ON\npublic.other_events_1004175222 USING btree (\"time\", user_id) WHERE (\n (user_id <= '(1080212440,9007199254740991)'::app_user_id) AND\n(user_id >= '(1080212440,0)'::app_user_id) AND\n (\n (\n (type = 'click'::text) AND (library = 'web'::text) AND\n (strpos(hierarchy, '#close_onborading;'::text)\n<> 0) AND (object IS NULL)\n ) OR\n (\n (type = 'click'::text) AND (library = 'web'::text) AND\n (strpos(hierarchy,\n'#proceedOnboarding;'::text) <> 0) AND (object IS NULL)\n )\n )\n );\n\n\nHere's the query:\n\nEXPLAIN (ANALYZE, VERBOSE, BUFFERS)\nSELECT user_id,\n \"time\",\n 0 AS event,\n session_id\nFROM test_yasp_events_exp_1004175222\nWHERE ((test_yasp_events_exp_1004175222.user_id >=\n '(1080212440,0)'::app_user_id) AND\n (test_yasp_events_exp_1004175222.user_id <=\n '(1080212440,9007199254740991)'::app_user_id) AND\n (\"time\" >=\n '1624777200000'::bigint) AND\n (\"time\" <=\n '1627369200000'::bigint) AND (\n (\n (type = 'click'::text) AND\n (library = 'web'::text) AND\n (strpos(hierarchy, '#close_onborading;'::text) <>\n 0) AND\n (object IS NULL)) OR\n (\n (type = 'click'::text) AND\n (library = 'web'::text) AND\n (strpos(hierarchy,\n '#proceedOnboarding;'::text) <>\n 0) AND (object IS NULL))))\n\n\nHere's the plan: https://explain.depesz.com/s/uNGg\n\nNote that the index being used is\nother_events_1004175222_pim_core_custom_2_8e65d072fbdd_idx, which is\ndefined this way:\n\nCREATE INDEX other_events_1004175222_pim_core_custom_2_8e65d072fbdd_idx\nON public.other_events_1004175222 USING btree (type, \"time\", user_id)\nWHERE (\n (type IS NOT NULL) AND (object IS NULL) AND\n ((user_id >= '(1080212440,0)'::app_user_id) AND (user_id <=\n'(1080212440,9007199254740991)'::app_user_id)))\n\nYou can view the definition of test_yasp_events_exp_1004175222 here\n<https://pastebin.com/3wYiiTMn>. Note the child tables,\nother_events_1004175222, pageviews_1004175222, and sessions_1004175222\nwhich have the following constraints:\n\nother_events_1004175222: CHECK (object IS NULL)\npageviews_1004175222: CHECK (object IS NOT NULL AND object = 'pageview'::text)\nsessions_1004175222: CHECK (object IS NOT NULL AND object = 'session'::text)\n\nAlso note that these child tables have 100s of partial indexes. You\ncan find history on why we have things set up this way here\n<https://heap.io/blog/running-10-million-postgresql-indexes-in-production>.\n\nHere's the table metadata for other_events_1004175222:\n\nSELECT relname,\n relpages,\n reltuples,\n relallvisible,\n relkind,\n relnatts,\n relhassubclass,\n reloptions,\n pg_table_size(oid)\nFROM pg_class\nWHERE relname = 'other_events_1004175222';\n\nResults:\n\n[image: image.png]\n\n-- \n\nK. 
Matt Dupree\n\nData Science Engineer\n321.754.0526 | [email protected]", "msg_date": "Tue, 10 Aug 2021 12:47:20 -0400", "msg_from": "Matt Dupree <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres using the wrong index index" }, { "msg_contents": "On Tue, Aug 10, 2021 at 12:47:20PM -0400, Matt Dupree wrote:\n> Here's the plan: https://explain.depesz.com/s/uNGg\n> \n> Note that the index being used is\n\nCould you show the plan if you force use of the intended index ?\nFor example by doing begin; DROP INDEX indexbeingused; explain thequery; rollback;\nOr: begin; UPDATE pg_index SET indisvalid=false WHERE indexrelid='indexbeingused'::regclass explain thequery; rollback;\n\nCould you show the table statistics for the time, user_id, and type columns on\nall 4 tables ?\n| SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC; \n\nIt might be interesting to see both query plans when index scans are disabled\nand bitmap scan are used instead (this might be as simple as begin; SET LOCAL\nenable_indexscan=off ...; rollback;);\n\n> Also note that these child tables have 100s of partial indexes. You\n> can find history on why we have things set up this way here\n> <https://heap.io/blog/running-10-million-postgresql-indexes-in-production>.\n\nI have read it before :)\n\n> SELECT relname, relpages, reltuples, relallvisible, pg_table_size(oid)\n> FROM pg_class WHERE relname = 'other_events_1004175222';\n\nCould you also show the table stats for the two indexes ?\n\nOne problem is that the rowcount estimate is badly off:\n| Index Scan using other_events_1004175222_pim_core_custom_2_8e65d072fbdd_idx on public.other_events_1004175222 (cost=0.57..1,213,327.64 rows=1,854,125 width=32) (actual time=450.588..29,057.269 rows=23 loops=1) \n\nTo my eyes, this looks like a typo ; it's used in the index predicate as well\nas the query, but maybe it's still relevant ?\n| #close_onborading\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Aug 2021 07:38:56 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "Thanks for your response, Justin!\n\nHere's <https://explain.depesz.com/s/kCvN> the plan if we disable the\ncustom_2 index. It uses the index I expect and it's much faster.\n\nHere's <https://explain.depesz.com/s/KBgG> a plan if we disable index\nscans. 
It uses both indexes and is much faster.\n\nHere are the stats you asked for:\n\n[image: image.png]\n\nAnd here are the table stats for\nother_events_1004175222_pim_core_custom_2_8e65d072fbdd_idx and\nother_events_1004175222_pim_evdef_67951aef14bc_idx:\n\n[image: image.png]\n\nThanks again for your help!\n\n\n\n\nOn Wed, Aug 11, 2021 at 8:38 AM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Aug 10, 2021 at 12:47:20PM -0400, Matt Dupree wrote:\n> > Here's the plan: https://explain.depesz.com/s/uNGg\n> >\n> > Note that the index being used is\n>\n> Could you show the plan if you force use of the intended index ?\n> For example by doing begin; DROP INDEX indexbeingused; explain thequery;\n> rollback;\n> Or: begin; UPDATE pg_index SET indisvalid=false WHERE\n> indexrelid='indexbeingused'::regclass explain thequery; rollback;\n>\n> Could you show the table statistics for the time, user_id, and type\n> columns on\n> all 4 tables ?\n> | SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\n> tablename, attname, inherited, null_frac, n_distinct,\n> array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\n> n_hist, correlation FROM pg_stats WHERE attname='...' AND tablename='...'\n> ORDER BY 1 DESC;\n>\n> It might be interesting to see both query plans when index scans are\n> disabled\n> and bitmap scan are used instead (this might be as simple as begin; SET\n> LOCAL\n> enable_indexscan=off ...; rollback;);\n>\n> > Also note that these child tables have 100s of partial indexes. You\n> > can find history on why we have things set up this way here\n> > <\n> https://heap.io/blog/running-10-million-postgresql-indexes-in-production>.\n>\n> I have read it before :)\n>\n> > SELECT relname, relpages, reltuples, relallvisible, pg_table_size(oid)\n> > FROM pg_class WHERE relname = 'other_events_1004175222';\n>\n> Could you also show the table stats for the two indexes ?\n>\n> One problem is that the rowcount estimate is badly off:\n> | Index Scan using\n> other_events_1004175222_pim_core_custom_2_8e65d072fbdd_idx on\n> public.other_events_1004175222 (cost=0.57..1,213,327.64 rows=1,854,125\n> width=32) (actual time=450.588..29,057.269 rows=23 loops=1)\n>\n> To my eyes, this looks like a typo ; it's used in the index predicate as\n> well\n> as the query, but maybe it's still relevant ?\n> | #close_onborading\n>\n> --\n> Justin\n>\n\n\n-- \n\nK. Matt Dupree\n\nData Science Engineer\n321.754.0526 | [email protected]", "msg_date": "Wed, 11 Aug 2021 15:56:51 -0400", "msg_from": "Matt Dupree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "You know that you can use pg_hint_plan extension? That way you don't \nhave to disable indexes or set session parameters.\n\nRegards\n\nOn 8/11/21 3:56 PM, Matt Dupree wrote:\n> Thanks for your response, Justin!\n>\n> Here's <https://explain.depesz.com/s/kCvN> the plan if we disable the \n> custom_2 index. It uses the index I expect and it's much faster.\n>\n> Here's <https://explain.depesz.com/s/KBgG> a plan if we disable index \n> scans. It uses both indexes and is much faster.\n>\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\n\nYou know that you can use pg_hint_plan extension? That way you\n don't have to disable indexes or set session parameters.\n\nRegards\n\nOn 8/11/21 3:56 PM, Matt Dupree wrote:\n\n\nThanks for your response, Justin!\n\n\nHere's the plan if we disable the\n custom_2 index. 
It uses the index I expect and it's much faster.\n\n\n\nHere's a plan if we disable index\n scans. It uses both indexes and is much faster.\n\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Wed, 11 Aug 2021 17:45:34 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "The rowcount estimate for the time column is bad for all these plans - do you\nknow why ? You're using inheritence - have you analyzed the parent tables\nrecently ?\n\n| Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on public.other_events_1004175222 (cost=0.28..1,648,877.92 rows=1,858,891 width=32) (actual time=1.008..15.245 rows=23 loops=1)\n| Index Cond: ((other_events_1004175222.\"time\" >= '1624777200000'::bigint) AND (other_events_1004175222.\"time\" <= '1627369200000'::bigint))\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Aug 2021 22:45:11 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "Justin,\n\nThe rowcount estimate for the time column is bad for all these plans - do\n> you\n> know why ? You're using inheritence - have you analyzed the parent tables\n> recently ?\n>\n\nYes. I used ANALYZE before posting, as it's one of the \"things to try\"\nlisted in the slow queries wiki. I even ran the queries immediately after\nanalyzing. No difference. Can you say more about why the bad row estimate\nwould cause Postgres to use the bigger index? I would expect Postgres to\nuse the smaller index if it's over-estimating how many rows will be\nreturned.\n\nMladen,\n\nYou know that you can use pg_hint_plan extension? That way you don't have\n> to disable indexes or set session parameters.\n>\n\nThanks for the tip! I didn't know you could use pg_hint_plan to force the\nuse of certain indexes. For now, I'd like to avoid hinting and fix the\nunderlying issue.\n\nOn Wed, Aug 11, 2021 at 11:45 PM Justin Pryzby <[email protected]> wrote:\n\n> The rowcount estimate for the time column is bad for all these plans - do\n> you\n> know why ? You're using inheritence - have you analyzed the parent tables\n> recently ?\n>\n> | Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on\n> public.other_events_1004175222 (cost=0.28..1,648,877.92 rows=1,858,891\n> width=32) (actual time=1.008..15.245 rows=23 loops=1)\n> | Index Cond: ((other_events_1004175222.\"time\" >=\n> '1624777200000'::bigint) AND (other_events_1004175222.\"time\" <=\n> '1627369200000'::bigint))\n>\n> --\n> Justin\n>\n\n\n-- \n\nK. Matt Dupree\n\nData Science Engineer\n321.754.0526 | [email protected]\n\nJustin,The rowcount estimate for the time column is bad for all these plans - do you\nknow why ?  You're using inheritence - have you analyzed the parent tables\nrecently ?Yes. I used ANALYZE before posting, as it's one of the \"things to try\" listed in the slow queries wiki. I even ran the queries immediately after analyzing. No difference. Can you say more about why the bad row estimate would cause Postgres to use the bigger index? I would expect Postgres to use the smaller index if it's over-estimating how many rows will be returned.  Mladen,You know that you can use pg_hint_plan extension? That way you\n don't have to disable indexes or set session parameters.Thanks for the tip! I didn't know you could use pg_hint_plan to force the use of certain indexes. 
For now, I'd like to avoid hinting and fix the underlying issue. On Wed, Aug 11, 2021 at 11:45 PM Justin Pryzby <[email protected]> wrote:The rowcount estimate for the time column is bad for all these plans - do you\nknow why ?  You're using inheritence - have you analyzed the parent tables\nrecently ?\n\n| Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on public.other_events_1004175222 (cost=0.28..1,648,877.92 rows=1,858,891 width=32) (actual time=1.008..15.245 rows=23 loops=1)\n|    Index Cond: ((other_events_1004175222.\"time\" >= '1624777200000'::bigint) AND (other_events_1004175222.\"time\" <= '1627369200000'::bigint))\n\n-- \nJustin\n-- K. Matt DupreeData Science Engineer321.754.0526  |  [email protected]", "msg_date": "Thu, 12 Aug 2021 09:38:45 -0400", "msg_from": "Matt Dupree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "On Thu, Aug 12, 2021 at 09:38:45AM -0400, Matt Dupree wrote:\n> > The rowcount estimate for the time column is bad for all these plans - do you\n> > know why ? You're using inheritence - have you analyzed the parent tables recently ?\n> \n> Yes. I used ANALYZE before posting, as it's one of the \"things to try\"\n> listed in the slow queries wiki. I even ran the queries immediately after\n> analyzing. No difference. Can you say more about why the bad row estimate\n> would cause Postgres to use the bigger index? I would expect Postgres to\n> use the smaller index if it's over-estimating how many rows will be\n> returned.\n\nThe overestimate is in the table's \"time\" column (not index) and applies to all\nthe plans. Is either half of the AND estimated correctly? If you do a query\nwith only \">=\", and a query with only \"<=\", do either of them give an accurate\nrowcount estimate ?\n\n|Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on public.other_events_1004175222 (cost=0.28..1,648,877.92 rows=1,858,891 width=32) (actual time=1.008..15.245 rows=23 loops=1) \n|Index Cond: ((other_events_1004175222.\"time\" >= '1624777200000'::bigint) AND (other_events_1004175222.\"time\" <= '1627369200000'::bigint))\n\nIt seems like postgres expects the scan to return a large number of matching\nrows, so tries to use the more selective index which includes the \"type\"\ncolumn. But \"type\" is not very selective either (it has only 4 distinct\nvalues), and \"time\" is not the first column, so it reads a large fraction of\nthe table, slowly.\n\nCould you check pg_stat_all_tables and be sure the last_analyzed is recent for\nboth parent and child tables ?\n\nCould you send the histogram bounds for \"time\" ?\nSELECT tablename, attname, inherited, array_length(histogram_bounds,1), (histogram_bounds::text::text[])[1], (histogram_bounds::text::text[])[array_length(histogram_bounds,1)]\nFROM pg_stats ... ;\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 12 Aug 2021 18:20:06 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": ">\n> Is either half of the AND estimated correctly? If you do a query\n> with only \">=\", and a query with only \"<=\", do either of them give an\n> accurate\n> rowcount estimate ?\n>\n\nDropping >= results in the correct index being used. 
Dropping <= doesn't\nhave this effect.\n\nCould you send the histogram bounds for \"time\" ?\n>\n\n\n[image: image.png]\n\nCould you check pg_stat_all_tables and be sure the last_analyzed is recent\n> for\n> both parent and child tables ?\n>\n\nLooks like I forgot to ANALYZE the other_events partition, but Postgres is\nstill using the wrong index either way (unless I drop >=, as mentioned\nabove). Here are the results:\n\n[image: image.png]\n\nOn Thu, Aug 12, 2021 at 7:20 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Aug 12, 2021 at 09:38:45AM -0400, Matt Dupree wrote:\n> > > The rowcount estimate for the time column is bad for all these plans -\n> do you\n> > > know why ? You're using inheritence - have you analyzed the parent\n> tables recently ?\n> >\n> > Yes. I used ANALYZE before posting, as it's one of the \"things to try\"\n> > listed in the slow queries wiki. I even ran the queries immediately after\n> > analyzing. No difference. Can you say more about why the bad row estimate\n> > would cause Postgres to use the bigger index? I would expect Postgres to\n> > use the smaller index if it's over-estimating how many rows will be\n> > returned.\n>\n> The overestimate is in the table's \"time\" column (not index) and applies\n> to all\n> the plans. Is either half of the AND estimated correctly? If you do a\n> query\n> with only \">=\", and a query with only \"<=\", do either of them give an\n> accurate\n> rowcount estimate ?\n>\n> |Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on\n> public.other_events_1004175222 (cost=0.28..1,648,877.92 rows=1,858,891\n> width=32) (actual time=1.008..15.245 rows=23 loops=1)\n> |Index Cond: ((other_events_1004175222.\"time\" >= '1624777200000'::bigint)\n> AND (other_events_1004175222.\"time\" <= '1627369200000'::bigint))\n>\n> It seems like postgres expects the scan to return a large number of\n> matching\n> rows, so tries to use the more selective index which includes the \"type\"\n> column. But \"type\" is not very selective either (it has only 4 distinct\n> values), and \"time\" is not the first column, so it reads a large fraction\n> of\n> the table, slowly.\n>\n> Could you check pg_stat_all_tables and be sure the last_analyzed is recent\n> for\n> both parent and child tables ?\n>\n> Could you send the histogram bounds for \"time\" ?\n> SELECT tablename, attname, inherited, array_length(histogram_bounds,1),\n> (histogram_bounds::text::text[])[1],\n> (histogram_bounds::text::text[])[array_length(histogram_bounds,1)]\n> FROM pg_stats ... ;\n>\n> --\n> Justin\n>\n\n\n-- \n\nK. Matt Dupree\n\nData Science Engineer\n321.754.0526 | [email protected]", "msg_date": "Mon, 16 Aug 2021 11:22:44 -0400", "msg_from": "Matt Dupree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "On Mon, Aug 16, 2021 at 11:22:44AM -0400, Matt Dupree wrote:\n> > Is either half of the AND estimated correctly? If you do a query\n> > with only \">=\", and a query with only \"<=\", do either of them give an\n> > accurate rowcount estimate ?\n> \n> Dropping >= results in the correct index being used. Dropping <= doesn't\n> have this effect.\n\nThis doesn't answer the question though: are the rowcount estimes accurate (say\nwithin 10%).\n\nIt sounds like interpolating the histogram is giving a poor result, at least\nover that range of values. 
It'd be interesting to see the entire histogram.\n\nYou might try increasing (or decreasing) the stats target for that column, and\nre-analyzing.\n\nYour histogram bounds are for ~38 months of data, and your query is for the\nprevious month (July).\n\n$ date -d @1530186399\nThu Jun 28 06:46:39 CDT 2018\n$ date -d @1629125609\nMon Aug 16 09:53:29 CDT 2021\n\n$ date -d @1627369200\nTue Jul 27 02:00:00 CDT 2021\n$ date -d @1624777200\nSun Jun 27 02:00:00 CDT 2021\n\nThe timestamp column has ndistinct near -1, similar to a continuous\ndistribution, so I'm not sure why the estimate would be so bad.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 17 Aug 2021 13:52:32 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "I increased (and decreased) the stats target for the column and\nre-analyzed. Didn't make a difference.\n\nIs it possible that the row estimate is off because of a column other than\ntime? I looked at the # of events in that time period and 1.8 million is\nactually a good estimate. What about the\n((strpos(other_events_1004175222.hierarchy, '#close_onborading;'::text) <>\n0) condition in the filter? It makes sense that Postgres wouldn't have a\nway to estimate how selective this condition is.\n\nOn Tue, Aug 17, 2021 at 2:52 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Aug 16, 2021 at 11:22:44AM -0400, Matt Dupree wrote:\n> > > Is either half of the AND estimated correctly? If you do a query\n> > > with only \">=\", and a query with only \"<=\", do either of them give an\n> > > accurate rowcount estimate ?\n> >\n> > Dropping >= results in the correct index being used. Dropping <= doesn't\n> > have this effect.\n>\n> This doesn't answer the question though: are the rowcount estimes accurate\n> (say\n> within 10%).\n>\n> It sounds like interpolating the histogram is giving a poor result, at\n> least\n> over that range of values. It'd be interesting to see the entire\n> histogram.\n>\n> You might try increasing (or decreasing) the stats target for that column,\n> and\n> re-analyzing.\n>\n> Your histogram bounds are for ~38 months of data, and your query is for the\n> previous month (July).\n>\n> $ date -d @1530186399\n> Thu Jun 28 06:46:39 CDT 2018\n> $ date -d @1629125609\n> Mon Aug 16 09:53:29 CDT 2021\n>\n> $ date -d @1627369200\n> Tue Jul 27 02:00:00 CDT 2021\n> $ date -d @1624777200\n> Sun Jun 27 02:00:00 CDT 2021\n>\n> The timestamp column has ndistinct near -1, similar to a continuous\n> distribution, so I'm not sure why the estimate would be so bad.\n>\n> --\n> Justin\n>\n\n\n-- \n\nK. Matt Dupree\n\nData Science Engineer\n321.754.0526 | [email protected]\n\nI increased (and decreased) the stats target for the column and re-analyzed. Didn't make a difference.Is it possible that the row estimate is off because of a column other than time? I looked at the # of events in that time period and 1.8 million is actually a good estimate. What about the ((strpos(other_events_1004175222.hierarchy, '#close_onborading;'::text) <> 0) condition in the filter? It makes sense that Postgres wouldn't have a way to estimate how selective this condition is.On Tue, Aug 17, 2021 at 2:52 PM Justin Pryzby <[email protected]> wrote:On Mon, Aug 16, 2021 at 11:22:44AM -0400, Matt Dupree wrote:\n> > Is either half of the AND estimated correctly?  
If you do a query\n> > with only \">=\", and a query with only \"<=\", do either of them give an\n> > accurate rowcount estimate ?\n> \n> Dropping >= results in the correct index being used. Dropping <= doesn't\n> have this effect.\n\nThis doesn't answer the question though: are the rowcount estimes accurate (say\nwithin 10%).\n\nIt sounds like interpolating the histogram is giving a poor result, at least\nover that range of values.  It'd be interesting to see the entire histogram.\n\nYou might try increasing (or decreasing) the stats target for that column, and\nre-analyzing.\n\nYour histogram bounds are for ~38 months of data, and your query is for the\nprevious month (July).\n\n$ date -d @1530186399\nThu Jun 28 06:46:39 CDT 2018\n$ date -d @1629125609\nMon Aug 16 09:53:29 CDT 2021\n\n$ date -d @1627369200\nTue Jul 27 02:00:00 CDT 2021\n$ date -d @1624777200\nSun Jun 27 02:00:00 CDT 2021\n\nThe timestamp column has ndistinct near -1, similar to a continuous\ndistribution, so I'm not sure why the estimate would be so bad.\n\n-- \nJustin\n-- K. Matt DupreeData Science Engineer321.754.0526  |  [email protected]", "msg_date": "Mon, 23 Aug 2021 20:53:15 -0400", "msg_from": "Matt Dupree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres using the wrong index index" }, { "msg_contents": "On Mon, Aug 23, 2021 at 08:53:15PM -0400, Matt Dupree wrote:\n> Is it possible that the row estimate is off because of a column other than\n> time?\n\nI would test this by writing the simplest query that reproduces the\nmis-estimate.\n\n> I looked at the # of events in that time period and 1.8 million is\n> actually a good estimate. What about the\n> ((strpos(other_events_1004175222.hierarchy, '#close_onborading;'::text) <>\n> 0) condition in the filter? It makes sense that Postgres wouldn't have a\n> way to estimate how selective this condition is.\n\nThe issue I see is here. I don't know where else I'd start but to understand\nthis.\n\n| Index Scan using other_events_1004175222_pim_evdef_67951aef14bc_idx on public.other_events_1004175222 (cost=0.28..1,648,877.92 ROWS=1,858,891 width=32) (actual time=1.008..15.245 ROWS=23 loops=1)\n| Output: other_events_1004175222.user_id, other_events_1004175222.\"time\", other_events_1004175222.session_id\n| Index Cond: ((other_events_1004175222.\"time\" >= '1624777200000'::bigint) AND (other_events_1004175222.\"time\" <= '1627369200000'::bigint))\n| Buffers: shared read=25\n\nThis has no \"filter\" condition, it's a \"scan\" node with bad over-estimate.\nNote that this is due to the table's column stats, not any index's stats, so\nevery plan is affected. even though some happen to work well. The consequences\nof over-estimates are not as terrible as for under-estimates, but it's bad to\nstart with inputs that are off by 10^5.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 23 Aug 2021 20:38:19 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres using the wrong index index" } ]
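A sketch of the "simplest query that reproduces the mis-estimate" suggested in the last message, using the child table name and time bounds quoted in the plans above. Comparing the planner's estimated rows against the actual rows for each half of the range, and then for both bounds together, should show whether the histogram interpolation or the combination of the two bounds is at fault (the count(*) wrapper is only there to keep the output small):

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM other_events_1004175222 WHERE "time" >= 1624777200000;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM other_events_1004175222 WHERE "time" <= 1627369200000;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM other_events_1004175222 WHERE "time" BETWEEN 1624777200000 AND 1627369200000;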
[ { "msg_contents": "Hi All,\nWhat is the difference between pg_triggers and information_schema.triggers?\nI want to list all triggers in the database.\n\nThe count differs in both.\n\nselect count(1) from information_schema.triggers -55\nselect count(1) from pg_trigger - 48\n\nWhat is the best way to list all objects in PostgreSQL?(similar to\nall_objects in Oracle).\n\n\nRegards,\nAditya.\n\nHi All,What is the difference between pg_triggers and information_schema.triggers? I want to list all triggers in the database.The count differs in both.select count(1) from information_schema.triggers  -55select count(1) from pg_trigger - 48What is the best way to list all objects in PostgreSQL?(similar to all_objects in Oracle).Regards,Aditya.", "msg_date": "Wed, 11 Aug 2021 23:58:22 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "difference between pg_triggers and information_schema.triggers" }, { "msg_contents": "On Wednesday, August 11, 2021, aditya desai <[email protected]> wrote:\n\n> Hi All,\n> What is the difference between pg_triggers and\n> information_schema.triggers? I want to list all triggers in the database.\n>\n\nRead the docs for information_schema.triggers.\n\n\n> What is the best way to list all objects in PostgreSQL?(similar to\n> all_objects in Oracle).\n>\n>\nWith pg_catalog tables. But I’m not aware of anything that combines all\nobject types into a single result. Seems like an easy enough query to put\ntogether though.\n\nDavid J.\n\nOn Wednesday, August 11, 2021, aditya desai <[email protected]> wrote:Hi All,What is the difference between pg_triggers and information_schema.triggers? I want to list all triggers in the database.Read the docs for information_schema.triggers. What is the best way to list all objects in PostgreSQL?(similar to all_objects in Oracle).With pg_catalog tables.  But I’m not aware of anything that combines all object types into a single result.  Seems like an easy enough query to put together though.David J.", "msg_date": "Wed, 11 Aug 2021 11:37:27 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difference between pg_triggers and information_schema.triggers" }, { "msg_contents": "Seems like multiple entries in information_schema.triggers for\nINSERT/UPDATE/DELETE. Understood thanks.\n\npostgres=# select tgname,tgtype from pg_trigger;\n tgname | tgtype\n--------------------+--------\n insert_empployee | 31\n insert_empployee_1 | 31\n(2 rows)\n\n\npostgres=# select tgname from pg_trigger;\n tgname\n--------------------\n insert_empployee\n insert_empployee_1\n(2 rows)\n\n\npostgres=# select trigger_name,event_manipulation from\ninformation_schema.triggers;\n trigger_name | event_manipulation\n--------------------+--------------------\n insert_empployee | INSERT\n insert_empployee | DELETE\n insert_empployee | UPDATE\n insert_empployee_1 | INSERT\n insert_empployee_1 | DELETE\n insert_empployee_1 | UPDATE\n(6 rows)\n\nRegards,\nAditya.\n\nOn Thu, Aug 12, 2021 at 12:07 AM David G. Johnston <\[email protected]> wrote:\n\n> On Wednesday, August 11, 2021, aditya desai <[email protected]> wrote:\n>\n>> Hi All,\n>> What is the difference between pg_triggers and\n>> information_schema.triggers? I want to list all triggers in the database.\n>>\n>\n> Read the docs for information_schema.triggers.\n>\n>\n>> What is the best way to list all objects in PostgreSQL?(similar to\n>> all_objects in Oracle).\n>>\n>>\n> With pg_catalog tables. 
But I’m not aware of anything that combines all\n> object types into a single result. Seems like an easy enough query to put\n> together though.\n>\n> David J.\n>\n>\n\nSeems like multiple entries in information_schema.triggers for INSERT/UPDATE/DELETE. Understood thanks.postgres=# select tgname,tgtype  from pg_trigger;       tgname       | tgtype--------------------+-------- insert_empployee   |     31 insert_empployee_1 |     31(2 rows)postgres=# select tgname  from pg_trigger;       tgname-------------------- insert_empployee insert_empployee_1(2 rows)postgres=# select trigger_name,event_manipulation from information_schema.triggers;    trigger_name    | event_manipulation--------------------+-------------------- insert_empployee   | INSERT insert_empployee   | DELETE insert_empployee   | UPDATE insert_empployee_1 | INSERT insert_empployee_1 | DELETE insert_empployee_1 | UPDATE(6 rows)Regards,Aditya.On Thu, Aug 12, 2021 at 12:07 AM David G. Johnston <[email protected]> wrote:On Wednesday, August 11, 2021, aditya desai <[email protected]> wrote:Hi All,What is the difference between pg_triggers and information_schema.triggers? I want to list all triggers in the database.Read the docs for information_schema.triggers. What is the best way to list all objects in PostgreSQL?(similar to all_objects in Oracle).With pg_catalog tables.  But I’m not aware of anything that combines all object types into a single result.  Seems like an easy enough query to put together though.David J.", "msg_date": "Thu, 12 Aug 2021 00:24:18 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difference between pg_triggers and information_schema.triggers" } ]
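For completeness, one possible catalog query that lists user-level triggers exactly once per trigger, rather than once per event the way information_schema.triggers does. The NOT tgisinternal filter, which skips the constraint triggers that back foreign keys, is an addition not discussed in the thread and is another reason the two counts can differ:

SELECT n.nspname AS schema_name,
       c.relname AS table_name,
       t.tgname  AS trigger_name
FROM pg_trigger t
JOIN pg_class c     ON c.oid = t.tgrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT t.tgisinternal   -- exclude internally generated constraint triggers
ORDER BY 1, 2, 3;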
[ { "msg_contents": "Hi,\nWe are migrating Oracle to PostgreSQL. We need the equivalent of UTL_HTTP.\nHow to invoke Web service from PostgreSQL.\n\nAlso please let me know the PostgreSQL equivalents of below\nOracle utilities..\n\nutl.logger,UTL_FILE,UTL_SMTP\n\nRegards,\nAditya.\n\nHi,We are migrating Oracle to PostgreSQL. We need the equivalent of UTL_HTTP.How to invoke Web service from PostgreSQL.Also please let me know the PostgreSQL equivalents of below Oracle utilities..utl.logger,UTL_FILE,UTL_SMTPRegards,Aditya.", "msg_date": "Thu, 12 Aug 2021 00:26:58 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL equivalent of UTL_HTTP" }, { "msg_contents": "Hi\n\nst 11. 8. 2021 v 20:57 odesílatel aditya desai <[email protected]> napsal:\n\n> Hi,\n> We are migrating Oracle to PostgreSQL. We need the equivalent of UTL_HTTP.\n> How to invoke Web service from PostgreSQL.\n>\n> Also please let me know the PostgreSQL equivalents of below\n> Oracle utilities..\n>\n> utl.logger,UTL_FILE,UTL_SMTP\n>\n\nyou can use extensions https://github.com/pramsey/pgsql-http or\nhttps://github.com/RekGRpth/pg_curl\n\nYou can use an routines in untrusted PLPerl or untrusted PLPython, but\nthese routines can be really unsafe (due possibility to break signal\nhandling).\n\nPersonally, I think using http access in stored procedures is a very bad\nidea - access from transactional to non-transactional (and possibly pretty\nslow) environments creates a lot of ugly problems. Stored procedures are\ngreat technology with a pretty bad reputation, and one reason why is usage\nof this technology for bad cases.\n\nI think this mailing list is wrong for this kind of question. There is no\nrelation to performance.\n\nRegards\n\nPavel\n\n\n\n\n\n> Regards,\n> Aditya.\n>\n>\n>\n\nHist 11. 8. 2021 v 20:57 odesílatel aditya desai <[email protected]> napsal:Hi,We are migrating Oracle to PostgreSQL. We need the equivalent of UTL_HTTP.How to invoke Web service from PostgreSQL.Also please let me know the PostgreSQL equivalents of below Oracle utilities..utl.logger,UTL_FILE,UTL_SMTPyou can use extensions https://github.com/pramsey/pgsql-http or https://github.com/RekGRpth/pg_curl You can use an routines in untrusted PLPerl or untrusted PLPython, but these routines can be really unsafe (due possibility to break signal handling).Personally, I think using http access in stored procedures is a very bad idea - access from transactional to non-transactional (and possibly pretty slow) environments creates a lot of ugly problems. Stored procedures are great technology with a pretty bad reputation, and one reason why is usage of this technology for bad cases. I think this mailing list is wrong for this kind of question. There is no relation to performance.RegardsPavelRegards,Aditya.", "msg_date": "Wed, 11 Aug 2021 21:13:41 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL equivalent of UTL_HTTP" }, { "msg_contents": "Hi All,\n\nHave anyone tried to install PostgreSQL on the VM provisioned on IBM Z?\n\nIf yes please could you share the installation instructions or point to the\nblog etc.\n\nThanks in advance.\n\nThanks & Regards,\nManish\n\nHi All, Have anyone tried to install PostgreSQL on the VM provisioned on IBM Z? If yes please could you share the installation instructions or point to the blog etc.Thanks in advance. 
Thanks & Regards,Manish", "msg_date": "Wed, 6 Oct 2021 10:43:19 +0530", "msg_from": "Manish Lad <[email protected]>", "msg_from_op": false, "msg_subject": "Installation of PostgreSQL on fedora zVM" } ]
[ { "msg_contents": "Hello all,\n\nI think I have identified a major performance issue between V11.2 and 13.4 with respect to exception handling in UDFs. I have the following simplified query that pivots data and makes use of a UDF to convert data to a specific type, in this case, float:\n\n\nselect \"iccqa_iccassmt_fk\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null) as \"iccqa_DEPTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null) as \"iccqa_LENGTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null) as \"iccqa_WIDTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null) as \"iccqa_DRAIN_PRESENT\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null) as \"iccqa_MEASUREMENTS_TAKEN\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null) as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n ) group by 1, 2\n) T\n group by 1\n;\n\n\nThe UDF is simple as follows:\n\n\nCREATE OR REPLACE FUNCTION TILDA.toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\n\n\nIt works as a coalesce but with a conversion. I think I have identified some large performance difference with the exception handling. It so happens that with the last 3 columns ('DRAIN PRESENT', 'MEASUREMENTS TAKEN' and 'SIGNS AND SYMPTOMS OF INFECTION'), the data is VERY dirty. There is a mix of 0/1, YES/NO, and other mistyped stuff. This means these 3 columns throw lots of exceptions in the UDF. 
To illustrate, I simply break this into 2 queries.\n\n\n\nselect \"iccqa_iccassmt_fk\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n )\ngroup by 1, 2\n) T\n group by 1\n;\n\n\nThe performance is as expected.\n\n\nHashAggregate (cost=448463.70..448467.20 rows=200 width=16) (actual time=6760.797..9585.397 rows=677899 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 147489kB\n Buffers: shared hit=158815\n -> HashAggregate (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4576.514..5460.770 rows=2374628 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 368657kB\n Buffers: shared hit=158815\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.033..3298.544 rows=2374628 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 10734488\n Buffers: shared hit=158815\nPlanning:\n Buffers: shared hit=3\nPlanning Time: 0.198 ms\nExecution Time: 9678.120 ms\n\n\n\nHowever, once we switch with the three \"bad\" columns, the results fall apart.\n\n\n\nselect \"iccqa_iccassmt_fk\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where 
\"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n )\ngroup by 1, 2\n) T\n group by 1\n;\n\n\n\nThe performance falls apart. It is a huge performance difference from ~10s to ~11mn and the only difference that I can think of is that the data is dirty which causes the exception path to be taken. The explain is:\n\n\nHashAggregate (cost=448463.70..448467.20 rows=200 width=16) (actual time=6672.921..696753.080 rows=677899 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 131105kB\n Buffers: shared hit=158815\n -> HashAggregate (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4574.918..5446.022 rows=2374628 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 368657kB\n Buffers: shared hit=158815\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.032..3300.616 rows=2374628 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 10734488\n Buffers: shared hit=158815\nPlanning:\n Buffers: shared hit=3\nPlanning Time: 0.201 ms\nExecution Time: 696868.845 ms\n\n\n\nNow, on V11.2, the explain is:\n\n\nHashAggregate (cost=492171.36..492174.86 rows=200 width=16) (actual time=19322.522..50556.738 rows=743723 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Buffers: shared hit=11 read=174155 dirtied=13\n -> HashAggregate (cost=445458.43..457915.21 rows=1245678 width=56) (actual time=16260.015..17575.088 rows=2601088 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared read=174155 dirtied=13\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..425803.93 rows=2620600 width=38) (actual time=0.126..14425.239 rows=2601088 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 11778360\n Buffers: shared read=174155 dirtied=13\nPlanning Time: 36.121 ms\nExecution Time: 50730.255 ms\n\n\n\nSo, we are seeing two issues:\n\n * I think exception handling is significantly slower between V11.2 and v13.4. I see almost a 14x difference from 50s to 700s.\n * Comparing the two queries on V11.2, the difference is 13s vs 50s. 
So even on V11.2, the exception handling adds a significant overhead which I was not expecting.\n\nI'll be happy to update my test cases and share additional info if needed.\n\nThank you,\nLaurent Hasson.\n\n\n\n\n\n\n\n\n\n\nHello all,\n \nI think I have identified a major performance issue between V11.2 and 13.4 with respect to exception handling in UDFs. I have the following simplified query that pivots data and makes use of a UDF to convert data to a specific type, in\n this case, float:\n \n \nselect \"iccqa_iccassmt_fk\" \n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null) as \"iccqa_DEPTH_CM\"\n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null) as \"iccqa_LENGTH_CM\"\n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null) as \"iccqa_WIDTH_CM\"\n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null) as \"iccqa_DRAIN_PRESENT\"\n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null) as \"iccqa_MEASUREMENTS_TAKEN\"\n     , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null) as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom  (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n     , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n     , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n  from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n                                                               , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n                                                               ) group by 1, 2\n) T\n     group by 1\n;\n \n \nThe UDF is simple as follows:\n \n \nCREATE OR REPLACE FUNCTION TILDA.toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n  RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n  RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n \n \n \nIt works as a coalesce but with a conversion. I think I have identified some large performance difference with the exception handling. It so happens that with the last 3 columns ('DRAIN PRESENT', 'MEASUREMENTS TAKEN' and 'SIGNS AND SYMPTOMS\n OF INFECTION'), the data is VERY dirty. There is a mix of 0/1, YES/NO, and other mistyped stuff. This means these 3 columns throw lots of exceptions in the UDF. 
To illustrate, I simply break this into 2 queries.\n \n \n \nselect \"iccqa_iccassmt_fk\" \n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom  (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n     , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n     , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n  from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n                                                               , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n                                                               )\ngroup by 1, 2\n) T\n     group by 1\n;\n \n \nThe performance is as expected.\n \n \nHashAggregate  (cost=448463.70..448467.20 rows=200 width=16) (actual time=6760.797..9585.397 rows=677899 loops=1)\n  Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n  Batches: 1  Memory Usage: 147489kB\n  Buffers: shared hit=158815\n  ->  HashAggregate  (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4576.514..5460.770 rows=2374628 loops=1)\n        Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n        Batches: 1  Memory Usage: 368657kB\n        Buffers: shared hit=158815\n        ->  Seq Scan on assessmenticcqa_raw  (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.033..3298.544 rows=2374628 loops=1)\n              Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n              Rows Removed by Filter: 10734488\n              Buffers: shared hit=158815\nPlanning:\n  Buffers: shared hit=3\nPlanning Time: 0.198 ms\nExecution Time: 9678.120 ms\n \n \n \nHowever, once we switch with the three “bad” columns, the results fall apart.\n \n \n \nselect \"iccqa_iccassmt_fk\" \n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n--     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN 
PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n     , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom  (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n     , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n     , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n  from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n                                                               , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n                                                               )\ngroup by 1, 2\n) T\n     group by 1\n;\n \n \n \nThe performance falls apart. It is a huge performance difference from ~10s to ~11mn and the only difference that I can think of is that the data is dirty which causes the exception path to be taken. The explain is:\n \n \nHashAggregate  (cost=448463.70..448467.20 rows=200 width=16) (actual time=6672.921..696753.080 rows=677899 loops=1)\n  Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n  Batches: 1  Memory Usage: 131105kB\n  Buffers: shared hit=158815\n  ->  HashAggregate  (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4574.918..5446.022 rows=2374628 loops=1)\n        Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n        Batches: 1  Memory Usage: 368657kB\n        Buffers: shared hit=158815\n        ->  Seq Scan on assessmenticcqa_raw  (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.032..3300.616 rows=2374628 loops=1)\n              Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n              Rows Removed by Filter: 10734488\n              Buffers: shared hit=158815\nPlanning:\n  Buffers: shared hit=3\nPlanning Time: 0.201 ms\nExecution Time: 696868.845 ms\n \n \n \nNow, on V11.2, the explain is:\n \n \nHashAggregate  (cost=492171.36..492174.86 rows=200 width=16) (actual time=19322.522..50556.738 rows=743723 loops=1)\n  Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n  Buffers: shared hit=11 read=174155 dirtied=13\n  ->  HashAggregate  (cost=445458.43..457915.21 rows=1245678 width=56) (actual time=16260.015..17575.088 rows=2601088 loops=1)\n        Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n        Buffers: shared read=174155 dirtied=13\n        ->  Seq Scan on assessmenticcqa_raw  (cost=0.00..425803.93 rows=2620600 width=38) (actual time=0.126..14425.239 rows=2601088 loops=1)\n              Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n              Rows Removed by Filter: 11778360\n              Buffers: shared read=174155 dirtied=13\nPlanning Time: 36.121 ms\nExecution Time: 
50730.255 ms\n \n \n \nSo, we are seeing two issues:\n\n\nI think exception handling is significantly slower between V11.2 and v13.4. I see almost a 14x difference from 50s to 700s.\nComparing the two queries on V11.2, the difference is 13s vs 50s. So even on V11.2, the exception handling adds a significant overhead which I was not expecting.\n \nI’ll be happy to update my test cases and share additional info if needed.\n \nThank you,\nLaurent Hasson.", "msg_date": "Sat, 21 Aug 2021 07:56:34 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "OK... I apologize for the long email before. Right after I sent it, I thought of a much simpler use-case to illustrate the issue which doesn't depend on any special data I have access o and complex pivoting. It's as raw as I can make it.\n\nI create a table with 1M rows and 2 columns. Column \"a\" is a random string, while column \"b\" is a random integer as a string. Then I use a UDF that converts strings to floats and handles an exception if the incoming string is not parsable as a float. Then I do a simple select of each column. In the \"a\" case, the UDF throws and catches lots of exceptions. In the \"b\" case, the conversion is clean and exceptions are not thrown.\n\n\ncreate table sampletest (a varchar, b varchar);\n\ninsert into sampletest (a, b)\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\nfrom generate_series(1,1000000);\n\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nselect MAX(toFloat(a, null)) as \"a\" from sampletest;\n\nselect MAX(toFloat(b, null)) as \"b\" from sampletest;\n\n\n\nOn purpose, I am doing a max(toFloat) instead of toFloat(max) to exercise the UDF 1M times.\n\n\nV13.4 \"a\" scenario (exceptions)\n-------------------------------------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=774098.537..774098.538 rows=1 loops=1)\n Buffers: shared hit=6373\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.011..285.458 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.066 ms\nExecution Time: 774,098.563 ms\n\n\nV13.4 \"b\" scenario (no exceptions)\n-------------------------------------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=1510.200..1510.201 rows=1 loops=1)\n Buffers: shared hit=6385\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.024..115.196 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning:\n Buffers: shared hit=26\nPlanning Time: 0.361 ms\nExecution Time: 1,530.659 ms\n\n\nV11.2 \"a\" scenario (exceptions)\n-------------------------------------------------------------\nAggregate (cost=21658.00..21658.01 rows=1 width=4) (actual time=26528.286..26528.286 rows=1 loops=1)\n Buffers: shared hit=6393\n -> Seq Scan on sampletest (cost=0.00..16562.00 rows=1019200 width=15) (actual time=0.037..190.633 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 1.182 ms\nExecution Time: 26,530.492 ms\n\n\nV11.2 \"b\" scenario (no exceptions)\n-------------------------------------------------------------\nAggregate (cost=21658.00..21658.01 rows=1 width=4) (actual time=1856.116..1856.116 
rows=1 loops=1)\n Buffers: shared hit=6370\n -> Seq Scan on sampletest (cost=0.00..16562.00 rows=1019200 width=8) (actual time=0.014..88.152 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.098 ms\nExecution Time: 1,856.152 ms\n\n\n\n\n\nSummary:\n\n * Scenario V11.2/a: 26.6s\n * Scenario V11.2/b: 1.9s\n * Scenario V13.4/a: 774.1s\n * Scenario V13.4/b: 1.5s\n\nConclusion:\n\n * The no-exception scenario performs 20% better on 13.4 vs 11.2 (nice for a straight scan!)\n * On 11.2, exceptions add an overhead of over 14x (1.9s vs 26.6s). I did not expect exceptions to add such a large overhead. Why is that?\n * Between 11.2 and 13.4, the no-exceptions scenario \"b\" performs 30x slower (26.6s vs 774.1s).\n\nThank you!\nLaurent Hasson.\n\n\n\nFrom: [email protected] <[email protected]>\nSent: Saturday, August 21, 2021 03:57\nTo: [email protected]\nSubject: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\n\nHello all,\n\nI think I have identified a major performance issue between V11.2 and 13.4 with respect to exception handling in UDFs. I have the following simplified query that pivots data and makes use of a UDF to convert data to a specific type, in this case, float:\n\n\nselect \"iccqa_iccassmt_fk\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null) as \"iccqa_DEPTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null) as \"iccqa_LENGTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null) as \"iccqa_WIDTH_CM\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null) as \"iccqa_DRAIN_PRESENT\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null) as \"iccqa_MEASUREMENTS_TAKEN\"\n , Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null) as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n ) group by 1, 2\n) T\n group by 1\n;\n\n\nThe UDF is simple as follows:\n\n\nCREATE OR REPLACE FUNCTION TILDA.toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\n\n\nIt works as a coalesce but with a conversion. I think I have identified some large performance difference with the exception handling. It so happens that with the last 3 columns ('DRAIN PRESENT', 'MEASUREMENTS TAKEN' and 'SIGNS AND SYMPTOMS OF INFECTION'), the data is VERY dirty. There is a mix of 0/1, YES/NO, and other mistyped stuff. This means these 3 columns throw lots of exceptions in the UDF. 
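(Side note, mostly thinking out loud: since the failures come from strings that simply are not numeric, the exception path could in principle be avoided altogether by validating the value before casting instead of catching the error. A rough, untested sketch -- the function name and the regex are purely illustrative, the regex only accepts plain decimal literals so something like '1e3' would still fall back to the default, and an out-of-range value could still error on the cast:\n\nCREATE OR REPLACE FUNCTION toFloatGuarded(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n  -- no EXCEPTION clause here, so plpgsql does not set up a subtransaction per call\n  IF str ~ '^ *[-+]?([0-9]+[.]?[0-9]*|[.][0-9]+) *$' THEN\n    RETURN str::real;\n  END IF;\n  RETURN val;  -- null or non-numeric input falls back to the default\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nAll the timings in this thread are against the exception-based version shown above, not this variant.)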
To illustrate, I simply break this into 2 queries.\n\n\n\nselect \"iccqa_iccassmt_fk\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n )\ngroup by 1, 2\n) T\n group by 1\n;\n\n\nThe performance is as expected.\n\n\nHashAggregate (cost=448463.70..448467.20 rows=200 width=16) (actual time=6760.797..9585.397 rows=677899 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 147489kB\n Buffers: shared hit=158815\n -> HashAggregate (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4576.514..5460.770 rows=2374628 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 368657kB\n Buffers: shared hit=158815\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.033..3298.544 rows=2374628 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 10734488\n Buffers: shared hit=158815\nPlanning:\n Buffers: shared hit=3\nPlanning Time: 0.198 ms\nExecution Time: 9678.120 ms\n\n\n\nHowever, once we switch with the three \"bad\" columns, the results fall apart.\n\n\n\nselect \"iccqa_iccassmt_fk\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)') ,null))::real as \"iccqa_DEPTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'LENGTH (CM)') ,null))::real as \"iccqa_LENGTH_CM\"\n-- , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'WIDTH (CM)') ,null))::real as \"iccqa_WIDTH_CM\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DRAIN PRESENT') ,null))::real as \"iccqa_DRAIN_PRESENT\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'MEASUREMENTS TAKEN') ,null))::real as \"iccqa_MEASUREMENTS_TAKEN\"\n , (Tilda.toFloat(MAX(\"iccqar_ans_val\") filter (where 
\"iccqar_ques_code\"= 'SIGNS AND SYMPTOMS OF INFECTION') ,null))::real as \"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\"\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\nwhere VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEPTH (CM)', 'LENGTH (CM)', 'WIDTH (CM)'\n , 'DRAIN PRESENT', 'MEASUREMENTS TAKEN', 'SIGNS AND SYMPTOMS OF INFECTION'\n )\ngroup by 1, 2\n) T\n group by 1\n;\n\n\n\nThe performance falls apart. It is a huge performance difference from ~10s to ~11mn and the only difference that I can think of is that the data is dirty which causes the exception path to be taken. The explain is:\n\n\nHashAggregate (cost=448463.70..448467.20 rows=200 width=16) (actual time=6672.921..696753.080 rows=677899 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 131105kB\n Buffers: shared hit=158815\n -> HashAggregate (cost=405997.87..417322.09 rows=1132422 width=56) (actual time=4574.918..5446.022 rows=2374628 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 368657kB\n Buffers: shared hit=158815\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..388224.53 rows=2369779 width=38) (actual time=0.032..3300.616 rows=2374628 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 10734488\n Buffers: shared hit=158815\nPlanning:\n Buffers: shared hit=3\nPlanning Time: 0.201 ms\nExecution Time: 696868.845 ms\n\n\n\nNow, on V11.2, the explain is:\n\n\nHashAggregate (cost=492171.36..492174.86 rows=200 width=16) (actual time=19322.522..50556.738 rows=743723 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Buffers: shared hit=11 read=174155 dirtied=13\n -> HashAggregate (cost=445458.43..457915.21 rows=1245678 width=56) (actual time=16260.015..17575.088 rows=2601088 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared read=174155 dirtied=13\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..425803.93 rows=2620600 width=38) (actual time=0.126..14425.239 rows=2601088 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEPTH (CM)\",\"LENGTH (CM)\",\"WIDTH (CM)\",\"DRAIN PRESENT\",\"MEASUREMENTS TAKEN\",\"SIGNS AND SYMPTOMS OF INFECTION\"}'::text[]))\n Rows Removed by Filter: 11778360\n Buffers: shared read=174155 dirtied=13\nPlanning Time: 36.121 ms\nExecution Time: 50730.255 ms\n\n\n\nSo, we are seeing two issues:\n\n- I think exception handling is significantly slower between V11.2 and v13.4. I see almost a 14x difference from 50s to 700s.\n\n- Comparing the two queries on V11.2, the difference is 13s vs 50s. So even on V11.2, the exception handling adds a significant overhead which I was not expecting.\n\nI'll be happy to update my test cases and share additional info if needed.\n\nThank you,\nLaurent Hasson.\n\n\n\n\n\n\n\n\n\n\nOK… I apologize for the long email before. 
", "msg_date": "Sat, 21 Aug 2021 08:55:37 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> OK... I apologize for the long email before. Right after I sent it, I thought of a much simpler use-case to illustrate the issue which doesn't depend on any special data I have access o and complex pivoting. It's as raw as I can make it.\n> I create a table with 1M rows and 2 columns. Column \"a\" is a random string, while column \"b\" is a random integer as a string. Then I use a UDF that converts strings to floats and handles an exception if the incoming string is not parsable as a float. Then I do a simple select of each column. In the \"a\" case, the UDF throws and catches lots of exceptions. In the \"b\" case, the conversion is clean and exceptions are not thrown.\n\nI tried this script on a few different versions and got\nthese psql-measured timings for the test queries:\n\nHEAD:\nTime: 12234.297 ms (00:12.234)\nTime: 3029.643 ms (00:03.030)\n\nv14:\nTime: 12519.038 ms (00:12.519)\nTime: 3211.315 ms (00:03.211)\n\nv13:\nTime: 12132.026 ms (00:12.132)\nTime: 3114.582 ms (00:03.115)\n\nv12:\nTime: 11787.554 ms (00:11.788)\nTime: 3520.875 ms (00:03.521)\n\nv11:\nTime: 13066.495 ms (00:13.066)\nTime: 3503.790 ms (00:03.504)\n\nv10:\nTime: 15890.844 ms (00:15.891)\nTime: 4999.843 ms (00:05.000)\n\n(Caveats: these are assert-enabled debug builds, so they're all\nslower than production builds, but the overhead should be pretty\nuniform across branches I think. Also, I wasn't trying hard to\neliminate noise, e.g. I didn't do multiple runs. So I wouldn't\ntrust these results to be reproducible to better than 10% or so.)\n\nThe overhead of an EXCEPTION block is definitely high, and more\nso when an exception actually occurs, but these are known facts\nand my results are not out of line with my expectations. Yours\nare though, so something is drastically slowing the exception-\nrecovery path in your installation. Do you have any extensions\nloaded?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Aug 2021 11:04:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "I know that 14 is a beta version but the performance is significantly \nworse than v13 (I assume it's 13.4). Head revision is better than v14 \nbut still worse than v13.  Can you expand a bit on the difference? Where \ndoes the difference come from? Are there any differences in the \nexecution plan?  
I am looking at the first query, taking slightly more \nthan 12s.\n\nRegards\n\nOn 8/21/21 11:04 AM, Tom Lane wrote:\n> HEAD:\n> Time: 12234.297 ms (00:12.234)\n> Time: 3029.643 ms (00:03.030)\n>\n> v14:\n> Time: 12519.038 ms (00:12.519)\n> Time: 3211.315 ms (00:03.211)\n>\n> v13:\n> Time: 12132.026 ms (00:12.132)\n> Time: 3114.582 ms (00:03.115)\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Sat, 21 Aug 2021 11:29:44 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Saturday, August 21, 2021 11:05\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\n\n\"[email protected]\" <[email protected]> writes:\n> OK... I apologize for the long email before. Right after I sent it, I thought of a much simpler use-case to illustrate the issue which doesn't depend on any special data I have access o and complex pivoting. It's as raw as I can make it.\n> I create a table with 1M rows and 2 columns. Column \"a\" is a random string, while column \"b\" is a random integer as a string. Then I use a UDF that converts strings to floats and handles an exception if the incoming string is not parsable as a float. Then I do a simple select of each column. In the \"a\" case, the UDF throws and catches lots of exceptions. In the \"b\" case, the conversion is clean and exceptions are not thrown.\n\nI tried this script on a few different versions and got these psql-measured timings for the test queries:\n\nHEAD:\nTime: 12234.297 ms (00:12.234)\nTime: 3029.643 ms (00:03.030)\n\nv14:\nTime: 12519.038 ms (00:12.519)\nTime: 3211.315 ms (00:03.211)\n\nv13:\nTime: 12132.026 ms (00:12.132)\nTime: 3114.582 ms (00:03.115)\n\nv12:\nTime: 11787.554 ms (00:11.788)\nTime: 3520.875 ms (00:03.521)\n\nv11:\nTime: 13066.495 ms (00:13.066)\nTime: 3503.790 ms (00:03.504)\n\nv10:\nTime: 15890.844 ms (00:15.891)\nTime: 4999.843 ms (00:05.000)\n\n(Caveats: these are assert-enabled debug builds, so they're all slower than production builds, but the overhead should be pretty uniform across branches I think. Also, I wasn't trying hard to eliminate noise, e.g. I didn't do multiple runs. So I wouldn't trust these results to be reproducible to better than 10% or so.)\n\nThe overhead of an EXCEPTION block is definitely high, and more so when an exception actually occurs, but these are known facts and my results are not out of line with my expectations. Yours are though, so something is drastically slowing the exception- recovery path in your installation. Do you have any extensions loaded?\n\n\t\t\tregards, tom lane\n\n\n------------------------------------------------------------------------------------------------------\n\nSo you mean that on average, the 4x overhead of exceptions is around what you'd expect?\n\nAs for results in general, yes, your numbers look pretty uniform across versions. On my end, comparing V11.2 vs V13.4 shows a much different picture!\n\nI have a few extensions installed: plpgsql, fuzzystrmatch, pg_trgm and tablefunc. 
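(For reference, the two listings below are just the raw catalog rows -- essentially:\n\nselect * from pg_extension;\n\nrun on each server. The 13.4 listing shows an extra oid column simply because catalog oids became ordinary visible columns as of v12.)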
Same on either versions of the db installs I have, and same extension versions.\n\nV11.2:\nextname |extowner|extnamespace|extrelocatable|extversion|extconfig|extcondition|\n-------------|--------|------------|--------------|----------|---------|------------|\nplpgsql | 10| 11|false |1.0 |NULL |NULL |\nfuzzystrmatch| 10| 2200|true |1.1 |NULL |NULL |\npg_trgm | 10| 2200|true |1.3 |NULL |NULL |\ntablefunc | 10| 2200|true |1.0 |NULL |NULL |\n\nV13.4\noid |extname |extowner|extnamespace|extrelocatable|extversion|extconfig|extcondition|\n-----|-------------|--------|------------|--------------|----------|---------|------------|\n13428|plpgsql | 10| 11|false |1.0 |NULL |NULL |\n16676|fuzzystrmatch| 10| 2200|true |1.1 |NULL |NULL |\n16677|pg_trgm | 10| 2200|true |1.4 |NULL |NULL |\n16678|tablefunc | 10| 2200|true |1.0 |NULL |NULL |\n\nThank you,\nLaurent.\n\n\n", "msg_date": "Sat, 21 Aug 2021 16:01:16 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Mladen Gogala <[email protected]> writes:\n> I know that 14 is a beta version but the performance is significantly \n> worse than v13 (I assume it's 13.4). Head revision is better than v14 \n> but still worse than v13.  Can you expand a bit on the difference?\n\n[ shrug... ] I don't see any meaningful differences between those\nnumbers --- they're within 3% or so across versions, which is less\nthan the margin of error considering I wasn't trying to control\nfor outside effects like CPU speed stepping. Microbenchmarks like\nthis one are notoriously noisy. Maybe there's some real difference\nthere, but these numbers aren't to be trusted that much.\n\nWhat I was looking for was some evidence matching Laurent's report of\nthe exception-recovery path being 500X slower than non-exception.\nThat would have been obvious even with the sloppiest of measurements\n... but I'm not seeing it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Aug 2021 14:04:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> So you mean that on average, the 4x overhead of exceptions is around what you'd expect?\n\nDoesn't surprise me any, no. Exception recovery has to clean up after\na wide variety of possible errors, with only minimal assumptions about\nwhat the system state had been. So it's expensive. More to the point,\nthe overhead's been broadly the same for quite some time.\n\n> As for results in general, yes, your numbers look pretty uniform across versions. On my end, comparing V11.2 vs V13.4 shows a much different picture!\n\nI'm baffled why that should be so. I do not think any of the extensions\nyou mention add any exception-recovery overhead, especially not in\nsessions that haven't used them.\n\nAs an additional test, I checked out 11.2 exactly, and got timings\nthat pretty much matched my previous test of v11 branch tip. 
So that\neliminates the theory that we broke something since 11.2 in a patch\nthat was also back-patched into that branch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Aug 2021 14:17:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Sat, Aug 21, 2021 at 02:17:26PM -0400, Tom Lane wrote:\n> \"[email protected]\" <[email protected]> writes:\n> > So you mean that on average, the 4x overhead of exceptions is around what you'd expect?\n> \n> Doesn't surprise me any, no. Exception recovery has to clean up after\n> a wide variety of possible errors, with only minimal assumptions about\n> what the system state had been. So it's expensive. More to the point,\n> the overhead's been broadly the same for quite some time.\n> \n> > As for results in general, yes, your numbers look pretty uniform across versions. On my end, comparing V11.2 vs V13.4 shows a much different picture!\n> \n> I'm baffled why that should be so. I do not think any of the extensions\n> you mention add any exception-recovery overhead, especially not in\n> sessions that haven't used them.\n\nLaurent, did you install binaries for v13.4 or compile it ?\n\nWhat about these ?\n\nSHOW shared_preload_libraries;\nSHOW session_preload_libraries;\nSHOW local_preload_libraries;\n\nWould you try to reproduce the issue with a fresh database:\nCREATE DATABASE udftest; ...\n\nOr a fresh instance created with initdb.\n\nAs I recall, you're running postgres under a windows VM - I'm not sure if\nthat's relevant.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Aug 2021 14:19:50 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "I happen to have a stock 13.3 and 11.12 on Ubuntu here so I thought I'd\ncontribute numbers in case it's helpful:\n\nv13.3:\nTime: 4368.413 ms (00:04.368)\nTime: 837.046 ms\n\nv11.12:\nTime: 5178.595 ms (00:05.179)\nTime: 1027.857 ms (00:01.028)\n\nSo I'm also seeing a slight improvement in 13, not a degradation.\nauto_explain and pg_stat_statements are installed in both; otherwise\nthey're pretty vanilla.\n\nI happen to have a stock 13.3 and 11.12 on Ubuntu here so I thought I'd contribute numbers in case it's helpful:v13.3:Time: 4368.413 ms (00:04.368)Time: 837.046 msv11.12:Time: 5178.595 ms (00:05.179)Time: 1027.857 ms (00:01.028)So I'm also seeing a slight improvement in 13, not a degradation. 
auto_explain and pg_stat_statements are installed in both; otherwise they're pretty vanilla.", "msg_date": "Sat, 21 Aug 2021 13:58:21 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Sat, Aug 21, 2021 at 02:19:50PM -0500, Justin Pryzby wrote:\n> As I recall, you're running postgres under a windows VM - I'm not sure if\n> that's relevant.\n\nI tried under a couple hyperv VMs but could not reproduce the issue (only an\n~8x difference \"with exceptions\").\n\nWhich hypervisor are you using ?\n\nI don't know if any of it matters, but would you also send:\n\nSELECT version();\nSELECT * FROM pg_config();\n\nAnd maybe the CPU info ?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Aug 2021 16:24:11 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Tom Lane <[email protected]>\r\n > Sent: Saturday, August 21, 2021 14:05\r\n > To: Mladen Gogala <[email protected]>\r\n > Cc: [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > Mladen Gogala <[email protected]> writes:\r\n > > I know that 14 is a beta version but the performance is significantly\r\n > > worse than v13 (I assume it's 13.4). Head revision is better than v14\r\n > > but still worse than v13.  Can you expand a bit on the difference?\r\n > \r\n > [ shrug... ] I don't see any meaningful differences between those\r\n > numbers --- they're within 3% or so across versions, which is less than\r\n > the margin of error considering I wasn't trying to control for outside\r\n > effects like CPU speed stepping. Microbenchmarks like this one are\r\n > notoriously noisy. Maybe there's some real difference there, but these\r\n > numbers aren't to be trusted that much.\r\n > \r\n > What I was looking for was some evidence matching Laurent's report of\r\n > the exception-recovery path being 500X slower than non-exception.\r\n > That would have been obvious even with the sloppiest of measurements\r\n > ... but I'm not seeing it.\r\n > \r\n > \t\t\tregards, tom lane\r\n > \r\n\r\nHello Tom,\r\n\r\nThe difference for the Exceptions-scenario between V11.2 and V13.4 that I observed was 30x.\r\nIt is the difference on V13.4 between the Exceptions and no-exceptions scenarios that is 500x+.\r\n\r\nJust to clarify.\r\n\r\nI am following up with Justin's suggestions and will respond with updated info soon.\r\n\r\nThank you!\r\nLaurent Hasson.\r\n", "msg_date": "Sat, 21 Aug 2021 21:48:45 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Justin Pryzby <[email protected]>\n > Sent: Saturday, August 21, 2021 15:20\n > To: Tom Lane <[email protected]>\n > Cc: [email protected]; [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > On Sat, Aug 21, 2021 at 02:17:26PM -0400, Tom Lane wrote:\n > > \"[email protected]\" <[email protected]> writes:\n > > > So you mean that on average, the 4x overhead of exceptions is\n > around what you'd expect?\n > >\n > > Doesn't surprise me any, no. 
Exception recovery has to clean up after\n > > a wide variety of possible errors, with only minimal assumptions about\n > > what the system state had been. So it's expensive. More to the\n > > point, the overhead's been broadly the same for quite some time.\n > >\n > > > As for results in general, yes, your numbers look pretty uniform\n > across versions. On my end, comparing V11.2 vs V13.4 shows a much\n > different picture!\n > >\n > > I'm baffled why that should be so. I do not think any of the\n > > extensions you mention add any exception-recovery overhead,\n > especially\n > > not in sessions that haven't used them.\n > \n > Laurent, did you install binaries for v13.4 or compile it ?\n > \n > What about these ?\n > \n > SHOW shared_preload_libraries;\n > SHOW session_preload_libraries;\n > SHOW local_preload_libraries;\n > \n > Would you try to reproduce the issue with a fresh database:\n > CREATE DATABASE udftest; ...\n > \n > Or a fresh instance created with initdb.\n > \n > As I recall, you're running postgres under a windows VM - I'm not sure if\n > that's relevant.\n > \n > --\n > Justin\n\nHello Justin,\n\n- I used the standard installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads for Windows X64 and upgraded from 13.3, which itself was pg_upgraded from 11.2.\n- Yes, windows VM on VMWARE.\n- No entries from shared_preload_libraries, session_preload_libraries or local_preload_libraries.\n- Version is \"PostgreSQL 13.4, compiled by Visual C++ build 1914, 64-bit\".\n- I created a new database and reran the scenarios without much of a change.\n- I think I am going to install a whole fresh new instance from scratch and see if there may have been some weird stuff happening with the upgrade path I took?\n\nThank you,\nLaurent Hasson.\n\n\n\n\n\n", "msg_date": "Sat, 21 Aug 2021 21:56:52 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Could you send SELECT * FROM pg_config()\nand try to find the CPU model ?\n\nI think it's possible the hypervisor is trapping and emulating unhandled CPU\ninstructions.\n\nActually, it would be interesting to see if the performance differs between\n11.2 and 11.13. It's possible that EDB compiled 11.13 on a newer CPU (or a\nnewer compiler) than 11.2 was compiled.\n\nIf you test that, it should be on a separate VM, unless the existing data dir\ncan be restored from backup. Once you've started a cluster with updated\nbinaries, you should avoid downgrading the binaries.\n\n\n", "msg_date": "Sat, 21 Aug 2021 17:17:29 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Justin Pryzby <[email protected]>\r\n > Sent: Saturday, August 21, 2021 18:17\r\n > To: [email protected]\r\n > Cc: Tom Lane <[email protected]>; [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > Could you send SELECT * FROM pg_config() and try to find the CPU\r\n > model ?\r\n > \r\n > I think it's possible the hypervisor is trapping and emulating unhandled\r\n > CPU instructions.\r\n > \r\n > Actually, it would be interesting to see if the performance differs\r\n > between\r\n > 11.2 and 11.13. 
It's possible that EDB compiled 11.13 on a newer CPU\r\n > (or a newer compiler) than 11.2 was compiled.\r\n > \r\n > If you test that, it should be on a separate VM, unless the existing data\r\n > dir can be restored from backup. Once you've started a cluster with\r\n > updated binaries, you should avoid downgrading the binaries.\r\n\r\n\r\n\r\nHello all,\r\n\r\nOK, I was able to do a clean install of 13.4 on the VM. All stock settings, no extensions loaded, pure clean straight out of the install.\r\n\r\ncreate table sampletest (a varchar, b varchar);\r\n-- truncate table sampletest;\r\ninsert into sampletest (a, b)\r\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\r\n from generate_series(1,1000000);\r\n\r\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\r\nRETURNS real AS $$\r\nBEGIN\r\n RETURN case when str is null then val else str::real end;\r\nEXCEPTION WHEN OTHERS THEN\r\n RETURN val;\r\nEND;\r\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as \"b\" from sampletest\r\n\r\nAggregate (cost=21370.00..21370.01 rows=1 width=4) (actual time=1780.561..1780.563 rows=1 loops=1)\r\n Buffers: shared hit=6387\r\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=8) (actual time=0.053..97.329 rows=1000000 loops=1)\r\n Buffers: shared hit=6370\r\nPlanning:\r\n Buffers: shared hit=36\r\nPlanning Time: 2.548 ms\r\nExecution Time: 1,810.330 ms\r\n\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as \"a\" from sampletest\r\n\r\nAggregate (cost=21370.00..21370.01 rows=1 width=4) (actual time=863243.876..863243.877 rows=1 loops=1)\r\n Buffers: shared hit=6373\r\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=15) (actual time=0.009..301.553 rows=1000000 loops=1)\r\n Buffers: shared hit=6370\r\nPlanning:\r\n Buffers: shared hit=44\r\nPlanning Time: 0.469 ms\r\nExecution Time: 863,243.911 ms\r\n\r\n\r\nSo I am still able to reproduce this on a different VM and a clean install of 13.4 ☹\r\n\r\n\r\nSELECT * FROM pg_config();\r\n\r\nBINDIR\tC:/PROGRA~1/POSTGR~1/13/bin\r\nDOCDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\nHTMLDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\nINCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\nPKGINCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\nINCLUDEDIR-SERVER\tC:/PROGRA~1/POSTGR~1/13/include/server\r\nLIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\nPKGLIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\nLOCALEDIR\tC:/PROGRA~1/POSTGR~1/13/share/locale\r\nMANDIR\tC:/Program Files/PostgreSQL/13/man\r\nSHAREDIR\tC:/PROGRA~1/POSTGR~1/13/share\r\nSYSCONFDIR\tC:/Program Files/PostgreSQL/13/etc\r\nPGXS\tC:/Program Files/PostgreSQL/13/lib/pgxs/src/makefiles/pgxs.mk\r\nCONFIGURE\t--enable-thread-safety --enable-nls --with-ldap --with-openssl --with-uuid --with-libxml --with-libxslt --with-icu --with-tcl --with-perl --with-python\r\nCC\tnot recorded\r\nCPPFLAGS\tnot recorded\r\nCFLAGS\tnot recorded\r\nCFLAGS_SL\tnot recorded\r\nLDFLAGS\tnot recorded\r\nLDFLAGS_EX\tnot recorded\r\nLDFLAGS_SL\tnot recorded\r\nLIBS\tnot recorded\r\nVERSION\tPostgreSQL 13.4\r\n\r\n\r\nAnd here is SYSINFO:\r\n\r\nC:\\Users\\LHASSON>systeminfo\r\n\r\nHost Name: PRODDB\r\nOS Name: Microsoft Windows Server 2012 R2 Standard\r\nOS Version: 6.3.9600 N/A Build 9600\r\nOS Manufacturer: Microsoft Corporation\r\nOS Configuration: Member Server\r\nOS Build Type: Multiprocessor Free\r\nOriginal Install Date: 2015-09-19, 18:19:41\r\nSystem Boot Time: 2021-07-22, 11:45:09\r\nSystem Manufacturer: 
VMware, Inc.\r\nSystem Model: VMware Virtual Platform\r\nSystem Type: x64-based PC\r\nProcessor(s): 4 Processor(s) Installed.\r\n [01]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel ~2397 Mhz\r\n [02]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel ~2397 Mhz\r\n [03]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel ~2397 Mhz\r\n [04]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel ~2397 Mhz\r\nBIOS Version: Phoenix Technologies LTD 6.00, 2020-05-28\r\nWindows Directory: C:\\Windows\r\nSystem Directory: C:\\Windows\\system32\r\nBoot Device: \\Device\\HarddiskVolume1\r\nSystem Locale: en-us;English (United States)\r\nInput Locale: en-us;English (United States)\r\nTime Zone: (UTC-05:00) Eastern Time (US & Canada)\r\nTotal Physical Memory: 65,535 MB\r\nAvailable Physical Memory: 57,791 MB\r\nVirtual Memory: Max Size: 75,263 MB\r\nVirtual Memory: Available: 66,956 MB\r\nVirtual Memory: In Use: 8,307 MB\r\nPage File Location(s): C:\\pagefile.sys\r\n\r\n\r\n", "msg_date": "Sat, 21 Aug 2021 23:01:52 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: [email protected] <[email protected]>\r\n > Sent: Saturday, August 21, 2021 19:02\r\n > To: Justin Pryzby <[email protected]>\r\n > Cc: Tom Lane <[email protected]>; [email protected]\r\n > Subject: RE: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > \r\n > > -----Original Message-----\r\n > > From: Justin Pryzby <[email protected]>\r\n > > Sent: Saturday, August 21, 2021 18:17\r\n > > To: [email protected]\r\n > > Cc: Tom Lane <[email protected]>; pgsql-\r\n > [email protected]\r\n > > Subject: Re: Big Performance drop of Exceptions in UDFs between\r\n > V11.2\r\n > > and 13.4\r\n > >\r\n > > Could you send SELECT * FROM pg_config() and try to find the CPU\r\n > > model ?\r\n > >\r\n > > I think it's possible the hypervisor is trapping and emulating\r\n > unhandled\r\n > > CPU instructions.\r\n > >\r\n > > Actually, it would be interesting to see if the performance differs\r\n > > between\r\n > > 11.2 and 11.13. It's possible that EDB compiled 11.13 on a newer\r\n > CPU\r\n > > (or a newer compiler) than 11.2 was compiled.\r\n > >\r\n > > If you test that, it should be on a separate VM, unless the existing\r\n > data\r\n > > dir can be restored from backup. Once you've started a cluster with\r\n > > updated binaries, you should avoid downgrading the binaries.\r\n > \r\n > \r\n > \r\n > Hello all,\r\n > \r\n > OK, I was able to do a clean install of 13.4 on the VM. 
All stock settings,\r\n > no extensions loaded, pure clean straight out of the install.\r\n > \r\n > create table sampletest (a varchar, b varchar);\r\n > -- truncate table sampletest;\r\n > insert into sampletest (a, b)\r\n > select substr(md5(random()::text), 0, 15),\r\n > (100000000*random())::integer::varchar\r\n > from generate_series(1,1000000);\r\n > \r\n > CREATE OR REPLACE FUNCTION toFloat(str varchar, val real) RETURNS\r\n > real AS $$ BEGIN\r\n > RETURN case when str is null then val else str::real end; EXCEPTION\r\n > WHEN OTHERS THEN\r\n > RETURN val;\r\n > END;\r\n > $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n > \r\n > \r\n > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as\r\n > \"b\" from sampletest\r\n > \r\n > Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\r\n > time=1780.561..1780.563 rows=1 loops=1)\r\n > Buffers: shared hit=6387\r\n > -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000\r\n > width=8) (actual time=0.053..97.329 rows=1000000 loops=1)\r\n > Buffers: shared hit=6370\r\n > Planning:\r\n > Buffers: shared hit=36\r\n > Planning Time: 2.548 ms\r\n > Execution Time: 1,810.330 ms\r\n > \r\n > \r\n > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as\r\n > \"a\" from sampletest\r\n > \r\n > Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\r\n > time=863243.876..863243.877 rows=1 loops=1)\r\n > Buffers: shared hit=6373\r\n > -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000\r\n > width=15) (actual time=0.009..301.553 rows=1000000 loops=1)\r\n > Buffers: shared hit=6370\r\n > Planning:\r\n > Buffers: shared hit=44\r\n > Planning Time: 0.469 ms\r\n > Execution Time: 863,243.911 ms\r\n > \r\n > \r\n > So I am still able to reproduce this on a different VM and a clean install\r\n > of 13.4 ☹\r\n > \r\n > \r\n > SELECT * FROM pg_config();\r\n > \r\n > BINDIR\tC:/PROGRA~1/POSTGR~1/13/bin\r\n > DOCDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\n > HTMLDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\n > INCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\n > PKGINCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\n > INCLUDEDIR-SERVER\tC:/PROGRA~1/POSTGR~1/13/include/server\r\n > LIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\n > PKGLIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\n > LOCALEDIR\tC:/PROGRA~1/POSTGR~1/13/share/locale\r\n > MANDIR\tC:/Program Files/PostgreSQL/13/man\r\n > SHAREDIR\tC:/PROGRA~1/POSTGR~1/13/share\r\n > SYSCONFDIR\tC:/Program Files/PostgreSQL/13/etc\r\n > PGXS\tC:/Program Files/PostgreSQL/13/lib/pgxs/src/makefiles/pgxs.mk\r\n > CONFIGURE\t--enable-thread-safety --enable-nls --with-ldap --with-\r\n > openssl --with-uuid --with-libxml --with-libxslt --with-icu --with-tcl --with-\r\n > perl --with-python\r\n > CC\tnot recorded\r\n > CPPFLAGS\tnot recorded\r\n > CFLAGS\tnot recorded\r\n > CFLAGS_SL\tnot recorded\r\n > LDFLAGS\tnot recorded\r\n > LDFLAGS_EX\tnot recorded\r\n > LDFLAGS_SL\tnot recorded\r\n > LIBS\tnot recorded\r\n > VERSION\tPostgreSQL 13.4\r\n > \r\n > \r\n > And here is SYSINFO:\r\n > \r\n > C:\\Users\\LHASSON>systeminfo\r\n > \r\n > Host Name: PRODDB\r\n > OS Name: Microsoft Windows Server 2012 R2 Standard\r\n > OS Version: 6.3.9600 N/A Build 9600\r\n > OS Manufacturer: Microsoft Corporation\r\n > OS Configuration: Member Server\r\n > OS Build Type: Multiprocessor Free\r\n > Original Install Date: 2015-09-19, 18:19:41\r\n > System Boot Time: 2021-07-22, 11:45:09\r\n > System Manufacturer: VMware, Inc.\r\n > System Model: VMware Virtual Platform\r\n > System Type: x64-based PC\r\n > Processor(s): 4 
Processor(s) Installed.\r\n > [01]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel\r\n > ~2397 Mhz\r\n > [02]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel\r\n > ~2397 Mhz\r\n > [03]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel\r\n > ~2397 Mhz\r\n > [04]: Intel64 Family 6 Model 79 Stepping 1 GenuineIntel\r\n > ~2397 Mhz\r\n > BIOS Version: Phoenix Technologies LTD 6.00, 2020-05-28\r\n > Windows Directory: C:\\Windows\r\n > System Directory: C:\\Windows\\system32\r\n > Boot Device: \\Device\\HarddiskVolume1\r\n > System Locale: en-us;English (United States)\r\n > Input Locale: en-us;English (United States)\r\n > Time Zone: (UTC-05:00) Eastern Time (US & Canada)\r\n > Total Physical Memory: 65,535 MB\r\n > Available Physical Memory: 57,791 MB\r\n > Virtual Memory: Max Size: 75,263 MB\r\n > Virtual Memory: Available: 66,956 MB\r\n > Virtual Memory: In Use: 8,307 MB\r\n > Page File Location(s): C:\\pagefile.sys\r\n > \r\n\r\n\r\nAnd by the way, I reproduced this again on my personal laptop with a fresh clean base-line install of 13.4.\r\n\r\nSysteminfo\r\n-------------------\r\nOS Name: Microsoft Windows 10 Pro\r\nOS Version: 10.0.19043 N/A Build 19043\r\nOS Manufacturer: Microsoft Corporation\r\nOS Configuration: Standalone Workstation\r\nOS Build Type: Multiprocessor Free\r\nRegistered Owner: Windows User\r\nRegistered Organization:\r\nProduct ID: 00330-50535-98614-AAOEM\r\nOriginal Install Date: 2021-04-04, 09:50:59\r\nSystem Boot Time: 2021-08-19, 10:18:03\r\nSystem Manufacturer: LENOVO\r\nSystem Model: 20HRCTO1WW\r\nSystem Type: x64-based PC\r\nProcessor(s): 1 Processor(s) Installed.\r\n [01]: Intel64 Family 6 Model 142 Stepping 9 GenuineIntel ~801 Mhz\r\nBIOS Version: LENOVO N1MET64W (1.49 ), 2020-10-14\r\nWindows Directory: C:\\WINDOWS\r\nSystem Directory: C:\\WINDOWS\\system32\r\nBoot Device: \\Device\\HarddiskVolume1\r\nSystem Locale: en-us;English (United States)\r\nInput Locale: en-us;English (United States)\r\nTime Zone: (UTC-05:00) Eastern Time (US & Canada)\r\nTotal Physical Memory: 16,219 MB\r\nAvailable Physical Memory: 4,971 MB\r\nVirtual Memory: Max Size: 32,603 MB\r\nVirtual Memory: Available: 12,168 MB\r\nVirtual Memory: In Use: 20,435 MB\r\nPage File Location(s): C:\\pagefile.sys\r\n\r\n\r\nSELECT * FROM pg_config();\r\n--------------------------------------------\r\nBINDIR\tC:/PROGRA~1/POSTGR~1/13/bin\r\nDOCDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\nHTMLDIR\tC:/PROGRA~1/POSTGR~1/13/doc\r\nINCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\nPKGINCLUDEDIR\tC:/PROGRA~1/POSTGR~1/13/include\r\nINCLUDEDIR-SERVER\tC:/PROGRA~1/POSTGR~1/13/include/server\r\nLIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\nPKGLIBDIR\tC:/PROGRA~1/POSTGR~1/13/lib\r\nLOCALEDIR\tC:/PROGRA~1/POSTGR~1/13/share/locale\r\nMANDIR\tC:/Program Files/PostgreSQL/13/man\r\nSHAREDIR\tC:/PROGRA~1/POSTGR~1/13/share\r\nSYSCONFDIR\tC:/Program Files/PostgreSQL/13/etc\r\nPGXS\tC:/Program Files/PostgreSQL/13/lib/pgxs/src/makefiles/pgxs.mk\r\nCONFIGURE\t--enable-thread-safety --enable-nls --with-ldap --with-openssl --with-uuid --with-libxml --with-libxslt --with-icu --with-tcl --with-perl --with-python\r\nCC\tnot recorded\r\nCPPFLAGS\tnot recorded\r\nCFLAGS\tnot recorded\r\nCFLAGS_SL\tnot recorded\r\nLDFLAGS\tnot recorded\r\nLDFLAGS_EX\tnot recorded\r\nLDFLAGS_SL\tnot recorded\r\nLIBS\tnot recorded\r\nVERSION\tPostgreSQL 13.4\r\n\r\n", "msg_date": "Sun, 22 Aug 2021 00:14:53 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between 
V11.2 and 13.4" }, { "msg_contents": "Em sáb., 21 de ago. de 2021 às 21:15, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> > -----Original Message-----\n> > From: [email protected] <[email protected]>\n> > Sent: Saturday, August 21, 2021 19:02\n> > To: Justin Pryzby <[email protected]>\n> > Cc: Tom Lane <[email protected]>; [email protected]\n> > Subject: RE: Big Performance drop of Exceptions in UDFs between V11.2\n> > and 13.4\n> >\n> >\n> >\n> > > -----Original Message-----\n> > > From: Justin Pryzby <[email protected]>\n> > > Sent: Saturday, August 21, 2021 18:17\n> > > To: [email protected]\n> > > Cc: Tom Lane <[email protected]>; pgsql-\n> > [email protected]\n> > > Subject: Re: Big Performance drop of Exceptions in UDFs between\n> > V11.2\n> > > and 13.4\n> > >\n> > > Could you send SELECT * FROM pg_config() and try to find the\n> CPU\n> > > model ?\n> > >\n> > > I think it's possible the hypervisor is trapping and emulating\n> > unhandled\n> > > CPU instructions.\n> > >\n> > > Actually, it would be interesting to see if the performance\n> differs\n> > > between\n> > > 11.2 and 11.13. It's possible that EDB compiled 11.13 on a\n> newer\n> > CPU\n> > > (or a newer compiler) than 11.2 was compiled.\n> > >\n> > > If you test that, it should be on a separate VM, unless the\n> existing\n> > data\n> > > dir can be restored from backup. Once you've started a\n> cluster with\n> > > updated binaries, you should avoid downgrading the binaries.\n> >\n> >\n> >\n> > Hello all,\n> >\n> > OK, I was able to do a clean install of 13.4 on the VM. All stock\n> settings,\n> > no extensions loaded, pure clean straight out of the install.\n> >\n> > create table sampletest (a varchar, b varchar);\n> > -- truncate table sampletest;\n> > insert into sampletest (a, b)\n> > select substr(md5(random()::text), 0, 15),\n> > (100000000*random())::integer::varchar\n> > from generate_series(1,1000000);\n> >\n> > CREATE OR REPLACE FUNCTION toFloat(str varchar, val real) RETURNS\n> > real AS $$ BEGIN\n> > RETURN case when str is null then val else str::real end; EXCEPTION\n> > WHEN OTHERS THEN\n> > RETURN val;\n> > END;\n> > $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n> >\n> >\n> > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null))\n> as\n> > \"b\" from sampletest\n> >\n> > Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\n> > time=1780.561..1780.563 rows=1 loops=1)\n> > Buffers: shared hit=6387\n> > -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000\n> > width=8) (actual time=0.053..97.329 rows=1000000 loops=1)\n> > Buffers: shared hit=6370\n> > Planning:\n> > Buffers: shared hit=36\n> > Planning Time: 2.548 ms\n> > Execution Time: 1,810.330 ms\n> >\n> >\n> > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null))\n> as\n> > \"a\" from sampletest\n> >\n> > Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\n> > time=863243.876..863243.877 rows=1 loops=1)\n> > Buffers: shared hit=6373\n> > -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000\n> > width=15) (actual time=0.009..301.553 rows=1000000 loops=1)\n> > Buffers: shared hit=6370\n> > Planning:\n> > Buffers: shared hit=44\n> > Planning Time: 0.469 ms\n> > Execution Time: 863,243.911 ms\n> >\n> >\n> > So I am still able to reproduce this on a different VM and a clean\n> install\n> > of 13.4 ☹\n> >\n> >\n> > SELECT * FROM pg_config();\n> >\n> > BINDIR C:/PROGRA~1/POSTGR~1/13/bin\n> > DOCDIR C:/PROGRA~1/POSTGR~1/13/doc\n> > HTMLDIR C:/PROGRA~1/POSTGR~1/13/doc\n> > INCLUDEDIR 
C:/PROGRA~1/POSTGR~1/13/include\n> > PKGINCLUDEDIR C:/PROGRA~1/POSTGR~1/13/include\n> > INCLUDEDIR-SERVER C:/PROGRA~1/POSTGR~1/13/include/server\n> > LIBDIR C:/PROGRA~1/POSTGR~1/13/lib\n> > PKGLIBDIR C:/PROGRA~1/POSTGR~1/13/lib\n> > LOCALEDIR C:/PROGRA~1/POSTGR~1/13/share/locale\n> > MANDIR C:/Program Files/PostgreSQL/13/man\n> > SHAREDIR C:/PROGRA~1/POSTGR~1/13/share\n> > SYSCONFDIR C:/Program Files/PostgreSQL/13/etc\n> > PGXS C:/Program Files/PostgreSQL/13/lib/pgxs/src/makefiles/\n> pgxs.mk\n> > CONFIGURE --enable-thread-safety --enable-nls --with-ldap --with-\n> > openssl --with-uuid --with-libxml --with-libxslt --with-icu\n> --with-tcl --with-\n> > perl --with-python\n> > CC not recorded\n> > CPPFLAGS not recorded\n> > CFLAGS not recorded\n> > CFLAGS_SL not recorded\n> > LDFLAGS not recorded\n> > LDFLAGS_EX not recorded\n> > LDFLAGS_SL not recorded\n> > LIBS not recorded\n> > VERSION PostgreSQL 13.4\n> >\n> >\n> > And here is SYSINFO:\n> >\n> > C:\\Users\\LHASSON>systeminfo\n> >\n> > Host Name: PRODDB\n> > OS Name: Microsoft Windows Server 2012 R2 Standard\n> > OS Version: 6.3.9600 N/A Build 9600\n> > OS Manufacturer: Microsoft Corporation\n> > OS Configuration: Member Server\n> > OS Build Type: Multiprocessor Free\n> > Original Install Date: 2015-09-19, 18:19:41\n> > System Boot Time: 2021-07-22, 11:45:09\n> > System Manufacturer: VMware, Inc.\n> > System Model: VMware Virtual Platform\n> > System Type: x64-based PC\n> > Processor(s): 4 Processor(s) Installed.\n> > [01]: Intel64 Family 6 Model 79 Stepping\n> 1 GenuineIntel\n> > ~2397 Mhz\n> > [02]: Intel64 Family 6 Model 79 Stepping\n> 1 GenuineIntel\n> > ~2397 Mhz\n> > [03]: Intel64 Family 6 Model 79 Stepping\n> 1 GenuineIntel\n> > ~2397 Mhz\n> > [04]: Intel64 Family 6 Model 79 Stepping\n> 1 GenuineIntel\n> > ~2397 Mhz\n> > BIOS Version: Phoenix Technologies LTD 6.00, 2020-05-28\n> > Windows Directory: C:\\Windows\n> > System Directory: C:\\Windows\\system32\n> > Boot Device: \\Device\\HarddiskVolume1\n> > System Locale: en-us;English (United States)\n> > Input Locale: en-us;English (United States)\n> > Time Zone: (UTC-05:00) Eastern Time (US & Canada)\n> > Total Physical Memory: 65,535 MB\n> > Available Physical Memory: 57,791 MB\n> > Virtual Memory: Max Size: 75,263 MB\n> > Virtual Memory: Available: 66,956 MB\n> > Virtual Memory: In Use: 8,307 MB\n> > Page File Location(s): C:\\pagefile.sys\n> >\n>\n>\n> And by the way, I reproduced this again on my personal laptop with a fresh\n> clean base-line install of 13.4.\n>\n> Systeminfo\n> -------------------\n> OS Name: Microsoft Windows 10 Pro\n> OS Version: 10.0.19043 N/A Build 19043\n> OS Manufacturer: Microsoft Corporation\n> OS Configuration: Standalone Workstation\n> OS Build Type: Multiprocessor Free\n> Registered Owner: Windows User\n> Registered Organization:\n> Product ID: 00330-50535-98614-AAOEM\n> Original Install Date: 2021-04-04, 09:50:59\n> System Boot Time: 2021-08-19, 10:18:03\n> System Manufacturer: LENOVO\n> System Model: 20HRCTO1WW\n> System Type: x64-based PC\n> Processor(s): 1 Processor(s) Installed.\n> [01]: Intel64 Family 6 Model 142 Stepping 9\n> GenuineIntel ~801 Mhz\n> BIOS Version: LENOVO N1MET64W (1.49 ), 2020-10-14\n> Windows Directory: C:\\WINDOWS\n> System Directory: C:\\WINDOWS\\system32\n> Boot Device: \\Device\\HarddiskVolume1\n> System Locale: en-us;English (United States)\n> Input Locale: en-us;English (United States)\n> Time Zone: (UTC-05:00) Eastern Time (US & Canada)\n> Total Physical Memory: 16,219 MB\n> Available Physical Memory: 4,971 MB\n> 
Virtual Memory: Max Size: 32,603 MB\n> Virtual Memory: Available: 12,168 MB\n> Virtual Memory: In Use: 20,435 MB\n> Page File Location(s): C:\\pagefile.sys\n>\n>\n> SELECT * FROM pg_config();\n> --------------------------------------------\n> BINDIR C:/PROGRA~1/POSTGR~1/13/bin\n> DOCDIR C:/PROGRA~1/POSTGR~1/13/doc\n> HTMLDIR C:/PROGRA~1/POSTGR~1/13/doc\n> INCLUDEDIR C:/PROGRA~1/POSTGR~1/13/include\n> PKGINCLUDEDIR C:/PROGRA~1/POSTGR~1/13/include\n> INCLUDEDIR-SERVER C:/PROGRA~1/POSTGR~1/13/include/server\n> LIBDIR C:/PROGRA~1/POSTGR~1/13/lib\n> PKGLIBDIR C:/PROGRA~1/POSTGR~1/13/lib\n> LOCALEDIR C:/PROGRA~1/POSTGR~1/13/share/locale\n> MANDIR C:/Program Files/PostgreSQL/13/man\n> SHAREDIR C:/PROGRA~1/POSTGR~1/13/share\n> SYSCONFDIR C:/Program Files/PostgreSQL/13/etc\n> PGXS C:/Program Files/PostgreSQL/13/lib/pgxs/src/makefiles/pgxs.mk\n> CONFIGURE --enable-thread-safety --enable-nls --with-ldap\n> --with-openssl --with-uuid --with-libxml --with-libxslt --with-icu\n> --with-tcl --with-perl --with-python\n> CC not recorded\n> CPPFLAGS not recorded\n> CFLAGS not recorded\n> CFLAGS_SL not recorded\n> LDFLAGS not recorded\n> LDFLAGS_EX not recorded\n> LDFLAGS_SL not recorded\n> LIBS not recorded\n> VERSION PostgreSQL 13.4\n>\n> Tried to check this with Very Sleepy at Windows 10 (bare metal).\nNot sure it can help if someone can guide how to test this better?\n\nPostgres (head)\nDebug build with msvc 2019 64 bits.\n\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as \"a\"\nfrom sampletest;\n\n1. Postgres (head) with normal startup:\npostgres=# explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a,\nnull)) as \"a\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\ntime=103064.061..103064.062 rows=1 loops=1)\n Buffers: shared hit=6370\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=15)\n(actual time=0.037..1253.552 rows=1000000 loops=1)\n Buffers: shared hit=6370\n Planning Time: 0.252 ms\n Execution Time: 103064.136 ms\n(6 rows)\n\nFiles:\npostgres.png (print screen from Very Sleepy)\npostgres.csv\npostgres.capture\n\n2. 
Postgres (head) with --single startup:\nbackend> explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a,\nnull)) as \"a\" from sampletest;\n 1: QUERY PLAN (typeid = 25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \"Aggregate (cost=21370.00..21370.01 rows=1\nwidth=4) (actual time=61820.815..61820.816 rows=1 loops=1)\" (typeid\n= 25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \" Buffers: shared hit=11 read=6379\" (typeid =\n25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \" -> Seq Scan on sampletest\n (cost=0.00..16370.00 rows=1000000 width=15) (actual time=0.113..1607.444\nrows=1000000 loops=1)\" (typeid = 25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \" Buffers: shared read=6370\" (typeid =\n25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \"Planning:\" (typeid = 25, len = -1, typmod =\n-1, byval = f)\n ----\n 1: QUERY PLAN = \" Buffers: shared hit=51 read=24\" (typeid =\n25, len = -1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \"Planning Time: 21.647 ms\" (typeid = 25, len =\n-1, typmod = -1, byval = f)\n ----\n 1: QUERY PLAN = \"Execution Time: 61835.470 ms\" (typeid = 25, len =\n-1, typmod = -1, byval = f)\n\npostgres_single.png (print screen from Very Sleepy)\n\nAttached some files with results.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 22 Aug 2021 10:50:47 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Sun, Aug 22, 2021 at 10:50:47AM -0300, Ranier Vilela wrote:\n> > Tried to check this with Very Sleepy at Windows 10 (bare metal).\n> > Not sure it can help if someone can guide how to test this better?\n\n> explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as \"a\" from sampletest;\n\nYour 100sec result *seems* to reproduce the problem, but it'd be more clear if\nyou showed the results of both queries (toFloat(a) vs toFloat(b)).\nLaurent's queries took 800sec vs 2sec.\n\n> postgres.png (print screen from Very Sleepy)\n> postgres.csv\n\nThis looks useful, thanks. It seems like maybe win64 builds are very slow\nrunning this:\n\nexec_stmt_block() /\nBeginInternalSubTransaction() /\nAbortSubTransaction() /\nreschedule_timeouts() /\nschedule_alarm() / \nsetitimer() /\npg_timer_thread() /\nWaitForSingleObjectEx () \n\nWe should confirm whether there's a dramatic regression caused by postgres\nsource code (and not by compilation environment or windows version changes).\nTest if there's a dramatic difference between v11 and v12, or v12 and v13.\nTo be clear, the ~4x difference in v11 between Laurent's \"exceptional\" and\n\"nonexceptional\" cases is expected. But the 400x difference in v13 is not.\n\nIf it's due to a change in postgres source code, we should find what commit\ncaused the regression.\n\nFirst, check if v12 is affected. Right now, we know that v11.2 is ok and v13.4\nis not ok. 
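\n\n(A minimal way to check that on each build -- just a sketch reusing the sampletest table and toFloat() function from earlier in the thread:\n\nSELECT version();\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) from sampletest; -- exception-raising column\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) from sampletest; -- non-exception column\n\nand compare the two execution times.)\n\n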
Then (unless someone has a hunch where to look), you could use git\nbisect to find the culprit commit.\n\nGit log shows 85 commits affecting those files across the 2 branches - once we\ndetermine whether v12 is affected, that alone eliminates a significant fraction of\nthe commits to be checked.\n\ngit log --oneline --cherry-pick origin/REL_11_STABLE...origin/REL_13_STABLE src/backend/access/transam/xact.c src/backend/port/win32/timer.c src/backend/utils/misc/timeout.c src/pl/plpgsql/src/pl_exec.c\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Aug 2021 10:47:58 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> This looks useful, thanks. It seems like maybe win64 builds are very slow\n> running this:\n\n> exec_stmt_block() /\n> BeginInternalSubTransaction() /\n> AbortSubTransaction() /\n> reschedule_timeouts() /\n> schedule_alarm() / \n> setitimer() /\n> pg_timer_thread() /\n> WaitForSingleObjectEx () \n\nHmm ... we should not be there unless there are active timeout events,\nwhich there aren't by default. I wonder whether either Ranier or\nLaurent have statement_timeout or some similar option enabled.\n\nI tried setting statement_timeout = '1 min' just to see if that\nwould affect the results. It does, but only incrementally on\nmy Linux box (on v13, the exception-causing query slows from\n~13sec to ~14sec). It's possible that our Windows version of\nsetitimer() is far slower, but that doesn't make a lot of\nsense really --- the client side of that just briefly takes\na critical section. It shouldn't be blocking.\n\nAlso, the Windows version (src/backend/port/win32/timer.c)\nhasn't changed at all since before v11. So even if it's\nslow, that doesn't tell us what changed.\n\nThere is a patch in v14 (09cf1d522) that drastically reduces\nthe rate at which we make setitimer() calls, which would likely\nbe enough to fix any performance problem that may exist here.\nBut it's still unclear what's different between v11 and v13.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Aug 2021 13:50:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Justin Pryzby <[email protected]>\n > Sent: Sunday, August 22, 2021 11:48\n > To: Ranier Vilela <[email protected]>\n > Cc: [email protected]; Tom Lane <[email protected]>; pgsql-\n > [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > On Sun, Aug 22, 2021 at 10:50:47AM -0300, Ranier Vilela wrote:\n > > > Tried to check this with Very Sleepy at Windows 10 (bare metal).\n > > > Not sure it can help if someone can guide how to test this better?\n > \n > > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as\n > > \"a\" from sampletest;\n > \n > Your 100sec result *seems* to reproduce the problem, but it'd be more\n > clear if you showed the results of both queries (toFloat(a) vs toFloat(b)).\n > Laurent's queries took 800sec vs 2sec.\n > \n > > postgres.png (print screen from Very Sleepy) postgres.csv\n > \n > This looks useful, thanks. 
It seems like maybe win64 builds are very slow\n > running this:\n > \n > exec_stmt_block() /\n > BeginInternalSubTransaction() /\n > AbortSubTransaction() /\n > reschedule_timeouts() /\n > schedule_alarm() /\n > setitimer() /\n > pg_timer_thread() /\n > WaitForSingleObjectEx ()\n > \n > We should confirm whether there's a dramatic regression caused by\n > postgres source code (and not by compilation environment or windows\n > version changes).\n > Test if there's a dramatic difference between v11 and v12, or v12 and\n > v13.\n > To be clear, the ~4x difference in v11 between Laurent's \"exceptional\"\n > and \"nonexceptional\" cases is expected. But the 400x difference in v13\n > is not.\n > \n > If it's due to a change in postgres source code, we should find what\n > commit caused the regression.\n > \n > First, check if v12 is affected. Right now, we know that v11.2 is ok and\n > v13.4 is not ok. Then (unless someone has a hunch where to look), you\n > could use git bisect to find the culprit commit.\n > \n > Git log shows 85 commits affecting those files across the 2 branches -\n > once we determine whether v12 is affected, that alone eliminates a\n > significant fraction of the commits to be checked.\n > \n > git log --oneline --cherry-pick\n > origin/REL_11_STABLE...origin/REL_13_STABLE\n > src/backend/access/transam/xact.c src/backend/port/win32/timer.c\n > src/backend/utils/misc/timeout.c src/pl/plpgsql/src/pl_exec.c\n > \n > --\n > Justin\n\n\n\nSo, I have other installs of Postgres I can also test on my laptop. No VM, straight install of Windows 10.\n\n\nPostgreSQL 12.3, compiled by Visual C++ build 1914, 64-bit install\nNo-exceptions scenario\n---------------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=1462.836..1462.837 rows=1 loops=1)\n Buffers: shared hit=6379\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.020..86.506 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.713 ms\nExecution Time: 1463.359 ms\n\nExceptions scenario\n---------------------------------------\nI canceled the query after 18mn...\n\n\n\nPostgreSQL 11.1, compiled by Visual C++ build 1914, 64-bit\nNo-exceptions scenario\n---------------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=1784.915..1784.915 rows=1 loops=1)\n Buffers: shared hit=6377\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.026..107.194 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.374 ms\nExecution Time: 1785.203 ms\n\nExceptions scenario\n---------------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=33891.778..33891.778 rows=1 loops=1)\n Buffers: shared hit=6372\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.015..171.325 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.090 ms\nExecution Time: 33891.806 ms\n\n\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 18:32:23 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Sunday, August 22, 2021 13:51\n > To: Justin Pryzby <[email protected]>\n > Cc: Ranier Vilela <[email protected]>; [email protected];\n > [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs 
between V11.2\n > and 13.4\n > \n > Justin Pryzby <[email protected]> writes:\n > > This looks useful, thanks. It seems like maybe win64 builds are very\n > > slow running this:\n > \n > > exec_stmt_block() /\n > > BeginInternalSubTransaction() /\n > > AbortSubTransaction() /\n > > reschedule_timeouts() /\n > > schedule_alarm() /\n > > setitimer() /\n > > pg_timer_thread() /\n > > WaitForSingleObjectEx ()\n > \n > Hmm ... we should not be there unless there are active timeout events,\n > which there aren't by default. I wonder whether either Ranier or\n > Laurent have statement_timeout or some similar option enabled.\n > \n > I tried setting statement_timeout = '1 min' just to see if that would affect\n > the results. It does, but only incrementally on my Linux box (on v13, the\n > exception-causing query slows from ~13sec to ~14sec). It's possible that\n > our Windows version of\n > setitimer() is far slower, but that doesn't make a lot of sense really --- the\n > client side of that just briefly takes a critical section. It shouldn't be\n > blocking.\n > \n > Also, the Windows version (src/backend/port/win32/timer.c) hasn't\n > changed at all since before v11. So even if it's slow, that doesn't tell us\n > what changed.\n > \n > There is a patch in v14 (09cf1d522) that drastically reduces the rate at\n > which we make setitimer() calls, which would likely be enough to fix any\n > performance problem that may exist here.\n > But it's still unclear what's different between v11 and v13.\n > \n > \t\t\tregards, tom lane\n\n\nHello Tom,\n\nOn both my clean 13.4 install and current 11.2 install, I have\n#statement_timeout = 0\t\t\t# in milliseconds, 0 is disabled\n\nNote that the 13.4 clean install I gave last measurements for has all stock settings.\n\nThank you,\nLaurent.\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 18:37:04 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: [email protected] <[email protected]>\r\n > Sent: Sunday, August 22, 2021 14:37\r\n > To: Tom Lane <[email protected]>; Justin Pryzby\r\n > <[email protected]>\r\n > Cc: Ranier Vilela <[email protected]>; pgsql-\r\n > [email protected]\r\n > Subject: RE: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > \r\n > > -----Original Message-----\r\n > > From: Tom Lane <[email protected]>\r\n > > Sent: Sunday, August 22, 2021 13:51\r\n > > To: Justin Pryzby <[email protected]>\r\n > > Cc: Ranier Vilela <[email protected]>; [email protected];\r\n > > [email protected]\r\n > > Subject: Re: Big Performance drop of Exceptions in UDFs between\r\n > V11.2\r\n > > and 13.4\r\n > >\r\n > > Justin Pryzby <[email protected]> writes:\r\n > > > This looks useful, thanks. It seems like maybe win64 builds are\r\n > very\r\n > > > slow running this:\r\n > >\r\n > > > exec_stmt_block() /\r\n > > > BeginInternalSubTransaction() /\r\n > > > AbortSubTransaction() /\r\n > > > reschedule_timeouts() /\r\n > > > schedule_alarm() /\r\n > > > setitimer() /\r\n > > > pg_timer_thread() /\r\n > > > WaitForSingleObjectEx ()\r\n > >\r\n > > Hmm ... we should not be there unless there are active timeout\r\n > events,\r\n > > which there aren't by default. 
I wonder whether either Ranier or\r\n > > Laurent have statement_timeout or some similar option enabled.\r\n > >\r\n > > I tried setting statement_timeout = '1 min' just to see if that would\r\n > affect\r\n > > the results. It does, but only incrementally on my Linux box (on v13,\r\n > the\r\n > > exception-causing query slows from ~13sec to ~14sec). It's possible\r\n > that\r\n > > our Windows version of\r\n > > setitimer() is far slower, but that doesn't make a lot of sense really ---\r\n > the\r\n > > client side of that just briefly takes a critical section. It shouldn't be\r\n > > blocking.\r\n > >\r\n > > Also, the Windows version (src/backend/port/win32/timer.c) hasn't\r\n > > changed at all since before v11. So even if it's slow, that doesn't tell\r\n > us\r\n > > what changed.\r\n > >\r\n > > There is a patch in v14 (09cf1d522) that drastically reduces the rate\r\n > at\r\n > > which we make setitimer() calls, which would likely be enough to fix\r\n > any\r\n > > performance problem that may exist here.\r\n > > But it's still unclear what's different between v11 and v13.\r\n > >\r\n > > \t\t\tregards, tom lane\r\n > \r\n > \r\n > Hello Tom,\r\n > \r\n > On both my clean 13.4 install and current 11.2 install, I have\r\n > #statement_timeout = 0\t\t\t# in milliseconds, 0 is\r\n > disabled\r\n > \r\n > Note that the 13.4 clean install I gave last measurements for has all stock\r\n > settings.\r\n > \r\n > Thank you,\r\n > Laurent.\r\n > \r\n > \r\n\r\nOne more fresh install, of 11.13 this time and the issue is not there... 😊\r\n\r\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=1963.573..1963.574 rows=1 loops=1)\r\n Buffers: shared hit=6377\r\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.027..110.896 rows=1000000 loops=1)\r\n Buffers: shared hit=6370\r\nPlanning Time: 0.427 ms\r\nExecution Time: 1963.981 ms\r\n\r\n\r\nAggregate (cost=21370.00..21370.01 rows=1 width=4) (actual time=31685.853..31685.853 rows=1 loops=1)\r\n Buffers: shared hit=6370\r\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=15) (actual time=0.029..180.664 rows=1000000 loops=1)\r\n Buffers: shared hit=6370\r\nPlanning Time: 0.092 ms\r\nExecution Time: 31685.904 ms\r\n\r\nI am still experiencing a larger slowdown in the \"with-exceptions\" scenario being 16x slower compared to other measurements you have all produced.. But at least, it's manageable compared to the multi 100x times.\r\n\r\nSo, now, in summary:\r\n\r\n- I have tried V13.4, V12.3, 11.13, 11.2, 11.1 on several Windows VMs and my personal laptop (no VM).\r\n- All V11.x seem to behave uniformly.\r\n- Starting with 12.3, I am experiencing the major slowdown in the \"with exceptions\" scenario.\r\n\r\n\r\nSo, I was thinking about stuff and a lot of your intuitions seem to drive towards an issue with the compiler used to compile the Winx64 version... But is it possible that the JIT is getting in there and making things weird? 
Given that it's a major change in V12 and this is when I am starting to see the issue popup, I figured it might be another avenue to look into?\r\n\r\nThank you,\r\nLaurent Hasson.\r\n\r\n \r\n\r\n\r\n", "msg_date": "Sun, 22 Aug 2021 19:07:43 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> So, now, in summary:\n\n> - I have tried V13.4, V12.3, 11.13, 11.2, 11.1 on several Windows VMs and my personal laptop (no VM).\n> - All V11.x seem to behave uniformly.\n> - Starting with 12.3, I am experiencing the major slowdown in the \"with exceptions\" scenario.\n\nInteresting. There's no meaningful difference between v11 and v12 as far\nas timeout handling goes, so I'm starting to think that that's a red\nherring.\n\n(Although, after having done some web-searching, I do wonder why timer.c\nis using a manual-reset event. It looks like auto-reset would work\njust as well with less code, and I found some suggestions that it might\nperform better.)\n\n> So, I was thinking about stuff and a lot of your intuitions seem to drive towards an issue with the compiler used to compile the Winx64 version... But is it possible that the JIT is getting in there and making things weird? Given that it's a major change in V12 and this is when I am starting to see the issue popup, I figured it might be another avenue to look into?\n\nHm, is JIT even enabled in your build? If so, does setting jit = 0\nchange anything?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Aug 2021 15:23:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Sunday, August 22, 2021 15:24\n > To: [email protected]\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > <[email protected]>; [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \"[email protected]\" <[email protected]> writes:\n > > So, now, in summary:\n > \n > > - I have tried V13.4, V12.3, 11.13, 11.2, 11.1 on several Windows VMs\n > and my personal laptop (no VM).\n > > - All V11.x seem to behave uniformly.\n > > - Starting with 12.3, I am experiencing the major slowdown in the\n > \"with exceptions\" scenario.\n > \n > Interesting. There's no meaningful difference between v11 and v12 as\n > far as timeout handling goes, so I'm starting to think that that's a red\n > herring.\n > \n > (Although, after having done some web-searching, I do wonder why\n > timer.c is using a manual-reset event. It looks like auto-reset would\n > work just as well with less code, and I found some suggestions that it\n > might perform better.)\n > \n > > So, I was thinking about stuff and a lot of your intuitions seem to drive\n > towards an issue with the compiler used to compile the Winx64\n > version... But is it possible that the JIT is getting in there and making\n > things weird? Given that it's a major change in V12 and this is when I am\n > starting to see the issue popup, I figured it might be another avenue to\n > look into?\n > \n > Hm, is JIT even enabled in your build? 
If so, does setting jit = 0 change\n > anything?\n > \n > \t\t\tregards, tom lane\n\nHello Tom,\n\nI just ran the test with jit=off in the config and restated the server. No change on 13.4. I'd think that the query cost as per the planner would be too small to kick in the JIT but thought to check anyways. Doesn't seem to be the cause.\n\nThanks.,\nLaurent.\n\n\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 19:28:34 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: [email protected] <[email protected]>\n > Sent: Sunday, August 22, 2021 15:29\n > To: Tom Lane <[email protected]>\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > <[email protected]>; [email protected]\n > Subject: RE: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \n > \n > > -----Original Message-----\n > > From: Tom Lane <[email protected]>\n > > Sent: Sunday, August 22, 2021 15:24\n > > To: [email protected]\n > > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > > <[email protected]>; [email protected]\n > > Subject: Re: Big Performance drop of Exceptions in UDFs between\n > V11.2\n > > and 13.4\n > >\n > > \"[email protected]\" <[email protected]> writes:\n > > > So, now, in summary:\n > >\n > > > - I have tried V13.4, V12.3, 11.13, 11.2, 11.1 on several Windows\n > VMs\n > > and my personal laptop (no VM).\n > > > - All V11.x seem to behave uniformly.\n > > > - Starting with 12.3, I am experiencing the major slowdown in the\n > > \"with exceptions\" scenario.\n > >\n > > Interesting. There's no meaningful difference between v11 and v12\n > as\n > > far as timeout handling goes, so I'm starting to think that that's a red\n > > herring.\n > >\n > > (Although, after having done some web-searching, I do wonder why\n > > timer.c is using a manual-reset event. It looks like auto-reset would\n > > work just as well with less code, and I found some suggestions that it\n > > might perform better.)\n > >\n > > > So, I was thinking about stuff and a lot of your intuitions seem to\n > drive\n > > towards an issue with the compiler used to compile the Winx64\n > > version... But is it possible that the JIT is getting in there and making\n > > things weird? Given that it's a major change in V12 and this is when I\n > am\n > > starting to see the issue popup, I figured it might be another avenue\n > to\n > > look into?\n > >\n > > Hm, is JIT even enabled in your build? If so, does setting jit = 0\n > change\n > > anything?\n > >\n > > \t\t\tregards, tom lane\n > \n > Hello Tom,\n > \n > I just ran the test with jit=off in the config and restated the server. No\n > change on 13.4. I'd think that the query cost as per the planner would be\n > too small to kick in the JIT but thought to check anyways. 
Doesn't seem\n > to be the cause.\n > \n > Thanks.,\n > Laurent.\n > \n > \n > \n > \n\n\nAlso Tom,\n\nI do have a Linux install of 13.3, and things work beautifully, so this is definitely a Windows thing here that started in V12.\n\nNo exceptions\n-----------------------------\nAggregate (cost=21370.00..21370.01 rows=1 width=4) (actual time=1796.311..1796.313 rows=1 loops=1)\n Buffers: shared hit=6370\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=8) (actual time=0.006..113.720 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning:\n Buffers: shared hit=5\nPlanning Time: 0.121 ms\nExecution Time: 1796.346 ms\n\nWith Exceptions\n------------------------------\nAggregate (cost=14778.40..14778.41 rows=1 width=4) (actual time=6355.051..6355.052 rows=1 loops=1)\n Buffers: shared hit=6373\n -> Seq Scan on sampletest (cost=0.00..11975.60 rows=560560 width=32) (actual time=0.011..163.499 rows=1000000 loops=1)\n Buffers: shared hit=6370\nPlanning Time: 0.064 ms\nExecution Time: 6355.077 ms\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 19:31:54 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> I do have a Linux install of 13.3, and things work beautifully, so this is definitely a Windows thing here that started in V12.\n\nIt's good to have a box around it, but that's still a pretty large\nbox :-(.\n\nI'm hoping that one of our Windows-using developers will see if\nthey can reproduce this, and if so, try to bisect where it started.\nNot sure how to make further progress without that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Aug 2021 16:11:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Sunday, August 22, 2021 16:11\n > To: [email protected]\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > <[email protected]>; [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \"[email protected]\" <[email protected]> writes:\n > > I do have a Linux install of 13.3, and things work beautifully, so this is\n > definitely a Windows thing here that started in V12.\n > \n > It's good to have a box around it, but that's still a pretty large box :-(.\n > \n > I'm hoping that one of our Windows-using developers will see if they can\n > reproduce this, and if so, try to bisect where it started.\n > Not sure how to make further progress without that.\n > \n > \t\t\tregards, tom lane\n\nHello Tom,\n\nIf there is any way I can help further... I am definitely not able to do a dev environment and local build, but if we have a windows developer reproducing the issue between 11 and 12, then that should help. If someone makes a debug build available to me, I can provide additional help based on that.\n\nThat being said, do you have any suggestion how I could circumvent the issue altogether? Is there a way I could convert a String to some type (integer, float, date...) without exceptions and in case of failure, return a default value? Maybe there is a way to do this and I can avoid exception handling altogether? Or use something else than plpgsql? 
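\n\nFor what it's worth, a regex-guarded variant along these lines is one possibility -- just a sketch, not benchmarked, the function name is only a placeholder, and the pattern is only an approximation of what ::real accepts (an all-digit string with a huge exponent would still overflow and raise an error rather than fall back to the default):\n\nCREATE OR REPLACE FUNCTION toFloatNoEx(str varchar, val real) -- illustrative name only\nRETURNS real AS $$\n    -- guard the cast with a regex instead of an EXCEPTION block;\n    -- anything that does not look like a float (including null) falls back to val\n    SELECT case when str ~ '^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$'\n                then str::real\n                else val\n           end;\n$$ LANGUAGE sql IMMUTABLE;\n\n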
I am always under the impression that plpgsql is the best performing option?\n\nI have seen regex-based options out there, but none being fully satisfying for floating points in particular.\n\nThank you,\nLaurent.\n\n\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 21:12:43 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 8/22/21 4:11 PM, Tom Lane wrote:\n> \"[email protected]\" <[email protected]> writes:\n>> I do have a Linux install of 13.3, and things work beautifully, so this is definitely a Windows thing here that started in V12.\n> It's good to have a box around it, but that's still a pretty large\n> box :-(.\n>\n> I'm hoping that one of our Windows-using developers will see if\n> they can reproduce this, and if so, try to bisect where it started.\n> Not sure how to make further progress without that.\n>\n> \t\n\n\nCan do. Assuming the assertion that it started in Release 12 is correct,\nI should be able to find it by bisecting between the branch point for 12\nand the tip of that branch. That's a little over 20 probes by my\ncalculation.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 17:26:30 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Andrew Dunstan <[email protected]>\r\n > Sent: Sunday, August 22, 2021 17:27\r\n > To: Tom Lane <[email protected]>; [email protected]\r\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\r\n > <[email protected]>; [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > On 8/22/21 4:11 PM, Tom Lane wrote:\r\n > > \"[email protected]\" <[email protected]> writes:\r\n > >> I do have a Linux install of 13.3, and things work beautifully, so this is\r\n > definitely a Windows thing here that started in V12.\r\n > > It's good to have a box around it, but that's still a pretty large box\r\n > > :-(.\r\n > >\r\n > > I'm hoping that one of our Windows-using developers will see if they\r\n > > can reproduce this, and if so, try to bisect where it started.\r\n > > Not sure how to make further progress without that.\r\n > >\r\n > >\r\n > \r\n > \r\n > Can do. Assuming the assertion that it started in Release 12 is correct, I\r\n > should be able to find it by bisecting between the branch point for 12\r\n > and the tip of that branch. That's a little over 20 probes by my\r\n > calculation.\r\n > \r\n > \r\n > cheers\r\n > \r\n > \r\n > andrew\r\n > \r\n > \r\n > --\r\n > Andrew Dunstan\r\n > EDB: https://www.enterprisedb.com\r\n\r\n\r\nI tried it on 11.13 and 12.3. Is there a place where I could download 12.1 and 12.2 and test that? 
Is it worth it or you think you have all you need?\r\n\r\nThanks,\r\nLaurent.\r\n\r\n", "msg_date": "Sun, 22 Aug 2021 21:59:00 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 8/22/21 5:59 PM, [email protected] wrote:\n>\n> > -----Original Message-----\n> > From: Andrew Dunstan <[email protected]>\n> > Sent: Sunday, August 22, 2021 17:27\n> > To: Tom Lane <[email protected]>; [email protected]\n> > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n> > <[email protected]>; [email protected]\n> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n> > and 13.4\n> > \n> > \n> > On 8/22/21 4:11 PM, Tom Lane wrote:\n> > > \"[email protected]\" <[email protected]> writes:\n> > >> I do have a Linux install of 13.3, and things work beautifully, so this is\n> > definitely a Windows thing here that started in V12.\n> > > It's good to have a box around it, but that's still a pretty large box\n> > > :-(.\n> > >\n> > > I'm hoping that one of our Windows-using developers will see if they\n> > > can reproduce this, and if so, try to bisect where it started.\n> > > Not sure how to make further progress without that.\n> > >\n> > >\n> > \n> > \n> > Can do. Assuming the assertion that it started in Release 12 is correct, I\n> > should be able to find it by bisecting between the branch point for 12\n> > and the tip of that branch. That's a little over 20 probes by my\n> > calculation.\n> > \n> > \n> > cheers\n> > \n> > \n> > andrew\n> > \n> > \n> > --\n> > Andrew Dunstan\n> > EDB: https://www.enterprisedb.com\n>\n>\n> I tried it on 11.13 and 12.3. Is there a place where I could download 12.1 and 12.2 and test that? Is it worth it or you think you have all you need?\n>\n\n\nI think I have everything I need.\n\n\nStep one will be to verify that the difference exists between the branch\npoint and the tip of release 12. Once that's done it will be a matter of\nprobing until the commit at fault is identified.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 22 Aug 2021 18:11:32 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 22 de ago. de 2021 às 12:48, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Sun, Aug 22, 2021 at 10:50:47AM -0300, Ranier Vilela wrote:\n> > > Tried to check this with Very Sleepy at Windows 10 (bare metal).\n> > > Not sure it can help if someone can guide how to test this better?\n>\n> > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as\n> \"a\" from sampletest;\n>\n> Your 100sec result *seems* to reproduce the problem, but it'd be more\n> clear if\n> you showed the results of both queries (toFloat(a) vs toFloat(b)).\n> Laurent's queries took 800sec vs 2sec.\n>\nNot, in this test is only with toFloat(a).\n\n\n> > postgres.png (print screen from Very Sleepy)\n> > postgres.csv\n>\n> This looks useful, thanks. 
It seems like maybe win64 builds are very slow\n> running this:\n>\n> exec_stmt_block() /\n> BeginInternalSubTransaction() /\n> AbortSubTransaction() /\n> reschedule_timeouts() /\n> schedule_alarm() /\n> setitimer() /\n> pg_timer_thread() /\n> WaitForSingleObjectEx ()\n>\nNow, test with toFloat(b):\n\npostgres=# explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b,\nnull)) as \"b\" from sampletest;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=21370.00..21370.01 rows=1 width=4) (actual\ntime=16878.424..16878.426 rows=1 loops=1)\n Buffers: shared hit=64 read=6306\n -> Seq Scan on sampletest (cost=0.00..16370.00 rows=1000000 width=8)\n(actual time=0.105..937.201 rows=1000000 loops=1)\n Buffers: shared hit=64 read=6306\n Planning Time: 0.273 ms\n Execution Time: 16878.490 ms\n(6 rows)\n\nIt seems to me that in this way, exec_stmt_block() is not called.\nNot sure if this really is correct. I need to choose postgres.exe to attach\nVery Sleepy.\n\nAttached:\nto_Float_b.png\nto_Float_b.csv\n\nregards,\nRanier Vilela", "msg_date": "Sun, 22 Aug 2021 20:15:04 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 22 de ago. de 2021 às 14:50, Tom Lane <[email protected]> escreveu:\n\n> Justin Pryzby <[email protected]> writes:\n> > This looks useful, thanks. It seems like maybe win64 builds are very\n> slow\n> > running this:\n>\n> > exec_stmt_block() /\n> > BeginInternalSubTransaction() /\n> > AbortSubTransaction() /\n> > reschedule_timeouts() /\n> > schedule_alarm() /\n> > setitimer() /\n> > pg_timer_thread() /\n> > WaitForSingleObjectEx ()\n>\n> Hmm ... we should not be there unless there are active timeout events,\n> which there aren't by default. I wonder whether either Ranier or\n> Laurent have statement_timeout or some similar option enabled.\n>\nTom, none settings, all default from Postgres install.\n\nregards,\nRanier Vilela\n\nEm dom., 22 de ago. de 2021 às 14:50, Tom Lane <[email protected]> escreveu:Justin Pryzby <[email protected]> writes:\n> This looks useful, thanks.  It seems like maybe win64 builds are very slow\n> running this:\n\n> exec_stmt_block() /\n> BeginInternalSubTransaction() /\n> AbortSubTransaction() /\n> reschedule_timeouts() /\n> schedule_alarm() / \n> setitimer() /\n> pg_timer_thread() /\n> WaitForSingleObjectEx () \n\nHmm ... we should not be there unless there are active timeout events,\nwhich there aren't by default.  I wonder whether either Ranier or\nLaurent have statement_timeout or some similar option enabled.Tom, none settings, all default from Postgres install.regards,Ranier Vilela", "msg_date": "Sun, 22 Aug 2021 20:16:31 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 22 de ago. 
de 2021 às 18:12, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> > -----Original Message-----\n> > From: Tom Lane <[email protected]>\n> > Sent: Sunday, August 22, 2021 16:11\n> > To: [email protected]\n> > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n> > <[email protected]>; [email protected]\n> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n> > and 13.4\n> >\n> > \"[email protected]\" <[email protected]> writes:\n> > > I do have a Linux install of 13.3, and things work beautifully, so\n> this is\n> > definitely a Windows thing here that started in V12.\n> >\n> > It's good to have a box around it, but that's still a pretty large\n> box :-(.\n> >\n> > I'm hoping that one of our Windows-using developers will see if they\n> can\n> > reproduce this, and if so, try to bisect where it started.\n> > Not sure how to make further progress without that.\n> >\n> > regards, tom lane\n>\n> Hello Tom,\n>\n> If there is any way I can help further... I am definitely not able to do a\n> dev environment and local build, but if we have a windows developer\n> reproducing the issue between 11 and 12, then that should help. If someone\n> makes a debug build available to me, I can provide additional help based on\n> that.\n>\nPlease, download from this link (Google Drive):\n\nhttps://drive.google.com/file/d/13kPbNmk54lR6t-lwcwi-63UdM55sA27t/view?usp=sharing\n\nPostgres Debug (64 bits) HEAD.\n\nregards,\nRanier Vilela\n\nEm dom., 22 de ago. de 2021 às 18:12, [email protected] <[email protected]> escreveu:\n\n   >  -----Original Message-----\n   >  From: Tom Lane <[email protected]>\n   >  Sent: Sunday, August 22, 2021 16:11\n   >  To: [email protected]\n   >  Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n   >  <[email protected]>; [email protected]\n   >  Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n   >  and 13.4\n   >  \n   >  \"[email protected]\" <[email protected]> writes:\n   >  > I do have a Linux install of 13.3, and things work beautifully, so this is\n   >  definitely a Windows thing here that started in V12.\n   >  \n   >  It's good to have a box around it, but that's still a pretty large box :-(.\n   >  \n   >  I'm hoping that one of our Windows-using developers will see if they can\n   >  reproduce this, and if so, try to bisect where it started.\n   >  Not sure how to make further progress without that.\n   >  \n   >                    regards, tom lane\n\nHello Tom,\n\nIf there is any way I can help further... I am definitely not able to do a dev environment and local build, but if we have a windows developer reproducing the issue between 11 and 12, then that should help. If someone makes a debug build available to me, I can provide additional help based on that.Please, download from this link (Google Drive):https://drive.google.com/file/d/13kPbNmk54lR6t-lwcwi-63UdM55sA27t/view?usp=sharingPostgres Debug (64 bits) HEAD.regards,Ranier Vilela", "msg_date": "Sun, 22 Aug 2021 20:44:34 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Sun, Aug 22, 2021 at 08:44:34PM -0300, Ranier Vilela wrote:\n> > If there is any way I can help further... I am definitely not able to do a\n> > dev environment and local build, but if we have a windows developer\n> > reproducing the issue between 11 and 12, then that should help. 
If someone\n> > makes a debug build available to me, I can provide additional help based on\n> > that.\n>\n> Please, download from this link (Google Drive):\n> \n> https://drive.google.com/file/d/13kPbNmk54lR6t-lwcwi-63UdM55sA27t/view?usp=sharing\n\nLaurent gave a recipe to reproduce the problem, and you seemed to be able to\nreproduce it, so I think Laurent's part is done. The burden now lies with\npostgres developers to isolate the issue, and Andrew said he would bisect to\nlook for the culprit commit.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Aug 2021 19:42:48 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Justin Pryzby <[email protected]>\r\n > Sent: Sunday, August 22, 2021 20:43\r\n > To: Ranier Vilela <[email protected]>\r\n > Cc: [email protected]; Tom Lane <[email protected]>; pgsql-\r\n > [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > On Sun, Aug 22, 2021 at 08:44:34PM -0300, Ranier Vilela wrote:\r\n > > > If there is any way I can help further... I am definitely not able\r\n > > > to do a dev environment and local build, but if we have a windows\r\n > > > developer reproducing the issue between 11 and 12, then that\r\n > should\r\n > > > help. If someone makes a debug build available to me, I can provide\r\n > > > additional help based on that.\r\n > >\r\n > > Please, download from this link (Google Drive):\r\n > >\r\n > > https://drive.google.com/file/d/13kPbNmk54lR6t-lwcwi-\r\n > 63UdM55sA27t/view\r\n > > ?usp=sharing\r\n > \r\n > Laurent gave a recipe to reproduce the problem, and you seemed to be\r\n > able to reproduce it, so I think Laurent's part is done. The burden now\r\n > lies with postgres developers to isolate the issue, and Andrew said he\r\n > would bisect to look for the culprit commit.\r\n > \r\n > --\r\n > Justin\r\n\r\n\r\nHello Ranier,\r\nI am not sure what to do with that build. I am a Java/JavaScript guy these days. I haven't coded C/C++ in over 15 years now and I don't have a debugging environment 😊 If I can run the scenario I created and get a log file, that I can do if that helps.\r\n\r\nJustin, I think I agree with you although I am concerned that none of you were able to truly reproduce the results I have now reproduced on plain base-line installs on 2 VMs (Windows Server 2012) and a laptop (Windows 10 pro), across multiple versions of the installer (11, 12 and 13).\r\n\r\nIn any case, i'll do my best to help. 
If you think you have a fix and it's in one dll or exe and I can just manually patch a 13.4 install and test again, I'll do that with pleasure.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Mon, 23 Aug 2021 03:22:34 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Hello all,\r\n\r\nAny update on this issue?\r\n\r\nThank you!\r\nLaurent.\r\n\r\n > -----Original Message-----\r\n > From: [email protected] <[email protected]>\r\n > Sent: Sunday, August 22, 2021 23:23\r\n > To: Justin Pryzby <[email protected]>; Ranier Vilela\r\n > <[email protected]>\r\n > Cc: Tom Lane <[email protected]>; [email protected]\r\n > Subject: RE: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > \r\n > > -----Original Message-----\r\n > > From: Justin Pryzby <[email protected]>\r\n > > Sent: Sunday, August 22, 2021 20:43\r\n > > To: Ranier Vilela <[email protected]>\r\n > > Cc: [email protected]; Tom Lane <[email protected]>; pgsql-\r\n > > [email protected]\r\n > > Subject: Re: Big Performance drop of Exceptions in UDFs between\r\n > V11.2\r\n > > and 13.4\r\n > >\r\n > > On Sun, Aug 22, 2021 at 08:44:34PM -0300, Ranier Vilela wrote:\r\n > > > > If there is any way I can help further... I am definitely not able\r\n > > > > to do a dev environment and local build, but if we have a\r\n > windows\r\n > > > > developer reproducing the issue between 11 and 12, then that\r\n > > should\r\n > > > > help. If someone makes a debug build available to me, I can\r\n > provide\r\n > > > > additional help based on that.\r\n > > >\r\n > > > Please, download from this link (Google Drive):\r\n > > >\r\n > > > https://drive.google.com/file/d/13kPbNmk54lR6t-lwcwi-\r\n > > 63UdM55sA27t/view\r\n > > > ?usp=sharing\r\n > >\r\n > > Laurent gave a recipe to reproduce the problem, and you seemed to\r\n > be\r\n > > able to reproduce it, so I think Laurent's part is done. The burden\r\n > now\r\n > > lies with postgres developers to isolate the issue, and Andrew said\r\n > he\r\n > > would bisect to look for the culprit commit.\r\n > >\r\n > > --\r\n > > Justin\r\n > \r\n > \r\n > Hello Ranier,\r\n > I am not sure what to do with that build. I am a Java/JavaScript guy\r\n > these days. I haven't coded C/C++ in over 15 years now and I don't have\r\n > a debugging environment 😊 If I can run the scenario I created and get a\r\n > log file, that I can do if that helps.\r\n > \r\n > Justin, I think I agree with you although I am concerned that none of you\r\n > were able to truly reproduce the results I have now reproduced on plain\r\n > base-line installs on 2 VMs (Windows Server 2012) and a laptop\r\n > (Windows 10 pro), across multiple versions of the installer (11, 12 and\r\n > 13).\r\n > \r\n > In any case, i'll do my best to help. If you think you have a fix and it's in\r\n > one dll or exe and I can just manually patch a 13.4 install and test again,\r\n > I'll do that with pleasure.\r\n > \r\n > Thank you,\r\n > Laurent.\r\n > \r\n\r\n", "msg_date": "Thu, 26 Aug 2021 14:47:54 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 8/26/21 10:47 AM, [email protected] wrote:\n> Hello all,\n>\n> Any update on this issue?\n\n\n\nPlease don't top-post.\n\n\nWe are working on the issue. 
Please be patient.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 12:39:19 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Andrew Dunstan <[email protected]>\r\n > Sent: Thursday, August 26, 2021 12:39\r\n > To: [email protected]; Justin Pryzby <[email protected]>;\r\n > Ranier Vilela <[email protected]>\r\n > Cc: Tom Lane <[email protected]>; [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > On 8/26/21 10:47 AM, [email protected] wrote:\r\n > > Hello all,\r\n > >\r\n > > Any update on this issue?\r\n > \r\n > \r\n > \r\n > Please don't top-post.\r\n > \r\n > \r\n > We are working on the issue. Please be patient.\r\n > \r\n > \r\n > cheers\r\n > \r\n > \r\n > andrew\r\n > \r\n > --\r\n > Andrew Dunstan\r\n > EDB: https://www.enterprisedb.com\r\n\r\n\r\nOK... Outlook automatically top posts and I forgot.\r\n\r\nI am being patient.\r\n\r\nThanks,\r\nLaurent.\r\n\r\n", "msg_date": "Fri, 27 Aug 2021 00:21:40 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> That being said, do you have any suggestion how I could circumvent the\n> issue altogether?\n\nBased on Andrew's report, it seems like you might be able to work around\nit for the time being by disabling message translations, i.e.\n\tSET lc_messages = 'C';\nEven if that's not acceptable in your work environment, it would be useful\nto verify that you see an improvement from it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:43:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Friday, August 27, 2021 13:43\n > To: [email protected]\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > <[email protected]>; [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \"[email protected]\" <[email protected]> writes:\n > > That being said, do you have any suggestion how I could circumvent\n > the\n > > issue altogether?\n > \n > Based on Andrew's report, it seems like you might be able to work\n > around it for the time being by disabling message translations, i.e.\n > \tSET lc_messages = 'C';\n > Even if that's not acceptable in your work environment, it would be\n > useful to verify that you see an improvement from it.\n > \n > \t\t\tregards, tom lane\n\n\n\nSET lc_messages = 'C';\ndrop table sampletest;\ncreate table sampletest (a varchar, b varchar);\ninsert into sampletest (a, b)\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\n from generate_series(1,100000);\n\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nshow lc_messages; -- OK 'C'\n\n\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 18:21:43 
+0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Friday, August 27, 2021 13:43\n > To: [email protected]\n > Cc: Justin Pryzby <[email protected]>; Ranier Vilela\n > <[email protected]>; [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \"[email protected]\" <[email protected]> writes:\n > > That being said, do you have any suggestion how I could circumvent\n > the\n > > issue altogether?\n > \n > Based on Andrew's report, it seems like you might be able to work\n > around it for the time being by disabling message translations, i.e.\n > \tSET lc_messages = 'C';\n > Even if that's not acceptable in your work environment, it would be\n > useful to verify that you see an improvement from it.\n > \n > \t\t\tregards, tom lane\n\nHello Tom.... hit the send button accidentally.\n\n\nSET lc_messages = 'C';\ndrop table sampletest;\ncreate table sampletest (a varchar, b varchar);\ninsert into sampletest (a, b)\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\n from generate_series(1,100000);\n\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nshow lc_messages; --> OK 'C'\n\nexplain (analyze,buffers,COSTS,TIMING) \nselect MAX(toFloat(b, null)) as \"b\" from sampletest\n\nAggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=175.551..175.552 rows=1 loops=1)\n Buffers: shared hit=637\n -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.014..9.270 rows=100000 loops=1)\n Buffers: shared hit=637\nPlanning Time: 0.087 ms\nExecution Time: 175.600 ms\n\n\nexplain (analyze,buffers,COSTS,TIMING) \nselect MAX(toFloat(a, null)) as \"a\" from sampletest\n\nAggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=88031.549..88031.551 rows=1 loops=1)\n Buffers: shared hit=637\n -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=15) (actual time=0.008..34.494 rows=100000 loops=1)\n Buffers: shared hit=637\nPlanning:\n Buffers: shared hit=4\nPlanning Time: 0.171 ms\nExecution Time: 88031.585 ms\n\nDoesn't seem to make a difference unless I misunderstood what you were asking for regarding the locale?\n\nThank you,\nLaurent.\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 18:27:27 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> SET lc_messages = 'C';\n> show lc_messages; --> OK 'C'\n\n> explain (analyze,buffers,COSTS,TIMING) \n> select MAX(toFloat(b, null)) as \"b\" from sampletest\n> ...\n> Execution Time: 175.600 ms\n\n> explain (analyze,buffers,COSTS,TIMING) \n> select MAX(toFloat(a, null)) as \"a\" from sampletest\n> ...\n> Execution Time: 88031.585 ms\n\n> Doesn't seem to make a difference unless I misunderstood what you were asking for regarding the locale?\n\nHmm. This suggests that whatever effect Andrew found with NLS\nis actually not the explanation for your problem. 
So I'm even\nmore confused than before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Aug 2021 15:50:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Tom Lane <[email protected]>\n > Sent: Saturday, August 28, 2021 15:51\n > To: [email protected]\n > Cc: Andrew Dunstan <[email protected]>; Justin Pryzby\n > <[email protected]>; Ranier Vilela <[email protected]>; pgsql-\n > [email protected]\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n > and 13.4\n > \n > \"[email protected]\" <[email protected]> writes:\n > > SET lc_messages = 'C';\n > > show lc_messages; --> OK 'C'\n > \n > > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as\n > > \"b\" from sampletest ...\n > > Execution Time: 175.600 ms\n > \n > > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as\n > > \"a\" from sampletest ...\n > > Execution Time: 88031.585 ms\n > \n > > Doesn't seem to make a difference unless I misunderstood what you\n > were asking for regarding the locale?\n > \n > Hmm. This suggests that whatever effect Andrew found with NLS is\n > actually not the explanation for your problem. So I'm even more\n > confused than before.\n > \n > \t\t\tregards, tom lane\n\nI am so sorry to hear... So, curious on my end: is this something that you are not able to reproduce on your environments? On my end, I did reproduce it on different VMs and my local laptop, across windows Server 2012 and Windows 10, so I'd figure it would be pretty easy to reproduce?\n\nThank you!\nLaurent.\n\n\n", "msg_date": "Sun, 29 Aug 2021 01:55:38 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em sáb., 28 de ago. de 2021 às 22:55, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> > -----Original Message-----\n> > From: Tom Lane <[email protected]>\n> > Sent: Saturday, August 28, 2021 15:51\n> > To: [email protected]\n> > Cc: Andrew Dunstan <[email protected]>; Justin Pryzby\n> > <[email protected]>; Ranier Vilela <[email protected]>; pgsql-\n> > [email protected]\n> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n> > and 13.4\n> >\n> > \"[email protected]\" <[email protected]> writes:\n> > > SET lc_messages = 'C';\n> > > show lc_messages; --> OK 'C'\n> >\n> > > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b,\n> null)) as\n> > > \"b\" from sampletest ...\n> > > Execution Time: 175.600 ms\n> >\n> > > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a,\n> null)) as\n> > > \"a\" from sampletest ...\n> > > Execution Time: 88031.585 ms\n> >\n> > > Doesn't seem to make a difference unless I misunderstood what you\n> > were asking for regarding the locale?\n> >\n> > Hmm. This suggests that whatever effect Andrew found with NLS is\n> > actually not the explanation for your problem. So I'm even more\n> > confused than before.\n> >\n> > regards, tom lane\n>\n> I am so sorry to hear... So, curious on my end: is this something that you\n> are not able to reproduce on your environments? 
On my end, I did reproduce\n> it on different VMs and my local laptop, across windows Server 2012 and\n> Windows 10, so I'd figure it would be pretty easy to reproduce?\n>\nWhat does reproduction have to do with solving the problem?\nCan you tell how many commits there are between the affected versions?\n\nI retested this case with HEAD, and it seems to me that NLS does affect it.\n\npostgres=# create table sampletest (a varchar, b varchar);\nCREATE TABLE\npostgres=# insert into sampletest (a, b)\npostgres-# select substr(md5(random()::text), 0, 15),\n(100000000*random())::integer::varchar\npostgres-# from generate_series(1,100000);\nINSERT 0 100000\npostgres=#\npostgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\npostgres-# RETURNS real AS $$\npostgres$# BEGIN\npostgres$# RETURN case when str is null then val else str::real end;\npostgres$# EXCEPTION WHEN OTHERS THEN\npostgres$# RETURN val;\npostgres$# END;\npostgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=386.990..386.991 rows=1 loops=1)\n Buffers: shared hit=643 read=1\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.032..17.325 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=13 read=13\n Planning Time: 0.967 ms\n Execution Time: 387.989 ms\n(8 rows)\n\n\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=1812.556..1812.557 rows=1 loops=1)\n Buffers: shared hit=639 read=1\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.026..20.866 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning Time: 0.152 ms\n Execution Time: 1812.587 ms\n(6 rows)\n\n\npostgres=# SET lc_messages = 'C';\nSET\npostgres=# drop table sampletest;\nDROP TABLE\npostgres=# create table sampletest (a varchar, b varchar);\nCREATE TABLE\npostgres=# insert into sampletest (a, b)\npostgres-# select substr(md5(random()::text), 0, 15),\n(100000000*random())::integer::varchar\npostgres-# from generate_series(1,100000);\nINSERT 0 100000\npostgres=#\npostgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\npostgres-# RETURNS real AS $$\npostgres$# BEGIN\npostgres$# RETURN case when str is null then val else str::real end;\npostgres$# EXCEPTION WHEN OTHERS THEN\npostgres$# RETURN val;\npostgres$# END;\npostgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=278.993..278.994 rows=1 loops=1)\n Buffers: shared hit=637\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.029..16.837 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=4\n Planning Time: 0.181 ms\n Execution 
Time: 279.023 ms\n(8 rows)\n\n\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual\ntime=1783.434..1783.435 rows=1 loops=1)\n Buffers: shared hit=637\n -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=15)\n(actual time=0.016..21.098 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=6\n Planning Time: 1.020 ms\n Execution Time: 1783.464 ms\n(8 rows)\n\nWith NLS:\nFloat_b:\nPlanning Time: 0.967 ms\nExecution Time: 387.989 ms\n\nFloat_a:\nPlanning Time: 0.152 ms\nExecution Time: 1812.587 ms\n\nWithout NLS:\nFloat_b:\nPlanning Time: 0.181 ms\nExecution Time: 279.023 ms\n\nFloat_a:\nPlanning Time: 1.020 ms\nExecution Time: 1783.464 ms\n\nregards,\nRanier Vilela\n\nEm sáb., 28 de ago. de 2021 às 22:55, [email protected] <[email protected]> escreveu:\n\n   >  -----Original Message-----\n   >  From: Tom Lane <[email protected]>\n   >  Sent: Saturday, August 28, 2021 15:51\n   >  To: [email protected]\n   >  Cc: Andrew Dunstan <[email protected]>; Justin Pryzby\n   >  <[email protected]>; Ranier Vilela <[email protected]>; pgsql-\n   >  [email protected]\n   >  Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n   >  and 13.4\n   >  \n   >  \"[email protected]\" <[email protected]> writes:\n   >  > SET lc_messages = 'C';\n   >  > show lc_messages; --> OK 'C'\n   >  \n   >  > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as\n   >  > \"b\" from sampletest ...\n   >  > Execution Time: 175.600 ms\n   >  \n   >  > explain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as\n   >  > \"a\" from sampletest ...\n   >  > Execution Time: 88031.585 ms\n   >  \n   >  > Doesn't seem to make a difference unless I misunderstood what you\n   >  were asking for regarding the locale?\n   >  \n   >  Hmm.  This suggests that whatever effect Andrew found with NLS is\n   >  actually not the explanation for your problem.  So I'm even more\n   >  confused than before.\n   >  \n   >                    regards, tom lane\n\nI am so sorry to hear... So, curious on my end: is this something that you are not able to reproduce on your environments? 
hit=4 Planning Time: 0.181 ms Execution Time: 279.023 ms(8 rows)postgres=# explain (analyze,buffers,COSTS,TIMING)postgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=2137.00..2137.01 rows=1 width=4) (actual time=1783.434..1783.435 rows=1 loops=1)   Buffers: shared hit=637   ->  Seq Scan on sampletest  (cost=0.00..1637.00 rows=100000 width=15) (actual time=0.016..21.098 rows=100000 loops=1)         Buffers: shared hit=637 Planning:   Buffers: shared hit=6 Planning Time: 1.020 ms Execution Time: 1783.464 ms(8 rows)With NLS:Float_b:Planning Time: 0.967 msExecution Time: 387.989 ms\n\nFloat_a:Planning Time: 0.152 msExecution Time: 1812.587 ms Without NLS:\nFloat_b:\nPlanning Time: 0.181 msExecution Time: 279.023 ms\n\n\nFloat_a:\n\nPlanning Time: 1.020 msExecution Time: 1783.464 ms\n\nregards,Ranier Vilela", "msg_date": "Sun, 29 Aug 2021 09:53:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> I retested this case with HEAD, and it seems to me that NLS does affect it.\n\nSure, there's no question that message translation will have *some* cost.\nBut on my machine it is an incremental tens-of-percent kind of cost,\nand that is the result you're getting as well. So it's not very clear\nwhere these factor-of-several-hundred differences are coming from.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Aug 2021 09:35:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 29 de ago. de 2021 às 10:35, Tom Lane <[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n> > I retested this case with HEAD, and it seems to me that NLS does affect\n> it.\n>\n> Sure, there's no question that message translation will have *some* cost.\n> But on my machine it is an incremental tens-of-percent kind of cost,\n> and that is the result you're getting as well. 
So it's not very clear\n> where these factor-of-several-hundred differences are coming from.\n>\nA hypothesis that has not yet come up, may be some defect in the code\ngeneration,\nby the previous msvc compiler used, because in all my tests I always use\nthe latest version,\nwhich has several corrections in the code generation part.\n\nView this test with one of the attempts to reproduce the problem.\nmsvc: 19.29.30133 para x64\nwindows 10 64 bits\nPostgres: 12.8\n\npostgres=# select version();\n version\n------------------------------------------------------------\n PostgreSQL 12.8, compiled by Visual C++ build 1929, 64-bit\n(1 row)\n\n\npostgres=# drop table sampletest;\nDROP TABLE\npostgres=# create table sampletest (a varchar, b varchar);\nCREATE TABLE\npostgres=# insert into sampletest (a, b)\npostgres-# select substr(md5(random()::text), 0, 15),\n(100000000*random())::integer::varchar\npostgres-# from generate_series(1,100000);\nINSERT 0 100000\npostgres=#\npostgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\npostgres-# RETURNS real AS $$\npostgres$# BEGIN\npostgres$# RETURN case when str is null then val else str::real end;\npostgres$# EXCEPTION WHEN OTHERS THEN\npostgres$# RETURN val;\npostgres$# END;\npostgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=339.978..339.979 rows=1 loops=1)\n Buffers: shared hit=644\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.032..18.132 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning Time: 3.631 ms\n Execution Time: 340.330 ms\n(6 rows)\n\n\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=1724.902..1724.903 rows=1 loops=1)\n Buffers: shared hit=640\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.021..23.489 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning Time: 0.150 ms\n Execution Time: 1724.930 ms\n(6 rows)\n\nregards,\nRanier Vilela\n\nEm dom., 29 de ago. de 2021 às 10:35, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> I retested this case with HEAD, and it seems to me that NLS does affect it.\n\nSure, there's no question that message translation will have *some* cost.\nBut on my machine it is an incremental tens-of-percent kind of cost,\nand that is the result you're getting as well.  
So it's not very clear\nwhere these factor-of-several-hundred differences are coming from.A hypothesis that has not yet come up, may be some defect in the code generation, by the previous msvc compiler used, because in all my tests I always use the latest version, which has several corrections in the code generation part.View this test with one of the attempts to reproduce the problem.msvc: 19.29.30133 para x64windows 10 64 bitsPostgres: 12.8postgres=# select version();                          version------------------------------------------------------------ PostgreSQL 12.8, compiled by Visual C++ build 1929, 64-bit(1 row)postgres=# drop table sampletest;DROP TABLEpostgres=# create table sampletest (a varchar, b varchar);CREATE TABLEpostgres=# insert into sampletest (a, b)postgres-# select substr(md5(random()::text), 0, 15), (100000000*random())::integer::varcharpostgres-#   from generate_series(1,100000);INSERT 0 100000postgres=#postgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)postgres-# RETURNS real AS $$postgres$# BEGINpostgres$#   RETURN case when str is null then val else str::real end;postgres$# EXCEPTION WHEN OTHERS THENpostgres$#   RETURN val;postgres$# END;postgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;CREATE FUNCTIONpostgres=# explain (analyze,buffers,COSTS,TIMING)postgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=1477.84..1477.85 rows=1 width=4) (actual time=339.978..339.979 rows=1 loops=1)   Buffers: shared hit=644   ->  Seq Scan on sampletest  (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.032..18.132 rows=100000 loops=1)         Buffers: shared hit=637 Planning Time: 3.631 ms Execution Time: 340.330 ms(6 rows)postgres=# explain (analyze,buffers,COSTS,TIMING)postgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=1477.84..1477.85 rows=1 width=4) (actual time=1724.902..1724.903 rows=1 loops=1)   Buffers: shared hit=640   ->  Seq Scan on sampletest  (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.021..23.489 rows=100000 loops=1)         Buffers: shared hit=637 Planning Time: 0.150 ms Execution Time: 1724.930 ms(6 rows)regards,Ranier Vilela", "msg_date": "Sun, 29 Aug 2021 11:00:44 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": ">Sure, there's no question that message translation will have *some* cost.\r\n>But on my machine it is an incremental tens-of-percent kind of cost,\r\n>and that is the result you're getting as well.  So it's not very clear\r\n>where these factor-of-several-hundred differences are coming from.\r\n>A hypothesis that has not yet come up, may be some defect in the code generation, \r\n>by the previous msvc compiler used, because in all my tests I always use the latest version, \r\n>which has several corrections in the code generation part.\r\n\r\n------------------------------------------------------------------------------------------------------------------------\r\n\r\nHello all,\r\n\r\nI don't think this reproduces the issue I experience. 
I saw a difference of around 500x! What you see is 5x, which according to Tom would be expected for an execution path involving exceptions. And NLS should have an impact as well since more work happens. From the numbers you published, I see 10-15% change which again would be expected?\r\n\r\nI cannot think of anything that would be specific to me with regards to this scenario given that I have tried it in quite a few environments from plain stock installs. Until one of you is able to reproduce this, you may be chasing other issues. \r\n\r\nIs it possible that the client I am using or the way I am creating the test database might affect this scenario? I use DBeaver and use the default settings to create the database:\r\n- default encoding: UTF8\r\n- collate: English_United States.1252\r\n- ctype: English_United States.1252\r\n- default tablespace: pg_default\r\n\r\nSettings:\r\nName\tValue\tUnit\r\nallow_system_table_mods\toff\t[NULL]\r\napplication_name\tDBeaver 21.1.3 - Main <postgres>\t[NULL]\r\narchive_cleanup_command\t\t[NULL]\r\narchive_command\t(disabled)\t[NULL]\r\narchive_mode\toff\t[NULL]\r\narchive_timeout\t0\ts\r\narray_nulls\ton\t[NULL]\r\nauthentication_timeout\t60\ts\r\nautovacuum\ton\t[NULL]\r\nautovacuum_analyze_scale_factor\t0.1\t[NULL]\r\nautovacuum_analyze_threshold\t50\t[NULL]\r\nautovacuum_freeze_max_age\t200000000\t[NULL]\r\nautovacuum_max_workers\t3\t[NULL]\r\nautovacuum_multixact_freeze_max_age\t400000000\t[NULL]\r\nautovacuum_naptime\t60\ts\r\nautovacuum_vacuum_cost_delay\t2\tms\r\nautovacuum_vacuum_cost_limit\t-1\t[NULL]\r\nautovacuum_vacuum_insert_scale_factor\t0.2\t[NULL]\r\nautovacuum_vacuum_insert_threshold\t1000\t[NULL]\r\nautovacuum_vacuum_scale_factor\t0.2\t[NULL]\r\nautovacuum_vacuum_threshold\t50\t[NULL]\r\nautovacuum_work_mem\t-1\tkB\r\nbackend_flush_after\t0\t8kB\r\nbackslash_quote\tsafe_encoding\t[NULL]\r\nbacktrace_functions\t\t[NULL]\r\nbgwriter_delay\t200\tms\r\nbgwriter_flush_after\t0\t8kB\r\nbgwriter_lru_maxpages\t100\t[NULL]\r\nbgwriter_lru_multiplier\t2\t[NULL]\r\nblock_size\t8192\t[NULL]\r\nbonjour\toff\t[NULL]\r\nbonjour_name\t\t[NULL]\r\nbytea_output\thex\t[NULL]\r\ncheck_function_bodies\ton\t[NULL]\r\ncheckpoint_completion_target\t0.5\t[NULL]\r\ncheckpoint_flush_after\t0\t8kB\r\ncheckpoint_timeout\t300\ts\r\ncheckpoint_warning\t30\ts\r\nclient_encoding\tUTF8\t[NULL]\r\nclient_min_messages\tnotice\t[NULL]\r\ncluster_name\t\t[NULL]\r\ncommit_delay\t0\t[NULL]\r\ncommit_siblings\t5\t[NULL]\r\nconfig_file\tC:/Program Files/PostgreSQL/13/data/postgresql.conf\t[NULL]\r\nconstraint_exclusion\tpartition\t[NULL]\r\ncpu_index_tuple_cost\t0.005\t[NULL]\r\ncpu_operator_cost\t0.0025\t[NULL]\r\ncpu_tuple_cost\t0.01\t[NULL]\r\ncursor_tuple_fraction\t0.1\t[NULL]\r\ndata_checksums\toff\t[NULL]\r\ndata_directory\tC:/Program Files/PostgreSQL/13/data\t[NULL]\r\ndata_directory_mode\t700\t[NULL]\r\ndata_sync_retry\toff\t[NULL]\r\nDateStyle\tISO, YMD\t[NULL]\r\ndb_user_namespace\toff\t[NULL]\r\ndeadlock_timeout\t1000\tms\r\ndebug_assertions\toff\t[NULL]\r\ndebug_pretty_print\ton\t[NULL]\r\ndebug_print_parse\toff\t[NULL]\r\ndebug_print_plan\toff\t[NULL]\r\ndebug_print_rewritten\toff\t[NULL]\r\ndefault_statistics_target\t100\t[NULL]\r\ndefault_table_access_method\theap\t[NULL]\r\ndefault_tablespace\t\t[NULL]\r\ndefault_text_search_config\tpg_catalog.english\t[NULL]\r\ndefault_transaction_deferrable\toff\t[NULL]\r\ndefault_transaction_isolation\tread 
committed\t[NULL]\r\ndefault_transaction_read_only\toff\t[NULL]\r\ndynamic_library_path\t$libdir\t[NULL]\r\ndynamic_shared_memory_type\twindows\t[NULL]\r\neffective_cache_size\t524288\t8kB\r\neffective_io_concurrency\t0\t[NULL]\r\nenable_bitmapscan\ton\t[NULL]\r\nenable_gathermerge\ton\t[NULL]\r\nenable_hashagg\ton\t[NULL]\r\nenable_hashjoin\ton\t[NULL]\r\nenable_incremental_sort\ton\t[NULL]\r\nenable_indexonlyscan\ton\t[NULL]\r\nenable_indexscan\ton\t[NULL]\r\nenable_material\ton\t[NULL]\r\nenable_mergejoin\ton\t[NULL]\r\nenable_nestloop\ton\t[NULL]\r\nenable_parallel_append\ton\t[NULL]\r\nenable_parallel_hash\ton\t[NULL]\r\nenable_partition_pruning\ton\t[NULL]\r\nenable_partitionwise_aggregate\toff\t[NULL]\r\nenable_partitionwise_join\toff\t[NULL]\r\nenable_seqscan\ton\t[NULL]\r\nenable_sort\ton\t[NULL]\r\nenable_tidscan\ton\t[NULL]\r\nescape_string_warning\ton\t[NULL]\r\nevent_source\tPostgreSQL\t[NULL]\r\nexit_on_error\toff\t[NULL]\r\nexternal_pid_file\t\t[NULL]\r\nextra_float_digits\t3\t[NULL]\r\nforce_parallel_mode\toff\t[NULL]\r\nfrom_collapse_limit\t8\t[NULL]\r\nfsync\ton\t[NULL]\r\nfull_page_writes\ton\t[NULL]\r\ngeqo\ton\t[NULL]\r\ngeqo_effort\t5\t[NULL]\r\ngeqo_generations\t0\t[NULL]\r\ngeqo_pool_size\t0\t[NULL]\r\ngeqo_seed\t0\t[NULL]\r\ngeqo_selection_bias\t2\t[NULL]\r\ngeqo_threshold\t12\t[NULL]\r\ngin_fuzzy_search_limit\t0\t[NULL]\r\ngin_pending_list_limit\t4096\tkB\r\nhash_mem_multiplier\t1\t[NULL]\r\nhba_file\tC:/Program Files/PostgreSQL/13/data/pg_hba.conf\t[NULL]\r\nhot_standby\ton\t[NULL]\r\nhot_standby_feedback\toff\t[NULL]\r\nhuge_pages\ttry\t[NULL]\r\nident_file\tC:/Program Files/PostgreSQL/13/data/pg_ident.conf\t[NULL]\r\nidle_in_transaction_session_timeout\t0\tms\r\nignore_checksum_failure\toff\t[NULL]\r\nignore_invalid_pages\toff\t[NULL]\r\nignore_system_indexes\toff\t[NULL]\r\ninteger_datetimes\ton\t[NULL]\r\nIntervalStyle\tpostgres\t[NULL]\r\njit\toff\t[NULL]\r\njit_above_cost\t100000\t[NULL]\r\njit_debugging_support\toff\t[NULL]\r\njit_dump_bitcode\toff\t[NULL]\r\njit_expressions\ton\t[NULL]\r\njit_inline_above_cost\t500000\t[NULL]\r\njit_optimize_above_cost\t500000\t[NULL]\r\njit_profiling_support\toff\t[NULL]\r\njit_provider\tllvmjit\t[NULL]\r\njit_tuple_deforming\ton\t[NULL]\r\njoin_collapse_limit\t8\t[NULL]\r\nkrb_caseins_users\toff\t[NULL]\r\nkrb_server_keyfile\t\t[NULL]\r\nlc_collate\tEnglish_United States.1252\t[NULL]\r\nlc_ctype\tEnglish_United States.1252\t[NULL]\r\nlc_messages\tEnglish_United States.1252\t[NULL]\r\nlc_monetary\tEnglish_United States.1252\t[NULL]\r\nlc_numeric\tEnglish_United States.1252\t[NULL]\r\nlc_time\tEnglish_United States.1252\t[NULL]\r\nlisten_addresses\t*\t[NULL]\r\nlo_compat_privileges\toff\t[NULL]\r\nlocal_preload_libraries\t\t[NULL]\r\nlock_timeout\t0\tms\r\nlog_autovacuum_min_duration\t-1\tms\r\nlog_checkpoints\toff\t[NULL]\r\nlog_connections\toff\t[NULL]\r\nlog_destination\tstderr\t[NULL]\r\nlog_directory\tlog\t[NULL]\r\nlog_disconnections\toff\t[NULL]\r\nlog_duration\toff\t[NULL]\r\nlog_error_verbosity\tdefault\t[NULL]\r\nlog_executor_stats\toff\t[NULL]\r\nlog_file_mode\t640\t[NULL]\r\nlog_filename\tpostgresql-%Y-%m-%d_%H%M%S.log\t[NULL]\r\nlog_hostname\toff\t[NULL]\r\nlog_line_prefix\t%m [%p] 
\t[NULL]\r\nlog_lock_waits\toff\t[NULL]\r\nlog_min_duration_sample\t-1\tms\r\nlog_min_duration_statement\t-1\tms\r\nlog_min_error_statement\terror\t[NULL]\r\nlog_min_messages\twarning\t[NULL]\r\nlog_parameter_max_length\t-1\tB\r\nlog_parameter_max_length_on_error\t0\tB\r\nlog_parser_stats\toff\t[NULL]\r\nlog_planner_stats\toff\t[NULL]\r\nlog_replication_commands\toff\t[NULL]\r\nlog_rotation_age\t1440\tmin\r\nlog_rotation_size\t10240\tkB\r\nlog_statement\tnone\t[NULL]\r\nlog_statement_sample_rate\t1\t[NULL]\r\nlog_statement_stats\toff\t[NULL]\r\nlog_temp_files\t-1\tkB\r\nlog_timezone\tUS/Eastern\t[NULL]\r\nlog_transaction_sample_rate\t0\t[NULL]\r\nlog_truncate_on_rotation\toff\t[NULL]\r\nlogging_collector\ton\t[NULL]\r\nlogical_decoding_work_mem\t65536\tkB\r\nmaintenance_io_concurrency\t0\t[NULL]\r\nmaintenance_work_mem\t65536\tkB\r\nmax_connections\t100\t[NULL]\r\nmax_files_per_process\t1000\t[NULL]\r\nmax_function_args\t100\t[NULL]\r\nmax_identifier_length\t63\t[NULL]\r\nmax_index_keys\t32\t[NULL]\r\nmax_locks_per_transaction\t64\t[NULL]\r\nmax_logical_replication_workers\t4\t[NULL]\r\nmax_parallel_maintenance_workers\t2\t[NULL]\r\nmax_parallel_workers\t8\t[NULL]\r\nmax_parallel_workers_per_gather\t2\t[NULL]\r\nmax_pred_locks_per_page\t2\t[NULL]\r\nmax_pred_locks_per_relation\t-2\t[NULL]\r\nmax_pred_locks_per_transaction\t64\t[NULL]\r\nmax_prepared_transactions\t0\t[NULL]\r\nmax_replication_slots\t10\t[NULL]\r\nmax_slot_wal_keep_size\t-1\tMB\r\nmax_stack_depth\t2048\tkB\r\nmax_standby_archive_delay\t30000\tms\r\nmax_standby_streaming_delay\t30000\tms\r\nmax_sync_workers_per_subscription\t2\t[NULL]\r\nmax_wal_senders\t10\t[NULL]\r\nmax_wal_size\t1024\tMB\r\nmax_worker_processes\t8\t[NULL]\r\nmin_parallel_index_scan_size\t64\t8kB\r\nmin_parallel_table_scan_size\t1024\t8kB\r\nmin_wal_size\t80\tMB\r\nold_snapshot_threshold\t-1\tmin\r\noperator_precedence_warning\toff\t[NULL]\r\nparallel_leader_participation\ton\t[NULL]\r\nparallel_setup_cost\t1000\t[NULL]\r\nparallel_tuple_cost\t0.1\t[NULL]\r\npassword_encryption\tscram-sha-256\t[NULL]\r\nplan_cache_mode\tauto\t[NULL]\r\nport\t5433\t[NULL]\r\npost_auth_delay\t0\ts\r\npre_auth_delay\t0\ts\r\nprimary_conninfo\t\t[NULL]\r\nprimary_slot_name\t\t[NULL]\r\npromote_trigger_file\t\t[NULL]\r\nquote_all_identifiers\toff\t[NULL]\r\nrandom_page_cost\t4\t[NULL]\r\nrecovery_end_command\t\t[NULL]\r\nrecovery_min_apply_delay\t0\tms\r\nrecovery_target\t\t[NULL]\r\nrecovery_target_action\tpause\t[NULL]\r\nrecovery_target_inclusive\ton\t[NULL]\r\nrecovery_target_lsn\t\t[NULL]\r\nrecovery_target_name\t\t[NULL]\r\nrecovery_target_time\t\t[NULL]\r\nrecovery_target_timeline\tlatest\t[NULL]\r\nrecovery_target_xid\t\t[NULL]\r\nrestart_after_crash\ton\t[NULL]\r\nrestore_command\t\t[NULL]\r\nrow_security\ton\t[NULL]\r\nsearch_path\t$user, 
public\t[NULL]\r\nsegment_size\t131072\t8kB\r\nseq_page_cost\t1\t[NULL]\r\nserver_encoding\tUTF8\t[NULL]\r\nserver_version\t13.4\t[NULL]\r\nserver_version_num\t130004\t[NULL]\r\nsession_preload_libraries\t\t[NULL]\r\nsession_replication_role\torigin\t[NULL]\r\nshared_buffers\t16384\t8kB\r\nshared_memory_type\twindows\t[NULL]\r\nshared_preload_libraries\t\t[NULL]\r\nssl\toff\t[NULL]\r\nssl_ca_file\t\t[NULL]\r\nssl_cert_file\tserver.crt\t[NULL]\r\nssl_ciphers\tHIGH:MEDIUM:+3DES:!aNULL\t[NULL]\r\nssl_crl_file\t\t[NULL]\r\nssl_dh_params_file\t\t[NULL]\r\nssl_ecdh_curve\tprime256v1\t[NULL]\r\nssl_key_file\tserver.key\t[NULL]\r\nssl_library\tOpenSSL\t[NULL]\r\nssl_max_protocol_version\t\t[NULL]\r\nssl_min_protocol_version\tTLSv1.2\t[NULL]\r\nssl_passphrase_command\t\t[NULL]\r\nssl_passphrase_command_supports_reload\toff\t[NULL]\r\nssl_prefer_server_ciphers\ton\t[NULL]\r\nstandard_conforming_strings\ton\t[NULL]\r\nstatement_timeout\t0\tms\r\nstats_temp_directory\tpg_stat_tmp\t[NULL]\r\nsuperuser_reserved_connections\t3\t[NULL]\r\nsynchronize_seqscans\ton\t[NULL]\r\nsynchronous_commit\ton\t[NULL]\r\nsynchronous_standby_names\t\t[NULL]\r\nsyslog_facility\tnone\t[NULL]\r\nsyslog_ident\tpostgres\t[NULL]\r\nsyslog_sequence_numbers\ton\t[NULL]\r\nsyslog_split_messages\ton\t[NULL]\r\ntcp_keepalives_count\t0\t[NULL]\r\ntcp_keepalives_idle\t-1\ts\r\ntcp_keepalives_interval\t-1\ts\r\ntcp_user_timeout\t0\tms\r\ntemp_buffers\t1024\t8kB\r\ntemp_file_limit\t-1\tkB\r\ntemp_tablespaces\t\t[NULL]\r\nTimeZone\tAmerica/New_York\t[NULL]\r\ntimezone_abbreviations\tDefault\t[NULL]\r\ntrace_notify\toff\t[NULL]\r\ntrace_recovery_messages\tlog\t[NULL]\r\ntrace_sort\toff\t[NULL]\r\ntrack_activities\ton\t[NULL]\r\ntrack_activity_query_size\t1024\tB\r\ntrack_commit_timestamp\toff\t[NULL]\r\ntrack_counts\ton\t[NULL]\r\ntrack_functions\tnone\t[NULL]\r\ntrack_io_timing\toff\t[NULL]\r\ntransaction_deferrable\toff\t[NULL]\r\ntransaction_isolation\tread committed\t[NULL]\r\ntransaction_read_only\toff\t[NULL]\r\ntransform_null_equals\toff\t[NULL]\r\nunix_socket_directories\t\t[NULL]\r\nunix_socket_group\t\t[NULL]\r\nunix_socket_permissions\t777\t[NULL]\r\nupdate_process_title\toff\t[NULL]\r\nvacuum_cleanup_index_scale_factor\t0.1\t[NULL]\r\nvacuum_cost_delay\t0\tms\r\nvacuum_cost_limit\t200\t[NULL]\r\nvacuum_cost_page_dirty\t20\t[NULL]\r\nvacuum_cost_page_hit\t1\t[NULL]\r\nvacuum_cost_page_miss\t10\t[NULL]\r\nvacuum_defer_cleanup_age\t0\t[NULL]\r\nvacuum_freeze_min_age\t50000000\t[NULL]\r\nvacuum_freeze_table_age\t150000000\t[NULL]\r\nvacuum_multixact_freeze_min_age\t5000000\t[NULL]\r\nvacuum_multixact_freeze_table_age\t150000000\t[NULL]\r\nwal_block_size\t8192\t[NULL]\r\nwal_buffers\t512\t8kB\r\nwal_compression\toff\t[NULL]\r\nwal_consistency_checking\t\t[NULL]\r\nwal_init_zero\ton\t[NULL]\r\nwal_keep_size\t0\tMB\r\nwal_level\treplica\t[NULL]\r\nwal_log_hints\toff\t[NULL]\r\nwal_receiver_create_temp_slot\toff\t[NULL]\r\nwal_receiver_status_interval\t10\ts\r\nwal_receiver_timeout\t60000\tms\r\nwal_recycle\ton\t[NULL]\r\nwal_retrieve_retry_interval\t5000\tms\r\nwal_segment_size\t16777216\tB\r\nwal_sender_timeout\t60000\tms\r\nwal_skip_threshold\t2048\tkB\r\nwal_sync_method\topen_datasync\t[NULL]\r\nwal_writer_delay\t200\tms\r\nwal_writer_flush_after\t128\t8kB\r\nwork_mem\t4096\tkB\r\nxmlbinary\tbase64\t[NULL]\r\nxmloption\tcontent\t[NULL]\r\nzero_damaged_pages\toff\t[NULL]\r\n\r\n\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Sun, 29 Aug 2021 16:03:02 +0000", "msg_from": "\"[email protected]\" <[email protected]>", 
"msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Is it possible that the client I am using or the way I am creating the test database might affect this scenario? I use DBeaver and use the default settings to create the database:\n> - default encoding: UTF8\n> - collate: English_United States.1252\n> - ctype: English_United States.1252\n\nYeah, I was thinking of quizzing you about that. I wonder whether\nsomething is thinking it needs to transcode to WIN1252 encoding and then\nback to UTF8, based on the .1252 property of the LC_XXX settings. That\nshouldn't account for any 500X factor either, but we're kind of grasping\nat straws here.\n\nDoes Windows have any locale choices that imply UTF8 encoding exactly,\nand if so, do your results change when using that? Alternatively,\ntry creating a database with WIN1252 encoding and those locale settings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Aug 2021 12:19:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 29 de ago. de 2021 às 13:03, [email protected] <\[email protected]> escreveu:\n\n> >Sure, there's no question that message translation will have *some* cost.\n> >But on my machine it is an incremental tens-of-percent kind of cost,\n> >and that is the result you're getting as well. So it's not very clear\n> >where these factor-of-several-hundred differences are coming from.\n> >A hypothesis that has not yet come up, may be some defect in the code\n> generation,\n> >by the previous msvc compiler used, because in all my tests I always use\n> the latest version,\n> >which has several corrections in the code generation part.\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------\n>\n> Hello all,\n>\n> I don't think this reproduces the issue I experience. I saw a difference\n> of around 500x! What you see is 5x, which according to Tom would be\n> expected for an execution path involving exceptions. And NLS should have an\n> impact as well since more work happens. From the numbers you published, I\n> see 10-15% change which again would be expected?\n>\nYes, It seems to me that is expected for NLS usage.\n\n\n>\n> I cannot think of anything that would be specific to me with regards to\n> this scenario given that I have tried it in quite a few environments from\n> plain stock installs. Until one of you is able to reproduce this, you may\n> be chasing other issues.\n>\nI think I'm unable to reproduce the issue, because I didn't use any plain\nstock installs.\nPostgres env tests here, is a fresh build with the latest msvc.\nI have no intention of repeating the issue, with something exactly the same\nas your environment,\nbut with a very different environment.\n\nCan you show the version of Postgres, at your Windows 10 env, who got this\nresult?\nPlanning Time: 0.171 ms\nExecution Time: 88031.585 ms\n\nregards,\nRanier Vilela\n\nEm dom., 29 de ago. de 2021 às 13:03, [email protected] <[email protected]> escreveu:>Sure, there's no question that message translation will have *some* cost.\n>But on my machine it is an incremental tens-of-percent kind of cost,\n>and that is the result you're getting as well.  
So it's not very clear\n>where these factor-of-several-hundred differences are coming from.\n>A hypothesis that has not yet come up, may be some defect in the code generation, \n>by the previous msvc compiler used, because in all my tests I always use the latest version, \n>which has several corrections in the code generation part.\n\n------------------------------------------------------------------------------------------------------------------------\n\nHello all,\n\nI don't think this reproduces the issue I experience. I saw a difference of around 500x! What you see is 5x, which according to Tom would be expected for an execution path involving exceptions. And NLS should have an impact as well since more work happens. From the numbers you published, I see 10-15% change which again would be expected?Yes, It seems to me that is expected for NLS usage. \n\nI cannot think of anything that would be specific to me with regards to this scenario given that I have tried it in quite a few environments from plain stock installs. Until one of you is able to reproduce this, you may be chasing other issues. I think I'm unable to reproduce the issue, because I didn't use any plain stock installs.Postgres env tests here, is a fresh build with the latest msvc.I have no intention of repeating the issue, with something exactly the same as your environment, but with a very different environment.Can you show the version of Postgres, at your Windows 10 env, who got this result?\nPlanning Time: 0.171 ms\nExecution Time: 88031.585 msregards,Ranier Vilela", "msg_date": "Sun, 29 Aug 2021 15:20:28 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\nFrom: Ranier Vilela <[email protected]> \r\nSent: Sunday, August 29, 2021 14:20\r\nTo: [email protected]\r\nCc: Tom Lane <[email protected]>; Andrew Dunstan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\r\n\r\nEm dom., 29 de ago. de 2021 às 13:03, mailto:[email protected] <mailto:[email protected]> escreveu:\r\n>Sure, there's no question that message translation will have *some* cost.\r\n>But on my machine it is an incremental tens-of-percent kind of cost,\r\n>and that is the result you're getting as well.  So it's not very clear\r\n>where these factor-of-several-hundred differences are coming from.\r\n>A hypothesis that has not yet come up, may be some defect in the code generation, \r\n>by the previous msvc compiler used, because in all my tests I always use the latest version, \r\n>which has several corrections in the code generation part.\r\n\r\n------------------------------------------------------------------------------------------------------------------------\r\n\r\nHello all,\r\n\r\nI don't think this reproduces the issue I experience. I saw a difference of around 500x! What you see is 5x, which according to Tom would be expected for an execution path involving exceptions. And NLS should have an impact as well since more work happens. From the numbers you published, I see 10-15% change which again would be expected?\r\nYes, It seems to me that is expected for NLS usage.\r\n \r\n\r\nI cannot think of anything that would be specific to me with regards to this scenario given that I have tried it in quite a few environments from plain stock installs. 
Until one of you is able to reproduce this, you may be chasing other issues. \r\nI think I'm unable to reproduce the issue, because I didn't use any plain stock installs.\r\nPostgres env tests here, is a fresh build with the latest msvc.\r\nI have no intention of repeating the issue, with something exactly the same as your environment, \r\nbut with a very different environment.\r\n\r\nCan you show the version of Postgres, at your Windows 10 env, who got this result?\r\nPlanning Time: 0.171 ms\r\nExecution Time: 88031.585 ms\r\n\r\nregards,\r\nRanier Vilela\r\n\r\n\r\n\r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nHello Ranier,\r\n\r\nAll my tests were on latest 13.4 install I downloaded from the main site.\r\n\r\nSELECT version();\r\nPostgreSQL 13.4, compiled by Visual C++ build 1914, 64-bit\r\n\r\n\r\nAs per the following:\r\n\r\n> I think I'm unable to reproduce the issue, because I didn't use any plain stock installs.\r\n> Postgres env tests here, is a fresh build with the latest msvc.\r\n> I have no intention of repeating the issue, with something exactly the same as your environment, \r\n> but with a very different environment.\r\n\r\nI am not sure I understand. Are you saying the standard installs may be faulty? A stock install from the stock installer on a windows machine should take 10mn top. If it doesn't reproduce the issue out of the box, then at least I have a confirmation that there may be something weird that I am somehow repeating across all the installs I have performed???\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Mon, 30 Aug 2021 00:29:26 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Tom Lane <[email protected]>\r\n > Sent: Sunday, August 29, 2021 12:19\r\n > To: [email protected]\r\n > Cc: Ranier Vilela <[email protected]>; Andrew Dunstan\r\n > <[email protected]>; Justin Pryzby <[email protected]>; pgsql-\r\n > [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \"[email protected]\" <[email protected]> writes:\r\n > > Is it possible that the client I am using or the way I am creating the test\r\n > database might affect this scenario? I use DBeaver and use the default\r\n > settings to create the database:\r\n > > - default encoding: UTF8\r\n > > - collate: English_United States.1252\r\n > > - ctype: English_United States.1252\r\n > \r\n > Yeah, I was thinking of quizzing you about that. I wonder whether\r\n > something is thinking it needs to transcode to WIN1252 encoding and\r\n > then back to UTF8, based on the .1252 property of the LC_XXX settings.\r\n > That shouldn't account for any 500X factor either, but we're kind of\r\n > grasping at straws here.\r\n > \r\n > Does Windows have any locale choices that imply UTF8 encoding\r\n > exactly, and if so, do your results change when using that? Alternatively,\r\n > try creating a database with WIN1252 encoding and those locale\r\n > settings.\r\n > \r\n > \t\t\tregards, tom lane\r\n\r\nYeah, grasping at straws... 
and no material changes 😊 This is mystifying.\r\n\r\nshow lc_messages;\r\n-- English_United States.1252\r\n\r\ncreate table sampletest (a varchar, b varchar);\r\ninsert into sampletest (a, b)\r\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\r\n from generate_series(1,100000);\r\n\r\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\r\nRETURNS real AS $$\r\nBEGIN\r\n RETURN case when str is null then val else str::real end;\r\nEXCEPTION WHEN OTHERS THEN\r\n RETURN val;\r\nEND;\r\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(a, null)) as \"a\" from sampletest\r\n--Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual time=89527.032..89527.033 rows=1 loops=1)\r\n-- Buffers: shared hit=647\r\n-- -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.024..37.811 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning:\r\n-- Buffers: shared hit=24\r\n--Planning Time: 0.347 ms\r\n--Execution Time: 89527.501 ms\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(b, null)) as \"b\" from sampletest\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=186.605..186.606 rows=1 loops=1)\r\n-- Buffers: shared hit=637\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.008..9.679 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning:\r\n-- Buffers: shared hit=4\r\n--Planning Time: 0.339 ms\r\n--Execution Time: 186.641 ms\r\n\r\n\r\nAt this point, I am not sure how to proceed except to rethink that toFloat() function and many other places where we use exceptions. We get such dirty data that I need a \"safe\" way to convert a string to float without throwing an exception. BTW, I tried other combinations in case there may have been some weird interactions with the ::REAL conversion operator, but nothing made any change. Could you recommend another approach off the top of your head? I could use regexes for testing etc... Or maybe there is another option like a no-throw conversion that's built in or in some extension that you may know of? Like the \"SAFE.\" Prefix in BigQuery.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n\r\n", "msg_date": "Mon, 30 Aug 2021 00:44:22 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em dom., 29 de ago. de 2021 às 21:29, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> From: Ranier Vilela <[email protected]>\n> Sent: Sunday, August 29, 2021 14:20\n> To: [email protected]\n> Cc: Tom Lane <[email protected]>; Andrew Dunstan <[email protected]>;\n> Justin Pryzby <[email protected]>; [email protected]\n> Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and\n> 13.4\n>\n> Em dom., 29 de ago. de 2021 às 13:03, mailto:[email protected]\n> <mailto:[email protected]> escreveu:\n> >Sure, there's no question that message translation will have *some* cost.\n> >But on my machine it is an incremental tens-of-percent kind of cost,\n> >and that is the result you're getting as well. 
So it's not very clear\n> >where these factor-of-several-hundred differences are coming from.\n> >A hypothesis that has not yet come up, may be some defect in the code\n> generation,\n> >by the previous msvc compiler used, because in all my tests I always use\n> the latest version,\n> >which has several corrections in the code generation part.\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------\n>\n> Hello all,\n>\n> I don't think this reproduces the issue I experience. I saw a difference\n> of around 500x! What you see is 5x, which according to Tom would be\n> expected for an execution path involving exceptions. And NLS should have an\n> impact as well since more work happens. From the numbers you published, I\n> see 10-15% change which again would be expected?\n> Yes, It seems to me that is expected for NLS usage.\n>\n>\n> I cannot think of anything that would be specific to me with regards to\n> this scenario given that I have tried it in quite a few environments from\n> plain stock installs. Until one of you is able to reproduce this, you may\n> be chasing other issues.\n> I think I'm unable to reproduce the issue, because I didn't use any plain\n> stock installs.\n> Postgres env tests here, is a fresh build with the latest msvc.\n> I have no intention of repeating the issue, with something exactly the\n> same as your environment,\n> but with a very different environment.\n>\n> Can you show the version of Postgres, at your Windows 10 env, who got this\n> result?\n> Planning Time: 0.171 ms\n> Execution Time: 88031.585 ms\n>\n> regards,\n> Ranier Vilela\n>\n>\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hello Ranier,\n>\n> All my tests were on latest 13.4 install I downloaded from the main site.\n>\n> SELECT version();\n> PostgreSQL 13.4, compiled by Visual C++ build 1914, 64-bit\n>\n>\n> As per the following:\n>\n> > I think I'm unable to reproduce the issue, because I didn't use any\n> plain stock installs.\n> > Postgres env tests here, is a fresh build with the latest msvc.\n> > I have no intention of repeating the issue, with something exactly the\n> same as your environment,\n> > but with a very different environment.\n>\n> I am not sure I understand. Are you saying the standard installs may be\n> faulty?\n\nNot exactly.\n\nA stock install from the stock installer on a windows machine should take\n> 10mn top. 
If it doesn't reproduce the issue out of the box, then at least I\n> have a confirmation that there may be something weird that I am somehow\n> repeating across all the installs I have performed???\n>\nMost likely it's something in your environment, along with your client.\n\nAll I can say is that it is unreproducible with a build/test made with the\nlatest version of msvc.\nWindows 10 64 bits.\nmsvc 2019 64 bits.\n\ngit clone --branch remote/origins/REL_13_4\nhttps://github.com/postgres/postgres/ postgres_13_4\ncd postgres_13_4\ncd src\ncd tools\ncd msvc\nbuild\ninstall c:\\postgres_bench\ncd\\postgres_bench\\bin\ninitdb -D c:\\postgres_bench\\data -E UTF-8 -U postgres -W\npg_ctl -D c:\\postgres_bench\\data -l c:\\postgres_bench\\log\\log1 start\npsql -U postgres\n\npostgres=# select version();\n version\n------------------------------------------------------------\n PostgreSQL 13.4, compiled by Visual C++ build 1929, 64-bit\n(1 row)\n\npostgres=# create table sampletest (a varchar, b varchar);\nCREATE TABLE\npostgres=# insert into sampletest (a, b)\npostgres-# select substr(md5(random()::text), 0, 15),\n(100000000*random())::integer::varchar\npostgres-# from generate_series(1,100000);\nINSERT 0 100000\npostgres=#\npostgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\npostgres-# RETURNS real AS $$\npostgres$# BEGIN\npostgres$# RETURN case when str is null then val else str::real end;\npostgres$# EXCEPTION WHEN OTHERS THEN\npostgres$# RETURN val;\npostgres$# END;\npostgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=830.404..830.404 rows=1 loops=1)\n Buffers: shared hit=646 read=1\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.035..12.222 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=12 read=12\n Planning Time: 0.923 ms\n Execution Time: 830.743 ms\n(8 rows)\n\n\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=123.660..123.660 rows=1 loops=1)\n Buffers: shared hit=637\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32)\n(actual time=0.028..7.762 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning Time: 0.152 ms\n Execution Time: 123.691 ms\n(6 rows)\n\n regards,\nRanier Vilela\n\nEm dom., 29 de ago. de 2021 às 21:29, [email protected] <[email protected]> escreveu:\n\nFrom: Ranier Vilela <[email protected]> \nSent: Sunday, August 29, 2021 14:20\nTo: [email protected]\nCc: Tom Lane <[email protected]>; Andrew Dunstan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\n\nEm dom., 29 de ago. de 2021 às 13:03, mailto:[email protected] <mailto:[email protected]> escreveu:\n>Sure, there's no question that message translation will have *some* cost.\n>But on my machine it is an incremental tens-of-percent kind of cost,\n>and that is the result you're getting as well.  
So it's not very clear\n>where these factor-of-several-hundred differences are coming from.\n>A hypothesis that has not yet come up, may be some defect in the code generation, \n>by the previous msvc compiler used, because in all my tests I always use the latest version, \n>which has several corrections in the code generation part.\n\n------------------------------------------------------------------------------------------------------------------------\n\nHello all,\n\nI don't think this reproduces the issue I experience. I saw a difference of around 500x! What you see is 5x, which according to Tom would be expected for an execution path involving exceptions. And NLS should have an impact as well since more work happens. From the numbers you published, I see 10-15% change which again would be expected?\nYes, It seems to me that is expected for NLS usage.\n \n\nI cannot think of anything that would be specific to me with regards to this scenario given that I have tried it in quite a few environments from plain stock installs. Until one of you is able to reproduce this, you may be chasing other issues. \nI think I'm unable to reproduce the issue, because I didn't use any plain stock installs.\nPostgres env tests here, is a fresh build with the latest msvc.\nI have no intention of repeating the issue, with something exactly the same as your environment, \nbut with a very different environment.\n\nCan you show the version of Postgres, at your Windows 10 env, who got this result?\nPlanning Time: 0.171 ms\nExecution Time: 88031.585 ms\n\nregards,\nRanier Vilela\n\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHello Ranier,\n\nAll my tests were on latest 13.4 install I downloaded from the main site.\n\nSELECT version();\nPostgreSQL 13.4, compiled by Visual C++ build 1914, 64-bit\n\n\nAs per the following:\n\n> I think I'm unable to reproduce the issue, because I didn't use any plain stock installs.\n> Postgres env tests here, is a fresh build with the latest msvc.\n> I have no intention of repeating the issue, with something exactly the same as your environment, \n> but with a very different environment.\n\nI am not sure I understand. Are you saying the standard installs may be faulty?Not exactly. A stock install from the stock installer on a windows machine should take 10mn top. If it doesn't reproduce the issue out of the box, then at least I have a confirmation that there may be something weird that I am somehow repeating across all the installs I have performed???Most likely it's something in your environment, along with your client. 
All I can say is that it is unreproducible with a build/test made with the latest version of msvc.Windows 10 64 bits.msvc 2019 64 bits.git clone --branch remote/origins/REL_13_4 https://github.com/postgres/postgres/ postgres_13_4cd postgres_13_4cd srccd toolscd msvcbuildinstall c:\\postgres_benchcd\\postgres_bench\\bininitdb -D c:\\postgres_bench\\data -E UTF-8 -U postgres -Wpg_ctl -D c:\\postgres_bench\\data -l c:\\postgres_bench\\log\\log1 startpsql -U postgrespostgres=# select version();                          version------------------------------------------------------------ PostgreSQL 13.4, compiled by Visual C++ build 1929, 64-bit(1 row)postgres=# create table sampletest (a varchar, b varchar);CREATE TABLEpostgres=# insert into sampletest (a, b)postgres-# select substr(md5(random()::text), 0, 15), (100000000*random())::integer::varcharpostgres-#   from generate_series(1,100000);INSERT 0 100000postgres=#postgres=# CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)postgres-# RETURNS real AS $$postgres$# BEGINpostgres$#   RETURN case when str is null then val else str::real end;postgres$# EXCEPTION WHEN OTHERS THENpostgres$#   RETURN val;postgres$# END;postgres$# $$ LANGUAGE plpgsql COST 1 IMMUTABLE;CREATE FUNCTIONpostgres=# explain (analyze,buffers,COSTS,TIMING)postgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=1477.84..1477.85 rows=1 width=4) (actual time=830.404..830.404 rows=1 loops=1)   Buffers: shared hit=646 read=1   ->  Seq Scan on sampletest  (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.035..12.222 rows=100000 loops=1)         Buffers: shared hit=637 Planning:   Buffers: shared hit=12 read=12 Planning Time: 0.923 ms Execution Time: 830.743 ms(8 rows)postgres=# explain (analyze,buffers,COSTS,TIMING)postgres-# select MAX(toFloat(b, null)) as \"b\" from sampletest;                                                      QUERY PLAN----------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=1477.84..1477.85 rows=1 width=4) (actual time=123.660..123.660 rows=1 loops=1)   Buffers: shared hit=637   ->  Seq Scan on sampletest  (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.028..7.762 rows=100000 loops=1)         Buffers: shared hit=637 Planning Time: 0.152 ms Execution Time: 123.691 ms(6 rows) regards,Ranier Vilela", "msg_date": "Sun, 29 Aug 2021 22:55:53 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Mon, Aug 30, 2021 at 8:44 AM [email protected]\n<[email protected]> wrote:\n>\n> Yeah, grasping at straws... 
and no material changes 😊 This is mystifying.\n>\n> show lc_messages;\n> -- English_United States.1252\n>\n> create table sampletest (a varchar, b varchar);\n> insert into sampletest (a, b)\n> select substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\n> from generate_series(1,100000);\n>\n> CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\n> RETURNS real AS $$\n> BEGIN\n> RETURN case when str is null then val else str::real end;\n> EXCEPTION WHEN OTHERS THEN\n> RETURN val;\n> END;\n> $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n>\n> explain (analyze,buffers,COSTS,TIMING)\n> select MAX(toFloat(a, null)) as \"a\" from sampletest\n> --Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual time=89527.032..89527.033 rows=1 loops=1)\n> -- Buffers: shared hit=647\n> -- -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.024..37.811 rows=100000 loops=1)\n> -- Buffers: shared hit=637\n> --Planning:\n> -- Buffers: shared hit=24\n> --Planning Time: 0.347 ms\n> --Execution Time: 89527.501 ms\n>\n> explain (analyze,buffers,COSTS,TIMING)\n> select MAX(toFloat(b, null)) as \"b\" from sampletest\n> --Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=186.605..186.606 rows=1 loops=1)\n> -- Buffers: shared hit=637\n> -- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.008..9.679 rows=100000 loops=1)\n> -- Buffers: shared hit=637\n> --Planning:\n> -- Buffers: shared hit=4\n> --Planning Time: 0.339 ms\n> --Execution Time: 186.641 ms\n>\n>\n> At this point, I am not sure how to proceed except to rethink that toFloat() function and many other places where we use exceptions. We get such dirty data that I need a \"safe\" way to convert a string to float without throwing an exception. BTW, I tried other combinations in case there may have been some weird interactions with the ::REAL conversion operator, but nothing made any change. Could you recommend another approach off the top of your head? I could use regexes for testing etc... Or maybe there is another option like a no-throw conversion that's built in or in some extension that you may know of? 
Like the \"SAFE.\" Prefix in BigQuery.\n\nI tried this scenario using edb's 13.3 x64 install:\n\npostgres=# select version();\n version\n------------------------------------------------------------\n PostgreSQL 13.3, compiled by Visual C++ build 1914, 64-bit\n(1 row)\n\n\npostgres=# \\l postgres\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileges\n----------+----------+----------+---------+-------+-------------------\n postgres | postgres | UTF8 | C | C |\n(1 row)\n\npostgres=# explain (analyze,buffers,COSTS,TIMING)\npostgres-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=2137.00..2137.01 rows=1 width=4) (actual\ntime=44962.279..44962.280 rows=1 loops=1)\n Buffers: shared hit=657\n -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000\nwidth=15) (actual time=0.009..8.900 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=78\n Planning Time: 0.531 ms\n Execution Time: 44963.747 ms\n(8 rows)\n\nand with locally compiled REL_13_STABLE's head on the same machine:\n\nrjuju=# select version();\n version\n------------------------------------------------------------\n PostgreSQL 13.4, compiled by Visual C++ build 1929, 64-bit\n(1 row)\n\nrjuju=# \\l rjuju\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileges\n-------+-------+----------+---------+-------+-------------------\n rjuju | rjuju | UTF8 | C | C |\n(1 row)\n\nrjuju-# select MAX(toFloat(a, null)) as \"a\" from sampletest;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\ntime=460.334..460.334 rows=1 loops=1)\n Buffers: shared hit=646 read=1\n -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056\nwidth=32) (actual time=0.010..7.612 rows=100000 loops=1)\n Buffers: shared hit=637\n Planning:\n Buffers: shared hit=20 read=1\n Planning Time: 0.125 ms\n Execution Time: 460.527 ms\n(8 rows)\n\nNote that I followed [1], so I simply used \"build\" and \"install\". I\nhave no idea what is done by default and if NLS is included or not.\n\nSo if default build on windows has NLS included, it probably means\nthat either there's something specific on edb's build (I have no idea\nhow their build is produced) or their version of msvc is responsible\nfor that.\n\n[1]: https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.4.8.10\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:23:08 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Hi\n\npo 30. 8. 2021 v 2:44 odesílatel [email protected] <\[email protected]> napsal:\n\n>\n>\n>\n> At this point, I am not sure how to proceed except to rethink that\n> toFloat() function and many other places where we use exceptions. We get\n> such dirty data that I need a \"safe\" way to convert a string to float\n> without throwing an exception. BTW, I tried other combinations in case\n> there may have been some weird interactions with the ::REAL conversion\n> operator, but nothing made any change. Could you recommend another approach\n> off the top of your head? I could use regexes for testing etc... 
Or maybe\n> there is another option like a no-throw conversion that's built in or in\n> some extension that you may know of? Like the \"SAFE.\" Prefix in BigQuery.\n>\n\nCREATE OR REPLACE FUNCTION safe_to_double_precision(t text)\nRETURNS double precision AS $$\nBEGIN\n IF $1 SIMILAR TO '[+-]?([0-9]*[.])?[0-9]+' THEN\n RETURN $1::double precision;\n ELSE\n RETURN NULL;\n END IF;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE STRICT;\n\nRegards\n\nPavel\n\n\n>\n> Thank you,\n> Laurent.\n>\n>\n>\n>\n\nHipo 30. 8. 2021 v 2:44 odesílatel [email protected] <[email protected]> napsal:\n\nAt this point, I am not sure how to proceed except to rethink that toFloat() function and many other places where we use exceptions. We get such dirty data that I need a \"safe\" way to convert a string to float without throwing an exception. BTW, I tried other combinations in case there may have been some weird interactions with the ::REAL conversion operator, but nothing made any change. Could you recommend another approach off the top of your head? I could use regexes for testing etc... Or maybe there is another option like a no-throw conversion that's built in or in some extension that you may know of? Like the \"SAFE.\" Prefix in BigQuery.CREATE OR REPLACE FUNCTION safe_to_double_precision(t text)RETURNS double precision AS $$BEGIN  IF $1 SIMILAR TO '[+-]?([0-9]*[.])?[0-9]+' THEN    RETURN $1::double precision;  ELSE    RETURN NULL;  END IF;END;$$ LANGUAGE plpgsql IMMUTABLE STRICT;RegardsPavel \n\nThank you,\nLaurent.", "msg_date": "Mon, 30 Aug 2021 04:43:23 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Mon, Aug 30, 2021 at 04:43:23AM +0200, Pavel Stehule wrote:\n> po 30. 8. 2021 v 2:44 odes�latel [email protected] napsal:\n> > At this point, I am not sure how to proceed except to rethink that\n> > toFloat() function and many other places where we use exceptions. We get\n> > such dirty data that I need a \"safe\" way to convert a string to float\n> > without throwing an exception. BTW, I tried other combinations in case\n> > there may have been some weird interactions with the ::REAL conversion\n> > operator, but nothing made any change. Could you recommend another approach\n> > off the top of your head? I could use regexes for testing etc... Or maybe\n> > there is another option like a no-throw conversion that's built in or in\n> > some extension that you may know of? Like the \"SAFE.\" Prefix in BigQuery.\n> \n> CREATE OR REPLACE FUNCTION safe_to_double_precision(t text)\n> RETURNS double precision AS $$\n> BEGIN\n> IF $1 SIMILAR TO '[+-]?([0-9]*[.])?[0-9]+' THEN\n> RETURN $1::double precision;\n> ELSE\n> RETURN NULL;\n> END IF;\n> END;\n> $$ LANGUAGE plpgsql IMMUTABLE STRICT;\n\nThis tries to use a regex to determine if something is a \"Number\" or not.\nWhich has all the issues enumerated in painful detail by long answers on stack\noverflow, and other wiki/blog/forums.\n\nRather than trying to define Numbers using regex, I'd try to avoid only the\nmost frequent exceptions and get 90% of the performance back. 
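\r\n\r\nFor instance, folded back into the toFloat() function from upthread, it could look\r\n
something like this (a rough, untested sketch; which strings get screened out as\r\n
obvious junk is only a guess on my part, more on that below):\r\n\r\n
CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\r\nRETURNS real AS $$\r\nBEGIN\r\n
  -- screen out the obvious junk up front, without ever raising an error\r\n
  IF str IS NULL OR str = '' OR str !~ '[[:digit:]]' THEN\r\n    RETURN val;\r\n  END IF;\r\n
  -- only values containing at least one digit reach the cast; the inner\r\n
  -- block still catches whatever slips through\r\n
  BEGIN\r\n    RETURN str::real;\r\n  EXCEPTION WHEN OTHERS THEN\r\n    RETURN val;\r\n  END;\r\nEND;\r\n
$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n\r\n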
I don't know\nwhat your data looks like, but you might try things like this:\n\nIF $1 IS NULL THEN RETURN $2\nELSE IF $1 ~ '^$' THEN RETURN $2\nELSE IF $1 ~ '[[:alpha:]]{2}' THEN RETURN $2\nELSE IF $1 !~ '[[:digit:]]' THEN RETURN $2\nBEGIN \n RETURN $1::float;\nEXCEPTION WHEN OTHERS THEN \n RETURN $2;\nEND; \n\nYou can check the stackoverflow page for ideas as to what kind of thing to\nreject, but it may depend mostly on your data (what is the most common string?\nThe most common exceptional string?).\n\nI think it's possible that could even be *faster* than the original, since it\navoids the exception block for values which are for sure going to cause an\nexception anyway. It might be that using alternation (|) is faster (if less\nreadable) than using a handful of IF branches.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 29 Aug 2021 22:16:48 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and\n 13.4 (workarounds)" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Justin Pryzby <[email protected]>\r\n > Sent: Sunday, August 29, 2021 23:17\r\n > To: Pavel Stehule <[email protected]>\r\n > Cc: [email protected]; Tom Lane <[email protected]>; Ranier\r\n > Vilela <[email protected]>; Andrew Dunstan\r\n > <[email protected]>; [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4 (workarounds)\r\n > \r\n > On Mon, Aug 30, 2021 at 04:43:23AM +0200, Pavel Stehule wrote:\r\n > > po 30. 8. 2021 v 2:44 odesílatel [email protected] napsal:\r\n > > > At this point, I am not sure how to proceed except to rethink that\r\n > > > toFloat() function and many other places where we use exceptions.\r\n > We\r\n > > > get such dirty data that I need a \"safe\" way to convert a string to\r\n > > > float without throwing an exception. BTW, I tried other\r\n > combinations\r\n > > > in case there may have been some weird interactions with the ::REAL\r\n > > > conversion operator, but nothing made any change. Could you\r\n > > > recommend another approach off the top of your head? I could use\r\n > > > regexes for testing etc... Or maybe there is another option like a\r\n > > > no-throw conversion that's built in or in some extension that you\r\n > may know of? Like the \"SAFE.\" Prefix in BigQuery.\r\n > >\r\n > > CREATE OR REPLACE FUNCTION safe_to_double_precision(t text)\r\n > RETURNS\r\n > > double precision AS $$ BEGIN\r\n > > IF $1 SIMILAR TO '[+-]?([0-9]*[.])?[0-9]+' THEN\r\n > > RETURN $1::double precision;\r\n > > ELSE\r\n > > RETURN NULL;\r\n > > END IF;\r\n > > END;\r\n > > $$ LANGUAGE plpgsql IMMUTABLE STRICT;\r\n > \r\n > This tries to use a regex to determine if something is a \"Number\" or not.\r\n > Which has all the issues enumerated in painful detail by long answers on\r\n > stack overflow, and other wiki/blog/forums.\r\n > \r\n > Rather than trying to define Numbers using regex, I'd try to avoid only\r\n > the most frequent exceptions and get 90% of the performance back. 
I\r\n > don't know what your data looks like, but you might try things like this:\r\n > \r\n > IF $1 IS NULL THEN RETURN $2\r\n > ELSE IF $1 ~ '^$' THEN RETURN $2\r\n > ELSE IF $1 ~ '[[:alpha:]]{2}' THEN RETURN $2 ELSE IF $1 !~ '[[:digit:]]' THEN\r\n > RETURN $2\r\n > BEGIN\r\n > RETURN $1::float;\r\n > EXCEPTION WHEN OTHERS THEN\r\n > RETURN $2;\r\n > END;\r\n > \r\n > You can check the stackoverflow page for ideas as to what kind of thing\r\n > to reject, but it may depend mostly on your data (what is the most\r\n > common string?\r\n > The most common exceptional string?).\r\n > \r\n > I think it's possible that could even be *faster* than the original, since it\r\n > avoids the exception block for values which are for sure going to cause\r\n > an exception anyway. It might be that using alternation (|) is faster (if\r\n > less\r\n > readable) than using a handful of IF branches.\r\n > \r\n > --\r\n > Justin\r\n\r\nThat's exactly where my head was at. I have looked different way to test for a floating point number and recognize the challenge 😊\r\n\r\nThe data is very messy with people entering data by hand. We have seen alpha and punctuation, people copy/pasting from excel so large numbers get the \"e\" notation. It's a total mess. The application that authors that data is a piece of crap and we have no chance to change it unfortunately. Short of rolling out an ETL process, which is painful for the way our data comes in, I need an in-db solution.\r\n\r\nThank you!\r\nLaurent.\r\n", "msg_date": "Mon, 30 Aug 2021 04:20:38 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\n (workarounds)" }, { "msg_contents": "\r\n > I tried this scenario using edb's 13.3 x64 install:\r\n > \r\n > postgres=# select version();\r\n > version\r\n > ------------------------------------------------------------\r\n > PostgreSQL 13.3, compiled by Visual C++ build 1914, 64-bit\r\n > (1 row)\r\n > \r\n > \r\n > postgres=# \\l postgres\r\n > List of databases\r\n > Name | Owner | Encoding | Collate | Ctype | Access privileges\r\n > ----------+----------+----------+---------+-------+-------------------\r\n > postgres | postgres | UTF8 | C | C |\r\n > (1 row)\r\n > \r\n > postgres=# explain (analyze,buffers,COSTS,TIMING) postgres-# select\r\n > MAX(toFloat(a, null)) as \"a\" from sampletest;\r\n > QUERY PLAN\r\n > -----------------------------------------------------------------------------------------------------\r\n > -------------------\r\n > Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual\r\n > time=44962.279..44962.280 rows=1 loops=1)\r\n > Buffers: shared hit=657\r\n > -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000\r\n > width=15) (actual time=0.009..8.900 rows=100000 loops=1)\r\n > Buffers: shared hit=637\r\n > Planning:\r\n > Buffers: shared hit=78\r\n > Planning Time: 0.531 ms\r\n > Execution Time: 44963.747 ms\r\n > (8 rows)\r\n > \r\n > and with locally compiled REL_13_STABLE's head on the same machine:\r\n > \r\n > rjuju=# select version();\r\n > version\r\n > ------------------------------------------------------------\r\n > PostgreSQL 13.4, compiled by Visual C++ build 1929, 64-bit\r\n > (1 row)\r\n > \r\n > rjuju=# \\l rjuju\r\n > List of databases Name | Owner | Encoding | Collate |\r\n > Ctype | Access privileges\r\n > -------+-------+----------+---------+-------+-------------------\r\n > rjuju | rjuju | UTF8 | C | C |\r\n > (1 row)\r\n > \r\n > rjuju-# select 
MAX(toFloat(a, null)) as \"a\" from sampletest;\r\n > QUERY PLAN\r\n > -----------------------------------------------------------------------------------------------------\r\n > ------------------\r\n > Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual\r\n > time=460.334..460.334 rows=1 loops=1)\r\n > Buffers: shared hit=646 read=1\r\n > -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056\r\n > width=32) (actual time=0.010..7.612 rows=100000 loops=1)\r\n > Buffers: shared hit=637\r\n > Planning:\r\n > Buffers: shared hit=20 read=1\r\n > Planning Time: 0.125 ms\r\n > Execution Time: 460.527 ms\r\n > (8 rows)\r\n > \r\n > Note that I followed [1], so I simply used \"build\" and \"install\". I have no\r\n > idea what is done by default and if NLS is included or not.\r\n > \r\n > So if default build on windows has NLS included, it probably means that\r\n > either there's something specific on edb's build (I have no idea how their\r\n > build is produced) or their version of msvc is responsible for that.\r\n > \r\n > [1]: https://www.postgresql.org/docs/current/install-windows-\r\n > full.html#id-1.6.4.8.10\r\n\r\n\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------\r\n\r\nHello,\r\n\r\nSo you are seeing a 100x difference.\r\n\r\n > Execution Time: 44963.747 ms\r\n > Execution Time: 460.527 ms\r\n\r\nI see on https://www.postgresql.org/download/ that there is a different installer from 2ndQuadrant. I am going to try that one and see what I come up with. Are there any other \"standard\" distros of Postgres that I could try out?\r\n\r\nAdditionally, is there a DLL or EXE file that you could make available to me that I could simply patch on my current install and see if it makes any difference? Or a zip of the lib/bin folders? I found out I could download Visual Studio community edition so I am trying this, but may not have the time to get through a build any time soon as per my unfamiliarity with the process. I'll follow Ranier's steps and see if that gets me somewhere.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n", "msg_date": "Mon, 30 Aug 2021 16:04:04 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n > I see on https://www.postgresql.org/download/ that there is a different\r\n > installer from 2ndQuadrant. I am going to try that one and see what I\r\n > come up with. Are there any other \"standard\" distros of Postgres that I\r\n > could try out?\r\n > \r\n > I found out I could download Visual Studio community edition so I am\r\n > trying this, but may not have the time to get through a build any time\r\n > soon as per my unfamiliarity with the process. I'll follow Ranier's steps\r\n > and see if that gets me somewhere.\r\n > \r\n > Thank you,\r\n > Laurent.\r\n\r\n\r\nHello all,\r\n\r\nI think I had a breakthrough. I tried to create a local build and wasn't able to. But I downloaded the 2nd Quadrant installer and the issue disappeared!!! 
I think this is proof that it's not my personal environment, nor something intrinsic in the codebase, but definitely something in the standard EDB installer.\r\n\r\n\r\ncreate table sampletest (a varchar, b varchar);\r\ninsert into sampletest (a, b)\r\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\r\n from generate_series(1,100000);\r\n\r\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\r\nRETURNS real AS $$\r\nBEGIN\r\n RETURN case when str is null then val else str::real end;\r\nEXCEPTION WHEN OTHERS THEN\r\n RETURN val;\r\nEND;\r\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as \"a\" from sampletest;\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=2092.922..2092.923 rows=1 loops=1)\r\n-- Buffers: shared hit=637\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=15) (actual time=0.028..23.925 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning Time: 0.168 ms\r\n--Execution Time: 2092.957 ms\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as \"b\" from sampletest;\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=369.475..369.476 rows=1 loops=1)\r\n-- Buffers: shared hit=637\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.020..18.746 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning Time: 0.129 ms\r\n--Execution Time: 369.507 ms\r\n\r\n\r\nThank you,\r\nLaurent!\r\n\r\n\r\n", "msg_date": "Tue, 31 Aug 2021 02:18:16 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "-----Message d'origine-----\nDe : [email protected] <[email protected]> \nEnvoyé : mardi 31 août 2021 04:18\nÀ : [email protected]; Julien Rouhaud <[email protected]>\nCc : Tom Lane <[email protected]>; Ranier Vilela <[email protected]>; Andrew Dunstan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nObjet : RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\nImportance : Haute\n\n\n > I see on https://www.postgresql.org/download/ that there is a different\n > installer from 2ndQuadrant. I am going to try that one and see what I\n > come up with. Are there any other \"standard\" distros of Postgres that I\n > could try out?\n > \n > I found out I could download Visual Studio community edition so I am\n > trying this, but may not have the time to get through a build any time\n > soon as per my unfamiliarity with the process. I'll follow Ranier's steps\n > and see if that gets me somewhere.\n > \n > Thank you,\n > Laurent.\n\n\nHello all,\n\nI think I had a breakthrough. I tried to create a local build and wasn't able to. But I downloaded the 2nd Quadrant installer and the issue disappeared!!! 
I think this is proof that it's not my personal environment, nor something intrinsic in the codebase, but definitely something in the standard EDB installer.\n\n\ncreate table sampletest (a varchar, b varchar); insert into sampletest (a, b) select substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\n from generate_series(1,100000);\n\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real) RETURNS real AS $$ BEGIN\n RETURN case when str is null then val else str::real end; EXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(a, null)) as \"a\" from sampletest; --Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=2092.922..2092.923 rows=1 loops=1)\n-- Buffers: shared hit=637\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=15) (actual time=0.028..23.925 rows=100000 loops=1)\n-- Buffers: shared hit=637\n--Planning Time: 0.168 ms\n--Execution Time: 2092.957 ms\n\nexplain (analyze,buffers,COSTS,TIMING) select MAX(toFloat(b, null)) as \"b\" from sampletest; --Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=369.475..369.476 rows=1 loops=1)\n-- Buffers: shared hit=637\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.020..18.746 rows=100000 loops=1)\n-- Buffers: shared hit=637\n--Planning Time: 0.129 ms\n--Execution Time: 369.507 ms\n\n\nThank you,\nLaurent!\n\n_________________________________________________________\nHi,\n\nSomething which has nothing with the thread but I think it must be said :-)\nWhy substring(x, 0, ...)?\nmsym=> select substr('abcde', 0, 3), substr('abcde', 1, 3);\n substr | substr\n--------+--------\n ab | abc\n\nMichel SALAIS\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:27:40 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 8/30/21 10:18 PM, [email protected] wrote:\n> > I see on https://www.postgresql.org/download/ that there is a different\n> > installer from 2ndQuadrant. I am going to try that one and see what I\n> > come up with. Are there any other \"standard\" distros of Postgres that I\n> > could try out?\n> > \n> > I found out I could download Visual Studio community edition so I am\n> > trying this, but may not have the time to get through a build any time\n> > soon as per my unfamiliarity with the process. I'll follow Ranier's steps\n> > and see if that gets me somewhere.\n> > \n> > Thank you,\n> > Laurent.\n>\n>\n> Hello all,\n>\n> I think I had a breakthrough. I tried to create a local build and wasn't able to. But I downloaded the 2nd Quadrant installer and the issue disappeared!!! I think this is proof that it's not my personal environment, nor something intrinsic in the codebase, but definitely something in the standard EDB installer.\n>\n>\n\nNo, you're on the wrong track. As I reported earlier, I have reproduced\nthis issue with a vanilla build which has no installer involvement\nwhatsoever.\n\nI'm pretty sure the reason you are not seeing this with the 2ndQuadrant\ninstaller is quite simple: it wasn't build with NLS support.\n\nLet me repeat what I said earlier. 
I will get to the bottom of this.\nPlease be patient and stop running after red herrings.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 09:40:12 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Andrew Dunstan <[email protected]>\r\n > Sent: Tuesday, August 31, 2021 09:40\r\n > To: [email protected]; Julien Rouhaud <[email protected]>\r\n > Cc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>;\r\n > Justin Pryzby <[email protected]>; pgsql-\r\n > [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > On 8/30/21 10:18 PM, [email protected] wrote:\r\n > > > I see on https://www.postgresql.org/download/ that there is a\r\n > different\r\n > > > installer from 2ndQuadrant. I am going to try that one and see\r\n > what I\r\n > > > come up with. Are there any other \"standard\" distros of Postgres\r\n > that I\r\n > > > could try out?\r\n > > >\r\n > > > I found out I could download Visual Studio community edition so I\r\n > am\r\n > > > trying this, but may not have the time to get through a build any\r\n > time\r\n > > > soon as per my unfamiliarity with the process. I'll follow Ranier's\r\n > steps\r\n > > > and see if that gets me somewhere.\r\n > > >\r\n > > > Thank you,\r\n > > > Laurent.\r\n > >\r\n > >\r\n > > Hello all,\r\n > >\r\n > > I think I had a breakthrough. I tried to create a local build and wasn't\r\n > able to. But I downloaded the 2nd Quadrant installer and the issue\r\n > disappeared!!! I think this is proof that it's not my personal\r\n > environment, nor something intrinsic in the codebase, but definitely\r\n > something in the standard EDB installer.\r\n > >\r\n > >\r\n > \r\n > No, you're on the wrong track. As I reported earlier, I have reproduced\r\n > this issue with a vanilla build which has no installer involvement\r\n > whatsoever.\r\n > \r\n > I'm pretty sure the reason you are not seeing this with the 2ndQuadrant\r\n > installer is quite simple: it wasn't build with NLS support.\r\n > \r\n > Let me repeat what I said earlier. I will get to the bottom of this.\r\n > Please be patient and stop running after red herrings.\r\n > \r\n > \r\n > cheers\r\n > \r\n > \r\n > andrew\r\n > \r\n > \r\n > --\r\n > Andrew Dunstan\r\n > EDB: https://www.enterprisedb.com\r\n\r\nOK... I thought that track had been abandoned as per Julien's last message. Anyways, I'll be patient!\r\n\r\nThank you for all the work.\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Tue, 31 Aug 2021 14:51:49 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Tue, Aug 31, 2021 at 10:51 PM [email protected]\n<[email protected]> wrote:\n>\n> OK... I thought that track had been abandoned as per Julien's last message. Anyways, I'll be patient!\n>\n\nI just happened to have both standard installer and locally compiled\nversions available, so I could confirm that I reproduced the problem\nat least with the standard installer. Note that my message also said\n\" if default build on windows has NLS included\". 
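(For the record, the MSVC build scripts only pull NLS into the build when config.pl\n
points the nls entry at a gettext installation, e.g. something like\n
$config->{nls} = 'c:\\path\\to\\gettext'; in place of the default nls => undef from\n
src/tools/msvc/config_default.pl; the path here is just a placeholder.)\n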
After looking a bit\nmore into the Windows build system, I confirm that NLS isn't included\nby default so this is not the problem, as Andrew said.\n\nAfter installing gettext and a few other dependencies, adapting\nconfig.pl I wish I could also confirm being able to reproduce the\nproblem on my build, but apparently I'm missing something as I can't\nget any modification in config.pl have any effect. I'm not gonna\nwaste more time on that since Andrew is already in the middle of the\ninvestigation.\n\n\n", "msg_date": "Tue, 31 Aug 2021 23:37:33 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 8/31/21 11:37 AM, Julien Rouhaud wrote:\n> On Tue, Aug 31, 2021 at 10:51 PM [email protected]\n> <[email protected]> wrote:\n>> OK... I thought that track had been abandoned as per Julien's last message. Anyways, I'll be patient!\n>>\n> I just happened to have both standard installer and locally compiled\n> versions available, so I could confirm that I reproduced the problem\n> at least with the standard installer. Note that my message also said\n> \" if default build on windows has NLS included\". After looking a bit\n> more into the Windows build system, I confirm that NLS isn't included\n> by default so this is not the problem, as Andrew said.\n>\n> After installing gettext and a few other dependencies, adapting\n> config.pl I wish I could also confirm being able to reproduce the\n> problem on my build, but apparently I'm missing something as I can't\n> get any modification in config.pl have any effect. I'm not gonna\n> waste more time on that since Andrew is already in the middle of the\n> investigation.\n\n\n\nThe culprit turns out to be the precise version of libiconv/libintl\nused. There is a slight difference between the versions used in the\n11.13 installer and the 13.4 installer. We need to dig into performance\nmore (e.g. why does the test take much longer on an NLS enabled build\neven when we are using 'initdb --no-locale'?) But I'm pretty confident\nnow that this is the issue. I've started talks with our installer guys\nabout fixing it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 13:55:55 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Wed, Sep 1, 2021 at 1:56 AM Andrew Dunstan <[email protected]> wrote:\n>\n> The culprit turns out to be the precise version of libiconv/libintl\n> used. There is a slight difference between the versions used in the\n> 11.13 installer and the 13.4 installer. We need to dig into performance\n> more (e.g. why does the test take much longer on an NLS enabled build\n> even when we are using 'initdb --no-locale'?) But I'm pretty confident\n> now that this is the issue. I've started talks with our installer guys\n> about fixing it.\n\nFTR it's consistent with my own setup. 
I could finally compile\npostgres with NLS support and libintl 0.18.1 and I only got a limited\noverhead: the runtime increases from ~460ms to ~1.5s (and ~2s with\nlc_messages to something else than C), but that's way better than the\n~44s with the current edb version.\n\n\n", "msg_date": "Wed, 1 Sep 2021 13:14:24 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On 8/31/21 1:55 PM, Andrew Dunstan wrote:\n> On 8/31/21 11:37 AM, Julien Rouhaud wrote:\n>> On Tue, Aug 31, 2021 at 10:51 PM [email protected]\n>> <[email protected]> wrote:\n>>> OK... I thought that track had been abandoned as per Julien's last message. Anyways, I'll be patient!\n>>>\n>> I just happened to have both standard installer and locally compiled\n>> versions available, so I could confirm that I reproduced the problem\n>> at least with the standard installer. Note that my message also said\n>> \" if default build on windows has NLS included\". After looking a bit\n>> more into the Windows build system, I confirm that NLS isn't included\n>> by default so this is not the problem, as Andrew said.\n>>\n>> After installing gettext and a few other dependencies, adapting\n>> config.pl I wish I could also confirm being able to reproduce the\n>> problem on my build, but apparently I'm missing something as I can't\n>> get any modification in config.pl have any effect. I'm not gonna\n>> waste more time on that since Andrew is already in the middle of the\n>> investigation.\n>\n>\n> The culprit turns out to be the precise version of libiconv/libintl\n> used. There is a slight difference between the versions used in the\n> 11.13 installer and the 13.4 installer. We need to dig into performance\n> more (e.g. why does the test take much longer on an NLS enabled build\n> even when we are using 'initdb --no-locale'?) But I'm pretty confident\n> now that this is the issue. I've started talks with our installer guys\n> about fixing it.\n>\n>\n\n\nHere are a couple of pictures of profiles made with a tool called\nsleepy. The bad profile is from release 13.4 built with the latest\ngettext, built with vcpkg. The good profile is the same build but using\nthe intl-8.dll copied from the release 11.13 installer. The good run\ntakes about a minute. The bad run takes about 30 minutes.\n\n\nI'm not exactly sure what the profiles tell us.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 2 Sep 2021 11:22:54 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On Thu, Sep 2, 2021 at 11:22 PM Andrew Dunstan <[email protected]> wrote:\n>\n> Here are a couple of pictures of profiles made with a tool called\n> sleepy. The bad profile is from release 13.4 built with the latest\n> gettext, built with vcpkg. The good profile is the same build but using\n> the intl-8.dll copied from the release 11.13 installer. The good run\n> takes about a minute. The bad run takes about 30 minutes.\n>\n>\n> I'm not exactly sure what the profiles tell us.\n\nIsn't GetLocaleInfoA suspicious? 
Especially since the doc [1] says\nthat it shouldn't be called anymore unless you want to have\ncompatibility with OS from more than a decade ago?\n\n[1] https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoa\n\n\n", "msg_date": "Thu, 2 Sep 2021 23:34:23 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "Em qui., 2 de set. de 2021 às 12:22, Andrew Dunstan <[email protected]>\nescreveu:\n\n>\n> On 8/31/21 1:55 PM, Andrew Dunstan wrote:\n> > On 8/31/21 11:37 AM, Julien Rouhaud wrote:\n> >> On Tue, Aug 31, 2021 at 10:51 PM [email protected]\n> >> <[email protected]> wrote:\n> >>> OK... I thought that track had been abandoned as per Julien's last\n> message. Anyways, I'll be patient!\n> >>>\n> >> I just happened to have both standard installer and locally compiled\n> >> versions available, so I could confirm that I reproduced the problem\n> >> at least with the standard installer. Note that my message also said\n> >> \" if default build on windows has NLS included\". After looking a bit\n> >> more into the Windows build system, I confirm that NLS isn't included\n> >> by default so this is not the problem, as Andrew said.\n> >>\n> >> After installing gettext and a few other dependencies, adapting\n> >> config.pl I wish I could also confirm being able to reproduce the\n> >> problem on my build, but apparently I'm missing something as I can't\n> >> get any modification in config.pl have any effect. I'm not gonna\n> >> waste more time on that since Andrew is already in the middle of the\n> >> investigation.\n> >\n> >\n> > The culprit turns out to be the precise version of libiconv/libintl\n> > used. There is a slight difference between the versions used in the\n> > 11.13 installer and the 13.4 installer. We need to dig into performance\n> > more (e.g. why does the test take much longer on an NLS enabled build\n> > even when we are using 'initdb --no-locale'?) But I'm pretty confident\n> > now that this is the issue. I've started talks with our installer guys\n> > about fixing it.\n> >\n> >\n>\n>\n> Here are a couple of pictures of profiles made with a tool called\n> sleepy. The bad profile is from release 13.4 built with the latest\n> gettext, built with vcpkg. The good profile is the same build but using\n> the intl-8.dll copied from the release 11.13 installer. The good run\n> takes about a minute. The bad run takes about 30 minutes.\n>\n>\n> I'm not exactly sure what the profiles tell us.\n>\nBug in the libintl?\nlibintl doesn't cache untranslated strings\nhttps://savannah.gnu.org/bugs/?58006\n\nregards,\nRanier Vilela\n\nEm qui., 2 de set. de 2021 às 12:22, Andrew Dunstan <[email protected]> escreveu:\nOn 8/31/21 1:55 PM, Andrew Dunstan wrote:\n> On 8/31/21 11:37 AM, Julien Rouhaud wrote:\n>> On Tue, Aug 31, 2021 at 10:51 PM [email protected]\n>> <[email protected]> wrote:\n>>> OK... I thought that track had been abandoned as per Julien's last message. Anyways, I'll be patient!\n>>>\n>> I just happened to have both standard installer and locally compiled\n>> versions available, so I could confirm that I reproduced the problem\n>> at least with the standard installer.  Note that my message also said\n>> \" if default build on windows has NLS included\".  
After looking a bit\n>> more into the Windows build system, I confirm that NLS isn't included\n>> by default so this is not the problem, as Andrew said.\n>>\n>> After installing gettext and a few other dependencies, adapting\n>> config.pl I wish I could also confirm being able to reproduce the\n>> problem on my build, but apparently I'm missing something as I can't\n>> get any modification in config.pl have any effect.  I'm not gonna\n>> waste more time on that since Andrew is already in the middle of the\n>> investigation.\n>\n>\n> The culprit turns out to be the precise version of libiconv/libintl\n> used. There is a slight difference between the versions used in the\n> 11.13 installer and the 13.4 installer. We need to dig into performance\n> more (e.g. why does the test take much longer on an NLS enabled build\n> even when we are using 'initdb --no-locale'?) But I'm pretty confident\n> now that this is the issue. I've started talks with our installer guys\n> about fixing it.\n>\n>\n\n\nHere are a couple of pictures of profiles made with a tool called\nsleepy. The bad profile is from release 13.4 built with the latest\ngettext, built with vcpkg. The good profile is the same build but using\nthe intl-8.dll copied from the release 11.13 installer. The good run\ntakes about a minute. The bad run takes about 30 minutes.\n\n\nI'm not exactly sure what the profiles tell us.Bug in the libintl?\nlibintl doesn't cache untranslated strings\nhttps://savannah.gnu.org/bugs/?58006 regards,Ranier Vilela", "msg_date": "Thu, 2 Sep 2021 13:39:38 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 9/2/21 11:34 AM, Julien Rouhaud wrote:\n> On Thu, Sep 2, 2021 at 11:22 PM Andrew Dunstan <[email protected]> wrote:\n>> Here are a couple of pictures of profiles made with a tool called\n>> sleepy. The bad profile is from release 13.4 built with the latest\n>> gettext, built with vcpkg. The good profile is the same build but using\n>> the intl-8.dll copied from the release 11.13 installer. The good run\n>> takes about a minute. The bad run takes about 30 minutes.\n>>\n>>\n>> I'm not exactly sure what the profiles tell us.\n> Isn't GetLocaleInfoA suspicious? 
Especially since the doc [1] says\n> that it shouldn't be called anymore unless you want to have\n> compatibility with OS from more than a decade ago?\n>\n> [1] https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoa\n\nPossibly, but the profile doesn't show it as having a great impact.\n\nMaybe surrounding code is affected.\n\ncheers\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 2 Sep 2021 12:59:51 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Andrew Dunstan <[email protected]>\r\n > Sent: Thursday, September 2, 2021 13:00\r\n > To: Julien Rouhaud <[email protected]>\r\n > Cc: [email protected]; Tom Lane <[email protected]>; Ranier\r\n > Vilela <[email protected]>; Justin Pryzby <[email protected]>;\r\n > [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > On 9/2/21 11:34 AM, Julien Rouhaud wrote:\r\n > > On Thu, Sep 2, 2021 at 11:22 PM Andrew Dunstan\r\n > <[email protected]> wrote:\r\n > >> Here are a couple of pictures of profiles made with a tool called\r\n > >> sleepy. The bad profile is from release 13.4 built with the latest\r\n > >> gettext, built with vcpkg. The good profile is the same build but\r\n > >> using the intl-8.dll copied from the release 11.13 installer. The\r\n > >> good run takes about a minute. The bad run takes about 30 minutes.\r\n > >>\r\n > >>\r\n > >> I'm not exactly sure what the profiles tell us.\r\n > > Isn't GetLocaleInfoA suspicious? Especially since the doc [1] says\r\n > > that it shouldn't be called anymore unless you want to have\r\n > > compatibility with OS from more than a decade ago?\r\n > >\r\n > > [1]\r\n > > https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-\r\n > winnls-ge\r\n > > tlocaleinfoa\r\n > \r\n > Possibly, but the profile doesn't show it as having a great impact.\r\n > \r\n > Maybe surrounding code is affected.\r\n > \r\n > cheers\r\n > \r\n > andrew\r\n > \r\n > \r\n > --\r\n > Andrew Dunstan\r\n > EDB: https://www.enterprisedb.com\r\n\r\n\r\nHello all,\r\n\r\nAny further update or guidance on this issue at this time?\r\n\r\nThank you,\r\nLaurent.\r\n", "msg_date": "Mon, 13 Sep 2021 14:32:30 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 9/13/21 10:32 AM, [email protected] wrote:\n>\n> Hello all,\n>\n> Any further update or guidance on this issue at this time?\n>\n\nWait for a new installer. Our team is working on it. 
As I have\npreviously advised you, please be patient.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 13 Sep 2021 11:35:48 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Andrew Dunstan <[email protected]>\r\n > Sent: Monday, September 13, 2021 11:36\r\n > To: [email protected]; Julien Rouhaud <[email protected]>\r\n > Cc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>;\r\n > Justin Pryzby <[email protected]>; pgsql-\r\n > [email protected]\r\n > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\r\n > and 13.4\r\n > \r\n > \r\n > On 9/13/21 10:32 AM, [email protected] wrote:\r\n > >\r\n > > Hello all,\r\n > >\r\n > > Any further update or guidance on this issue at this time?\r\n > >\r\n > \r\n > Wait for a new installer. Our team is working on it. As I have previously\r\n > advised you, please be patient.\r\n > \r\n > \r\n > cheers\r\n > \r\n > \r\n > andrew\r\n > \r\n > --\r\n > Andrew Dunstan\r\n > EDB: https://www.enterprisedb.com\r\n\r\n\r\nHello Andrew,\r\n\r\nI'll be as patient as is needed and appreciate absolutely all the work you are all doing. I also know V14 is just around the corner too so the team is super busy 😊\r\n\r\nJust looking for some super-rough ETA for some rough planning on our end. Is this something potentially for 13.5 later this year? Or something that may happen before the end of Sept? Or still unknown? And I understand all is always tentative.\r\n\r\nThank you!\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Mon, 13 Sep 2021 15:53:33 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 9/13/21 11:53 AM, [email protected] wrote:\n>\n> > -----Original Message-----\n> > From: Andrew Dunstan <[email protected]>\n> > Sent: Monday, September 13, 2021 11:36\n> > To: [email protected]; Julien Rouhaud <[email protected]>\n> > Cc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>;\n> > Justin Pryzby <[email protected]>; pgsql-\n> > [email protected]\n> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n> > and 13.4\n> > \n> > \n> > On 9/13/21 10:32 AM, [email protected] wrote:\n> > >\n> > > Hello all,\n> > >\n> > > Any further update or guidance on this issue at this time?\n> > >\n> > \n> > Wait for a new installer. Our team is working on it. As I have previously\n> > advised you, please be patient.\n> > \n> > \n> > cheers\n> > \n> > \n> > andrew\n> > \n> > --\n> > Andrew Dunstan\n> > EDB: https://www.enterprisedb.com\n>\n>\n> Hello Andrew,\n>\n> I'll be as patient as is needed and appreciate absolutely all the work you are all doing. I also know V14 is just around the corner too so the team is super busy 😊\n>\n> Just looking for some super-rough ETA for some rough planning on our end. Is this something potentially for 13.5 later this year? Or something that may happen before the end of Sept? Or still unknown? And I understand all is always tentative.\n>\n\nThis is not governed at all by the Postgres release cycle. The issue is\nnot with Postgres but with the version of libintl used in the build. I\ncan't speak for the team, they will publish an updated installer when\nthey get it done. 
But rest assured it's being worked on. I got email\nabout it just this morning.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:36:34 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 9/13/21 4:36 PM, Andrew Dunstan wrote:\n> On 9/13/21 11:53 AM, [email protected] wrote:\n>> > -----Original Message-----\n>> > From: Andrew Dunstan <[email protected]>\n>> > Sent: Monday, September 13, 2021 11:36\n>> > To: [email protected]; Julien Rouhaud <[email protected]>\n>> > Cc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>;\n>> > Justin Pryzby <[email protected]>; pgsql-\n>> > [email protected]\n>> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n>> > and 13.4\n>> > \n>> > \n>> > On 9/13/21 10:32 AM, [email protected] wrote:\n>> > >\n>> > > Hello all,\n>> > >\n>> > > Any further update or guidance on this issue at this time?\n>> > >\n>> > \n>> > Wait for a new installer. Our team is working on it. As I have previously\n>> > advised you, please be patient.\n>> > \n>> > \n>> > cheers\n>> > \n>> > \n>> > andrew\n>> > \n>> > --\n>> > Andrew Dunstan\n>> > EDB: https://www.enterprisedb.com\n>>\n>>\n>> Hello Andrew,\n>>\n>> I'll be as patient as is needed and appreciate absolutely all the work you are all doing. I also know V14 is just around the corner too so the team is super busy 😊\n>>\n>> Just looking for some super-rough ETA for some rough planning on our end. Is this something potentially for 13.5 later this year? Or something that may happen before the end of Sept? Or still unknown? And I understand all is always tentative.\n>>\n> This is not governed at all by the Postgres release cycle. The issue is\n> not with Postgres but with the version of libintl used in the build. I\n> can't speak for the team, they will publish an updated installer when\n> they get it done. But rest assured it's being worked on. I got email\n> about it just this morning.\n>\n>\n\nEDB has now published new installers for versions later than release 11,\ncontaining Postgres built with an earlier version of gettext that does\nnot exhibit the problem. Please verify that these fix the issue. If you\nalready have Postgres installed from our installer you should be able to\nupgrade using Stackbuilder. 
Otherwise, you can download from our usual\ndownload sites.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Sep 2021 16:56:30 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Andrew Dunstan <[email protected]>\r\n> Sent: Friday, September 24, 2021 16:57\r\n> To: [email protected]; Julien Rouhaud <[email protected]>\r\n> Cc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>;\r\n> Justin Pryzby <[email protected]>; [email protected]\r\n> Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and\r\n> 13.4\r\n> \r\n> \r\n> On 9/13/21 4:36 PM, Andrew Dunstan wrote:\r\n> > On 9/13/21 11:53 AM, [email protected] wrote:\r\n> >> > -----Original Message-----\r\n> >> > From: Andrew Dunstan <[email protected]>\r\n> >> > Sent: Monday, September 13, 2021 11:36\r\n> >> > To: [email protected]; Julien Rouhaud <[email protected]>\r\n> >> > Cc: Tom Lane <[email protected]>; Ranier Vilela\r\n> <[email protected]>;\r\n> >> > Justin Pryzby <[email protected]>; pgsql-\r\n> >> > [email protected]\r\n> >> > Subject: Re: Big Performance drop of Exceptions in UDFs between\r\n> V11.2\r\n> >> > and 13.4\r\n> >> >\r\n> >> >\r\n> >> > On 9/13/21 10:32 AM, [email protected] wrote:\r\n> >> > >\r\n> >> > > Hello all,\r\n> >> > >\r\n> >> > > Any further update or guidance on this issue at this time?\r\n> >> > >\r\n> >> >\r\n> >> > Wait for a new installer. Our team is working on it. As I have previously\r\n> >> > advised you, please be patient.\r\n> >> >\r\n> >> >\r\n> >> > cheers\r\n> >> >\r\n> >> >\r\n> >> > andrew\r\n> >> >\r\n> >> > --\r\n> >> > Andrew Dunstan\r\n> >> > EDB: https://www.enterprisedb.com\r\n> >>\r\n> >>\r\n> >> Hello Andrew,\r\n> >>\r\n> >> I'll be as patient as is needed and appreciate absolutely all the\r\n> >> work you are all doing. I also know V14 is just around the corner too\r\n> >> so the team is super busy 😊\r\n> >>\r\n> >> Just looking for some super-rough ETA for some rough planning on our\r\n> end. Is this something potentially for 13.5 later this year? Or something that\r\n> may happen before the end of Sept? Or still unknown? And I understand all\r\n> is always tentative.\r\n> >>\r\n> > This is not governed at all by the Postgres release cycle. The issue\r\n> > is not with Postgres but with the version of libintl used in the\r\n> > build. I can't speak for the team, they will publish an updated\r\n> > installer when they get it done. But rest assured it's being worked\r\n> > on. I got email about it just this morning.\r\n> >\r\n> >\r\n> \r\n> EDB has now published new installers for versions later than release 11,\r\n> containing Postgres built with an earlier version of gettext that does not\r\n> exhibit the problem. Please verify that these fix the issue. If you already\r\n> have Postgres installed from our installer you should be able to upgrade\r\n> using Stackbuilder. Otherwise, you can download from our usual download\r\n> sites.\r\n> \r\n> \r\n> cheers\r\n> \r\n> \r\n> andrew\r\n> \r\n> \r\n> --\r\n> Andrew Dunstan\r\n> EDB: https://www.enterprisedb.com\r\n\r\n[Laurent Hasson] \r\n\r\nThank you Andrew!!! 
I may be able to check this over the weekend.\r\n\r\nThank you,\r\nLaurent.\r\n", "msg_date": "Sat, 25 Sep 2021 00:17:07 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\n > > EDB has now published new installers for versions later than release\r\n > > 11, containing Postgres built with an earlier version of gettext that\r\n > > does not exhibit the problem. Please verify that these fix the issue.\r\n > > If you already have Postgres installed from our installer you should\r\n > > be able to upgrade using Stackbuilder. Otherwise, you can download\r\n > > from our usual download sites.\r\n > >\r\n > > cheers\r\n > >\r\n > > andrew\r\n > >\r\n > > --\r\n > > Andrew Dunstan\r\n > > EDB: https://www.enterprisedb.com\r\n \r\n\r\nHello Andrew,\r\n\r\nI just download the 13.4 Windows x86-64 installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. Am I looking at the wrong place?\r\n\r\nThank you\r\nLaurent.\r\n\r\n", "msg_date": "Sun, 26 Sep 2021 01:33:30 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\nOn 9/25/21 9:33 PM, [email protected] wrote:\n> > > EDB has now published new installers for versions later than release\n> > > 11, containing Postgres built with an earlier version of gettext that\n> > > does not exhibit the problem. Please verify that these fix the issue.\n> > > If you already have Postgres installed from our installer you should\n> > > be able to upgrade using Stackbuilder. Otherwise, you can download\n> > > from our usual download sites.\n> > >\n> > > cheers\n> > >\n> > > andrew\n> > >\n> > > --\n> > > Andrew Dunstan\n> > > EDB: https://www.enterprisedb.com\n> \n>\n> Hello Andrew,\n>\n> I just download the 13.4 Windows x86-64 installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. Am I looking at the wrong place?\n>\n\nThanks. We're dealing with that. However, you can update that version\nvia stackbuilder. It will show you that 13.4.2 is available. This has\nthe correct libintl DLL. I just did this to verify it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 27 Sep 2021 09:25:03 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "On 9/27/21 6:55 PM, Andrew Dunstan wrote:\n>> Hello Andrew,\n>>\n>> I just download the 13.4 Windows x86-64 installer fromhttps://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. Am I looking at the wrong place?\n>>\n> Thanks. We're dealing with that. However, you can update that version\n> via stackbuilder. It will show you that 13.4.2 is available. This has\n> the correct libintl DLL. 
I just did this to verify it.\n\nThanks, look like the issue is fixed now, you can try to download the \n'postgresql-13.4-2-windows-x64.exe' installer from the above mentioned link.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\nOn 9/27/21 6:55 PM, Andrew Dunstan\n wrote:\n\n\n\nHello Andrew,\n\nI just download the 13.4 Windows x86-64 installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. Am I looking at the wrong place?\n\n\n\nThanks. We're dealing with that. However, you can update that version\nvia stackbuilder. It will show you that 13.4.2 is available. This has\nthe correct libintl DLL. I just did this to verify it.\n\nThanks, look like the issue is fixed now, you can try to download\n the 'postgresql-13.4-2-windows-x64.exe' installer from the above\n mentioned link.\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 27 Sep 2021 21:19:31 +0530", "msg_from": "tushar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\nFrom: tushar <[email protected]> \r\nSent: Monday, September 27, 2021 11:50\r\nTo: Andrew Dunstan <[email protected]>; [email protected]; Julien Rouhaud <[email protected]>\r\nCc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\r\n\r\nOn 9/27/21 6:55 PM, Andrew Dunstan wrote:\r\nHello Andrew,\r\n\r\nI just download the 13.4 Windows x86-64 installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. Am I looking at the wrong place?\r\n\r\nThanks. We're dealing with that. However, you can update that version\r\nvia stackbuilder. It will show you that 13.4.2 is available. This has\r\nthe correct libintl DLL. I just did this to verify it.\r\nThanks, look like the issue is fixed now, you can try to download the 'postgresql-13.4-2-windows-x64.exe' installer from the above mentioned link.\r\n-- \r\nregards,tushar\r\nEnterpriseDB https://www.enterprisedb.com/\r\nThe Enterprise PostgreSQL Company\r\n\r\n\r\nFantastic, I may be able to try again tonight and will report back. The environment I work in is isolated from the internet, so I can't use StackBuilder.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n", "msg_date": "Mon, 27 Sep 2021 16:05:26 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" }, { "msg_contents": "\r\nFrom: tushar <[email protected]> \r\nSent: Monday, September 27, 2021 11:50\r\nTo: Andrew Dunstan <[email protected]>; [email protected]; Julien Rouhaud <[email protected]>\r\nCc: Tom Lane <[email protected]>; Ranier Vilela <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4\r\n\r\nOn 9/27/21 6:55 PM, Andrew Dunstan wrote:\r\nHello Andrew,\r\n\r\nI just download the 13.4 Windows x86-64 installer from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's the exact same file bit for bit from the previous version I had. 
Am I looking at the wrong place?\r\n\r\nThanks. We're dealing with that. However, you can update that version\r\nvia stackbuilder. It will show you that 13.4.2 is available. This has\r\nthe correct libintl DLL. I just did this to verify it.\r\n\r\nThanks, look like the issue is fixed now, you can try to download the 'postgresql-13.4-2-windows-x64.exe' installer from the above mentioned link.\r\n-- \r\nregards,tushar\r\nEnterpriseDB https://www.enterprisedb.com/\r\nThe Enterprise PostgreSQL Company\r\n\r\n\r\n-------------------------------------------------------------------------------------------------------------------\r\n\r\nHello all!\r\n\r\nWOW!!!! Time for a cigar as there is double good news 😊\r\n- The scenario no longer exacerbates the system and performance went from around 90s to around 2.7 seconds! That's in line with older 11.2 builds I was measuring against.\r\n- The simpler scenario (no throw) looks like it improved by roughly 20%, from 186ms to 146ms\r\n\r\nI had run the scenarios multiple times before and the times were on the average, so I think those gains are real. Thank you for all your efforts. The Postgres community is amazing!\r\n\r\n\r\nHere is the scenario again:\r\n\r\ndrop table sampletest;\r\ncreate table sampletest (a varchar, b varchar);\r\ninsert into sampletest (a, b)\r\nselect substr(md5(random()::text), 0, 15), (100000000*random())::integer::varchar\r\n from generate_series(1,100000);\r\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\r\nRETURNS real AS $$\r\nBEGIN\r\n RETURN case when str is null then val else str::real end;\r\nEXCEPTION WHEN OTHERS THEN\r\n RETURN val;\r\nEND;\r\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\r\n\r\nThis is what I had on the original 13.4 Windows x64 eDB build:\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(a, null)) as \"a\" from sampletest\r\n--Aggregate (cost=1477.84..1477.85 rows=1 width=4) (actual time=89527.032..89527.033 rows=1 loops=1)\r\n-- Buffers: shared hit=647\r\n-- -> Seq Scan on sampletest (cost=0.00..1197.56 rows=56056 width=32) (actual time=0.024..37.811 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning:\r\n-- Buffers: shared hit=24\r\n--Planning Time: 0.347 ms\r\n--Execution Time: 89527.501 ms\r\n\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(b, null)) as \"b\" from sampletest\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=186.605..186.606 rows=1 loops=1)\r\n-- Buffers: shared hit=637\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.008..9.679 rows=100000 loops=1)\r\n-- Buffers: shared hit=637\r\n--Planning:\r\n-- Buffers: shared hit=4\r\n--Planning Time: 0.339 ms\r\n--Execution Time: 186.641 ms\r\n\r\n\r\nThis is what I get on the new build\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(a, null)) as \"a\" from sampletest\r\n--QUERY PLAN |\r\n-------------------------------------------------------------------------------------------------------------------------|\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=2711.314..2711.315 rows=1 loops=1) |\r\n-- Buffers: shared hit=637 |\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=15) (actual time=0.009..12.557 rows=100000 loops=1)|\r\n-- Buffers: shared hit=637 |\r\n--Planning Time: 0.062 ms |\r\n--Execution Time: 2711.336 ms |\r\n\r\nexplain (analyze,buffers,COSTS,TIMING) \r\nselect MAX(toFloat(b, null)) as \"b\" from sampletest\r\n--QUERY PLAN 
|\r\n-----------------------------------------------------------------------------------------------------------------------|\r\n--Aggregate (cost=2137.00..2137.01 rows=1 width=4) (actual time=146.689..146.689 rows=1 loops=1) |\r\n-- Buffers: shared hit=637 |\r\n-- -> Seq Scan on sampletest (cost=0.00..1637.00 rows=100000 width=8) (actual time=0.009..8.060 rows=100000 loops=1)|\r\n-- Buffers: shared hit=637 |\r\n--Planning Time: 0.060 ms |\r\n--Execution Time: 146.709 ms |\r\n\r\n\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Tue, 28 Sep 2021 04:23:05 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4" } ]
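Independent of the installer fix discussed above, the EXCEPTION block in toFloat() opens a subtransaction on every call, so rows whose text fails the cast stay comparatively expensive even on a good build. A minimal sketch of an alternative, assuming the input only ever needs to accept plain or exponent-form numbers (it does not recognize special literals such as 'NaN' or 'Infinity'), is to pre-validate with a regular expression instead of trapping the error:

-- Hypothetical variant of toFloat(); the pattern below is an assumption,
-- covering an optional sign, digits with an optional fraction, and an optional exponent.
CREATE OR REPLACE FUNCTION toFloatNoThrow(str varchar, val real)
RETURNS real AS $$
  SELECT CASE
           WHEN str ~ '^\s*[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?\s*$'
             THEN str::real
           ELSE val
         END;
$$ LANGUAGE sql IMMUTABLE;

As in the original function, a NULL input falls through to the default value. Whether this beats the exception-based version depends on how often the cast actually fails, so it is worth benchmarking against the sampletest scenario above.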
[ { "msg_contents": "Hi,\nI know I can alter schema name after restoring but the problem is the name already exist and I don't want to touch that existing schema.The dump type is \"custom\".\n\n\nSo effectively I want something like.pg_dump -U postgres --schema \"source_schema\" --format \"c\" --create --file \"source_schema.bak\" my_dbpg_restore -U postgres --exit-on-error --dbname \"my_db\"  --destination-schema \"destination_schema\"  \nCurrently this is not something can do. this functionality is there in oracle. \n\n\nIs this future considering to add?  (it would really help for create any test schemas without disturbing current schema. )\n\nThanks,Rj\nHi,I know I can alter schema name after restoring but the problem is the name already exist and I don't want to touch that existing schema.The dump type is \"custom\".So effectively I want something like.pg_dump -U postgres --schema \"source_schema\" --format \"c\" --create --file \"source_schema.bak\" my_dbpg_restore -U postgres --exit-on-error --dbname \"my_db\"  --destination-schema \"destination_schema\"  Currently this is not something can do. this functionality is there in oracle. Is this future considering to add?  (it would really help for create any test schemas without disturbing current schema. )Thanks,Rj", "msg_date": "Mon, 23 Aug 2021 09:44:07 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore schema dump to schema with different name" }, { "msg_contents": "On Mon, 2021-08-23 at 09:44 +0000, Nagaraj Raj wrote:\n> I know I can alter schema name after restoring but the problem is the name already exist and I don't want to touch that existing schema.\n> The dump type is \"custom\".\n> \n> So effectively I want something like.\n> pg_dump -U postgres --schema \"source_schema\" --format \"c\" --create --file \"source_schema.bak\" my_db\n> pg_restore -U postgres --exit-on-error --dbname \"my_db\"  --destination-schema \"destination_schema\"\n\nThe only way to do that is to create a new database, import the data there,\nrename the schema and dump again.\n\nThen import that dump into the target database.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Mon, 23 Aug 2021 15:19:26 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore schema dump to schema with different name" }, { "msg_contents": "\n> The only way to do that is to create a new database, import the data\n> there, rename the schema and dump again.\n> \n> Then import that dump into the target database.\n\nOr maybe (if you can afford to have source_schema unavailable for some\ntime) :\n\n* rename source_schema to tmp_source\n* import (that will re-create source_schema)\n* rename source_schema to destination_schema\n* rename back tmp_source to source_schema\n\n\n", "msg_date": "Mon, 23 Aug 2021 15:38:44 +0200", "msg_from": "Jean-Christophe Boggio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore schema dump to schema with different name" }, { "msg_contents": "Wouldn’t be easy if we have option to_schema ? 
\nAbsolutely, I should not alter current schema, as it live 24/7.\nThanks,Rj On Monday, August 23, 2021, 06:39:03 AM PDT, Jean-Christophe Boggio <[email protected]> wrote: \n \n \n> The only way to do that is to create a new database, import the data\n> there, rename the schema and dump again.\n> \n> Then import that dump into the target database.\n\nOr maybe (if you can afford to have source_schema unavailable for some\ntime) :\n\n* rename source_schema to tmp_source\n* import (that will re-create  source_schema)\n* rename source_schema to destination_schema\n* rename back tmp_source to source_schema\n\n\n \n\nWouldn’t be easy if we have option to_schema ? Absolutely, I should not alter current schema, as it live 24/7.Thanks,Rj\n\n\n\n On Monday, August 23, 2021, 06:39:03 AM PDT, Jean-Christophe Boggio <[email protected]> wrote:\n \n\n\n> The only way to do that is to create a new database, import the data> there, rename the schema and dump again.> > Then import that dump into the target database.Or maybe (if you can afford to have source_schema unavailable for sometime) :* rename source_schema to tmp_source* import (that will re-create  source_schema)* rename source_schema to destination_schema* rename back tmp_source to source_schema", "msg_date": "Mon, 23 Aug 2021 17:54:33 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_restore schema dump to schema with different name" }, { "msg_contents": "On Mon, 2021-08-23 at 17:54 +0000, Nagaraj Raj wrote:\n> Wouldn’t be easy if we have option to_schema ?\n\nSure, but it wouldn't be easy to implement that.\nIt would have to be a part of \"pg_dump\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 24 Aug 2021 11:53:42 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore schema dump to schema with different name" }, { "msg_contents": "On Mon, Aug 23, 2021 at 2:46 AM Nagaraj Raj <[email protected]> wrote:\n\n>\n> Currently this is not something can do. this functionality is there in\n> oracle.\n>\n> Is this future considering to add? (it would really help for create any\n> test schemas without disturbing current schema. )\n>\n>\nI find this to be not all that useful. Current practice is to avoid\nrelying on search_path and, in general, to schema-qualify object references\n(yes, attaching a local SET search_path to a function works, not sure how\nit would play out in this context). Performing a dependency and contextual\nrename of one schema name to another is challenging given all of that, and\nimpossible if the schema name is hard-coded into a function body.\n\nI won't say we wouldn't accept such a patch, but as this isn't exactly a\nnew problem or realization, and the feature doesn't presently exist, that\nfor whatever reasons individuals may have no one has chosen to volunteer or\nfund such development. I don't even remember seeing a proposal in the past\n5 or so years.\n\nDavid J.\n\nOn Mon, Aug 23, 2021 at 2:46 AM Nagaraj Raj <[email protected]> wrote:Currently this is not something can do. this functionality is there in oracle. Is this future considering to add?  (it would really help for create any test schemas without disturbing current schema. )I find this to be not all that useful.  
Current practice is to avoid relying on search_path and, in general, to schema-qualify object references (yes, attaching a local SET search_path to a function works, not sure how it would play out in this context).  Performing a dependency and contextual rename of one schema name to another is challenging given all of that, and impossible if the schema name is hard-coded into a function body.I won't say we wouldn't accept such a patch, but as this isn't exactly a new problem or realization, and the feature doesn't presently exist, that for whatever reasons individuals may have no one has chosen to volunteer or fund such development.  I don't even remember seeing a proposal in the past 5 or so years.David J.", "msg_date": "Tue, 24 Aug 2021 07:55:46 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore schema dump to schema with different name" }, { "msg_contents": "I agree with that.But, probably its good idea to add this feature as many people are migrating from oracle to postgres. clone/restore schemas to existing cluster for any test cases like sandbox schema, temp schema as live backup schema etc. \nThanks,Rj\n On Tuesday, August 24, 2021, 07:56:20 AM PDT, David G. Johnston <[email protected]> wrote: \n \n On Mon, Aug 23, 2021 at 2:46 AM Nagaraj Raj <[email protected]> wrote:\n\n\nCurrently this is not something can do. this functionality is there in oracle. \nIs this future considering to add?  (it would really help for create any test schemas without disturbing current schema. )\n\n\nI find this to be not all that useful.  Current practice is to avoid relying on search_path and, in general, to schema-qualify object references (yes, attaching a local SET search_path to a function works, not sure how it would play out in this context).  Performing a dependency and contextual rename of one schema name to another is challenging given all of that, and impossible if the schema name is hard-coded into a function body.\nI won't say we wouldn't accept such a patch, but as this isn't exactly a new problem or realization, and the feature doesn't presently exist, that for whatever reasons individuals may have no one has chosen to volunteer or fund such development.  I don't even remember seeing a proposal in the past 5 or so years.\nDavid J.\n \n\nI agree with that.But, probably its good idea to add this feature as many people are migrating from oracle to postgres. clone/restore schemas to existing cluster for any test cases like sandbox schema, temp schema as live backup schema etc. Thanks,Rj\n\n\n\n On Tuesday, August 24, 2021, 07:56:20 AM PDT, David G. Johnston <[email protected]> wrote:\n \n\n\nOn Mon, Aug 23, 2021 at 2:46 AM Nagaraj Raj <[email protected]> wrote:Currently this is not something can do. this functionality is there in oracle. Is this future considering to add?  (it would really help for create any test schemas without disturbing current schema. )I find this to be not all that useful.  Current practice is to avoid relying on search_path and, in general, to schema-qualify object references (yes, attaching a local SET search_path to a function works, not sure how it would play out in this context).  
Performing a dependency and contextual rename of one schema name to another is challenging given all of that, and impossible if the schema name is hard-coded into a function body.I won't say we wouldn't accept such a patch, but as this isn't exactly a new problem or realization, and the feature doesn't presently exist, that for whatever reasons individuals may have no one has chosen to volunteer or fund such development.  I don't even remember seeing a proposal in the past 5 or so years.David J.", "msg_date": "Mon, 30 Aug 2021 16:52:21 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_restore schema dump to schema with different name" } ]
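A minimal sketch of the rename-and-restore workaround described earlier in this thread, assuming the my_db database and the source_schema.bak custom-format dump from the original post, and assuming a short window during which source_schema may be renamed:

-- 1. Move the live schema aside (brief interruption for anything using it).
ALTER SCHEMA source_schema RENAME TO tmp_source;

-- 2. Restore the dump; it recreates source_schema with the dumped objects.
--    From the shell:
--    pg_restore -U postgres --exit-on-error --dbname my_db source_schema.bak

-- 3. Rename the restored copy to the desired target name.
ALTER SCHEMA source_schema RENAME TO destination_schema;

-- 4. Put the original schema back under its old name.
ALTER SCHEMA tmp_source RENAME TO source_schema;

As the last reply points out, any function body or view definition that spells out the schema name will still refer to source_schema after step 3, so the copy is only faithful for objects that do not hard-code the schema.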
[ { "msg_contents": "I have items that need to be categorized by user defined matching rules.\r\nTrusted users can create rules that include regular expressions. I've\r\nreduced the problem to this example.\r\n\r\n Table \"public.items\"\r\n Column │ Type │ Collation │ Nullable │ Default\r\n────────┼─────────┼───────────┼──────────┼─────────\r\n id │ integer │ │ not null │\r\n name │ text │ │ not null │\r\nIndexes:\r\n \"items_pkey\" PRIMARY KEY, btree (id)\r\n\r\n Table \"public.matching_rules\"\r\n Column │ Type │ Collation │ Nullable │ Default\r\n──────────────┼─────────┼───────────┼──────────┼─────────\r\n id │ integer │ │ not null │\r\n name_matches │ text │ │ not null │\r\nIndexes:\r\n \"matching_rules_pkey\" PRIMARY KEY, btree (id)\r\n\r\nI use the following query to find matches:\r\n\r\nselect r.id, i.id\r\nfrom items i\r\n join matching_rules r on i.name ~ r.name_matches;\r\n\r\nWhen there are few rules the query runs quickly. But as the number of rules\r\nincreases the runtime often increases at a greater than linear rate.\r\n\r\nFor example if I run two queries, one the tests rule IDs 0 - 30 and another\r\nthat tests 30 - 60 the total runtime is less than 100ms. But if I instead\r\ntest rule IDs 0 - 60 in a single query the runtime balloons to over 1300ms.\r\n\r\nexplain analyze\r\nselect r.id, i.id\r\nfrom items i\r\n join matching_rules r on i.name ~ r.name_matches\r\nwhere r.id >= 0 and r.id < 30\r\n;\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n Nested Loop (cost=0.00..260.82 rows=80 width=8) (actual\r\ntime=0.820..28.334 rows=172 loops=1)\r\n Join Filter: (i.name ~ r.name_matches)\r\n Rows Removed by Join Filter: 16828\r\n -> Seq Scan on items i (cost=0.00..18.00 rows=1000 width=27) (actual\r\ntime=0.006..0.176 rows=1000 loops=1)\r\n -> Materialize (cost=0.00..2.86 rows=16 width=26) (actual\r\ntime=0.000..0.001 rows=17 loops=1000)\r\n -> Seq Scan on matching_rules r (cost=0.00..2.78 rows=16\r\nwidth=26) (actual time=0.004..0.012 rows=17 loops=1)\r\n Filter: ((id >= 0) AND (id < 30))\r\n Rows Removed by Filter: 35\r\n Planning Time: 0.086 ms\r\n Execution Time: 28.364 ms\r\n\r\n\r\nexplain analyze\r\nselect r.id, i.id\r\nfrom items i\r\n join matching_rules r on i.name ~ r.name_matches\r\nwhere r.id >= 30 and r.id < 60\r\n;\r\n QUERY PLAN\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n Nested Loop (cost=0.00..470.86 rows=150 width=8) (actual\r\ntime=1.418..65.508 rows=530 loops=1)\r\n Join Filter: (i.name ~ r.name_matches)\r\n Rows Removed by Join Filter: 28470\r\n -> Seq Scan on items i (cost=0.00..18.00 rows=1000 width=27) (actual\r\ntime=0.007..0.193 rows=1000 loops=1)\r\n -> Materialize (cost=0.00..2.93 rows=30 width=26) (actual\r\ntime=0.000..0.002 rows=29 loops=1000)\r\n -> Seq Scan on matching_rules r (cost=0.00..2.78 rows=30\r\nwidth=26) (actual time=0.005..0.020 rows=29 loops=1)\r\n Filter: ((id >= 30) AND (id < 60))\r\n Rows Removed by Filter: 23\r\n Planning Time: 0.076 ms\r\n Execution Time: 65.573 ms\r\n\r\n\r\nexplain analyze\r\nselect r.id, i.id\r\nfrom items i\r\n join matching_rules r on i.name ~ r.name_matches\r\nwhere r.id >= 0 and r.id < 60\r\n;\r\n QUERY PLAN\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n Nested Loop (cost=0.00..710.89 rows=230 width=8) (actual\r\ntime=3.731..1344.834 rows=702 loops=1)\r\n Join 
Filter: (i.name ~ r.name_matches)\r\n Rows Removed by Join Filter: 45298\r\n -> Seq Scan on items i (cost=0.00..18.00 rows=1000 width=27) (actual\r\ntime=0.006..0.442 rows=1000 loops=1)\r\n -> Materialize (cost=0.00..3.01 rows=46 width=26) (actual\r\ntime=0.000..0.004 rows=46 loops=1000)\r\n -> Seq Scan on matching_rules r (cost=0.00..2.78 rows=46\r\nwidth=26) (actual time=0.004..0.019 rows=46 loops=1)\r\n Filter: ((id >= 0) AND (id < 60))\r\n Rows Removed by Filter: 6\r\n Planning Time: 0.084 ms\r\n Execution Time: 1344.967 ms\r\n\r\nIt's also not predictable when additional regexp rows will trigger the poor\r\nperformance. There's not a specific number of rows or kind of regexp that I\r\ncan discern that triggers the issue. The regexps themselves are pretty\r\ntrivial too. Only normal text, start and end of string anchors, and\r\nalternation.\r\n\r\nI've vacuumed, analyzed, and I am on PostgreSQL 13.4 on\r\nx86_64-apple-darwin20.4.0, compiled by Apple clang version 12.0.5\r\n(clang-1205.0.22.9), 64-bit.\r\n\r\nAny ideas what's causing this?\r\n\r\nThanks.\r\n\r\nJack\r\n\nI have items that need to be categorized by user defined matching rules. Trusted users can create rules that include regular expressions. I've reduced the problem to this example.               Table \"public.items\" Column │  Type   │ Collation │ Nullable │ Default────────┼─────────┼───────────┼──────────┼───────── id     │ integer │           │ not null │ name   │ text    │           │ not null │Indexes:    \"items_pkey\" PRIMARY KEY, btree (id)              Table \"public.matching_rules\"    Column    │  Type   │ Collation │ Nullable │ Default──────────────┼─────────┼───────────┼──────────┼───────── id           │ integer │           │ not null │ name_matches │ text    │           │ not null │Indexes:    \"matching_rules_pkey\" PRIMARY KEY, btree (id)I use the following query to find matches:select r.id, i.idfrom items i  join matching_rules r on i.name ~ r.name_matches;When there are few rules the query runs quickly. But as the number of rules increases the runtime often increases at a greater than linear rate.For example if I run two queries, one the tests rule IDs 0 - 30 and another that tests 30 - 60 the total runtime is less than 100ms. But if I instead test rule IDs 0 - 60 in a single query the runtime balloons to over 1300ms. 
explain analyzeselect r.id, i.idfrom items i  join matching_rules r on i.name ~ r.name_matcheswhere r.id >= 0 and r.id < 30;───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Nested Loop  (cost=0.00..260.82 rows=80 width=8) (actual time=0.820..28.334 rows=172 loops=1)   Join Filter: (i.name ~ r.name_matches)   Rows Removed by Join Filter: 16828   ->  Seq Scan on items i  (cost=0.00..18.00 rows=1000 width=27) (actual time=0.006..0.176 rows=1000 loops=1)   ->  Materialize  (cost=0.00..2.86 rows=16 width=26) (actual time=0.000..0.001 rows=17 loops=1000)         ->  Seq Scan on matching_rules r  (cost=0.00..2.78 rows=16 width=26) (actual time=0.004..0.012 rows=17 loops=1)               Filter: ((id >= 0) AND (id < 30))               Rows Removed by Filter: 35 Planning Time: 0.086 ms Execution Time: 28.364 msexplain analyzeselect r.id, i.idfrom items i  join matching_rules r on i.name ~ r.name_matcheswhere r.id >= 30 and r.id < 60;                                                       QUERY PLAN───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Nested Loop  (cost=0.00..470.86 rows=150 width=8) (actual time=1.418..65.508 rows=530 loops=1)   Join Filter: (i.name ~ r.name_matches)   Rows Removed by Join Filter: 28470   ->  Seq Scan on items i  (cost=0.00..18.00 rows=1000 width=27) (actual time=0.007..0.193 rows=1000 loops=1)   ->  Materialize  (cost=0.00..2.93 rows=30 width=26) (actual time=0.000..0.002 rows=29 loops=1000)         ->  Seq Scan on matching_rules r  (cost=0.00..2.78 rows=30 width=26) (actual time=0.005..0.020 rows=29 loops=1)               Filter: ((id >= 30) AND (id < 60))               Rows Removed by Filter: 23 Planning Time: 0.076 ms Execution Time: 65.573 msexplain analyzeselect r.id, i.idfrom items i  join matching_rules r on i.name ~ r.name_matcheswhere r.id >= 0 and r.id < 60;                                                       QUERY PLAN───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Nested Loop  (cost=0.00..710.89 rows=230 width=8) (actual time=3.731..1344.834 rows=702 loops=1)   Join Filter: (i.name ~ r.name_matches)   Rows Removed by Join Filter: 45298   ->  Seq Scan on items i  (cost=0.00..18.00 rows=1000 width=27) (actual time=0.006..0.442 rows=1000 loops=1)   ->  Materialize  (cost=0.00..3.01 rows=46 width=26) (actual time=0.000..0.004 rows=46 loops=1000)         ->  Seq Scan on matching_rules r  (cost=0.00..2.78 rows=46 width=26) (actual time=0.004..0.019 rows=46 loops=1)               Filter: ((id >= 0) AND (id < 60))               Rows Removed by Filter: 6 Planning Time: 0.084 ms Execution Time: 1344.967 msIt's also not predictable when additional regexp rows will trigger the poor performance. There's not a specific number of rows or kind of regexp that I can discern that triggers the issue. The regexps themselves are pretty trivial too. 
Only normal text, start and end of string anchors, and alternation.I've vacuumed, analyzed, and I am on PostgreSQL 13.4 on x86_64-apple-darwin20.4.0, compiled by Apple clang version 12.0.5 (clang-1205.0.22.9), 64-bit.Any ideas what's causing this?Thanks.Jack", "msg_date": "Wed, 25 Aug 2021 11:47:43 -0500", "msg_from": "Jack Christensen <[email protected]>", "msg_from_op": true, "msg_subject": "Using regexp from table has unpredictable poor performance" }, { "msg_contents": "On Wed, Aug 25, 2021 at 11:47:43AM -0500, Jack Christensen wrote:\n> I have items that need to be categorized by user defined matching rules.\n> Trusted users can create rules that include regular expressions. I've\n> reduced the problem to this example.\n\n> I use the following query to find matches:\n> \n> select r.id, i.id\n> from items i\n> join matching_rules r on i.name ~ r.name_matches;\n> \n> When there are few rules the query runs quickly. But as the number of rules\n> increases the runtime often increases at a greater than linear rate.\n\nMaybe it's because the REs are cached by RE_compile_and_cache(), but if you\nloop over the REs in the inner loop, then the caching is ineffecive.\n\nMaybe you can force it to join with REs on the outer loop by writing it as:\n| rules LEFT JOIN items WHERE rules.id IS NOT NULL,\n..to improve performance, or at least test that theory.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 25 Aug 2021 16:05:00 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using regexp from table has unpredictable poor performance" }, { "msg_contents": "The optimizer was a bit too clever. It used the same plan for the LEFT\nJOIN. But that put me on the right track. I tried a LATERAL join. But the\noptimizer saw through that too and used the same plan. So I tried a\nmaterialized CTE and that finally forced it to use a different plan. That\nmade it run in ~70ms -- about 18x faster. Thanks!\n\nexplain analyze\nwith r as materialized (\n select * from matching_rules\n where id >= 0 and id < 60\n)\nselect r.id, i.id\nfrom r\n join items i on i.name ~ r.name_matches\n;\n\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Nested Loop (cost=2.78..714.20 rows=230 width=8) (actual\ntime=0.071..69.545 rows=702 loops=1)\n Join Filter: (i.name ~ r.name_matches)\n Rows Removed by Join Filter: 45298\n CTE r\n -> Seq Scan on matching_rules (cost=0.00..2.78 rows=46 width=26)\n(actual time=0.007..0.047 rows=46 loops=1)\n Filter: ((id >= 0) AND (id < 60))\n Rows Removed by Filter: 6\n -> CTE Scan on r (cost=0.00..0.92 rows=46 width=36) (actual\ntime=0.008..0.090 rows=46 loops=1)\n -> Materialize (cost=0.00..23.00 rows=1000 width=27) (actual\ntime=0.000..0.081 rows=1000 loops=46)\n -> Seq Scan on items i (cost=0.00..18.00 rows=1000 width=27)\n(actual time=0.003..0.092 rows=1000 loops=1)\n Planning Time: 0.206 ms\n Execution Time: 69.633 ms\n\n\nOn Wed, Aug 25, 2021 at 4:05 PM Justin Pryzby <[email protected]> wrote:\n\n> On Wed, Aug 25, 2021 at 11:47:43AM -0500, Jack Christensen wrote:\n> > I have items that need to be categorized by user defined matching rules.\n> > Trusted users can create rules that include regular expressions. I've\n> > reduced the problem to this example.\n>\n> > I use the following query to find matches:\n> >\n> > select r.id, i.id\n> > from items i\n> > join matching_rules r on i.name ~ r.name_matches;\n> >\n> > When there are few rules the query runs quickly. 
But as the number of\n> rules\n> > increases the runtime often increases at a greater than linear rate.\n>\n> Maybe it's because the REs are cached by RE_compile_and_cache(), but if you\n> loop over the REs in the inner loop, then the caching is ineffecive.\n>\n> Maybe you can force it to join with REs on the outer loop by writing it as:\n> | rules LEFT JOIN items WHERE rules.id IS NOT NULL,\n> ..to improve performance, or at least test that theory.\n>\n> --\n> Justin\n>\n\nThe optimizer was a bit too clever. It used the same plan for the LEFT JOIN. But that put me on the right track. I tried a LATERAL join. But the optimizer saw through that too and used the same plan. So I tried a materialized CTE and that finally forced it to use a different plan. That made it  run in ~70ms -- about 18x faster. Thanks!explain analyzewith r as materialized (  select * from matching_rules  where id >= 0 and id < 60)select r.id, i.idfrom r  join items i on i.name ~ r.name_matches;                                                     QUERY PLAN───────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Nested Loop  (cost=2.78..714.20 rows=230 width=8) (actual time=0.071..69.545 rows=702 loops=1)   Join Filter: (i.name ~ r.name_matches)   Rows Removed by Join Filter: 45298   CTE r     ->  Seq Scan on matching_rules  (cost=0.00..2.78 rows=46 width=26) (actual time=0.007..0.047 rows=46 loops=1)           Filter: ((id >= 0) AND (id < 60))           Rows Removed by Filter: 6   ->  CTE Scan on r  (cost=0.00..0.92 rows=46 width=36) (actual time=0.008..0.090 rows=46 loops=1)   ->  Materialize  (cost=0.00..23.00 rows=1000 width=27) (actual time=0.000..0.081 rows=1000 loops=46)         ->  Seq Scan on items i  (cost=0.00..18.00 rows=1000 width=27) (actual time=0.003..0.092 rows=1000 loops=1) Planning Time: 0.206 ms Execution Time: 69.633 msOn Wed, Aug 25, 2021 at 4:05 PM Justin Pryzby <[email protected]> wrote:On Wed, Aug 25, 2021 at 11:47:43AM -0500, Jack Christensen wrote:\n> I have items that need to be categorized by user defined matching rules.\n> Trusted users can create rules that include regular expressions. I've\n> reduced the problem to this example.\n\n> I use the following query to find matches:\n> \n> select r.id, i.id\n> from items i\n>   join matching_rules r on i.name ~ r.name_matches;\n> \n> When there are few rules the query runs quickly. But as the number of rules\n> increases the runtime often increases at a greater than linear rate.\n\nMaybe it's because the REs are cached by RE_compile_and_cache(), but if you\nloop over the REs in the inner loop, then the caching is ineffecive.\n\nMaybe you can force it to join with REs on the outer loop by writing it as:\n| rules LEFT JOIN items WHERE rules.id IS NOT NULL,\n..to improve performance, or at least test that theory.\n\n-- \nJustin", "msg_date": "Wed, 25 Aug 2021 16:21:55 -0500", "msg_from": "Jack Christensen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using regexp from table has unpredictable poor performance" }, { "msg_contents": "Btw: if you still run out of cache later with more regexes may be it makes\nsense to do prefiltering first my making a single gigantic regexp as\nstring_agg(‘(‘||name_matches||’)’,’|’) and then only filter ones that match\nlater. If postgresql provides capturing groups you may even be able to\nexplode the result without postfilter.\n\nср, 25 серп. 2021 о 14:22 Jack Christensen <[email protected]> пише:\n\n> The optimizer was a bit too clever. 
It used the same plan for the LEFT\n> JOIN. But that put me on the right track. I tried a LATERAL join. But the\n> optimizer saw through that too and used the same plan. So I tried a\n> materialized CTE and that finally forced it to use a different plan. That\n> made it run in ~70ms -- about 18x faster. Thanks!\n>\n> explain analyze\n> with r as materialized (\n> select * from matching_rules\n> where id >= 0 and id < 60\n> )\n> select r.id, i.id\n> from r\n> join items i on i.name ~ r.name_matches\n> ;\n>\n> QUERY PLAN\n>\n> ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> Nested Loop (cost=2.78..714.20 rows=230 width=8) (actual\n> time=0.071..69.545 rows=702 loops=1)\n> Join Filter: (i.name ~ r.name_matches)\n> Rows Removed by Join Filter: 45298\n> CTE r\n> -> Seq Scan on matching_rules (cost=0.00..2.78 rows=46 width=26)\n> (actual time=0.007..0.047 rows=46 loops=1)\n> Filter: ((id >= 0) AND (id < 60))\n> Rows Removed by Filter: 6\n> -> CTE Scan on r (cost=0.00..0.92 rows=46 width=36) (actual\n> time=0.008..0.090 rows=46 loops=1)\n> -> Materialize (cost=0.00..23.00 rows=1000 width=27) (actual\n> time=0.000..0.081 rows=1000 loops=46)\n> -> Seq Scan on items i (cost=0.00..18.00 rows=1000 width=27)\n> (actual time=0.003..0.092 rows=1000 loops=1)\n> Planning Time: 0.206 ms\n> Execution Time: 69.633 ms\n>\n>\n> On Wed, Aug 25, 2021 at 4:05 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Wed, Aug 25, 2021 at 11:47:43AM -0500, Jack Christensen wrote:\n>> > I have items that need to be categorized by user defined matching rules.\n>> > Trusted users can create rules that include regular expressions. I've\n>> > reduced the problem to this example.\n>>\n>> > I use the following query to find matches:\n>> >\n>> > select r.id, i.id\n>> > from items i\n>> > join matching_rules r on i.name ~ r.name_matches;\n>> >\n>> > When there are few rules the query runs quickly. But as the number of\n>> rules\n>> > increases the runtime often increases at a greater than linear rate.\n>>\n>> Maybe it's because the REs are cached by RE_compile_and_cache(), but if\n>> you\n>> loop over the REs in the inner loop, then the caching is ineffecive.\n>>\n>> Maybe you can force it to join with REs on the outer loop by writing it\n>> as:\n>> | rules LEFT JOIN items WHERE rules.id IS NOT NULL,\n>> ..to improve performance, or at least test that theory.\n>>\n>> --\n>> Justin\n>>\n>\n\nBtw: if you still run out of cache later with more regexes may be it makes sense to do prefiltering first my making a single gigantic regexp as string_agg(‘(‘||name_matches||’)’,’|’) and then only filter ones that match later. If postgresql provides capturing groups you may even be able to explode the result without postfilter.ср, 25 серп. 2021 о 14:22 Jack Christensen <[email protected]> пише:The optimizer was a bit too clever. It used the same plan for the LEFT JOIN. But that put me on the right track. I tried a LATERAL join. But the optimizer saw through that too and used the same plan. So I tried a materialized CTE and that finally forced it to use a different plan. That made it  run in ~70ms -- about 18x faster. 
Thanks!explain analyzewith r as materialized (  select * from matching_rules  where id >= 0 and id < 60)select r.id, i.idfrom r  join items i on i.name ~ r.name_matches;                                                     QUERY PLAN───────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Nested Loop  (cost=2.78..714.20 rows=230 width=8) (actual time=0.071..69.545 rows=702 loops=1)   Join Filter: (i.name ~ r.name_matches)   Rows Removed by Join Filter: 45298   CTE r     ->  Seq Scan on matching_rules  (cost=0.00..2.78 rows=46 width=26) (actual time=0.007..0.047 rows=46 loops=1)           Filter: ((id >= 0) AND (id < 60))           Rows Removed by Filter: 6   ->  CTE Scan on r  (cost=0.00..0.92 rows=46 width=36) (actual time=0.008..0.090 rows=46 loops=1)   ->  Materialize  (cost=0.00..23.00 rows=1000 width=27) (actual time=0.000..0.081 rows=1000 loops=46)         ->  Seq Scan on items i  (cost=0.00..18.00 rows=1000 width=27) (actual time=0.003..0.092 rows=1000 loops=1) Planning Time: 0.206 ms Execution Time: 69.633 msOn Wed, Aug 25, 2021 at 4:05 PM Justin Pryzby <[email protected]> wrote:On Wed, Aug 25, 2021 at 11:47:43AM -0500, Jack Christensen wrote:\n> I have items that need to be categorized by user defined matching rules.\n> Trusted users can create rules that include regular expressions. I've\n> reduced the problem to this example.\n\n> I use the following query to find matches:\n> \n> select r.id, i.id\n> from items i\n>   join matching_rules r on i.name ~ r.name_matches;\n> \n> When there are few rules the query runs quickly. But as the number of rules\n> increases the runtime often increases at a greater than linear rate.\n\nMaybe it's because the REs are cached by RE_compile_and_cache(), but if you\nloop over the REs in the inner loop, then the caching is ineffecive.\n\nMaybe you can force it to join with REs on the outer loop by writing it as:\n| rules LEFT JOIN items WHERE rules.id IS NOT NULL,\n..to improve performance, or at least test that theory.\n\n-- \nJustin", "msg_date": "Wed, 25 Aug 2021 15:16:26 -0700", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using regexp from table has unpredictable poor performance" } ]
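A minimal sketch of the prefilter idea from the last reply, using the items and matching_rules tables from this thread: build one combined alternation from all the patterns, discard items that match none of them, and run the per-rule join only on the survivors. Wrapping each pattern in parentheses keeps the alternation well-formed, though rules containing back-references would be renumbered and need separate handling:

WITH combined AS (
  SELECT string_agg('(' || name_matches || ')', '|') AS pattern
  FROM matching_rules
  WHERE id >= 0 AND id < 60
),
candidates AS MATERIALIZED (
  SELECT i.*
  FROM items i
    CROSS JOIN combined c
  WHERE i.name ~ c.pattern   -- one compiled regexp, a single pass over items
)
SELECT r.id, i.id
FROM candidates i
  JOIN matching_rules r ON i.name ~ r.name_matches
WHERE r.id >= 0 AND r.id < 60;

This only pays off when most items match no rule at all; if nearly every item survives the prefilter, the final join does the same work as before, so the materialized CTE shown in the earlier reply stays the baseline to compare against.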
[ { "msg_contents": "Hello all,\n\nWe have some PG servers which we merge into a \"coordinator\" node using FDW\nand partitioned tables, we partition them by a synthetic \"shard_id\" field.\nThere are around 30 tables coordinated this way, with all foreign servers\nhaving the same schema structure.\n\nWe have some performance issues when joining foreign tables, always done by\nthe same \"shard_id\", where the major bottleneck is how rows from joined\ntables are fetched. explain(verbose) shows:\n\nRemote SQL: SELECT entity_id, execution_id, shard_id FROM entity_execution\nWHERE ((shard_id = 5)) AND (($1::bigint = entity_id))\n\nThis way, PG is doing a lot of round trips between the coordinator and the\nforeign nodes, fetching a single row every time, and we have a very high\nlatency between the coordinator and the nodes.\n\nAs the joins are done on the same node, it could send the whole query and\nfetch all results in a single round trip.\n\nThe FDW are configured with 'use_remote_estimate' to true and we have the\nparameters enable_partition_pruning, enable_partitionwise_aggregate and\nenable_partitionwise_join activated.\nThe tables involved can have from a million rows to more than 1000\nmillions, but the queries usually return a few thousand rows.\n\nA full sample plan and it's query: https://explain.depesz.com/s/TbJy\nexplain(verbose)\nselect *\nfrom nlp.note_entity_label nel\njoin nlp.note_entity ne on ne.note_entity_id = nel.note_entity_id and\nne.shard_id = nel.shard_id\njoin nlp.note_entity_execution nex on nex.note_entity_id =\nne.note_entity_id and nex.shard_id = nel.shard_id\nwhere\n nel.label_id = 192\n and nel.shard_id = 5\n\nThe row estimates are quite off the true ones, even though we have run\n'analyze' on the remote nodes before, and 'use_remote_estimate' is on.\nThe above query ends in about 6 minutes.\n\nThe interesting part is that if we change the 'join' by 'full joins', with\nsome extra filter, the plan is the one we believe is the optimal one, and\nindeed the query ends in 1 second: https://explain.depesz.com/s/b3As\n\nexplain(verbose)\nwith ents as(\n select nel.note_entity_id nelid, ne.note_entity_id neid,\nnex.note_entity_id nexid, *\n from nlp.note_entity_label nel\n full join nlp.note_entity ne on ne.note_entity_id = nel.note_entity_id\nand ne.shard_id = nel.shard_id\n full join nlp.note_entity_execution nex on nex.note_entity_id =\nne.note_entity_id and nex.shard_id = nel.shard_id\n where\n nel.label_id = 192\n and nel.shard_id = 5\n)\nselect *\nfrom ents\nwhere nelid is not null\n and neid is not null\n and nexid is not null\n;\n\nHere we can see that the whole query is sent to the fdw and it finishes in\na reasonable time.\n\nSo, the question is if we can do something to make the fdw send the whole\nquery to the remote nodes when the involved joins use the same partition,\nor why isn't PG sending it when we use 'inner join'.\nWe have tried tweaking the \"fdw_tuple_cost\" , increasing and lowering it to\nunreasonable values\n10, 1000, 100000 and 1000000 without the desired result.\n\nThanks,\n\nHello all,We have some PG servers which we merge into a \"coordinator\" node using FDW and partitioned tables, we partition them by a synthetic \"shard_id\" field.There are around 30 tables coordinated this way, with all foreign servers having the same schema structure.We have some performance issues when joining foreign tables, always done by the same \"shard_id\", where the major bottleneck is how rows from joined tables are fetched. 
explain(verbose) shows:Remote SQL: SELECT entity_id, execution_id, shard_id FROM entity_execution WHERE ((shard_id = 5)) AND (($1::bigint = entity_id))This way, PG is doing a lot of round trips between the coordinator and the foreign nodes, fetching a single row every time, and we have a very high latency between the coordinator and the nodes.As the joins are done on the same node, it could send the whole query and fetch all results in a single round trip.The FDW are configured with 'use_remote_estimate' to true and we have the parameters enable_partition_pruning, enable_partitionwise_aggregate and enable_partitionwise_join activated.The tables involved can have from a million rows to more than 1000 millions, but the queries usually return a few thousand rows.A full sample plan and it's query: https://explain.depesz.com/s/TbJyexplain(verbose)select *from nlp.note_entity_label neljoin nlp.note_entity ne on ne.note_entity_id = nel.note_entity_id and ne.shard_id = nel.shard_idjoin nlp.note_entity_execution nex on nex.note_entity_id = ne.note_entity_id and nex.shard_id = nel.shard_idwhere    nel.label_id = 192    and nel.shard_id = 5The row estimates are quite off the true ones, even though we have run 'analyze' on the remote nodes before, and 'use_remote_estimate' is on.The above query ends in about 6 minutes.The interesting part is that if we change the 'join' by 'full joins', with some extra filter, the plan is the one we believe is the optimal one, and indeed the query ends in 1 second: https://explain.depesz.com/s/b3Asexplain(verbose)with ents as(    select nel.note_entity_id nelid, ne.note_entity_id neid, nex.note_entity_id nexid, *    from nlp.note_entity_label nel    full join nlp.note_entity ne on ne.note_entity_id = nel.note_entity_id and ne.shard_id = nel.shard_id    full join nlp.note_entity_execution nex on nex.note_entity_id = ne.note_entity_id and nex.shard_id = nel.shard_id     where        nel.label_id = 192        and nel.shard_id = 5)select *from entswhere nelid is not null    and neid is not null    and nexid is not null;Here we can see that the whole query is sent to the fdw and it finishes in a reasonable time.So, the question is if we can do something to make the fdw send the whole query to the remote nodes when the involved joins use the same partition, or why isn't PG sending it when we use 'inner join'.We have tried tweaking the \"fdw_tuple_cost\" , increasing and lowering it to unreasonable values10, 1000, 100000 and 1000000 without the desired result.Thanks,", "msg_date": "Tue, 7 Sep 2021 16:02:03 +0200", "msg_from": "=?UTF-8?Q?Marc_Oliv=C3=A9?= <[email protected]>", "msg_from_op": true, "msg_subject": "FDW join vs full join push down" } ]
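The cost knobs mentioned at the end of the post are per foreign server rather than global, so one hedged thing to double-check is that they were applied to the server that owns the shard being queried and that planning was retried afterwards. The server name shard5_srv below is only a placeholder; SET assumes the option was defined before (use ADD the first time):

ALTER SERVER shard5_srv OPTIONS (SET fdw_startup_cost '100', SET fdw_tuple_cost '10');

-- A pushed-down join appears in EXPLAIN (VERBOSE) as a single Foreign Scan whose
-- "Relations:" line lists the joined tables and whose "Remote SQL" contains the join.
EXPLAIN (VERBOSE)
SELECT *
FROM nlp.note_entity_label nel
  JOIN nlp.note_entity ne
    ON ne.note_entity_id = nel.note_entity_id AND ne.shard_id = nel.shard_id
WHERE nel.label_id = 192 AND nel.shard_id = 5;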
[ { "msg_contents": "Hi All!\n\nWe are using such feature as Foreign table as partition in PG 13 under CentOS\nHere is our table\nCREATE TABLE dwh.l1_snapshot (\n l1_snapshot_id int8 NOT NULL DEFAULT nextval('sq_l1_snapshot_id'::regclass),\n start_date_id int4 NULL,\n...\n...\n...\n dataset_id int4 NULL, -- ETL needs\n transaction_time timestamp NULL\n)\nPARTITION BY RANGE (start_date_id);\n\n\nWe have several partitions locally and one partition for storing historical data as foreign table which is stored on another PG13\nWhen I run following query . Partition pruning redirect query to that foreign table\nselect count(1) from dwh.l1_snapshot ls where start_date_id = 20201109;\nI see remote SQL as following\n\nSELECT NULL FROM dwh.l1_snapshot_tail2 WHERE ((start_date_id = 20201109)).\nIt transfers vie network hundred million records in our case\n\nWhen I query directly partition (almost the same what partition pruning does) I see another remote sql\n\nselect count(1) from partitions.l1_snapshot_tail2 ls where start_date_id = 20201109;\n\nAnd remote sql is\nSELECT count(1) FROM dwh.l1_snapshot_tail2 WHERE ((start_date_id = 20201109));\n\nSo in case querying foreign table we see aggregation is propagated to remote host (Like driving_site in oracle)\nBut in the first case with partition pruning the aggregation is not propagated to remote host.\nAnd of course different performance 22 sec vs 75sec\n\n\nThat would great to have the same behavior in both cases (pushing aggregation to remote side).\nIt should be possible at least for simple aggregation (without distinct etc)\n\n\nThanks!\nStepan Yankevych\n\nOffice: +380 322 424 642xx58840<tel:+380%20322%20424%20642;ext=x58840> Cell: +380 96 915 9551<tel:+380%2096%20915%209551> Email: [email protected]<mailto:[email protected]>\nLviv, Ukraine epam.com<http://www.epam.com>\n\n\nCONFIDENTIALITY CAUTION AND DISCLAIMER\nThis message is intended only for the use of the individual(s) or entity(ies) to which it is addressed and contains information that is legally privileged and confidential. If you are not the intended recipient, or the person responsible for delivering the message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. All unintended recipients are obliged to delete this message and destroy any printed copies.\n\n\n\n\n\n\n\n\n\n\nHi All!\n \nWe are using such feature as Foreign table as partition in PG 13 under CentOS\n\nHere is our table \nCREATE\nTABLE dwh.l1_snapshot (\n        l1_snapshot_id\nint8\nNOT\nNULL\nDEFAULT\nnextval('sq_l1_snapshot_id'::regclass),\n\n        start_date_id\nint4\nNULL,\n...\n...\n...\n        dataset_id\nint4\nNULL,\n-- ETL needs\n\n        transaction_time\ntimestamp\nNULL\n)\nPARTITION\nBY\nRANGE (start_date_id);\n \n \nWe have several partitions locally and one partition for storing historical data as foreign table which is stored on another PG13\n\nWhen I run following query . 
Partition pruning redirect query to that foreign table\nselect count(1) from dwh.l1_snapshot ls where start_date_id  = 20201109;\nI see remote SQL as following \n \nSELECT NULL FROM dwh.l1_snapshot_tail2 WHERE ((start_date_id = 20201109)).\n\nIt transfers vie network hundred million records in our case\n\n \nWhen I query directly partition (almost the same what partition pruning does) I see another remote sql\n\n \nselect count(1) from partitions.l1_snapshot_tail2 ls where start_date_id  = 20201109;\n \nAnd remote sql is \nSELECT count(1) FROM dwh.l1_snapshot_tail2 WHERE ((start_date_id = 20201109));\n \nSo in case querying foreign table we see aggregation is propagated to remote host (Like driving_site in oracle)\n\nBut in the first case with partition pruning the aggregation is not propagated to remote host.\n\nAnd of course different performance 22 sec vs 75sec\n \n \nThat would great to have the same behavior in both cases (pushing aggregation to remote side).\n\nIt should be possible at least for simple aggregation (without distinct etc)\n\n \n \nThanks!\nStepan Yankevych\n \nOffice: +380\n 322 424 642xx58840  Cell: +380\n 96 915 9551  Email: [email protected]\nLviv, \nUkraine  epam.com\n \n \nCONFIDENTIALITY CAUTION AND DISCLAIMER\nThis message is intended only for the use of the individual(s) or entity(ies) to which it is addressed and contains information that is legally privileged and confidential. If you are not the intended recipient, or the person responsible for delivering the\n message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. All unintended recipients are obliged to delete this message and destroy any printed copies.", "msg_date": "Tue, 7 Sep 2021 18:05:42 +0000", "msg_from": "Stepan Yankevych <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign table as partition - Non optimal aggregation plan" } ]
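One hedged thing to check, since it is not mentioned in the post: enable_partitionwise_aggregate is off by default, and without it the count() is computed above the Append over the partitions, so postgres_fdw never sees an aggregate it could hand to the remote side. Assuming the table and column names from the post, the effect can be tested per session:

SET enable_partitionwise_aggregate = on;

EXPLAIN (VERBOSE)
SELECT count(1)
FROM dwh.l1_snapshot
WHERE start_date_id = 20201109;

If it helps, the Remote SQL in the plan should change from the plain row fetch shown above (SELECT NULL FROM dwh.l1_snapshot_tail2 ...) to the aggregated form produced when the partition is queried directly. Whether PostgreSQL 13 manages this for a single pruned foreign partition still needs confirming with EXPLAIN on the actual setup.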
[ { "msg_contents": "Hello,\n\nSome databases such as SQLServer (try_cast) or BigQuery (safe.cast) offer not-throw conversion. In general, these tend to perform better than custom UDFs that catch exceptions and are also simpler to use. For example, in Postgres, I have a function that does the following:\n\nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n\nI couldn't find a reference to such capabilities in Postgres and wondered if I missed it, and if not, is there any plan to add such a feature?\n\nThank you!\nLaurent Hasson.\n\n\n\n\n\n\n\n\n\nHello,\n \nSome databases such as SQLServer (try_cast) or BigQuery (safe.cast) offer not-throw conversion. In general, these tend to perform better than custom UDFs that catch exceptions and are also simpler to use. For example, in Postgres, I have\n a function that does the following:\n \nCREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\nRETURNS real AS $$\nBEGIN\n  RETURN case when str is null then val else str::real end;\nEXCEPTION WHEN OTHERS THEN\n  RETURN val;\nEND;\n$$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n \nI couldn’t find a reference to such capabilities in Postgres and wondered if I missed it, and if not, is there any plan to add such a feature?\n \nThank you!\nLaurent Hasson.", "msg_date": "Wed, 8 Sep 2021 17:17:35 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Better performance no-throw conversion?" }, { "msg_contents": "\nOn 9/8/21 1:17 PM, [email protected] wrote:\n>\n> Hello,\n>\n> �\n>\n> Some databases such as SQLServer (try_cast) or BigQuery (safe.cast)\n> offer not-throw conversion. In general, these tend to perform better\n> than custom UDFs that catch exceptions and are also simpler to use.\n> For example, in Postgres, I have a function that does the following:\n>\n> �\n>\n> CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\n>\n> RETURNS real AS $$\n>\n> BEGIN\n>\n> � RETURN case when str is null then val else str::real end;\n>\n> EXCEPTION WHEN OTHERS THEN\n>\n> � RETURN val;\n>\n> END;\n>\n> $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n>\n> �\n>\n> I couldn�t find a reference to such capabilities in Postgres and\n> wondered if I missed it, and if not, is there any plan to add such a\n> feature?\n>\n> �\n>\n\n\nNot that I know of, but you could probably do this fairly simply in C.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 13:31:01 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better performance no-throw conversion?" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Some databases such as SQLServer (try_cast) or BigQuery (safe.cast) offer not-throw conversion.\n> ...\n> I couldn't find a reference to such capabilities in Postgres and wondered if I missed it, and if not, is there any plan to add such a feature?\n\nThere is not anybody working on that AFAIK. It seems like it'd have\nto be done on a case-by-case basis, which makes it awfully tedious.\nThe only way I can see to do it generically is to put a subtransaction\nwrapper around the cast-function call, which is a lousy idea for a\ncouple of reasons:\n\n1. It pretty much negates any performance benefit.\n\n2. 
It'd be very hard to tell which errors are safe to ignore\nand which are not (e.g., internal errors shouldn't be trapped\nthis way).\n\nOf course, point 2 also applies to user-level implementations\n(IOW, your code feels pretty unsafe to me). So anything we might\ndo here would be an improvement. But it's still problematic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Sep 2021 13:32:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better performance no-throw conversion?" }, { "msg_contents": "On Wed, Sep 8, 2021 at 11:33 AM Tom Lane <[email protected]> wrote:\n\n> \"[email protected]\" <[email protected]> writes:\n> > Some databases such as SQLServer (try_cast) or BigQuery (safe.cast)\n> offer not-throw conversion.\n> > ...\n> > I couldn't find a reference to such capabilities in Postgres and\n> wondered if I missed it, and if not, is there any plan to add such a\n> feature?\n>\n> There is not anybody working on that AFAIK. It seems like it'd have\n> to be done on a case-by-case basis, which makes it awfully tedious.\n>\n\nDo you just mean a separate function for each data type? I use similar\nfunctions (without a default value though) to ensure that values extracted\nfrom jsonb keys can be used as needed. Sanitizing the data on input is a\nlong term goal, but not possible immediately.\n\nIs there any documentation on the impact of many many exception blocks?\nThat is, if such a cast function is used on a dataset of 1 million rows,\nwhat overhead does that exception incur? Is it only when there is an\nexception or is it on every row?\n\nOn Wed, Sep 8, 2021 at 11:33 AM Tom Lane <[email protected]> wrote:\"[email protected]\" <[email protected]> writes:\n> Some databases such as SQLServer (try_cast) or BigQuery (safe.cast) offer not-throw conversion.\n> ...\n> I couldn't find a reference to such capabilities in Postgres and wondered if I missed it, and if not, is there any plan to add such a feature?\n\nThere is not anybody working on that AFAIK.  It seems like it'd have\nto be done on a case-by-case basis, which makes it awfully tedious.Do you just mean a separate function for each data type? I use similar functions (without a default value though) to ensure that values extracted from jsonb keys can be used as needed. Sanitizing the data on input is a long term goal, but not possible immediately.Is there any documentation on the impact of many many exception blocks? That is, if such a cast function is used on a dataset of 1 million rows, what overhead does that exception incur? Is it only when there is an exception or is it on every row?", "msg_date": "Wed, 8 Sep 2021 11:39:47 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better performance no-throw conversion?" }, { "msg_contents": "\r\n> From: Michael Lewis <[email protected]> \r\n> Sent: Wednesday, September 8, 2021 13:40\r\n> To: Tom Lane <[email protected]>\r\n> Cc: [email protected]; [email protected]\r\n> Subject: Re: Better performance no-throw conversion?\r\n>\r\n> On Wed, Sep 8, 2021 at 11:33 AM Tom Lane <mailto:[email protected]> wrote:\r\n> \"mailto:[email protected]\" <mailto:[email protected]> writes:\r\n> > Some databases such as SQLServer (try_cast) or BigQuery (safe.cast) offer not-throw conversion.\r\n> > ...\r\n> > I couldn't find a reference to such capabilities in Postgres and wondered if I missed it, and if not, is there any plan to add such a feature?\r\n>\r\n> There is not anybody working on that AFAIK.  
It seems like it'd have\r\n> to be done on a case-by-case basis, which makes it awfully tedious.\r\n>\r\n> Do you just mean a separate function for each data type? I use similar functions (without a default value though) to ensure that values extracted from jsonb keys can be used as needed. Sanitizing the data on input is a long term goal, but not possible immediately.\r\n>\r\n> Is there any documentation on the impact of many many exception blocks? That is, if such a cast function is used on a dataset of 1 million rows, what overhead does that exception incur? Is it only when there is an exception or is it on every row?\r\n>\r\n>\r\n\r\nHello Michael,\r\n\r\nThere was a recent thread (Big Performance drop of Exceptions in UDFs between V11.2 and 13.4) that I started a few weeks back where it was identified that the exception block in the function I posted would cause a rough 3x-5x performance overhead for exception handling and was as expected. I identified a separate issue with the performance plummeting 100x on certain Windows builds, but that's a separate issue.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Wed, 8 Sep 2021 17:55:51 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Better performance no-throw conversion?" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: Andrew Dunstan <[email protected]>\n > Sent: Wednesday, September 8, 2021 13:31\n > To: [email protected]; [email protected]\n > Subject: Re: Better performance no-throw conversion?\n > \n > \n > On 9/8/21 1:17 PM, [email protected] wrote:\n > >\n > > Hello,\n > >\n > >\n > >\n > > Some databases such as SQLServer (try_cast) or BigQuery (safe.cast)\n > > offer not-throw conversion. In general, these tend to perform better\n > > than custom UDFs that catch exceptions and are also simpler to use.\n > > For example, in Postgres, I have a function that does the following:\n > >\n > >\n > >\n > > CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)\n > >\n > > RETURNS real AS $$\n > >\n > > BEGIN\n > >\n > >   RETURN case when str is null then val else str::real end;\n > >\n > > EXCEPTION WHEN OTHERS THEN\n > >\n > >   RETURN val;\n > >\n > > END;\n > >\n > > $$ LANGUAGE plpgsql COST 1 IMMUTABLE;\n > >\n > >\n > >\n > > I couldn't find a reference to such capabilities in Postgres and\n > > wondered if I missed it, and if not, is there any plan to add such a\n > > feature?\n > >\n > >\n > >\n > \n > \n > Not that I know of, but you could probably do this fairly simply in C.\n > \n > \n > cheers\n > \n > \n > andrew\n > \n > --\n > Andrew Dunstan\n > EDB: https://www.enterprisedb.com\n\n\nHello Andrew,\n\nI work across multiple platforms (windows, linux, multiple managed cloud versions...) and a C-based solution would be problematic for us.\n\nThank you,\nLaurent.\n\n\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 17:57:39 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Better performance no-throw conversion?" } ]
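As a footnote to the discussion above, one common way to approximate a no-throw cast in the database without paying the subtransaction cost on every call is to validate the text with a regular expression and skip the EXCEPTION block entirely. This is only a sketch under my own assumptions: the function name, the regex, and the behaviour for special literals ('NaN', 'Infinity') and out-of-range values are my choices, not the poster's.

-- Accept only strings that look like plain numeric literals and cast those;
-- everything else (including NULL) gets the fallback value.  Because there is
-- no EXCEPTION clause, no subtransaction is created per call, but an
-- out-of-range value such as '1e999' would still raise an error.
CREATE OR REPLACE FUNCTION toFloat_nothrow(str varchar, val real)
RETURNS real AS $$
  SELECT CASE
           WHEN str ~ '^\s*[+-]?([0-9]+\.?[0-9]*|\.[0-9]+)([eE][+-]?[0-9]+)?\s*$'
             THEN str::real
           ELSE val
         END;
$$ LANGUAGE sql IMMUTABLE;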
[ { "msg_contents": "Hi,\n I have an issue with my PostgreSql 9.4 version database. Almost every week I get the following error\n\nError: #2147500037\nCould not connect to the server;\nCould not connect to remote socket immedaitely\nSource: Microsoft OLE DB Provider for ODBC Drivers\nSQL State: 08001\n\nMy application and database are on the same system. The communicating port being 5432.\ninitially I thought it was the permissible connection limit issue. So I raised the max_connections\nparameter to 300.But still the problem exists.Would appreciate your speedy response to my\nproblem\nThanks & Regards\nLionel\n\n\n\n\n\n\n\n\nHi,\n\n      I have an issue with my PostgreSql 9.4 version database. Almost every week I get the following error\n\n\n\n\nError: #2147500037\nCould not connect to the server;\nCould not connect to remote socket immedaitely\nSource: Microsoft OLE DB Provider for ODBC Drivers\nSQL State: 08001\n\n\n\n\n\nMy application and database are on the same system. The communicating port being 5432.\n\ninitially I thought it was the permissible connection limit issue. So I raised the max_connections\n\nparameter to 300.But still the problem exists.Would appreciate your speedy response to my\n\nproblem\n\nThanks & Regards\n\nLionel", "msg_date": "Thu, 9 Sep 2021 07:45:47 +0000", "msg_from": "Lionel Napoleon <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSql 9.4 Database connection failure" }, { "msg_contents": "Em qui., 9 de set. de 2021 às 04:46, Lionel Napoleon <\[email protected]> escreveu:\n\n> Hi,\n> I have an issue with my PostgreSql 9.4 version database. Almost\n> every week I get the following error\n>\n> Error: #2147500037\n> Could not connect to the server;\n> Could not connect to remote socket immedaitely\n> Source: Microsoft OLE DB Provider for ODBC Drivers\n> SQL State: 08001\n>\n> My application and database are on the same system. The communicating port\n> being 5432.\n> initially I thought it was the permissible connection limit issue. So I\n> raised the max_connections\n> parameter to 300.But still the problem exists.\n>\n> I think that question will be better answered at:\nhttps://www.postgresql.org/list/pgsql-general/\n\nHowever, it seems to me that this is a bug with Microsoft ODBC.\n\nregards,\nRanier Vilela\n\nEm qui., 9 de set. de 2021 às 04:46, Lionel Napoleon <[email protected]> escreveu:\n\n\nHi,\n\n      I have an issue with my PostgreSql 9.4 version database. Almost every week I get the following error\n\n\n\n\nError: #2147500037\nCould not connect to the server;\nCould not connect to remote socket immedaitely\nSource: Microsoft OLE DB Provider for ODBC Drivers\nSQL State: 08001\n\n\n\n\n\nMy application and database are on the same system. The communicating port being 5432.\n\ninitially I thought it was the permissible connection limit issue. So I raised the max_connections\n\nparameter to 300.But still the problem exists.I think that question will be better answered at:https://www.postgresql.org/list/pgsql-general/\n However, it seems to me that this is a bug with Microsoft ODBC.regards,Ranier Vilela", "msg_date": "Thu, 9 Sep 2021 08:55:42 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql 9.4 Database connection failure" }, { "msg_contents": "Em sex., 10 de set. 
de 2021 às 06:26, Lionel Napoleon <\[email protected]> escreveu:\n\n> Hi ,\n>\nHi, please when you post, choose to post to all.\n\n I was able to rectify most of the problems since they were from my\n> application side bugs.\n>\nGood to know.\n\nHowever the following entry keeps coming in the postgresql log file:\n>\n> 2021-09-10 12:06:33 IST LOG: could not receive data from client: No\n> connection could be made because the target machine actively refused it.\n>\nSome connection (socket) from server, can't receive data from client.\nClient stuck?\n\n\n> The application seems to be working despite of this log.My question is\n> ..do I need to be worried of the above message\n>\nI think yes, your client still has some bug, but you can solve this\nduring development.\n\nregards,\nRanier Vilela\n\nEm sex., 10 de set. de 2021 às 06:26, Lionel Napoleon <[email protected]> escreveu:\n\n\nHi ,Hi, please when you post, choose to post to all. \n\n     I was able to rectify most of the problems since they were from my application side bugs.Good to know. \n\nHowever  the following entry keeps coming in the postgresql log file:\n\n\n\n\n2021-09-10 12:06:33 IST LOG:  could not receive data from client: No connection could be made because the target machine actively refused it.Some connection (socket) from server, can't receive data from client.Client stuck?\n\n\n\n\n\nThe application seems to be working despite of this log.My question is ..do I need to be worried of the above messageI think yes, your client still has some bug, but you can solve thisduring development.regards,Ranier Vilela", "msg_date": "Fri, 10 Sep 2021 07:58:19 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql 9.4 Database connection failure" } ]
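For anyone hitting a similar ODBC error, a generic first check -- not something suggested in the thread itself -- is to see whether the server is actually near its connection limit when the failures occur, before raising max_connections further:

-- Compare the number of live backends with the configured limits; nothing
-- here is specific to the poster's system.
SELECT count(*)                                              AS current_connections,
       current_setting('max_connections')::int              AS max_connections,
       current_setting('superuser_reserved_connections')::int AS reserved
FROM pg_stat_activity;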
[ { "msg_contents": "Hi all,\n\n \n\nI think that total_time in pg_stat_statements is cpu time + possible waits.\nSo, can I say that:\n\nTotal_sql_time = total_time + blk_read_time + blk_write_time\n\n \n\nDocumentation is not clear at all on that.\n\n \n\nThanks in advance \n\n \n\nMichel SALAIS\n\n\nHi all, I think that total_time in pg_stat_statements is cpu time + possible waits. So, can I say that:Total_sql_time = total_time + blk_read_time + blk_write_time Documentation is not clear at all on that. Thanks in advance  Michel SALAIS", "msg_date": "Thu, 9 Sep 2021 20:13:33 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": true, "msg_subject": "sql execution time in pg_stat_statements" }, { "msg_contents": "Just to say that for PostgreSQL 13, total_time is replaced by\n“total_exec_time + total_plan_time”\n\n \n\nMichel SALAIS\n\nDe : Michel SALAIS <[email protected]> \nEnvoyé : jeudi 9 septembre 2021 20:14\nÀ : [email protected]\nObjet : sql execution time in pg_stat_statements\n\n \n\nHi all,\n\n \n\nI think that total_time in pg_stat_statements is cpu time + possible waits.\nSo, can I say that:\n\nTotal_sql_time = total_time + blk_read_time + blk_write_time\n\n \n\nDocumentation is not clear at all on that.\n\n \n\nThanks in advance \n\n \n\nMichel SALAIS\n\n\nJust to say that for PostgreSQL 13, total_time is replaced by “total_exec_time + total_plan_time” Michel SALAISDe : Michel SALAIS <[email protected]> Envoyé : jeudi 9 septembre 2021 20:14À : [email protected] : sql execution time in pg_stat_statements Hi all, I think that total_time in pg_stat_statements is cpu time + possible waits. So, can I say that:Total_sql_time = total_time + blk_read_time + blk_write_time Documentation is not clear at all on that. Thanks in advance  Michel SALAIS", "msg_date": "Thu, 9 Sep 2021 20:49:32 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: sql execution time in pg_stat_statements" }, { "msg_contents": "On Fri, Sep 10, 2021 at 2:49 AM Michel SALAIS <[email protected]> wrote:\n>\n> I think that total_time in pg_stat_statements is cpu time + possible waits. So, can I say that:\n>\n> Total_sql_time = total_time + blk_read_time + blk_write_time\n>\n> Documentation is not clear at all on that.\n\nIn version 12 and below, total_time is the elapsed time between the\nexecution start and stop, so it includes all underlying events. That\nincludes any IO activity, wait events or nested statements (if\npg_stat_statemetns.track is set to all). This corresponds to the new\ntotal_exec_time field in version 13 and later.\n\n\n> Just to say that for PostgreSQL 13, total_time is replaced by “total_exec_time + total_plan_time”\n\nIndeed, as this version also tracks planning activity.\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:18:29 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql execution time in pg_stat_statements" }, { "msg_contents": "\n\n-----Message d'origine-----\nDe : Julien Rouhaud <[email protected]> \nEnvoyé : vendredi 10 septembre 2021 07:18\nÀ : Michel SALAIS <[email protected]>\nCc : postgres performance list <[email protected]>\nObjet : Re: sql execution time in pg_stat_statements\n\nOn Fri, Sep 10, 2021 at 2:49 AM Michel SALAIS <[email protected]> wrote:\n>\n> I think that total_time in pg_stat_statements is cpu time + possible waits. 
So, can I say that:\n>\n> Total_sql_time = total_time + blk_read_time + blk_write_time\n>\n> Documentation is not clear at all on that.\n\nIn version 12 and below, total_time is the elapsed time between the execution start and stop, so it includes all underlying events. That includes any IO activity, wait events or nested statements (if pg_stat_statements.track is set to all). This corresponds to the new total_exec_time field in version 13 and later.\n\n\n> Just to say that for PostgreSQL 13, total_time is replaced by “total_exec_time + total_plan_time”\n\nIndeed, as this version also tracks planning activity.\n--------------------------------------------------------------------------------\nHi,\n\nI thought that total_time (total_exec_time + total_plan_time) included I/O, but when blk_read_time + blk_write_time is several times total_time it is difficult to continue to think that...\n\nSo what really is total_time (total_exec_time + total_plan_time)?\n\n\n\n", "msg_date": "Fri, 10 Sep 2021 19:12:48 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: sql execution time in pg_stat_statements" }, { "msg_contents": "\"Michel SALAIS\" <[email protected]> writes:\n> I thought that total_time (total_exec_time + total_plan_time) included I/O, but when blk_read_time + blk_write_time is several times total_time it is difficult to continue to think that...\n\nThat's an interesting report, but on the whole I'd be more inclined\nto disbelieve the I/O timings than the overall time.
Can you create a\nreproducible test case where this happens? Also, exactly what PG version\nand pg_stat_statements version are you using?\n\n\t\t\tregards, tom lane\n\n\n\n", "msg_date": "Sun, 19 Sep 2021 17:41:48 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: sql execution time in pg_stat_statements" } ]
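To make the distinction discussed above concrete, here is an illustrative query (not taken from the thread) that puts the elapsed-time and I/O-time figures side by side. The column names assume the PostgreSQL 13 version of pg_stat_statements; on 12 and earlier, total_time takes the place of total_exec_time + total_plan_time, and the blk_*_time columns are only populated when track_io_timing is on.

SELECT queryid,
       calls,
       round((total_exec_time + total_plan_time)::numeric, 1) AS total_ms,
       round((blk_read_time + blk_write_time)::numeric, 1)    AS io_ms
FROM pg_stat_statements
ORDER BY total_exec_time + total_plan_time DESC
LIMIT 20;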
[ { "msg_contents": "Dear community,\n\nI have a query that most of the time gets executed in a few\nmilliseconds yet occasionally takes ~20+ seconds. The difference, as\nfar as I am able to tell, comes whether it uses the table Primary Key\n(fast) or an additional index with smaller size. The table in question\nis INSERT ONLY - no updates or deletes done there.\n\nPg 11.12, total OS mem 124G\n\nshared_buffers: 31GB\nwork_mem: 27MB\neffective_cache_size: 93GB\n\nThe query:\n\nSELECT\n *\nFROM\n myschema.mytable pbh\nWHERE\n pbh.product_code = $1\n AND pbh.cage_player_id = $2\n AND pbh.cage_code = $3\n AND balance_type = $4\n AND pbh.modified_time < $5\nORDER BY\n pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY;\n\n\\d myschema.mytable\n Table \"myschema.mytable\"\n Column │ Type │ Collation │ Nullable │ Default\n────────────────┼─────────────────────────────┼───────────┼──────────┼─────────\n cage_code │ integer │ │ not null │\n cage_player_id │ bigint │ │ not null │\n product_code │ character varying(30) │ │ not null │\n balance_type │ character varying(30) │ │ not null │\n version │ bigint │ │ not null │\n modified_time │ timestamp(3) with time zone │ │ not null │\n amount │ numeric(38,8) │ │ not null │\n change │ numeric(38,8) │ │ not null │\n transaction_id │ bigint │ │ not null │\nIndexes:\n \"mytable_pk\" PRIMARY KEY, btree (cage_code, cage_player_id,\nproduct_code, balance_type, version)\n \"mytable_idx1\" btree (modified_time)\n \"mytable_idx2\" btree (cage_code, cage_player_id, modified_time)\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname='mytable';\n─[ RECORD 1 ]──┬───────────────────────\nrelname │ mytable\nrelpages │ 18630554\nreltuples │ 1.45045e+09\nrelallvisible │ 18629741\nrelkind │ r\nrelnatts │ 9\nrelhassubclass │ f\nreloptions │ ¤\npg_table_size │ 152695029760 (142 GB)\n\nI have caught this with AUTOEXPLAIN:\n\nQuery Text: SELECT * FROM myschema.mytable pbh WHERE\npbh.product_code = $1 AND pbh.cage_player_id = $2 AND\npbh.cage_code = $3 AND balance_type = $4 AND pbh.modified_time <\n$5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n Limit (cost=0.70..6.27 rows=1 width=66)\n -> Index Scan Backward using mytable_idx2 on mytable pbh\n(cost=0.70..21552.55 rows=3869 width=66)\n Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n(modified_time < $5))\n Filter: (((product_code)::text = ($1)::text) AND\n((balance_type)::text = ($4)::text))\n\nAnd when I run EXPLAIN ANALYZE on the same query with the same\nparameters manually:\n\n Limit (cost=177.75..177.75 rows=1 width=66) (actual\ntime=8.635..8.635 rows=1 loops=1)\n -> Sort (cost=177.75..178.21 rows=186 width=66) (actual\ntime=8.634..8.634 rows=1 loops=1)\n Sort Key: modified_time DESC\n Sort Method: top-N heapsort Memory: 25kB\n -> Index Scan using mytable_pk on mytable pbh\n(cost=0.70..176.82 rows=186 width=66) (actual time=1.001..8.610\nrows=25 loops=1)\n Index Cond: ((cage_code = 123) AND (cage_player_id =\n'12345'::bigint) AND ((product_code)::text = 'PRODUCT'::text) AND\n((balance_type)::text = 'TOTAL'::text))\n Filter: (modified_time < '2021-09-13\n04:00:00+00'::timestamp with time zone)\n Planning Time: 2.117 ms\n Execution Time: 8.658 ms\n\nI have played around with SET STATISTICS, work_mem and even tried\nCREATE STATISTICS although there is no functional dependency on the\ntable columns in questions, but nothing seems to work.\n\nAny ideas, hints are very much appreciated!\n\n\nWith best 
regards,\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:24:57 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres chooses slow query plan from time to time" }, { "msg_contents": "On 9/13/21 3:24 PM, Kristjan Mustkivi wrote:\n> Dear community,\n> \n> I have a query that most of the time gets executed in a few\n> milliseconds yet occasionally takes ~20+ seconds. The difference, as\n> far as I am able to tell, comes whether it uses the table Primary Key\n> (fast) or an additional index with smaller size. The table in question\n> is INSERT ONLY - no updates or deletes done there.\n> \n\nIt'd be really useful to have explain analyze for the slow execution.\n\nMy guess is there's a poor estimate, affecting some of the parameter\nvalues, and it probably resolves itself after autoanalyze run.\n\nI see you mentioned SET STATISTICS, so you tried increasing the\nstatistics target for some of the columns? Have you tried lowering\nautovacuum_analyze_scale_factor to make autoanalyze more frequent?\n\nIt's also possible most values are independent, but some values have a\nrather strong dependency, skewing the estimates. The MCV would help with\nthat, but those are in PG12 :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Sep 2021 15:50:46 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Autovacuum will only run for freezing, right? Insert only tables don't get\nautovacuumed/analyzed until PG13 if I remember right.\n\nAutovacuum will only run for freezing, right? Insert only tables don't get autovacuumed/analyzed until PG13 if I remember right.", "msg_date": "Mon, 13 Sep 2021 08:19:40 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Mon, Sep 13, 2021 at 08:19:40AM -0600, Michael Lewis wrote:\n> Autovacuum will only run for freezing, right? Insert only tables don't get\n> autovacuumed/analyzed until PG13 if I remember right.\n\nTomas is talking about autovacuum running *analyze*, not vacuum.\n\nIt runs for analyze, except on partitioned tables and (empty) inheritence\nparents.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Sep 2021 09:22:49 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi <[email protected]>\nwrote:\n\n\n> SELECT\n> *\n> FROM\n> myschema.mytable pbh\n> WHERE\n> pbh.product_code = $1\n> AND pbh.cage_player_id = $2\n> AND pbh.cage_code = $3\n> AND balance_type = $4\n> AND pbh.modified_time < $5\n> ORDER BY\n> pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY;\n>\n\n\n> \"mytable_idx2\" btree (cage_code, cage_player_id, modified_time)\n>\n\nWhy does this index exist? It seems rather specialized, but what is it\nspecialized for?\n\nIf you are into specialized indexes, the ideal index for this query would\nbe:\n\nbtree (cage_code, cage_player_id, product_code, balance_type, modified_time)\n\nBut the first 4 columns can appear in any order if that helps you\ncombine indexes. 
If this index existed, then it wouldn't have to choose\nbetween two other suboptimal indexes, and so would be unlikely to choose\nincorrectly between them.\n\nCheers,\n\nJeff\n\nOn Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi <[email protected]> wrote: \nSELECT\n    *\nFROM\n    myschema.mytable pbh\nWHERE\n    pbh.product_code = $1\n    AND pbh.cage_player_id = $2\n    AND pbh.cage_code = $3\n    AND balance_type = $4\n    AND pbh.modified_time < $5\nORDER BY\n    pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY; \n    \"mytable_idx2\" btree (cage_code, cage_player_id, modified_time)Why does this index exist?  It seems rather specialized, but what is it specialized for?If you are into specialized indexes, the ideal index for this query would be: btree (cage_code, cage_player_id, product_code, balance_type, modified_time)But the first 4 columns can appear in any order if that helps you combine indexes.  If this index existed, then it wouldn't have to choose between two other suboptimal indexes, and so would be unlikely to choose incorrectly between them.Cheers,Jeff", "msg_date": "Mon, 13 Sep 2021 15:21:39 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi <[email protected]>\nwrote:\n\n>\n> I have caught this with AUTOEXPLAIN:\n>\n> Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n> (modified_time < $5))\n> Filter: (((product_code)::text = ($1)::text) AND\n> ((balance_type)::text = ($4)::text))\n>\n>\nIs it always the case that autoexplain shows plans with $1 etc, rather than\nreal values, for the slow queries?\n\nIf so, then it could be that the switch from custom to generic plans is\ncausing the problem.\n\nCheers,\n\nJeff\n\nOn Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi <[email protected]> wrote:\nI have caught this with AUTOEXPLAIN:\n          Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n(modified_time < $5))\n          Filter: (((product_code)::text = ($1)::text) AND\n((balance_type)::text = ($4)::text))\nIs it always the case that autoexplain shows plans with $1 etc, rather than real values, for the slow queries?If so, then it could be that the switch from custom to generic plans is causing the problem.Cheers,Jeff", "msg_date": "Mon, 13 Sep 2021 15:39:05 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Hello Tomas,\n\nThe auto explain analyze caught this:\n\n2021-09-14 06:55:33 UTC, pid=12345 db=mydb, usr=myuser, client=ip,\napp=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan:\n Query Text: SELECT * FROM myschema.mytable pbh WHERE\npbh.product_code = $1 AND pbh.cage_player_id = $2 AND\npbh.cage_code = $3 AND balance_type = $4 AND pbh.modified_time <\n$5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n Limit (cost=0.70..6.27 rows=1 width=66) (actual\ntime=5934.154..5934.155 rows=1 loops=1)\n Buffers: shared hit=7623 read=18217\n -> Index Scan Backward using mytable_idx2 on mytable pbh\n(cost=0.70..21639.94 rows=3885 width=66) (actual\ntime=5934.153..5934.153 rows=1 loops=1)\n Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n(modified_time < $5))\n\nSo it expected to get 3885 rows, but got just 1. 
So this is the\nstatistics issue, right?\n\nFor testing, I set autovacuum_vacuum_scale_factor = 0.0 and\nautovacuum_vacuum_threshold = 10000 for the table and am now\nmonitoring the behavior.\n\nBest regards,\n\nKristjan\n\nOn Mon, Sep 13, 2021 at 4:50 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 9/13/21 3:24 PM, Kristjan Mustkivi wrote:\n> > Dear community,\n> >\n> > I have a query that most of the time gets executed in a few\n> > milliseconds yet occasionally takes ~20+ seconds. The difference, as\n> > far as I am able to tell, comes whether it uses the table Primary Key\n> > (fast) or an additional index with smaller size. The table in question\n> > is INSERT ONLY - no updates or deletes done there.\n> >\n>\n> It'd be really useful to have explain analyze for the slow execution.\n>\n> My guess is there's a poor estimate, affecting some of the parameter\n> values, and it probably resolves itself after autoanalyze run.\n>\n> I see you mentioned SET STATISTICS, so you tried increasing the\n> statistics target for some of the columns? Have you tried lowering\n> autovacuum_analyze_scale_factor to make autoanalyze more frequent?\n>\n> It's also possible most values are independent, but some values have a\n> rather strong dependency, skewing the estimates. The MCV would help with\n> that, but those are in PG12 :-(\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Tue, 14 Sep 2021 10:55:01 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Hi Jeff,\n\nThe specialized index is present due to some other queries and the\nindex is used frequently (according to the statistics). I do agree\nthat in this particular case the index btree (cage_code,\ncage_player_id, product_code, balance_type, modified_time) would solve\nthe problem but at the moment it is not possible to change that\nwithout unexpected consequences (this odd behavior manifests only in\none of our sites).\n\nI will try if more aggressive autovacuum analyze will alleviate the\ncase as Tomas Vondra suggested.\n\n\nThank you for the help!\n\nKristjan\n\nOn Mon, Sep 13, 2021 at 10:21 PM Jeff Janes <[email protected]> wrote:\n>\n> On Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi <[email protected]> wrote:\n>\n>>\n>> SELECT\n>> *\n>> FROM\n>> myschema.mytable pbh\n>> WHERE\n>> pbh.product_code = $1\n>> AND pbh.cage_player_id = $2\n>> AND pbh.cage_code = $3\n>> AND balance_type = $4\n>> AND pbh.modified_time < $5\n>> ORDER BY\n>> pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY;\n>\n>\n>>\n>> \"mytable_idx2\" btree (cage_code, cage_player_id, modified_time)\n>\n>\n> Why does this index exist? It seems rather specialized, but what is it specialized for?\n>\n> If you are into specialized indexes, the ideal index for this query would be:\n>\n> btree (cage_code, cage_player_id, product_code, balance_type, modified_time)\n>\n> But the first 4 columns can appear in any order if that helps you combine indexes. 
If this index existed, then it wouldn't have to choose between two other suboptimal indexes, and so would be unlikely to choose incorrectly between them.\n>\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Tue, 14 Sep 2021 11:03:38 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Tue, 2021-09-14 at 10:55 +0300, Kristjan Mustkivi wrote:\n> 2021-09-14 06:55:33 UTC, pid=12345  db=mydb, usr=myuser, client=ip,\n> app=PostgreSQL JDBC Driver, line=55 LOG:  duration: 5934.165 ms  plan:\n>   Query Text: SELECT *   FROM myschema.mytable pbh WHERE\n> pbh.product_code = $1   AND pbh.cage_player_id = $2   AND\n> pbh.cage_code = $3   AND balance_type = $4   AND pbh.modified_time <\n> $5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n>   Limit  (cost=0.70..6.27 rows=1 width=66) (actual time=5934.154..5934.155 rows=1 loops=1)\n>     Buffers: shared hit=7623 read=18217\n>     ->  Index Scan Backward using mytable_idx2 on mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual time=5934.153..5934.153 rows=1 loops=1)\n>           Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND (modified_time < $5))\n\nIf it scanned the index for 6 seconds before finding the first result,\nI'd suspect one of the following:\n\n- the index is terribly bloated\n\n- there were lots of deleted rows, and the index entries were marked as \"dead\"\n\n- something locked the table for a long time\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 14 Sep 2021 14:11:32 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Tue, Sep 14, 2021 at 3:55 AM Kristjan Mustkivi <[email protected]>\nwrote:\n\n> Hello Tomas,\n>\n> The auto explain analyze caught this:\n>\n> 2021-09-14 06:55:33 UTC, pid=12345 db=mydb, usr=myuser, client=ip,\n> app=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan:\n> Query Text: SELECT * FROM myschema.mytable pbh WHERE\n> pbh.product_code = $1 AND pbh.cage_player_id = $2 AND\n> pbh.cage_code = $3 AND balance_type = $4 AND pbh.modified_time <\n> $5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n> Limit (cost=0.70..6.27 rows=1 width=66) (actual\n> time=5934.154..5934.155 rows=1 loops=1)\n> Buffers: shared hit=7623 read=18217\n> -> Index Scan Backward using mytable_idx2 on mytable pbh\n> (cost=0.70..21639.94 rows=3885 width=66) (actual\n> time=5934.153..5934.153 rows=1 loops=1)\n> Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n> (modified_time < $5))\n>\n> So it expected to get 3885 rows, but got just 1. So this is the\n> statistics issue, right?\n>\n\nThat would be true if there were no LIMIT. But with the LIMIT, all this\nmeans is that it stopped actually scanning after it found one row, but it\nestimates that if it didn't stop it would have found 3885. So it is not\nvery informative. 
But the above plan appears incomplete, there should be a\nline for \"Rows Removed by Filter\", and I think that that is what we really\nwant to see in this case.\n\nCheers,\n\nJeff\nCheers,\n\nJeff\n\nOn Tue, Sep 14, 2021 at 3:55 AM Kristjan Mustkivi <[email protected]> wrote:Hello Tomas,\n\nThe auto explain analyze caught this:\n\n2021-09-14 06:55:33 UTC, pid=12345  db=mydb, usr=myuser, client=ip,\napp=PostgreSQL JDBC Driver, line=55 LOG:  duration: 5934.165 ms  plan:\n  Query Text: SELECT *   FROM myschema.mytable pbh WHERE\npbh.product_code = $1   AND pbh.cage_player_id = $2   AND\npbh.cage_code = $3   AND balance_type = $4   AND pbh.modified_time <\n$5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n  Limit  (cost=0.70..6.27 rows=1 width=66) (actual\ntime=5934.154..5934.155 rows=1 loops=1)\n    Buffers: shared hit=7623 read=18217\n    ->  Index Scan Backward using mytable_idx2 on mytable pbh\n(cost=0.70..21639.94 rows=3885 width=66) (actual\ntime=5934.153..5934.153 rows=1 loops=1)\n          Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n(modified_time < $5))\n\nSo it expected to get 3885 rows, but got just 1. So this is the\nstatistics issue, right?That would be true if there were no LIMIT.  But with the LIMIT, all this means is that it stopped actually scanning after it found one row, but it estimates that if it didn't stop it would have found 3885.  So it is not very informative.  But the above plan appears incomplete, there should be a line for \"Rows Removed by Filter\", and I think that that is what we really want to see in this case.Cheers,JeffCheers,Jeff", "msg_date": "Tue, 14 Sep 2021 08:26:02 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "I am very sorry, I indeed copy-pasted an incomplete plan. Here it is in full:\n\n2021-09-14 06:55:33 UTC, pid=27576 db=mydb, usr=myuser, client=ip,\napp=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan:\n Query Text: SELECT * FROM myschema.mytable pbh WHERE\npbh.product_code = $1 AND pbh.cage_player_id = $2 AND\npbh.cage_code = $3 AND balance_type = $4 AND pbh.modified_time <\n$5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n Limit (cost=0.70..6.27 rows=1 width=66) (actual\ntime=5934.154..5934.155 rows=1 loops=1)\n Buffers: shared hit=7623 read=18217\n -> Index Scan Backward using player_balance_history_idx2 on\nmytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual\ntime=5934.153..5934.153 rows=1 loops=1)\n Index Cond: ((cage_code = $3) AND (cage_player_id =\n$2) AND (modified_time < $5))\n Filter: (((product_code)::text = ($1)::text) AND\n((balance_type)::text = ($4)::text))\n Rows Removed by Filter: 95589\n Buffers: shared hit=7623 read=18217\n\nAlso, I have made incrementally the following changes: set autovacuum\nmore aggressive, then added column based stats targets and then\ncreated a statistics object. Yet there is no change in the plan\nbehavior. 
Table as it is now:\n\n\\d+ myschema.mytable\n Table \"myschema.mytable\"\n Column │ Type │ Collation │ Nullable │\nDefault │ Storage │ Stats target │ Description\n────────────────┼─────────────────────────────┼───────────┼──────────┼─────────┼──────────┼──────────────┼─────────────\n cage_code │ integer │ │ not null │\n │ plain │ 500 │\n cage_player_id │ bigint │ │ not null │\n │ plain │ 500 │\n product_code │ character varying(30) │ │ not null │\n │ extended │ 500 │\n balance_type │ character varying(30) │ │ not null │\n │ extended │ 500 │\n version │ bigint │ │ not null │\n │ plain │ │\n modified_time │ timestamp(3) with time zone │ │ not null │\n │ plain │ 500 │\n amount │ numeric(38,8) │ │ not null │\n │ main │ │\n change │ numeric(38,8) │ │ not null │\n │ main │ │\n transaction_id │ bigint │ │ not null │\n │ plain │ │\nIndexes:\n \"mytable_pk\" PRIMARY KEY, btree (cage_code, cage_player_id,\nproduct_code, balance_type, version)\n \"mytable_idx1\" btree (modified_time)\n \"mytable_idx2\" btree (cage_code, cage_player_id, modified_time)\nStatistics objects:\n \"myschema\".\"statistics_pbh_1\" (ndistinct, dependencies) ON\ncage_code, cage_player_id, product_code, balance_type FROM\nmyschema.mytable\nOptions: autovacuum_vacuum_scale_factor=0.0, autovacuum_vacuum_threshold=1000\n\nI will investigate the index bloat and if something is blocking the\nquery as suggested by Laurenz Albe.\n\nBest,\n\nKristjan\n\nOn Tue, Sep 14, 2021 at 3:26 PM Jeff Janes <[email protected]> wrote:\n>\n> On Tue, Sep 14, 2021 at 3:55 AM Kristjan Mustkivi <[email protected]> wrote:\n>>\n>> Hello Tomas,\n>>\n>> The auto explain analyze caught this:\n>>\n>> 2021-09-14 06:55:33 UTC, pid=12345 db=mydb, usr=myuser, client=ip,\n>> app=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan:\n>> Query Text: SELECT * FROM myschema.mytable pbh WHERE\n>> pbh.product_code = $1 AND pbh.cage_player_id = $2 AND\n>> pbh.cage_code = $3 AND balance_type = $4 AND pbh.modified_time <\n>> $5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY\n>> Limit (cost=0.70..6.27 rows=1 width=66) (actual\n>> time=5934.154..5934.155 rows=1 loops=1)\n>> Buffers: shared hit=7623 read=18217\n>> -> Index Scan Backward using mytable_idx2 on mytable pbh\n>> (cost=0.70..21639.94 rows=3885 width=66) (actual\n>> time=5934.153..5934.153 rows=1 loops=1)\n>> Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND\n>> (modified_time < $5))\n>>\n>> So it expected to get 3885 rows, but got just 1. So this is the\n>> statistics issue, right?\n>\n>\n> That would be true if there were no LIMIT. But with the LIMIT, all this means is that it stopped actually scanning after it found one row, but it estimates that if it didn't stop it would have found 3885. So it is not very informative. 
But the above plan appears incomplete, there should be a line for \"Rows Removed by Filter\", and I think that that is what we really want to see in this case.\n>\n> Cheers,\n>\n> Jeff\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Tue, 14 Sep 2021 15:41:54 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Kristjan Mustkivi <[email protected]> writes:\n> -> Index Scan Backward using player_balance_history_idx2 on\n> mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual\n> time=5934.153..5934.153 rows=1 loops=1)\n> Index Cond: ((cage_code = $3) AND (cage_player_id =\n> $2) AND (modified_time < $5))\n> Filter: (((product_code)::text = ($1)::text) AND\n> ((balance_type)::text = ($4)::text))\n> Rows Removed by Filter: 95589\n> Buffers: shared hit=7623 read=18217\n\nSo indeed, the core issue is that that filter condition is very selective,\nand applying it after the index scan is expensive. Perhaps it would help\nto create an index that includes those columns along with cage_code and\ncage_player_id. (It's not clear whether to bother with modified_time in\nthis specialized index, but if you do include it, it needs to be the last\ncolumn since you're putting a non-equality condition on it.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Sep 2021 10:15:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Tue, Sep 14, 2021 at 5:15 PM Tom Lane <[email protected]> wrote:\n>\n> Kristjan Mustkivi <[email protected]> writes:\n> > -> Index Scan Backward using player_balance_history_idx2 on\n> > mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual\n> > time=5934.153..5934.153 rows=1 loops=1)\n> > Index Cond: ((cage_code = $3) AND (cage_player_id =\n> > $2) AND (modified_time < $5))\n> > Filter: (((product_code)::text = ($1)::text) AND\n> > ((balance_type)::text = ($4)::text))\n> > Rows Removed by Filter: 95589\n> > Buffers: shared hit=7623 read=18217\n>\n> So indeed, the core issue is that that filter condition is very selective,\n> and applying it after the index scan is expensive. Perhaps it would help\n> to create an index that includes those columns along with cage_code and\n> cage_player_id. (It's not clear whether to bother with modified_time in\n> this specialized index, but if you do include it, it needs to be the last\n> column since you're putting a non-equality condition on it.)\n>\n> regards, tom lane\n\nBut the Primary Key is defined as btree (cage_code, cage_player_id,\nproduct_code, balance_type, version) so this should be exactly that\n(apart from the extra \"version\" column). And the majority of the query\nplans are using the PK with only a small number of cases going for the\nIDX2 that is btree (cage_code, cage_player_id, modified_time). So I am\nwondering how to make them not do that.\n\nBut perhaps the index bloat is indeed playing a part here as both the\nPK and IDX2 have ~50% bloat ratio. 
I will try REINDEX-ing the table\nalthough finding a good window to do it might require some time.\n\nBest regards,\n\nKristjan\n\n\n", "msg_date": "Tue, 14 Sep 2021 18:36:45 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Kristjan Mustkivi <[email protected]> writes:\n>>> Filter: (((product_code)::text = ($1)::text) AND\n>>> ((balance_type)::text = ($4)::text))\n\n> But the Primary Key is defined as btree (cage_code, cage_player_id,\n> product_code, balance_type, version) so this should be exactly that\n> (apart from the extra \"version\" column).\n\nOh, interesting. So this is really a datatype mismatch problem.\nI'd wondered idly why you were getting the explicit casts to text\nin these conditions, but now it seems that that's key to the\nproblem: the casts prevent these clauses from being matched to\nthe index. What are the declared data types of product_code\nand balance_type? And of the parameters they're compared to?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Sep 2021 11:47:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Hello!\n\nBoth are of type varchar(30).\n\nSo is this something odd: Filter: (((product_code)::text = ($1)::text)\nAND ((balance_type)::text = ($4)::text)) ?\n\nBut why does it do the type-cast if both product_code and balance_type\nare of type text (although with constraint 30) and the values are also\nof type text?\n\nBest regards,\n\nKristjan\n\nOn Tue, Sep 14, 2021 at 6:47 PM Tom Lane <[email protected]> wrote:\n>\n> Kristjan Mustkivi <[email protected]> writes:\n> >>> Filter: (((product_code)::text = ($1)::text) AND\n> >>> ((balance_type)::text = ($4)::text))\n>\n> > But the Primary Key is defined as btree (cage_code, cage_player_id,\n> > product_code, balance_type, version) so this should be exactly that\n> > (apart from the extra \"version\" column).\n>\n> Oh, interesting. So this is really a datatype mismatch problem.\n> I'd wondered idly why you were getting the explicit casts to text\n> in these conditions, but now it seems that that's key to the\n> problem: the casts prevent these clauses from being matched to\n> the index. What are the declared data types of product_code\n> and balance_type? And of the parameters they're compared to?\n>\n> regards, tom lane\n\n\n\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Wed, 15 Sep 2021 09:47:34 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Kristjan Mustkivi <[email protected]> writes:\n> Both are of type varchar(30).\n\nAh, right, you showed that back at the top of the thread.\n\n> So is this something odd: Filter: (((product_code)::text = ($1)::text)\n> AND ((balance_type)::text = ($4)::text)) ?\n\nYes, that is very darn odd. 
When I try this I get:\n\nregression=# create table foo(f1 varchar(30), f2 int, primary key (f2,f1));\nCREATE TABLE\n\nregression=# explain select * from foo where f2 = 11 and f1 = 'bar';\n QUERY PLAN \n--------------------------------------------------------------------------\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1 width=37)\n Index Cond: ((f2 = 11) AND (f1 = 'bar'::text))\n(2 rows)\n\nregression=# explain select * from foo where f2 = 11 and f1::text = 'bar';\n QUERY PLAN \n--------------------------------------------------------------------------\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1 width=37)\n Index Cond: ((f2 = 11) AND (f1 = 'bar'::text))\n(2 rows)\n\nregression=# prepare p as select * from foo where f2 = $1 and f1 = $2;\nPREPARE\n\nregression=# explain execute p(11,'bar');\n QUERY PLAN \n--------------------------------------------------------------------------\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1 width=37)\n Index Cond: ((f2 = 11) AND (f1 = 'bar'::text))\n(2 rows)\n\n-- repeat a few times till it switches to a generic plan ...\n\nregression=# explain execute p(11,'bar');\n QUERY PLAN \n--------------------------------------------------------------------------\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1 width=37)\n Index Cond: ((f2 = $1) AND (f1 = $2))\n(2 rows)\n\nNote the lack of any visible cast on the varchar column, in each one of\nthese queries, even where I tried to force one to appear. There is\nsomething happening in your database that is not happening in mine.\n\nMy mind is now running to the possibility that you've got some extension\nthat creates an \"=\" operator that is capturing the syntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 08:16:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "On Wed, Sep 15, 2021 at 3:16 PM Tom Lane <[email protected]> wrote:\r\n\r\n> Note the lack of any visible cast on the varchar column, in each one of\r\n> these queries, even where I tried to force one to appear. 
There is\r\n> something happening in your database that is not happening in mine.\r\n>\r\n> My mind is now running to the possibility that you've got some extension\r\n> that creates an \"=\" operator that is capturing the syntax.\r\n>\r\n> regards, tom lane\r\n\r\nThe following extensions have been installed:\r\n\r\n─[ RECORD 1 ]──────────────────────────────────────────────────────────\r\nName │ btree_gist\r\nVersion │ 1.5\r\nSchema │ public\r\nDescription │ support for indexing common datatypes in GiST\r\n─[ RECORD 2 ]──────────────────────────────────────────────────────────\r\nName │ pg_stat_statements\r\nVersion │ 1.6\r\nSchema │ public\r\nDescription │ track execution statistics of all SQL statements executed\r\n─[ RECORD 3 ]──────────────────────────────────────────────────────────\r\nName │ pgcrypto\r\nVersion │ 1.3\r\nSchema │ public\r\nDescription │ cryptographic functions\r\n─[ RECORD 4 ]──────────────────────────────────────────────────────────\r\nName │ plpgsql\r\nVersion │ 1.0\r\nSchema │ pg_catalog\r\nDescription │ PL/pgSQL procedural language\r\n\r\nPlus the some libraries preloaded: shared_preload_libraries =\r\n'pg_stat_statements,pg_cron,auto_explain'\r\n\r\nThank you so much for looking into this!\r\n\r\nBest regards,\r\n-- \r\nKristjan Mustkivi\r\n\r\nEmail: [email protected]\r\n", "msg_date": "Wed, 15 Sep 2021 16:01:50 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Kristjan Mustkivi <[email protected]> writes:\n> On Wed, Sep 15, 2021 at 3:16 PM Tom Lane <[email protected]> wrote:\n>> Note the lack of any visible cast on the varchar column, in each one of\n>> these queries, even where I tried to force one to appear. There is\n>> something happening in your database that is not happening in mine.\n\n> The following extensions have been installed:\n> [ nothing very exciting ]\n\nI still get the same results after installing those extensions.\n\nI realized that the reason I don't see a cast is that\nfix_indexqual_operand removes the cast from an index qualifier\nexpression's index-column side. In any other context, there would\nbe a cast there, since the operator is =(text,text) not\n=(varchar,varchar). So that seems like a red herring ... or is it?\nNow I'm confused by your original report, in which you show\n\n>>> -> Index Scan using mytable_pk on mytable pbh (cost=0.70..176.82 rows=186 width=66) (actual time=1.001..8.610 rows=25 loops=1)\n>>> Index Cond: ((cage_code = 123) AND (cage_player_id = '12345'::bigint) AND ((product_code)::text = 'PRODUCT'::text) AND ((balance_type)::text = 'TOTAL'::text))\n>>> Filter: (modified_time < '2021-09-13 04:00:00+00'::timestamp with time zone)\n\nAccording to the code I just looked at, there should absolutely not\nbe casts on the product_code and balance_type index columns here.\nSo I'm not quite sure what's up ... -ENOCAFFEINE perhaps.\n\nNonetheless, this point is probably just a sideshow. The above\nEXPLAIN output proves that the planner *can* match this index,\nwhich destroys my idea that you had a datatype mismatch preventing\nit from doing so.\n\nAfter looking again at the original problem, I think you are getting\nbit by an issue we've seen before. The planner is coming out with\na decently accurate cost estimate for the query when specific values\nare inserted for the parameters. 
However, when it considers a generic\nversion of the query with no known parameter values, the cost estimates\nare not so good, and by chance it comes out estimating a very low cost\nfor the alternative plan that uses the other index. That cost is not\nright, but the planner doesn't know that, so it seizes on that plan.\n\nThis is a hard problem to fix, and we don't have a good answer for it.\nIn v12 and up, you can use the big hammer of disabling generic plans by\nsetting plan_cache_mode to \"force_custom_plan\", but v11 doesn't have\nthat parameter. You might need to avoid using a prepared statement for\nthis query.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 10:34:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres chooses slow query plan from time to time" }, { "msg_contents": "Understood.\n\nThank you so much for looking into this!\n\nBest regards,\n\nKristjan\n\nOn Wed, Sep 15, 2021 at 5:34 PM Tom Lane <[email protected]> wrote:\n>\n> Kristjan Mustkivi <[email protected]> writes:\n> > On Wed, Sep 15, 2021 at 3:16 PM Tom Lane <[email protected]> wrote:\n> >> Note the lack of any visible cast on the varchar column, in each one of\n> >> these queries, even where I tried to force one to appear. There is\n> >> something happening in your database that is not happening in mine.\n>\n> > The following extensions have been installed:\n> > [ nothing very exciting ]\n>\n> I still get the same results after installing those extensions.\n>\n> I realized that the reason I don't see a cast is that\n> fix_indexqual_operand removes the cast from an index qualifier\n> expression's index-column side. In any other context, there would\n> be a cast there, since the operator is =(text,text) not\n> =(varchar,varchar). So that seems like a red herring ... or is it?\n> Now I'm confused by your original report, in which you show\n>\n> >>> -> Index Scan using mytable_pk on mytable pbh (cost=0.70..176.82 rows=186 width=66) (actual time=1.001..8.610 rows=25 loops=1)\n> >>> Index Cond: ((cage_code = 123) AND (cage_player_id = '12345'::bigint) AND ((product_code)::text = 'PRODUCT'::text) AND ((balance_type)::text = 'TOTAL'::text))\n> >>> Filter: (modified_time < '2021-09-13 04:00:00+00'::timestamp with time zone)\n>\n> According to the code I just looked at, there should absolutely not\n> be casts on the product_code and balance_type index columns here.\n> So I'm not quite sure what's up ... -ENOCAFFEINE perhaps.\n>\n> Nonetheless, this point is probably just a sideshow. The above\n> EXPLAIN output proves that the planner *can* match this index,\n> which destroys my idea that you had a datatype mismatch preventing\n> it from doing so.\n>\n> After looking again at the original problem, I think you are getting\n> bit by an issue we've seen before. The planner is coming out with\n> a decently accurate cost estimate for the query when specific values\n> are inserted for the parameters. However, when it considers a generic\n> version of the query with no known parameter values, the cost estimates\n> are not so good, and by chance it comes out estimating a very low cost\n> for the alternative plan that uses the other index. 
That cost is not\n> right, but the planner doesn't know that, so it seizes on that plan.\n>\n> This is a hard problem to fix, and we don't have a good answer for it.\n> In v12 and up, you can use the big hammer of disabling generic plans by\n> setting plan_cache_mode to \"force_custom_plan\", but v11 doesn't have\n> that parameter. You might need to avoid using a prepared statement for\n> this query.\n>\n> regards, tom lane\n\n\n\n-- \nKristjan Mustkivi\n\nEmail: [email protected]\n\n\n", "msg_date": "Thu, 16 Sep 2021 10:09:09 +0300", "msg_from": "Kristjan Mustkivi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres chooses slow query plan from time to time" } ]
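For reference, a minimal sketch of the workaround mentioned at the end of the thread above, for readers on PostgreSQL 12 or later where the plan_cache_mode setting exists (on 11 the only option given is to stop using a prepared statement for this query). The table and column names are taken from the thread's EXPLAIN output; the role name app_role and the prepared-statement name are assumptions for illustration only, and this is a sketch rather than a definitive recipe:

-- PostgreSQL 12+: always build a custom plan from the actual parameter values
SET plan_cache_mode = 'force_custom_plan';

-- or persist the setting for the application role only (assumed role name)
ALTER ROLE app_role SET plan_cache_mode = 'force_custom_plan';

-- each EXECUTE is then planned with the concrete values, so the badly
-- estimated generic plan is never chosen
PREPARE balance_lookup AS
  SELECT * FROM mytable
   WHERE cage_code = $1 AND cage_player_id = $2
     AND product_code = $3 AND balance_type = $4
     AND modified_time < $5;
EXECUTE balance_lookup(123, 12345, 'PRODUCT', 'TOTAL', '2021-09-13 04:00:00+00');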
[ { "msg_contents": "The company I work for will test EnterpriseDB. I am fairly well \nacquainted with Postgres but have no experience whatsoever with \nEnterpriseDB. How compatible to Postgres it is? Do pgAdmin4 and pgBadger \nwork with EnterpriseDB? Are psql commands the same? Can anyone here \nshare some impressions?\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Mon, 13 Sep 2021 22:52:53 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "EnterpriseDB" }, { "msg_contents": "EnterpriseDB is basically postgres with the added oracle compatability \nand some added external tools. THe default user & db will no longer be \npostgres but 'enterprisedb', but it is still postgresql so you won't \nhave any issues working with EDB if you already know postgres\n\n\n\nOn 9/13/21 20:52, Mladen Gogala wrote:\n> The company I work for will test EnterpriseDB. I am fairly well \n> acquainted with Postgres but have no experience whatsoever with \n> EnterpriseDB. How compatible to Postgres it is? Do pgAdmin4 and \n> pgBadger work with EnterpriseDB? Are psql commands the same? Can \n> anyone here share some impressions?\n>\n> Regards\n>\n\n\n", "msg_date": "Tue, 14 Sep 2021 09:03:15 -0600", "msg_from": "sbob <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EnterpriseDB" }, { "msg_contents": "In addition to the Sbob comments, if we install the EDB with postgres compatibility option in such a case we may continue to use postgres user as default super user and other configuration parameters would remain the same as community PostgreSQL.  \nIt would work perfectly fine with pgAdmin4 and pgbadger.\nIf you choose to install in Compatible with PostgreSQL mode, the Advanced Server superuser name is postgres.\nThanks and Regards,\nManish Yadav\nMobile : +91 8527607945 \n\n On Tuesday, 14 September 2021, 08:33:31 PM IST, sbob <[email protected]> wrote: \n \n EnterpriseDB is basically postgres with the added oracle compatability \nand some added external tools. THe default user & db will no longer be \npostgres but 'enterprisedb', but it is still postgresql so you won't \nhave any issues working with EDB if you already know postgres\n\n\n\nOn 9/13/21 20:52, Mladen Gogala wrote:\n> The company I work for will test EnterpriseDB. I am fairly well \n> acquainted with Postgres but have no experience whatsoever with \n> EnterpriseDB. How compatible to Postgres it is? Do pgAdmin4 and \n> pgBadger work with EnterpriseDB? Are psql commands the same? Can \n> anyone here share some impressions?\n>\n> Regards\n>\n\n\n \nIn addition to the Sbob comments, if we install the EDB with postgres compatibility option in such a case we may continue to use postgres user as default super user and other configuration parameters would remain the same as community PostgreSQL.  It would work perfectly fine with pgAdmin4 and pgbadger.If you choose to install in Compatible with PostgreSQL mode, the Advanced Server superuser name is postgres.Thanks and Regards,Manish YadavMobile : +91 8527607945\n\n\n\n\n On Tuesday, 14 September 2021, 08:33:31 PM IST, sbob <[email protected]> wrote:\n \n\n\nEnterpriseDB is basically postgres with the added oracle compatability and some added external tools. 
THe default user & db will no longer be postgres but 'enterprisedb', but it is still postgresql so you won't have any issues working with EDB if you already know postgresOn 9/13/21 20:52, Mladen Gogala wrote:> The company I work for will test EnterpriseDB. I am fairly well > acquainted with Postgres but have no experience whatsoever with > EnterpriseDB. How compatible to Postgres it is? Do pgAdmin4 and > pgBadger work with EnterpriseDB? Are psql commands the same? Can > anyone here share some impressions?>> Regards>", "msg_date": "Tue, 14 Sep 2021 16:22:53 +0000 (UTC)", "msg_from": "manish yadav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EnterpriseDB" } ]
[ { "msg_contents": "I have a PL/pgSQL function that I want to call within a query, but the\nfunction is fairly expensive to execute so I only want it executed once\nwithin the query. However the planner seems to reorganize my query so that\nit calls the function for every row.\n\nWe were previously on Pg 9.6 and this wasn't a problem then. But now that\nwe have upgraded to Pg 13, the behaviour has changed.\n\nI thought that marking the function as STABLE would mean that the function\nwould only be called once within a query, but this doesn't seem to be the\ncase. (Note: the function isn't IMMUTABLE). I've also tried increasing the\ncost of the function, but this doesn't make any difference.\n\n From looking at previous posts I discovered that putting \"offset 0\" on the\nfunction call in a \"with\" clause means that it only gets called once\n(because then the Common Table Expression isn't combined with the rest of\nthe query).\n\nThis does work, however it seems rather a kludge (and might not work in\nfuture versions of PostgreSQL).\n\nThere must be a \"proper\" way to get the planner to call a function only\nonce.\n\nPostgres version: PostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by gcc\n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n\nHere's a simple test case that demonstrates the issue:\n\ncreate or replace function test_caching(v integer)\n returns text\nas\n$BODY$\nbegin\n raise NOTICE 'In test_caching(%) function', v;\n return 'Test';\nend\n$BODY$\nLANGUAGE plpgsql STABLE\nCOST 500;\n\nselect n, test_caching(7) from generate_series(1, 10) n;\n-- test_caching(...) is called 10 times\n\nwith tc as (\n select test_caching(7)\n)\nselect n, tc.test_caching\nfrom tc\ncross join generate_series(1, 10) n;\n-- test_caching(...) is called 10 times\n-- (in Pg 9.6, test_caching(...) is only called once)\n\nwith tc as (\n select test_caching(7) offset 0\n)\nselect n, tc.test_caching\nfrom tc\ncross join generate_series(1, 10) n;\n-- test_caching(...) is called once\n-- works, but a kludge\n\nSteve\n-- \nSteve Pritchard\nDatabase Developer\n\nBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK\nTel: +44 (0)1842 750050, fax: +44 (0)1842 750030\nRegistered Charity No 216652 (England & Wales) No SC039193 (Scotland)\nCompany Limited by Guarantee No 357284 (England & Wales)\n\nI have a PL/pgSQL function that I want to call within a query, but the function is fairly expensive to execute so I only want it executed once within the query. However the planner seems to reorganize my query so that it calls the function for every row.We were previously on Pg 9.6 and this wasn't a problem then. But now that we have upgraded to Pg 13, the behaviour has changed.I thought that marking the function as STABLE would mean that the function would only be called once within a query, but this doesn't seem to be the case. (Note: the function isn't IMMUTABLE). I've also tried increasing the cost of the function, but this doesn't make any difference.From looking at previous posts I discovered that putting \"offset 0\" on the function call in a \"with\" clause means that it only gets called once (because then the Common Table Expression isn't combined with the rest of the query).This does work, however it seems rather a kludge (and might not work in future versions of PostgreSQL). 
There must be a \"proper\" way to get the planner to call a function only once.Postgres version: PostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bitHere's a simple test case that demonstrates the issue:create or replace function test_caching(v integer)  returns textas$BODY$begin  raise NOTICE 'In test_caching(%) function', v;  return 'Test';end$BODY$LANGUAGE plpgsql STABLECOST 500;select n, test_caching(7) from generate_series(1, 10) n;-- test_caching(...) is called 10 timeswith tc as (  select test_caching(7))select n, tc.test_cachingfrom tccross join generate_series(1, 10) n;-- test_caching(...) is called 10 times-- (in Pg 9.6, test_caching(...) is only called once)with tc as (  select test_caching(7) offset 0)select n, tc.test_cachingfrom tccross join generate_series(1, 10) n;-- test_caching(...) is called once-- works, but a kludgeSteve-- Steve PritchardDatabase DeveloperBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK Tel: +44 (0)1842 750050, fax: +44 (0)1842 750030Registered Charity No 216652 (England & Wales) No SC039193 (Scotland)Company Limited by Guarantee No 357284 (England & Wales)", "msg_date": "Thu, 16 Sep 2021 09:51:31 +0100", "msg_from": "Steve Pritchard <[email protected]>", "msg_from_op": true, "msg_subject": "Want function to be called only once in query" }, { "msg_contents": "On Thu, Sep 16, 2021 at 4:51 AM Steve Pritchard <[email protected]> wrote:\n>\n> I have a PL/pgSQL function that I want to call within a query, but the function is fairly expensive to execute so I only want it executed once within the query. However the planner seems to reorganize my query so that it calls the function for every row.\n>\n> We were previously on Pg 9.6 and this wasn't a problem then. But now that we have upgraded to Pg 13, the behaviour has changed.\n>\n\nThe behavior for planning a CTE changed in PG12.\n\n> There must be a \"proper\" way to get the planner to call a function only once.\n>\nAdd the MATERIALIZED keyword to the WITH statement\n\n\n", "msg_date": "Thu, 16 Sep 2021 06:56:05 -0400", "msg_from": "Jim Mlodgenski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Want function to be called only once in query" }, { "msg_contents": "> Add the MATERIALIZED keyword to the WITH statement\n\nMany thanks Jim, that's just what I needed - that does the trick.\n\nIt's hard to keep abreast of these SQL changes. Thank goodness for mailing\nlists!\n\nSteve\n\nOn Thu, 16 Sept 2021 at 11:56, Jim Mlodgenski <[email protected]> wrote:\n\n> On Thu, Sep 16, 2021 at 4:51 AM Steve Pritchard <[email protected]>\n> wrote:\n> >\n> > I have a PL/pgSQL function that I want to call within a query, but the\n> function is fairly expensive to execute so I only want it executed once\n> within the query. However the planner seems to reorganize my query so that\n> it calls the function for every row.\n> >\n> > We were previously on Pg 9.6 and this wasn't a problem then. 
But now\n> that we have upgraded to Pg 13, the behaviour has changed.\n> >\n>\n> The behavior for planning a CTE changed in PG12.\n>\n> > There must be a \"proper\" way to get the planner to call a function only\n> once.\n> >\n> Add the MATERIALIZED keyword to the WITH statement\n>\n\n\n-- \nSteve Pritchard\nDatabase Developer\n\nBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK\nTel: +44 (0)1842 750050, fax: +44 (0)1842 750030\nRegistered Charity No 216652 (England & Wales) No SC039193 (Scotland)\nCompany Limited by Guarantee No 357284 (England & Wales)\n\n> Add the MATERIALIZED keyword to the WITH statementMany thanks Jim, that's just what I needed - that does the trick.It's hard to keep abreast of these SQL changes. Thank goodness for mailing lists!SteveOn Thu, 16 Sept 2021 at 11:56, Jim Mlodgenski <[email protected]> wrote:On Thu, Sep 16, 2021 at 4:51 AM Steve Pritchard <[email protected]> wrote:\n>\n> I have a PL/pgSQL function that I want to call within a query, but the function is fairly expensive to execute so I only want it executed once within the query. However the planner seems to reorganize my query so that it calls the function for every row.\n>\n> We were previously on Pg 9.6 and this wasn't a problem then. But now that we have upgraded to Pg 13, the behaviour has changed.\n>\n\nThe behavior for planning a CTE changed in PG12.\n\n> There must be a \"proper\" way to get the planner to call a function only once.\n>\nAdd the MATERIALIZED keyword to the WITH statement\n-- Steve PritchardDatabase DeveloperBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK Tel: +44 (0)1842 750050, fax: +44 (0)1842 750030Registered Charity No 216652 (England & Wales) No SC039193 (Scotland)Company Limited by Guarantee No 357284 (England & Wales)", "msg_date": "Thu, 16 Sep 2021 12:07:53 +0100", "msg_from": "Steve Pritchard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Want function to be called only once in query" } ]
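For reference, a minimal sketch of the fix suggested above, applied to the test case already shown in the thread (the MATERIALIZED keyword exists from PostgreSQL 12 onwards, the release in which CTE planning changed). Nothing here is new apart from the keyword itself:

with tc as materialized (
  select test_caching(7)
)
select n, tc.test_caching
from tc
cross join generate_series(1, 10) n;
-- the CTE is evaluated once and its result reused, so test_caching(...)
-- is called a single time, matching the pre-12 behaviour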
[ { "msg_contents": "Hi there,\n\nA database cluster (PostgreSQL 12.4 running on Amazon Aurora @\ndb.r5.xlarge) with a single database of mine consists of 1,656,618 rows in\npg_class. Using pg_dump on that database leads to excessive memory usage\nand sometimes even a kill by signal 9:\n\n2021-09-18 16:51:24 UTC::@:[29787]:LOG: Aurora Runtime process (PID 29794)\nwas terminated by signal 9: Killed\n2021-09-18 16:51:25 UTC::@:[29787]:LOG: terminating any other active\nserver processes\n2021-09-18 16:51:27 UTC::@:[29787]:FATAL: Can't handle storage runtime\nprocess crash\n2021-09-18 16:51:31 UTC::@:[29787]:LOG: database system is shut down\n\nThe query that is being fired by pg_dump is the following:\nSELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT\npg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\npg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\nWITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\npg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\nAS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl, (SELECT\npg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\npg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\nWITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\npg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\nAS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL AS\ninittypacl, NULL AS initrtypacl, (SELECT rolname FROM pg_catalog.pg_roles\nWHERE oid = t.typowner) AS rolname, t.typelem, t.typrelid, CASE WHEN\nt.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT relkind FROM pg_class WHERE\noid = t.typrelid) END AS typrelkind, t.typtype, t.typisdefined,\nt.typname[0] = '_' AND t.typelem != 0 AND (SELECT typarray FROM pg_type te\nWHERE oid = t.typelem) = t.oid AS isarray FROM pg_type t LEFT JOIN\npg_init_privs pip ON (t.oid = pip.objoid AND pip.classoid =\n'pg_type'::regclass AND pip.objsubid = 0);\n\nThe query plan looks like this. It takes almost 13 minutes(!) 
to execute\nthat query:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=4.65..8147153.76 rows=1017962 width=280) (actual\ntime=2.526..106999.294 rows=1026902 loops=1)\n Hash Cond: (t.oid = pip.objoid)\n -> Seq Scan on pg_type t (cost=0.00..36409.62 rows=1017962 width=122)\n(actual time=0.008..8836.693 rows=1026902 loops=1)\n -> Hash (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972\nrows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Seq Scan on pg_init_privs pip (cost=0.00..4.64 rows=1\nwidth=45) (actual time=2.341..22.109 rows=0 loops=1)\n Filter: ((classoid = '1247'::oid) AND (objsubid = 0))\n Rows Removed by Filter: 176\n SubPlan 1\n -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\ntime=0.031..0.031 rows=1 loops=1026902)\n -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\ntime=0.008..0.008 rows=0 loops=1026902)\n Hash Cond: (perm.acl = init.init_acl)\n -> Function Scan on unnest perm (cost=0.01..0.11 rows=10\nwidth=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\ntime=0.002..0.002 rows=2 loops=1026902)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Function Scan on unnest init (cost=0.01..0.11\nrows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)\n SubPlan 2\n -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\ntime=0.050..0.050 rows=1 loops=1026902)\n -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\ntime=0.008..0.008 rows=0 loops=1026902)\n Hash Cond: (initp.acl = permp.orig_acl)\n -> Function Scan on unnest initp (cost=0.01..0.11\nrows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\ntime=0.002..0.002 rows=2 loops=1026902)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Function Scan on unnest permp (cost=0.01..0.11\nrows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)\n SubPlan 3\n -> Index Scan using pg_authid_oid_index on pg_authid\n (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1\nloops=1026902)\n Index Cond: (oid = t.typowner)\n SubPlan 4\n -> Index Scan using pg_class_oid_index on pg_class (cost=0.43..2.45\nrows=1 width=1) (actual time=0.003..0.003 rows=1 loops=671368)\n Index Cond: (oid = t.typrelid)\n SubPlan 5\n -> Index Scan using pg_type_oid_index on pg_type te (cost=0.42..2.44\nrows=1 width=4) (actual time=0.020..0.020 rows=1 loops=355428)\n Index Cond: (oid = t.typelem)\n Planning Time: 0.535 ms\n Execution Time: 774011.175 ms\n(35 rows)\n\nThe high number of rows in pg_class result from more than ~550 schemata,\neach containing more than 600 tables. It's part of a multi tenant setup\nwhere each tenant lives in its own schema.\n\nI began to move schemata to another database cluster to reduce the number\nof rows in pg_class but I'm having a hard time doing so as a call to\npg_dump might result in a database restart.\n\nIs there anything I can do to improve that situation? Next thing that comes\nto my mind is to distribute those ~550 schemata over 5 to 6 databases in\none database cluster instead of having one single database.\n\nBest regards\nUlf\n\nHi there,A database cluster (PostgreSQL 12.4 running on Amazon Aurora @ db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in pg_class. 
Using pg_dump on that database leads to excessive memory usage and sometimes even a kill by signal 9:2021-09-18 16:51:24 UTC::@:[29787]:LOG:  Aurora Runtime process (PID 29794) was terminated by signal 9: Killed2021-09-18 16:51:25 UTC::@:[29787]:LOG:  terminating any other active server processes2021-09-18 16:51:27 UTC::@:[29787]:FATAL:  Can't handle storage runtime process crash2021-09-18 16:51:31 UTC::@:[29787]:LOG:  database system is shut downThe query that is being fired by pg_dump is the following:SELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL AS inittypacl, NULL AS initrtypacl, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = t.typowner) AS rolname, t.typelem, t.typrelid, CASE WHEN t.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT relkind FROM pg_class WHERE oid = t.typrelid) END AS typrelkind, t.typtype, t.typisdefined, t.typname[0] = '_' AND t.typelem != 0 AND (SELECT typarray FROM pg_type te WHERE oid = t.typelem) = t.oid AS isarray FROM pg_type t LEFT JOIN pg_init_privs pip ON (t.oid = pip.objoid AND pip.classoid = 'pg_type'::regclass AND pip.objsubid = 0);The query plan looks like this. It takes almost 13 minutes(!) 
to execute that query:                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=4.65..8147153.76 rows=1017962 width=280) (actual time=2.526..106999.294 rows=1026902 loops=1)   Hash Cond: (t.oid = pip.objoid)   ->  Seq Scan on pg_type t  (cost=0.00..36409.62 rows=1017962 width=122) (actual time=0.008..8836.693 rows=1026902 loops=1)   ->  Hash  (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972 rows=0 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 8kB         ->  Seq Scan on pg_init_privs pip  (cost=0.00..4.64 rows=1 width=45) (actual time=2.341..22.109 rows=0 loops=1)               Filter: ((classoid = '1247'::oid) AND (objsubid = 0))               Rows Removed by Filter: 176   SubPlan 1     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.031..0.031 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (perm.acl = init.init_acl)                 ->  Function Scan on unnest perm  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest init  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 2     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.050..0.050 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (initp.acl = permp.orig_acl)                 ->  Function Scan on unnest initp  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest permp  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 3     ->  Index Scan using pg_authid_oid_index on pg_authid  (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1 loops=1026902)           Index Cond: (oid = t.typowner)   SubPlan 4     ->  Index Scan using pg_class_oid_index on pg_class  (cost=0.43..2.45 rows=1 width=1) (actual time=0.003..0.003 rows=1 loops=671368)           Index Cond: (oid = t.typrelid)   SubPlan 5     ->  Index Scan using pg_type_oid_index on pg_type te  (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1 loops=355428)           Index Cond: (oid = t.typelem) Planning Time: 0.535 ms Execution Time: 774011.175 ms(35 rows)The high number of rows in pg_class result from more than ~550 schemata, each containing more than 600 tables. It's part of a multi tenant setup where each tenant lives in its own schema.I began to move schemata to another database cluster to reduce the number of rows in pg_class but I'm having a hard time doing so as a call to pg_dump might result in a database restart.Is there anything I can do to improve that situation? 
Next thing that comes to my mind is to distribute those ~550 schemata over 5 to 6 databases in one database cluster instead of having one single database.Best regardsUlf", "msg_date": "Sun, 19 Sep 2021 12:05:25 +0200", "msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>", "msg_from_op": true, "msg_subject": "Query executed during pg_dump leads to excessive memory usage" }, { "msg_contents": "Em dom., 19 de set. de 2021 às 07:05, Ulf Lohbrügge <\[email protected]> escreveu:\n\n> Hi there,\n>\n> A database cluster (PostgreSQL 12.4 running on Amazon Aurora @\n> db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in\n> pg_class. Using pg_dump on that database leads to excessive memory usage\n> and sometimes even a kill by signal 9:\n>\n> 2021-09-18 16:51:24 UTC::@:[29787]:LOG: Aurora Runtime process (PID\n> 29794) was terminated by signal 9: Killed\n> 2021-09-18 16:51:25 UTC::@:[29787]:LOG: terminating any other active\n> server processes\n> 2021-09-18 16:51:27 UTC::@:[29787]:FATAL: Can't handle storage runtime\n> process crash\n> 2021-09-18 16:51:31 UTC::@:[29787]:LOG: database system is shut down\n>\n> The query that is being fired by pg_dump is the following:\n> SELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT\n> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\n> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n> WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\n> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n> AS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl, (SELECT\n> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\n> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n> WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\n> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n> AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL AS\n> inittypacl, NULL AS initrtypacl, (SELECT rolname FROM pg_catalog.pg_roles\n> WHERE oid = t.typowner) AS rolname, t.typelem, t.typrelid, CASE WHEN\n> t.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT relkind FROM pg_class WHERE\n> oid = t.typrelid) END AS typrelkind, t.typtype, t.typisdefined,\n> t.typname[0] = '_' AND t.typelem != 0 AND (SELECT typarray FROM pg_type te\n> WHERE oid = t.typelem) = t.oid AS isarray FROM pg_type t LEFT JOIN\n> pg_init_privs pip ON (t.oid = pip.objoid AND pip.classoid =\n> 'pg_type'::regclass AND pip.objsubid = 0);\n>\n> The query plan looks like this. It takes almost 13 minutes(!) 
to execute\n> that query:\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=4.65..8147153.76 rows=1017962 width=280) (actual\n> time=2.526..106999.294 rows=1026902 loops=1)\n> Hash Cond: (t.oid = pip.objoid)\n> -> Seq Scan on pg_type t (cost=0.00..36409.62 rows=1017962 width=122)\n> (actual time=0.008..8836.693 rows=1026902 loops=1)\n> -> Hash (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972\n> rows=0 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n> -> Seq Scan on pg_init_privs pip (cost=0.00..4.64 rows=1\n> width=45) (actual time=2.341..22.109 rows=0 loops=1)\n> Filter: ((classoid = '1247'::oid) AND (objsubid = 0))\n> Rows Removed by Filter: 176\n> SubPlan 1\n> -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\n> time=0.031..0.031 rows=1 loops=1026902)\n> -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\n> time=0.008..0.008 rows=0 loops=1026902)\n> Hash Cond: (perm.acl = init.init_acl)\n> -> Function Scan on unnest perm (cost=0.01..0.11\n> rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\n> time=0.002..0.002 rows=2 loops=1026902)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> Function Scan on unnest init (cost=0.01..0.11\n> rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)\n> SubPlan 2\n> -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\n> time=0.050..0.050 rows=1 loops=1026902)\n> -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\n> time=0.008..0.008 rows=0 loops=1026902)\n> Hash Cond: (initp.acl = permp.orig_acl)\n> -> Function Scan on unnest initp (cost=0.01..0.11\n> rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\n> time=0.002..0.002 rows=2 loops=1026902)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> Function Scan on unnest permp (cost=0.01..0.11\n> rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)\n> SubPlan 3\n> -> Index Scan using pg_authid_oid_index on pg_authid\n> (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1\n> loops=1026902)\n> Index Cond: (oid = t.typowner)\n> SubPlan 4\n> -> Index Scan using pg_class_oid_index on pg_class (cost=0.43..2.45\n> rows=1 width=1) (actual time=0.003..0.003 rows=1 loops=671368)\n> Index Cond: (oid = t.typrelid)\n> SubPlan 5\n> -> Index Scan using pg_type_oid_index on pg_type te\n> (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1\n> loops=355428)\n> Index Cond: (oid = t.typelem)\n> Planning Time: 0.535 ms\n> Execution Time: 774011.175 ms\n> (35 rows)\n>\n> The high number of rows in pg_class result from more than ~550 schemata,\n> each containing more than 600 tables. It's part of a multi tenant setup\n> where each tenant lives in its own schema.\n>\n> I began to move schemata to another database cluster to reduce the number\n> of rows in pg_class but I'm having a hard time doing so as a call to\n> pg_dump might result in a database restart.\n>\n> Is there anything I can do to improve that situation?\n>\nCan you try:\n\n1. Limit resource usage by Postgres, with cgroups configuration.\n2. pg_dump compression: man pgsql -Z\n3. Run vacuum and reindex before?\n\nregards,\nRanier Vilela\n\nEm dom., 19 de set. 
de 2021 às 07:05, Ulf Lohbrügge <[email protected]> escreveu:Hi there,A database cluster (PostgreSQL 12.4 running on Amazon Aurora @ db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in pg_class.
to execute that query:                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=4.65..8147153.76 rows=1017962 width=280) (actual time=2.526..106999.294 rows=1026902 loops=1)   Hash Cond: (t.oid = pip.objoid)   ->  Seq Scan on pg_type t  (cost=0.00..36409.62 rows=1017962 width=122) (actual time=0.008..8836.693 rows=1026902 loops=1)   ->  Hash  (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972 rows=0 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 8kB         ->  Seq Scan on pg_init_privs pip  (cost=0.00..4.64 rows=1 width=45) (actual time=2.341..22.109 rows=0 loops=1)               Filter: ((classoid = '1247'::oid) AND (objsubid = 0))               Rows Removed by Filter: 176   SubPlan 1     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.031..0.031 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (perm.acl = init.init_acl)                 ->  Function Scan on unnest perm  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest init  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 2     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.050..0.050 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (initp.acl = permp.orig_acl)                 ->  Function Scan on unnest initp  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest permp  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 3     ->  Index Scan using pg_authid_oid_index on pg_authid  (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1 loops=1026902)           Index Cond: (oid = t.typowner)   SubPlan 4     ->  Index Scan using pg_class_oid_index on pg_class  (cost=0.43..2.45 rows=1 width=1) (actual time=0.003..0.003 rows=1 loops=671368)           Index Cond: (oid = t.typrelid)   SubPlan 5     ->  Index Scan using pg_type_oid_index on pg_type te  (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1 loops=355428)           Index Cond: (oid = t.typelem) Planning Time: 0.535 ms Execution Time: 774011.175 ms(35 rows)The high number of rows in pg_class result from more than ~550 schemata, each containing more than 600 tables. It's part of a multi tenant setup where each tenant lives in its own schema.I began to move schemata to another database cluster to reduce the number of rows in pg_class but I'm having a hard time doing so as a call to pg_dump might result in a database restart.Is there anything I can do to improve that situation?Can you try:1. Limit resource usage by Postgres, with cgroups configuration.2. pg_dump compression: man pgsql -Z3. 
Run vacuum and reindex before?regards,Ranier Vilela", "msg_date": "Sun, 19 Sep 2021 09:05:56 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query executed during pg_dump leads to excessive memory usage" }, { "msg_contents": "On Sun, Sep 19, 2021 at 2:06 PM Ranier Vilela <[email protected]> wrote:\n\n> Can you try:\n>\n> 1. Limit resource usage by Postgres, with cgroups configuration.\n>\nSince the database cluster is running at AWS, I have no access to any\ncgroups configuration.\n\n\n> 2. pg_dump compression: man pgsql -Z\n>\nI don't see how this will improve the actual process of dumping the\ndatabase. If I understand correctly the compression is applied after the\ndata has been created by pg_dump.\n\n\n> 3. Run vacuum and reindex before?\n>\nI did a manual 'VACUUM ANALYZE;' on the whole database 2-3 weeks ago but\ndidn't check if a reindex is necessary yet. Table pg_stat_user_indexes\ncurrently lists 672,244 indexes. But I don't see how this will help with\nthe query I posted that almost takes 13 minutes to finish.\n\nBest regards\nUlf\n\nOn Sun, Sep 19, 2021 at 2:06 PM Ranier Vilela <[email protected]> wrote:\n\n> Em dom., 19 de set. de 2021 às 07:05, Ulf Lohbrügge <\n> [email protected]> escreveu:\n>\n>> Hi there,\n>>\n>> A database cluster (PostgreSQL 12.4 running on Amazon Aurora @\n>> db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in\n>> pg_class. Using pg_dump on that database leads to excessive memory usage\n>> and sometimes even a kill by signal 9:\n>>\n>> 2021-09-18 16:51:24 UTC::@:[29787]:LOG: Aurora Runtime process (PID\n>> 29794) was terminated by signal 9: Killed\n>> 2021-09-18 16:51:25 UTC::@:[29787]:LOG: terminating any other active\n>> server processes\n>> 2021-09-18 16:51:27 UTC::@:[29787]:FATAL: Can't handle storage runtime\n>> process crash\n>> 2021-09-18 16:51:31 UTC::@:[29787]:LOG: database system is shut down\n>>\n>> The query that is being fired by pg_dump is the following:\n>> SELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT\n>> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\n>> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n>> WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\n>> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n>> AS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl, (SELECT\n>> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM\n>> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n>> WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM\n>> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n>> AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL AS\n>> inittypacl, NULL AS initrtypacl, (SELECT rolname FROM pg_catalog.pg_roles\n>> WHERE oid = t.typowner) AS rolname, t.typelem, t.typrelid, CASE WHEN\n>> t.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT relkind FROM pg_class WHERE\n>> oid = t.typrelid) END AS typrelkind, t.typtype, t.typisdefined,\n>> t.typname[0] = '_' AND t.typelem != 0 AND (SELECT typarray FROM pg_type te\n>> WHERE oid = t.typelem) = t.oid AS isarray FROM pg_type t LEFT JOIN\n>> pg_init_privs pip ON (t.oid = pip.objoid AND pip.classoid =\n>> 'pg_type'::regclass AND pip.objsubid = 0);\n>>\n>> The query plan looks like this. It takes almost 13 minutes(!) 
to execute\n>> that query:\n>> QUERY\n>> PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Left Join (cost=4.65..8147153.76 rows=1017962 width=280) (actual\n>> time=2.526..106999.294 rows=1026902 loops=1)\n>> Hash Cond: (t.oid = pip.objoid)\n>> -> Seq Scan on pg_type t (cost=0.00..36409.62 rows=1017962\n>> width=122) (actual time=0.008..8836.693 rows=1026902 loops=1)\n>> -> Hash (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972\n>> rows=0 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n>> -> Seq Scan on pg_init_privs pip (cost=0.00..4.64 rows=1\n>> width=45) (actual time=2.341..22.109 rows=0 loops=1)\n>> Filter: ((classoid = '1247'::oid) AND (objsubid = 0))\n>> Rows Removed by Filter: 176\n>> SubPlan 1\n>> -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\n>> time=0.031..0.031 rows=1 loops=1026902)\n>> -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\n>> time=0.008..0.008 rows=0 loops=1026902)\n>> Hash Cond: (perm.acl = init.init_acl)\n>> -> Function Scan on unnest perm (cost=0.01..0.11\n>> rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n>> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\n>> time=0.002..0.002 rows=2 loops=1026902)\n>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>> -> Function Scan on unnest init (cost=0.01..0.11\n>> rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)\n>> SubPlan 2\n>> -> Aggregate (cost=0.38..0.39 rows=1 width=32) (actual\n>> time=0.050..0.050 rows=1 loops=1026902)\n>> -> Hash Anti Join (cost=0.24..0.37 rows=1 width=20) (actual\n>> time=0.008..0.008 rows=0 loops=1026902)\n>> Hash Cond: (initp.acl = permp.orig_acl)\n>> -> Function Scan on unnest initp (cost=0.01..0.11\n>> rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)\n>> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual\n>> time=0.002..0.002 rows=2 loops=1026902)\n>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>> -> Function Scan on unnest permp\n>> (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2\n>> loops=1026902)\n>> SubPlan 3\n>> -> Index Scan using pg_authid_oid_index on pg_authid\n>> (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1\n>> loops=1026902)\n>> Index Cond: (oid = t.typowner)\n>> SubPlan 4\n>> -> Index Scan using pg_class_oid_index on pg_class\n>> (cost=0.43..2.45 rows=1 width=1) (actual time=0.003..0.003 rows=1\n>> loops=671368)\n>> Index Cond: (oid = t.typrelid)\n>> SubPlan 5\n>> -> Index Scan using pg_type_oid_index on pg_type te\n>> (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1\n>> loops=355428)\n>> Index Cond: (oid = t.typelem)\n>> Planning Time: 0.535 ms\n>> Execution Time: 774011.175 ms\n>> (35 rows)\n>>\n>> The high number of rows in pg_class result from more than ~550 schemata,\n>> each containing more than 600 tables. It's part of a multi tenant setup\n>> where each tenant lives in its own schema.\n>>\n>> I began to move schemata to another database cluster to reduce the number\n>> of rows in pg_class but I'm having a hard time doing so as a call to\n>> pg_dump might result in a database restart.\n>>\n>> Is there anything I can do to improve that situation?\n>>\n> Can you try:\n>\n> 1. Limit resource usage by Postgres, with cgroups configuration.\n> 2. pg_dump compression: man pgsql -Z\n> 3. 
Run vacuum and reindex before?\n>\n> regards,\n> Ranier Vilela\n>\n\nOn Sun, Sep 19, 2021 at 2:06 PM Ranier Vilela <[email protected]> wrote:Can you try:1. Limit resource usage by Postgres, with cgroups configuration.Since the database cluster is running at AWS, I have no access to any cgroups configuration. 2. pg_dump compression: man pgsql -ZI don't see how this will improve the actual process of dumping the database. If I understand correctly the compression is applied after the data has been created by pg_dump. 3. Run vacuum and reindex before?I did a manual 'VACUUM ANALYZE;' on the whole database 2-3 weeks ago but didn't check if a reindex is necessary yet. Table pg_stat_user_indexes currently lists 672,244 indexes. But I don't see how this will help with the query I posted that almost takes 13 minutes to finish.Best regardsUlfOn Sun, Sep 19, 2021 at 2:06 PM Ranier Vilela <[email protected]> wrote:Em dom., 19 de set. de 2021 às 07:05, Ulf Lohbrügge <[email protected]> escreveu:Hi there,A database cluster (PostgreSQL 12.4 running on Amazon Aurora @ db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in pg_class. Using pg_dump on that database leads to excessive memory usage and sometimes even a kill by signal 9:2021-09-18 16:51:24 UTC::@:[29787]:LOG:  Aurora Runtime process (PID 29794) was terminated by signal 9: Killed2021-09-18 16:51:25 UTC::@:[29787]:LOG:  terminating any other active server processes2021-09-18 16:51:27 UTC::@:[29787]:FATAL:  Can't handle storage runtime process crash2021-09-18 16:51:31 UTC::@:[29787]:LOG:  database system is shut downThe query that is being fired by pg_dump is the following:SELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL AS inittypacl, NULL AS initrtypacl, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = t.typowner) AS rolname, t.typelem, t.typrelid, CASE WHEN t.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT relkind FROM pg_class WHERE oid = t.typrelid) END AS typrelkind, t.typtype, t.typisdefined, t.typname[0] = '_' AND t.typelem != 0 AND (SELECT typarray FROM pg_type te WHERE oid = t.typelem) = t.oid AS isarray FROM pg_type t LEFT JOIN pg_init_privs pip ON (t.oid = pip.objoid AND pip.classoid = 'pg_type'::regclass AND pip.objsubid = 0);The query plan looks like this. It takes almost 13 minutes(!) 
to execute that query:                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=4.65..8147153.76 rows=1017962 width=280) (actual time=2.526..106999.294 rows=1026902 loops=1)   Hash Cond: (t.oid = pip.objoid)   ->  Seq Scan on pg_type t  (cost=0.00..36409.62 rows=1017962 width=122) (actual time=0.008..8836.693 rows=1026902 loops=1)   ->  Hash  (cost=4.64..4.64 rows=1 width=45) (actual time=2.342..41.972 rows=0 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 8kB         ->  Seq Scan on pg_init_privs pip  (cost=0.00..4.64 rows=1 width=45) (actual time=2.341..22.109 rows=0 loops=1)               Filter: ((classoid = '1247'::oid) AND (objsubid = 0))               Rows Removed by Filter: 176   SubPlan 1     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.031..0.031 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (perm.acl = init.init_acl)                 ->  Function Scan on unnest perm  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest init  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 2     ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual time=0.050..0.050 rows=1 loops=1026902)           ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20) (actual time=0.008..0.008 rows=0 loops=1026902)                 Hash Cond: (initp.acl = permp.orig_acl)                 ->  Function Scan on unnest initp  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001 rows=2 loops=1026902)                 ->  Hash  (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=2 loops=1026902)                       Buckets: 1024  Batches: 1  Memory Usage: 9kB                       ->  Function Scan on unnest permp  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=2 loops=1026902)   SubPlan 3     ->  Index Scan using pg_authid_oid_index on pg_authid  (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002 rows=1 loops=1026902)           Index Cond: (oid = t.typowner)   SubPlan 4     ->  Index Scan using pg_class_oid_index on pg_class  (cost=0.43..2.45 rows=1 width=1) (actual time=0.003..0.003 rows=1 loops=671368)           Index Cond: (oid = t.typrelid)   SubPlan 5     ->  Index Scan using pg_type_oid_index on pg_type te  (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1 loops=355428)           Index Cond: (oid = t.typelem) Planning Time: 0.535 ms Execution Time: 774011.175 ms(35 rows)The high number of rows in pg_class result from more than ~550 schemata, each containing more than 600 tables. It's part of a multi tenant setup where each tenant lives in its own schema.I began to move schemata to another database cluster to reduce the number of rows in pg_class but I'm having a hard time doing so as a call to pg_dump might result in a database restart.Is there anything I can do to improve that situation?Can you try:1. Limit resource usage by Postgres, with cgroups configuration.2. pg_dump compression: man pgsql -Z3. 
Run vacuum and reindex before?regards,Ranier Vilela", "msg_date": "Sun, 19 Sep 2021 20:28:48 +0200", "msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query executed during pg_dump leads to excessive memory usage" }, { "msg_contents": "On 9/19/2021 8:05 AM, Ranier Vilela wrote:\n> Em dom., 19 de set. de 2021 às 07:05, Ulf Lohbrügge \n> <[email protected] <mailto:[email protected]>> escreveu:\n>\n> Hi there,\n>\n> A database cluster (PostgreSQL 12.4 running on Amazon Aurora @\n> db.r5.xlarge) with a single database of mine consists of 1,656,618\n> rows in pg_class. Using pg_dump on that database leads to\n> excessive memory usage and sometimes even a kill by signal 9:\n>\n> 2021-09-18 16:51:24 UTC::@:[29787]:LOG:  Aurora Runtime process\n> (PID 29794) was terminated by signal 9: Killed\n> 2021-09-18 16:51:25 UTC::@:[29787]:LOG:  terminating any other\n> active server processes\n> 2021-09-18 16:51:27 UTC::@:[29787]:FATAL:  Can't handle storage\n> runtime process crash\n> 2021-09-18 16:51:31 UTC::@:[29787]:LOG:  database system is shut down\n>\n> The query that is being fired by pg_dump is the following:\n> SELECT t.tableoid, t.oid, t.typname, t.typnamespace, (SELECT\n> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n\n> FROM\n> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n> WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1\n> FROM\n> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n> AS init(init_acl) WHERE acl = init_acl)) as foo) AS typacl,\n> (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl,\n> row_n FROM\n> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('T',t.typowner)))\n> WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1\n> FROM\n> pg_catalog.unnest(coalesce(t.typacl,pg_catalog.acldefault('T',t.typowner)))\n> AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rtypacl, NULL\n> AS inittypacl, NULL AS initrtypacl, (SELECT rolname FROM\n> pg_catalog.pg_roles WHERE oid = t.typowner) AS rolname, t.typelem,\n> t.typrelid, CASE WHEN t.typrelid = 0 THEN ' '::\"char\" ELSE (SELECT\n> relkind FROM pg_class WHERE oid = t.typrelid) END AS typrelkind,\n> t.typtype, t.typisdefined, t.typname[0] = '_' AND t.typelem != 0\n> AND (SELECT typarray FROM pg_type te WHERE oid = t.typelem) =\n> t.oid AS isarray FROM pg_type t LEFT JOIN pg_init_privs pip ON\n> (t.oid = pip.objoid AND pip.classoid = 'pg_type'::regclass AND\n> pip.objsubid = 0);\n>\n> The query plan looks like this. It takes almost 13 minutes(!) 
to\n> execute that query:\n>             QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n>  Hash Left Join  (cost=4.65..8147153.76 rows=1017962 width=280)\n> (actual time=2.526..106999.294 rows=1026902 loops=1)\n>    Hash Cond: (t.oid = pip.objoid)\n>    ->  Seq Scan on pg_type t  (cost=0.00..36409.62 rows=1017962\n> width=122) (actual time=0.008..8836.693 rows=1026902 loops=1)\n>    ->  Hash  (cost=4.64..4.64 rows=1 width=45) (actual\n> time=2.342..41.972 rows=0 loops=1)\n>          Buckets: 1024  Batches: 1  Memory Usage: 8kB\n>          ->  Seq Scan on pg_init_privs pip  (cost=0.00..4.64\n> rows=1 width=45) (actual time=2.341..22.109 rows=0 loops=1)\n>                Filter: ((classoid = '1247'::oid) AND (objsubid = 0))\n>                Rows Removed by Filter: 176\n>    SubPlan 1\n>      ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual\n> time=0.031..0.031 rows=1 loops=1026902)\n>            ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20)\n> (actual time=0.008..0.008 rows=0 loops=1026902)\n>                  Hash Cond: (perm.acl = init.init_acl)\n>                  ->  Function Scan on unnest perm\n>  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001\n> rows=2 loops=1026902)\n>                  ->  Hash  (cost=0.11..0.11 rows=10 width=12)\n> (actual time=0.002..0.002 rows=2 loops=1026902)\n>                        Buckets: 1024  Batches: 1  Memory Usage: 9kB\n>                        ->  Function Scan on unnest init\n>  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001\n> rows=2 loops=1026902)\n>    SubPlan 2\n>      ->  Aggregate  (cost=0.38..0.39 rows=1 width=32) (actual\n> time=0.050..0.050 rows=1 loops=1026902)\n>            ->  Hash Anti Join  (cost=0.24..0.37 rows=1 width=20)\n> (actual time=0.008..0.008 rows=0 loops=1026902)\n>                  Hash Cond: (initp.acl = permp.orig_acl)\n>                  ->  Function Scan on unnest initp\n>  (cost=0.01..0.11 rows=10 width=20) (actual time=0.001..0.001\n> rows=2 loops=1026902)\n>                  ->  Hash  (cost=0.11..0.11 rows=10 width=12)\n> (actual time=0.002..0.002 rows=2 loops=1026902)\n>                        Buckets: 1024  Batches: 1  Memory Usage: 9kB\n>                        ->  Function Scan on unnest permp\n>  (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001\n> rows=2 loops=1026902)\n>    SubPlan 3\n>      ->  Index Scan using pg_authid_oid_index on pg_authid\n>  (cost=0.28..2.29 rows=1 width=64) (actual time=0.002..0.002\n> rows=1 loops=1026902)\n>            Index Cond: (oid = t.typowner)\n>    SubPlan 4\n>      ->  Index Scan using pg_class_oid_index on pg_class\n>  (cost=0.43..2.45 rows=1 width=1) (actual time=0.003..0.003 rows=1\n> loops=671368)\n>            Index Cond: (oid = t.typrelid)\n>    SubPlan 5\n>      ->  Index Scan using pg_type_oid_index on pg_type te\n>  (cost=0.42..2.44 rows=1 width=4) (actual time=0.020..0.020 rows=1\n> loops=355428)\n>            Index Cond: (oid = t.typelem)\n>  Planning Time: 0.535 ms\n>  Execution Time: 774011.175 ms\n> (35 rows)\n>\n> The high number of rows in pg_class result from more than ~550\n> schemata, each containing more than 600 tables. 
It's part of a\n> multi tenant setup where each tenant lives in its own schema.\n>\n> I began to move schemata to another database cluster to reduce the\n> number of rows in pg_class but I'm having a hard time doing so as\n> a call to pg_dump might result in a database restart.\n>\n> Is there anything I can do to improve that situation?\n>\n> Can you try:\n>\n> 1. Limit resource usage by Postgres, with cgroups configuration.\n> 2. pg_dump compression: man pgsql -Z\n> 3. Run vacuum and reindex before?\n>\n> regards,\n> Ranier Vilela\n\n\nTry setting enable_seqscan=off for the user executing pg_dump, usually \n\"postgres\".  You have a ton of full table scans and hash joins which are \nthe best plan for pg_dump, given the fact that you are dumping the \nentire database, but use a lot of memory.  Index scan will be slower, \nprobably a lot slower, but will avoid allocating large memory areas for \nhash.\n\n-- \nMladen Gogala\nOracle DBA\nTel: (347) 321-1217\nBlog: https://dbwhisperer.wordpress.com", "msg_date": "Sun, 19 Sep 2021 15:20:56 -0400", "msg_from": "\"Gogala, Mladen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query executed during pg_dump leads to excessive memory usage" }, { "msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> A database cluster (PostgreSQL 12.4 running on Amazon Aurora @\n> db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in\n> pg_class.\n\nOuch.\n\n> Using pg_dump on that database leads to excessive memory usage\n> and sometimes even a kill by signal 9:\n\n> 2021-09-18 16:51:24 UTC::@:[29787]:LOG: Aurora Runtime process (PID 29794)\n> was terminated by signal 9: Killed\n\nFor the record, Aurora isn't Postgres. It's a heavily-modified fork,\nwith (I imagine) different performance bottlenecks. Likely you\nshould be asking Amazon support about this before the PG community.\n\nHaving said that ...\n\n> The high number of rows in pg_class result from more than ~550 schemata,\n> each containing more than 600 tables. It's part of a multi tenant setup\n> where each tenant lives in its own schema.\n\n... you might have some luck dumping each schema separately, or at least\nin small groups, using pg_dump's --schema switch.\n\n> Is there anything I can do to improve that situation? Next thing that comes\n> to my mind is to distribute those ~550 schemata over 5 to 6 databases in\n> one database cluster instead of having one single database.\n\nYeah, you definitely don't want to have this many tables in one\ndatabase, especially not on a platform that's going to be chary\nof memory.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Sep 2021 15:21:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query executed during pg_dump leads to excessive memory usage" } ]
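A minimal sketch of the two workarounds suggested in the thread above, assuming the dump runs as the "postgres" role; the schema names are illustrative placeholders, not taken from the thread:

    -- Suggestion from Tom Lane: dump the schemata separately or in small groups,
    -- e.g. (shell): pg_dump --schema='tenant_0001' --schema='tenant_0002' -Fc mydb > tenants_0001_0002.dump
    -- Suggestion from Mladen Gogala: disable sequential scans for the role that runs pg_dump,
    -- trading dump speed for a smaller memory footprint:
    ALTER ROLE postgres SET enable_seqscan = off;
    -- ... run pg_dump ...
    ALTER ROLE postgres RESET enable_seqscan;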
[ { "msg_contents": "Hello,\n \nAs modern software is typically multi-tenant aware it is critical for DB to effectively filter\ndatabase records based on tenant ID context. Yet, we constantly hit the situations when Postgres 13.4 performs poorly.\nIf community is interested I can report such trivial and obvious cases for optimisation. Or even sponsor development a bit.\n \n1. Here is an example when tasks are selected for 1 tenant and everything is fine and index on (tenant_id, id) is used:\n \nSELECT * FROM \"tasks\" WHERE\n(tenant_id IN ('45AQ7HARTXQG1P6QNEDDA8A5V0'))\nORDER BY id desc LIMIT 100\nLimit  (cost=0.69..426.01 rows=100 width=1679) (actual time=0.023..0.209 rows=100 loops=1)\n  ->  Index Scan Backward using task_tenant_id_status_idx on tasks  (cost=0.69..25770.78 rows=6059 width=1679) (actual time=0.023..0.200 rows=100 loops=1)\n        Index Cond: (tenant_id = '45AQ7HARTXQG1P6QNEDDA8A5V0'::text)\nPlanning Time: 0.125 ms\nExecution Time: 0.231 ms\n \n2. Now when I add 2 additional tenant IDs to the query everything gets 100x worse, despite the fact that those 2 tenants do NOT have any records at all.\nThe reason is the wrong index on (tenant_id, status) is used:\n \nSELECT * FROM \"tasks\" WHERE\n(tenant_id IN ('222P0TQT0FAR86BR30BB50TZZX','1X2W2J9B2VVJFSXGWZYR3XEHJO','45AQ7HARTXQG1P6QNEDDA8A5V0'))\nORDER BY id desc LIMIT 100\nLimit  (cost=65506.24..65506.49 rows=100 width=1679) (actual time=93.972..93.989 rows=100 loops=1)\n  ->  Sort  (cost=65506.24..65551.68 rows=18178 width=1679) (actual time=93.970..93.979 rows=100 loops=1)\n        Sort Key: id DESC\n        Sort Method: top-N heapsort  Memory: 97kB\n        ->  Bitmap Heap Scan on tasks  (cost=322.56..64811.49 rows=18178 width=1679) (actual time=10.546..65.559 rows=29159 loops=1)\n              Recheck Cond: (tenant_id = ANY ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))\n              Heap Blocks: exact=27594\n              ->  Bitmap Index Scan on task_tenant_status_idx  (cost=0.00..318.01 rows=18178 width=0) (actual time=4.268..4.268 rows=29236 loops=1)\n                    Index Cond: (tenant_id = ANY ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))\nPlanning Time: 0.212 ms\nExecution Time: 94.051 ms\n \nis it possible somehow to force PG to use the correct index?\n \nRegards,\nKirill\nHello, As modern software is typically multi-tenant aware it is critical for DB to effectively filterdatabase records based on tenant ID context. Yet, we constantly hit the situations when Postgres 13.4 performs poorly.If community is interested I can report such trivial and obvious cases for optimisation. Or even sponsor development a bit. 1. Here is an example when tasks are selected for 1 tenant and everything is fine and index on (tenant_id, id) is used: SELECT * FROM \"tasks\" WHERE(tenant_id IN ('45AQ7HARTXQG1P6QNEDDA8A5V0'))ORDER BY id desc LIMIT 100Limit  (cost=0.69..426.01 rows=100 width=1679) (actual time=0.023..0.209 rows=100 loops=1)  ->  Index Scan Backward using task_tenant_id_status_idx on tasks  (cost=0.69..25770.78 rows=6059 width=1679) (actual time=0.023..0.200 rows=100 loops=1)        Index Cond: (tenant_id = '45AQ7HARTXQG1P6QNEDDA8A5V0'::text)Planning Time: 0.125 msExecution Time: 0.231 ms 2. 
Now when I add 2 additional tenant IDs to the query everything gets 100x worse, despite the fact that those 2 tenants do NOT have any records at all.The reason is the wrong index on (tenant_id, status) is used: SELECT * FROM \"tasks\" WHERE(tenant_id IN ('222P0TQT0FAR86BR30BB50TZZX','1X2W2J9B2VVJFSXGWZYR3XEHJO','45AQ7HARTXQG1P6QNEDDA8A5V0'))ORDER BY id desc LIMIT 100Limit  (cost=65506.24..65506.49 rows=100 width=1679) (actual time=93.972..93.989 rows=100 loops=1)  ->  Sort  (cost=65506.24..65551.68 rows=18178 width=1679) (actual time=93.970..93.979 rows=100 loops=1)        Sort Key: id DESC        Sort Method: top-N heapsort  Memory: 97kB        ->  Bitmap Heap Scan on tasks  (cost=322.56..64811.49 rows=18178 width=1679) (actual time=10.546..65.559 rows=29159 loops=1)              Recheck Cond: (tenant_id = ANY ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))              Heap Blocks: exact=27594              ->  Bitmap Index Scan on task_tenant_status_idx  (cost=0.00..318.01 rows=18178 width=0) (actual time=4.268..4.268 rows=29236 loops=1)                    Index Cond: (tenant_id = ANY ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))Planning Time: 0.212 msExecution Time: 94.051 ms is it possible somehow to force PG to use the correct index? Regards,Kirill", "msg_date": "Mon, 20 Sep 2021 15:33:16 +0300", "msg_from": "=?UTF-8?B?S2lyaWxs?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?bXVsdGktdGVuYW50IHF1ZXJpZXMgc2VsZWN0IHdyb25nIGluZGV4?=" }, { "msg_contents": "On 09/20/21 15:33, Kirill wrote:\n> Hello,\n> As modern software is typically multi-tenant aware it is critical for \n> DB to effectively filter\n> database records based on tenant ID context. Yet, we constantly hit \n> the situations when Postgres 13.4 performs poorly.\n> If community is interested I can report such trivial and obvious cases \n> for optimisation. Or even sponsor development a bit.\n> 1. Here is an example when tasks are selected for 1 tenant and \n> everything is fine and index on (tenant_id, id) is used:\n> SELECT * FROM \"tasks\" WHERE\n> (tenant_id IN ('45AQ7HARTXQG1P6QNEDDA8A5V0'))\n> ORDER BY id desc LIMIT 100\n> Limit  (cost=0.69..426.01 rows=100 width=1679) (actual \n> time=0.023..0.209 rows=100 loops=1)\n>   ->  Index Scan Backward using task_tenant_id_status_idx on tasks \n>  (cost=0.69..25770.78 rows=6059 width=1679) (actual time=0.023..0.200 \n> rows=100 loops=1)\n>         Index Cond: (tenant_id = '45AQ7HARTXQG1P6QNEDDA8A5V0'::text)\n> Planning Time: 0.125 ms\n> Execution Time: 0.231 ms\n> 2. 
Now when I add 2 additional tenant IDs to the query everything gets \n> 100x worse, despite the fact that those 2 tenants do NOT have any \n> records at all.\n> The reason is the wrong index on (tenant_id, status) is used:\n> SELECT * FROM \"tasks\" WHERE\n> (tenant_id IN \n> ('222P0TQT0FAR86BR30BB50TZZX','1X2W2J9B2VVJFSXGWZYR3XEHJO','45AQ7HARTXQG1P6QNEDDA8A5V0'))\n> ORDER BY id desc LIMIT 100\n> Limit  (cost=65506.24..65506.49 rows=100 width=1679) (actual \n> time=93.972..93.989 rows=100 loops=1)\n>   ->  Sort  (cost=65506.24..65551.68 rows=18178 width=1679) (actual \n> time=93.970..93.979 rows=100 loops=1)\n>         Sort Key: id DESC\n>         Sort Method: top-N heapsort  Memory: 97kB\n>         ->  Bitmap Heap Scan on tasks  (cost=322.56..64811.49 \n> rows=18178 width=1679) (actual time=10.546..65.559 rows=29159 loops=1)\n>               Recheck Cond: (tenant_id = ANY \n> ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))\n>               Heap Blocks: exact=27594\n>               ->  Bitmap Index Scan on task_tenant_status_idx \n>  (cost=0.00..318.01 rows=18178 width=0) (actual time=4.268..4.268 \n> rows=29236 loops=1)\n>                     Index Cond: (tenant_id = ANY \n> ('{222P0TQT0FAR86BR30BB50TZZX,1X2W2J9B2VVJFSXGWZYR3XEHJO,45AQ7HARTXQG1P6QNEDDA8A5V0}'::text[]))\n> Planning Time: 0.212 ms\n> Execution Time: 94.051 ms\n> is it possible somehow to force PG to use the correct index?\nTry \"set enable_bitmapscan to off;\", but it is not a solution.\nHave you try to analyze table, vacuum table, create statistics [...] on \n... from ... ?\n> Regards,\n> Kirill\n\n\n\n\n", "msg_date": "Tue, 21 Sep 2021 17:00:40 +0300", "msg_from": "Alexey M Boltenkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-tenant queries select wrong index" } ]
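A minimal sketch of the diagnostics suggested in the reply above, using the tasks table and columns from the thread; the extended-statistics object name and its column list are illustrative assumptions, not taken from the thread:

    VACUUM (ANALYZE) tasks;                  -- refresh planner statistics first
    CREATE STATISTICS tasks_tenant_stats (ndistinct) ON tenant_id, id FROM tasks;
    ANALYZE tasks;
    SET enable_bitmapscan = off;             -- diagnostic only, as noted above, not a production fix
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM "tasks"
    WHERE tenant_id IN ('222P0TQT0FAR86BR30BB50TZZX','1X2W2J9B2VVJFSXGWZYR3XEHJO','45AQ7HARTXQG1P6QNEDDA8A5V0')
    ORDER BY id DESC LIMIT 100;
    RESET enable_bitmapscan;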
[ { "msg_contents": "Dear All,\n\nWe use (a somewhat old version of) Liquibase to implement changes in our\ndatabases. We also use Liquibase scripts to keep track of database\nmigration (mostly schema, but a little bit of data too). At some point we\ncleaned up all our primary indexes as well as constraints and implemented\nthem as Liquibase scripts (i.e., recreated them). For that purpose\nLiquibase usually fires a query like this to postgres:\n\nSELECT\n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\nC.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\nC.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n= CU.CONSTRAINT_NAME\nINNER JOIN (\n\tSELECT\n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\ni1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\nlower(FK.TABLE_NAME)='secrole_condcollection'\n\nPostgres decides to use a hashjoin (see the query plan below) and 20\nseconds later spits out 2 rows. It does not matter if one sets random_page_cost\nto 2, 1.5, or 1.0 (or even 0.09, which does not make any sense) one waits\n20 seconds. hashjoin is used to answer this query. If one switches off the\nhashjoins (set enable_hashjoin = false;), it takes 0.1 second to compute to\nspit two rows. The views in information_schema are tiny:\n\nselect 'REFERENTIAL_CONSTRAINTS', count(1) from\nINFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS\nunion all\nselect 'TABLE_CONSTRAINTS', count(1) from INFORMATION_SCHEMA.TABLE_CONSTRAINTS\nunion all\nselect 'KEY_COLUMN_USAGE', count(1) from INFORMATION_SCHEMA.KEY_COLUMN_USAGE\nunion all\nselect 'TABLE_CONSTRAINTS', count(1) from INFORMATION_SCHEMA.TABLE_CONSTRAINTS\n\nREFERENTIAL_CONSTRAINTS\t1079\nTABLE_CONSTRAINTS\t4359\nKEY_COLUMN_USAGE\t1999\nTABLE_CONSTRAINTS\t4359\n\nthe whole schema eats up 300Kb space:\n\nSELECT pg_size_pretty(sum(pg_total_relation_size(C.oid))) AS \"total_size\"\n FROM pg_class C\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE nspname = 'information_schema'\n\n--344 kB\nAny clues how I could \"save face of the hash joins\"?\n\nCheers,\nArturas\n\n\n\n\n\nquery plan hash (please note that random_page_cost is overwritten there:\n\nset enable_hashjoin = 1;\n\nSELECT\n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\nC.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\nC.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n= CU.CONSTRAINT_NAME\nINNER JOIN (\n\tSELECT\n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\ni1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n) PT ON 
PT.TABLE_NAME = PK.TABLE_NAME WHERE\nlower(FK.TABLE_NAME)='secrole_condcollection'\n\nNested Loop (cost=2174.36..13670.47 rows=1 width=320) (actual\ntime=5499.728..26310.137 rows=2 loops=1)\n Output: \"*SELECT* 1\".table_name,\n(a.attname)::information_schema.sql_identifier, \"*SELECT*\n1_1\".table_name, (a_1.attname)::information_schema.sql_identifier,\n(con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=1961035\n -> Nested Loop (cost=2174.07..13670.12 rows=1 width=296) (actual\ntime=5499.716..26310.115 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT*\n1_1\".table_name, a.attname, r.oid,\n(information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Inner Unique: true\n Buffers: shared hit=1961029\n -> Nested Loop (cost=2173.78..13669.78 rows=1 width=272)\n(actual time=5499.689..26310.066 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT*\n1_1\".table_name, r_2.oid,\n(information_schema._pg_expandarray(c_3.conkey)), r_2.relowner, r.oid,\n(information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Join Filter: ((\"*SELECT* 1_2\".table_name)::name =\n(\"*SELECT* 1_1\".table_name)::name)\n Rows Removed by Join Filter: 1670\n Buffers: shared hit=1961023\n -> Hash Join (cost=497.90..5313.80 rows=1 width=104)\n(actual time=7.586..29.643 rows=836 loops=1)\n Output: \"*SELECT* 1_2\".table_name, r.oid,\n(information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Hash Cond: (c_1.conname = (\"*SELECT*\n1_2\".constraint_name)::name)\n Buffers: shared hit=3355\n -> ProjectSet (cost=324.56..1716.71 rows=249000\nwidth=341) (actual time=1.385..21.087 rows=1983 loops=1)\n Output: r.oid, NULL::name, r.relowner,\nNULL::name, NULL::name, NULL::oid, c_1.conname, NULL::\"char\",\nNULL::oid, NULL::smallint[], NULL::oid,\ninformation_schema._pg_expandarray(c_1.conkey)\n Buffers: shared hit=328\n -> Hash Join (cost=324.56..408.21 rows=249\nwidth=95) (actual time=1.246..6.050 rows=1707 loops=1)\n Output: c_1.conkey, r.oid, r.relowner,\nc_1.conname\n Inner Unique: true\n Hash Cond: (c_1.connamespace = nc.oid)\n Buffers: shared hit=328\n -> Hash Join (cost=323.42..405.96\nrows=249 width=99) (actual time=1.226..4.977 rows=1707 loops=1)\n Output: r.oid, r.relowner,\nc_1.conname, c_1.conkey, c_1.connamespace\n Inner Unique: true\n Hash Cond: (r.relnamespace = nr.oid)\n Buffers: shared hit=327\n -> Hash Join\n(cost=322.30..403.16 rows=374 width=103) (actual time=1.209..3.807\nrows=1707 loops=1)\n Output: r.oid, r.relowner,\nr.relnamespace, c_1.conname, c_1.conkey, c_1.connamespace\n Inner Unique: true\n Hash Cond: (c_1.conrelid = r.oid)\n Buffers: shared hit=326\n -> Seq Scan on\npg_catalog.pg_constraint c_1 (cost=0.00..76.23 rows=1760 width=95)\n(actual time=0.006..0.894 rows=1707 loops=1)\n Output: c_1.oid,\nc_1.conname, c_1.connamespace, c_1.contype, c_1.condeferrable,\nc_1.condeferred, c_1.convalidated, c_1.conrelid, c_1.contypid,\nc_1.conindid, c_1.conparentid, c_1.confrelid, c_1.confupdtype,\nc_1.confdeltype, c_1.confmatchtype, c_1.conislocal, c_1.coninhcount,\nc_1.connoinherit, c_1.conkey, c_1.confkey, c_1.conpfeqop,\nc_1.conppeqop, c_1.conffeqop, c_1.conexclop, c_1.conbin\n Filter: (c_1.contype\n= ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=52\n -> Hash\n(cost=313.84..313.84 rows=677 width=12) (actual time=1.135..1.136\nrows=694 loops=1)\n Output: r.oid,\nr.relowner, r.relnamespace\n Buckets: 1024\nBatches: 1 Memory Usage: 38kB\n Buffers: shared hit=274\n -> Seq Scan 
on\npg_catalog.pg_class r (cost=0.00..313.84 rows=677 width=12) (actual\ntime=0.009..1.024 rows=694 loops=1)\n Output: r.oid,\nr.relowner, r.relnamespace\n Filter:\n(r.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows Removed\nby Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07\nrows=4 width=4) (actual time=0.009..0.009 rows=7 loops=1)\n Output: nr.oid\n Buckets: 1024 Batches: 1\nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nr (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.004..0.006 rows=7 loops=1)\n Output: nr.oid\n Filter: (NOT\npg_is_other_temp_schema(nr.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06 rows=6\nwidth=4) (actual time=0.008..0.009 rows=9 loops=1)\n Output: nc.oid\n Buckets: 1024 Batches: 1\nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nc (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.003..0.004 rows=9 loops=1)\n Output: nc.oid\n Buffers: shared hit=1\n -> Hash (cost=173.32..173.32 rows=1 width=128)\n(actual time=6.192..6.196 rows=595 loops=1)\n Output: \"*SELECT* 1_2\".constraint_name,\n\"*SELECT* 1_2\".table_name\n Buckets: 1024 Batches: 1 Memory Usage: 101kB\n Buffers: shared hit=3027\n -> Subquery Scan on \"*SELECT* 1_2\"\n(cost=0.28..173.32 rows=1 width=128) (actual time=0.041..5.955\nrows=595 loops=1)\n Output: \"*SELECT*\n1_2\".constraint_name, \"*SELECT* 1_2\".table_name\n Buffers: shared hit=3027\n -> Nested Loop (cost=0.28..173.31\nrows=1 width=512) (actual time=0.040..5.849 rows=595 loops=1)\n Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_2.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_1.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (r_1.relnamespace = nr_1.oid)\n Rows Removed by Join Filter: 1836\n Buffers: shared hit=3027\n -> Nested Loop\n(cost=0.28..172.19 rows=1 width=132) (actual time=0.033..3.736\nrows=595 loops=1)\n Output: c_2.conname,\nr_1.relname, r_1.relnamespace\n Inner Unique: true\n Join Filter:\n(c_2.connamespace = nc_1.oid)\n Rows Removed by Join Filter: 3026\n Buffers: shared hit=2432\n -> Nested Loop\n(cost=0.28..171.05 rows=1 width=136) (actual time=0.027..1.913\nrows=595 loops=1)\n Output: c_2.conname,\nc_2.connamespace, r_1.relname, r_1.relnamespace\n Inner Unique: true\n Buffers: shared hit=1837\n -> Seq Scan on\npg_catalog.pg_constraint c_2 (cost=0.00..96.05 rows=9 width=72)\n(actual time=0.012..0.508 rows=595 loops=1)\n Output:\nc_2.oid, c_2.conname, c_2.connamespace, c_2.contype,\nc_2.condeferrable, c_2.condeferred, c_2.convalidated, c_2.conrelid,\nc_2.contypid, c_2.conindid, c_2.conparentid, c_2.confrelid,\nc_2.confupdtype, c_2.confdeltype, c_2.confmatchtype, c_2.conislocal,\nc_2.coninhcount, c_2.connoinherit, c_2.conkey, c_2.confkey,\nc_2.conpfeqop, c_2.conppeqop, c_2.conffeqop, c_2.conexclop, c_2.conbin\n Filter:\n((c_2.contype <> ALL ('{t,x}'::\"char\"[])) AND ((CASE c_2.contype WHEN\n'c'::\"char\" THEN 'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN\nKEY'::text WHEN 'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\"\nTHEN 'UNIQUE'::text ELSE NULL::text END)::text = 'PRIMARY KEY'::text))\n Rows Removed\nby Filter: 1114\n Buffers: shared hit=52\n -> Index Scan 
using\npg_class_oid_index on pg_catalog.pg_class r_1 (cost=0.28..8.33 rows=1\nwidth=72) (actual time=0.002..0.002 rows=1 loops=595)\n Output:\nr_1.oid, r_1.relname, r_1.relnamespace, r_1.reltype, r_1.reloftype,\nr_1.relowner, r_1.relam, r_1.relfilenode, r_1.reltablespace,\nr_1.relpages, r_1.reltuples, r_1.relallvisible, r_1.reltoastrelid,\nr_1.relhasindex, r_1.relisshared, r_1.relpersistence, r_1.relkind,\nr_1.relnatts, r_1.relchecks, r_1.relhasrules, r_1.relhastriggers,\nr_1.relhassubclass, r_1.relrowsecurity, r_1.relforcerowsecurity,\nr_1.relispopulated, r_1.relreplident, r_1.relispartition,\nr_1.relrewrite, r_1.relfrozenxid, r_1.relminmxid, r_1.relacl,\nr_1.reloptions, r_1.relpartbound\n Index Cond:\n(r_1.oid = c_2.conrelid)\n Filter:\n((r_1.relkind = ANY ('{r,p}'::\"char\"[])) AND\n(pg_has_role(r_1.relowner, 'USAGE'::text) OR\nhas_table_privilege(r_1.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_1.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=1785\n -> Seq Scan on\npg_catalog.pg_namespace nc_1 (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.000..0.001 rows=6 loops=595)\n Output: nc_1.oid,\nnc_1.nspname, nc_1.nspowner, nc_1.nspacl\n Buffers: shared hit=595\n -> Seq Scan on\npg_catalog.pg_namespace nr_1 (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.001..0.001 rows=4 loops=595)\n Output: nr_1.oid,\nnr_1.nspname, nr_1.nspowner, nr_1.nspacl\n Filter: (NOT\npg_is_other_temp_schema(nr_1.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=595\n -> Nested Loop (cost=1675.88..8355.96 rows=1\nwidth=232) (actual time=9.154..31.424 rows=2 loops=836)\n Output: con.conname, \"*SELECT* 1\".table_name,\n\"*SELECT* 1_1\".table_name, r_2.oid,\n(information_schema._pg_expandarray(c_3.conkey)), r_2.relowner\n Join Filter: (pkc.conname = (\"*SELECT*\n1_1\".constraint_name)::name)\n Rows Removed by Join Filter: 8572\n Buffers: shared hit=1957668\n -> Hash Join (cost=1258.23..6074.13 rows=1\nwidth=232) (actual time=8.894..11.130 rows=2 loops=836)\n Output: con.conname, pkc.conname, \"*SELECT*\n1\".table_name, r_2.oid,\n(information_schema._pg_expandarray(c_3.conkey)), r_2.relowner\n Hash Cond: (c_3.conname = con.conname)\n Buffers: shared hit=44349\n -> ProjectSet (cost=324.56..1716.71\nrows=249000 width=341) (actual time=0.013..10.797 rows=1983 loops=836)\n Output: r_2.oid, NULL::name,\nr_2.relowner, NULL::name, NULL::name, NULL::oid, c_3.conname,\nNULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid,\ninformation_schema._pg_expandarray(c_3.conkey)\n Buffers: shared hit=43748\n -> Hash Join (cost=324.56..408.21\nrows=249 width=95) (actual time=0.007..2.055 rows=1707 loops=836)\n Output: c_3.conkey, r_2.oid,\nr_2.relowner, c_3.conname\n Inner Unique: true\n Hash Cond: (c_3.connamespace = nc_2.oid)\n Buffers: shared hit=43748\n -> Hash Join\n(cost=323.42..405.96 rows=249 width=99) (actual time=0.006..1.624\nrows=1707 loops=836)\n Output: r_2.oid,\nr_2.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Inner Unique: true\n Hash Cond:\n(r_2.relnamespace = nr_2.oid)\n Buffers: shared hit=43747\n -> Hash Join\n(cost=322.30..403.16 rows=374 width=103) (actual time=0.006..1.224\nrows=1707 loops=836)\n Output: r_2.oid,\nr_2.relowner, r_2.relnamespace, c_3.conname, c_3.conkey,\nc_3.connamespace\n Inner Unique: true\n Hash Cond:\n(c_3.conrelid = r_2.oid)\n Buffers: shared hit=43746\n -> Seq Scan on\npg_catalog.pg_constraint c_3 (cost=0.00..76.23 rows=1760 width=95)\n(actual time=0.004..0.511 rows=1707 loops=836)\n Output:\nc_3.oid, 
c_3.conname, c_3.connamespace, c_3.contype,\nc_3.condeferrable, c_3.condeferred, c_3.convalidated, c_3.conrelid,\nc_3.contypid, c_3.conindid, c_3.conparentid, c_3.confrelid,\nc_3.confupdtype, c_3.confdeltype, c_3.confmatchtype, c_3.conislocal,\nc_3.coninhcount, c_3.connoinherit, c_3.conkey, c_3.confkey,\nc_3.conpfeqop, c_3.conppeqop, c_3.conffeqop, c_3.conexclop, c_3.conbin\n Filter:\n(c_3.contype = ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed\nby Filter: 2\n Buffers:\nshared hit=43472\n -> Hash\n(cost=313.84..313.84 rows=677 width=12) (actual time=0.988..0.989\nrows=694 loops=1)\n Output:\nr_2.oid, r_2.relowner, r_2.relnamespace\n Buckets: 1024\nBatches: 1 Memory Usage: 38kB\n Buffers: shared hit=274\n -> Seq Scan\non pg_catalog.pg_class r_2 (cost=0.00..313.84 rows=677 width=12)\n(actual time=0.006..0.875 rows=694 loops=1)\n Output:\nr_2.oid, r_2.relowner, r_2.relnamespace\n Filter:\n(r_2.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows\nRemoved by Filter: 2559\n Buffers:\nshared hit=274\n -> Hash (cost=1.07..1.07\nrows=4 width=4) (actual time=0.009..0.010 rows=7 loops=1)\n Output: nr_2.oid\n Buckets: 1024\nBatches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nr_2 (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.004..0.007 rows=7 loops=1)\n Output: nr_2.oid\n Filter: (NOT\npg_is_other_temp_schema(nr_2.oid))\n Rows Removed\nby Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06\nrows=6 width=4) (actual time=0.012..0.013 rows=9 loops=1)\n Output: nc_2.oid\n Buckets: 1024 Batches: 1\nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nc_2 (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.007..0.009 rows=9 loops=1)\n Output: nc_2.oid\n Buffers: shared hit=1\n -> Hash (cost=933.65..933.65 rows=1\nwidth=256) (actual time=2.158..2.170 rows=2 loops=1)\n Output: con.conname, pkc.conname,\n\"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=601\n -> Nested Loop (cost=5.71..933.65\nrows=1 width=256) (actual time=1.185..2.163 rows=2 loops=1)\n Output: con.conname,\npkc.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (d2.refobjid = pkc.oid)\n Buffers: shared hit=601\n -> Nested Loop\n(cost=5.43..933.00 rows=1 width=200) (actual time=1.174..2.146 rows=2\nloops=1)\n Output: con.conname,\ncon.confrelid, d2.refobjid, \"*SELECT* 1\".table_name, \"*SELECT*\n1\".constraint_name\n Buffers: shared hit=593\n -> Nested Loop\n(cost=5.15..931.14 rows=1 width=200) (actual time=1.163..2.129 rows=2\nloops=1)\n Output: con.conname,\ncon.confrelid, d1.refobjid, \"*SELECT* 1\".table_name, \"*SELECT*\n1\".constraint_name\n Buffers: shared hit=587\n -> Nested Loop\n(cost=4.86..929.16 rows=1 width=200) (actual time=1.147..2.108 rows=2\nloops=1)\n Output:\ncon.conname, con.oid, con.confrelid, \"*SELECT* 1\".table_name,\n\"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter:\n(con.connamespace = ncon.oid)\n Rows Removed\nby Join Filter: 10\n Buffers: shared hit=581\n -> Nested\nLoop (cost=4.86..928.02 rows=1 width=204) (actual time=1.143..2.100\nrows=2 loops=1)\n Output:\ncon.conname, con.connamespace, con.oid, con.confrelid, \"*SELECT*\n1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Buffers:\nshared hit=579\n ->\nNested Loop (cost=4.58..925.06 rows=2 width=208) (actual\ntime=1.129..2.082 rows=2 loops=1)\n\nOutput: con.conname, con.connamespace, con.conrelid, con.oid,\ncon.confrelid, 
\"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n\nBuffers: shared hit=573\n ->\n Append (cost=4.30..900.14 rows=3 width=128) (actual\ntime=1.105..2.056 rows=5 loops=1)\n\n Buffers: shared hit=560\n\n -> Subquery Scan on \"*SELECT* 1\" (cost=4.30..449.91 rows=1\nwidth=128) (actual time=1.104..1.121 rows=3 loops=1)\n\n Output: \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n\n Buffers: shared hit=282\n\n -> Nested Loop (cost=4.30..449.90 rows=1 width=512) (actual\ntime=1.103..1.119 rows=3 loops=1)\n\n Output: NULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_4.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_3.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n\n Inner Unique: true\n\n Join Filter: (c_4.connamespace = nc_3.oid)\n\n Rows Removed by Join Filter: 15\n\n Buffers: shared hit=282\n\n -> Nested Loop (cost=4.30..448.76 rows=1 width=132)\n(actual time=1.096..1.104 rows=3 loops=1)\n\n Output: r_3.relname, c_4.conname,\nc_4.connamespace\n\n Buffers: shared hit=279\n\n -> Nested Loop (cost=0.00..434.55 rows=1\nwidth=68) (actual time=1.062..1.066 rows=1 loops=1)\n\n Output: r_3.relname, r_3.oid\n\n Join Filter: (nr_3.oid = r_3.relnamespace)\n\n Rows Removed by Join Filter: 6\n\n Buffers: shared hit=275\n\n -> Seq Scan on pg_catalog.pg_namespace\nnr_3 (cost=0.00..1.07 rows=4 width=4) (actual time=0.009..0.015\nrows=7 loops=1)\n\n Output: nr_3.oid, nr_3.nspname,\nnr_3.nspowner, nr_3.nspacl\n\n Filter: (NOT\npg_is_other_temp_schema(nr_3.oid))\n\n Rows Removed by Filter: 2\n\n Buffers: shared hit=1\n\n -> Materialize (cost=0.00..433.36 rows=2\nwidth=72) (actual time=0.004..0.149 rows=1 loops=7)\n\n Output: r_3.relname,\nr_3.relnamespace, r_3.oid\n\n Buffers: shared hit=274\n\n -> Seq Scan on pg_catalog.pg_class\nr_3 (cost=0.00..433.35 rows=2 width=72) (actual time=0.026..1.039\nrows=1 loops=1)\n\n Output: r_3.relname,\nr_3.relnamespace, r_3.oid\n\n Filter: ((r_3.relkind = ANY\n('{r,p}'::\"char\"[])) AND\n(lower(((r_3.relname)::information_schema.sql_identifier)::text) =\n'secrole_condcollection'::text) AND (pg_has_role(r_3.relowner,\n'USAGE'::text) OR has_table_privilege(r_3.oid, 'INSERT, UPDATE,\nDELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(r_3.oid, 'INSERT, UPDATE,\nREFERENCES'::text)))\n\n Rows Removed by Filter: 3252\n\n Buffers: shared hit=274\n\n -> Bitmap Heap Scan on pg_catalog.pg_constraint\nc_4 (cost=4.30..14.18 rows=3 width=72) (actual time=0.026..0.029\nrows=3 loops=1)\n\n Output: c_4.oid, c_4.conname,\nc_4.connamespace, c_4.contype, c_4.condeferrable, c_4.condeferred,\nc_4.convalidated, c_4.conrelid, c_4.contypid, c_4.conindid,\nc_4.conparentid, c_4.confrelid, c_4.confupdtype, c_4.confdeltype,\nc_4.confmatchtype, c_4.conislocal, c_4.coninhcount, c_4.connoinherit,\nc_4.conkey, c_4.confkey, c_4.conpfeqop, c_4.conppeqop, c_4.conffeqop,\nc_4.conexclop, c_4.conbin\n\n Recheck Cond: (c_4.conrelid = r_3.oid)\n\n Filter: (c_4.contype <> ALL\n('{t,x}'::\"char\"[]))\n\n Heap Blocks: exact=2\n\n Buffers: shared hit=4\n\n -> Bitmap Index Scan on\npg_constraint_conrelid_contypid_conname_index (cost=0.00..4.30 rows=3\nwidth=0) (actual time=0.020..0.020 rows=3 loops=1)\n\n Index Cond: (c_4.conrelid = r_3.oid)\n\n Buffers: shared hit=2\n\n -> Seq Scan on pg_catalog.pg_namespace 
nc_3\n(cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.002 rows=6\nloops=3)\n\n Output: nc_3.oid, nc_3.nspname, nc_3.nspowner,\nnc_3.nspacl\n\n Buffers: shared hit=3\n\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.29..450.21 rows=2\nwidth=128) (actual time=0.924..0.931 rows=2 loops=1)\n\n Output: \"*SELECT* 2\".table_name, \"*SELECT* 2\".constraint_name\n\n Buffers: shared hit=278\n\n -> Nested Loop (cost=0.29..450.19 rows=2 width=512) (actual\ntime=0.923..0.929 rows=2 loops=1)\n\n Output: NULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier, (((((((nr_4.oid)::text ||\n'_'::text) || (r_4.oid)::text) || '_'::text) || (a_2.attnum)::text) ||\n'_not_null'::text))::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_4.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n\n Buffers: shared hit=278\n\n -> Nested Loop (cost=0.00..434.55 rows=1 width=72)\n(actual time=0.904..0.907 rows=1 loops=1)\n\n Output: nr_4.oid, r_4.oid, r_4.relname\n\n Join Filter: (nr_4.oid = r_4.relnamespace)\n\n Rows Removed by Join Filter: 6\n\n Buffers: shared hit=275\n\n -> Seq Scan on pg_catalog.pg_namespace nr_4\n(cost=0.00..1.07 rows=4 width=4) (actual time=0.004..0.007 rows=7\nloops=1)\n\n Output: nr_4.oid, nr_4.nspname,\nnr_4.nspowner, nr_4.nspacl\n\n Filter: (NOT\npg_is_other_temp_schema(nr_4.oid))\n\n Rows Removed by Filter: 2\n\n Buffers: shared hit=1\n\n -> Materialize (cost=0.00..433.36 rows=2\nwidth=72) (actual time=0.004..0.128 rows=1 loops=7)\n\n Output: r_4.oid, r_4.relname,\nr_4.relnamespace\n\n Buffers: shared hit=274\n\n -> Seq Scan on pg_catalog.pg_class r_4\n(cost=0.00..433.35 rows=2 width=72) (actual time=0.021..0.893 rows=1\nloops=1)\n\n Output: r_4.oid, r_4.relname,\nr_4.relnamespace\n\n Filter: ((r_4.relkind = ANY\n('{r,p}'::\"char\"[])) AND\n(lower(((r_4.relname)::information_schema.sql_identifier)::text) =\n'secrole_condcollection'::text) AND (pg_has_role(r_4.relowner,\n'USAGE'::text) OR has_table_privilege(r_4.oid, 'INSERT, UPDATE,\nDELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(r_4.oid, 'INSERT, UPDATE,\nREFERENCES'::text)))\n\n Rows Removed by Filter: 3252\n\n Buffers: shared hit=274\n\n -> Index Scan using pg_attribute_relid_attnum_index on\npg_catalog.pg_attribute a_2 (cost=0.29..15.56 rows=2 width=6) (actual\ntime=0.014..0.015 rows=2 loops=1)\n\n Output: a_2.attrelid, a_2.attname, a_2.atttypid,\na_2.attstattarget, a_2.attlen, a_2.attnum, a_2.attndims,\na_2.attcacheoff, a_2.atttypmod, a_2.attbyval, a_2.attstorage,\na_2.attalign, a_2.attnotnull, a_2.atthasdef, a_2.atthasmissing,\na_2.attidentity, a_2.attgenerated, a_2.attisdropped, a_2.attislocal,\na_2.attinhcount, a_2.attcollation, a_2.attacl, a_2.attoptions,\na_2.attfdwoptions, a_2.attmissingval\n\n Index Cond: ((a_2.attrelid = r_4.oid) AND\n(a_2.attnum > 0))\n\n Filter: (a_2.attnotnull AND (NOT\na_2.attisdropped))\n\n Buffers: shared hit=3\n ->\n Index Scan using pg_constraint_conname_nsp_index on\npg_catalog.pg_constraint con (cost=0.28..8.30 rows=1 width=80)\n(actual time=0.004..0.004 rows=0 loops=5)\n\n Output: con.oid, con.conname, con.connamespace, con.contype,\ncon.condeferrable, con.condeferred, con.convalidated, con.conrelid,\ncon.contypid, con.conindid, con.conparentid, con.confrelid,\ncon.confupdtype, con.confdeltype, con.confmatchtype, 
con.conislocal,\ncon.coninhcount, con.connoinherit, con.conkey, con.confkey,\ncon.conpfeqop, con.conppeqop, con.conffeqop, con.conexclop, con.conbin\n\n Index Cond: (con.conname = (\"*SELECT* 1\".constraint_name)::name)\n\n Filter: (con.contype = 'f'::\"char\")\n\n Rows Removed by Filter: 0\n\n Buffers: shared hit=13\n ->\nIndex Scan using pg_class_oid_index on pg_catalog.pg_class c\n(cost=0.28..1.48 rows=1 width=4) (actual time=0.007..0.007 rows=1\nloops=2)\n\nOutput: c.oid, c.relname, c.relnamespace, c.reltype, c.reloftype,\nc.relowner, c.relam, c.relfilenode, c.reltablespace, c.relpages,\nc.reltuples, c.relallvisible, c.reltoastrelid, c.relhasindex,\nc.relisshared, c.relpersistence, c.relkind, c.relnatts, c.relchecks,\nc.relhasrules, c.relhastriggers, c.relhassubclass, c.relrowsecurity,\nc.relforcerowsecurity, c.relispopulated, c.relreplident,\nc.relispartition, c.relrewrite, c.relfrozenxid, c.relminmxid,\nc.relacl, c.reloptions, c.relpartbound\n\nIndex Cond: (c.oid = con.conrelid)\n\nFilter: (pg_has_role(c.relowner, 'USAGE'::text) OR\nhas_table_privilege(c.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid,\n'INSERT, UPDATE, REFERENCES'::text))\n\nBuffers: shared hit=6\n -> Seq Scan\non pg_catalog.pg_namespace ncon (cost=0.00..1.06 rows=6 width=4)\n(actual time=0.002..0.002 rows=6 loops=2)\n Output:\nncon.oid, ncon.nspname, ncon.nspowner, ncon.nspacl\n Buffers:\nshared hit=2\n -> Index Scan using\npg_depend_depender_index on pg_catalog.pg_depend d1 (cost=0.29..1.97\nrows=1 width=8) (actual time=0.008..0.009 rows=1 loops=2)\n Output:\nd1.classid, d1.objid, d1.objsubid, d1.refclassid, d1.refobjid,\nd1.refobjsubid, d1.deptype\n Index Cond:\n((d1.classid = '2606'::oid) AND (d1.objid = con.oid))\n Filter:\n((d1.refclassid = '1259'::oid) AND (d1.refobjsubid = 0))\n Rows Removed\nby Filter: 2\n Buffers: shared hit=6\n -> Index Scan using\npg_depend_depender_index on pg_catalog.pg_depend d2 (cost=0.29..1.85\nrows=1 width=8) (actual time=0.006..0.007 rows=1 loops=2)\n Output: d2.classid,\nd2.objid, d2.objsubid, d2.refclassid, d2.refobjid, d2.refobjsubid,\nd2.deptype\n Index Cond:\n((d2.classid = '1259'::oid) AND (d2.objid = d1.refobjid) AND\n(d2.objsubid = 0))\n Filter:\n((d2.refclassid = '2606'::oid) AND (d2.deptype = 'i'::\"char\"))\n Buffers: shared hit=6\n -> Index Scan using\npg_constraint_conrelid_contypid_conname_index on\npg_catalog.pg_constraint pkc (cost=0.28..0.64 rows=1 width=76)\n(actual time=0.007..0.007 rows=1 loops=2)\n Output: pkc.oid,\npkc.conname, pkc.connamespace, pkc.contype, pkc.condeferrable,\npkc.condeferred, pkc.convalidated, pkc.conrelid, pkc.contypid,\npkc.conindid, pkc.conparentid, pkc.confrelid, pkc.confupdtype,\npkc.confdeltype, pkc.confmatchtype, pkc.conislocal, pkc.coninhcount,\npkc.connoinherit, pkc.conkey, pkc.confkey, pkc.conpfeqop,\npkc.conppeqop, pkc.conffeqop, pkc.conexclop, pkc.conbin\n Index Cond: (pkc.conrelid\n= con.confrelid)\n Filter: (pkc.contype = ANY\n('{p,u}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=8\n -> Append (cost=417.66..2272.67 rows=733\nwidth=128) (actual time=0.011..9.830 rows=4287 loops=1672)\n Buffers: shared hit=1913319\n -> Subquery Scan on \"*SELECT* 1_1\"\n(cost=417.66..500.03 rows=175 width=128) (actual time=0.010..1.720\nrows=1707 loops=1672)\n Output: \"*SELECT* 1_1\".table_name,\n\"*SELECT* 1_1\".constraint_name\n Buffers: shared hit=87220\n -> Hash Join (cost=417.66..498.28\nrows=175 width=512) (actual time=0.010..1.584 rows=1707 loops=1672)\n 
Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_5.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_5.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Hash Cond: (c_5.connamespace = nc_4.oid)\n Buffers: shared hit=87220\n -> Hash Join\n(cost=416.52..496.36 rows=175 width=132) (actual time=0.008..1.190\nrows=1707 loops=1672)\n Output: r_5.relname,\nc_5.conname, c_5.connamespace\n Inner Unique: true\n Hash Cond:\n(r_5.relnamespace = nr_5.oid)\n Buffers: shared hit=87219\n -> Hash Join\n(cost=415.40..494.06 rows=263 width=136) (actual time=0.007..0.869\nrows=1707 loops=1672)\n Output: c_5.conname,\nc_5.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Hash Cond:\n(c_5.conrelid = r_5.oid)\n Buffers: shared hit=87218\n -> Seq Scan on\npg_catalog.pg_constraint c_5 (cost=0.00..74.03 rows=1762 width=72)\n(actual time=0.004..0.379 rows=1709 loops=1672)\n Output:\nc_5.oid, c_5.conname, c_5.connamespace, c_5.contype,\nc_5.condeferrable, c_5.condeferred, c_5.convalidated, c_5.conrelid,\nc_5.contypid, c_5.conindid, c_5.conparentid, c_5.confrelid,\nc_5.confupdtype, c_5.confdeltype, c_5.confmatchtype, c_5.conislocal,\nc_5.coninhcount, c_5.connoinherit, c_5.conkey, c_5.confkey,\nc_5.conpfeqop, c_5.conppeqop, c_5.conffeqop, c_5.conexclop, c_5.conbin\n Filter:\n(c_5.contype <> ALL ('{t,x}'::\"char\"[]))\n Buffers:\nshared hit=86944\n -> Hash\n(cost=409.45..409.45 rows=476 width=72) (actual time=1.244..1.245\nrows=694 loops=1)\n Output:\nr_5.relname, r_5.relnamespace, r_5.oid\n Buckets: 1024\nBatches: 1 Memory Usage: 79kB\n Buffers: shared hit=274\n -> Seq Scan\non pg_catalog.pg_class r_5 (cost=0.00..409.45 rows=476 width=72)\n(actual time=0.011..1.118 rows=694 loops=1)\n Output:\nr_5.relname, r_5.relnamespace, r_5.oid\n Filter:\n((r_5.relkind = ANY ('{r,p}'::\"char\"[])) AND\n(pg_has_role(r_5.relowner, 'USAGE'::text) OR\nhas_table_privilege(r_5.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_5.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n Rows\nRemoved by Filter: 2559\n Buffers:\nshared hit=274\n -> Hash (cost=1.07..1.07\nrows=4 width=4) (actual time=0.019..0.019 rows=7 loops=1)\n Output: nr_5.oid\n Buckets: 1024\nBatches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nr_5 (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.004..0.008 rows=7 loops=1)\n Output: nr_5.oid\n Filter: (NOT\npg_is_other_temp_schema(nr_5.oid))\n Rows Removed\nby Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06\nrows=6 width=4) (actual time=0.015..0.016 rows=9 loops=1)\n Output: nc_4.oid\n Buckets: 1024 Batches: 1\nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nc_4 (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.010..0.011 rows=9 loops=1)\n Output: nc_4.oid\n Buffers: shared hit=1\n -> Subquery Scan on \"*SELECT* 2_1\"\n(cost=416.52..1768.97 rows=558 width=128) (actual time=0.010..7.839\nrows=2580 loops=1672)\n Output: \"*SELECT* 2_1\".table_name,\n\"*SELECT* 2_1\".constraint_name\n Buffers: shared hit=1826099\n -> Hash Join (cost=416.52..1763.39\nrows=558 width=512) (actual time=0.009..7.622 rows=2580 loops=1672)\n 
Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier, (((((((nr_6.oid)::text ||\n'_'::text) || (r_6.oid)::text) || '_'::text) || (a_3.attnum)::text) ||\n'_not_null'::text))::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_6.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Hash Cond: (r_6.relnamespace = nr_6.oid)\n Buffers: shared hit=1826099\n -> Hash Join\n(cost=415.40..1741.77 rows=837 width=74) (actual time=0.004..5.410\nrows=2580 loops=1672)\n Output: r_6.oid,\nr_6.relname, r_6.relnamespace, a_3.attnum\n Inner Unique: true\n Hash Cond: (a_3.attrelid = r_6.oid)\n Buffers: shared hit=1826098\n -> Seq Scan on\npg_catalog.pg_attribute a_3 (cost=0.00..1311.64 rows=5606 width=6)\n(actual time=0.002..4.792 rows=2598 loops=1672)\n Output:\na_3.attrelid, a_3.attname, a_3.atttypid, a_3.attstattarget,\na_3.attlen, a_3.attnum, a_3.attndims, a_3.attcacheoff, a_3.atttypmod,\na_3.attbyval, a_3.attstorage, a_3.attalign, a_3.attnotnull,\na_3.atthasdef, a_3.atthasmissing, a_3.attidentity, a_3.attgenerated,\na_3.attisdropped, a_3.attislocal, a_3.attinhcount, a_3.attcollation,\na_3.attacl, a_3.attoptions, a_3.attfdwoptions, a_3.attmissingval\n Filter:\n(a_3.attnotnull AND (NOT a_3.attisdropped) AND (a_3.attnum > 0))\n Rows Removed by Filter: 15396\n Buffers: shared hit=1825824\n -> Hash\n(cost=409.45..409.45 rows=476 width=72) (actual time=1.227..1.227\nrows=694 loops=1)\n Output: r_6.oid,\nr_6.relname, r_6.relnamespace\n Buckets: 1024\nBatches: 1 Memory Usage: 79kB\n Buffers: shared hit=274\n -> Seq Scan on\npg_catalog.pg_class r_6 (cost=0.00..409.45 rows=476 width=72) (actual\ntime=0.011..1.087 rows=694 loops=1)\n Output:\nr_6.oid, r_6.relname, r_6.relnamespace\n Filter:\n((r_6.relkind = ANY ('{r,p}'::\"char\"[])) AND\n(pg_has_role(r_6.relowner, 'USAGE'::text) OR\nhas_table_privilege(r_6.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_6.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed\nby Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07\nrows=4 width=4) (actual time=0.015..0.015 rows=7 loops=1)\n Output: nr_6.oid\n Buckets: 1024 Batches: 1\nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nr_6 (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.008..0.011 rows=7 loops=1)\n Output: nr_6.oid\n Filter: (NOT\npg_is_other_temp_schema(nr_6.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Index Scan using pg_attribute_relid_attnum_index on\npg_catalog.pg_attribute a (cost=0.29..0.33 rows=1 width=70) (actual\ntime=0.019..0.019 rows=1 loops=2)\n Output: a.attrelid, a.attname, a.atttypid,\na.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff,\na.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull,\na.atthasdef, a.atthasmissing, a.attidentity, a.attgenerated,\na.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl,\na.attoptions, a.attfdwoptions, a.attmissingval\n Index Cond: ((a.attrelid = r_2.oid) AND (a.attnum =\n((information_schema._pg_expandarray(c_3.conkey))).x))\n Filter: ((NOT a.attisdropped) AND\n(pg_has_role(r_2.relowner, 'USAGE'::text) OR\nhas_column_privilege(r_2.oid, a.attnum, 'SELECT, INSERT, UPDATE,\nREFERENCES'::text)))\n Buffers: shared hit=6\n -> Index Scan 
using pg_attribute_relid_attnum_index on\npg_catalog.pg_attribute a_1 (cost=0.29..0.33 rows=1 width=70) (actual\ntime=0.007..0.007 rows=1 loops=2)\n Output: a_1.attrelid, a_1.attname, a_1.atttypid,\na_1.attstattarget, a_1.attlen, a_1.attnum, a_1.attndims,\na_1.attcacheoff, a_1.atttypmod, a_1.attbyval, a_1.attstorage,\na_1.attalign, a_1.attnotnull, a_1.atthasdef, a_1.atthasmissing,\na_1.attidentity, a_1.attgenerated, a_1.attisdropped, a_1.attislocal,\na_1.attinhcount, a_1.attcollation, a_1.attacl, a_1.attoptions,\na_1.attfdwoptions, a_1.attmissingval\n Index Cond: ((a_1.attrelid = r.oid) AND (a_1.attnum =\n((information_schema._pg_expandarray(c_1.conkey))).x))\n Filter: ((NOT a_1.attisdropped) AND (pg_has_role(r.relowner,\n'USAGE'::text) OR has_column_privilege(r.oid, a_1.attnum, 'SELECT,\nINSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\nPlanning Time: 8.688 ms\nExecution Time: 26311.005 ms\n\n\n\nindex scan\n\nset enable_hashjoin = 0;\n\nSELECT\n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\nC.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\nC.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n= CU.CONSTRAINT_NAME\nINNER JOIN (\n\tSELECT\n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\ni1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\nlower(FK.TABLE_NAME)='secrole_condcollection'\n\nNested Loop (cost=1736.10..18890.44 rows=1 width=320) (actual\ntime=30.780..79.572 rows=2 loops=1)\n Output: \"*SELECT* 1\".table_name,\n(a.attname)::information_schema.sql_identifier, \"*SELECT*\n1_1\".table_name, (a_1.attname)::information_schema.sql_identifier,\n(con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=9018\n -> Nested Loop (cost=1735.81..18890.10 rows=1 width=296) (actual\ntime=30.752..79.531 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT*\n1_1\".table_name, a.attname, r_6.oid,\n(information_schema._pg_expandarray(c_5.conkey)), r_6.relowner\n Join Filter: ((\"*SELECT* 1_2\".constraint_name)::name = c_5.conname)\n Rows Removed by Join Filter: 3964\n Buffers: shared hit=9012\n -> Nested Loop (cost=1170.86..11411.63 rows=1 width=320)\n(actual time=18.709..57.524 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT*\n1_1\".table_name, a.attname, \"*SELECT* 1_2\".constraint_name\n Join Filter: ((\"*SELECT* 1_1\".table_name)::name =\n(\"*SELECT* 1_2\".table_name)::name)\n Rows Removed by Join Filter: 1188\n Buffers: shared hit=8684\n -> Nested Loop (cost=1170.58..11238.29 rows=1\nwidth=256) (actual time=16.937..45.450 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name,\n\"*SELECT* 1_1\".table_name, a.attname\n Inner Unique: true\n Buffers: shared hit=2630\n -> Nested Loop (cost=1170.30..11237.95 rows=1\nwidth=232) (actual time=16.909..45.398 rows=2 loops=1)\n Output: con.conname, \"*SELECT*\n1\".table_name, \"*SELECT* 1_1\".table_name, r_4.oid,\n(information_schema._pg_expandarray(c_3.conkey)), r_4.relowner\n Join Filter: 
(con.conname = c_3.conname)\n Rows Removed by Join Filter: 3964\n Buffers: shared hit=2624\n -> Nested Loop (cost=605.35..3759.48\nrows=1 width=256) (actual time=5.769..23.698 rows=2 loops=1)\n Output: con.conname, \"*SELECT*\n1\".table_name, \"*SELECT* 1\".constraint_name, \"*SELECT* 1_1\".table_name\n Join Filter: (pkc.conname = (\"*SELECT*\n1_1\".constraint_name)::name)\n Rows Removed by Join Filter: 8572\n Buffers: shared hit=2296\n -> Nested Loop (cost=5.71..933.65\nrows=1 width=256) (actual time=1.324..2.731 rows=2 loops=1)\n Output: con.conname,\npkc.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (d2.refobjid = pkc.oid)\n Buffers: shared hit=601\n -> Nested Loop\n(cost=5.43..933.00 rows=1 width=200) (actual time=1.315..2.713 rows=2\nloops=1)\n Output: con.conname,\ncon.confrelid, d2.refobjid, \"*SELECT* 1\".table_name, \"*SELECT*\n1\".constraint_name\n Buffers: shared hit=593\n -> Nested Loop\n(cost=5.15..931.14 rows=1 width=200) (actual time=1.305..2.687 rows=2\nloops=1)\n Output: con.conname,\ncon.confrelid, d1.refobjid, \"*SELECT* 1\".table_name, \"*SELECT*\n1\".constraint_name\n Buffers: shared hit=587\n -> Nested Loop\n(cost=4.86..929.16 rows=1 width=200) (actual time=1.292..2.662 rows=2\nloops=1)\n Output:\ncon.conname, con.oid, con.confrelid, \"*SELECT* 1\".table_name,\n\"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter:\n(con.connamespace = ncon.oid)\n Rows Removed\nby Join Filter: 10\n Buffers: shared hit=581\n -> Nested\nLoop (cost=4.86..928.02 rows=1 width=204) (actual time=1.288..2.652\nrows=2 loops=1)\n Output:\ncon.conname, con.connamespace, con.oid, con.confrelid, \"*SELECT*\n1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Buffers:\nshared hit=579\n ->\nNested Loop (cost=4.58..925.06 rows=2 width=208) (actual\ntime=1.273..2.626 rows=2 loops=1)\n\nOutput: con.conname, con.connamespace, con.conrelid, con.oid,\ncon.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n\nBuffers: shared hit=573\n ->\n Append (cost=4.30..900.14 rows=3 width=128) (actual\ntime=1.250..2.586 rows=5 loops=1)\n\n Buffers: shared hit=560\n\n -> Subquery Scan on \"*SELECT* 1\" (cost=4.30..449.91 rows=1\nwidth=128) (actual time=1.249..1.283 rows=3 loops=1)\n\n Output: \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n\n Buffers: shared hit=282\n\n -> Nested Loop (cost=4.30..449.90 rows=1 width=512) (actual\ntime=1.249..1.280 rows=3 loops=1)\n\n Output: NULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_1.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n\n Inner Unique: true\n\n Join Filter: (c_1.connamespace = nc.oid)\n\n Rows Removed by Join Filter: 15\n\n Buffers: shared hit=282\n\n -> Nested Loop (cost=4.30..448.76 rows=1 width=132)\n(actual time=1.242..1.257 rows=3 loops=1)\n\n Output: r.relname, c_1.conname, c_1.connamespace\n\n Buffers: shared hit=279\n\n -> Nested Loop (cost=0.00..434.55 rows=1\nwidth=68) (actual time=1.217..1.225 rows=1 loops=1)\n\n Output: r.relname, r.oid\n\n Join Filter: (nr.oid = r.relnamespace)\n\n Rows Removed by Join Filter: 6\n\n Buffers: shared hit=275\n\n -> Seq Scan on pg_catalog.pg_namespace nr\n(cost=0.00..1.07 rows=4 width=4) 
(actual time=0.010..0.017 rows=7\nloops=1)\n\n Output: nr.oid, nr.nspname,\nnr.nspowner, nr.nspacl\n\n Filter: (NOT\npg_is_other_temp_schema(nr.oid))\n\n Rows Removed by Filter: 2\n\n Buffers: shared hit=1\n\n -> Materialize (cost=0.00..433.36 rows=2\nwidth=72) (actual time=0.004..0.172 rows=1 loops=7)\n\n Output: r.relname, r.relnamespace,\nr.oid\n\n Buffers: shared hit=274\n\n -> Seq Scan on pg_catalog.pg_class r\n (cost=0.00..433.35 rows=2 width=72) (actual time=0.028..1.198 rows=1\nloops=1)\n\n Output: r.relname,\nr.relnamespace, r.oid\n\n Filter: ((r.relkind = ANY\n('{r,p}'::\"char\"[])) AND\n(lower(((r.relname)::information_schema.sql_identifier)::text) =\n'secrole_condcollection'::text) AND (pg_has_role(r.relowner,\n'USAGE'::text) OR has_table_privilege(r.oid, 'INSERT, UPDATE, DELETE,\nTRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(r.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n\n Rows Removed by Filter: 3252\n\n Buffers: shared hit=274\n\n -> Bitmap Heap Scan on pg_catalog.pg_constraint\nc_1 (cost=4.30..14.18 rows=3 width=72) (actual time=0.020..0.026\nrows=3 loops=1)\n\n Output: c_1.oid, c_1.conname,\nc_1.connamespace, c_1.contype, c_1.condeferrable, c_1.condeferred,\nc_1.convalidated, c_1.conrelid, c_1.contypid, c_1.conindid,\nc_1.conparentid, c_1.confrelid, c_1.confupdtype, c_1.confdeltype,\nc_1.confmatchtype, c_1.conislocal, c_1.coninhcount, c_1.connoinherit,\nc_1.conkey, c_1.confkey, c_1.conpfeqop, c_1.conppeqop, c_1.conffeqop,\nc_1.conexclop, c_1.conbin\n\n Recheck Cond: (c_1.conrelid = r.oid)\n\n Filter: (c_1.contype <> ALL\n('{t,x}'::\"char\"[]))\n\n Heap Blocks: exact=2\n\n Buffers: shared hit=4\n\n -> Bitmap Index Scan on\npg_constraint_conrelid_contypid_conname_index (cost=0.00..4.30 rows=3\nwidth=0) (actual time=0.016..0.016 rows=3 loops=1)\n\n Index Cond: (c_1.conrelid = r.oid)\n\n Buffers: shared hit=2\n\n -> Seq Scan on pg_catalog.pg_namespace nc\n(cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.003 rows=6\nloops=3)\n\n Output: nc.oid, nc.nspname, nc.nspowner,\nnc.nspacl\n\n Buffers: shared hit=3\n\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.29..450.21 rows=2\nwidth=128) (actual time=1.294..1.300 rows=2 loops=1)\n\n Output: \"*SELECT* 2\".table_name, \"*SELECT* 2\".constraint_name\n\n Buffers: shared hit=278\n\n -> Nested Loop (cost=0.29..450.19 rows=2 width=512) (actual\ntime=1.294..1.299 rows=2 loops=1)\n\n Output: NULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier, (((((((nr_1.oid)::text ||\n'_'::text) || (r_1.oid)::text) || '_'::text) || (a_2.attnum)::text) ||\n'_not_null'::text))::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_1.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n\n Buffers: shared hit=278\n\n -> Nested Loop (cost=0.00..434.55 rows=1 width=72)\n(actual time=1.273..1.276 rows=1 loops=1)\n\n Output: nr_1.oid, r_1.oid, r_1.relname\n\n Join Filter: (nr_1.oid = r_1.relnamespace)\n\n Rows Removed by Join Filter: 6\n\n Buffers: shared hit=275\n\n -> Seq Scan on pg_catalog.pg_namespace nr_1\n(cost=0.00..1.07 rows=4 width=4) (actual time=0.013..0.017 rows=7\nloops=1)\n\n Output: nr_1.oid, nr_1.nspname,\nnr_1.nspowner, nr_1.nspacl\n\n Filter: (NOT\npg_is_other_temp_schema(nr_1.oid))\n\n Rows Removed by Filter: 2\n\n Buffers: shared hit=1\n\n -> Materialize (cost=0.00..433.36 
rows=2\nwidth=72) (actual time=0.006..0.179 rows=1 loops=7)\n\n Output: r_1.oid, r_1.relname,\nr_1.relnamespace\n\n Buffers: shared hit=274\n\n -> Seq Scan on pg_catalog.pg_class r_1\n(cost=0.00..433.35 rows=2 width=72) (actual time=0.030..1.245 rows=1\nloops=1)\n\n Output: r_1.oid, r_1.relname,\nr_1.relnamespace\n\n Filter: ((r_1.relkind = ANY\n('{r,p}'::\"char\"[])) AND\n(lower(((r_1.relname)::information_schema.sql_identifier)::text) =\n'secrole_condcollection'::text) AND (pg_has_role(r_1.relowner,\n'USAGE'::text) OR has_table_privilege(r_1.oid, 'INSERT, UPDATE,\nDELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(r_1.oid, 'INSERT, UPDATE,\nREFERENCES'::text)))\n\n Rows Removed by Filter: 3252\n\n Buffers: shared hit=274\n\n -> Index Scan using pg_attribute_relid_attnum_index on\npg_catalog.pg_attribute a_2 (cost=0.29..15.56 rows=2 width=6) (actual\ntime=0.015..0.016 rows=2 loops=1)\n\n Output: a_2.attrelid, a_2.attname, a_2.atttypid,\na_2.attstattarget, a_2.attlen, a_2.attnum, a_2.attndims,\na_2.attcacheoff, a_2.atttypmod, a_2.attbyval, a_2.attstorage,\na_2.attalign, a_2.attnotnull, a_2.atthasdef, a_2.atthasmissing,\na_2.attidentity, a_2.attgenerated, a_2.attisdropped, a_2.attislocal,\na_2.attinhcount, a_2.attcollation, a_2.attacl, a_2.attoptions,\na_2.attfdwoptions, a_2.attmissingval\n\n Index Cond: ((a_2.attrelid = r_1.oid) AND\n(a_2.attnum > 0))\n\n Filter: (a_2.attnotnull AND (NOT\na_2.attisdropped))\n\n Buffers: shared hit=3\n ->\n Index Scan using pg_constraint_conname_nsp_index on\npg_catalog.pg_constraint con (cost=0.28..8.30 rows=1 width=80)\n(actual time=0.006..0.007 rows=0 loops=5)\n\n Output: con.oid, con.conname, con.connamespace, con.contype,\ncon.condeferrable, con.condeferred, con.convalidated, con.conrelid,\ncon.contypid, con.conindid, con.conparentid, con.confrelid,\ncon.confupdtype, con.confdeltype, con.confmatchtype, con.conislocal,\ncon.coninhcount, con.connoinherit, con.conkey, con.confkey,\ncon.conpfeqop, con.conppeqop, con.conffeqop, con.conexclop, con.conbin\n\n Index Cond: (con.conname = (\"*SELECT* 1\".constraint_name)::name)\n\n Filter: (con.contype = 'f'::\"char\")\n\n Rows Removed by Filter: 0\n\n Buffers: shared hit=13\n ->\nIndex Scan using pg_class_oid_index on pg_catalog.pg_class c\n(cost=0.28..1.48 rows=1 width=4) (actual time=0.010..0.011 rows=1\nloops=2)\n\nOutput: c.oid, c.relname, c.relnamespace, c.reltype, c.reloftype,\nc.relowner, c.relam, c.relfilenode, c.reltablespace, c.relpages,\nc.reltuples, c.relallvisible, c.reltoastrelid, c.relhasindex,\nc.relisshared, c.relpersistence, c.relkind, c.relnatts, c.relchecks,\nc.relhasrules, c.relhastriggers, c.relhassubclass, c.relrowsecurity,\nc.relforcerowsecurity, c.relispopulated, c.relreplident,\nc.relispartition, c.relrewrite, c.relfrozenxid, c.relminmxid,\nc.relacl, c.reloptions, c.relpartbound\n\nIndex Cond: (c.oid = con.conrelid)\n\nFilter: (pg_has_role(c.relowner, 'USAGE'::text) OR\nhas_table_privilege(c.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid,\n'INSERT, UPDATE, REFERENCES'::text))\n\nBuffers: shared hit=6\n -> Seq Scan\non pg_catalog.pg_namespace ncon (cost=0.00..1.06 rows=6 width=4)\n(actual time=0.002..0.002 rows=6 loops=2)\n Output:\nncon.oid, ncon.nspname, ncon.nspowner, ncon.nspacl\n Buffers:\nshared hit=2\n -> Index Scan using\npg_depend_depender_index on pg_catalog.pg_depend d1 (cost=0.29..1.97\nrows=1 width=8) (actual time=0.010..0.011 rows=1 loops=2)\n Output:\nd1.classid, d1.objid, d1.objsubid, 
d1.refclassid, d1.refobjid,\nd1.refobjsubid, d1.deptype\n Index Cond:\n((d1.classid = '2606'::oid) AND (d1.objid = con.oid))\n Filter:\n((d1.refclassid = '1259'::oid) AND (d1.refobjsubid = 0))\n Rows Removed\nby Filter: 2\n Buffers: shared hit=6\n -> Index Scan using\npg_depend_depender_index on pg_catalog.pg_depend d2 (cost=0.29..1.85\nrows=1 width=8) (actual time=0.006..0.010 rows=1 loops=2)\n Output: d2.classid,\nd2.objid, d2.objsubid, d2.refclassid, d2.refobjid, d2.refobjsubid,\nd2.deptype\n Index Cond:\n((d2.classid = '1259'::oid) AND (d2.objid = d1.refobjid) AND\n(d2.objsubid = 0))\n Filter:\n((d2.refclassid = '2606'::oid) AND (d2.deptype = 'i'::\"char\"))\n Buffers: shared hit=6\n -> Index Scan using\npg_constraint_conrelid_contypid_conname_index on\npg_catalog.pg_constraint pkc (cost=0.28..0.64 rows=1 width=76)\n(actual time=0.007..0.007 rows=1 loops=2)\n Output: pkc.oid,\npkc.conname, pkc.connamespace, pkc.contype, pkc.condeferrable,\npkc.condeferred, pkc.convalidated, pkc.conrelid, pkc.contypid,\npkc.conindid, pkc.conparentid, pkc.confrelid, pkc.confupdtype,\npkc.confdeltype, pkc.confmatchtype, pkc.conislocal, pkc.coninhcount,\npkc.connoinherit, pkc.conkey, pkc.confkey, pkc.conpfeqop,\npkc.conppeqop, pkc.conffeqop, pkc.conexclop, pkc.conbin\n Index Cond: (pkc.conrelid\n= con.confrelid)\n Filter: (pkc.contype = ANY\n('{p,u}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=8\n -> Append (cost=599.64..2816.66\nrows=733 width=128) (actual time=1.033..10.237 rows=4287 loops=2)\n Buffers: shared hit=1695\n -> Subquery Scan on \"*SELECT*\n1_1\" (cost=599.64..645.39 rows=175 width=128) (actual\ntime=1.032..3.966 rows=1707 loops=2)\n Output: \"*SELECT*\n1_1\".table_name, \"*SELECT* 1_1\".constraint_name\n Buffers: shared hit=328\n -> Nested Loop\n(cost=599.64..643.64 rows=175 width=512) (actual time=1.032..3.842\nrows=1707 loops=2)\n Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_2.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_2.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter:\n(c_2.connamespace = nc_1.oid)\n Rows Removed by Join\nFilter: 8624\n Buffers: shared hit=328\n -> Nested Loop\n(cost=599.64..628.68 rows=175 width=132) (actual time=1.028..2.578\nrows=1707 loops=2)\n Output:\nr_2.relname, c_2.conname, c_2.connamespace\n Inner Unique: true\n Join Filter:\n(r_2.relnamespace = nr_2.oid)\n Rows Removed\nby Join Filter: 5210\n Buffers: shared hit=327\n -> Merge Join\n (cost=599.64..613.40 rows=263 width=136) (actual time=1.019..1.684\nrows=1707 loops=2)\n Output:\nc_2.conname, c_2.connamespace, r_2.relname, r_2.relnamespace\n Inner Unique: true\n Merge\nCond: (c_2.conrelid = r_2.oid)\n Buffers:\nshared hit=326\n -> Sort\n (cost=169.02..173.43 rows=1762 width=72) (actual time=0.473..0.622\nrows=1709 loops=2)\n\nOutput: c_2.conname, c_2.connamespace, c_2.conrelid\n\nSort Key: c_2.conrelid\n\nSort Method: quicksort Memory: 289kB\n\nBuffers: shared hit=52\n ->\n Seq Scan on pg_catalog.pg_constraint c_2 (cost=0.00..74.03 rows=1762\nwidth=72) (actual time=0.005..0.469 rows=1709 loops=1)\n\n Output: c_2.conname, c_2.connamespace, c_2.conrelid\n\n Filter: (c_2.contype <> ALL ('{t,x}'::\"char\"[]))\n\n Buffers: shared hit=52\n -> Sort\n (cost=430.62..431.81 rows=476 
width=72) (actual time=0.533..0.604\nrows=694 loops=2)\n\nOutput: r_2.relname, r_2.relnamespace, r_2.oid\n\nSort Key: r_2.oid\n\nSort Method: quicksort Memory: 122kB\n\nBuffers: shared hit=274\n ->\n Seq Scan on pg_catalog.pg_class r_2 (cost=0.00..409.45 rows=476\nwidth=72) (actual time=0.007..0.882 rows=694 loops=1)\n\n Output: r_2.relname, r_2.relnamespace, r_2.oid\n\n Filter: ((r_2.relkind = ANY ('{r,p}'::\"char\"[])) AND\n(pg_has_role(r_2.relowner, 'USAGE'::text) OR\nhas_table_privilege(r_2.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_2.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n\n Rows Removed by Filter: 2559\n\n Buffers: shared hit=274\n ->\nMaterialize (cost=0.00..1.09 rows=4 width=4) (actual\ntime=0.000..0.000 rows=4 loops=3414)\n Output: nr_2.oid\n Buffers:\nshared hit=1\n -> Seq\nScan on pg_catalog.pg_namespace nr_2 (cost=0.00..1.07 rows=4 width=4)\n(actual time=0.009..0.015 rows=7 loops=1)\n\nOutput: nr_2.oid\n\nFilter: (NOT pg_is_other_temp_schema(nr_2.oid))\n\nRows Removed by Filter: 2\n\nBuffers: shared hit=1\n -> Materialize\n(cost=0.00..1.09 rows=6 width=4) (actual time=0.000..0.000 rows=6\nloops=3414)\n Output: nc_1.oid\n Buffers: shared hit=1\n -> Seq Scan\non pg_catalog.pg_namespace nc_1 (cost=0.00..1.06 rows=6 width=4)\n(actual time=0.003..0.004 rows=9 loops=1)\n Output: nc_1.oid\n Buffers:\nshared hit=1\n -> Subquery Scan on \"*SELECT*\n2_1\" (cost=2110.11..2167.61 rows=558 width=128) (actual\ntime=3.730..6.052 rows=2580 loops=2)\n Output: \"*SELECT*\n2_1\".table_name, \"*SELECT* 2_1\".constraint_name\n Buffers: shared hit=1367\n -> Merge Join\n(cost=2110.11..2162.03 rows=558 width=512) (actual time=3.729..5.866\nrows=2580 loops=2)\n Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier, (((((((nr_3.oid)::text ||\n'_'::text) || (r_3.oid)::text) || '_'::text) || (a_3.attnum)::text) ||\n'_not_null'::text))::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_3.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Merge Cond: (r_3.oid\n= a_3.attrelid)\n Buffers: shared hit=1367\n -> Sort\n(cost=449.42..450.21 rows=317 width=72) (actual time=0.900..0.965\nrows=694 loops=2)\n Output:\nnr_3.oid, r_3.oid, r_3.relname\n Sort Key: r_3.oid\n Sort Method:\nquicksort Memory: 122kB\n Buffers: shared hit=275\n -> Nested\nLoop (cost=0.00..436.25 rows=317 width=72) (actual time=0.038..1.605\nrows=694 loops=1)\n Output:\nnr_3.oid, r_3.oid, r_3.relname\n Inner Unique: true\n Join\nFilter: (nr_3.oid = r_3.relnamespace)\n Rows\nRemoved by Join Filter: 2013\n Buffers:\nshared hit=275\n -> Seq\nScan on pg_catalog.pg_class r_3 (cost=0.00..409.45 rows=476 width=72)\n(actual time=0.022..1.227 rows=694 loops=1)\n\nOutput: r_3.oid, r_3.relname, r_3.relnamespace, r_3.reltype,\nr_3.reloftype, r_3.relowner, r_3.relam, r_3.relfilenode,\nr_3.reltablespace, r_3.relpages, r_3.reltuples, r_3.relallvisible,\nr_3.reltoastrelid, r_3.relhasindex, r_3.relisshared,\nr_3.relpersistence, r_3.relkind, r_3.relnatts, r_3.relchecks,\nr_3.relhasrules, r_3.relhastriggers, r_3.relhassubclass,\nr_3.relrowsecurity, r_3.relforcerowsecurity, r_3.relispopulated,\nr_3.relreplident, r_3.relispartition, r_3.relrewrite,\nr_3.relfrozenxid, r_3.relminmxid, r_3.relacl, r_3.reloptions,\nr_3.relpartbound\n\nFilter: ((r_3.relkind 
= ANY ('{r,p}'::\"char\"[])) AND\n(pg_has_role(r_3.relowner, 'USAGE'::text) OR\nhas_table_privilege(r_3.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_3.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n\nRows Removed by Filter: 2559\n\nBuffers: shared hit=274\n ->\nMaterialize (cost=0.00..1.09 rows=4 width=4) (actual\ntime=0.000..0.000 rows=4 loops=694)\n\nOutput: nr_3.oid\n\nBuffers: shared hit=1\n ->\n Seq Scan on pg_catalog.pg_namespace nr_3 (cost=0.00..1.07 rows=4\nwidth=4) (actual time=0.006..0.009 rows=7 loops=1)\n\n Output: nr_3.oid\n\n Filter: (NOT pg_is_other_temp_schema(nr_3.oid))\n\n Rows Removed by Filter: 2\n\n Buffers: shared hit=1\n -> Sort\n(cost=1660.69..1674.70 rows=5606 width=6) (actual time=2.822..2.946\nrows=2598 loops=2)\n Output:\na_3.attnum, a_3.attrelid\n Sort Key: a_3.attrelid\n Sort Method:\nquicksort Memory: 218kB\n Buffers: shared hit=1092\n -> Seq Scan\non pg_catalog.pg_attribute a_3 (cost=0.00..1311.64 rows=5606 width=6)\n(actual time=0.008..5.054 rows=2598 loops=1)\n Output:\na_3.attnum, a_3.attrelid\n Filter:\n(a_3.attnotnull AND (NOT a_3.attisdropped) AND (a_3.attnum > 0))\n Rows\nRemoved by Filter: 15396\n Buffers:\nshared hit=1092\n -> ProjectSet (cost=564.95..1875.97\nrows=249000 width=341) (actual time=2.154..10.656 rows=1983 loops=2)\n Output: r_4.oid, NULL::name,\nr_4.relowner, NULL::name, NULL::name, NULL::oid, c_3.conname,\nNULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid,\ninformation_schema._pg_expandarray(c_3.conkey)\n Buffers: shared hit=328\n -> Merge Join (cost=564.95..567.48\nrows=249 width=95) (actual time=2.034..2.481 rows=1707 loops=2)\n Output: c_3.conkey, r_4.oid,\nr_4.relowner, c_3.conname\n Inner Unique: true\n Merge Cond: (c_3.connamespace = nc_2.oid)\n Buffers: shared hit=328\n -> Sort (cost=563.80..564.43\nrows=249 width=99) (actual time=2.026..2.119 rows=1707 loops=2)\n Output: r_4.oid,\nr_4.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Sort Key: c_3.connamespace\n Sort Method: quicksort\nMemory: 289kB\n Buffers: shared hit=327\n -> Nested Loop\n(cost=516.77..553.89 rows=249 width=99) (actual time=2.080..3.571\nrows=1707 loops=1)\n Output: r_4.oid,\nr_4.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Inner Unique: true\n Join Filter:\n(r_4.relnamespace = nr_4.oid)\n Rows Removed by Join\nFilter: 5210\n Buffers: shared hit=327\n -> Merge Join\n(cost=516.77..532.60 rows=374 width=103) (actual time=2.065..2.631\nrows=1707 loops=1)\n Output:\nr_4.oid, r_4.relowner, r_4.relnamespace, c_3.conname, c_3.conkey,\nc_3.connamespace\n Merge Cond:\n(r_4.oid = c_3.conrelid)\n Buffers: shared hit=326\n -> Sort\n(cost=345.67..347.36 rows=677 width=12) (actual time=0.999..1.034\nrows=694 loops=1)\n Output:\nr_4.oid, r_4.relowner, r_4.relnamespace\n Sort Key: r_4.oid\n Sort\nMethod: quicksort Memory: 57kB\n Buffers:\nshared hit=274\n -> Seq\nScan on pg_catalog.pg_class r_4 (cost=0.00..313.84 rows=677 width=12)\n(actual time=0.014..0.848 rows=694 loops=1)\n\nOutput: r_4.oid, r_4.relowner, r_4.relnamespace\n\nFilter: (r_4.relkind = ANY ('{r,p}'::\"char\"[]))\n\nRows Removed by Filter: 2559\n\nBuffers: shared hit=274\n -> Sort\n(cost=171.10..175.50 rows=1760 width=95) (actual time=1.056..1.164\nrows=1707 loops=1)\n Output:\nc_3.conname, c_3.conkey, c_3.conrelid, c_3.connamespace\n Sort\nKey: c_3.conrelid\n Sort\nMethod: quicksort Memory: 289kB\n Buffers:\nshared hit=52\n -> Seq\nScan on pg_catalog.pg_constraint c_3 (cost=0.00..76.23 rows=1760\nwidth=95) (actual time=0.009..0.519 
rows=1707 loops=1)\n\nOutput: c_3.conname, c_3.conkey, c_3.conrelid, c_3.connamespace\n\nFilter: (c_3.contype = ANY ('{p,u,f}'::\"char\"[]))\n\nRows Removed by Filter: 2\n\nBuffers: shared hit=52\n -> Materialize\n(cost=0.00..1.09 rows=4 width=4) (actual time=0.000..0.000 rows=4\nloops=1707)\n Output: nr_4.oid\n Buffers: shared hit=1\n -> Seq Scan\non pg_catalog.pg_namespace nr_4 (cost=0.00..1.07 rows=4 width=4)\n(actual time=0.007..0.011 rows=7 loops=1)\n Output: nr_4.oid\n Filter:\n(NOT pg_is_other_temp_schema(nr_4.oid))\n Rows\nRemoved by Filter: 2\n Buffers:\nshared hit=1\n -> Sort (cost=1.14..1.15\nrows=6 width=4) (actual time=0.006..0.008 rows=9 loops=2)\n Output: nc_2.oid\n Sort Key: nc_2.oid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nc_2 (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.006..0.007 rows=9 loops=1)\n Output: nc_2.oid\n Buffers: shared hit=1\n -> Index Scan using\npg_attribute_relid_attnum_index on pg_catalog.pg_attribute a\n(cost=0.29..0.33 rows=1 width=70) (actual time=0.020..0.020 rows=1\nloops=2)\n Output: a.attrelid, a.attname, a.atttypid,\na.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff,\na.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull,\na.atthasdef, a.atthasmissing, a.attidentity, a.attgenerated,\na.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl,\na.attoptions, a.attfdwoptions, a.attmissingval\n Index Cond: ((a.attrelid = r_4.oid) AND\n(a.attnum = ((information_schema._pg_expandarray(c_3.conkey))).x))\n Filter: ((NOT a.attisdropped) AND\n(pg_has_role(r_4.relowner, 'USAGE'::text) OR\nhas_column_privilege(r_4.oid, a.attnum, 'SELECT, INSERT, UPDATE,\nREFERENCES'::text)))\n Buffers: shared hit=6\n -> Subquery Scan on \"*SELECT* 1_2\" (cost=0.28..173.32\nrows=1 width=128) (actual time=0.040..5.978 rows=595 loops=2)\n Output: \"*SELECT* 1_2\".constraint_name, \"*SELECT*\n1_2\".table_name\n Buffers: shared hit=6054\n -> Nested Loop (cost=0.28..173.31 rows=1\nwidth=512) (actual time=0.040..5.914 rows=595 loops=2)\n Output:\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(c_4.conname)::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\nNULL::information_schema.sql_identifier,\n(r_5.relname)::information_schema.sql_identifier,\nNULL::information_schema.character_data,\nNULL::information_schema.yes_or_no,\nNULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (r_5.relnamespace = nr_5.oid)\n Rows Removed by Join Filter: 1836\n Buffers: shared hit=6054\n -> Nested Loop (cost=0.28..172.19 rows=1\nwidth=132) (actual time=0.031..3.784 rows=595 loops=2)\n Output: c_4.conname, r_5.relname,\nr_5.relnamespace\n Inner Unique: true\n Join Filter: (c_4.connamespace = nc_3.oid)\n Rows Removed by Join Filter: 3026\n Buffers: shared hit=4864\n -> Nested Loop (cost=0.28..171.05\nrows=1 width=136) (actual time=0.024..1.976 rows=595 loops=2)\n Output: c_4.conname,\nc_4.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Buffers: shared hit=3674\n -> Seq Scan on\npg_catalog.pg_constraint c_4 (cost=0.00..96.05 rows=9 width=72)\n(actual time=0.012..0.489 rows=595 loops=2)\n Output: c_4.oid,\nc_4.conname, c_4.connamespace, c_4.contype, c_4.condeferrable,\nc_4.condeferred, c_4.convalidated, c_4.conrelid, c_4.contypid,\nc_4.conindid, c_4.conparentid, c_4.confrelid, c_4.confupdtype,\nc_4.confdeltype, c_4.confmatchtype, c_4.conislocal, 
c_4.coninhcount,\nc_4.connoinherit, c_4.conkey, c_4.confkey, c_4.conpfeqop,\nc_4.conppeqop, c_4.conffeqop, c_4.conexclop, c_4.conbin\n Filter: ((c_4.contype <>\nALL ('{t,x}'::\"char\"[])) AND ((CASE c_4.contype WHEN 'c'::\"char\" THEN\n'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN KEY'::text WHEN\n'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\" THEN\n'UNIQUE'::text ELSE NULL::text END)::text = 'PRIMARY KEY'::text))\n Rows Removed by Filter: 1114\n Buffers: shared hit=104\n -> Index Scan using\npg_class_oid_index on pg_catalog.pg_class r_5 (cost=0.28..8.33 rows=1\nwidth=72) (actual time=0.002..0.002 rows=1 loops=1190)\n Output: r_5.oid,\nr_5.relname, r_5.relnamespace, r_5.reltype, r_5.reloftype,\nr_5.relowner, r_5.relam, r_5.relfilenode, r_5.reltablespace,\nr_5.relpages, r_5.reltuples, r_5.relallvisible, r_5.reltoastrelid,\nr_5.relhasindex, r_5.relisshared, r_5.relpersistence, r_5.relkind,\nr_5.relnatts, r_5.relchecks, r_5.relhasrules, r_5.relhastriggers,\nr_5.relhassubclass, r_5.relrowsecurity, r_5.relforcerowsecurity,\nr_5.relispopulated, r_5.relreplident, r_5.relispartition,\nr_5.relrewrite, r_5.relfrozenxid, r_5.relminmxid, r_5.relacl,\nr_5.reloptions, r_5.relpartbound\n Index Cond: (r_5.oid = c_4.conrelid)\n Filter: ((r_5.relkind =\nANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_5.relowner, 'USAGE'::text)\nOR has_table_privilege(r_5.oid, 'INSERT, UPDATE, DELETE, TRUNCATE,\nREFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_5.oid,\n'INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=3570\n -> Seq Scan on\npg_catalog.pg_namespace nc_3 (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.000..0.001 rows=6 loops=1190)\n Output: nc_3.oid, nc_3.nspname,\nnc_3.nspowner, nc_3.nspacl\n Buffers: shared hit=1190\n -> Seq Scan on pg_catalog.pg_namespace nr_5\n (cost=0.00..1.07 rows=4 width=4) (actual time=0.001..0.002 rows=4\nloops=1190)\n Output: nr_5.oid, nr_5.nspname,\nnr_5.nspowner, nr_5.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_5.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1190\n -> ProjectSet (cost=564.95..1875.97 rows=249000 width=341)\n(actual time=2.653..10.818 rows=1983 loops=2)\n Output: r_6.oid, NULL::name, r_6.relowner, NULL::name,\nNULL::name, NULL::oid, c_5.conname, NULL::\"char\", NULL::oid,\nNULL::smallint[], NULL::oid,\ninformation_schema._pg_expandarray(c_5.conkey)\n Buffers: shared hit=328\n -> Merge Join (cost=564.95..567.48 rows=249 width=95)\n(actual time=2.571..3.014 rows=1707 loops=2)\n Output: c_5.conkey, r_6.oid, r_6.relowner, c_5.conname\n Inner Unique: true\n Merge Cond: (c_5.connamespace = nc_4.oid)\n Buffers: shared hit=328\n -> Sort (cost=563.80..564.43 rows=249 width=99)\n(actual time=2.557..2.654 rows=1707 loops=2)\n Output: r_6.oid, r_6.relowner, c_5.conname,\nc_5.conkey, c_5.connamespace\n Sort Key: c_5.connamespace\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=327\n -> Nested Loop (cost=516.77..553.89\nrows=249 width=99) (actual time=2.335..4.616 rows=1707 loops=1)\n Output: r_6.oid, r_6.relowner,\nc_5.conname, c_5.conkey, c_5.connamespace\n Inner Unique: true\n Join Filter: (r_6.relnamespace = nr_6.oid)\n Rows Removed by Join Filter: 5210\n Buffers: shared hit=327\n -> Merge Join (cost=516.77..532.60\nrows=374 width=103) (actual time=2.320..2.962 rows=1707 loops=1)\n Output: r_6.oid, r_6.relowner,\nr_6.relnamespace, c_5.conname, c_5.conkey, c_5.connamespace\n Merge Cond: (r_6.oid = c_5.conrelid)\n Buffers: shared hit=326\n -> Sort (cost=345.67..347.36\nrows=677 width=12) (actual time=1.185..1.231 rows=694 
loops=1)\n Output: r_6.oid,\nr_6.relowner, r_6.relnamespace\n Sort Key: r_6.oid\n Sort Method: quicksort Memory: 57kB\n Buffers: shared hit=274\n -> Seq Scan on\npg_catalog.pg_class r_6 (cost=0.00..313.84 rows=677 width=12) (actual\ntime=0.008..1.020 rows=694 loops=1)\n Output: r_6.oid,\nr_6.relowner, r_6.relnamespace\n Filter: (r_6.relkind\n= ANY ('{r,p}'::\"char\"[]))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Sort (cost=171.10..175.50\nrows=1760 width=95) (actual time=1.124..1.233 rows=1707 loops=1)\n Output: c_5.conname,\nc_5.conkey, c_5.conrelid, c_5.connamespace\n Sort Key: c_5.conrelid\n Sort Method: quicksort\nMemory: 289kB\n Buffers: shared hit=52\n -> Seq Scan on\npg_catalog.pg_constraint c_5 (cost=0.00..76.23 rows=1760 width=95)\n(actual time=0.007..0.544 rows=1707 loops=1)\n Output: c_5.conname,\nc_5.conkey, c_5.conrelid, c_5.connamespace\n Filter: (c_5.contype\n= ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=52\n -> Materialize (cost=0.00..1.09\nrows=4 width=4) (actual time=0.000..0.001 rows=4 loops=1707)\n Output: nr_6.oid\n Buffers: shared hit=1\n -> Seq Scan on\npg_catalog.pg_namespace nr_6 (cost=0.00..1.07 rows=4 width=4) (actual\ntime=0.006..0.013 rows=7 loops=1)\n Output: nr_6.oid\n Filter: (NOT\npg_is_other_temp_schema(nr_6.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Sort (cost=1.14..1.15 rows=6 width=4) (actual\ntime=0.010..0.011 rows=9 loops=2)\n Output: nc_4.oid\n Sort Key: nc_4.oid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc_4\n (cost=0.00..1.06 rows=6 width=4) (actual time=0.013..0.014 rows=9\nloops=1)\n Output: nc_4.oid\n Buffers: shared hit=1\n -> Index Scan using pg_attribute_relid_attnum_index on\npg_catalog.pg_attribute a_1 (cost=0.29..0.33 rows=1 width=70) (actual\ntime=0.015..0.015 rows=1 loops=2)\n Output: a_1.attrelid, a_1.attname, a_1.atttypid,\na_1.attstattarget, a_1.attlen, a_1.attnum, a_1.attndims,\na_1.attcacheoff, a_1.atttypmod, a_1.attbyval, a_1.attstorage,\na_1.attalign, a_1.attnotnull, a_1.atthasdef, a_1.atthasmissing,\na_1.attidentity, a_1.attgenerated, a_1.attisdropped, a_1.attislocal,\na_1.attinhcount, a_1.attcollation, a_1.attacl, a_1.attoptions,\na_1.attfdwoptions, a_1.attmissingval\n Index Cond: ((a_1.attrelid = r_6.oid) AND (a_1.attnum =\n((information_schema._pg_expandarray(c_5.conkey))).x))\n Filter: ((NOT a_1.attisdropped) AND (pg_has_role(r_6.relowner,\n'USAGE'::text) OR has_column_privilege(r_6.oid, a_1.attnum, 'SELECT,\nINSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\nPlanning Time: 7.329 ms\nExecution Time: 80.546 ms\n\nserver parameters (everything except random_page_cost )\n\nallow_system_table_mods\toff\tAllows modifications of the structure of\nsystem tables.\napplication_name\tpgAdmin 4 - CONN:6043198\tSets the application name to\nbe reported in statistics and logs.\narchive_cleanup_command\t\tSets the shell command that will be executed\nat every restart point.\narchive_command\t(disabled)\tSets the shell command that will be called\nto archive a WAL file.\narchive_mode\toff\tAllows archiving of WAL files using archive_command.\narchive_timeout\t0\tForces a switch to the next WAL file if a new file\nhas not been started within N seconds.\narray_nulls\ton\tEnable input of NULL elements in arrays.\nauthentication_timeout\t1min\tSets the maximum allowed time to complete\nclient authentication.\nautovacuum\ton\tStarts the autovacuum 
subprocess.\nautovacuum_analyze_scale_factor\t0.1\tNumber of tuple inserts, updates,\nor deletes prior to analyze as a fraction of reltuples.\nautovacuum_analyze_threshold\t50\tMinimum number of tuple inserts,\nupdates, or deletes prior to analyze.\nautovacuum_freeze_max_age\t200000000\tAge at which to autovacuum a table\nto prevent transaction ID wraparound.\nautovacuum_max_workers\t3\tSets the maximum number of simultaneously\nrunning autovacuum worker processes.\nautovacuum_multixact_freeze_max_age\t400000000\tMultixact age at which\nto autovacuum a table to prevent multixact wraparound.\nautovacuum_naptime\t1min\tTime to sleep between autovacuum runs.\nautovacuum_vacuum_cost_delay\t2ms\tVacuum cost delay in milliseconds,\nfor autovacuum.\nautovacuum_vacuum_cost_limit\t-1\tVacuum cost amount available before\nnapping, for autovacuum.\nautovacuum_vacuum_scale_factor\t0.2\tNumber of tuple updates or deletes\nprior to vacuum as a fraction of reltuples.\nautovacuum_vacuum_threshold\t50\tMinimum number of tuple updates or\ndeletes prior to vacuum.\nautovacuum_work_mem\t-1\tSets the maximum memory to be used by each\nautovacuum worker process.\nbackend_flush_after\t0\tNumber of pages after which previously performed\nwrites are flushed to disk.\nbackslash_quote\tsafe_encoding\tSets whether \"\\'\" is allowed in string literals.\nbgwriter_delay\t200ms\tBackground writer sleep time between rounds.\nbgwriter_flush_after\t0\tNumber of pages after which previously\nperformed writes are flushed to disk.\nbgwriter_lru_maxpages\t100\tBackground writer maximum number of LRU\npages to flush per round.\nbgwriter_lru_multiplier\t2\tMultiple of the average buffer usage to free\nper round.\nblock_size\t8192\tShows the size of a disk block.\nbonjour\toff\tEnables advertising the server via Bonjour.\nbonjour_name\t\tSets the Bonjour service name.\nbytea_output\thex\tSets the output format for bytea.\ncheck_function_bodies\ton\tCheck function bodies during CREATE FUNCTION.\ncheckpoint_completion_target\t0.5\tTime spent flushing dirty buffers\nduring checkpoint, as fraction of checkpoint interval.\ncheckpoint_flush_after\t0\tNumber of pages after which previously\nperformed writes are flushed to disk.\ncheckpoint_timeout\t5min\tSets the maximum time between automatic WAL checkpoints.\ncheckpoint_warning\t30s\tEnables warnings if checkpoint segments are\nfilled more frequently than this.\nclient_encoding\tUNICODE\tSets the client's character set encoding.\nclient_min_messages\tnotice\tSets the message levels that are sent to the client.\ncluster_name\t\tSets the name of the cluster, which is included in the\nprocess title.\ncommit_delay\t0\tSets the delay in microseconds between transaction\ncommit and flushing WAL to disk.\ncommit_siblings\t5\tSets the minimum concurrent open transactions before\nperforming commit_delay.\nconfig_file\tD:/ASCDB/postgresql.conf\tSets the server's main configuration file.\nconstraint_exclusion\tpartition\tEnables the planner to use constraints\nto optimize queries.\ncpu_index_tuple_cost\t0.005\tSets the planner's estimate of the cost of\nprocessing each index entry during an index scan.\ncpu_operator_cost\t0.0025\tSets the planner's estimate of the cost of\nprocessing each operator or function call.\ncpu_tuple_cost\t0.01\tSets the planner's estimate of the cost of\nprocessing each tuple (row).\ncursor_tuple_fraction\t0.1\tSets the planner's estimate of the fraction\nof a cursor's rows that will be retrieved.\ndata_checksums\toff\tShows whether data checksums are turned on for 
this cluster.\ndata_directory\tD:/ASCDB\tSets the server's data directory.\ndata_directory_mode\t0700\tMode of the data directory.\ndata_sync_retry\toff\tWhether to continue running after a failure to\nsync data files.\nDateStyle\tISO, DMY\tSets the display format for date and time values.\ndb_user_namespace\toff\tEnables per-database user names.\ndeadlock_timeout\t1s\tSets the time to wait on a lock before checking\nfor deadlock.\ndebug_assertions\toff\tShows whether the running server has assertion\nchecks enabled.\ndebug_pretty_print\ton\tIndents parse and plan tree displays.\ndebug_print_parse\toff\tLogs each query's parse tree.\ndebug_print_plan\toff\tLogs each query's execution plan.\ndebug_print_rewritten\toff\tLogs each query's rewritten parse tree.\ndefault_statistics_target\t100\tSets the default statistics target.\ndefault_table_access_method\theap\tSets the default table access method\nfor new tables.\ndefault_tablespace\t\tSets the default tablespace to create tables and indexes in.\ndefault_text_search_config\tpg_catalog.english\tSets default text search\nconfiguration.\ndefault_transaction_deferrable\toff\tSets the default deferrable status\nof new transactions.\ndefault_transaction_isolation\tread committed\tSets the transaction\nisolation level of each new transaction.\ndefault_transaction_read_only\toff\tSets the default read-only status of\nnew transactions.\ndynamic_library_path\t$libdir\tSets the path for dynamically loadable modules.\ndynamic_shared_memory_type\twindows\tSelects the dynamic shared memory\nimplementation used.\neffective_cache_size\t9GB\tSets the planner's assumption about the total\nsize of the data caches.\neffective_io_concurrency\t0\tNumber of simultaneous requests that can be\nhandled efficiently by the disk subsystem.\nenable_bitmapscan\ton\tEnables the planner's use of bitmap-scan plans.\nenable_gathermerge\ton\tEnables the planner's use of gather merge plans.\nenable_hashagg\ton\tEnables the planner's use of hashed aggregation plans.\nenable_hashjoin\ton\tEnables the planner's use of hash join plans.\nenable_indexonlyscan\ton\tEnables the planner's use of index-only-scan plans.\nenable_indexscan\ton\tEnables the planner's use of index-scan plans.\nenable_material\ton\tEnables the planner's use of materialization.\nenable_mergejoin\ton\tEnables the planner's use of merge join plans.\nenable_nestloop\ton\tEnables the planner's use of nested-loop join plans.\nenable_parallel_append\ton\tEnables the planner's use of parallel append plans.\nenable_parallel_hash\ton\tEnables the planner's use of parallel hash plans.\nenable_partition_pruning\ton\tEnables plan-time and run-time partition pruning.\nenable_partitionwise_aggregate\toff\tEnables partitionwise aggregation\nand grouping.\nenable_partitionwise_join\toff\tEnables partitionwise join.\nenable_seqscan\ton\tEnables the planner's use of sequential-scan plans.\nenable_sort\ton\tEnables the planner's use of explicit sort steps.\nenable_tidscan\ton\tEnables the planner's use of TID scan plans.\nescape_string_warning\ton\tWarn about backslash escapes in ordinary\nstring literals.\nevent_source\tPostgreSQL\tSets the application name used to identify\nPostgreSQL messages in the event log.\nexit_on_error\toff\tTerminate session on any error.\nexternal_pid_file\t\tWrites the postmaster PID to the specified file.\nextra_float_digits\t1\tSets the number of digits displayed for\nfloating-point values.\nforce_parallel_mode\toff\tForces use of parallel query facilities.\nfrom_collapse_limit\t80\tSets the 
FROM-list size beyond which subqueries\nare not collapsed.\nfsync\ton\tForces synchronization of updates to disk.\nfull_page_writes\ton\tWrites full pages to WAL when first modified after\na checkpoint.\ngeqo\ton\tEnables genetic query optimization.\ngeqo_effort\t5\tGEQO: effort is used to set the default for other GEQO parameters.\ngeqo_generations\t0\tGEQO: number of iterations of the algorithm.\ngeqo_pool_size\t0\tGEQO: number of individuals in the population.\ngeqo_seed\t0\tGEQO: seed for random path selection.\ngeqo_selection_bias\t2\tGEQO: selective pressure within the population.\ngeqo_threshold\t12\tSets the threshold of FROM items beyond which GEQO is used.\ngin_fuzzy_search_limit\t0\tSets the maximum allowed result for exact\nsearch by GIN.\ngin_pending_list_limit\t4MB\tSets the maximum size of the pending list\nfor GIN index.\nhba_file\tD:/ASCDB/pg_hba.conf\tSets the server's \"hba\" configuration file.\nhot_standby\ton\tAllows connections and queries during recovery.\nhot_standby_feedback\toff\tAllows feedback from a hot standby to the\nprimary that will avoid query conflicts.\nhuge_pages\ttry\tUse of huge pages on Linux or Windows.\nident_file\tD:/ASCDB/pg_ident.conf\tSets the server's \"ident\" configuration file.\nidle_in_transaction_session_timeout\t0\tSets the maximum allowed\nduration of any idling transaction.\nignore_checksum_failure\toff\tContinues processing after a checksum failure.\nignore_system_indexes\toff\tDisables reading from system indexes.\ninteger_datetimes\ton\tDatetimes are integer based.\nIntervalStyle\tpostgres\tSets the display format for interval values.\njit\ton\tAllow JIT compilation.\njit_above_cost\t100000\tPerform JIT compilation if query is more expensive.\njit_debugging_support\toff\tRegister JIT compiled function with debugger.\njit_dump_bitcode\toff\tWrite out LLVM bitcode to facilitate JIT debugging.\njit_expressions\ton\tAllow JIT compilation of expressions.\njit_inline_above_cost\t500000\tPerform JIT inlining if query is more expensive.\njit_optimize_above_cost\t500000\tOptimize JITed functions if query is\nmore expensive.\njit_profiling_support\toff\tRegister JIT compiled function with perf profiler.\njit_provider\tllvmjit\tJIT provider to use.\njit_tuple_deforming\ton\tAllow JIT compilation of tuple deforming.\njoin_collapse_limit\t80\tSets the FROM-list size beyond which JOIN\nconstructs are not flattened.\nkrb_caseins_users\toff\tSets whether Kerberos and GSSAPI user names\nshould be treated as case-insensitive.\nkrb_server_keyfile\t\tSets the location of the Kerberos server key file.\nlc_collate\tEnglish_United Kingdom.1252\tShows the collation order locale.\nlc_ctype\tEnglish_United Kingdom.1252\tShows the character\nclassification and case conversion locale.\nlc_messages\tEnglish_United States.1252\tSets the language in which\nmessages are displayed.\nlc_monetary\tEnglish_United States.1252\tSets the locale for formatting\nmonetary amounts.\nlc_numeric\tEnglish_United States.1252\tSets the locale for formatting numbers.\nlc_time\tEnglish_United Kingdom.1252\tSets the locale for formatting\ndate and time values.\nlisten_addresses\t*\tSets the host name or IP address(es) to listen to.\nlo_compat_privileges\toff\tEnables backward compatibility mode for\nprivilege checks on large objects.\nlocal_preload_libraries\t\tLists unprivileged shared libraries to\npreload into each backend.\nlock_timeout\t0\tSets the maximum allowed duration of any wait for a lock.\nlog_autovacuum_min_duration\t-1\tSets the minimum execution time above\nwhich 
autovacuum actions will be logged.\nlog_checkpoints\toff\tLogs each checkpoint.\nlog_connections\toff\tLogs each successful connection.\nlog_destination\tstderr\tSets the destination for server log output.\nlog_directory\tlog\tSets the destination directory for log files.\nlog_disconnections\toff\tLogs end of a session, including duration.\nlog_duration\toff\tLogs the duration of each completed SQL statement.\nlog_error_verbosity\tdefault\tSets the verbosity of logged messages.\nlog_executor_stats\toff\tWrites executor performance statistics to the server log.\nlog_file_mode\t0640\tSets the file permissions for log files.\nlog_filename\tpostgresql-%Y-%m-%d_%H%M%S.log\tSets the file name pattern\nfor log files.\nlog_hostname\toff\tLogs the host name in the connection logs.\nlog_line_prefix\t%m [%p] \tControls information prefixed to each log line.\nlog_lock_waits\toff\tLogs long lock waits.\nlog_min_duration_statement\t-1\tSets the minimum execution time above\nwhich statements will be logged.\nlog_min_error_statement\terror\tCauses all statements generating error\nat or above this level to be logged.\nlog_min_messages\twarning\tSets the message levels that are logged.\nlog_parser_stats\toff\tWrites parser performance statistics to the server log.\nlog_planner_stats\toff\tWrites planner performance statistics to the server log.\nlog_replication_commands\toff\tLogs each replication command.\nlog_rotation_age\t1d\tAutomatic log file rotation will occur after N minutes.\nlog_rotation_size\t10MB\tAutomatic log file rotation will occur after N kilobytes.\nlog_statement\tnone\tSets the type of statements logged.\nlog_statement_stats\toff\tWrites cumulative performance statistics to\nthe server log.\nlog_temp_files\t-1\tLog the use of temporary files larger than this\nnumber of kilobytes.\nlog_timezone\tEurope/London\tSets the time zone to use in log messages.\nlog_transaction_sample_rate\t0\tSet the fraction of transactions to log\nfor new transactions.\nlog_truncate_on_rotation\toff\tTruncate existing log files of same name\nduring log rotation.\nlogging_collector\ton\tStart a subprocess to capture stderr output\nand/or csvlogs into log files.\nmaintenance_work_mem\t2047MB\tSets the maximum memory to be used for\nmaintenance operations.\nmax_connections\t140\tSets the maximum number of concurrent connections.\nmax_files_per_process\t1000\tSets the maximum number of simultaneously\nopen files for each server process.\nmax_function_args\t100\tShows the maximum number of function arguments.\nmax_identifier_length\t63\tShows the maximum identifier length.\nmax_index_keys\t32\tShows the maximum number of index keys.\nmax_locks_per_transaction\t64\tSets the maximum number of locks per transaction.\nmax_logical_replication_workers\t4\tMaximum number of logical\nreplication worker processes.\nmax_parallel_maintenance_workers\t2\tSets the maximum number of parallel\nprocesses per maintenance operation.\nmax_parallel_workers\t8\tSets the maximum number of parallel workers\nthat can be active at one time.\nmax_parallel_workers_per_gather\t2\tSets the maximum number of parallel\nprocesses per executor node.\nmax_pred_locks_per_page\t2\tSets the maximum number of predicate-locked\ntuples per page.\nmax_pred_locks_per_relation\t-2\tSets the maximum number of\npredicate-locked pages and tuples per relation.\nmax_pred_locks_per_transaction\t64\tSets the maximum number of predicate\nlocks per transaction.\nmax_prepared_transactions\t0\tSets the maximum number of simultaneously\nprepared 
transactions.\nmax_replication_slots\t10\tSets the maximum number of simultaneously\ndefined replication slots.\nmax_stack_depth\t2MB\tSets the maximum stack depth, in kilobytes.\nmax_standby_archive_delay\t30s\tSets the maximum delay before canceling\nqueries when a hot standby server is processing archived WAL data.\nmax_standby_streaming_delay\t30s\tSets the maximum delay before\ncanceling queries when a hot standby server is processing streamed WAL\ndata.\nmax_sync_workers_per_subscription\t2\tMaximum number of table\nsynchronization workers per subscription.\nmax_wal_senders\t10\tSets the maximum number of simultaneously running\nWAL sender processes.\nmax_wal_size\t2GB\tSets the WAL size that triggers a checkpoint.\nmax_worker_processes\t8\tMaximum number of concurrent worker processes.\nmin_parallel_index_scan_size\t512kB\tSets the minimum amount of index\ndata for a parallel scan.\nmin_parallel_table_scan_size\t8MB\tSets the minimum amount of table data\nfor a parallel scan.\nmin_wal_size\t1GB\tSets the minimum size to shrink the WAL to.\nold_snapshot_threshold\t-1\tTime before a snapshot is too old to read\npages changed after the snapshot was taken.\noperator_precedence_warning\toff\tEmit a warning for constructs that\nchanged meaning since PostgreSQL 9.4.\nparallel_leader_participation\ton\tControls whether Gather and Gather\nMerge also run subplans.\nparallel_setup_cost\t1000\tSets the planner's estimate of the cost of\nstarting up worker processes for parallel query.\nparallel_tuple_cost\t0.1\tSets the planner's estimate of the cost of\npassing each tuple (row) from worker to master backend.\npassword_encryption\tmd5\tChooses the algorithm for encrypting passwords.\nplan_cache_mode\tauto\tControls the planner's selection of custom or generic plan.\nport\t5432\tSets the TCP port the server listens on.\npost_auth_delay\t0\tWaits N seconds on connection startup after authentication.\npre_auth_delay\t0\tWaits N seconds on connection startup before authentication.\nprimary_conninfo\t\tSets the connection string to be used to connect to\nthe sending server.\nprimary_slot_name\t\tSets the name of the replication slot to use on the\nsending server.\npromote_trigger_file\t\tSpecifies a file name whose presence ends\nrecovery in the standby.\nquote_all_identifiers\toff\tWhen generating SQL fragments, quote all identifiers.\nrandom_page_cost\t4\tSets the planner's estimate of the cost of a\nnonsequentially fetched disk page.\nrecovery_end_command\t\tSets the shell command that will be executed\nonce at the end of recovery.\nrecovery_min_apply_delay\t0\tSets the minimum delay for applying changes\nduring recovery.\nrecovery_target\t\tSet to \"immediate\" to end recovery as soon as a\nconsistent state is reached.\nrecovery_target_action\tpause\tSets the action to perform upon reaching\nthe recovery target.\nrecovery_target_inclusive\ton\tSets whether to include or exclude\ntransaction with recovery target.\nrecovery_target_lsn\t\tSets the LSN of the write-ahead log location up\nto which recovery will proceed.\nrecovery_target_name\t\tSets the named restore point up to which\nrecovery will proceed.\nrecovery_target_time\t\tSets the time stamp up to which recovery will proceed.\nrecovery_target_timeline\tlatest\tSpecifies the timeline to recover into.\nrecovery_target_xid\t\tSets the transaction ID up to which recovery will proceed.\nrestart_after_crash\ton\tReinitialize server after backend crash.\nrestore_command\t\tSets the shell command that will retrieve an archived WAL 
file.\nrow_security\ton\tEnable row security.\nsearch_path\t\"$user\", public\tSets the schema search order for names\nthat are not schema-qualified.\nsegment_size\t1GB\tShows the number of pages per disk file.\nseq_page_cost\t1\tSets the planner's estimate of the cost of a\nsequentially fetched disk page.\nserver_encoding\tUTF8\tSets the server (database) character set encoding.\nserver_version\t12.5\tShows the server version.\nserver_version_num\t120005\tShows the server version as an integer.\nsession_preload_libraries\t\tLists shared libraries to preload into each backend.\nsession_replication_role\torigin\tSets the session's behavior for\ntriggers and rewrite rules.\nshared_buffers\t5100MB\tSets the number of shared memory buffers used by\nthe server.\nshared_memory_type\twindows\tSelects the shared memory implementation\nused for the main shared memory region.\nshared_preload_libraries\t\tLists shared libraries to preload into server.\nssl\toff\tEnables SSL connections.\nssl_ca_file\t\tLocation of the SSL certificate authority file.\nssl_cert_file\tserver.crt\tLocation of the SSL server certificate file.\nssl_ciphers\tHIGH:MEDIUM:+3DES:!aNULL\tSets the list of allowed SSL ciphers.\nssl_crl_file\t\tLocation of the SSL certificate revocation list file.\nssl_dh_params_file\t\tLocation of the SSL DH parameters file.\nssl_ecdh_curve\tprime256v1\tSets the curve to use for ECDH.\nssl_key_file\tserver.key\tLocation of the SSL server private key file.\nssl_library\tOpenSSL\tName of the SSL library.\nssl_max_protocol_version\t\tSets the maximum SSL/TLS protocol version to use.\nssl_min_protocol_version\tTLSv1\tSets the minimum SSL/TLS protocol version to use.\nssl_passphrase_command\t\tCommand to obtain passphrases for SSL.\nssl_passphrase_command_supports_reload\toff\tAlso use\nssl_passphrase_command during server reload.\nssl_prefer_server_ciphers\ton\tGive priority to server ciphersuite order.\nstandard_conforming_strings\ton\tCauses '...' 
strings to treat\nbackslashes literally.\nstatement_timeout\t0\tSets the maximum allowed duration of any statement.\nstats_temp_directory\tpg_stat_tmp\tWrites temporary statistics files to\nthe specified directory.\nsuperuser_reserved_connections\t3\tSets the number of connection slots\nreserved for superusers.\nsynchronize_seqscans\ton\tEnable synchronized sequential scans.\nsynchronous_commit\ton\tSets the current transaction's synchronization level.\nsynchronous_standby_names\t\tNumber of synchronous standbys and list of\nnames of potential synchronous ones.\nsyslog_facility\tnone\tSets the syslog \"facility\" to be used when syslog enabled.\nsyslog_ident\tpostgres\tSets the program name used to identify\nPostgreSQL messages in syslog.\nsyslog_sequence_numbers\ton\tAdd sequence number to syslog messages to\navoid duplicate suppression.\nsyslog_split_messages\ton\tSplit messages sent to syslog by lines and to\nfit into 1024 bytes.\ntcp_keepalives_count\t0\tMaximum number of TCP keepalive retransmits.\ntcp_keepalives_idle\t-1\tTime between issuing TCP keepalives.\ntcp_keepalives_interval\t-1\tTime between TCP keepalive retransmits.\ntcp_user_timeout\t0\tTCP user timeout.\ntemp_buffers\t8MB\tSets the maximum number of temporary buffers used by\neach session.\ntemp_file_limit\t-1\tLimits the total size of all temporary files used\nby each process.\ntemp_tablespaces\t\tSets the tablespace(s) to use for temporary tables\nand sort files.\nTimeZone\tEurope/London\tSets the time zone for displaying and\ninterpreting time stamps.\ntimezone_abbreviations\tDefault\tSelects a file of time zone abbreviations.\ntrace_notify\toff\tGenerates debugging output for LISTEN and NOTIFY.\ntrace_recovery_messages\tlog\tEnables logging of recovery-related\ndebugging information.\ntrace_sort\toff\tEmit information about resource usage in sorting.\ntrack_activities\ton\tCollects information about executing commands.\ntrack_activity_query_size\t1kB\tSets the size reserved for\npg_stat_activity.query, in bytes.\ntrack_commit_timestamp\toff\tCollects transaction commit time.\ntrack_counts\ton\tCollects statistics on database activity.\ntrack_functions\tnone\tCollects function-level statistics on database activity.\ntrack_io_timing\toff\tCollects timing statistics for database I/O activity.\ntransaction_deferrable\toff\tWhether to defer a read-only serializable\ntransaction until it can be executed with no possible serialization\nfailures.\ntransaction_isolation\tread committed\tSets the current transaction's\nisolation level.\ntransaction_read_only\toff\tSets the current transaction's read-only status.\ntransform_null_equals\toff\tTreats \"expr=NULL\" as \"expr IS NULL\".\nunix_socket_directories\t\tSets the directories where Unix-domain\nsockets will be created.\nunix_socket_group\t\tSets the owning group of the Unix-domain socket.\nunix_socket_permissions\t0777\tSets the access permissions of the\nUnix-domain socket.\nupdate_process_title\toff\tUpdates the process title to show the active\nSQL command.\nvacuum_cleanup_index_scale_factor\t0.1\tNumber of tuple inserts prior to\nindex cleanup as a fraction of reltuples.\nvacuum_cost_delay\t0\tVacuum cost delay in milliseconds.\nvacuum_cost_limit\t200\tVacuum cost amount available before napping.\nvacuum_cost_page_dirty\t20\tVacuum cost for a page dirtied by vacuum.\nvacuum_cost_page_hit\t1\tVacuum cost for a page found in the buffer cache.\nvacuum_cost_page_miss\t10\tVacuum cost for a page not found in the buffer cache.\nvacuum_defer_cleanup_age\t0\tNumber of 
transactions by which VACUUM and HOT cleanup should be deferred, if any.
vacuum_freeze_min_age	50000000	Minimum age at which VACUUM should freeze a table row.
vacuum_freeze_table_age	150000000	Age at which VACUUM should scan whole table to freeze tuples.
vacuum_multixact_freeze_min_age	5000000	Minimum age at which VACUUM should freeze a MultiXactId in a table row.
vacuum_multixact_freeze_table_age	150000000	Multixact age at which VACUUM should scan whole table to freeze tuples.
wal_block_size	8192	Shows the block size in the write ahead log.
wal_buffers	16MB	Sets the number of disk-page buffers in shared memory for WAL.
wal_compression	off	Compresses full-page writes written in WAL file.
wal_consistency_checking		Sets the WAL resource managers for which WAL consistency checks are done.
wal_init_zero	on	Writes zeroes to new WAL files before first use.
wal_keep_segments	0	Sets the number of WAL files held for standby servers.
wal_level	replica	Set the level of information written to the WAL.
wal_log_hints	off	Writes full pages to WAL when first modified after a checkpoint, even for a non-critical modifications.
wal_receiver_status_interval	10s	Sets the maximum interval between WAL receiver status reports to the sending server.
wal_receiver_timeout	1min	Sets the maximum wait time to receive data from the sending server.
wal_recycle	on	Recycles WAL files by renaming them.
wal_retrieve_retry_interval	5s	Sets the time to wait before retrying to retrieve WAL after a failed attempt.
wal_segment_size	16MB	Shows the size of write ahead log segments.
wal_sender_timeout	1min	Sets the maximum time to wait for WAL replication.
wal_sync_method	open_datasync	Selects the method used for forcing WAL updates to disk.
wal_writer_delay	200ms	Time between WAL flushes performed in the WAL writer.
wal_writer_flush_after	1MB	Amount of WAL written out by WAL writer that triggers a flush.
work_mem	256MB	Sets the maximum memory to be used for query workspaces.
xmlbinary	base64	Sets how binary values are to be encoded in XML.
xmloption	content	Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments.
zero_damaged_pages	off	Continues processing past damaged page headers.

Dear All,

We use (a somewhat old version of) Liquibase to implement changes in our databases. We also use Liquibase scripts to keep track of database migration (mostly schema, but a little bit of data too). At some point we cleaned up all our primary indexes as well as constraints and implemented them as Liquibase scripts (i.e., recreated them).
For that purpose Liquibase usually fires a query like this at Postgres:

SELECT
	FK.TABLE_NAME as "TABLE_NAME"
	, CU.COLUMN_NAME as "COLUMN_NAME"
	, PK.TABLE_NAME as "REFERENCED_TABLE_NAME"
	, PT.COLUMN_NAME as "REFERENCED_COLUMN_NAME"
	, C.CONSTRAINT_NAME as "CONSTRAINT_NAME"
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
INNER JOIN (
	SELECT
		i1.TABLE_NAME
		, i2.COLUMN_NAME
	FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
	INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
	WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
) PT ON PT.TABLE_NAME = PK.TABLE_NAME
WHERE lower(FK.TABLE_NAME) = 'secrole_condcollection'

Postgres decides to use a hash join (see the query plan below) and spits out the 2 rows some 20 seconds later. It does not matter whether one sets random_page_cost to 2, 1.5, or 1.0 (or even 0.09, which does not make any sense): one still waits 20 seconds, and a hash join is still used to answer this query. If one switches hash joins off (set enable_hashjoin = false;), it takes 0.1 seconds to produce the same two rows.

The views in information_schema are tiny:

select 'REFERENTIAL_CONSTRAINTS', count(1) from INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
union all
select 'TABLE_CONSTRAINTS', count(1) from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
union all
select 'KEY_COLUMN_USAGE', count(1) from INFORMATION_SCHEMA.KEY_COLUMN_USAGE
union all
select 'TABLE_CONSTRAINTS', count(1) from INFORMATION_SCHEMA.TABLE_CONSTRAINTS

REFERENTIAL_CONSTRAINTS    1079
TABLE_CONSTRAINTS          4359
KEY_COLUMN_USAGE           1999
TABLE_CONSTRAINTS          4359

The whole schema eats up about 300 kB of space:

SELECT pg_size_pretty(sum(pg_total_relation_size(C.oid))) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname = 'information_schema'

-- 344 kB

Any clues how I could "save the face of the hash joins"?

Cheers,
Arturas

Query plan with hash join (please note that random_page_cost is overwritten there):

set enable_hashjoin = 1;

SELECT
	FK.TABLE_NAME as "TABLE_NAME"
	, CU.COLUMN_NAME as "COLUMN_NAME"
	, PK.TABLE_NAME as "REFERENCED_TABLE_NAME"
	, PT.COLUMN_NAME as "REFERENCED_COLUMN_NAME"
	, C.CONSTRAINT_NAME as "CONSTRAINT_NAME"
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
INNER JOIN (
	SELECT
		i1.TABLE_NAME
		, i2.COLUMN_NAME
	FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
	INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
	WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
) PT ON PT.TABLE_NAME = PK.TABLE_NAME
WHERE lower(FK.TABLE_NAME) = 'secrole_condcollection'

Nested Loop (cost=2174.36..13670.47 rows=1 width=320) (actual time=5499.728..26310.137 rows=2 loops=1)
 Output: "*SELECT* 1".table_name, (a.attname)::information_schema.sql_identifier, "*SELECT* 1_1".table_name, (a_1.attname)::information_schema.sql_identifier,
(con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=1961035\n -> Nested Loop (cost=2174.07..13670.12 rows=1 width=296) (actual time=5499.716..26310.115 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, a.attname, r.oid, (information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Inner Unique: true\n Buffers: shared hit=1961029\n -> Nested Loop (cost=2173.78..13669.78 rows=1 width=272) (actual time=5499.689..26310.066 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, r_2.oid, (information_schema._pg_expandarray(c_3.conkey)), r_2.relowner, r.oid, (information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Join Filter: ((\"*SELECT* 1_2\".table_name)::name = (\"*SELECT* 1_1\".table_name)::name)\n Rows Removed by Join Filter: 1670\n Buffers: shared hit=1961023\n -> Hash Join (cost=497.90..5313.80 rows=1 width=104) (actual time=7.586..29.643 rows=836 loops=1)\n Output: \"*SELECT* 1_2\".table_name, r.oid, (information_schema._pg_expandarray(c_1.conkey)), r.relowner\n Hash Cond: (c_1.conname = (\"*SELECT* 1_2\".constraint_name)::name)\n Buffers: shared hit=3355\n -> ProjectSet (cost=324.56..1716.71 rows=249000 width=341) (actual time=1.385..21.087 rows=1983 loops=1)\n Output: r.oid, NULL::name, r.relowner, NULL::name, NULL::name, NULL::oid, c_1.conname, NULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid, information_schema._pg_expandarray(c_1.conkey)\n Buffers: shared hit=328\n -> Hash Join (cost=324.56..408.21 rows=249 width=95) (actual time=1.246..6.050 rows=1707 loops=1)\n Output: c_1.conkey, r.oid, r.relowner, c_1.conname\n Inner Unique: true\n Hash Cond: (c_1.connamespace = nc.oid)\n Buffers: shared hit=328\n -> Hash Join (cost=323.42..405.96 rows=249 width=99) (actual time=1.226..4.977 rows=1707 loops=1)\n Output: r.oid, r.relowner, c_1.conname, c_1.conkey, c_1.connamespace\n Inner Unique: true\n Hash Cond: (r.relnamespace = nr.oid)\n Buffers: shared hit=327\n -> Hash Join (cost=322.30..403.16 rows=374 width=103) (actual time=1.209..3.807 rows=1707 loops=1)\n Output: r.oid, r.relowner, r.relnamespace, c_1.conname, c_1.conkey, c_1.connamespace\n Inner Unique: true\n Hash Cond: (c_1.conrelid = r.oid)\n Buffers: shared hit=326\n -> Seq Scan on pg_catalog.pg_constraint c_1 (cost=0.00..76.23 rows=1760 width=95) (actual time=0.006..0.894 rows=1707 loops=1)\n Output: c_1.oid, c_1.conname, c_1.connamespace, c_1.contype, c_1.condeferrable, c_1.condeferred, c_1.convalidated, c_1.conrelid, c_1.contypid, c_1.conindid, c_1.conparentid, c_1.confrelid, c_1.confupdtype, c_1.confdeltype, c_1.confmatchtype, c_1.conislocal, c_1.coninhcount, c_1.connoinherit, c_1.conkey, c_1.confkey, c_1.conpfeqop, c_1.conppeqop, c_1.conffeqop, c_1.conexclop, c_1.conbin\n Filter: (c_1.contype = ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=52\n -> Hash (cost=313.84..313.84 rows=677 width=12) (actual time=1.135..1.136 rows=694 loops=1)\n Output: r.oid, r.relowner, r.relnamespace\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r (cost=0.00..313.84 rows=677 width=12) (actual time=0.009..1.024 rows=694 loops=1)\n Output: r.oid, r.relowner, r.relnamespace\n Filter: (r.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07 rows=4 width=4) (actual time=0.009..0.009 rows=7 loops=1)\n Output: nr.oid\n Buckets: 1024 Batches: 1 Memory Usage: 
9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr (cost=0.00..1.07 rows=4 width=4) (actual time=0.004..0.006 rows=7 loops=1)\n Output: nr.oid\n Filter: (NOT pg_is_other_temp_schema(nr.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06 rows=6 width=4) (actual time=0.008..0.009 rows=9 loops=1)\n Output: nc.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc (cost=0.00..1.06 rows=6 width=4) (actual time=0.003..0.004 rows=9 loops=1)\n Output: nc.oid\n Buffers: shared hit=1\n -> Hash (cost=173.32..173.32 rows=1 width=128) (actual time=6.192..6.196 rows=595 loops=1)\n Output: \"*SELECT* 1_2\".constraint_name, \"*SELECT* 1_2\".table_name\n Buckets: 1024 Batches: 1 Memory Usage: 101kB\n Buffers: shared hit=3027\n -> Subquery Scan on \"*SELECT* 1_2\" (cost=0.28..173.32 rows=1 width=128) (actual time=0.041..5.955 rows=595 loops=1)\n Output: \"*SELECT* 1_2\".constraint_name, \"*SELECT* 1_2\".table_name\n Buffers: shared hit=3027\n -> Nested Loop (cost=0.28..173.31 rows=1 width=512) (actual time=0.040..5.849 rows=595 loops=1)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (c_2.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_1.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (r_1.relnamespace = nr_1.oid)\n Rows Removed by Join Filter: 1836\n Buffers: shared hit=3027\n -> Nested Loop (cost=0.28..172.19 rows=1 width=132) (actual time=0.033..3.736 rows=595 loops=1)\n Output: c_2.conname, r_1.relname, r_1.relnamespace\n Inner Unique: true\n Join Filter: (c_2.connamespace = nc_1.oid)\n Rows Removed by Join Filter: 3026\n Buffers: shared hit=2432\n -> Nested Loop (cost=0.28..171.05 rows=1 width=136) (actual time=0.027..1.913 rows=595 loops=1)\n Output: c_2.conname, c_2.connamespace, r_1.relname, r_1.relnamespace\n Inner Unique: true\n Buffers: shared hit=1837\n -> Seq Scan on pg_catalog.pg_constraint c_2 (cost=0.00..96.05 rows=9 width=72) (actual time=0.012..0.508 rows=595 loops=1)\n Output: c_2.oid, c_2.conname, c_2.connamespace, c_2.contype, c_2.condeferrable, c_2.condeferred, c_2.convalidated, c_2.conrelid, c_2.contypid, c_2.conindid, c_2.conparentid, c_2.confrelid, c_2.confupdtype, c_2.confdeltype, c_2.confmatchtype, c_2.conislocal, c_2.coninhcount, c_2.connoinherit, c_2.conkey, c_2.confkey, c_2.conpfeqop, c_2.conppeqop, c_2.conffeqop, c_2.conexclop, c_2.conbin\n Filter: ((c_2.contype <> ALL ('{t,x}'::\"char\"[])) AND ((CASE c_2.contype WHEN 'c'::\"char\" THEN 'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN KEY'::text WHEN 'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\" THEN 'UNIQUE'::text ELSE NULL::text END)::text = 'PRIMARY KEY'::text))\n Rows Removed by Filter: 1114\n Buffers: shared hit=52\n -> Index Scan using pg_class_oid_index on pg_catalog.pg_class r_1 (cost=0.28..8.33 rows=1 width=72) (actual time=0.002..0.002 rows=1 loops=595)\n Output: r_1.oid, r_1.relname, r_1.relnamespace, r_1.reltype, r_1.reloftype, r_1.relowner, r_1.relam, r_1.relfilenode, r_1.reltablespace, r_1.relpages, r_1.reltuples, r_1.relallvisible, r_1.reltoastrelid, r_1.relhasindex, r_1.relisshared, r_1.relpersistence, r_1.relkind, r_1.relnatts, r_1.relchecks, r_1.relhasrules, r_1.relhastriggers, 
r_1.relhassubclass, r_1.relrowsecurity, r_1.relforcerowsecurity, r_1.relispopulated, r_1.relreplident, r_1.relispartition, r_1.relrewrite, r_1.relfrozenxid, r_1.relminmxid, r_1.relacl, r_1.reloptions, r_1.relpartbound\n Index Cond: (r_1.oid = c_2.conrelid)\n Filter: ((r_1.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_1.relowner, 'USAGE'::text) OR has_table_privilege(r_1.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_1.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=1785\n -> Seq Scan on pg_catalog.pg_namespace nc_1 (cost=0.00..1.06 rows=6 width=4) (actual time=0.000..0.001 rows=6 loops=595)\n Output: nc_1.oid, nc_1.nspname, nc_1.nspowner, nc_1.nspacl\n Buffers: shared hit=595\n -> Seq Scan on pg_catalog.pg_namespace nr_1 (cost=0.00..1.07 rows=4 width=4) (actual time=0.001..0.001 rows=4 loops=595)\n Output: nr_1.oid, nr_1.nspname, nr_1.nspowner, nr_1.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_1.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=595\n -> Nested Loop (cost=1675.88..8355.96 rows=1 width=232) (actual time=9.154..31.424 rows=2 loops=836)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, r_2.oid, (information_schema._pg_expandarray(c_3.conkey)), r_2.relowner\n Join Filter: (pkc.conname = (\"*SELECT* 1_1\".constraint_name)::name)\n Rows Removed by Join Filter: 8572\n Buffers: shared hit=1957668\n -> Hash Join (cost=1258.23..6074.13 rows=1 width=232) (actual time=8.894..11.130 rows=2 loops=836)\n Output: con.conname, pkc.conname, \"*SELECT* 1\".table_name, r_2.oid, (information_schema._pg_expandarray(c_3.conkey)), r_2.relowner\n Hash Cond: (c_3.conname = con.conname)\n Buffers: shared hit=44349\n -> ProjectSet (cost=324.56..1716.71 rows=249000 width=341) (actual time=0.013..10.797 rows=1983 loops=836)\n Output: r_2.oid, NULL::name, r_2.relowner, NULL::name, NULL::name, NULL::oid, c_3.conname, NULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid, information_schema._pg_expandarray(c_3.conkey)\n Buffers: shared hit=43748\n -> Hash Join (cost=324.56..408.21 rows=249 width=95) (actual time=0.007..2.055 rows=1707 loops=836)\n Output: c_3.conkey, r_2.oid, r_2.relowner, c_3.conname\n Inner Unique: true\n Hash Cond: (c_3.connamespace = nc_2.oid)\n Buffers: shared hit=43748\n -> Hash Join (cost=323.42..405.96 rows=249 width=99) (actual time=0.006..1.624 rows=1707 loops=836)\n Output: r_2.oid, r_2.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Inner Unique: true\n Hash Cond: (r_2.relnamespace = nr_2.oid)\n Buffers: shared hit=43747\n -> Hash Join (cost=322.30..403.16 rows=374 width=103) (actual time=0.006..1.224 rows=1707 loops=836)\n Output: r_2.oid, r_2.relowner, r_2.relnamespace, c_3.conname, c_3.conkey, c_3.connamespace\n Inner Unique: true\n Hash Cond: (c_3.conrelid = r_2.oid)\n Buffers: shared hit=43746\n -> Seq Scan on pg_catalog.pg_constraint c_3 (cost=0.00..76.23 rows=1760 width=95) (actual time=0.004..0.511 rows=1707 loops=836)\n Output: c_3.oid, c_3.conname, c_3.connamespace, c_3.contype, c_3.condeferrable, c_3.condeferred, c_3.convalidated, c_3.conrelid, c_3.contypid, c_3.conindid, c_3.conparentid, c_3.confrelid, c_3.confupdtype, c_3.confdeltype, c_3.confmatchtype, c_3.conislocal, c_3.coninhcount, c_3.connoinherit, c_3.conkey, c_3.confkey, c_3.conpfeqop, c_3.conppeqop, c_3.conffeqop, c_3.conexclop, c_3.conbin\n Filter: (c_3.contype = ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=43472\n -> Hash (cost=313.84..313.84 
rows=677 width=12) (actual time=0.988..0.989 rows=694 loops=1)\n Output: r_2.oid, r_2.relowner, r_2.relnamespace\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_2 (cost=0.00..313.84 rows=677 width=12) (actual time=0.006..0.875 rows=694 loops=1)\n Output: r_2.oid, r_2.relowner, r_2.relnamespace\n Filter: (r_2.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07 rows=4 width=4) (actual time=0.009..0.010 rows=7 loops=1)\n Output: nr_2.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_2 (cost=0.00..1.07 rows=4 width=4) (actual time=0.004..0.007 rows=7 loops=1)\n Output: nr_2.oid\n Filter: (NOT pg_is_other_temp_schema(nr_2.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06 rows=6 width=4) (actual time=0.012..0.013 rows=9 loops=1)\n Output: nc_2.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc_2 (cost=0.00..1.06 rows=6 width=4) (actual time=0.007..0.009 rows=9 loops=1)\n Output: nc_2.oid\n Buffers: shared hit=1\n -> Hash (cost=933.65..933.65 rows=1 width=256) (actual time=2.158..2.170 rows=2 loops=1)\n Output: con.conname, pkc.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=601\n -> Nested Loop (cost=5.71..933.65 rows=1 width=256) (actual time=1.185..2.163 rows=2 loops=1)\n Output: con.conname, pkc.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (d2.refobjid = pkc.oid)\n Buffers: shared hit=601\n -> Nested Loop (cost=5.43..933.00 rows=1 width=200) (actual time=1.174..2.146 rows=2 loops=1)\n Output: con.conname, con.confrelid, d2.refobjid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=593\n -> Nested Loop (cost=5.15..931.14 rows=1 width=200) (actual time=1.163..2.129 rows=2 loops=1)\n Output: con.conname, con.confrelid, d1.refobjid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=587\n -> Nested Loop (cost=4.86..929.16 rows=1 width=200) (actual time=1.147..2.108 rows=2 loops=1)\n Output: con.conname, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (con.connamespace = ncon.oid)\n Rows Removed by Join Filter: 10\n Buffers: shared hit=581\n -> Nested Loop (cost=4.86..928.02 rows=1 width=204) (actual time=1.143..2.100 rows=2 loops=1)\n Output: con.conname, con.connamespace, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Buffers: shared hit=579\n -> Nested Loop (cost=4.58..925.06 rows=2 width=208) (actual time=1.129..2.082 rows=2 loops=1)\n Output: con.conname, con.connamespace, con.conrelid, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=573\n -> Append (cost=4.30..900.14 rows=3 width=128) (actual time=1.105..2.056 rows=5 loops=1)\n Buffers: shared hit=560\n -> Subquery Scan on \"*SELECT* 1\" (cost=4.30..449.91 rows=1 width=128) (actual time=1.104..1.121 rows=3 loops=1)\n Output: \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=282\n -> Nested Loop (cost=4.30..449.90 rows=1 width=512) (actual time=1.103..1.119 rows=3 loops=1)\n Output: NULL::information_schema.sql_identifier, 
NULL::information_schema.sql_identifier, (c_4.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_3.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (c_4.connamespace = nc_3.oid)\n Rows Removed by Join Filter: 15\n Buffers: shared hit=282\n -> Nested Loop (cost=4.30..448.76 rows=1 width=132) (actual time=1.096..1.104 rows=3 loops=1)\n Output: r_3.relname, c_4.conname, c_4.connamespace\n Buffers: shared hit=279\n -> Nested Loop (cost=0.00..434.55 rows=1 width=68) (actual time=1.062..1.066 rows=1 loops=1)\n Output: r_3.relname, r_3.oid\n Join Filter: (nr_3.oid = r_3.relnamespace)\n Rows Removed by Join Filter: 6\n Buffers: shared hit=275\n -> Seq Scan on pg_catalog.pg_namespace nr_3 (cost=0.00..1.07 rows=4 width=4) (actual time=0.009..0.015 rows=7 loops=1)\n Output: nr_3.oid, nr_3.nspname, nr_3.nspowner, nr_3.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_3.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Materialize (cost=0.00..433.36 rows=2 width=72) (actual time=0.004..0.149 rows=1 loops=7)\n Output: r_3.relname, r_3.relnamespace, r_3.oid\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_3 (cost=0.00..433.35 rows=2 width=72) (actual time=0.026..1.039 rows=1 loops=1)\n Output: r_3.relname, r_3.relnamespace, r_3.oid\n Filter: ((r_3.relkind = ANY ('{r,p}'::\"char\"[])) AND (lower(((r_3.relname)::information_schema.sql_identifier)::text) = 'secrole_condcollection'::text) AND (pg_has_role(r_3.relowner, 'USAGE'::text) OR has_table_privilege(r_3.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_3.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 3252\n Buffers: shared hit=274\n -> Bitmap Heap Scan on pg_catalog.pg_constraint c_4 (cost=4.30..14.18 rows=3 width=72) (actual time=0.026..0.029 rows=3 loops=1)\n Output: c_4.oid, c_4.conname, c_4.connamespace, c_4.contype, c_4.condeferrable, c_4.condeferred, c_4.convalidated, c_4.conrelid, c_4.contypid, c_4.conindid, c_4.conparentid, c_4.confrelid, c_4.confupdtype, c_4.confdeltype, c_4.confmatchtype, c_4.conislocal, c_4.coninhcount, c_4.connoinherit, c_4.conkey, c_4.confkey, c_4.conpfeqop, c_4.conppeqop, c_4.conffeqop, c_4.conexclop, c_4.conbin\n Recheck Cond: (c_4.conrelid = r_3.oid)\n Filter: (c_4.contype <> ALL ('{t,x}'::\"char\"[]))\n Heap Blocks: exact=2\n Buffers: shared hit=4\n -> Bitmap Index Scan on pg_constraint_conrelid_contypid_conname_index (cost=0.00..4.30 rows=3 width=0) (actual time=0.020..0.020 rows=3 loops=1)\n Index Cond: (c_4.conrelid = r_3.oid)\n Buffers: shared hit=2\n -> Seq Scan on pg_catalog.pg_namespace nc_3 (cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.002 rows=6 loops=3)\n Output: nc_3.oid, nc_3.nspname, nc_3.nspowner, nc_3.nspacl\n Buffers: shared hit=3\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.29..450.21 rows=2 width=128) (actual time=0.924..0.931 rows=2 loops=1)\n Output: \"*SELECT* 2\".table_name, \"*SELECT* 2\".constraint_name\n Buffers: shared hit=278\n -> Nested Loop (cost=0.29..450.19 rows=2 width=512) (actual time=0.923..0.929 rows=2 loops=1)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (((((((nr_4.oid)::text || '_'::text) || (r_4.oid)::text) || '_'::text) || (a_2.attnum)::text) || 
'_not_null'::text))::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_4.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Buffers: shared hit=278\n -> Nested Loop (cost=0.00..434.55 rows=1 width=72) (actual time=0.904..0.907 rows=1 loops=1)\n Output: nr_4.oid, r_4.oid, r_4.relname\n Join Filter: (nr_4.oid = r_4.relnamespace)\n Rows Removed by Join Filter: 6\n Buffers: shared hit=275\n -> Seq Scan on pg_catalog.pg_namespace nr_4 (cost=0.00..1.07 rows=4 width=4) (actual time=0.004..0.007 rows=7 loops=1)\n Output: nr_4.oid, nr_4.nspname, nr_4.nspowner, nr_4.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_4.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Materialize (cost=0.00..433.36 rows=2 width=72) (actual time=0.004..0.128 rows=1 loops=7)\n Output: r_4.oid, r_4.relname, r_4.relnamespace\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_4 (cost=0.00..433.35 rows=2 width=72) (actual time=0.021..0.893 rows=1 loops=1)\n Output: r_4.oid, r_4.relname, r_4.relnamespace\n Filter: ((r_4.relkind = ANY ('{r,p}'::\"char\"[])) AND (lower(((r_4.relname)::information_schema.sql_identifier)::text) = 'secrole_condcollection'::text) AND (pg_has_role(r_4.relowner, 'USAGE'::text) OR has_table_privilege(r_4.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_4.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 3252\n Buffers: shared hit=274\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a_2 (cost=0.29..15.56 rows=2 width=6) (actual time=0.014..0.015 rows=2 loops=1)\n Output: a_2.attrelid, a_2.attname, a_2.atttypid, a_2.attstattarget, a_2.attlen, a_2.attnum, a_2.attndims, a_2.attcacheoff, a_2.atttypmod, a_2.attbyval, a_2.attstorage, a_2.attalign, a_2.attnotnull, a_2.atthasdef, a_2.atthasmissing, a_2.attidentity, a_2.attgenerated, a_2.attisdropped, a_2.attislocal, a_2.attinhcount, a_2.attcollation, a_2.attacl, a_2.attoptions, a_2.attfdwoptions, a_2.attmissingval\n Index Cond: ((a_2.attrelid = r_4.oid) AND (a_2.attnum > 0))\n Filter: (a_2.attnotnull AND (NOT a_2.attisdropped))\n Buffers: shared hit=3\n -> Index Scan using pg_constraint_conname_nsp_index on pg_catalog.pg_constraint con (cost=0.28..8.30 rows=1 width=80) (actual time=0.004..0.004 rows=0 loops=5)\n Output: con.oid, con.conname, con.connamespace, con.contype, con.condeferrable, con.condeferred, con.convalidated, con.conrelid, con.contypid, con.conindid, con.conparentid, con.confrelid, con.confupdtype, con.confdeltype, con.confmatchtype, con.conislocal, con.coninhcount, con.connoinherit, con.conkey, con.confkey, con.conpfeqop, con.conppeqop, con.conffeqop, con.conexclop, con.conbin\n Index Cond: (con.conname = (\"*SELECT* 1\".constraint_name)::name)\n Filter: (con.contype = 'f'::\"char\")\n Rows Removed by Filter: 0\n Buffers: shared hit=13\n -> Index Scan using pg_class_oid_index on pg_catalog.pg_class c (cost=0.28..1.48 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=2)\n Output: c.oid, c.relname, c.relnamespace, c.reltype, c.reloftype, c.relowner, c.relam, c.relfilenode, c.reltablespace, c.relpages, c.reltuples, c.relallvisible, c.reltoastrelid, c.relhasindex, c.relisshared, c.relpersistence, c.relkind, c.relnatts, c.relchecks, c.relhasrules, c.relhastriggers, c.relhassubclass, c.relrowsecurity, 
c.relforcerowsecurity, c.relispopulated, c.relreplident, c.relispartition, c.relrewrite, c.relfrozenxid, c.relminmxid, c.relacl, c.reloptions, c.relpartbound\n Index Cond: (c.oid = con.conrelid)\n Filter: (pg_has_role(c.relowner, 'USAGE'::text) OR has_table_privilege(c.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid, 'INSERT, UPDATE, REFERENCES'::text))\n Buffers: shared hit=6\n -> Seq Scan on pg_catalog.pg_namespace ncon (cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.002 rows=6 loops=2)\n Output: ncon.oid, ncon.nspname, ncon.nspowner, ncon.nspacl\n Buffers: shared hit=2\n -> Index Scan using pg_depend_depender_index on pg_catalog.pg_depend d1 (cost=0.29..1.97 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=2)\n Output: d1.classid, d1.objid, d1.objsubid, d1.refclassid, d1.refobjid, d1.refobjsubid, d1.deptype\n Index Cond: ((d1.classid = '2606'::oid) AND (d1.objid = con.oid))\n Filter: ((d1.refclassid = '1259'::oid) AND (d1.refobjsubid = 0))\n Rows Removed by Filter: 2\n Buffers: shared hit=6\n -> Index Scan using pg_depend_depender_index on pg_catalog.pg_depend d2 (cost=0.29..1.85 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=2)\n Output: d2.classid, d2.objid, d2.objsubid, d2.refclassid, d2.refobjid, d2.refobjsubid, d2.deptype\n Index Cond: ((d2.classid = '1259'::oid) AND (d2.objid = d1.refobjid) AND (d2.objsubid = 0))\n Filter: ((d2.refclassid = '2606'::oid) AND (d2.deptype = 'i'::\"char\"))\n Buffers: shared hit=6\n -> Index Scan using pg_constraint_conrelid_contypid_conname_index on pg_catalog.pg_constraint pkc (cost=0.28..0.64 rows=1 width=76) (actual time=0.007..0.007 rows=1 loops=2)\n Output: pkc.oid, pkc.conname, pkc.connamespace, pkc.contype, pkc.condeferrable, pkc.condeferred, pkc.convalidated, pkc.conrelid, pkc.contypid, pkc.conindid, pkc.conparentid, pkc.confrelid, pkc.confupdtype, pkc.confdeltype, pkc.confmatchtype, pkc.conislocal, pkc.coninhcount, pkc.connoinherit, pkc.conkey, pkc.confkey, pkc.conpfeqop, pkc.conppeqop, pkc.conffeqop, pkc.conexclop, pkc.conbin\n Index Cond: (pkc.conrelid = con.confrelid)\n Filter: (pkc.contype = ANY ('{p,u}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=8\n -> Append (cost=417.66..2272.67 rows=733 width=128) (actual time=0.011..9.830 rows=4287 loops=1672)\n Buffers: shared hit=1913319\n -> Subquery Scan on \"*SELECT* 1_1\" (cost=417.66..500.03 rows=175 width=128) (actual time=0.010..1.720 rows=1707 loops=1672)\n Output: \"*SELECT* 1_1\".table_name, \"*SELECT* 1_1\".constraint_name\n Buffers: shared hit=87220\n -> Hash Join (cost=417.66..498.28 rows=175 width=512) (actual time=0.010..1.584 rows=1707 loops=1672)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (c_5.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_5.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Hash Cond: (c_5.connamespace = nc_4.oid)\n Buffers: shared hit=87220\n -> Hash Join (cost=416.52..496.36 rows=175 width=132) (actual time=0.008..1.190 rows=1707 loops=1672)\n Output: r_5.relname, c_5.conname, c_5.connamespace\n Inner Unique: true\n Hash Cond: (r_5.relnamespace = nr_5.oid)\n Buffers: shared hit=87219\n -> Hash Join (cost=415.40..494.06 rows=263 width=136) (actual time=0.007..0.869 rows=1707 
loops=1672)\n Output: c_5.conname, c_5.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Hash Cond: (c_5.conrelid = r_5.oid)\n Buffers: shared hit=87218\n -> Seq Scan on pg_catalog.pg_constraint c_5 (cost=0.00..74.03 rows=1762 width=72) (actual time=0.004..0.379 rows=1709 loops=1672)\n Output: c_5.oid, c_5.conname, c_5.connamespace, c_5.contype, c_5.condeferrable, c_5.condeferred, c_5.convalidated, c_5.conrelid, c_5.contypid, c_5.conindid, c_5.conparentid, c_5.confrelid, c_5.confupdtype, c_5.confdeltype, c_5.confmatchtype, c_5.conislocal, c_5.coninhcount, c_5.connoinherit, c_5.conkey, c_5.confkey, c_5.conpfeqop, c_5.conppeqop, c_5.conffeqop, c_5.conexclop, c_5.conbin\n Filter: (c_5.contype <> ALL ('{t,x}'::\"char\"[]))\n Buffers: shared hit=86944\n -> Hash (cost=409.45..409.45 rows=476 width=72) (actual time=1.244..1.245 rows=694 loops=1)\n Output: r_5.relname, r_5.relnamespace, r_5.oid\n Buckets: 1024 Batches: 1 Memory Usage: 79kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_5 (cost=0.00..409.45 rows=476 width=72) (actual time=0.011..1.118 rows=694 loops=1)\n Output: r_5.relname, r_5.relnamespace, r_5.oid\n Filter: ((r_5.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_5.relowner, 'USAGE'::text) OR has_table_privilege(r_5.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_5.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07 rows=4 width=4) (actual time=0.019..0.019 rows=7 loops=1)\n Output: nr_5.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_5 (cost=0.00..1.07 rows=4 width=4) (actual time=0.004..0.008 rows=7 loops=1)\n Output: nr_5.oid\n Filter: (NOT pg_is_other_temp_schema(nr_5.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Hash (cost=1.06..1.06 rows=6 width=4) (actual time=0.015..0.016 rows=9 loops=1)\n Output: nc_4.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc_4 (cost=0.00..1.06 rows=6 width=4) (actual time=0.010..0.011 rows=9 loops=1)\n Output: nc_4.oid\n Buffers: shared hit=1\n -> Subquery Scan on \"*SELECT* 2_1\" (cost=416.52..1768.97 rows=558 width=128) (actual time=0.010..7.839 rows=2580 loops=1672)\n Output: \"*SELECT* 2_1\".table_name, \"*SELECT* 2_1\".constraint_name\n Buffers: shared hit=1826099\n -> Hash Join (cost=416.52..1763.39 rows=558 width=512) (actual time=0.009..7.622 rows=2580 loops=1672)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (((((((nr_6.oid)::text || '_'::text) || (r_6.oid)::text) || '_'::text) || (a_3.attnum)::text) || '_not_null'::text))::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_6.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Hash Cond: (r_6.relnamespace = nr_6.oid)\n Buffers: shared hit=1826099\n -> Hash Join (cost=415.40..1741.77 rows=837 width=74) (actual time=0.004..5.410 rows=2580 loops=1672)\n Output: r_6.oid, r_6.relname, r_6.relnamespace, a_3.attnum\n Inner Unique: true\n Hash Cond: (a_3.attrelid = r_6.oid)\n Buffers: shared hit=1826098\n -> Seq Scan on pg_catalog.pg_attribute a_3 (cost=0.00..1311.64 rows=5606 width=6) (actual 
time=0.002..4.792 rows=2598 loops=1672)\n Output: a_3.attrelid, a_3.attname, a_3.atttypid, a_3.attstattarget, a_3.attlen, a_3.attnum, a_3.attndims, a_3.attcacheoff, a_3.atttypmod, a_3.attbyval, a_3.attstorage, a_3.attalign, a_3.attnotnull, a_3.atthasdef, a_3.atthasmissing, a_3.attidentity, a_3.attgenerated, a_3.attisdropped, a_3.attislocal, a_3.attinhcount, a_3.attcollation, a_3.attacl, a_3.attoptions, a_3.attfdwoptions, a_3.attmissingval\n Filter: (a_3.attnotnull AND (NOT a_3.attisdropped) AND (a_3.attnum > 0))\n Rows Removed by Filter: 15396\n Buffers: shared hit=1825824\n -> Hash (cost=409.45..409.45 rows=476 width=72) (actual time=1.227..1.227 rows=694 loops=1)\n Output: r_6.oid, r_6.relname, r_6.relnamespace\n Buckets: 1024 Batches: 1 Memory Usage: 79kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_6 (cost=0.00..409.45 rows=476 width=72) (actual time=0.011..1.087 rows=694 loops=1)\n Output: r_6.oid, r_6.relname, r_6.relnamespace\n Filter: ((r_6.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_6.relowner, 'USAGE'::text) OR has_table_privilege(r_6.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_6.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Hash (cost=1.07..1.07 rows=4 width=4) (actual time=0.015..0.015 rows=7 loops=1)\n Output: nr_6.oid\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_6 (cost=0.00..1.07 rows=4 width=4) (actual time=0.008..0.011 rows=7 loops=1)\n Output: nr_6.oid\n Filter: (NOT pg_is_other_temp_schema(nr_6.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a (cost=0.29..0.33 rows=1 width=70) (actual time=0.019..0.019 rows=1 loops=2)\n Output: a.attrelid, a.attname, a.atttypid, a.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff, a.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull, a.atthasdef, a.atthasmissing, a.attidentity, a.attgenerated, a.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl, a.attoptions, a.attfdwoptions, a.attmissingval\n Index Cond: ((a.attrelid = r_2.oid) AND (a.attnum = ((information_schema._pg_expandarray(c_3.conkey))).x))\n Filter: ((NOT a.attisdropped) AND (pg_has_role(r_2.relowner, 'USAGE'::text) OR has_column_privilege(r_2.oid, a.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a_1 (cost=0.29..0.33 rows=1 width=70) (actual time=0.007..0.007 rows=1 loops=2)\n Output: a_1.attrelid, a_1.attname, a_1.atttypid, a_1.attstattarget, a_1.attlen, a_1.attnum, a_1.attndims, a_1.attcacheoff, a_1.atttypmod, a_1.attbyval, a_1.attstorage, a_1.attalign, a_1.attnotnull, a_1.atthasdef, a_1.atthasmissing, a_1.attidentity, a_1.attgenerated, a_1.attisdropped, a_1.attislocal, a_1.attinhcount, a_1.attcollation, a_1.attacl, a_1.attoptions, a_1.attfdwoptions, a_1.attmissingval\n Index Cond: ((a_1.attrelid = r.oid) AND (a_1.attnum = ((information_schema._pg_expandarray(c_1.conkey))).x))\n Filter: ((NOT a_1.attisdropped) AND (pg_has_role(r.relowner, 'USAGE'::text) OR has_column_privilege(r.oid, a_1.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\nPlanning Time: 8.688 ms\nExecution Time: 26311.005 ms\nindex scan\nset enable_hashjoin = 0;\n\nSELECT \n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as 
\"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\" \nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C \nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME \nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME \nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME \nINNER JOIN ( \n\tSELECT \n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1 \n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY' \n) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE \nlower(FK.TABLE_NAME)='secrole_condcollection'\n\nNested Loop (cost=1736.10..18890.44 rows=1 width=320) (actual time=30.780..79.572 rows=2 loops=1)\n Output: \"*SELECT* 1\".table_name, (a.attname)::information_schema.sql_identifier, \"*SELECT* 1_1\".table_name, (a_1.attname)::information_schema.sql_identifier, (con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=9018\n -> Nested Loop (cost=1735.81..18890.10 rows=1 width=296) (actual time=30.752..79.531 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, a.attname, r_6.oid, (information_schema._pg_expandarray(c_5.conkey)), r_6.relowner\n Join Filter: ((\"*SELECT* 1_2\".constraint_name)::name = c_5.conname)\n Rows Removed by Join Filter: 3964\n Buffers: shared hit=9012\n -> Nested Loop (cost=1170.86..11411.63 rows=1 width=320) (actual time=18.709..57.524 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, a.attname, \"*SELECT* 1_2\".constraint_name\n Join Filter: ((\"*SELECT* 1_1\".table_name)::name = (\"*SELECT* 1_2\".table_name)::name)\n Rows Removed by Join Filter: 1188\n Buffers: shared hit=8684\n -> Nested Loop (cost=1170.58..11238.29 rows=1 width=256) (actual time=16.937..45.450 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, a.attname\n Inner Unique: true\n Buffers: shared hit=2630\n -> Nested Loop (cost=1170.30..11237.95 rows=1 width=232) (actual time=16.909..45.398 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1_1\".table_name, r_4.oid, (information_schema._pg_expandarray(c_3.conkey)), r_4.relowner\n Join Filter: (con.conname = c_3.conname)\n Rows Removed by Join Filter: 3964\n Buffers: shared hit=2624\n -> Nested Loop (cost=605.35..3759.48 rows=1 width=256) (actual time=5.769..23.698 rows=2 loops=1)\n Output: con.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name, \"*SELECT* 1_1\".table_name\n Join Filter: (pkc.conname = (\"*SELECT* 1_1\".constraint_name)::name)\n Rows Removed by Join Filter: 8572\n Buffers: shared hit=2296\n -> Nested Loop (cost=5.71..933.65 rows=1 width=256) (actual time=1.324..2.731 rows=2 loops=1)\n Output: con.conname, pkc.conname, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (d2.refobjid = pkc.oid)\n Buffers: shared hit=601\n -> Nested Loop (cost=5.43..933.00 rows=1 width=200) (actual time=1.315..2.713 rows=2 loops=1)\n Output: con.conname, con.confrelid, d2.refobjid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=593\n -> Nested Loop (cost=5.15..931.14 rows=1 width=200) (actual time=1.305..2.687 rows=2 loops=1)\n 
Output: con.conname, con.confrelid, d1.refobjid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=587\n -> Nested Loop (cost=4.86..929.16 rows=1 width=200) (actual time=1.292..2.662 rows=2 loops=1)\n Output: con.conname, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Join Filter: (con.connamespace = ncon.oid)\n Rows Removed by Join Filter: 10\n Buffers: shared hit=581\n -> Nested Loop (cost=4.86..928.02 rows=1 width=204) (actual time=1.288..2.652 rows=2 loops=1)\n Output: con.conname, con.connamespace, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Inner Unique: true\n Buffers: shared hit=579\n -> Nested Loop (cost=4.58..925.06 rows=2 width=208) (actual time=1.273..2.626 rows=2 loops=1)\n Output: con.conname, con.connamespace, con.conrelid, con.oid, con.confrelid, \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=573\n -> Append (cost=4.30..900.14 rows=3 width=128) (actual time=1.250..2.586 rows=5 loops=1)\n Buffers: shared hit=560\n -> Subquery Scan on \"*SELECT* 1\" (cost=4.30..449.91 rows=1 width=128) (actual time=1.249..1.283 rows=3 loops=1)\n Output: \"*SELECT* 1\".table_name, \"*SELECT* 1\".constraint_name\n Buffers: shared hit=282\n -> Nested Loop (cost=4.30..449.90 rows=1 width=512) (actual time=1.249..1.280 rows=3 loops=1)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (c_1.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (c_1.connamespace = nc.oid)\n Rows Removed by Join Filter: 15\n Buffers: shared hit=282\n -> Nested Loop (cost=4.30..448.76 rows=1 width=132) (actual time=1.242..1.257 rows=3 loops=1)\n Output: r.relname, c_1.conname, c_1.connamespace\n Buffers: shared hit=279\n -> Nested Loop (cost=0.00..434.55 rows=1 width=68) (actual time=1.217..1.225 rows=1 loops=1)\n Output: r.relname, r.oid\n Join Filter: (nr.oid = r.relnamespace)\n Rows Removed by Join Filter: 6\n Buffers: shared hit=275\n -> Seq Scan on pg_catalog.pg_namespace nr (cost=0.00..1.07 rows=4 width=4) (actual time=0.010..0.017 rows=7 loops=1)\n Output: nr.oid, nr.nspname, nr.nspowner, nr.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Materialize (cost=0.00..433.36 rows=2 width=72) (actual time=0.004..0.172 rows=1 loops=7)\n Output: r.relname, r.relnamespace, r.oid\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r (cost=0.00..433.35 rows=2 width=72) (actual time=0.028..1.198 rows=1 loops=1)\n Output: r.relname, r.relnamespace, r.oid\n Filter: ((r.relkind = ANY ('{r,p}'::\"char\"[])) AND (lower(((r.relname)::information_schema.sql_identifier)::text) = 'secrole_condcollection'::text) AND (pg_has_role(r.relowner, 'USAGE'::text) OR has_table_privilege(r.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 3252\n Buffers: shared hit=274\n -> Bitmap Heap Scan on pg_catalog.pg_constraint c_1 (cost=4.30..14.18 rows=3 width=72) (actual time=0.020..0.026 rows=3 loops=1)\n Output: c_1.oid, c_1.conname, c_1.connamespace, 
c_1.contype, c_1.condeferrable, c_1.condeferred, c_1.convalidated, c_1.conrelid, c_1.contypid, c_1.conindid, c_1.conparentid, c_1.confrelid, c_1.confupdtype, c_1.confdeltype, c_1.confmatchtype, c_1.conislocal, c_1.coninhcount, c_1.connoinherit, c_1.conkey, c_1.confkey, c_1.conpfeqop, c_1.conppeqop, c_1.conffeqop, c_1.conexclop, c_1.conbin\n Recheck Cond: (c_1.conrelid = r.oid)\n Filter: (c_1.contype <> ALL ('{t,x}'::\"char\"[]))\n Heap Blocks: exact=2\n Buffers: shared hit=4\n -> Bitmap Index Scan on pg_constraint_conrelid_contypid_conname_index (cost=0.00..4.30 rows=3 width=0) (actual time=0.016..0.016 rows=3 loops=1)\n Index Cond: (c_1.conrelid = r.oid)\n Buffers: shared hit=2\n -> Seq Scan on pg_catalog.pg_namespace nc (cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.003 rows=6 loops=3)\n Output: nc.oid, nc.nspname, nc.nspowner, nc.nspacl\n Buffers: shared hit=3\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.29..450.21 rows=2 width=128) (actual time=1.294..1.300 rows=2 loops=1)\n Output: \"*SELECT* 2\".table_name, \"*SELECT* 2\".constraint_name\n Buffers: shared hit=278\n -> Nested Loop (cost=0.29..450.19 rows=2 width=512) (actual time=1.294..1.299 rows=2 loops=1)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (((((((nr_1.oid)::text || '_'::text) || (r_1.oid)::text) || '_'::text) || (a_2.attnum)::text) || '_not_null'::text))::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_1.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Buffers: shared hit=278\n -> Nested Loop (cost=0.00..434.55 rows=1 width=72) (actual time=1.273..1.276 rows=1 loops=1)\n Output: nr_1.oid, r_1.oid, r_1.relname\n Join Filter: (nr_1.oid = r_1.relnamespace)\n Rows Removed by Join Filter: 6\n Buffers: shared hit=275\n -> Seq Scan on pg_catalog.pg_namespace nr_1 (cost=0.00..1.07 rows=4 width=4) (actual time=0.013..0.017 rows=7 loops=1)\n Output: nr_1.oid, nr_1.nspname, nr_1.nspowner, nr_1.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_1.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Materialize (cost=0.00..433.36 rows=2 width=72) (actual time=0.006..0.179 rows=1 loops=7)\n Output: r_1.oid, r_1.relname, r_1.relnamespace\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_1 (cost=0.00..433.35 rows=2 width=72) (actual time=0.030..1.245 rows=1 loops=1)\n Output: r_1.oid, r_1.relname, r_1.relnamespace\n Filter: ((r_1.relkind = ANY ('{r,p}'::\"char\"[])) AND (lower(((r_1.relname)::information_schema.sql_identifier)::text) = 'secrole_condcollection'::text) AND (pg_has_role(r_1.relowner, 'USAGE'::text) OR has_table_privilege(r_1.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_1.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 3252\n Buffers: shared hit=274\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a_2 (cost=0.29..15.56 rows=2 width=6) (actual time=0.015..0.016 rows=2 loops=1)\n Output: a_2.attrelid, a_2.attname, a_2.atttypid, a_2.attstattarget, a_2.attlen, a_2.attnum, a_2.attndims, a_2.attcacheoff, a_2.atttypmod, a_2.attbyval, a_2.attstorage, a_2.attalign, a_2.attnotnull, a_2.atthasdef, a_2.atthasmissing, a_2.attidentity, a_2.attgenerated, a_2.attisdropped, a_2.attislocal, a_2.attinhcount, a_2.attcollation, a_2.attacl, 
a_2.attoptions, a_2.attfdwoptions, a_2.attmissingval\n Index Cond: ((a_2.attrelid = r_1.oid) AND (a_2.attnum > 0))\n Filter: (a_2.attnotnull AND (NOT a_2.attisdropped))\n Buffers: shared hit=3\n -> Index Scan using pg_constraint_conname_nsp_index on pg_catalog.pg_constraint con (cost=0.28..8.30 rows=1 width=80) (actual time=0.006..0.007 rows=0 loops=5)\n Output: con.oid, con.conname, con.connamespace, con.contype, con.condeferrable, con.condeferred, con.convalidated, con.conrelid, con.contypid, con.conindid, con.conparentid, con.confrelid, con.confupdtype, con.confdeltype, con.confmatchtype, con.conislocal, con.coninhcount, con.connoinherit, con.conkey, con.confkey, con.conpfeqop, con.conppeqop, con.conffeqop, con.conexclop, con.conbin\n Index Cond: (con.conname = (\"*SELECT* 1\".constraint_name)::name)\n Filter: (con.contype = 'f'::\"char\")\n Rows Removed by Filter: 0\n Buffers: shared hit=13\n -> Index Scan using pg_class_oid_index on pg_catalog.pg_class c (cost=0.28..1.48 rows=1 width=4) (actual time=0.010..0.011 rows=1 loops=2)\n Output: c.oid, c.relname, c.relnamespace, c.reltype, c.reloftype, c.relowner, c.relam, c.relfilenode, c.reltablespace, c.relpages, c.reltuples, c.relallvisible, c.reltoastrelid, c.relhasindex, c.relisshared, c.relpersistence, c.relkind, c.relnatts, c.relchecks, c.relhasrules, c.relhastriggers, c.relhassubclass, c.relrowsecurity, c.relforcerowsecurity, c.relispopulated, c.relreplident, c.relispartition, c.relrewrite, c.relfrozenxid, c.relminmxid, c.relacl, c.reloptions, c.relpartbound\n Index Cond: (c.oid = con.conrelid)\n Filter: (pg_has_role(c.relowner, 'USAGE'::text) OR has_table_privilege(c.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid, 'INSERT, UPDATE, REFERENCES'::text))\n Buffers: shared hit=6\n -> Seq Scan on pg_catalog.pg_namespace ncon (cost=0.00..1.06 rows=6 width=4) (actual time=0.002..0.002 rows=6 loops=2)\n Output: ncon.oid, ncon.nspname, ncon.nspowner, ncon.nspacl\n Buffers: shared hit=2\n -> Index Scan using pg_depend_depender_index on pg_catalog.pg_depend d1 (cost=0.29..1.97 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=2)\n Output: d1.classid, d1.objid, d1.objsubid, d1.refclassid, d1.refobjid, d1.refobjsubid, d1.deptype\n Index Cond: ((d1.classid = '2606'::oid) AND (d1.objid = con.oid))\n Filter: ((d1.refclassid = '1259'::oid) AND (d1.refobjsubid = 0))\n Rows Removed by Filter: 2\n Buffers: shared hit=6\n -> Index Scan using pg_depend_depender_index on pg_catalog.pg_depend d2 (cost=0.29..1.85 rows=1 width=8) (actual time=0.006..0.010 rows=1 loops=2)\n Output: d2.classid, d2.objid, d2.objsubid, d2.refclassid, d2.refobjid, d2.refobjsubid, d2.deptype\n Index Cond: ((d2.classid = '1259'::oid) AND (d2.objid = d1.refobjid) AND (d2.objsubid = 0))\n Filter: ((d2.refclassid = '2606'::oid) AND (d2.deptype = 'i'::\"char\"))\n Buffers: shared hit=6\n -> Index Scan using pg_constraint_conrelid_contypid_conname_index on pg_catalog.pg_constraint pkc (cost=0.28..0.64 rows=1 width=76) (actual time=0.007..0.007 rows=1 loops=2)\n Output: pkc.oid, pkc.conname, pkc.connamespace, pkc.contype, pkc.condeferrable, pkc.condeferred, pkc.convalidated, pkc.conrelid, pkc.contypid, pkc.conindid, pkc.conparentid, pkc.confrelid, pkc.confupdtype, pkc.confdeltype, pkc.confmatchtype, pkc.conislocal, pkc.coninhcount, pkc.connoinherit, pkc.conkey, pkc.confkey, pkc.conpfeqop, pkc.conppeqop, pkc.conffeqop, pkc.conexclop, pkc.conbin\n Index Cond: (pkc.conrelid = con.confrelid)\n Filter: (pkc.contype = ANY 
('{p,u}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=8\n -> Append (cost=599.64..2816.66 rows=733 width=128) (actual time=1.033..10.237 rows=4287 loops=2)\n Buffers: shared hit=1695\n -> Subquery Scan on \"*SELECT* 1_1\" (cost=599.64..645.39 rows=175 width=128) (actual time=1.032..3.966 rows=1707 loops=2)\n Output: \"*SELECT* 1_1\".table_name, \"*SELECT* 1_1\".constraint_name\n Buffers: shared hit=328\n -> Nested Loop (cost=599.64..643.64 rows=175 width=512) (actual time=1.032..3.842 rows=1707 loops=2)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (c_2.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_2.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (c_2.connamespace = nc_1.oid)\n Rows Removed by Join Filter: 8624\n Buffers: shared hit=328\n -> Nested Loop (cost=599.64..628.68 rows=175 width=132) (actual time=1.028..2.578 rows=1707 loops=2)\n Output: r_2.relname, c_2.conname, c_2.connamespace\n Inner Unique: true\n Join Filter: (r_2.relnamespace = nr_2.oid)\n Rows Removed by Join Filter: 5210\n Buffers: shared hit=327\n -> Merge Join (cost=599.64..613.40 rows=263 width=136) (actual time=1.019..1.684 rows=1707 loops=2)\n Output: c_2.conname, c_2.connamespace, r_2.relname, r_2.relnamespace\n Inner Unique: true\n Merge Cond: (c_2.conrelid = r_2.oid)\n Buffers: shared hit=326\n -> Sort (cost=169.02..173.43 rows=1762 width=72) (actual time=0.473..0.622 rows=1709 loops=2)\n Output: c_2.conname, c_2.connamespace, c_2.conrelid\n Sort Key: c_2.conrelid\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=52\n -> Seq Scan on pg_catalog.pg_constraint c_2 (cost=0.00..74.03 rows=1762 width=72) (actual time=0.005..0.469 rows=1709 loops=1)\n Output: c_2.conname, c_2.connamespace, c_2.conrelid\n Filter: (c_2.contype <> ALL ('{t,x}'::\"char\"[]))\n Buffers: shared hit=52\n -> Sort (cost=430.62..431.81 rows=476 width=72) (actual time=0.533..0.604 rows=694 loops=2)\n Output: r_2.relname, r_2.relnamespace, r_2.oid\n Sort Key: r_2.oid\n Sort Method: quicksort Memory: 122kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_2 (cost=0.00..409.45 rows=476 width=72) (actual time=0.007..0.882 rows=694 loops=1)\n Output: r_2.relname, r_2.relnamespace, r_2.oid\n Filter: ((r_2.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_2.relowner, 'USAGE'::text) OR has_table_privilege(r_2.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_2.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Materialize (cost=0.00..1.09 rows=4 width=4) (actual time=0.000..0.000 rows=4 loops=3414)\n Output: nr_2.oid\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_2 (cost=0.00..1.07 rows=4 width=4) (actual time=0.009..0.015 rows=7 loops=1)\n Output: nr_2.oid\n Filter: (NOT pg_is_other_temp_schema(nr_2.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Materialize (cost=0.00..1.09 rows=6 width=4) (actual time=0.000..0.000 rows=6 loops=3414)\n Output: nc_1.oid\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc_1 (cost=0.00..1.06 rows=6 width=4) (actual time=0.003..0.004 rows=9 loops=1)\n Output: nc_1.oid\n Buffers: shared hit=1\n -> Subquery Scan on 
\"*SELECT* 2_1\" (cost=2110.11..2167.61 rows=558 width=128) (actual time=3.730..6.052 rows=2580 loops=2)\n Output: \"*SELECT* 2_1\".table_name, \"*SELECT* 2_1\".constraint_name\n Buffers: shared hit=1367\n -> Merge Join (cost=2110.11..2162.03 rows=558 width=512) (actual time=3.729..5.866 rows=2580 loops=2)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (((((((nr_3.oid)::text || '_'::text) || (r_3.oid)::text) || '_'::text) || (a_3.attnum)::text) || '_not_null'::text))::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_3.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Merge Cond: (r_3.oid = a_3.attrelid)\n Buffers: shared hit=1367\n -> Sort (cost=449.42..450.21 rows=317 width=72) (actual time=0.900..0.965 rows=694 loops=2)\n Output: nr_3.oid, r_3.oid, r_3.relname\n Sort Key: r_3.oid\n Sort Method: quicksort Memory: 122kB\n Buffers: shared hit=275\n -> Nested Loop (cost=0.00..436.25 rows=317 width=72) (actual time=0.038..1.605 rows=694 loops=1)\n Output: nr_3.oid, r_3.oid, r_3.relname\n Inner Unique: true\n Join Filter: (nr_3.oid = r_3.relnamespace)\n Rows Removed by Join Filter: 2013\n Buffers: shared hit=275\n -> Seq Scan on pg_catalog.pg_class r_3 (cost=0.00..409.45 rows=476 width=72) (actual time=0.022..1.227 rows=694 loops=1)\n Output: r_3.oid, r_3.relname, r_3.relnamespace, r_3.reltype, r_3.reloftype, r_3.relowner, r_3.relam, r_3.relfilenode, r_3.reltablespace, r_3.relpages, r_3.reltuples, r_3.relallvisible, r_3.reltoastrelid, r_3.relhasindex, r_3.relisshared, r_3.relpersistence, r_3.relkind, r_3.relnatts, r_3.relchecks, r_3.relhasrules, r_3.relhastriggers, r_3.relhassubclass, r_3.relrowsecurity, r_3.relforcerowsecurity, r_3.relispopulated, r_3.relreplident, r_3.relispartition, r_3.relrewrite, r_3.relfrozenxid, r_3.relminmxid, r_3.relacl, r_3.reloptions, r_3.relpartbound\n Filter: ((r_3.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_3.relowner, 'USAGE'::text) OR has_table_privilege(r_3.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_3.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Materialize (cost=0.00..1.09 rows=4 width=4) (actual time=0.000..0.000 rows=4 loops=694)\n Output: nr_3.oid\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_3 (cost=0.00..1.07 rows=4 width=4) (actual time=0.006..0.009 rows=7 loops=1)\n Output: nr_3.oid\n Filter: (NOT pg_is_other_temp_schema(nr_3.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Sort (cost=1660.69..1674.70 rows=5606 width=6) (actual time=2.822..2.946 rows=2598 loops=2)\n Output: a_3.attnum, a_3.attrelid\n Sort Key: a_3.attrelid\n Sort Method: quicksort Memory: 218kB\n Buffers: shared hit=1092\n -> Seq Scan on pg_catalog.pg_attribute a_3 (cost=0.00..1311.64 rows=5606 width=6) (actual time=0.008..5.054 rows=2598 loops=1)\n Output: a_3.attnum, a_3.attrelid\n Filter: (a_3.attnotnull AND (NOT a_3.attisdropped) AND (a_3.attnum > 0))\n Rows Removed by Filter: 15396\n Buffers: shared hit=1092\n -> ProjectSet (cost=564.95..1875.97 rows=249000 width=341) (actual time=2.154..10.656 rows=1983 loops=2)\n Output: r_4.oid, NULL::name, r_4.relowner, NULL::name, NULL::name, NULL::oid, c_3.conname, NULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid, 
information_schema._pg_expandarray(c_3.conkey)\n Buffers: shared hit=328\n -> Merge Join (cost=564.95..567.48 rows=249 width=95) (actual time=2.034..2.481 rows=1707 loops=2)\n Output: c_3.conkey, r_4.oid, r_4.relowner, c_3.conname\n Inner Unique: true\n Merge Cond: (c_3.connamespace = nc_2.oid)\n Buffers: shared hit=328\n -> Sort (cost=563.80..564.43 rows=249 width=99) (actual time=2.026..2.119 rows=1707 loops=2)\n Output: r_4.oid, r_4.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Sort Key: c_3.connamespace\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=327\n -> Nested Loop (cost=516.77..553.89 rows=249 width=99) (actual time=2.080..3.571 rows=1707 loops=1)\n Output: r_4.oid, r_4.relowner, c_3.conname, c_3.conkey, c_3.connamespace\n Inner Unique: true\n Join Filter: (r_4.relnamespace = nr_4.oid)\n Rows Removed by Join Filter: 5210\n Buffers: shared hit=327\n -> Merge Join (cost=516.77..532.60 rows=374 width=103) (actual time=2.065..2.631 rows=1707 loops=1)\n Output: r_4.oid, r_4.relowner, r_4.relnamespace, c_3.conname, c_3.conkey, c_3.connamespace\n Merge Cond: (r_4.oid = c_3.conrelid)\n Buffers: shared hit=326\n -> Sort (cost=345.67..347.36 rows=677 width=12) (actual time=0.999..1.034 rows=694 loops=1)\n Output: r_4.oid, r_4.relowner, r_4.relnamespace\n Sort Key: r_4.oid\n Sort Method: quicksort Memory: 57kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_4 (cost=0.00..313.84 rows=677 width=12) (actual time=0.014..0.848 rows=694 loops=1)\n Output: r_4.oid, r_4.relowner, r_4.relnamespace\n Filter: (r_4.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Sort (cost=171.10..175.50 rows=1760 width=95) (actual time=1.056..1.164 rows=1707 loops=1)\n Output: c_3.conname, c_3.conkey, c_3.conrelid, c_3.connamespace\n Sort Key: c_3.conrelid\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=52\n -> Seq Scan on pg_catalog.pg_constraint c_3 (cost=0.00..76.23 rows=1760 width=95) (actual time=0.009..0.519 rows=1707 loops=1)\n Output: c_3.conname, c_3.conkey, c_3.conrelid, c_3.connamespace\n Filter: (c_3.contype = ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=52\n -> Materialize (cost=0.00..1.09 rows=4 width=4) (actual time=0.000..0.000 rows=4 loops=1707)\n Output: nr_4.oid\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_4 (cost=0.00..1.07 rows=4 width=4) (actual time=0.007..0.011 rows=7 loops=1)\n Output: nr_4.oid\n Filter: (NOT pg_is_other_temp_schema(nr_4.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Sort (cost=1.14..1.15 rows=6 width=4) (actual time=0.006..0.008 rows=9 loops=2)\n Output: nc_2.oid\n Sort Key: nc_2.oid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nc_2 (cost=0.00..1.06 rows=6 width=4) (actual time=0.006..0.007 rows=9 loops=1)\n Output: nc_2.oid\n Buffers: shared hit=1\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a (cost=0.29..0.33 rows=1 width=70) (actual time=0.020..0.020 rows=1 loops=2)\n Output: a.attrelid, a.attname, a.atttypid, a.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff, a.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull, a.atthasdef, a.atthasmissing, a.attidentity, a.attgenerated, a.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl, a.attoptions, a.attfdwoptions, a.attmissingval\n Index Cond: ((a.attrelid = r_4.oid) AND (a.attnum = 
((information_schema._pg_expandarray(c_3.conkey))).x))\n Filter: ((NOT a.attisdropped) AND (pg_has_role(r_4.relowner, 'USAGE'::text) OR has_column_privilege(r_4.oid, a.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\n -> Subquery Scan on \"*SELECT* 1_2\" (cost=0.28..173.32 rows=1 width=128) (actual time=0.040..5.978 rows=595 loops=2)\n Output: \"*SELECT* 1_2\".constraint_name, \"*SELECT* 1_2\".table_name\n Buffers: shared hit=6054\n -> Nested Loop (cost=0.28..173.31 rows=1 width=512) (actual time=0.040..5.914 rows=595 loops=2)\n Output: NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (c_4.conname)::information_schema.sql_identifier, NULL::information_schema.sql_identifier, NULL::information_schema.sql_identifier, (r_5.relname)::information_schema.sql_identifier, NULL::information_schema.character_data, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no, NULL::information_schema.yes_or_no\n Inner Unique: true\n Join Filter: (r_5.relnamespace = nr_5.oid)\n Rows Removed by Join Filter: 1836\n Buffers: shared hit=6054\n -> Nested Loop (cost=0.28..172.19 rows=1 width=132) (actual time=0.031..3.784 rows=595 loops=2)\n Output: c_4.conname, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Join Filter: (c_4.connamespace = nc_3.oid)\n Rows Removed by Join Filter: 3026\n Buffers: shared hit=4864\n -> Nested Loop (cost=0.28..171.05 rows=1 width=136) (actual time=0.024..1.976 rows=595 loops=2)\n Output: c_4.conname, c_4.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Buffers: shared hit=3674\n -> Seq Scan on pg_catalog.pg_constraint c_4 (cost=0.00..96.05 rows=9 width=72) (actual time=0.012..0.489 rows=595 loops=2)\n Output: c_4.oid, c_4.conname, c_4.connamespace, c_4.contype, c_4.condeferrable, c_4.condeferred, c_4.convalidated, c_4.conrelid, c_4.contypid, c_4.conindid, c_4.conparentid, c_4.confrelid, c_4.confupdtype, c_4.confdeltype, c_4.confmatchtype, c_4.conislocal, c_4.coninhcount, c_4.connoinherit, c_4.conkey, c_4.confkey, c_4.conpfeqop, c_4.conppeqop, c_4.conffeqop, c_4.conexclop, c_4.conbin\n Filter: ((c_4.contype <> ALL ('{t,x}'::\"char\"[])) AND ((CASE c_4.contype WHEN 'c'::\"char\" THEN 'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN KEY'::text WHEN 'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\" THEN 'UNIQUE'::text ELSE NULL::text END)::text = 'PRIMARY KEY'::text))\n Rows Removed by Filter: 1114\n Buffers: shared hit=104\n -> Index Scan using pg_class_oid_index on pg_catalog.pg_class r_5 (cost=0.28..8.33 rows=1 width=72) (actual time=0.002..0.002 rows=1 loops=1190)\n Output: r_5.oid, r_5.relname, r_5.relnamespace, r_5.reltype, r_5.reloftype, r_5.relowner, r_5.relam, r_5.relfilenode, r_5.reltablespace, r_5.relpages, r_5.reltuples, r_5.relallvisible, r_5.reltoastrelid, r_5.relhasindex, r_5.relisshared, r_5.relpersistence, r_5.relkind, r_5.relnatts, r_5.relchecks, r_5.relhasrules, r_5.relhastriggers, r_5.relhassubclass, r_5.relrowsecurity, r_5.relforcerowsecurity, r_5.relispopulated, r_5.relreplident, r_5.relispartition, r_5.relrewrite, r_5.relfrozenxid, r_5.relminmxid, r_5.relacl, r_5.reloptions, r_5.relpartbound\n Index Cond: (r_5.oid = c_4.conrelid)\n Filter: ((r_5.relkind = ANY ('{r,p}'::\"char\"[])) AND (pg_has_role(r_5.relowner, 'USAGE'::text) OR has_table_privilege(r_5.oid, 'INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(r_5.oid, 'INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=3570\n -> Seq Scan on 
pg_catalog.pg_namespace nc_3 (cost=0.00..1.06 rows=6 width=4) (actual time=0.000..0.001 rows=6 loops=1190)\n Output: nc_3.oid, nc_3.nspname, nc_3.nspowner, nc_3.nspacl\n Buffers: shared hit=1190\n -> Seq Scan on pg_catalog.pg_namespace nr_5 (cost=0.00..1.07 rows=4 width=4) (actual time=0.001..0.002 rows=4 loops=1190)\n Output: nr_5.oid, nr_5.nspname, nr_5.nspowner, nr_5.nspacl\n Filter: (NOT pg_is_other_temp_schema(nr_5.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1190\n -> ProjectSet (cost=564.95..1875.97 rows=249000 width=341) (actual time=2.653..10.818 rows=1983 loops=2)\n Output: r_6.oid, NULL::name, r_6.relowner, NULL::name, NULL::name, NULL::oid, c_5.conname, NULL::\"char\", NULL::oid, NULL::smallint[], NULL::oid, information_schema._pg_expandarray(c_5.conkey)\n Buffers: shared hit=328\n -> Merge Join (cost=564.95..567.48 rows=249 width=95) (actual time=2.571..3.014 rows=1707 loops=2)\n Output: c_5.conkey, r_6.oid, r_6.relowner, c_5.conname\n Inner Unique: true\n Merge Cond: (c_5.connamespace = nc_4.oid)\n Buffers: shared hit=328\n -> Sort (cost=563.80..564.43 rows=249 width=99) (actual time=2.557..2.654 rows=1707 loops=2)\n Output: r_6.oid, r_6.relowner, c_5.conname, c_5.conkey, c_5.connamespace\n Sort Key: c_5.connamespace\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=327\n -> Nested Loop (cost=516.77..553.89 rows=249 width=99) (actual time=2.335..4.616 rows=1707 loops=1)\n Output: r_6.oid, r_6.relowner, c_5.conname, c_5.conkey, c_5.connamespace\n Inner Unique: true\n Join Filter: (r_6.relnamespace = nr_6.oid)\n Rows Removed by Join Filter: 5210\n Buffers: shared hit=327\n -> Merge Join (cost=516.77..532.60 rows=374 width=103) (actual time=2.320..2.962 rows=1707 loops=1)\n Output: r_6.oid, r_6.relowner, r_6.relnamespace, c_5.conname, c_5.conkey, c_5.connamespace\n Merge Cond: (r_6.oid = c_5.conrelid)\n Buffers: shared hit=326\n -> Sort (cost=345.67..347.36 rows=677 width=12) (actual time=1.185..1.231 rows=694 loops=1)\n Output: r_6.oid, r_6.relowner, r_6.relnamespace\n Sort Key: r_6.oid\n Sort Method: quicksort Memory: 57kB\n Buffers: shared hit=274\n -> Seq Scan on pg_catalog.pg_class r_6 (cost=0.00..313.84 rows=677 width=12) (actual time=0.008..1.020 rows=694 loops=1)\n Output: r_6.oid, r_6.relowner, r_6.relnamespace\n Filter: (r_6.relkind = ANY ('{r,p}'::\"char\"[]))\n Rows Removed by Filter: 2559\n Buffers: shared hit=274\n -> Sort (cost=171.10..175.50 rows=1760 width=95) (actual time=1.124..1.233 rows=1707 loops=1)\n Output: c_5.conname, c_5.conkey, c_5.conrelid, c_5.connamespace\n Sort Key: c_5.conrelid\n Sort Method: quicksort Memory: 289kB\n Buffers: shared hit=52\n -> Seq Scan on pg_catalog.pg_constraint c_5 (cost=0.00..76.23 rows=1760 width=95) (actual time=0.007..0.544 rows=1707 loops=1)\n Output: c_5.conname, c_5.conkey, c_5.conrelid, c_5.connamespace\n Filter: (c_5.contype = ANY ('{p,u,f}'::\"char\"[]))\n Rows Removed by Filter: 2\n Buffers: shared hit=52\n -> Materialize (cost=0.00..1.09 rows=4 width=4) (actual time=0.000..0.001 rows=4 loops=1707)\n Output: nr_6.oid\n Buffers: shared hit=1\n -> Seq Scan on pg_catalog.pg_namespace nr_6 (cost=0.00..1.07 rows=4 width=4) (actual time=0.006..0.013 rows=7 loops=1)\n Output: nr_6.oid\n Filter: (NOT pg_is_other_temp_schema(nr_6.oid))\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Sort (cost=1.14..1.15 rows=6 width=4) (actual time=0.010..0.011 rows=9 loops=2)\n Output: nc_4.oid\n Sort Key: nc_4.oid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1\n -> Seq Scan on 
pg_catalog.pg_namespace nc_4 (cost=0.00..1.06 rows=6 width=4) (actual time=0.013..0.014 rows=9 loops=1)\n Output: nc_4.oid\n Buffers: shared hit=1\n -> Index Scan using pg_attribute_relid_attnum_index on pg_catalog.pg_attribute a_1 (cost=0.29..0.33 rows=1 width=70) (actual time=0.015..0.015 rows=1 loops=2)\n Output: a_1.attrelid, a_1.attname, a_1.atttypid, a_1.attstattarget, a_1.attlen, a_1.attnum, a_1.attndims, a_1.attcacheoff, a_1.atttypmod, a_1.attbyval, a_1.attstorage, a_1.attalign, a_1.attnotnull, a_1.atthasdef, a_1.atthasmissing, a_1.attidentity, a_1.attgenerated, a_1.attisdropped, a_1.attislocal, a_1.attinhcount, a_1.attcollation, a_1.attacl, a_1.attoptions, a_1.attfdwoptions, a_1.attmissingval\n Index Cond: ((a_1.attrelid = r_6.oid) AND (a_1.attnum = ((information_schema._pg_expandarray(c_5.conkey))).x))\n Filter: ((NOT a_1.attisdropped) AND (pg_has_role(r_6.relowner, 'USAGE'::text) OR has_column_privilege(r_6.oid, a_1.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n Buffers: shared hit=6\nPlanning Time: 7.329 ms\nExecution Time: 80.546 ms\nserver parameters (everything except \nrandom_page_cost \n\n)\nallow_system_table_mods\toff\tAllows modifications of the structure of system tables.\napplication_name\tpgAdmin 4 - CONN:6043198\tSets the application name to be reported in statistics and logs.\narchive_cleanup_command\t\tSets the shell command that will be executed at every restart point.\narchive_command\t(disabled)\tSets the shell command that will be called to archive a WAL file.\narchive_mode\toff\tAllows archiving of WAL files using archive_command.\narchive_timeout\t0\tForces a switch to the next WAL file if a new file has not been started within N seconds.\narray_nulls\ton\tEnable input of NULL elements in arrays.\nauthentication_timeout\t1min\tSets the maximum allowed time to complete client authentication.\nautovacuum\ton\tStarts the autovacuum subprocess.\nautovacuum_analyze_scale_factor\t0.1\tNumber of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples.\nautovacuum_analyze_threshold\t50\tMinimum number of tuple inserts, updates, or deletes prior to analyze.\nautovacuum_freeze_max_age\t200000000\tAge at which to autovacuum a table to prevent transaction ID wraparound.\nautovacuum_max_workers\t3\tSets the maximum number of simultaneously running autovacuum worker processes.\nautovacuum_multixact_freeze_max_age\t400000000\tMultixact age at which to autovacuum a table to prevent multixact wraparound.\nautovacuum_naptime\t1min\tTime to sleep between autovacuum runs.\nautovacuum_vacuum_cost_delay\t2ms\tVacuum cost delay in milliseconds, for autovacuum.\nautovacuum_vacuum_cost_limit\t-1\tVacuum cost amount available before napping, for autovacuum.\nautovacuum_vacuum_scale_factor\t0.2\tNumber of tuple updates or deletes prior to vacuum as a fraction of reltuples.\nautovacuum_vacuum_threshold\t50\tMinimum number of tuple updates or deletes prior to vacuum.\nautovacuum_work_mem\t-1\tSets the maximum memory to be used by each autovacuum worker process.\nbackend_flush_after\t0\tNumber of pages after which previously performed writes are flushed to disk.\nbackslash_quote\tsafe_encoding\tSets whether \"\\'\" is allowed in string literals.\nbgwriter_delay\t200ms\tBackground writer sleep time between rounds.\nbgwriter_flush_after\t0\tNumber of pages after which previously performed writes are flushed to disk.\nbgwriter_lru_maxpages\t100\tBackground writer maximum number of LRU pages to flush per round.\nbgwriter_lru_multiplier\t2\tMultiple of the 
average buffer usage to free per round.\nblock_size\t8192\tShows the size of a disk block.\nbonjour\toff\tEnables advertising the server via Bonjour.\nbonjour_name\t\tSets the Bonjour service name.\nbytea_output\thex\tSets the output format for bytea.\ncheck_function_bodies\ton\tCheck function bodies during CREATE FUNCTION.\ncheckpoint_completion_target\t0.5\tTime spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval.\ncheckpoint_flush_after\t0\tNumber of pages after which previously performed writes are flushed to disk.\ncheckpoint_timeout\t5min\tSets the maximum time between automatic WAL checkpoints.\ncheckpoint_warning\t30s\tEnables warnings if checkpoint segments are filled more frequently than this.\nclient_encoding\tUNICODE\tSets the client's character set encoding.\nclient_min_messages\tnotice\tSets the message levels that are sent to the client.\ncluster_name\t\tSets the name of the cluster, which is included in the process title.\ncommit_delay\t0\tSets the delay in microseconds between transaction commit and flushing WAL to disk.\ncommit_siblings\t5\tSets the minimum concurrent open transactions before performing commit_delay.\nconfig_file\tD:/ASCDB/postgresql.conf\tSets the server's main configuration file.\nconstraint_exclusion\tpartition\tEnables the planner to use constraints to optimize queries.\ncpu_index_tuple_cost\t0.005\tSets the planner's estimate of the cost of processing each index entry during an index scan.\ncpu_operator_cost\t0.0025\tSets the planner's estimate of the cost of processing each operator or function call.\ncpu_tuple_cost\t0.01\tSets the planner's estimate of the cost of processing each tuple (row).\ncursor_tuple_fraction\t0.1\tSets the planner's estimate of the fraction of a cursor's rows that will be retrieved.\ndata_checksums\toff\tShows whether data checksums are turned on for this cluster.\ndata_directory\tD:/ASCDB\tSets the server's data directory.\ndata_directory_mode\t0700\tMode of the data directory.\ndata_sync_retry\toff\tWhether to continue running after a failure to sync data files.\nDateStyle\tISO, DMY\tSets the display format for date and time values.\ndb_user_namespace\toff\tEnables per-database user names.\ndeadlock_timeout\t1s\tSets the time to wait on a lock before checking for deadlock.\ndebug_assertions\toff\tShows whether the running server has assertion checks enabled.\ndebug_pretty_print\ton\tIndents parse and plan tree displays.\ndebug_print_parse\toff\tLogs each query's parse tree.\ndebug_print_plan\toff\tLogs each query's execution plan.\ndebug_print_rewritten\toff\tLogs each query's rewritten parse tree.\ndefault_statistics_target\t100\tSets the default statistics target.\ndefault_table_access_method\theap\tSets the default table access method for new tables.\ndefault_tablespace\t\tSets the default tablespace to create tables and indexes in.\ndefault_text_search_config\tpg_catalog.english\tSets default text search configuration.\ndefault_transaction_deferrable\toff\tSets the default deferrable status of new transactions.\ndefault_transaction_isolation\tread committed\tSets the transaction isolation level of each new transaction.\ndefault_transaction_read_only\toff\tSets the default read-only status of new transactions.\ndynamic_library_path\t$libdir\tSets the path for dynamically loadable modules.\ndynamic_shared_memory_type\twindows\tSelects the dynamic shared memory implementation used.\neffective_cache_size\t9GB\tSets the planner's assumption about the total size of the data 
caches.\neffective_io_concurrency\t0\tNumber of simultaneous requests that can be handled efficiently by the disk subsystem.\nenable_bitmapscan\ton\tEnables the planner's use of bitmap-scan plans.\nenable_gathermerge\ton\tEnables the planner's use of gather merge plans.\nenable_hashagg\ton\tEnables the planner's use of hashed aggregation plans.\nenable_hashjoin\ton\tEnables the planner's use of hash join plans.\nenable_indexonlyscan\ton\tEnables the planner's use of index-only-scan plans.\nenable_indexscan\ton\tEnables the planner's use of index-scan plans.\nenable_material\ton\tEnables the planner's use of materialization.\nenable_mergejoin\ton\tEnables the planner's use of merge join plans.\nenable_nestloop\ton\tEnables the planner's use of nested-loop join plans.\nenable_parallel_append\ton\tEnables the planner's use of parallel append plans.\nenable_parallel_hash\ton\tEnables the planner's use of parallel hash plans.\nenable_partition_pruning\ton\tEnables plan-time and run-time partition pruning.\nenable_partitionwise_aggregate\toff\tEnables partitionwise aggregation and grouping.\nenable_partitionwise_join\toff\tEnables partitionwise join.\nenable_seqscan\ton\tEnables the planner's use of sequential-scan plans.\nenable_sort\ton\tEnables the planner's use of explicit sort steps.\nenable_tidscan\ton\tEnables the planner's use of TID scan plans.\nescape_string_warning\ton\tWarn about backslash escapes in ordinary string literals.\nevent_source\tPostgreSQL\tSets the application name used to identify PostgreSQL messages in the event log.\nexit_on_error\toff\tTerminate session on any error.\nexternal_pid_file\t\tWrites the postmaster PID to the specified file.\nextra_float_digits\t1\tSets the number of digits displayed for floating-point values.\nforce_parallel_mode\toff\tForces use of parallel query facilities.\nfrom_collapse_limit\t80\tSets the FROM-list size beyond which subqueries are not collapsed.\nfsync\ton\tForces synchronization of updates to disk.\nfull_page_writes\ton\tWrites full pages to WAL when first modified after a checkpoint.\ngeqo\ton\tEnables genetic query optimization.\ngeqo_effort\t5\tGEQO: effort is used to set the default for other GEQO parameters.\ngeqo_generations\t0\tGEQO: number of iterations of the algorithm.\ngeqo_pool_size\t0\tGEQO: number of individuals in the population.\ngeqo_seed\t0\tGEQO: seed for random path selection.\ngeqo_selection_bias\t2\tGEQO: selective pressure within the population.\ngeqo_threshold\t12\tSets the threshold of FROM items beyond which GEQO is used.\ngin_fuzzy_search_limit\t0\tSets the maximum allowed result for exact search by GIN.\ngin_pending_list_limit\t4MB\tSets the maximum size of the pending list for GIN index.\nhba_file\tD:/ASCDB/pg_hba.conf\tSets the server's \"hba\" configuration file.\nhot_standby\ton\tAllows connections and queries during recovery.\nhot_standby_feedback\toff\tAllows feedback from a hot standby to the primary that will avoid query conflicts.\nhuge_pages\ttry\tUse of huge pages on Linux or Windows.\nident_file\tD:/ASCDB/pg_ident.conf\tSets the server's \"ident\" configuration file.\nidle_in_transaction_session_timeout\t0\tSets the maximum allowed duration of any idling transaction.\nignore_checksum_failure\toff\tContinues processing after a checksum failure.\nignore_system_indexes\toff\tDisables reading from system indexes.\ninteger_datetimes\ton\tDatetimes are integer based.\nIntervalStyle\tpostgres\tSets the display format for interval values.\njit\ton\tAllow JIT 
compilation.\njit_above_cost\t100000\tPerform JIT compilation if query is more expensive.\njit_debugging_support\toff\tRegister JIT compiled function with debugger.\njit_dump_bitcode\toff\tWrite out LLVM bitcode to facilitate JIT debugging.\njit_expressions\ton\tAllow JIT compilation of expressions.\njit_inline_above_cost\t500000\tPerform JIT inlining if query is more expensive.\njit_optimize_above_cost\t500000\tOptimize JITed functions if query is more expensive.\njit_profiling_support\toff\tRegister JIT compiled function with perf profiler.\njit_provider\tllvmjit\tJIT provider to use.\njit_tuple_deforming\ton\tAllow JIT compilation of tuple deforming.\njoin_collapse_limit\t80\tSets the FROM-list size beyond which JOIN constructs are not flattened.\nkrb_caseins_users\toff\tSets whether Kerberos and GSSAPI user names should be treated as case-insensitive.\nkrb_server_keyfile\t\tSets the location of the Kerberos server key file.\nlc_collate\tEnglish_United Kingdom.1252\tShows the collation order locale.\nlc_ctype\tEnglish_United Kingdom.1252\tShows the character classification and case conversion locale.\nlc_messages\tEnglish_United States.1252\tSets the language in which messages are displayed.\nlc_monetary\tEnglish_United States.1252\tSets the locale for formatting monetary amounts.\nlc_numeric\tEnglish_United States.1252\tSets the locale for formatting numbers.\nlc_time\tEnglish_United Kingdom.1252\tSets the locale for formatting date and time values.\nlisten_addresses\t*\tSets the host name or IP address(es) to listen to.\nlo_compat_privileges\toff\tEnables backward compatibility mode for privilege checks on large objects.\nlocal_preload_libraries\t\tLists unprivileged shared libraries to preload into each backend.\nlock_timeout\t0\tSets the maximum allowed duration of any wait for a lock.\nlog_autovacuum_min_duration\t-1\tSets the minimum execution time above which autovacuum actions will be logged.\nlog_checkpoints\toff\tLogs each checkpoint.\nlog_connections\toff\tLogs each successful connection.\nlog_destination\tstderr\tSets the destination for server log output.\nlog_directory\tlog\tSets the destination directory for log files.\nlog_disconnections\toff\tLogs end of a session, including duration.\nlog_duration\toff\tLogs the duration of each completed SQL statement.\nlog_error_verbosity\tdefault\tSets the verbosity of logged messages.\nlog_executor_stats\toff\tWrites executor performance statistics to the server log.\nlog_file_mode\t0640\tSets the file permissions for log files.\nlog_filename\tpostgresql-%Y-%m-%d_%H%M%S.log\tSets the file name pattern for log files.\nlog_hostname\toff\tLogs the host name in the connection logs.\nlog_line_prefix\t%m [%p] \tControls information prefixed to each log line.\nlog_lock_waits\toff\tLogs long lock waits.\nlog_min_duration_statement\t-1\tSets the minimum execution time above which statements will be logged.\nlog_min_error_statement\terror\tCauses all statements generating error at or above this level to be logged.\nlog_min_messages\twarning\tSets the message levels that are logged.\nlog_parser_stats\toff\tWrites parser performance statistics to the server log.\nlog_planner_stats\toff\tWrites planner performance statistics to the server log.\nlog_replication_commands\toff\tLogs each replication command.\nlog_rotation_age\t1d\tAutomatic log file rotation will occur after N minutes.\nlog_rotation_size\t10MB\tAutomatic log file rotation will occur after N kilobytes.\nlog_statement\tnone\tSets the type of statements 
logged.\nlog_statement_stats\toff\tWrites cumulative performance statistics to the server log.\nlog_temp_files\t-1\tLog the use of temporary files larger than this number of kilobytes.\nlog_timezone\tEurope/London\tSets the time zone to use in log messages.\nlog_transaction_sample_rate\t0\tSet the fraction of transactions to log for new transactions.\nlog_truncate_on_rotation\toff\tTruncate existing log files of same name during log rotation.\nlogging_collector\ton\tStart a subprocess to capture stderr output and/or csvlogs into log files.\nmaintenance_work_mem\t2047MB\tSets the maximum memory to be used for maintenance operations.\nmax_connections\t140\tSets the maximum number of concurrent connections.\nmax_files_per_process\t1000\tSets the maximum number of simultaneously open files for each server process.\nmax_function_args\t100\tShows the maximum number of function arguments.\nmax_identifier_length\t63\tShows the maximum identifier length.\nmax_index_keys\t32\tShows the maximum number of index keys.\nmax_locks_per_transaction\t64\tSets the maximum number of locks per transaction.\nmax_logical_replication_workers\t4\tMaximum number of logical replication worker processes.\nmax_parallel_maintenance_workers\t2\tSets the maximum number of parallel processes per maintenance operation.\nmax_parallel_workers\t8\tSets the maximum number of parallel workers that can be active at one time.\nmax_parallel_workers_per_gather\t2\tSets the maximum number of parallel processes per executor node.\nmax_pred_locks_per_page\t2\tSets the maximum number of predicate-locked tuples per page.\nmax_pred_locks_per_relation\t-2\tSets the maximum number of predicate-locked pages and tuples per relation.\nmax_pred_locks_per_transaction\t64\tSets the maximum number of predicate locks per transaction.\nmax_prepared_transactions\t0\tSets the maximum number of simultaneously prepared transactions.\nmax_replication_slots\t10\tSets the maximum number of simultaneously defined replication slots.\nmax_stack_depth\t2MB\tSets the maximum stack depth, in kilobytes.\nmax_standby_archive_delay\t30s\tSets the maximum delay before canceling queries when a hot standby server is processing archived WAL data.\nmax_standby_streaming_delay\t30s\tSets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data.\nmax_sync_workers_per_subscription\t2\tMaximum number of table synchronization workers per subscription.\nmax_wal_senders\t10\tSets the maximum number of simultaneously running WAL sender processes.\nmax_wal_size\t2GB\tSets the WAL size that triggers a checkpoint.\nmax_worker_processes\t8\tMaximum number of concurrent worker processes.\nmin_parallel_index_scan_size\t512kB\tSets the minimum amount of index data for a parallel scan.\nmin_parallel_table_scan_size\t8MB\tSets the minimum amount of table data for a parallel scan.\nmin_wal_size\t1GB\tSets the minimum size to shrink the WAL to.\nold_snapshot_threshold\t-1\tTime before a snapshot is too old to read pages changed after the snapshot was taken.\noperator_precedence_warning\toff\tEmit a warning for constructs that changed meaning since PostgreSQL 9.4.\nparallel_leader_participation\ton\tControls whether Gather and Gather Merge also run subplans.\nparallel_setup_cost\t1000\tSets the planner's estimate of the cost of starting up worker processes for parallel query.\nparallel_tuple_cost\t0.1\tSets the planner's estimate of the cost of passing each tuple (row) from worker to master backend.\npassword_encryption\tmd5\tChooses the 
algorithm for encrypting passwords.\nplan_cache_mode\tauto\tControls the planner's selection of custom or generic plan.\nport\t5432\tSets the TCP port the server listens on.\npost_auth_delay\t0\tWaits N seconds on connection startup after authentication.\npre_auth_delay\t0\tWaits N seconds on connection startup before authentication.\nprimary_conninfo\t\tSets the connection string to be used to connect to the sending server.\nprimary_slot_name\t\tSets the name of the replication slot to use on the sending server.\npromote_trigger_file\t\tSpecifies a file name whose presence ends recovery in the standby.\nquote_all_identifiers\toff\tWhen generating SQL fragments, quote all identifiers.\nrandom_page_cost\t4\tSets the planner's estimate of the cost of a nonsequentially fetched disk page.\nrecovery_end_command\t\tSets the shell command that will be executed once at the end of recovery.\nrecovery_min_apply_delay\t0\tSets the minimum delay for applying changes during recovery.\nrecovery_target\t\tSet to \"immediate\" to end recovery as soon as a consistent state is reached.\nrecovery_target_action\tpause\tSets the action to perform upon reaching the recovery target.\nrecovery_target_inclusive\ton\tSets whether to include or exclude transaction with recovery target.\nrecovery_target_lsn\t\tSets the LSN of the write-ahead log location up to which recovery will proceed.\nrecovery_target_name\t\tSets the named restore point up to which recovery will proceed.\nrecovery_target_time\t\tSets the time stamp up to which recovery will proceed.\nrecovery_target_timeline\tlatest\tSpecifies the timeline to recover into.\nrecovery_target_xid\t\tSets the transaction ID up to which recovery will proceed.\nrestart_after_crash\ton\tReinitialize server after backend crash.\nrestore_command\t\tSets the shell command that will retrieve an archived WAL file.\nrow_security\ton\tEnable row security.\nsearch_path\t\"$user\", public\tSets the schema search order for names that are not schema-qualified.\nsegment_size\t1GB\tShows the number of pages per disk file.\nseq_page_cost\t1\tSets the planner's estimate of the cost of a sequentially fetched disk page.\nserver_encoding\tUTF8\tSets the server (database) character set encoding.\nserver_version\t12.5\tShows the server version.\nserver_version_num\t120005\tShows the server version as an integer.\nsession_preload_libraries\t\tLists shared libraries to preload into each backend.\nsession_replication_role\torigin\tSets the session's behavior for triggers and rewrite rules.\nshared_buffers\t5100MB\tSets the number of shared memory buffers used by the server.\nshared_memory_type\twindows\tSelects the shared memory implementation used for the main shared memory region.\nshared_preload_libraries\t\tLists shared libraries to preload into server.\nssl\toff\tEnables SSL connections.\nssl_ca_file\t\tLocation of the SSL certificate authority file.\nssl_cert_file\tserver.crt\tLocation of the SSL server certificate file.\nssl_ciphers\tHIGH:MEDIUM:+3DES:!aNULL\tSets the list of allowed SSL ciphers.\nssl_crl_file\t\tLocation of the SSL certificate revocation list file.\nssl_dh_params_file\t\tLocation of the SSL DH parameters file.\nssl_ecdh_curve\tprime256v1\tSets the curve to use for ECDH.\nssl_key_file\tserver.key\tLocation of the SSL server private key file.\nssl_library\tOpenSSL\tName of the SSL library.\nssl_max_protocol_version\t\tSets the maximum SSL/TLS protocol version to use.\nssl_min_protocol_version\tTLSv1\tSets the minimum SSL/TLS protocol version to 
use.\nssl_passphrase_command\t\tCommand to obtain passphrases for SSL.\nssl_passphrase_command_supports_reload\toff\tAlso use ssl_passphrase_command during server reload.\nssl_prefer_server_ciphers\ton\tGive priority to server ciphersuite order.\nstandard_conforming_strings\ton\tCauses '...' strings to treat backslashes literally.\nstatement_timeout\t0\tSets the maximum allowed duration of any statement.\nstats_temp_directory\tpg_stat_tmp\tWrites temporary statistics files to the specified directory.\nsuperuser_reserved_connections\t3\tSets the number of connection slots reserved for superusers.\nsynchronize_seqscans\ton\tEnable synchronized sequential scans.\nsynchronous_commit\ton\tSets the current transaction's synchronization level.\nsynchronous_standby_names\t\tNumber of synchronous standbys and list of names of potential synchronous ones.\nsyslog_facility\tnone\tSets the syslog \"facility\" to be used when syslog enabled.\nsyslog_ident\tpostgres\tSets the program name used to identify PostgreSQL messages in syslog.\nsyslog_sequence_numbers\ton\tAdd sequence number to syslog messages to avoid duplicate suppression.\nsyslog_split_messages\ton\tSplit messages sent to syslog by lines and to fit into 1024 bytes.\ntcp_keepalives_count\t0\tMaximum number of TCP keepalive retransmits.\ntcp_keepalives_idle\t-1\tTime between issuing TCP keepalives.\ntcp_keepalives_interval\t-1\tTime between TCP keepalive retransmits.\ntcp_user_timeout\t0\tTCP user timeout.\ntemp_buffers\t8MB\tSets the maximum number of temporary buffers used by each session.\ntemp_file_limit\t-1\tLimits the total size of all temporary files used by each process.\ntemp_tablespaces\t\tSets the tablespace(s) to use for temporary tables and sort files.\nTimeZone\tEurope/London\tSets the time zone for displaying and interpreting time stamps.\ntimezone_abbreviations\tDefault\tSelects a file of time zone abbreviations.\ntrace_notify\toff\tGenerates debugging output for LISTEN and NOTIFY.\ntrace_recovery_messages\tlog\tEnables logging of recovery-related debugging information.\ntrace_sort\toff\tEmit information about resource usage in sorting.\ntrack_activities\ton\tCollects information about executing commands.\ntrack_activity_query_size\t1kB\tSets the size reserved for pg_stat_activity.query, in bytes.\ntrack_commit_timestamp\toff\tCollects transaction commit time.\ntrack_counts\ton\tCollects statistics on database activity.\ntrack_functions\tnone\tCollects function-level statistics on database activity.\ntrack_io_timing\toff\tCollects timing statistics for database I/O activity.\ntransaction_deferrable\toff\tWhether to defer a read-only serializable transaction until it can be executed with no possible serialization failures.\ntransaction_isolation\tread committed\tSets the current transaction's isolation level.\ntransaction_read_only\toff\tSets the current transaction's read-only status.\ntransform_null_equals\toff\tTreats \"expr=NULL\" as \"expr IS NULL\".\nunix_socket_directories\t\tSets the directories where Unix-domain sockets will be created.\nunix_socket_group\t\tSets the owning group of the Unix-domain socket.\nunix_socket_permissions\t0777\tSets the access permissions of the Unix-domain socket.\nupdate_process_title\toff\tUpdates the process title to show the active SQL command.\nvacuum_cleanup_index_scale_factor\t0.1\tNumber of tuple inserts prior to index cleanup as a fraction of reltuples.\nvacuum_cost_delay\t0\tVacuum cost delay in milliseconds.\nvacuum_cost_limit\t200\tVacuum cost amount available before 
napping.\nvacuum_cost_page_dirty\t20\tVacuum cost for a page dirtied by vacuum.\nvacuum_cost_page_hit\t1\tVacuum cost for a page found in the buffer cache.\nvacuum_cost_page_miss\t10\tVacuum cost for a page not found in the buffer cache.\nvacuum_defer_cleanup_age\t0\tNumber of transactions by which VACUUM and HOT cleanup should be deferred, if any.\nvacuum_freeze_min_age\t50000000\tMinimum age at which VACUUM should freeze a table row.\nvacuum_freeze_table_age\t150000000\tAge at which VACUUM should scan whole table to freeze tuples.\nvacuum_multixact_freeze_min_age\t5000000\tMinimum age at which VACUUM should freeze a MultiXactId in a table row.\nvacuum_multixact_freeze_table_age\t150000000\tMultixact age at which VACUUM should scan whole table to freeze tuples.\nwal_block_size\t8192\tShows the block size in the write ahead log.\nwal_buffers\t16MB\tSets the number of disk-page buffers in shared memory for WAL.\nwal_compression\toff\tCompresses full-page writes written in WAL file.\nwal_consistency_checking\t\tSets the WAL resource managers for which WAL consistency checks are done.\nwal_init_zero\ton\tWrites zeroes to new WAL files before first use.\nwal_keep_segments\t0\tSets the number of WAL files held for standby servers.\nwal_level\treplica\tSet the level of information written to the WAL.\nwal_log_hints\toff\tWrites full pages to WAL when first modified after a checkpoint, even for a non-critical modifications.\nwal_receiver_status_interval\t10s\tSets the maximum interval between WAL receiver status reports to the sending server.\nwal_receiver_timeout\t1min\tSets the maximum wait time to receive data from the sending server.\nwal_recycle\ton\tRecycles WAL files by renaming them.\nwal_retrieve_retry_interval\t5s\tSets the time to wait before retrying to retrieve WAL after a failed attempt.\nwal_segment_size\t16MB\tShows the size of write ahead log segments.\nwal_sender_timeout\t1min\tSets the maximum time to wait for WAL replication.\nwal_sync_method\topen_datasync\tSelects the method used for forcing WAL updates to disk.\nwal_writer_delay\t200ms\tTime between WAL flushes performed in the WAL writer.\nwal_writer_flush_after\t1MB\tAmount of WAL written out by WAL writer that triggers a flush.\nwork_mem\t256MB\tSets the maximum memory to be used for query workspaces.\nxmlbinary\tbase64\tSets how binary values are to be encoded in XML.\nxmloption\tcontent\tSets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments.\nzero_damaged_pages\toff\tContinues processing past damaged page headers.", "msg_date": "Thu, 23 Sep 2021 15:00:22 +0200", "msg_from": "Arturas Mazeika <[email protected]>", "msg_from_op": true, "msg_subject": "hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "I believe that this is a planning problem with the number of tables/joins\ninvolved in the query you have written. If you take a look at the\ndefinition of the views in information_schema that you are using and read\nabout from_collapse_limit/join_collapse_limit, you may see that this is a\nbit painful for the planner. It might be cumbersome to use the actual\nsystem tables underneath, but that would certainly lead to much better\nperformance. Otherwise, I would look at perhaps putting the view that has a\nWHERE condition on it as the FROM to encourage the planner to perhaps\nfilter that set first and join the other tables after. 
If that didn't help,\nI might even use a materialized CTE to force the issue.\n\nHopefully a real expert will chime in with a better explanation of the\nchallenges or preferred solution.\n\nI believe that this is a planning problem with the number of tables/joins involved in the query you have written. If you take a look at the definition of the views in information_schema that you are using and read about from_collapse_limit/join_collapse_limit, you may see that this is a bit painful for the planner. It might be cumbersome to use the actual system tables underneath, but that would certainly lead to much better performance. Otherwise, I would look at perhaps putting the view that has a WHERE condition on it as the FROM to encourage the planner to perhaps filter that set first and join the other tables after. If that didn't help, I might even use a materialized CTE to force the issue.Hopefully a real expert will chime in with a better explanation of the challenges or preferred solution.", "msg_date": "Thu, 23 Sep 2021 23:33:58 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "Hi Michael,\n\nThanks for the answer.\n\nI agree that the tables behind the views makes the query processing\nchallenging. What makes it even more challenging to us is that this query\nis generated by a third party library that we use to operationalize the\nschema changes.\n\nI am trying to figure out what went wrong with query planning that\nhashjoins perform worse compared to index/sort joins. It looks to me that\nthis is mostly because (1) the temporal space for creating a hashtable is a\nlot larger compared to sort/index joins and (2) it is *not *that the\npredicted selectivity is way off compared to the actual selectivity. W.r.t\n(1) in almost all cases the IOs needed to do hashing is way bigger compared\nto indexes (see in red if your email client supports html formatting, only\nin one parameter the hash joins \"win\" against the index/sort joins see in\ngreen, and the actual times are always worse, see in blue):\n\n -> Hash Join (cost=415.40..494.06\nrows=263 width=136) (actual time=0.007..0.869 rows=1707 loops=1672)\n Output: c_5.conname,\nc_5.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Hash Cond: (c_5.conrelid\n= r_5.oid)\n Buffers: shared hit=87218\n\nvs. corresponding index/sort join:\n\n -> Nested Loop (cost=0.28..171.05 rows=1\nwidth=136) (actual time=0.024..1.976 rows=595 loops=2)\n Output: c_4.conname,\nc_4.connamespace, r_5.relname, r_5.relnamespace\n Inner Unique: true\n Buffers: shared hit=3674\n\n\nor looking at the global level:\n\nNested Loop (cost=2174.36..13670.47 rows=1 width=320) (actual\ntime=5499.728..26310.137 rows=2 loops=1)\n Output: \"*SELECT* 1\".table_name,\n(a.attname)::information_schema.sql_identifier, \"*SELECT* 1_1\".table_name,\n(a_1.attname)::information_schema.sql_identifier,\n(con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=1961035\n\nvs\n\nNested Loop (cost=1736.10..18890.44 rows=1 width=320) (actual\ntime=30.780..79.572 rows=2 loops=1)\n Output: \"*SELECT* 1\".table_name,\n(a.attname)::information_schema.sql_identifier, \"*SELECT* 1_1\".table_name,\n(a_1.attname)::information_schema.sql_identifier,\n(con.conname)::information_schema.sql_identifier\n Inner Unique: true\n Buffers: shared hit=9018\n\n\nWhich makes me wonder why hash join was chosen at all. 
Looks like a bug\nsomewhere in query optimization.\n\nCheers,\nArturas\n\nOn Fri, Sep 24, 2021 at 7:34 AM Michael Lewis <[email protected]> wrote:\n\n> I believe that this is a planning problem with the number of tables/joins\n> involved in the query you have written. If you take a look at the\n> definition of the views in information_schema that you are using and read\n> about from_collapse_limit/join_collapse_limit, you may see that this is a\n> bit painful for the planner. It might be cumbersome to use the actual\n> system tables underneath, but that would certainly lead to much better\n> performance. Otherwise, I would look at perhaps putting the view that has a\n> WHERE condition on it as the FROM to encourage the planner to perhaps\n> filter that set first and join the other tables after. If that didn't help,\n> I might even use a materialized CTE to force the issue.\n>\n> Hopefully a real expert will chime in with a better explanation of the\n> challenges or preferred solution.\n>\n\nHi Michael,Thanks for the answer.I agree that the tables behind the views makes the query processing challenging. What makes it even more challenging to us is that this query is generated by a third party library that we use to operationalize the schema changes.I am trying to figure out what went wrong with query planning that hashjoins perform worse compared to index/sort joins. It looks to me that this is mostly because (1) the temporal space for creating a hashtable is a lot larger compared to sort/index joins and (2) it is not that the predicted selectivity is way off compared to the actual selectivity. W.r.t (1) in almost all cases the IOs needed to do hashing is way bigger compared to indexes (see in red if your email client supports html formatting, only in one parameter the hash joins \"win\" against the index/sort joins see in green, and the actual times are always worse, see in blue):                                       ->  Hash Join  (cost=415.40..494.06 rows=263 width=136) (actual time=0.007..0.869 rows=1707 loops=1672)                                                  Output: c_5.conname, c_5.connamespace, r_5.relname, r_5.relnamespace                                                  Inner Unique: true                                                  Hash Cond: (c_5.conrelid = r_5.oid)                                                  Buffers: shared hit=87218vs. 
corresponding index/sort join:                               ->  Nested Loop  (cost=0.28..171.05 rows=1 width=136) (actual time=0.024..1.976 rows=595 loops=2)                                      Output: c_4.conname, c_4.connamespace, r_5.relname, r_5.relnamespace                                      Inner Unique: true                                      Buffers: shared hit=3674or looking at the global level:Nested Loop  (cost=2174.36..13670.47 rows=1 width=320) (actual time=5499.728..26310.137 rows=2 loops=1)  Output: \"*SELECT* 1\".table_name, (a.attname)::information_schema.sql_identifier, \"*SELECT* 1_1\".table_name, (a_1.attname)::information_schema.sql_identifier, (con.conname)::information_schema.sql_identifier  Inner Unique: true  Buffers: shared hit=1961035vsNested Loop  (cost=1736.10..18890.44 rows=1 width=320) (actual time=30.780..79.572 rows=2 loops=1)  Output: \"*SELECT* 1\".table_name, (a.attname)::information_schema.sql_identifier, \"*SELECT* 1_1\".table_name, (a_1.attname)::information_schema.sql_identifier, (con.conname)::information_schema.sql_identifier  Inner Unique: true  Buffers: shared hit=9018Which makes me wonder why hash join was chosen at all. Looks like a bug somewhere in query optimization.Cheers,ArturasOn Fri, Sep 24, 2021 at 7:34 AM Michael Lewis <[email protected]> wrote:I believe that this is a planning problem with the number of tables/joins involved in the query you have written. If you take a look at the definition of the views in information_schema that you are using and read about from_collapse_limit/join_collapse_limit, you may see that this is a bit painful for the planner. It might be cumbersome to use the actual system tables underneath, but that would certainly lead to much better performance. Otherwise, I would look at perhaps putting the view that has a WHERE condition on it as the FROM to encourage the planner to perhaps filter that set first and join the other tables after. If that didn't help, I might even use a materialized CTE to force the issue.Hopefully a real expert will chime in with a better explanation of the challenges or preferred solution.", "msg_date": "Mon, 27 Sep 2021 12:09:06 +0200", "msg_from": "Arturas Mazeika <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "I'm unclear what you changed to get the planner to choose one vs the other.\nDid you disable hashjoins? Without the full plan to review, it is tough to\nagre with any conclusion that these particular nodes are troublesome. It\nmight be that this was the right choice for that part of that plan, but\nimproper estimates at a earlier step were problematic.\n\nWhat configs have you changed such as work_mem, random_page_cost, and such?\nIf random_page_cost & seq_page_cost are still default values, then the\nplanner will tend to do more seq scans I believe, and hash them to join\nwith large sets of data, rather than do nested loop index scans. I think\nthat's how that works. With the lack of flexibility to change the query,\nyou might be able to set a few configs for the user that runs these schema\nchecks. If you can find changes that make an overall improvement.\n\n\n*Michael Lewis | Database Engineer*\n*Entrata*\n\n>\n\nI'm unclear what you changed to get the planner to choose one vs the other. Did you disable hashjoins? Without the full plan to review, it is tough to agre with any conclusion that these particular nodes are troublesome. 
It might be that this was the right choice for that part of that plan, but improper estimates at a earlier step were problematic.What configs have you changed such as work_mem, random_page_cost, and such? If random_page_cost & seq_page_cost are still default values, then the planner will tend to do more seq scans I believe, and hash them to join with large sets of data, rather than do nested loop index scans. I think that's how that works. With the lack of flexibility to change the query, you might be able to set a few configs for the user that runs these schema checks. If you can find changes that make an overall improvement.Michael Lewis  |  Database EngineerEntrata", "msg_date": "Mon, 27 Sep 2021 08:12:19 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "I\nHi Michael,\n\nThanks a lot for having a look at the query once again in more detail. In\nshort, you are right, I fired the liquibase scripts and observed the exact\nquery that was hanging in pg_stats_activity. The query was:\n\nSELECT\n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\nC.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\nC.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n= CU.CONSTRAINT_NAME\nINNER JOIN (\n\tSELECT\n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\ni1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\nlower(FK.TABLE_NAME)='secrole_condcollection'\n\nI rerun this query twice. Once with set enable_hashjoin = false; and set\nenable_hashjoin = true; . I observed that the join order was very, very\nsimilar between the hash and index plans. I reran the above two queries\nwith random_page_cost to 2, 1.5, or 1.0 and observed no difference\nwhatsoever, the planner was always choosing the hashjoins over sort/index\nnested loops. the seq_page_cost is set to default value 1. The tables\nbehind the views do not have more than 10K rows, and do not exceed 400KB of\nspace. The work_mem parameter is set to 256MB, effective cache is 9GB, the\nmachine has something around 32-64GB of RAM, SSD as the primary drive, 140\ndefault connections. The query planner, of course thinks that the overall\nnested loop including hashes is better:\n\ncost=2174.36..13670.47 (hash)\n\nvs\n\ncost=1736.10..18890.44 (index/sort join)\n\nbut I think there's a problem there, cause I don't think that one can reuse\nthe pre-computed hashes over and over again, while sort/index joins end up\nhitting the same buffers, or am I wrong?\n\nMore details about the query plans as well as the complete set of settings\ncan be found in the original email at\nhttps://www.postgresql.org/message-id/CAAUL%3DcFcvUo%3D7b4T-K5PqiqrF6etp59qcgv77DyK2Swa4VhYuQ%40mail.gmail.com\n\nIf you could have another look into what's going on, I'd appreciate it a\nlot. 
in postgres 9.6 our setup goes through the liquibase scripts in 5\nminutes, and pg12 with hash joins may take up to 1.5 hours.\n\nCheers,\nArturas\n\nOn Mon, Sep 27, 2021 at 4:12 PM Michael Lewis <[email protected]> wrote:\n\n> I'm unclear what you changed to get the planner to choose one vs the\n> other. Did you disable hashjoins? Without the full plan to review, it is\n> tough to agre with any conclusion that these particular nodes are\n> troublesome. It might be that this was the right choice for that part of\n> that plan, but improper estimates at a earlier step were problematic.\n>\n> What configs have you changed such as work_mem, random_page_cost, and\n> such? If random_page_cost & seq_page_cost are still default values,\n> then the planner will tend to do more seq scans I believe, and hash them to\n> join with large sets of data, rather than do nested loop index scans. I\n> think that's how that works. With the lack of flexibility to change the\n> query, you might be able to set a few configs for the user that runs these\n> schema checks. If you can find changes that make an overall improvement.\n>\n>\n> *Michael Lewis | Database Engineer*\n> *Entrata*\n>\n>>\n\nIHi Michael,Thanks a lot for having a look at the query once again in more detail. In short, you are right, I fired the liquibase scripts and observed the exact query that was hanging in pg_stats_activity. The query was:\nSELECT \n\tFK.TABLE_NAME as \"TABLE_NAME\"\n\t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n\t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n\t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n\t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\" \nFROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C \nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME \nINNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME \nINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME \nINNER JOIN ( \n\tSELECT \n\t\ti1.TABLE_NAME\n\t\t, i2.COLUMN_NAME\n\t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1 \n\t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n\t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY' \n) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE \nlower(FK.TABLE_NAME)='secrole_condcollection'\nI rerun this query twice. Once with \nset enable_hashjoin = false; and \nset enable_hashjoin = true;\n\n. I observed that the join order was very, very similar between the hash and index plans. I reran the above two queries with \nrandom_page_cost to 2, 1.5, or 1.0 and observed no difference whatsoever, the planner was always choosing the hashjoins over sort/index nested loops. the \nseq_page_cost is set to default value 1. The tables behind the views do not have more than 10K rows, and do not exceed 400KB of space. The work_mem parameter is set to 256MB, effective cache is 9GB, the machine has something around 32-64GB of RAM, SSD as the primary drive, 140 default connections. 
The query planner, of course thinks that the overall nested loop including hashes is better:\ncost=2174.36..13670.47 (hash)vs \ncost=1736.10..18890.44 (index/sort join) but I think there's a problem there, cause I don't think that one can reuse the pre-computed hashes over and over again, while sort/index joins end up hitting the same buffers, or am I wrong?More details about the query plans as well as the complete set of settings can be found in the original email at https://www.postgresql.org/message-id/CAAUL%3DcFcvUo%3D7b4T-K5PqiqrF6etp59qcgv77DyK2Swa4VhYuQ%40mail.gmail.com If you could have another look into what's going on, I'd appreciate it a lot. in postgres 9.6 our setup goes through the liquibase scripts in 5 minutes, and pg12 with hash joins may take up to 1.5 hours.Cheers,ArturasOn Mon, Sep 27, 2021 at 4:12 PM Michael Lewis <[email protected]> wrote:I'm unclear what you changed to get the planner to choose one vs the other. Did you disable hashjoins? Without the full plan to review, it is tough to agre with any conclusion that these particular nodes are troublesome. It might be that this was the right choice for that part of that plan, but improper estimates at a earlier step were problematic.What configs have you changed such as work_mem, random_page_cost, and such? If random_page_cost & seq_page_cost are still default values, then the planner will tend to do more seq scans I believe, and hash them to join with large sets of data, rather than do nested loop index scans. I think that's how that works. With the lack of flexibility to change the query, you might be able to set a few configs for the user that runs these schema checks. If you can find changes that make an overall improvement.Michael Lewis  |  Database EngineerEntrata", "msg_date": "Mon, 27 Sep 2021 21:56:12 +0200", "msg_from": "Arturas Mazeika <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "Arturas Mazeika <[email protected]> writes:\n> Thanks a lot for having a look at the query once again in more detail. In\n> short, you are right, I fired the liquibase scripts and observed the exact\n> query that was hanging in pg_stats_activity. The query was:\n\n> SELECT\n> \tFK.TABLE_NAME as \"TABLE_NAME\"\n> \t, CU.COLUMN_NAME as \"COLUMN_NAME\"\n> \t, PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n> \t, PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n> \t, C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\n> FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\n> INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\n> C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\n> INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\n> C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\n> INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n> = CU.CONSTRAINT_NAME\n> INNER JOIN (\n> \tSELECT\n> \t\ti1.TABLE_NAME\n> \t\t, i2.COLUMN_NAME\n> \t\tFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n> \t\tINNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\n> i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n> \t\tWHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n> ) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\n> lower(FK.TABLE_NAME)='secrole_condcollection'\n\nTBH, before worrying about performance you should be worrying about\ncorrectness. 
constraint_name alone is not a sufficient join key\nfor these tables, so who's to say whether you're even getting the\nright answers?\n\nPer SQL spec, the join key to use is probably constraint_catalog\nplus constraint_schema plus constraint_name. You might say you\ndon't need to compare constraint_catalog because that's fixed\nwithin any one Postgres database, and that observation would be\ncorrect. But you can't ignore the schema.\n\nWhat's worse, the SQL-spec join keys are based on the assumption that\nconstraint names are unique within schemas, which is not enforced in\nPostgres. Maybe you're all right here, because you're only looking\nat primary key constraints, which are associated with indexes, which\nbeing relations do indeed have unique-within-schema names. But you\nstill can't ignore the schema.\n\nOn the whole I don't think you're buying anything by going through\nthe SQL-spec information views, because this query is clearly pretty\ndependent on Postgres-specific assumptions even if it looks like it's\nportable. And you're definitely giving up a lot of performance, since\nthose views have so many complications from trying to map the spec's\nview of whats-a-constraint onto the Postgres objects (not to mention\nthe spec's arbitrary opinions about which objects you're allowed to\nsee). This query would be probably be simpler, more correct, and a\nlot faster if rewritten to query the Postgres catalogs directly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Sep 2021 10:13:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" }, { "msg_contents": "Hi Tom,\n\nI agree that the query needs to be first correct, and second fast. I also\nagree that this query works only if there are no duplicates among schemas\n(if one chooses to create a table with the same names and index names and\nconstraint names in a different schema, this would not work). Provided the\nassumptions are correct (what it is on our customer systems), we use\nintermediate liquibase scripts to keep track of our database (schema)\nchanges, those intermediate scripts fire queries as mentioned above, i.e.,\nwe cannot directly influence how the query looks like.\n\nGiven these very hard constraints (i.e., the query is formulated using\ninformation_schema, and not directly) is it possible to assess why the hash\njoins plan is chosen? At the end of the day, the io block hit rate of this\nquery in hash joins is 3-4 orders of magnitude higher compared to\nsort/index joins? Is there anything one can do on the configuration side to\navoid such hash-join pitfalls?\n\nCheers,\nArturas\n\nOn Tue, Sep 28, 2021 at 4:13 PM Tom Lane <[email protected]> wrote:\n\n> Arturas Mazeika <[email protected]> writes:\n> > Thanks a lot for having a look at the query once again in more detail. In\n> > short, you are right, I fired the liquibase scripts and observed the\n> exact\n> > query that was hanging in pg_stats_activity. 
The query was:\n>\n> > SELECT\n> > FK.TABLE_NAME as \"TABLE_NAME\"\n> > , CU.COLUMN_NAME as \"COLUMN_NAME\"\n> > , PK.TABLE_NAME as \"REFERENCED_TABLE_NAME\"\n> > , PT.COLUMN_NAME as \"REFERENCED_COLUMN_NAME\"\n> > , C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\n> > FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\n> > INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\n> > C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\n> > INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\n> > C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\n> > INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n> > = CU.CONSTRAINT_NAME\n> > INNER JOIN (\n> > SELECT\n> > i1.TABLE_NAME\n> > , i2.COLUMN_NAME\n> > FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n> > INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\n> > i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n> > WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n> > ) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\n> > lower(FK.TABLE_NAME)='secrole_condcollection'\n>\n> TBH, before worrying about performance you should be worrying about\n> correctness. constraint_name alone is not a sufficient join key\n> for these tables, so who's to say whether you're even getting the\n> right answers?\n>\n> Per SQL spec, the join key to use is probably constraint_catalog\n> plus constraint_schema plus constraint_name. You might say you\n> don't need to compare constraint_catalog because that's fixed\n> within any one Postgres database, and that observation would be\n> correct. But you can't ignore the schema.\n>\n> What's worse, the SQL-spec join keys are based on the assumption that\n> constraint names are unique within schemas, which is not enforced in\n> Postgres. Maybe you're all right here, because you're only looking\n> at primary key constraints, which are associated with indexes, which\n> being relations do indeed have unique-within-schema names. But you\n> still can't ignore the schema.\n>\n> On the whole I don't think you're buying anything by going through\n> the SQL-spec information views, because this query is clearly pretty\n> dependent on Postgres-specific assumptions even if it looks like it's\n> portable. And you're definitely giving up a lot of performance, since\n> those views have so many complications from trying to map the spec's\n> view of whats-a-constraint onto the Postgres objects (not to mention\n> the spec's arbitrary opinions about which objects you're allowed to\n> see). This query would be probably be simpler, more correct, and a\n> lot faster if rewritten to query the Postgres catalogs directly.\n>\n> regards, tom lane\n>\n\nHi Tom,I agree that the query needs to be first correct, and second fast. I also agree that this query works only if there are no duplicates among schemas (if one chooses to create a table with the same names and index names and constraint names in a different schema, this would not work). Provided the assumptions are correct (what  it is on our customer systems), we use intermediate liquibase scripts to keep track of our database (schema) changes, those intermediate scripts fire queries as mentioned above, i.e., we cannot directly influence how the query looks like.Given these very hard constraints (i.e., the query is formulated using information_schema, and not directly) is it possible to assess why the hash joins plan is chosen? At the end of the day, the io block hit rate of this query in hash joins is 3-4 orders of magnitude higher compared to sort/index joins? 
Is there anything one can do on the configuration side to avoid such hash-join pitfalls? Cheers,ArturasOn Tue, Sep 28, 2021 at 4:13 PM Tom Lane <[email protected]> wrote:Arturas Mazeika <[email protected]> writes:\n> Thanks a lot for having a look at the query once again in more detail. In\n> short, you are right, I fired the liquibase scripts and observed the exact\n> query that was hanging in pg_stats_activity. The query was:\n\n> SELECT\n>       FK.TABLE_NAME       as \"TABLE_NAME\"\n>       , CU.COLUMN_NAME    as \"COLUMN_NAME\"\n>       , PK.TABLE_NAME     as \"REFERENCED_TABLE_NAME\"\n>       , PT.COLUMN_NAME    as \"REFERENCED_COLUMN_NAME\"\n>       , C.CONSTRAINT_NAME as \"CONSTRAINT_NAME\"\n> FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C\n> INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON\n> C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\n> INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON\n> C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME\n> INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME\n> = CU.CONSTRAINT_NAME\n> INNER JOIN (\n>       SELECT\n>               i1.TABLE_NAME\n>               , i2.COLUMN_NAME\n>               FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1\n>               INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON\n> i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME\n>               WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'\n> ) PT ON PT.TABLE_NAME = PK.TABLE_NAME WHERE\n> lower(FK.TABLE_NAME)='secrole_condcollection'\n\nTBH, before worrying about performance you should be worrying about\ncorrectness.  constraint_name alone is not a sufficient join key\nfor these tables, so who's to say whether you're even getting the\nright answers?\n\nPer SQL spec, the join key to use is probably constraint_catalog\nplus constraint_schema plus constraint_name.  You might say you\ndon't need to compare constraint_catalog because that's fixed\nwithin any one Postgres database, and that observation would be\ncorrect.  But you can't ignore the schema.\n\nWhat's worse, the SQL-spec join keys are based on the assumption that\nconstraint names are unique within schemas, which is not enforced in\nPostgres.  Maybe you're all right here, because you're only looking\nat primary key constraints, which are associated with indexes, which\nbeing relations do indeed have unique-within-schema names.  But you\nstill can't ignore the schema.\n\nOn the whole I don't think you're buying anything by going through\nthe SQL-spec information views, because this query is clearly pretty\ndependent on Postgres-specific assumptions even if it looks like it's\nportable.  And you're definitely giving up a lot of performance, since\nthose views have so many complications from trying to map the spec's\nview of whats-a-constraint onto the Postgres objects (not to mention\nthe spec's arbitrary opinions about which objects you're allowed to\nsee).  This query would be probably be simpler, more correct, and a\nlot faster if rewritten to query the Postgres catalogs directly.\n\n                        regards, tom lane", "msg_date": "Wed, 29 Sep 2021 15:05:36 +0200", "msg_from": "Arturas Mazeika <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoins, index loops to retrieve pk/ux constrains in pg12" } ]
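For anyone hitting the same problem who can change the generated SQL, a rough sketch of the pg_catalog rewrite Tom suggests is below. The table filter and the output aliases come from the query quoted above; the rest (notably the per-column pairing via unnest and the absence of a schema filter) is an assumption, not a drop-in replacement for the liquibase-generated statement:

SELECT fk_tbl.relname AS "TABLE_NAME",
       fk_col.attname AS "COLUMN_NAME",
       pk_tbl.relname AS "REFERENCED_TABLE_NAME",
       pk_col.attname AS "REFERENCED_COLUMN_NAME",
       con.conname    AS "CONSTRAINT_NAME"
FROM pg_constraint con
JOIN pg_class fk_tbl ON fk_tbl.oid = con.conrelid             -- referencing table
JOIN pg_class pk_tbl ON pk_tbl.oid = con.confrelid            -- referenced table
JOIN LATERAL unnest(con.conkey, con.confkey)
          AS k(attnum, refattnum) ON true                     -- pair FK columns with referenced columns
JOIN pg_attribute fk_col ON fk_col.attrelid = con.conrelid  AND fk_col.attnum = k.attnum
JOIN pg_attribute pk_col ON pk_col.attrelid = con.confrelid AND pk_col.attnum = k.refattnum
WHERE con.contype = 'f'                                       -- foreign-key constraints only
  AND lower(fk_tbl.relname) = 'secrole_condcollection';

Because this goes straight to pg_constraint it does not rely on constraint names being unique across schemas, and it skips the information_schema view machinery entirely; add a join to pg_namespace and a relnamespace filter if more than one schema is in play.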
[ { "msg_contents": "At Orcid we're trying to upgrade our Postgres database (10 to 13) using\npg_logical for no downtime. The problem we have is how long the initial\ncopy is taking for the ~500GB database. If it takes say 20days to complete,\nwill we need to have 20days of WAL files to start catching up when it's\ncomplete?\n\nI read an earlier thread which pointed me to the tool\npglogical_create_subscriber which does a pg_basebackup to start the initial\nreplication but this is only going to be useful for logical clusters on the\nsame version.\n\nI had hoped that the COPY could be parallelized more by\n\"max_sync_workers_per_subscription\" which is set to 2. However there's only\na single process:-\n\npostgres 1022196 6.0 24.5 588340 491564 ? Ds Sep22 193:19\npostgres: main: xxx xxxx 10.xx.xx.xx(59144) COPY\n\nOne of the best resources I've found of real world examples are thead on\ngitlabs own gitlab about their Postgres migrations. They discussed one\nmethod that might work:-\n\n1. Setup 9.6 secondary via streaming\n2. Turn physical secondary into logical secondary\n3. Shutdown and upgrade secondary to 10\n4. Turn secondary back on.\n\nIn which case we would only need the time required to perform the upgrade.\n\n-- \nGiles Westwood\nSenior Devops Engineer, ORCID\n\nAt Orcid we're trying to upgrade our Postgres database (10 to 13) using pg_logical for no downtime. The problem we have is how long the initial copy is taking for the ~500GB database. If it takes say 20days to complete, will we need to have 20days of WAL files to start catching up when it's complete?I read an earlier thread which pointed me to the tool pglogical_create_subscriber which does a pg_basebackup to start the initial replication but this is only going to be useful for logical clusters on the same version.I had hoped that the COPY could be parallelized more by \"max_sync_workers_per_subscription\" which is set to 2. However there's only a single process:-postgres 1022196  6.0 24.5 588340 491564 ?       Ds   Sep22 193:19 postgres: main: xxx xxxx 10.xx.xx.xx(59144) COPYOne of the best resources I've found of real world examples are thead on gitlabs own gitlab about their Postgres migrations. They discussed one method that might work:-1. Setup 9.6 secondary via streaming2. Turn physical secondary into logical secondary3. Shutdown and upgrade secondary to 104. Turn secondary back on.In which case we would only need the time required to perform the upgrade.  -- Giles WestwoodSenior Devops Engineer, ORCID", "msg_date": "Fri, 24 Sep 2021 15:28:50 +0100", "msg_from": "\"Westwood, Giles\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance for initial copy when using pg_logical to upgrade\n Postgres" }, { "msg_contents": "On Fri, Sep 24, 2021 at 03:28:50PM +0100, Westwood, Giles wrote:\n> At Orcid we're trying to upgrade our Postgres database (10 to 13) using\n> pg_logical for no downtime. The problem we have is how long the initial\n> copy is taking for the ~500GB database. 
If it takes say 20days to complete,\n> will we need to have 20days of WAL files to start catching up when it's\n> complete?\n\nDid you see this thread and its suggestions to 1) set bulk load parameters;\nand, 2) drop indexes and FKs ?\n\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 24 Sep 2021 09:39:35 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for initial copy when using pg_logical to upgrade\n Postgres" }, { "msg_contents": "On Fri, Sep 24, 2021 at 3:39 PM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Sep 24, 2021 at 03:28:50PM +0100, Westwood, Giles wrote:\n>\n> Did you see this thread and its suggestions to 1) set bulk load parameters;\n> and, 2) drop indexes and FKs ?\n>\n>\n> https://www.postgresql.org/message-id/flat/[email protected]\n>\n>\nI did actually but I wanted to avoid getting my hands dirty with anything\nschema wise. I've found another person with another similar situation:-\n\nhttps://github.com/2ndQuadrant/pglogical/issues/325\n\nOn Fri, Sep 24, 2021 at 3:39 PM Justin Pryzby <[email protected]> wrote:On Fri, Sep 24, 2021 at 03:28:50PM +0100, Westwood, Giles wrote:\nDid you see this thread and its suggestions to 1) set bulk load parameters;\nand, 2) drop indexes and FKs ?\n\nhttps://www.postgresql.org/message-id/flat/[email protected]\nI did actually but I wanted to avoid getting my hands dirty with anything schema wise. I've found another person with another similar situation:-https://github.com/2ndQuadrant/pglogical/issues/325", "msg_date": "Fri, 24 Sep 2021 16:48:45 +0100", "msg_from": "\"Westwood, Giles\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance for initial copy when using pg_logical to upgrade\n Postgres" }, { "msg_contents": "\nOn 9/24/21 10:28 AM, Westwood, Giles wrote:\n> At Orcid we're trying to upgrade our Postgres database (10 to 13)\n> using pg_logical for no downtime. The problem we have is how long the\n> initial copy is taking for the ~500GB database. If it takes say 20days\n> to complete, will we need to have 20days of WAL files to start\n> catching up when it's complete?\n>\n> I read an earlier thread which pointed me to the tool\n> pglogical_create_subscriber which does a pg_basebackup to start the\n> initial replication but this is only going to be useful for logical\n> clusters on the same version.\n>\n> I had hoped that the COPY could be parallelized more by\n> \"max_sync_workers_per_subscription\" which is set to 2. However there's\n> only a single process:-\n>\n> postgres 1022196  6.0 24.5 588340 491564 ?       Ds   Sep22 193:19\n> postgres: main: xxx xxxx 10.xx.xx.xx(59144) COPY\n>\n> One of the best resources I've found of real world examples are thead\n> on gitlabs own gitlab about their Postgres migrations. They discussed\n> one method that might work:-\n>\n> 1. Setup 9.6 secondary via streaming\n> 2. Turn physical secondary into logical secondary\n> 3. Shutdown and upgrade secondary to 10\n> 4. 
Turn secondary back on.\n>\n> In which case we would only need the time required to perform the upgrade.\n\n\nIf you're using the pglogical extension, the best way is often to create\nthe replica as a physical replica (using pg_basebackup for example), and\nthen using the extension's utility program pglogical_create_subscriber\nto convert the physical replica to a logical replica, which you then\nupgrade and switch over to.\n\n\nOf course, test it out before doing this for real.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Sep 2021 12:00:42 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for initial copy when using pg_logical to upgrade\n Postgres" }, { "msg_contents": "I'm currently doing this with a 2.2 TB database.\n\nBest way IMO is to (UPDATE pg_index SET indisready = false ... ) for non PK\nindexes for the largest tables. Then just set it back to indisready = true\nafter its done and run a REINDEX CONCURRENTLY on the indexes that were\ndisabled.\n\nGot about a transfer speed of 100GB per 50 minutes with this method with\nconsistent results.\n\nOn Fri, Sep 24, 2021 at 11:49 AM Westwood, Giles <[email protected]>\nwrote:\n\n>\n>\n>\n>\n> On Fri, Sep 24, 2021 at 3:39 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Fri, Sep 24, 2021 at 03:28:50PM +0100, Westwood, Giles wrote:\n>>\n>> Did you see this thread and its suggestions to 1) set bulk load\n>> parameters;\n>> and, 2) drop indexes and FKs ?\n>>\n>>\n>> https://www.postgresql.org/message-id/flat/[email protected]\n>>\n>>\n> I did actually but I wanted to avoid getting my hands dirty with anything\n> schema wise. I've found another person with another similar situation:-\n>\n> https://github.com/2ndQuadrant/pglogical/issues/325\n>\n>\n\nI'm currently doing this with a 2.2 TB database. Best way IMO is to (UPDATE pg_index SET indisready = false ... ) for non PK indexes for the largest tables. Then just set it back to indisready = true after its done and run a REINDEX CONCURRENTLY on the indexes that were disabled.Got about a transfer speed of 100GB per 50 minutes with this method with consistent results.On Fri, Sep 24, 2021 at 11:49 AM Westwood, Giles <[email protected]> wrote:On Fri, Sep 24, 2021 at 3:39 PM Justin Pryzby <[email protected]> wrote:On Fri, Sep 24, 2021 at 03:28:50PM +0100, Westwood, Giles wrote:\nDid you see this thread and its suggestions to 1) set bulk load parameters;\nand, 2) drop indexes and FKs ?\n\nhttps://www.postgresql.org/message-id/flat/[email protected]\nI did actually but I wanted to avoid getting my hands dirty with anything schema wise. I've found another person with another similar situation:-https://github.com/2ndQuadrant/pglogical/issues/325", "msg_date": "Fri, 24 Sep 2021 12:02:31 -0400", "msg_from": "Tim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for initial copy when using pg_logical to upgrade\n Postgres" }, { "msg_contents": "On Fri, Sep 24, 2021 at 5:02 PM Tim <[email protected]> wrote:\n\n> I'm currently doing this with a 2.2 TB database.\n>\n> Best way IMO is to (UPDATE pg_index SET indisready = false ... ) for non\n> PK indexes for the largest tables. Then just set it back to indisready =\n> true after its done and run a REINDEX CONCURRENTLY on the indexes that were\n> disabled.\n>\n> Got about a transfer speed of 100GB per 50 minutes with this method with\n> consistent results.\n>\n\nThanks Tim, that has worked great. 
I'm trying to automate the whole process\nbut I can't see a way of seeing when the initial pglogical copy is complete\nshort of checking the disk space.\n\nAll I've found is:-\n\nselect * from pglogical.local_sync_status;\n sync_kind | sync_subid | sync_nspname | sync_relname | sync_status |\nsync_statuslsn\n-----------+------------+--------------+--------------+-------------+----------------\n d | 1821676733 | | | d | 0/0\n(1 row)\n\nor\n\nxxx=# select * from pg_stat_replication ;\n-[ RECORD 1 ]----+--------------------------------\npid | 3469521\nusesysid | 77668435\nusename | xxx\napplication_name | xxxx_snap\nclient_addr | 10.44.16.83\nclient_hostname |\nclient_port | 52594\nbackend_start | 2021-10-27 12:51:17.618734+00\nbackend_xmin | 221892481\nstate | startup\nsent_lsn |\nwrite_lsn |\nflush_lsn |\nreplay_lsn |\nwrite_lag |\nflush_lag |\nreplay_lag |\nsync_priority | 0\nsync_state | async\n\nOn Fri, Sep 24, 2021 at 5:02 PM Tim <[email protected]> wrote:I'm currently doing this with a 2.2 TB database. Best way IMO is to (UPDATE pg_index SET indisready = false ... ) for non PK indexes for the largest tables. Then just set it back to indisready = true after its done and run a REINDEX CONCURRENTLY on the indexes that were disabled.Got about a transfer speed of 100GB per 50 minutes with this method with consistent results.Thanks Tim, that has worked great. I'm trying to automate the whole process but I can't see a way of seeing when the initial pglogical copy is complete short of checking the disk space.All I've found is:-select * from pglogical.local_sync_status; sync_kind | sync_subid | sync_nspname | sync_relname | sync_status | sync_statuslsn-----------+------------+--------------+--------------+-------------+---------------- d         | 1821676733 |              |              | d           | 0/0(1 row)orxxx=# select * from pg_stat_replication ;-[ RECORD 1 ]----+--------------------------------pid              | 3469521usesysid         | 77668435usename          | xxxapplication_name | xxxx_snapclient_addr      | 10.44.16.83client_hostname  |client_port      | 52594backend_start    | 2021-10-27 12:51:17.618734+00backend_xmin     | 221892481state            | startupsent_lsn         |write_lsn        |flush_lsn        |replay_lsn       |write_lag        |flush_lag        |replay_lag       |sync_priority    | 0sync_state       | async", "msg_date": "Wed, 27 Oct 2021 14:39:40 +0100", "msg_from": "\"Westwood, Giles\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance for initial copy when using pg_logical to upgrade\n Postgres" } ]
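For reference, a minimal sketch of the pg_index trick Tim describes, with a hypothetical table name. It needs superuser rights, and editing catalog flags by hand is unsupported, so treat it as something to rehearse on a throwaway copy of the subscriber first:

-- before the initial COPY: stop maintaining the non-PK indexes on a large table
UPDATE pg_index
SET indisready = false
WHERE indrelid = 'public.big_table'::regclass   -- hypothetical table name
  AND NOT indisprimary;

-- after the copy has finished: mark them ready again, then rebuild them,
-- because they missed every row inserted while indisready was false
UPDATE pg_index
SET indisready = true
WHERE indrelid = 'public.big_table'::regclass
  AND NOT indisprimary;

REINDEX INDEX CONCURRENTLY public.big_table_some_idx;  -- repeat for each disabled index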
[ { "msg_contents": "I have run into the following issue: A table contains an enum column, \nand a partial unique index is available on the table.\nThis index contains exactly the row I am querying for. Unfortunately the \nindex is not always used, and I don't really understand why.\n\nThe attachments enumTest.sql shows the script reproducing the behaviour, \nand the enumTest.log shows the result when running on PostgreSQL 13.4.\nThere doesn't seem to be any difference from PG11 through 14-RC1.\n\nFirst off I tried to do a simple test to see if the index was being used:\n\nEXPLAIN (analyze, costs, buffers, verbose) SELECT val FROM \ntable_test_enum WHERE val = 'Ole' and dat IS NULL;\n\t\t\t\t\t\t QUERY PLAN\n------------------------------------------------------------------------\n Index Only Scan using table_test_enum_val_idx on \npublic.table_test_enum (cost=0.12..4.14 rows=1 width=4) (actual \ntime=0.014..0.016 rows=1 loops=1)\n Output: val\n Heap Fetches: 0\n Planning Time: 0.436 ms\n Execution Time: 0.048 ms\n(5 rows)\n\nAll is fine, but in my application the query is executed as a prepared \nstatement, using a varchar parameter:\n\nPREPARE qry1(varchar) AS SELECT val FROM table_test_enum WHERE val = \n$1::type_table_test_enum AND dat IS NULL;\nEXPLAIN (analyze, costs, buffers, verbose) EXECUTE qry1('Ole');\n QUERY PLAN\n----------------------------------------------------------------------\n Seq Scan on public.table_test_enum (cost=0.00..66.52 rows=1 width=4) \n(actual time=1.131..1.133 rows=1 loops=1)\n Output: val\n Filter: ((table_test_enum.dat IS NULL) AND (table_test_enum.val = \n('Ole'::cstring)::type_table_test_enum))\n Rows Removed by Filter: 3000\n Planning Time: 0.261 ms\n Execution Time: 1.162 ms\n(6 rows)\n\nTo my surprise the planner decides not to use the index. This is the \npart I do not understand. Why is the result different here?\nThere is obviously a cast that happens before the equality, does the \ncstring cast have anything to do with this? Hints are welcome!\n\nSo I tried to prepare a statement with a parameter of type \ntype_table_test_enum instead, unsurprisingly, this works fine. No \nmentioning of cstring in the plan.\nI also tried to use a parameter of unknown type, which I would think \nwould be analogous to the first statement with the literal 'Ole', and \nthat looks fine.\nSo why is the varchar version not using the index?\nAny thoughs on this?\n\n\tRegards,\n\t\tKim Johan Andersson", "msg_date": "Mon, 27 Sep 2021 22:02:49 +0200", "msg_from": "Kim Johan Andersson <[email protected]>", "msg_from_op": true, "msg_subject": "Partial index on enum type is not being used, type issue?" }, { "msg_contents": "Kim Johan Andersson <[email protected]> writes:\n> [ uses partial index: ]\n> EXPLAIN (analyze, costs, buffers, verbose) SELECT val FROM \n> table_test_enum WHERE val = 'Ole' and dat IS NULL;\n> \n> [ doesn't: ]\n> PREPARE qry1(varchar) AS SELECT val FROM table_test_enum WHERE val = \n> $1::type_table_test_enum AND dat IS NULL;\n\nThere's no actual cast from varchar to that enum type. 
The system\nis letting you get away with it anyway, by applying what's called a\nCoerceViaIO cast --- which means convert the varchar to a simple\nstring (cstring) and then apply enum_in().\n\nUnfortunately for you, enum_in() is marked stable not immutable\n(probably on the grounds that it depends on catalog contents) so the\nexpression isn't reduced to a plain constant during constant-folding\nand thus fails to match the partial index's WHERE clause.\n\nIn the first case, 'Ole' is taken as a constant of type\ntype_table_test_enum right off the bat, as was the same constant\nin the index's WHERE clause, so everything matches fine.\n(This seems a little inconsistent now that I think about it ---\nif it's okay to fold the literal to an enum constant at parse time,\nwhy can't we do the equivalent at plan time? But these rules have\nstood for a good while so I'm hesitant to change them.)\n\nAnyway, the recommendable solution is the one you already found:\ndeclare the PREPARE's argument as type_table_test_enum not varchar.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Sep 2021 16:46:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index on enum type is not being used, type issue?" } ]
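The fix Tom recommends looks like this in practice (names as in the test case above); with the parameter declared as the enum type there is no CoerceViaIO step, so the expression can match the partial index's WHERE clause:

PREPARE qry2(type_table_test_enum) AS
SELECT val
FROM table_test_enum
WHERE val = $1
  AND dat IS NULL;

EXECUTE qry2('Ole');   -- should use table_test_enum_val_idx, like the plain literal query

Note that adding an explicit $1::type_table_test_enum cast does not help when the parameter itself is declared varchar; that is exactly the form that failed above.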
[ { "msg_contents": "Hello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. I've re-created and re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.\n\nPostgres 13\n\n\"QUERY PLAN\"\n\"Limit (cost=1.13..26855.48 rows=30 width=137) (actual time=10886.585..429803.463 rows=4 loops=1)\"\n\" -> Nested Loop (cost=1.13..19531164.71 rows=21819 width=137) (actual time=10886.584..429803.457 rows=4 loops=1)\"\n\" Join Filter: (h.ult_eve_id = ev.evento_id)\"\n\" Rows Removed by Join Filter: 252\"\n\" -> Nested Loop (cost=1.13..19457514.32 rows=21819 width=62) (actual time=10886.326..429803.027 rows=4 loops=1)\"\n\" -> Nested Loop (cost=0.85..19450780.70 rows=21819 width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\n\" -> Index Scan Backward using hawbs_pkey on hawbs h (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)\"\n\" Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))\"\n\" Rows Removed by Filter: 239188096\"\n\" -> Index Scan using empresas_pkey on empresas e (cost=0.28..0.30 rows=1 width=17) (actual time=0.028..0.028 rows=1 loops=4)\"\n\" Index Cond: (empresa_id = h.cliente_id)\"\n\" -> Index Scan using contratos_pkey on contratos c (cost=0.28..0.31 rows=1 width=15) (actual time=0.014..0.014 rows=1 loops=4)\"\n\" Index Cond: (ctt_id = h.ctt_id)\"\n\" -> Materialize (cost=0.00..7.23 rows=215 width=27) (actual time=0.009..0.025 rows=64 loops=4)\"\n\" -> Seq Scan on eventos ev (cost=0.00..6.15 rows=215 width=27) (actual time=0.029..0.066 rows=67 loops=1)\"\n\"Planning Time: 11.690 ms\"\n\"Execution Time: 429803.611 ms\"\n\n\nPostgres 10\n\n\"QUERY PLAN\"\n\"Limit (cost=28489.06..28494.39 rows=30 width=137) (actual time=211.568..211.581 rows=4 loops=1)\"\n\" -> Result (cost=28489.06..32296.61 rows=21451 width=137) (actual time=211.566..211.578 rows=4 loops=1)\"\n\" -> Sort (cost=28489.06..28542.69 rows=21451 width=105) (actual time=211.548..211.551 rows=4 loops=1)\"\n\" Sort Key: h.hawb_id DESC\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" -> Hash Join (cost=2428.77..27855.52 rows=21451 width=105) (actual time=211.520..211.537 rows=4 loops=1)\"\n\" Hash Cond: (h.ult_eve_id = ev.evento_id)\"\n\" -> Hash Join (cost=2419.93..27735.63 rows=21451 width=62) (actual time=211.315..211.329 rows=4 loops=1)\"\n\" Hash Cond: (h.ctt_id = c.ctt_id)\"\n\" -> Hash Join (cost=2085.82..27345.18 rows=21451 width=55) (actual time=206.516..206.529 rows=4 loops=1)\"\n\" Hash Cond: (h.cliente_id = e.empresa_id)\"\n\" -> Bitmap Heap Scan on hawbs h (cost=1058.34..26261.32 rows=21451 width=46) (actual time=201.956..201.966 rows=4 loops=1)\"\n\" Recheck Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n\" Filter: (tipo_hawb_id = ANY ('{1,10,3}'::integer[]))\"\n\" Heap Blocks: exact=4\"\n\" -> Bitmap Index Scan on idx_nome_des (cost=0.00..1052.98 rows=22623 width=0) (actual time=201.942..201.943 rows=4 loops=1)\"\n\" Index Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n\" -> Hash (cost=982.77..982.77 rows=3577 width=17) (actual time=4.542..4.542 
rows=3577 loops=1)\"\n\" Buckets: 4096 Batches: 1 Memory Usage: 211kB\"\n\" -> Seq Scan on empresas e (cost=0.00..982.77 rows=3577 width=17) (actual time=0.007..3.189 rows=3577 loops=1)\"\n\" -> Hash (cost=255.16..255.16 rows=6316 width=15) (actual time=4.777..4.777 rows=6316 loops=1)\"\n\" Buckets: 8192 Batches: 1 Memory Usage: 361kB\"\n\" -> Seq Scan on contratos c (cost=0.00..255.16 rows=6316 width=15) (actual time=0.006..2.420 rows=6316 loops=1)\"\n\" -> Hash (cost=6.15..6.15 rows=215 width=27) (actual time=0.186..0.186 rows=215 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 21kB\"\n\" -> Seq Scan on eventos ev (cost=0.00..6.15 rows=215 width=27) (actual time=0.008..0.103 rows=215 loops=1)\"\n\"Planning time: 2.267 ms\"\n\"Execution time: 211.776 ms\"\n\nComand:\n\nexplain analyse\nselect*\nfrom hawbs h\ninner join empresas e on h.cliente_id = e.empresa_id\ninner join contratos c on h.ctt_id = c.ctt_id\ninner join eventos ev on h.ult_eve_id = ev.evento_id\nwhere h.nome_des ilike '%STEPHANY STOEW LEANDRO%'\nand h.tipo_hawb_id in (1,10,3) order by h.hawb_id desc limit 30;\n\n[cid:flashcourier_6bc44896-f19b-4119-a728-f70d866e7cdd.png]\n\nDaniel Diniz\nDesenvolvimento\n\nCel.: 11981464923\n\n\nwww.flashcourier.com.br<http://www.flashcourier.com.br>\n\n[cid:SocialLink_Facebook_32x32_11ddcb69-c640-49e0-88b2-e1038ba38ffa.png]<https://www.facebook.com/flashcourieroficial> [cid:SocialLink_Instagram_32x32_1a56219d-8d68-474a-8e29-0e536d8241d4.png] <https://www.instagram.com/flashcourieroficial> [cid:SocialLink_Linkedin_32x32_6bb30d6c-bdcd-4446-ace2-9daa6eeb5e16.png] <https://www.linkedin.com/company/flashcourieroficial>\n\n#SomosTodosFlash #GrupoMOVE3\n [cid:QR6626a761-6a9c-427c-bd7e-09a0ad5c91a8.png]\n\n[cid:whatsappimage2021-08-31at18.36.01_c1d35f2c-7adc-42cd-98ff-acbc6b165aa6.jpeg]<https://premio.reclameaqui.com.br/votacao>\n\n\"Esta mensagem e seus anexos são dirigidos exclusivamente para os seus destinatários, podendo conter informação confidencial e/ou legalmente privilegiada. Se você não for destinatário desta mensagem, não deve revelar, copiar, distribuir ou de qualquer forma utilizá-la. A empresa não se responsabiliza por alterações no conteúdo desta mensagem depois do seu envio.\"", "msg_date": "Tue, 28 Sep 2021 15:39:52 +0000", "msg_from": "Daniel Diniz <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with indices from 10 to 13" }, { "msg_contents": "Em ter., 28 de set. de 2021 às 12:40, Daniel Diniz <\[email protected]> escreveu:\n\n> Hello I migrated from postgres 10 to 13 and I noticed that there was a big\n> increase in a querie that I use, I did explain in 10 and 13 and the\n> difference is absurd, the indices and data are the same in 2. 
I've\n> re-created and re-indexed but I don't know what changed from 10 to 13 which\n> made the performance so bad, I don't know if it needs some extra parameter\n> in some conf on 13.\n>\n> Postgres 13\n>\n> \"QUERY PLAN\"\n> \"Limit (cost=1.13..26855.48 rows=30 width=137) (actual\n> time=10886.585..429803.463 rows=4 loops=1)\"\n> \" -> Nested Loop (cost=1.13..19531164.71 rows=21819 width=137) (actual\n> time=10886.584..429803.457 rows=4 loops=1)\"\n> \" Join Filter: (h.ult_eve_id = ev.evento_id)\"\n> \" Rows Removed by Join Filter: 252\"\n> \" -> Nested Loop (cost=1.13..19457514.32 rows=21819 width=62)\n> (actual time=10886.326..429803.027 rows=4 loops=1)\"\n> \" -> Nested Loop (cost=0.85..19450780.70 rows=21819\n> width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\n> \" -> Index Scan Backward using hawbs_pkey on hawbs h\n> (cost=0.57..19444209.67 rows=21819 width=46) (actual\n> time=10886.119..429802.676 rows=4 loops=1)\"\n> \" Filter: ((tipo_hawb_id = ANY\n> ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~*\n> convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea,\n> 'LATIN1'::name)))\"\n> \" Rows Removed by Filter: 239188096\"\n>\nIndex Scan Backward looks suspicious to me.\n239,188,096 rows removed by filter it's a lot of work.\n\nDo you, run analyze?\n\nregards,\nRanier Vilela\n\nEm ter., 28 de set. de 2021 às 12:40, Daniel Diniz <[email protected]> escreveu:\n\n\nHello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. I've re-created and re-indexed but I don't know what\n changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.\n\n\n\n\n\nPostgres 13\n\n\n\"QUERY PLAN\"\n\"Limit  (cost=1.13..26855.48 rows=30 width=137) (actual time=10886.585..429803.463 rows=4 loops=1)\"\n\"  ->  Nested Loop  (cost=1.13..19531164.71 rows=21819 width=137) (actual time=10886.584..429803.457 rows=4 loops=1)\"\n\"        Join Filter: (h.ult_eve_id = ev.evento_id)\"\n\"        Rows Removed by Join Filter: 252\"\n\"        ->  Nested Loop  (cost=1.13..19457514.32 rows=21819 width=62) (actual time=10886.326..429803.027 rows=4 loops=1)\"\n\"              ->  Nested Loop  (cost=0.85..19450780.70 rows=21819 width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\n\"                    ->  Index Scan Backward using hawbs_pkey on hawbs h  (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)\"\n\"                          Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))\"\n\"                          Rows Removed by Filter: 239188096\"\nIndex Scan Backward looks suspicious to me.\n239,188,096  rows removed by filter it's a lot of work.Do you, run analyze?regards,Ranier Vilela", "msg_date": "Tue, 28 Sep 2021 14:27:01 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" }, { "msg_contents": "Daniel Diniz <[email protected]> writes:\n> Hello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. 
I've re-created and re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.\n\nThis complaint is missing an awful lot of supporting information.\n\n> \" -> Bitmap Heap Scan on hawbs h (cost=1058.34..26261.32 rows=21451 width=46) (actual time=201.956..201.966 rows=4 loops=1)\"\n> \" Recheck Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n> \" Filter: (tipo_hawb_id = ANY ('{1,10,3}'::integer[]))\"\n> \" Heap Blocks: exact=4\"\n> \" -> Bitmap Index Scan on idx_nome_des (cost=0.00..1052.98 rows=22623 width=0) (actual time=201.942..201.943 rows=4 loops=1)\"\n> \" Index Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n\nFor starters, how in the world did you get that query condition out of\n\n> where h.nome_des ilike '%STEPHANY STOEW LEANDRO%'\n\n? What data type is h.nome_des, anyway? And what kind of index\nis that --- it couldn't be a plain btree, because we wouldn't consider\n~~* to be indexable by a btree.\n\nHowever, the long and the short of it is that this rowcount estimate\nis off by nearly four orders of magnitude (21451 estimated vs. 4\nactual is pretty awful). It's probably just luck that you got an\nacceptable plan out of v10, and bad luck that you didn't get one\nout of v13 --- v13's estimate is not better, but it's not much\nworse either. You need to do something about improving that\nestimate if you'd like reliable query planning. Since I'm not\ntoo sure which operator you're actually invoking, it's hard to\noffer good advice about how hard that might be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Sep 2021 13:45:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" }, { "msg_contents": "Tom,\n\nThe index I use is the GIN. I've been using it for about 2 years in 10 it always gave me an almost immediate response with ilike.\nBut testing on 13 I don't know why it takes I already redid the index and reindexed but without significant improvement from 10 seconds to minutes or even hour on 13. The brtree indices has the same behavior only that I have GIN q this occurs.\n\nName de index : \"idx_nome_des\" gin (nome_des)\n\n\n[cid:flashcourier_6bc44896-f19b-4119-a728-f70d866e7cdd.png]\n\nDaniel Diniz\nDesenvolvimento\n\nCel.: 11981464923\n\n\nwww.flashcourier.com.br<http://www.flashcourier.com.br>\n\n[cid:SocialLink_Facebook_32x32_11ddcb69-c640-49e0-88b2-e1038ba38ffa.png]<https://www.facebook.com/flashcourieroficial> [cid:SocialLink_Instagram_32x32_1a56219d-8d68-474a-8e29-0e536d8241d4.png] <https://www.instagram.com/flashcourieroficial> [cid:SocialLink_Linkedin_32x32_6bb30d6c-bdcd-4446-ace2-9daa6eeb5e16.png] <https://www.linkedin.com/company/flashcourieroficial>\n\n#SomosTodosFlash #GrupoMOVE3\n [cid:QRf774d402-83ce-43c8-af9e-d16f09c62123.png]\n\n[cid:whatsappimage2021-08-31at18.36.01_c1d35f2c-7adc-42cd-98ff-acbc6b165aa6.jpeg]<https://premio.reclameaqui.com.br/votacao>\n\n\"Esta mensagem e seus anexos são dirigidos exclusivamente para os seus destinatários, podendo conter informação confidencial e/ou legalmente privilegiada. Se você não for destinatário desta mensagem, não deve revelar, copiar, distribuir ou de qualquer forma utilizá-la. 
A empresa não se responsabiliza por alterações no conteúdo desta mensagem depois do seu envio.\"\n\n________________________________\nDe: Tom Lane <[email protected]>\nEnviado: terça-feira, 28 de setembro de 2021 14:45\nPara: Daniel Diniz <[email protected]>\nCc: [email protected] <[email protected]>\nAssunto: Re: Problem with indices from 10 to 13\n\nDaniel Diniz <[email protected]> writes:\n> Hello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. I've re-created and re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.\n\nThis complaint is missing an awful lot of supporting information.\n\n> \" -> Bitmap Heap Scan on hawbs h (cost=1058.34..26261.32 rows=21451 width=46) (actual time=201.956..201.966 rows=4 loops=1)\"\n> \" Recheck Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n> \" Filter: (tipo_hawb_id = ANY ('{1,10,3}'::integer[]))\"\n> \" Heap Blocks: exact=4\"\n> \" -> Bitmap Index Scan on idx_nome_des (cost=0.00..1052.98 rows=22623 width=0) (actual time=201.942..201.943 rows=4 loops=1)\"\n> \" Index Cond: ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))\"\n\nFor starters, how in the world did you get that query condition out of\n\n> where h.nome_des ilike '%STEPHANY STOEW LEANDRO%'\n\n? What data type is h.nome_des, anyway? And what kind of index\nis that --- it couldn't be a plain btree, because we wouldn't consider\n~~* to be indexable by a btree.\n\nHowever, the long and the short of it is that this rowcount estimate\nis off by nearly four orders of magnitude (21451 estimated vs. 4\nactual is pretty awful). It's probably just luck that you got an\nacceptable plan out of v10, and bad luck that you didn't get one\nout of v13 --- v13's estimate is not better, but it's not much\nworse either. You need to do something about improving that\nestimate if you'd like reliable query planning. 
Since I'm not\ntoo sure which operator you're actually invoking, it's hard to\noffer good advice about how hard that might be.\n\n regards, tom lane", "msg_date": "Tue, 28 Sep 2021 19:02:24 +0000", "msg_from": "Daniel Diniz <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Problem with indices from 10 to 13" }, { "msg_contents": "Ranier,\r\nran vacuumdb -U postgres -j100 -p5434 -azv\r\nand even so it didn't improve.\r\nNow ir running for 1h10min and not finished de explain after run the comand up.😥\r\n\r\n\r\n\r\n[cid:flashcourier_6bc44896-f19b-4119-a728-f70d866e7cdd.png]\r\n\r\nDaniel Diniz\r\nDesenvolvimento\r\n\r\nCel.: 11981464923\r\n\r\n\r\nwww.flashcourier.com.br<http://www.flashcourier.com.br>\r\n\r\n[cid:SocialLink_Facebook_32x32_11ddcb69-c640-49e0-88b2-e1038ba38ffa.png]<https://www.facebook.com/flashcourieroficial> [cid:SocialLink_Instagram_32x32_1a56219d-8d68-474a-8e29-0e536d8241d4.png] <https://www.instagram.com/flashcourieroficial> [cid:SocialLink_Linkedin_32x32_6bb30d6c-bdcd-4446-ace2-9daa6eeb5e16.png] <https://www.linkedin.com/company/flashcourieroficial>\r\n\r\n#SomosTodosFlash #GrupoMOVE3\r\n [cid:QR6626a761-6a9c-427c-bd7e-09a0ad5c91a8.png]\r\n\r\n[cid:whatsappimage2021-08-31at18.36.01_c1d35f2c-7adc-42cd-98ff-acbc6b165aa6.jpeg]<https://premio.reclameaqui.com.br/votacao>\r\n\r\n\"Esta mensagem e seus anexos são dirigidos exclusivamente para os seus destinatários, podendo conter informação confidencial e/ou legalmente privilegiada. Se você não for destinatário desta mensagem, não deve revelar, copiar, distribuir ou de qualquer forma utilizá-la. A empresa não se responsabiliza por alterações no conteúdo desta mensagem depois do seu envio.\"\r\n\r\n________________________________\r\nDe: Ranier Vilela <[email protected]>\r\nEnviado: terça-feira, 28 de setembro de 2021 14:27\r\nPara: Daniel Diniz <[email protected]>\r\nCc: [email protected] <[email protected]>\r\nAssunto: Re: Problem with indices from 10 to 13\r\n\r\nEm ter., 28 de set. de 2021 às 12:40, Daniel Diniz <[email protected]<mailto:[email protected]>> escreveu:\r\nHello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. 
I've re-created and re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.\r\n\r\nPostgres 13\r\n\r\n\"QUERY PLAN\"\r\n\"Limit (cost=1.13..26855.48 rows=30 width=137) (actual time=10886.585..429803.463 rows=4 loops=1)\"\r\n\" -> Nested Loop (cost=1.13..19531164.71 rows=21819 width=137) (actual time=10886.584..429803.457 rows=4 loops=1)\"\r\n\" Join Filter: (h.ult_eve_id = ev.evento_id)\"\r\n\" Rows Removed by Join Filter: 252\"\r\n\" -> Nested Loop (cost=1.13..19457514.32 rows=21819 width=62) (actual time=10886.326..429803.027 rows=4 loops=1)\"\r\n\" -> Nested Loop (cost=0.85..19450780.70 rows=21819 width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\r\n\" -> Index Scan Backward using hawbs_pkey on hawbs h (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)\"\r\n\" Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))\"\r\n\" Rows Removed by Filter: 239188096\"\r\nIndex Scan Backward looks suspicious to me.\r\n239,188,096 rows removed by filter it's a lot of work.\r\n\r\nDo you, run analyze?\r\n\r\nregards,\r\nRanier Vilela", "msg_date": "Tue, 28 Sep 2021 19:05:33 +0000", "msg_from": "Daniel Diniz <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Problem with indices from 10 to 13" }, { "msg_contents": "Em ter., 28 de set. de 2021 ï¿œs 12:40, Daniel Diniz <[email protected]> escreveu:\n> > Hello I migrated from postgres 10 to 13 and I noticed that there was a big\n> > increase in a querie that I use, I did explain in 10 and 13 and the\n> > difference is absurd, the indices and data are the same in 2. 
I've re-\n> > created and re-indexed but I don't know what changed from 10 to 13 which\n> > made the performance so bad, I don't know if it needs some extra parameter\n> > in some conf on 13.\n> > \n> > Postgres 13\n> > \n> > \"QUERY PLAN\"\n> > \"Limit ï¿œ(cost=1.13..26855.48 rows=30 width=137) (actual\n> > time=10886.585..429803.463 rows=4 loops=1)\"\n> > \" ï¿œ-> ï¿œNested Loop ï¿œ(cost=1.13..19531164.71 rows=21819 width=137) (actual\n> > time=10886.584..429803.457 rows=4 loops=1)\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œJoin Filter: (h.ult_eve_id = ev.evento_id)\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œRows Removed by Join Filter: 252\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œ-> ï¿œNested Loop ï¿œ(cost=1.13..19457514.32 rows=21819 width=62)\n> > (actual time=10886.326..429803.027 rows=4 loops=1)\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ-> ï¿œNested Loop ï¿œ(cost=0.85..19450780.70 rows=21819\n> > width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ-> ï¿œIndex Scan Backward using hawbs_pkey on hawbs h\n> > ï¿œ(cost=0.57..19444209.67 rows=21819 width=46) (actual\n> > time=10886.119..429802.676 rows=4 loops=1)\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œFilter: ((tipo_hawb_id = ANY\n> > ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~*\n> > convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea,\n> > 'LATIN1'::name)))\"\n> > \" ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œ ï¿œRows Removed by Filter: 239188096\"\n> \n> Index Scan Backward looks suspicious to me.\n> 239,188,096ï¿œ rows removed by filter it's a lot of work.\n> \n> Do you, run analyze?\n\nPostgreSQL has an unfortunate love of scanning the pkey index backwards when\nyou use LIMIT.\n\nTry pushing your actual query into a subquery (with an offset 0 to prevent it\nbeing optimized out) and then do the LIMIT outside it.\n\n\n\n\n\nEm ter., 28 de set. de 2021 às 12:40, Daniel Diniz <[email protected]> escreveu:Hello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. 
I've re-created and re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs some extra parameter in some conf on 13.Postgres 13\"QUERY PLAN\"\"Limit  (cost=1.13..26855.48 rows=30 width=137) (actual time=10886.585..429803.463 rows=4 loops=1)\"\"  ->  Nested Loop  (cost=1.13..19531164.71 rows=21819 width=137) (actual time=10886.584..429803.457 rows=4 loops=1)\"\"        Join Filter: (h.ult_eve_id = ev.evento_id)\"\"        Rows Removed by Join Filter: 252\"\"        ->  Nested Loop  (cost=1.13..19457514.32 rows=21819 width=62) (actual time=10886.326..429803.027 rows=4 loops=1)\"\"              ->  Nested Loop  (cost=0.85..19450780.70 rows=21819 width=55) (actual time=10886.259..429802.908 rows=4 loops=1)\"\"                    ->  Index Scan Backward using hawbs_pkey on hawbs h  (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)\"\"                          Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))\"\"                          Rows Removed by Filter: 239188096\"Index Scan Backward looks suspicious to me.239,188,096  rows removed by filter it's a lot of work.Do you, run analyze?PostgreSQL has an unfortunate love of scanning the pkey index backwards when you use LIMIT.Try pushing your actual query into a subquery (with an offset 0 to prevent it being optimized out) and then do the LIMIT outside it.", "msg_date": "Tue, 28 Sep 2021 12:26:23 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" }, { "msg_contents": "Daniel Diniz <[email protected]> writes:\n> The index I use is the GIN.\n\npg_trgm, you mean? That answers one question, but you still didn't\nexplain what type h.nome_des is, nor how bytea and convert_from()\nare getting into the picture.\n\nThe second part of that is probably not critical, since the planner\nshould be willing to reduce the convert_from() call to a constant\nfor planning purposes, so I'm unclear as to why the estimate for\nthe ilike clause is so bad. Have you tried increasing the statistics\ntarget for h.nome_des to see if the estimate gets better?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Sep 2021 18:41:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" }, { "msg_contents": "Tom,\n\n\"pg_trgm, you mean? That answers one question, but you still didn't\nexplain what type h.nome_des is, nor how bytea and convert_from()\nare getting into the picture.\"\nThe column type is: nome_des | character varying(60)\n\n\"The second part of that is probably not critical, since the planner\nshould be willing to reduce the convert_from() call to a constant\nfor planning purposes, so I'm unclear as to why the estimate for\nthe ilike clause is so bad. 
Have you tried increasing the statistics\ntarget for h.nome_des to see if the estimate gets better?\"\nHow do i increase the statistics target for h.nome_des?\nAnd why uploading the dump at 10 and at 13 is there this difference?\n\nThanks\n\n[cid:flashcourier_6bc44896-f19b-4119-a728-f70d866e7cdd.png]\n\nDaniel Diniz\nDesenvolvimento\n\nCel.: 11981464923\n\n\nwww.flashcourier.com.br<http://www.flashcourier.com.br>\n\n[cid:SocialLink_Facebook_32x32_11ddcb69-c640-49e0-88b2-e1038ba38ffa.png]<https://www.facebook.com/flashcourieroficial> [cid:SocialLink_Instagram_32x32_1a56219d-8d68-474a-8e29-0e536d8241d4.png] <https://www.instagram.com/flashcourieroficial> [cid:SocialLink_Linkedin_32x32_6bb30d6c-bdcd-4446-ace2-9daa6eeb5e16.png] <https://www.linkedin.com/company/flashcourieroficial>\n\n#SomosTodosFlash #GrupoMOVE3\n [cid:QR7a1d4c80-9fed-4206-a9dd-ab8cda250534.png]\n\n[cid:whatsappimage2021-08-31at18.36.01_c1d35f2c-7adc-42cd-98ff-acbc6b165aa6.jpeg]<https://premio.reclameaqui.com.br/votacao>\n\n\"Esta mensagem e seus anexos são dirigidos exclusivamente para os seus destinatários, podendo conter informação confidencial e/ou legalmente privilegiada. Se você não for destinatário desta mensagem, não deve revelar, copiar, distribuir ou de qualquer forma utilizá-la. A empresa não se responsabiliza por alterações no conteúdo desta mensagem depois do seu envio.\"\n\n________________________________\nDe: Tom Lane <[email protected]>\nEnviado: terça-feira, 28 de setembro de 2021 19:41\nPara: Daniel Diniz <[email protected]>\nCc: [email protected] <[email protected]>\nAssunto: Re: Problem with indices from 10 to 13\n\nDaniel Diniz <[email protected]> writes:\n> The index I use is the GIN.\n\npg_trgm, you mean? That answers one question, but you still didn't\nexplain what type h.nome_des is, nor how bytea and convert_from()\nare getting into the picture.\n\nThe second part of that is probably not critical, since the planner\nshould be willing to reduce the convert_from() call to a constant\nfor planning purposes, so I'm unclear as to why the estimate for\nthe ilike clause is so bad. 
Have you tried increasing the statistics\ntarget for h.nome_des to see if the estimate gets better?\n\n regards, tom lane", "msg_date": "Wed, 29 Sep 2021 02:11:15 +0000", "msg_from": "Daniel Diniz <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Problem with indices from 10 to 13" }, { "msg_contents": "On Wed, Sep 29, 2021 at 02:11:15AM +0000, Daniel Diniz wrote:\n> How do i increase the statistics target for h.nome_des?\n> And why uploading the dump at 10 and at 13 is there this difference?\n\nIt's like ALTER TABLE h ALTER nome_des SET STATISTICS 2000; ANALYZE h;\nhttps://www.postgresql.org/docs/current/sql-altertable.html\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 28 Sep 2021 21:18:03 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" }, { "msg_contents": "Justin tested it with some parameters 200, 2000, 10000, -1 and the 3 spent more or less the same time\n\nexemple ALTER TABLE hawbs ALTER nome_des SET STATISTICS 2000; ANALYZE hawbs;:\n\"QUERY PLAN\"\n\"Limit (cost=1.13..28049.86 rows=30 width=137) (actual time=5462.123..363089.923 rows=4 loops=1)\"\n\" -> Nested Loop (cost=1.13..19523788.64 rows=20882 width=137) (actual time=5462.122..363089.915 rows=4 loops=1)\"\n\" Join Filter: (h.ult_eve_id = ev.evento_id)\"\n\" Rows Removed by Join Filter: 252\"\n\" -> Nested Loop (cost=1.13..19453301.90 rows=20882 width=62) (actual time=5461.844..363089.429 rows=4 loops=1)\"\n\" -> Nested Loop (cost=0.85..19446849.38 rows=20882 width=55) (actual time=5461.788..363089.261 rows=4 loops=1)\"\n\" -> Index Scan Backward using hawbs_pkey on hawbs h (cost=0.57..19440557.11 rows=20882 width=46) (actual time=5461.644..363088.839 rows=4 loops=1)\"\n\" Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))\"\n\" Rows Removed by Filter: 239188096\"\n\" -> Index Scan using empresas_pkey on empresas e (cost=0.28..0.30 rows=1 width=17) (actual time=0.037..0.038 rows=1 loops=4)\"\n\" Index Cond: (empresa_id = h.cliente_id)\"\n\" -> Index Scan using contratos_pkey on contratos c (cost=0.28..0.31 rows=1 width=15) (actual time=0.021..0.021 rows=1 loops=4)\"\n\" Index Cond: (ctt_id = h.ctt_id)\"\n\" -> Materialize (cost=0.00..7.23 rows=215 width=27) (actual time=0.011..0.023 rows=64 loops=4)\"\n\" -> Seq Scan on eventos ev (cost=0.00..6.15 rows=215 width=27) (actual time=0.033..0.052 rows=67 loops=1)\"\n\"Planning Time: 10.452 ms\"\n\"Execution Time: 363090.127 ms\"\n\n\n[cid:flashcourier_6bc44896-f19b-4119-a728-f70d866e7cdd.png]\n\nDaniel Diniz\nDesenvolvimento\n\nCel.: 11981464923\n\n\nwww.flashcourier.com.br<http://www.flashcourier.com.br>\n\n[cid:SocialLink_Facebook_32x32_11ddcb69-c640-49e0-88b2-e1038ba38ffa.png]<https://www.facebook.com/flashcourieroficial> [cid:SocialLink_Instagram_32x32_1a56219d-8d68-474a-8e29-0e536d8241d4.png] <https://www.instagram.com/flashcourieroficial> [cid:SocialLink_Linkedin_32x32_6bb30d6c-bdcd-4446-ace2-9daa6eeb5e16.png] <https://www.linkedin.com/company/flashcourieroficial>\n\n#SomosTodosFlash #GrupoMOVE3\n [cid:QR7a1d4c80-9fed-4206-a9dd-ab8cda250534.png]\n\n[cid:whatsappimage2021-08-31at18.36.01_c1d35f2c-7adc-42cd-98ff-acbc6b165aa6.jpeg]<https://premio.reclameaqui.com.br/votacao>\n\n\"Esta mensagem e seus anexos são dirigidos exclusivamente para os seus destinatários, podendo conter informação confidencial e/ou legalmente privilegiada. 
Se você não for destinatário desta mensagem, não deve revelar, copiar, distribuir ou de qualquer forma utilizá-la. A empresa não se responsabiliza por alterações no conteúdo desta mensagem depois do seu envio.\"\n\n________________________________\nDe: Justin Pryzby <[email protected]>\nEnviado: terça-feira, 28 de setembro de 2021 23:18\nPara: Daniel Diniz <[email protected]>\nCc: Tom Lane <[email protected]>; [email protected] <[email protected]>\nAssunto: Re: Problem with indices from 10 to 13\n\nOn Wed, Sep 29, 2021 at 02:11:15AM +0000, Daniel Diniz wrote:\n> How do i increase the statistics target for h.nome_des?\n> And why uploading the dump at 10 and at 13 is there this difference?\n\nIt's like ALTER TABLE h ALTER nome_des SET STATISTICS 2000; ANALYZE h;\nhttps://www.postgresql.org/docs/current/sql-altertable.html\n\n--\nJustin", "msg_date": "Wed, 29 Sep 2021 04:00:38 +0000", "msg_from": "Daniel Diniz <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Problem with indices from 10 to 13" }, { "msg_contents": "st 29. 9. 2021 v 6:01 odesílatel Daniel Diniz <[email protected]>\nnapsal:\n\n> Justin tested it with some parameters 200, 2000, 10000, -1 and the 3 spent\n> more or less the same time\n>\n> exemple ALTER TABLE hawbs ALTER nome_des SET STATISTICS 2000; ANALYZE\n> hawbs;:\n> \"QUERY PLAN\"\n> \"Limit (cost=1.13..28049.86 rows=30 width=137) (actual\n> time=5462.123..363089.923 rows=4 loops=1)\"\n> \" -> Nested Loop (cost=1.13..19523788.64 rows=20882 width=137) (actual\n> time=5462.122..363089.915 rows=4 loops=1)\"\n> \" Join Filter: (h.ult_eve_id = ev.evento_id)\"\n> \" Rows Removed by Join Filter: 252\"\n> \" -> Nested Loop (cost=1.13..19453301.90 rows=20882 width=62)\n> (actual time=5461.844..363089.429 rows=4 loops=1)\"\n> \" -> Nested Loop (cost=0.85..19446849.38 rows=20882\n> width=55) (actual time=5461.788..363089.261 rows=4 loops=1)\"\n> \" -> Index Scan Backward using hawbs_pkey on hawbs h\n> (cost=0.57..19440557.11 rows=20882 width=46) (actual\n> time=5461.644..363088.839 rows=4 loops=1)\"\n> \" Filter: ((tipo_hawb_id = ANY\n> ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~*\n> convert_from('\\x255354455048414e592053544f4557204c45414e44524f25'::bytea,\n> 'LATIN1'::name)))\"\n> \" Rows Removed by Filter: 239188096\"\n> \" -> Index Scan using empresas_pkey on empresas e\n> (cost=0.28..0.30 rows=1 width=17) (actual time=0.037..0.038 rows=1\n> loops=4)\"\n> \" Index Cond: (empresa_id = h.cliente_id)\"\n> \" -> Index Scan using contratos_pkey on contratos c\n> (cost=0.28..0.31 rows=1 width=15) (actual time=0.021..0.021 rows=1\n> loops=4)\"\n> \" Index Cond: (ctt_id = h.ctt_id)\"\n> \" -> Materialize (cost=0.00..7.23 rows=215 width=27) (actual\n> time=0.011..0.023 rows=64 loops=4)\"\n> \" -> Seq Scan on eventos ev (cost=0.00..6.15 rows=215\n> width=27) (actual time=0.033..0.052 rows=67 loops=1)\"\n> \"Planning Time: 10.452 ms\"\n> \"Execution Time: 363090.127 ms\"\n>\n\nMaybe you can try composite index based on hawbs_pkey, and tipo_hawb_id\n\nthe second problem can be the low value of LIMIT - got you faster result\nwithout LIMIT clause?\n\nRegards\n\nPavel\n\n\n\n>\n>\n>\n> *Daniel Diniz*Desenvolvimento\n>\n> Cel.: *11981464923*\n>\n>\n> *www.flashcourier.com.br* <http://www.flashcourier.com.br>\n>\n> <https://www.facebook.com/flashcourieroficial>\n> <https://www.instagram.com/flashcourieroficial>\n> <https://www.linkedin.com/company/flashcourieroficial>\n>\n> #SomosTodosFlash #GrupoMOVE3\n>\n>\n> <https://premio.reclameaqui.com.br/votacao>\n>\n> *\"Esta mensagem 
e seus anexos são dirigidos exclusivamente para os seus\n> destinatários, podendo conter informação confidencial e/ou legalmente\n> privilegiada. Se você não for destinatário desta mensagem, não deve\n> revelar, copiar, distribuir ou de qualquer forma utilizá-la. A empresa não\n> se responsabiliza por alterações no conteúdo desta mensagem depois do seu\n> envio.\"*\n> ------------------------------\n> *De:* Justin Pryzby <[email protected]>\n> *Enviado:* terça-feira, 28 de setembro de 2021 23:18\n> *Para:* Daniel Diniz <[email protected]>\n> *Cc:* Tom Lane <[email protected]>; [email protected]\n> <[email protected]>\n> *Assunto:* Re: Problem with indices from 10 to 13\n>\n> On Wed, Sep 29, 2021 at 02:11:15AM +0000, Daniel Diniz wrote:\n> > How do i increase the statistics target for h.nome_des?\n> > And why uploading the dump at 10 and at 13 is there this difference?\n>\n> It's like ALTER TABLE h ALTER nome_des SET STATISTICS 2000; ANALYZE h;\n> https://www.postgresql.org/docs/current/sql-altertable.html\n>\n> --\n> Justin\n>", "msg_date": "Wed, 29 Sep 2021 06:59:05 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with indices from 10 to 13" } ]
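To make Alan's OFFSET 0 suggestion concrete, one possible shape of the rewrite is below (table and column names taken from the query in the thread). The subquery cannot be flattened, so the ILIKE/trigram filter is applied before the ORDER BY ... LIMIT rather than the planner walking hawbs_pkey backwards; whether the GIN index is then chosen still depends on the row estimate:

SELECT *
FROM (
    SELECT h.*
    FROM hawbs h
    WHERE h.nome_des ILIKE '%STEPHANY STOEW LEANDRO%'
      AND h.tipo_hawb_id IN (1, 10, 3)
    OFFSET 0                         -- optimization fence
) h
JOIN empresas  e  ON h.cliente_id = e.empresa_id
JOIN contratos c  ON h.ctt_id     = c.ctt_id
JOIN eventos   ev ON h.ult_eve_id = ev.evento_id
ORDER BY h.hawb_id DESC
LIMIT 30;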
[ { "msg_contents": "\nHello,\n\nI've been playing with CockroachDB, a distributed database system which is \nmore or less compatible with Postgres because it implements the same \nnetwork protocol. Because if this compatibility, I have used pgbench to \nsetup and run some tests on various AWS VMs (5 identical VMs, going up to \na total 80 vcpu in the system).\n\nThe general behavior and ease of use is great. Data are shared between \nnodes, adding a new node makes the system automatically replicate and \nbalance the data, wow. Also, the provided web interface is quite nice and \ngives hints about what is happening. They implement an automatic retry \nfeature so that when a transaction fails it is retried without the client \nneeded to know about it.\n\nAll this is impressive, but performance wise I ran in a few issues and/or \nquestions:\n\n - Loading data with a COPY (pgbench -i) is pretty slow, typically 3\n seconds per scale whereas on a basic postgres I would get 0.3 seconds\n per scale. Should I expect better performance, or is this the expected\n performance that can be achieved because of the automatic (automagic)\n replication performed by cockroach? Would it be better if I generated\n data from several connections (hmmm, pgbench does not know how to do\n that, but the tool could be improved if it is worth it)?\n\n - I'm at a loss at finding the right number of client connections to\n \"maximise\" tps under reasonable latency. Some of my tests suggest that\n maybe 4 clients per core is the best option. For a standard postgres,\n a typical client count would be larger, typically around 8-10 per\n core.\n Is this choice reasonable for cockroach?\n\n - The overall performance is a little bit disappointing. Ok, this is\n a distributed system which does automatic partitioning and replication\n on serializable transactions, so obviously this quality of service must\n cost something, but I'm typically running around 10 tps per core (with\n pgbench default transaction), so a pretty high latency, and even if\n it scales somehow, it which seems quite low.\n What I am doing wrong? What should I check?\n\n - Another strange thing is that the steady state at full speed is quite\n unstable: looking at instantaneous performance, the tps varies a lot,\n eg between 0 and 4500 tps, more or less uniformly, i.e. the standard\n deviation is large, say 1000 tps stddev for a 2000 tps average\n performance.\n\nBasically, any advice about cockroach configuration and running pgbench \nagainst it is welcome!\n\nThanks in advance,\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 29 Sep 2021 15:47:40 +0200 (CEST)", "msg_from": "Fabien COELHO <[email protected]>", "msg_from_op": true, "msg_subject": "How to improve cockroach performance with pgbench?" } ]
[ { "msg_contents": "Hi,\nQuery on one of our partitioned tables which is range partitioned on\n\"run\"date\" column is going to all partitions despite having run_date in\nWHERE clause. \"enable_parition_pruning\" is also on. I am unable to generate\na query plan as the query never runs fully even waiting for say half an\nhour.\n\nWe have composite indexes on run_date,status. Do I need to create an index\non run_date only?\n\nAny other solutions?\n\nRegards,\nAditya.\n\nHi,Query on  one of our partitioned tables which is range partitioned on \"run\"date\" column is going to all partitions despite having run_date in WHERE clause. \"enable_parition_pruning\" is also on. I am unable to generate a query plan as the query never runs fully even waiting for say half an hour.We have composite indexes on run_date,status. Do I need to create an index on run_date only?Any other solutions?Regards,Aditya.", "msg_date": "Fri, 1 Oct 2021 12:58:00 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Query going to all paritions" }, { "msg_contents": "On Fri, 2021-10-01 at 12:58 +0530, aditya desai wrote:\n> Hi,\n> Query on  one of our partitioned tables which is range partitioned on \"run\"date\" column is going to all partitions despite having run_date in WHERE clause. \"enable_parition_pruning\" is also on. I am\n> unable to generate a query plan as the query never runs fully even waiting for say half an hour.\n> \n> We have composite indexes on run_date,status. Do I need to create an index on run_date only?\n\nYou need to share the query and probably the table definition. EXPLAIN output\n(without ANALYZE) will also help.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 01 Oct 2021 09:53:03 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query going to all paritions" }, { "msg_contents": "Hi Laurenz,\nPlease find attached explain query plan and query.\n\nRegards,\nAditya.\n\nOn Friday, October 1, 2021, Laurenz Albe <[email protected]> wrote:\n\n> On Fri, 2021-10-01 at 12:58 +0530, aditya desai wrote:\n> > Hi,\n> > Query on one of our partitioned tables which is range partitioned on\n> \"run\"date\" column is going to all partitions despite having run_date in\n> WHERE clause. \"enable_parition_pruning\" is also on. I am\n> > unable to generate a query plan as the query never runs fully even\n> waiting for say half an hour.\n> >\n> > We have composite indexes on run_date,status. Do I need to create an\n> index on run_date only?\n>\n> You need to share the query and probably the table definition. EXPLAIN\n> output\n> (without ANALYZE) will also help.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>", "msg_date": "Fri, 1 Oct 2021 14:24:11 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query going to all paritions" }, { "msg_contents": "On Fri, Oct 01, 2021 at 02:24:11PM +0530, aditya desai wrote:\n> Hi Laurenz,\n> Please find attached explain query plan and query.\n\nCan you show us \\d of the table, and exact query you ran?\n\nAlso, please, don't send images. This is text, so you can copy-paste it\ndirectly into mail.\n\nOr, put it on some paste site - for explains, I suggest\nhttps://explain.depesz.com/\n\nIt's impossible to select text from image. 
It's much harder to read (it\ndoesn't help that it's not even screenshot, but, what looks like,\na photo of screen ?!\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:33:25 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query going to all paritions" }, { "msg_contents": "Will try to get a query in text format. It looks difficult though.\n\nRegards,\nAditya.\n\n\nOn Fri, Oct 1, 2021 at 4:03 PM hubert depesz lubaczewski <[email protected]>\nwrote:\n\n> On Fri, Oct 01, 2021 at 02:24:11PM +0530, aditya desai wrote:\n> > Hi Laurenz,\n> > Please find attached explain query plan and query.\n>\n> Can you show us \\d of the table, and exact query you ran?\n>\n> Also, please, don't send images. This is text, so you can copy-paste it\n> directly into mail.\n>\n> Or, put it on some paste site - for explains, I suggest\n> https://explain.depesz.com/\n>\n> It's impossible to select text from image. It's much harder to read (it\n> doesn't help that it's not even screenshot, but, what looks like,\n> a photo of screen ?!\n>\n\nWill try to get a query in text format. It looks difficult though.Regards,Aditya.On Fri, Oct 1, 2021 at 4:03 PM hubert depesz lubaczewski <[email protected]> wrote:On Fri, Oct 01, 2021 at 02:24:11PM +0530, aditya desai wrote:\n> Hi Laurenz,\n> Please find attached explain query plan and query.\n\nCan you show us \\d of the table, and exact query you ran?\n\nAlso, please, don't send images. This is text, so you can copy-paste it\ndirectly into mail.\n\nOr, put it on some paste site - for explains, I suggest\nhttps://explain.depesz.com/\n\nIt's impossible to select text from image. It's much harder to read (it\ndoesn't help that it's not even screenshot, but, what looks like,\na photo of screen ?!", "msg_date": "Fri, 1 Oct 2021 17:03:12 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query going to all paritions" } ]
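A small, hedged reproduction of the pruning behaviour discussed in this thread, since the symptom "the query goes to all partitions even though run_date is in the WHERE clause" usually comes from how the partition key is referenced rather than from a missing index. The schema below is illustrative only; the poster's real table was never shared in text form:

    -- Illustrative range-partitioned table, not the poster's actual schema.
    CREATE TABLE runs (
        run_date date NOT NULL,
        status   text
    ) PARTITION BY RANGE (run_date);

    CREATE TABLE runs_2021_09 PARTITION OF runs FOR VALUES FROM ('2021-09-01') TO ('2021-10-01');
    CREATE TABLE runs_2021_10 PARTITION OF runs FOR VALUES FROM ('2021-10-01') TO ('2021-11-01');

    -- Plan-time pruning: the partition key is compared directly to a constant of its type,
    -- so EXPLAIN shows only the one matching partition.
    EXPLAIN SELECT count(*) FROM runs
    WHERE run_date = DATE '2021-10-05' AND status = 'FAILED';

    -- No plan-time pruning: the key is wrapped in an expression, so every partition is scanned.
    EXPLAIN SELECT count(*) FROM runs
    WHERE to_char(run_date, 'YYYY-MM-DD') = '2021-10-05';

    -- Values only known at run time (run_date = CURRENT_DATE, or a parameter) can still be pruned
    -- by the executor; EXPLAIN ANALYZE then reports "Subplans Removed".

If the reduced form prunes correctly, the next suspects are the real query's predicate shape (casts, functions, or joins on run_date) rather than the composite (run_date, status) index.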
[ { "msg_contents": "TLDR; If I spend the time necessary to instrument the many functions that\nare the equivalent of the Oracle counterparts, would anyone pull those\nchanges and use them? Specifically, for those who know Oracle, I'm talking\nabout implementing:\n\n\n 1. The portion of the ALTER SESSION that enables extended SQL trace\n 2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n 3. Instrument the thousand or so functions that are the equivalent of\n those found in Oracle's V$EVENT_NAME\n 4. Dynamic performance view V$DIAG_INFO\n\nFor the last 35 years, I've made my living helping people solve Oracle\nperformance problems by looking at it, which means:\n\nTrace a user experience and profile the trace file to (a) reveal where the\ntime has gone and its algorithm and (b) make it easy to imagine the cost of\npossible solutions as well as the savings in response time or resources.\n\nI've even submitted change requests to improve Oracle's tracing features\nwhile working for them and since those glorious five years.\n\nNow looking closely at postgreSQL, I see an opportunity to more quickly\nimplement Oracle's current feature list.\n\nI've come to this point because I see many roadblocks for users who want to\nsee a detailed \"receipt\" for their response time. The biggest roadblock is\nthat without a *lot* of automation, a user of any kind must log into the\nserver and attempt to get the data that are now traditionally child's play\nfor Oracle. The second biggest roadblock I see is the recompilation that is\nrequired for the server components (i.e., postgreSQL, operating system). My\ninitial attempts to get anything useful out of postgreSQL were dismal\nfailures and I think it should be infinitely easier.\n\nRunning either dtrace and eBPF scripts on the server should not be\nrequired. The instrumentation and the code being instrumented should be\ntightly coupled. Doing so will allow *anyone* on *any* platform for\n*any* PostgreSQL\nversion to get a trace file just as easily as people do for Oracle.\n\nTLDR; If I spend the time necessary to instrument the many functions that are the equivalent of the Oracle counterparts, would anyone pull those changes and use them? Specifically, for those who know Oracle, I'm talking about implementing:The portion of the ALTER SESSION that enables extended SQL traceMost of the DBMS_MONITOR and DBMS_APPLICATION_INFO packagesInstrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAMEDynamic performance view V$DIAG_INFOFor the last 35 years, I've made my living helping people solve Oracle performance problems by looking at it, which means:Trace a user experience and profile the trace file to (a) reveal where the time has gone and its algorithm and (b) make it easy to imagine the cost of possible solutions as well as the savings in response time or resources.I've even submitted change requests to improve Oracle's tracing features while working for them and since those glorious five years.Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time. The biggest roadblock is that without a lot of automation, a user of any kind must log into the server and attempt to get the data that are now traditionally child's play for Oracle. 
The second biggest roadblock I see is the recompilation that is required for the server components (i.e., postgreSQL, operating system). My initial attempts to get anything useful out of postgreSQL were dismal failures and I think it should be infinitely easier.Running either dtrace and eBPF scripts on the server should not be required. The instrumentation and the code being instrumented should be tightly coupled. Doing so will allow anyone on any platform for any PostgreSQL version to get a trace file just as easily as people do for Oracle.", "msg_date": "Fri, 1 Oct 2021 15:06:02 -0500", "msg_from": "Jeff Holt <[email protected]>", "msg_from_op": true, "msg_subject": "Better, consistent instrumentation for postgreSQL using a similar API\n as Oracle" }, { "msg_contents": "On Fri, 2021-10-01 at 15:06 -0500, Jeff Holt wrote:\n> TLDR; If I spend the time necessary to instrument the many functions that are the equivalent\n> of the Oracle counterparts, would anyone pull those changes and use them?\n> Specifically, for those who know Oracle, I'm talking about implementing:\n>    1. The portion of the ALTER SESSION that enables extended SQL trace\n>    2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n>    3. Instrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAME\n>    4. Dynamic performance view V$DIAG_INFO\n> For the last 35 years, I've made my living helping people solve Oracle performance problems by looking at it\n> \n[...]\n> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n\nAnything that improves user experience in that respect is welcome, but consider\nthat each database has different approaches to solve the same problems.\n\nBefore you go to the length of implementing a lot of stuff, check in with\nthe -hackers list and discuss your ideas.\n\nPlease be a lot more specific than in this e-mail. While it is certainly\nfine to sketch your ambitios vision, focus on one specific thing you can\nimagine implementing and come up with a design for that.\n\nNote that \"Oracle has it\" is not a good enough reason for a PostgreSQL\nfeature. We think we can do better than they do (at least in many respects).\nAlso, don't assume that everyone on the -hackers list will be familiar with\ncertain PostgreSQL features.\n\nOne think that you should keep in mind is that Oracle has to provide different\nfeatures in that area because they are not open source. In PostgreSQL, I can\nsimply read the code or attach a debugger to a backend, and when it comes to\nprofiling, \"perf\" works pretty well. So there is less need for these things.\n\nI don't want to discourage you, but contributing to PostgreSQL can be a lengthy\nand tedious process. On the upside, things that make it into core are usually\nfairly mature.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 04 Oct 2021 08:34:29 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "\nOn 10/4/21 02:34, Laurenz Albe wrote:\n> On Fri, 2021-10-01 at 15:06 -0500, Jeff Holt wrote:\n>> TLDR; If I spend the time necessary to instrument the many functions that are the equivalent\n>> of the Oracle counterparts, would anyone pull those changes and use them?\n>> Specifically, for those who know Oracle, I'm talking about implementing:\n>>    1. 
The portion of the ALTER SESSION that enables extended SQL trace\n>>    2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n>>    3. Instrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAME\n>>    4. Dynamic performance view V$DIAG_INFO\n>> For the last 35 years, I've made my living helping people solve Oracle performance problems by looking at it\n>>\n> [...]\n>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n> Anything that improves user experience in that respect is welcome, but consider\n> that each database has different approaches to solve the same problems.\n>\n> Before you go to the length of implementing a lot of stuff, check in with\n> the -hackers list and discuss your ideas.\n>\n> Please be a lot more specific than in this e-mail. While it is certainly\n> fine to sketch your ambitios vision, focus on one specific thing you can\n> imagine implementing and come up with a design for that.\n>\n> Note that \"Oracle has it\" is not a good enough reason for a PostgreSQL\n> feature. We think we can do better than they do (at least in many respects).\n> Also, don't assume that everyone on the -hackers list will be familiar with\n> certain PostgreSQL features.\n>\n> One think that you should keep in mind is that Oracle has to provide different\n> features in that area because they are not open source. In PostgreSQL, I can\n> simply read the code or attach a debugger to a backend, and when it comes to\n> profiling, \"perf\" works pretty well. So there is less need for these things.\n>\n> I don't want to discourage you, but contributing to PostgreSQL can be a lengthy\n> and tedious process. On the upside, things that make it into core are usually\n> fairly mature.\n>\n> Yours,\n> Laurenz Albe\n\nLaurenz, you are obviously not aware who are you talking to. Let me \nintroduce you: Cary Millsap and Jeff Holt are authors of the \"Optimizing \nOracle for Performance\", one of the most influential books in the entire \nrealm of  Oracle literature.  The book describes the method of tuning \nOracle applications by examining where are they spending time and what \nare they waiting for. The book can be found on Amazon and I would \nseriously advise you to read it:\n\nhttps://www.amazon.com/Optimizing-Oracle-Performance-Practitioners-Response-ebook/dp/B00BJ9A8SU/ref=sr_1_1?dchild=1&keywords=Optimizing+Oracle+for+Performance&qid=1633395886&s=books&sr=1-1\n\nHaughty lectures about \"Oracle has it\" not being good enough could \nhardly be more out of place here. To put it as politely as is possible \nin this case, shut your pie hole. What Jeff is asking for is not \nsomething that \"Oracle has\", it's something that customers want. That \nwas the case few years ago when I was asking for the optimizer hints. I \nwas castigated by the former pastry baker turned Postgres guru and my \nreaction was simple: I threw Postgres out of the company that I was a \nworking for as the lead DBA. You see, customer is always right, whether \nthe database is open source or not. Needless to say, Postgres has \noptimizer hints these days. It still has them in \"we do not want\" part \nof the Wiki, which is hilarious.\n\nYou see, without proper event instrumentation, and knowing where the \napplication spends time, it is not possible to exactly tune that \napplication. 
Oracle used to have a witchcraft based lore like that, \nwhere the performance was estimated, based on buffer cache hit ratio, \nthe famous \"BCHR\". That was known as \"Method C\". The name comes from \nCary's and Jeff's book. Jeff and Cary are the ones who made the BCHR \nbased black magic - obsolete.\n\nIn other words, Jeff is asking for a method to fine tune the \napplications with precision. Instead of being an a....rrogant person, \nyou should have given him the answer:\n\nhttps://github.com/postgrespro/pg_wait_sampling\n\nPostgres already has an extension which implements around 60% of what \nOracle has. Of course, Oracle's mechanism is somewhat more refined but \nit is also 20 years older. Cary Millsap, Anjo Kolk and Jeff Holt were \nimplementing the instrumentation for Oracle 7. There was a huge pile of \npaper, printed off Metalink, a predecessor of \"My Oracle Support\", \ndescribing Oracle 7 events and explaining what Oracle was actually \nwaiting for. At that time Cary Millsap was a VP in Oracle development. \nThe book came out for Oracle8. You see, Jeff Holt really knows what he's \nasking for. You are the ignorant one, the one who engaged in talking at \nJeff, not knowing that there already is an answer. There is no shame in \nnot knowing something, people ask questions all the time. Arrogantly \ntalking at someone and giving unsolicited lectures in what is \nappropriate and what is not is another thing altogether.\n\nFinally, about the tone of this message: you really pissed me off. I had \nto restrain myself from using even stronger language, that was \nsurprisingly hard to do. I wouldn't be surprised to see you giving \nhaughty lectures about programming to Brian Kernighan or Dennis Ritchie. \nAnd yes, those two have allegedly also written a book.\n\nRegards\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 21:51:25 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "Mladen,\n\nShame on u lecturing a top notch guy in the PostgreSQL world, Laurenz Albe. I think Laurenz knows “a little bit” about Oracle having written the popular extension, fdw_oracle, among his many other contributions to the PG world. So ironic that Laurenz was just named “PostgReSQL person of the week”, and then has to be subjected to this “tirade” of yours!\n\nFollow the PG protocol in submitting your change requests to core PG and stop your Bitchin!\n\nMichael Vitale\n\n\nSent from my iPad\n\n> On Oct 4, 2021, at 9:51 PM, Mladen Gogala <[email protected]> wrote:\n> \n> \n>> On 10/4/21 02:34, Laurenz Albe wrote:\n>>> On Fri, 2021-10-01 at 15:06 -0500, Jeff Holt wrote:\n>>> TLDR; If I spend the time necessary to instrument the many functions that are the equivalent\n>>> of the Oracle counterparts, would anyone pull those changes and use them?\n>>> Specifically, for those who know Oracle, I'm talking about implementing:\n>>> 1. The portion of the ALTER SESSION that enables extended SQL trace\n>>> 2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n>>> 3. Instrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAME\n>>> 4. 
Dynamic performance view V$DIAG_INFO\n>>> For the last 35 years, I've made my living helping people solve Oracle performance problems by looking at it\n>>> \n>> [...]\n>>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>> Anything that improves user experience in that respect is welcome, but consider\n>> that each database has different approaches to solve the same problems.\n>> \n>> Before you go to the length of implementing a lot of stuff, check in with\n>> the -hackers list and discuss your ideas.\n>> \n>> Please be a lot more specific than in this e-mail. While it is certainly\n>> fine to sketch your ambitios vision, focus on one specific thing you can\n>> imagine implementing and come up with a design for that.\n>> \n>> Note that \"Oracle has it\" is not a good enough reason for a PostgreSQL\n>> feature. We think we can do better than they do (at least in many respects).\n>> Also, don't assume that everyone on the -hackers list will be familiar with\n>> certain PostgreSQL features.\n>> \n>> One think that you should keep in mind is that Oracle has to provide different\n>> features in that area because they are not open source. In PostgreSQL, I can\n>> simply read the code or attach a debugger to a backend, and when it comes to\n>> profiling, \"perf\" works pretty well. So there is less need for these things.\n>> \n>> I don't want to discourage you, but contributing to PostgreSQL can be a lengthy\n>> and tedious process. On the upside, things that make it into core are usually\n>> fairly mature.\n>> \n>> Yours,\n>> Laurenz Albe\n> \n> Laurenz, you are obviously not aware who are you talking to. Let me introduce you: Cary Millsap and Jeff Holt are authors of the \"Optimizing Oracle for Performance\", one of the most influential books in the entire realm of Oracle literature. The book describes the method of tuning Oracle applications by examining where are they spending time and what are they waiting for. The book can be found on Amazon and I would seriously advise you to read it:\n> \n> https://www.amazon.com/Optimizing-Oracle-Performance-Practitioners-Response-ebook/dp/B00BJ9A8SU/ref=sr_1_1?dchild=1&keywords=Optimizing+Oracle+for+Performance&qid=1633395886&s=books&sr=1-1\n> \n> Haughty lectures about \"Oracle has it\" not being good enough could hardly be more out of place here. To put it as politely as is possible in this case, shut your pie hole. What Jeff is asking for is not something that \"Oracle has\", it's something that customers want. That was the case few years ago when I was asking for the optimizer hints. I was castigated by the former pastry baker turned Postgres guru and my reaction was simple: I threw Postgres out of the company that I was a working for as the lead DBA. You see, customer is always right, whether the database is open source or not. Needless to say, Postgres has optimizer hints these days. It still has them in \"we do not want\" part of the Wiki, which is hilarious.\n> \n> You see, without proper event instrumentation, and knowing where the application spends time, it is not possible to exactly tune that application. Oracle used to have a witchcraft based lore like that, where the performance was estimated, based on buffer cache hit ratio, the famous \"BCHR\". That was known as \"Method C\". The name comes from Cary's and Jeff's book. 
Jeff and Cary are the ones who made the BCHR based black magic - obsolete.\n> \n> In other words, Jeff is asking for a method to fine tune the applications with precision. Instead of being an a....rrogant person, you should have given him the answer:\n> \n> https://github.com/postgrespro/pg_wait_sampling\n> \n> Postgres already has an extension which implements around 60% of what Oracle has. Of course, Oracle's mechanism is somewhat more refined but it is also 20 years older. Cary Millsap, Anjo Kolk and Jeff Holt were implementing the instrumentation for Oracle 7. There was a huge pile of paper, printed off Metalink, a predecessor of \"My Oracle Support\", describing Oracle 7 events and explaining what Oracle was actually waiting for. At that time Cary Millsap was a VP in Oracle development. The book came out for Oracle8. You see, Jeff Holt really knows what he's asking for. You are the ignorant one, the one who engaged in talking at Jeff, not knowing that there already is an answer. There is no shame in not knowing something, people ask questions all the time. Arrogantly talking at someone and giving unsolicited lectures in what is appropriate and what is not is another thing altogether.\n> \n> Finally, about the tone of this message: you really pissed me off. I had to restrain myself from using even stronger language, that was surprisingly hard to do. I wouldn't be surprised to see you giving haughty lectures about programming to Brian Kernighan or Dennis Ritchie. And yes, those two have allegedly also written a book.\n> \n> Regards\n> \n> \n> -- \n> Mladen Gogala\n> Database Consultant\n> Tel: (347) 321-1217\n> https://dbwhisperer.wordpress.com\n> \n> \n> \n\n\n\n", "msg_date": "Mon, 4 Oct 2021 22:25:18 -0400", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better,\n consistent instrumentation for postgreSQL using a similar API as Oracle" }, { "msg_contents": "On Mon, Oct 4, 2021 at 6:51 PM Mladen Gogala <[email protected]> wrote:\n> Haughty lectures about \"Oracle has it\" not being good enough could\n> hardly be more out of place here. To put it as politely as is possible\n> in this case, shut your pie hole. What Jeff is asking for is not\n> something that \"Oracle has\", it's something that customers want. That\n> was the case few years ago when I was asking for the optimizer hints. I\n> was castigated by the former pastry baker turned Postgres guru and my\n> reaction was simple: I threw Postgres out of the company that I was a\n> working for as the lead DBA. You see, customer is always right, whether\n> the database is open source or not. Needless to say, Postgres has\n> optimizer hints these days. It still has them in \"we do not want\" part\n> of the Wiki, which is hilarious.\n\nIn all sincerity: Chill out. I don't think that this is worth getting\ninto an argument over. I think that there is a good chance that you'd\nhave had a much better experience if the conversation had been in\nperson. Text has a way of losing a lot of important nuance.\n\nI have personally met and enjoyed talking to quite a few people that\npersonally worked on Oracle, in various capacities -- the world of\ndatabase internals experts is not huge. I find Tanel Poder very\ninteresting, too -- never met the man, but we follow each other on\nTwitter. Oracle is a system that has some interesting properties in\ngeneral (not just as a counterpoint to PostgreSQL), and I definitely\nrespect it. 
Same with SQL Server.\n\nThere are lots of smart and accomplished people in the world,\nincluding Jeff. I think that it's worth understanding these\ndifferences in perspective. There is likely to be merit in all of the\nspecific points made by both Laurenze and Jeff. They may not be\nirreconcilable, or anything like it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Oct 2021 20:08:56 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "\nOn 10/4/21 22:25, [email protected] wrote:\n> Mladen,\n>\n> Shame on u lecturing a top notch guy in the PostgreSQL world, Laurenz Albe. I think Laurenz knows “a little bit” about Oracle having written the popular extension, fdw_oracle, among his many other contributions to the PG world. So ironic that Laurenz was just named “PostgReSQL person of the week”, and then has to be subjected to this “tirade” of yours!\n>\n> Follow the PG protocol in submitting your change requests to core PG and stop your Bitchin!\n>\n> Michael Vitale\n\nFirst, a matter of format: please don't top-post. Replies go under the \noriginal posts. That's an unwritten rule, but a very time honored one. \nSecond, I know very well who Laurenz Albe is. We have met on the \noracle-l few decades ago. Third, I think that my reproach to Laurenz's \ntone is very justified.  You don't say \"the argument that Python has it \nis not good enough\" to Dennis Ritchie. Hopefully, you get my analogy, \nbut one cannot ever be sure.\n\nLast, I didn't request any new features from the Postgres community. \nThat's a mistake that I'll never commit again. Last time I tried, this \nhas happened:\n\nhttps://www.toolbox.com/tech/data-management/blogs/why-postgresql-doesnt-have-query-hints-020411/\n\nI still keep it in my bookmark folder, under \"Humor\". I used that \narticle several times on the oracle-l as an illustration some properties \nof Postgres community. That article was a gift and I am sincerely \ngrateful. Of course, PostgreSQL now has query hints.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 23:30:11 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "\nOn 10/4/21 23:08, Peter Geoghegan wrote:\n> n all sincerity: Chill out. I don't think that this is worth getting\n> into an argument over. I think that there is a good chance that you'd\n> have had a much better experience if the conversation had been in\n> person. Text has a way of losing a lot of important nuance.\n>\n> I have personally met and enjoyed talking to quite a few people that\n> personally worked on Oracle, in various capacities -- the world of\n> database internals experts is not huge. I find Tanel Poder very\n> interesting, too -- never met the man, but we follow each other on\n> Twitter. Oracle is a system that has some interesting properties in\n> general (not just as a counterpoint to PostgreSQL), and I definitely\n> respect it. Same with SQL Server.\n>\n> There are lots of smart and accomplished people in the world,\n> including Jeff. I think that it's worth understanding these\n> differences in perspective. There is likely to be merit in all of the\n> specific points made by both Laurenze and Jeff. 
They may not be\n> irreconcilable, or anything like it.\n\nWhat angered me was the presumptuous tone of voice directed to an Oracle \nlegend. I have probably talked to many more Oracle people than you, \nincluding Tanel, whom I have met personally. I am not on Twitter, \nunfortunately I am older than 20. Before you ask, I am not on Instagram, \nFacebook or Tiktok. I am not on OnlyFans either. I have never understood \nthe need to share one's every thought in real time. Being rather private \nperson has probably stymied my career of an internet influencer. I'll \nnever rival Kim Kardashian.\n\nAs for Jeff Holt, I believe that a person of his stature needs to be \ntaken seriously and not lectured \"how are things done in Postgres \ncommunity\". I  am rather confused by the thinly veiled hostility toward \nOracle. In my opinion, Postgres community should be rather welcoming to \nOracle people like Frits Hoogland, Frank Pachot or Jeff Holt. After all, \nwe are using Postgres and telling us \"you can't have what you used to \nget from Oracle\" is not either encouraging or smart. If you want \nconsulting jobs in big banks and for a decent money, you might just take \nOracle people seriously. Have you ever wondered why Oracle has so many \ncustomers despite the fact that it's so freakishly expensive?\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 00:04:06 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Mon, Oct 4, 2021 at 9:04 PM Mladen Gogala <[email protected]> wrote:\n> What angered me was the presumptuous tone of voice directed to an Oracle\n> legend. I have probably talked to many more Oracle people than you,\n> including Tanel, whom I have met personally. I am not on Twitter,\n> unfortunately I am older than 20. Before you ask, I am not on Instagram,\n> Facebook or Tiktok. I am not on OnlyFans either. I have never understood\n> the need to share one's every thought in real time. Being rather private\n> person has probably stymied my career of an internet influencer. I'll\n> never rival Kim Kardashian.\n\nYou do seem shy.\n\n> As for Jeff Holt, I believe that a person of his stature needs to be\n> taken seriously and not lectured \"how are things done in Postgres\n> community\".\n\nI haven't met Jeff Holt either, but I believe that he is also older\nthan 20. I have to imagine that he doesn't particularly need you to\ndefend his honor.\n\n> I am rather confused by the thinly veiled hostility toward\n> Oracle. In my opinion, Postgres community should be rather welcoming to\n> Oracle people like Frits Hoogland, Frank Pachot or Jeff Holt. After all,\n> we are using Postgres and telling us \"you can't have what you used to\n> get from Oracle\" is not either encouraging or smart.\n\nI agree with all that. I am also friendly with Frank, as it happens.\n\nI think that Laurenze was just trying to establish common terms of\nreference for discussion -- it's easy for two groups of people with\nsimilar but different terminology to talk past each other. 
For\nexample, I think that there may be confusion about what is possible\nwith a tool like eBPF today, and what might be possible in an ideal\nworld.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Oct 2021 21:40:55 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Mon, 2021-10-04 at 21:51 -0400, Mladen Gogala wrote:\n> \n> On 10/4/21 02:34, Laurenz Albe wrote:\n> > On Fri, 2021-10-01 at 15:06 -0500, Jeff Holt wrote:\n> > > TLDR; If I spend the time necessary to instrument the many functions that are the equivalent\n> > > of the Oracle counterparts, would anyone pull those changes and use them?\n> > > Specifically, for those who know Oracle, I'm talking about implementing:\n> > >     1. The portion of the ALTER SESSION that enables extended SQL trace\n> > >     2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n> > >     3. Instrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAME\n> > >     4. Dynamic performance view V$DIAG_INFO\n> >\n> > Anything that improves user experience in that respect is welcome, but consider\n> > that each database has different approaches to solve the same problems.\n> > \n> > Before you go to the length of implementing a lot of stuff, check in with\n> > the -hackers list and discuss your ideas.\n> > \n> > Please be a lot more specific than in this e-mail.  While it is certainly\n> > fine to sketch your ambitios vision, focus on one specific thing you can\n> > imagine implementing and come up with a design for that.\n> > \n> > Note that \"Oracle has it\" is not a good enough reason for a PostgreSQL\n> > feature.  We think we can do better than they do (at least in many respects).\n> > Also, don't assume that everyone on the -hackers list will be familiar with\n> > certain PostgreSQL features.\n> > \n> > One think that you should keep in mind is that Oracle has to provide different\n> > features in that area because they are not open source.  In PostgreSQL, I can\n> > simply read the code or attach a debugger to a backend, and when it comes to\n> > profiling, \"perf\" works pretty well.  So there is less need for these things.\n> > \n> > I don't want to discourage you, but contributing to PostgreSQL can be a lengthy\n> > and tedious process.  On the upside, things that make it into core are usually\n> > fairly mature.\n> > \n> \n> Laurenz, you are obviously not aware who are you talking to. Let me \n> introduce you: Cary Millsap and Jeff Holt are authors of the \"Optimizing \n> Oracle for Performance\", one of the most influential books in the entire \n> realm of  Oracle literature.\n\nI have never heard of Jeff Holt, but then there are a lot of wonderful\nand smart people I have never heard of. I tend to be respectful in\nmy conversation, regardless if I know the other person or not.\n\n> Haughty lectures about \"Oracle has it\" not being good enough could \n> hardly be more out of place here.\n\nI have no idea how you arrive at the conclusion that I was delivering\na haughty lecture. 
Somebody asked if PostgreSQL would consider applying\npatches he is ready to write, somebody who seems not to be familiar\nwith the way PostgreSQL development works, so I tried to give helpful\npointers.\n\n> To put it as politely as is possible in this case, shut your pie hole.\n\nI think you have just disqualified yourself from taking part in this\nconversation. I recommend that you don't embarrass Jeff Holt by trying\nto champion him.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 05 Oct 2021 10:26:04 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "Em ter., 5 de out. de 2021 às 01:04, Mladen Gogala <[email protected]>\nescreveu:\n\n> As for Jeff Holt, I believe that a person of his stature needs to be\n> taken seriously and not lectured \"how are things done in Postgres\n> community\". I am rather confused by the thinly veiled hostility toward\n> Oracle. In my opinion, Postgres community should be rather welcoming to\n> Oracle people like Frits Hoogland, Frank Pachot or Jeff Holt.\n>\nI think that you're a little mistaken, the hostility of the \"gurus\" is not\nexactly against Oracle guys,\nbut rather towards anyone who is not a \"committer\".\nJust follow the pgsql-hackers list, and you'll see that newbies are very\nunwelcome,\nwhether they're really newbies like me, or they're really teachers.\n\nregards,\nRanier Vilela\n\nEm ter., 5 de out. de 2021 às 01:04, Mladen Gogala <[email protected]> escreveu:\nAs for Jeff Holt, I believe that a person of his stature needs to be \ntaken seriously and not lectured \"how are things done in Postgres \ncommunity\". I  am rather confused by the thinly veiled hostility toward \nOracle. In my opinion, Postgres community should be rather welcoming to \nOracle people like Frits Hoogland, Frank Pachot or Jeff Holt.I think that you're a little mistaken, the hostility of the \"gurus\" is not exactly against Oracle guys, but rather towards anyone who is not a \"committer\".Just follow the pgsql-hackers list, and you'll see that newbies are very unwelcome, whether they're really newbies like me, or they're really teachers. regards,Ranier Vilela", "msg_date": "Tue, 5 Oct 2021 08:44:13 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "Comments in-line:\n\nOn 10/5/21 04:26, Laurenz Albe wrote:\n> have never heard of Jeff Holt, but then there are a lot of wonderful\n> and smart people I have never heard of. I tend to be respectful in\n> my conversation, regardless if I know the other person or not.\n\nThat much is apparent. However, that's no excuse.\n\n\n>\n>> Haughty lectures about \"Oracle has it\" not being good enough could\n>> hardly be more out of place here.\n> I have no idea how you arrive at the conclusion that I was delivering\n> a haughty lecture. Somebody asked if PostgreSQL would consider applying\n> patches he is ready to write, somebody who seems not to be familiar\n> with the way PostgreSQL development works, so I tried to give helpful\n> pointers.\n\nYour tone of voice did. Plus, you took it on yourself to explain \"how \nthings are done in the Postgres community\".  I always use hints and Josh \nBerkus as an example \"how things are done in the Postgres community\" and \nwhy is the Postgres progress so slow. 
You have just provided me another \nperfect example of the \"community spirit\".\n\n>\n>> To put it as politely as is possible in this case, shut your pie hole.\n> I think you have just disqualified yourself from taking part in this\n> conversation. I recommend that you don't embarrass Jeff Holt by trying\n> to champion him.\nIf you are under impression that I want to take part in a conversation, \nthen you're sorely mistaken. And I have to adjust my style of writing to \nmake things even more clear. As for Jeff, I don't need to 'champion \nhim'. He did that all by himself. In his place, I would simply ignore \nboth this topic and you, Mr. Postgres Community.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nComments in-line:\n\nOn 10/5/21 04:26, Laurenz Albe wrote:\n\n\n have never heard of Jeff Holt, but then there are a lot of wonderful\nand smart people I have never heard of. I tend to be respectful in\nmy conversation, regardless if I know the other person or not.\n\nThat much is apparent. However, that's no excuse.\n\n\n\n\n\n\n\n\nHaughty lectures about \"Oracle has it\" not being good enough could \nhardly be more out of place here.\n\n\nI have no idea how you arrive at the conclusion that I was delivering\na haughty lecture. Somebody asked if PostgreSQL would consider applying\npatches he is ready to write, somebody who seems not to be familiar\nwith the way PostgreSQL development works, so I tried to give helpful\npointers.\n\nYour tone of voice did. Plus, you took it on yourself to explain\n \"how things are done in the Postgres community\".  I always use\n hints and Josh Berkus as an example \"how things are done in the\n Postgres community\" and why is the Postgres progress so slow. You\n have just provided me another perfect example of the \"community\n spirit\".\n\n\n\n\n\n\nTo put it as politely as is possible in this case, shut your pie hole.\n\n\nI think you have just disqualified yourself from taking part in this\nconversation. I recommend that you don't embarrass Jeff Holt by trying\nto champion him.\n\n If you are under impression that I want to take part in a\n conversation, then you're sorely mistaken. And I have to adjust my\n style of writing to make things even more clear. As for Jeff, I\n don't need to 'champion him'. He did that all by himself. In his\n place, I would simply ignore both this topic and you, Mr. Postgres\n Community.\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Tue, 5 Oct 2021 10:41:59 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>\n> I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time.\n\nI have heard of method R. Offhand it seems roughly comparable to\nsomething like the Top-down Microarchitecture Analysis Method that low\nlevel systems programmers sometimes use, along with Intel's pmu-tools\n-- at least at a very high level. The point seems to be to provide a\nworkflow that can plausibly zero in on low-level bottlenecks, by\nproviding high level context. 
Many tricky real world problems are in\nsome sense a high level problem that is disguised as a low level\nproblem. And so all of the pieces need to be present on the board, so\nto speak.\n\nDoes that sound accurate?\n\nOne obvious issue with much of the Postgres instrumentation is that it\nmakes it hard to see how things change over time. I think that that is\noften *way* more informative than static snapshots.\n\nI can see why you'd emphasize the need for PostgreSQL to more or less\nown the end to end experience for something like this. It doesn't\nnecessarily follow that the underlying implementation cannot make use\nof infrastructure like eBPF, though. Fast user space probes provably\nhave no overhead, and can be compiled-in by distros that can support\nit. There hasn't been a consistent effort to make that stuff\navailable, but I doubt that that tells us much about what is possible.\nThe probes that we have today are somewhat of a grab-bag, that aren't\nparticularly useful -- so it's a chicken-and-egg thing.\n\nIt would probably be helpful if you could describe what you feel is\nmissing in more general terms -- while perhaps giving specific\npractical examples of specific scenarios that give us some sense of\nwhat the strengths of the model are. ISTM that it's not so much a lack\nof automation in PostgreSQL. It's more like a lack of a generalized\nmodel, which includes automation, but also some high level top-down\ntheory.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Oct 2021 13:24:14 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "Comments in-line\n\nOn 10/5/21 16:24, Peter Geoghegan wrote:\n> On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>>\n>> I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time.\n> I have heard of method R. Offhand it seems roughly comparable to\n> something like the Top-down Microarchitecture Analysis Method that low\n> level systems programmers sometimes use, along with Intel's pmu-tools\n> -- at least at a very high level. The point seems to be to provide a\n> workflow that can plausibly zero in on low-level bottlenecks, by\n> providing high level context. Many tricky real world problems are in\n> some sense a high level problem that is disguised as a low level\n> problem. And so all of the pieces need to be present on the board, so\n> to speak.\n>\n> Does that sound accurate?\nYes, that is pretty accurate. It is essentially the same method \ndescribed in the \"High Performance Computing\" books. The trick is to \nfigure what the process is waiting for and then reduce the wait times. \nAll computers wait at the same speed.\n> One obvious issue with much of the Postgres instrumentation is that it\n> makes it hard to see how things change over time. I think that that is\n> often *way* more informative than static snapshots.\n>\n> I can see why you'd emphasize the need for PostgreSQL to more or less\n> own the end to end experience for something like this. It doesn't\n> necessarily follow that the underlying implementation cannot make use\n> of infrastructure like eBPF, though. Fast user space probes provably\n> have no overhead, and can be compiled-in by distros that can support\n> it. 
There hasn't been a consistent effort to make that stuff\n> available, but I doubt that that tells us much about what is possible.\n> The probes that we have today are somewhat of a grab-bag, that aren't\n> particularly useful -- so it's a chicken-and-egg thing.\n\nNot exactly. There already is a very good extension for Postgres called \npg_wait_sampling:\n\nhttps://github.com/postgrespro/pg_wait_sampling\n\nWhat is missing here is mostly the documentation. This extension should \nbecome a part of Postgres proper and the events should be documented as \nthey are (mostly) documented for Oracle. Oracle uses trace files \ninstead. However, with Postgres equivalence of files and tables, this is \nnot a big difference.\n\n\n>\n> It would probably be helpful if you could describe what you feel is\n> missing in more general terms -- while perhaps giving specific\n> practical examples of specific scenarios that give us some sense of\n> what the strengths of the model are. ISTM that it's not so much a lack\n> of automation in PostgreSQL. It's more like a lack of a generalized\n> model, which includes automation, but also some high level top-down\n> theory.\n\nI am not Jeff and my opinion is not as valuable and doesn't carry the \nsame weight, by far. However, I do believe that we may not see Jeff Holt \nagain on this group so I am providing my opinion instead. At least I \nwould, in Jeff's place, be reluctant to return to this group.\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 17:27:59 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "Jeff Holt is probably pretty embarrassed there's some blowhard making a\nscene using his name in a casual mailing list thread.\n\nOn Tue, Oct 5, 2021 at 5:28 PM Mladen Gogala <[email protected]>\nwrote:\n\n> Comments in-line\n>\n> On 10/5/21 16:24, Peter Geoghegan wrote:\n> > On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n> >> Now looking closely at postgreSQL, I see an opportunity to more quickly\n> implement Oracle's current feature list.\n> >>\n> >> I've come to this point because I see many roadblocks for users who\n> want to see a detailed \"receipt\" for their response time.\n> > I have heard of method R. Offhand it seems roughly comparable to\n> > something like the Top-down Microarchitecture Analysis Method that low\n> > level systems programmers sometimes use, along with Intel's pmu-tools\n> > -- at least at a very high level. The point seems to be to provide a\n> > workflow that can plausibly zero in on low-level bottlenecks, by\n> > providing high level context. Many tricky real world problems are in\n> > some sense a high level problem that is disguised as a low level\n> > problem. And so all of the pieces need to be present on the board, so\n> > to speak.\n> >\n> > Does that sound accurate?\n> Yes, that is pretty accurate. It is essentially the same method\n> described in the \"High Performance Computing\" books. The trick is to\n> figure what the process is waiting for and then reduce the wait times.\n> All computers wait at the same speed.\n> > One obvious issue with much of the Postgres instrumentation is that it\n> > makes it hard to see how things change over time. 
I think that that is\n> > often *way* more informative than static snapshots.\n> >\n> > I can see why you'd emphasize the need for PostgreSQL to more or less\n> > own the end to end experience for something like this. It doesn't\n> > necessarily follow that the underlying implementation cannot make use\n> > of infrastructure like eBPF, though. Fast user space probes provably\n> > have no overhead, and can be compiled-in by distros that can support\n> > it. There hasn't been a consistent effort to make that stuff\n> > available, but I doubt that that tells us much about what is possible.\n> > The probes that we have today are somewhat of a grab-bag, that aren't\n> > particularly useful -- so it's a chicken-and-egg thing.\n>\n> Not exactly. There already is a very good extension for Postgres called\n> pg_wait_sampling:\n>\n> https://github.com/postgrespro/pg_wait_sampling\n>\n> What is missing here is mostly the documentation. This extension should\n> become a part of Postgres proper and the events should be documented as\n> they are (mostly) documented for Oracle. Oracle uses trace files\n> instead. However, with Postgres equivalence of files and tables, this is\n> not a big difference.\n>\n>\n> >\n> > It would probably be helpful if you could describe what you feel is\n> > missing in more general terms -- while perhaps giving specific\n> > practical examples of specific scenarios that give us some sense of\n> > what the strengths of the model are. ISTM that it's not so much a lack\n> > of automation in PostgreSQL. It's more like a lack of a generalized\n> > model, which includes automation, but also some high level top-down\n> > theory.\n>\n> I am not Jeff and my opinion is not as valuable and doesn't carry the\n> same weight, by far. However, I do believe that we may not see Jeff Holt\n> again on this group so I am providing my opinion instead. At least I\n> would, in Jeff's place, be reluctant to return to this group.\n>\n>\n> --\n> Mladen Gogala\n> Database Consultant\n> Tel: (347) 321-1217\n> https://dbwhisperer.wordpress.com\n>\n>\n>\n>\n\nJeff Holt is probably pretty embarrassed there's some blowhard making a scene using his name in a casual mailing list thread.On Tue, Oct 5, 2021 at 5:28 PM Mladen Gogala <[email protected]> wrote:Comments in-line\n\nOn 10/5/21 16:24, Peter Geoghegan wrote:\n> On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>>\n>> I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time.\n> I have heard of method R. Offhand it seems roughly comparable to\n> something like the Top-down Microarchitecture Analysis Method that low\n> level systems programmers sometimes use, along with Intel's pmu-tools\n> -- at least at a very high level. The point seems to be to provide a\n> workflow that can plausibly zero in on low-level bottlenecks, by\n> providing high level context. Many tricky real world problems are in\n> some sense a high level problem that is disguised as a low level\n> problem. And so all of the pieces need to be present on the board, so\n> to speak.\n>\n> Does that sound accurate?\nYes, that is pretty accurate. It is essentially the same method \ndescribed in the \"High Performance Computing\" books. The trick is to \nfigure what the process is waiting for and then reduce the wait times. 
\nAll computers wait at the same speed.\n> One obvious issue with much of the Postgres instrumentation is that it\n> makes it hard to see how things change over time. I think that that is\n> often *way* more informative than static snapshots.\n>\n> I can see why you'd emphasize the need for PostgreSQL to more or less\n> own the end to end experience for something like this. It doesn't\n> necessarily follow that the underlying implementation cannot make use\n> of infrastructure like eBPF, though. Fast user space probes provably\n> have no overhead, and can be compiled-in by distros that can support\n> it. There hasn't been a consistent effort to make that stuff\n> available, but I doubt that that tells us much about what is possible.\n> The probes that we have today are somewhat of a grab-bag, that aren't\n> particularly useful -- so it's a chicken-and-egg thing.\n\nNot exactly. There already is a very good extension for Postgres called \npg_wait_sampling:\n\nhttps://github.com/postgrespro/pg_wait_sampling\n\nWhat is missing here is mostly the documentation. This extension should \nbecome a part of Postgres proper and the events should be documented as \nthey are (mostly) documented for Oracle. Oracle uses trace files \ninstead. However, with Postgres equivalence of files and tables, this is \nnot a big difference.\n\n\n>\n> It would probably be helpful if you could describe what you feel is\n> missing in more general terms -- while perhaps giving specific\n> practical examples of specific scenarios that give us some sense of\n> what the strengths of the model are. ISTM that it's not so much a lack\n> of automation in PostgreSQL. It's more like a lack of a generalized\n> model, which includes automation, but also some high level top-down\n> theory.\n\nI am not Jeff and my opinion is not as valuable and doesn't carry the \nsame weight, by far. However, I do believe that we may not see Jeff Holt \nagain on this group so I am providing my opinion instead. At least I \nwould, in Jeff's place, be reluctant to return to this group.\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Tue, 5 Oct 2021 20:02:30 -0400", "msg_from": "Tim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On 10/5/21 20:02, Tim wrote:\n\n> Jeff Holt is probably pretty embarrassed there's some blowhard making \n> a scene using his name in a casual mailing list thread.\n\nWow! What a contribution to the discussion! Calling me a blowhard, all \nwhile top-posting at the same time. Your post will be remembered for \ngenerations to come.\n\nOr not. Laurenz will probably tell you that we don't top-post in \nPostgres community. 
He's good with rules, regulations and the way things \nare done in Postgres community.\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 22:39:13 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Mon, Oct 4, 2021 at 08:34:29AM +0200, Laurenz Albe wrote:\n> > Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n> \n> Anything that improves user experience in that respect is welcome, but consider\n> that each database has different approaches to solve the same problems.\n> \n> Before you go to the length of implementing a lot of stuff, check in with\n> the -hackers list and discuss your ideas.\n> \n> Please be a lot more specific than in this e-mail. While it is certainly\n> fine to sketch your ambitios vision, focus on one specific thing you can\n> imagine implementing and come up with a design for that.\n> \n> Note that \"Oracle has it\" is not a good enough reason for a PostgreSQL\n> feature. We think we can do better than they do (at least in many respects).\n> Also, don't assume that everyone on the -hackers list will be familiar with\n> certain PostgreSQL features.\n> \n> One think that you should keep in mind is that Oracle has to provide different\n> features in that area because they are not open source. In PostgreSQL, I can\n> simply read the code or attach a debugger to a backend, and when it comes to\n> profiling, \"perf\" works pretty well. So there is less need for these things.\n> \n> I don't want to discourage you, but contributing to PostgreSQL can be a lengthy\n> and tedious process. On the upside, things that make it into core are usually\n> fairly mature.\n\nJeff, I suggest you consider Laurenz's suggestions above, and try to\nignore the comments from Mladen Gogala, since they are caustic and I\nbelieve unhelpful. Frankly, we rarely have such caustic comments on the\nPostgres email lists.\n\nI have emailed Mladen Gogala privately to discuss this.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 21:09:08 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "On 10/5/21 13:24, Peter Geoghegan wrote:\n> On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>>\n>> I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time.\n> \n> It would probably be helpful if you could describe what you feel is\n> missing in more general terms -- while perhaps giving specific\n> practical examples of specific scenarios that give us some sense of\n> what the strengths of the model are. ISTM that it's not so much a lack\n> of automation in PostgreSQL. It's more like a lack of a generalized\n> model, which includes automation, but also some high level top-down\n> theory.\n\nBack in my oracle days, I formally used method-R on a few consulting\ngigs while working with Hotsos (RIP Gary). 
Method-R is brilliant, and I\nreferenced it in my PostgreSQL user group talk about wait events in PG.\n\nhttps://www.slideshare.net/ardentperf/wait-whats-going-on-inside-my-database-173880246\n\nI'm not the author of Method-R, but I myself would describe it as a\nmethodical approach to consistently solve business problems rooted in\ndatabase performance faster than any other methodical approach, built on\na foundation of wait events, queuing theory and tracing (aka logging).\nBut the most brilliant part is how Cary Millsap's tireless efforts to\nsimplify, automate and educate have made it accessible to ordinary data\nanalysts and project managers all over the world who speak SQL but not C.\n\nPostgreSQL added wait events starting in 9.6 and the last thing that's\nmissing is an integrated way to trace or log them. A simple starting\npoint could be a session-level GUC that enables a hook in\npgstat_report_wait_start() and pgstat_report_wait_end() to just drop\nmessages in the log. These log messages could then easily be processed\nto generate the similar profiles to the ones we used with other\ndatabases. Basically I agree 100% with Jeff that while you can do these\nthings with perf probes or eBPF, there are massive advantages to having\nit baked in the database. With the right tools, this makes session\nprofiling available to regular users (who do their day jobs with excel\nrather than eBPF).\n\nHowever, one problem to watch out for will be whether the existing\nPostgreSQL logging infrastructure can handle this. Probably need higher\nprecision timestamps (I need to check what csvlog has), and it could\nstill be a lot of volume with some lightweight locks. Whereas Oracle had\neach individual process write the wait event trace messages to its own\nfile, today PostgreSQL only supports either the single-system-wide-file\nlogging collector, or syslog which I think can only split to 8\ndestinations (and may be lossy).\n\nThere's another use case where high logging bandwidth could also be\nuseful - temporarily logging all SQL statements to capture workload.\nNext time I see someone take down their production database because the\npgBadger doc said \"log_min_duration_statement = 0\" ... WHY PGBADGER WHY?\n\nAnyway I do hope there will be some improvements in this area with\nPostgreSQL. I'm not much of a C coder but maybe I'll take a swing at it\nsome day!\n\nAnyway, Jeff, nice to see you here - and this is a topic I've thought\nabout a lot too. PostgreSQL is a pretty cool bit of software, and an\neven cooler group of people around it. Hope to see you around some more. :)\n\n-Jeremy\n\n\nPS. \"tracing versus sampling\" was the perpetual debate amongst\nperformance engineers... we could have some good fun debating along\nthose lines too. hold my beer\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Thu, 7 Oct 2021 19:15:39 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Thu, Oct 7, 2021 at 07:15:39PM -0700, Jeremy Schneider wrote:\n> PostgreSQL added wait events starting in 9.6 and the last thing that's\n> missing is an integrated way to trace or log them. A simple starting\n> point could be a session-level GUC that enables a hook in\n> pgstat_report_wait_start() and pgstat_report_wait_end() to just drop\n> messages in the log. 
These log messages could then easily be processed\n> to generate the similar profiles to the ones we used with other\n> databases. Basically I agree 100% with Jeff that while you can do these\n> things with perf probes or eBPF, there are massive advantages to having\n> it baked in the database. With the right tools, this makes session\n> profiling available to regular users (who do their day jobs with excel\n> rather than eBPF).\n\nOur wait events reported in pg_stat_activity are really only a first\nstep --- I always felt it needed an external tool to efficiently collect\nand report those wait events. I don't think the server log is the right\nplace to collect them.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 22:38:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "\nOn 10/7/21 22:15, Jeremy Schneider wrote:\n> On 10/5/21 13:24, Peter Geoghegan wrote:\n>> On Fri, Oct 1, 2021 at 1:06 PM Jeff Holt <[email protected]> wrote:\n>>> Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.\n>>>\n>>> I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time.\n>> It would probably be helpful if you could describe what you feel is\n>> missing in more general terms -- while perhaps giving specific\n>> practical examples of specific scenarios that give us some sense of\n>> what the strengths of the model are. ISTM that it's not so much a lack\n>> of automation in PostgreSQL. It's more like a lack of a generalized\n>> model, which includes automation, but also some high level top-down\n>> theory.\n> Back in my oracle days, I formally used method-R on a few consulting\n> gigs while working with Hotsos (RIP Gary). Method-R is brilliant, and I\n> referenced it in my PostgreSQL user group talk about wait events in PG.\n>\n> https://www.slideshare.net/ardentperf/wait-whats-going-on-inside-my-database-173880246\n>\n> I'm not the author of Method-R, but I myself would describe it as a\n> methodical approach to consistently solve business problems rooted in\n> database performance faster than any other methodical approach, built on\n> a foundation of wait events, queuing theory and tracing (aka logging).\n> But the most brilliant part is how Cary Millsap's tireless efforts to\n> simplify, automate and educate have made it accessible to ordinary data\n> analysts and project managers all over the world who speak SQL but not C.\n>\n> PostgreSQL added wait events starting in 9.6 and the last thing that's\n> missing is an integrated way to trace or log them. A simple starting\n> point could be a session-level GUC that enables a hook in\n> pgstat_report_wait_start() and pgstat_report_wait_end() to just drop\n> messages in the log. These log messages could then easily be processed\n> to generate the similar profiles to the ones we used with other\n> databases. Basically I agree 100% with Jeff that while you can do these\n> things with perf probes or eBPF, there are massive advantages to having\n> it baked in the database. 
With the right tools, this makes session\n> profiling available to regular users (who do their day jobs with excel\n> rather than eBPF).\n>\n> However, one problem to watch out for will be whether the existing\n> PostgreSQL logging infrastructure can handle this. Probably need higher\n> precision timestamps (I need to check what csvlog has), and it could\n> still be a lot of volume with some lightweight locks. Whereas Oracle had\n> each individual process write the wait event trace messages to its own\n> file, today PostgreSQL only supports either the single-system-wide-file\n> logging collector, or syslog which I think can only split to 8\n> destinations (and may be lossy).\n>\n> There's another use case where high logging bandwidth could also be\n> useful - temporarily logging all SQL statements to capture workload.\n> Next time I see someone take down their production database because the\n> pgBadger doc said \"log_min_duration_statement = 0\" ... WHY PGBADGER WHY?\n>\n> Anyway I do hope there will be some improvements in this area with\n> PostgreSQL. I'm not much of a C coder but maybe I'll take a swing at it\n> some day!\n>\n> Anyway, Jeff, nice to see you here - and this is a topic I've thought\n> about a lot too. PostgreSQL is a pretty cool bit of software, and an\n> even cooler group of people around it. Hope to see you around some more. :)\n>\n> -Jeremy\n>\n>\n> PS. \"tracing versus sampling\" was the perpetual debate amongst\n> performance engineers... we could have some good fun debating along\n> those lines too. hold my beer\n>\n>\nHi Jeremy,\n\nThere is an extension which does wait event sampling:\n\nhttps://github.com/postgrespro/pg_wait_sampling\n\nIt's one of the Postgres Pro extensions, I like it a lot. Postgres Pro \nis getting very popular on the Azure cloud. It's essentially Microsoft \nresponse to Aurora. Also EnterpriseDB has the event interface and the \nviews analogous to Oracle: edb$session_wait_history, edb$session_waits \nand edb$system_waits views are implementing the event interface in Edb. \nYou can look them up in the documentation, the documentation is \navailable on the web. The foundation is already laid, what is needed are \nthe finishing touches, like the detailed event documentation. I am \ncurrently engaged in a pilot porting project, porting an application \nfrom Oracle to Postgres.  I was looking into the event interface in \ndetail. And we are testing the EDB as well.  As an Oraclite to Oraclite, \nI have to commend EDB, it's an excellent piece of software, 75% cheaper \nthan Oracle.\n\nI agree with you about the logging capacity. Postgres is very loquacious \nwhen it comes to logging. I love that feature because pgBadger reports \nare even better than the AWR reports. Oracle is very loquacious and \nverbose too. $ORACLE_BASE/diag/rdbms/.../trace is chock full of trace \nfiles plus the alert log, of course. That is why the adrci utility has \nparameters for the automatic cleanup of the traceand core dump files. \nSometimes they did fill the file system.\n\nAs for the \"tracing vs. sampling\" debate, Oracle has both. \nV$ACTIVE_SESSION_HISTORY is a sampling view. Sampling views are more \npractical, especially when there are pooled connections. 
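To give a concrete flavor of the sampling side, pg_wait_sampling exposes
what it collects through a few views. I am quoting the view and column
names from memory, so treat this as a sketch rather than gospel:

    -- rough system-wide wait profile, most frequently sampled events first
    SELECT event_type, event, sum(count) AS samples
    FROM pg_wait_sampling_profile
    GROUP BY event_type, event
    ORDER BY samples DESC
    LIMIT 10;

The extension also ships a function to reset the accumulated profile, so
you can bracket a test run and look only at what happened in between.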
Personally, I \nwould prefer sampling.\n\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 23:35:16 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "\nOn Oct 7, 2021, at 19:38, Bruce Momjian <[email protected]> wrote:\n> \n> On Thu, Oct 7, 2021 at 07:15:39PM -0700, Jeremy Schneider wrote:\n>> PostgreSQL added wait events starting in 9.6 and the last thing that's\n>> missing is an integrated way to trace or log them. A simple starting\n>> point could be a session-level GUC that enables a hook in\n>> pgstat_report_wait_start() and pgstat_report_wait_end() to just drop\n>> messages in the log. These log messages could then easily be processed\n>> to generate the similar profiles to the ones we used with other\n>> databases. Basically I agree 100% with Jeff that while you can do these\n>> things with perf probes or eBPF, there are massive advantages to having\n>> it baked in the database. With the right tools, this makes session\n>> profiling available to regular users (who do their day jobs with excel\n>> rather than eBPF).\n> \n> Our wait events reported in pg_stat_activity are really only a first\n> step --- I always felt it needed an external tool to efficiently collect\n> and report those wait events. I don't think the server log is the right\n> place to collect them.\n\nWhat would you think about adding hooks to the functions I mentioned, if someone wrote an open source extension that could do things with the wait event start/stop times in a preload library?\n\nBut we could use parameters too, that’s another gap. For example - which buffer, object, etc for buffer_content? Which filenode and block for an IO? Which relation OID for a SQL lock? Then you can find which table, whether the hot block is a root or leaf of a btree, etc. This can be done by extending the wait infra to accept two or three arbitrary “informational” parameters, maybe just numeric for efficiency, or maybe string, and each individual wait event can decide what to do with them. We’d want to pass that info out over the hooks too. This is another reason to support wait event tracing in the DB - sometimes it might be difficult to get all the relevant context with a kernel probe on an external tool.\n\n-Jeremy\n\nSent from my TI-83\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 22:22:12 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better,\n consistent instrumentation for postgreSQL using a similar API as Oracle" }, { "msg_contents": "On Thu, Oct 7, 2021 at 10:22:12PM -0700, Jeremy Schneider wrote:\n>\n> On Oct 7, 2021, at 19:38, Bruce Momjian <[email protected]> wrote:\n> > Our wait events reported in pg_stat_activity are really only a first\n> > step --- I always felt it needed an external tool to efficiently\n> > collect and report those wait events. I don't think the server log\n> > is the right place to collect them.\n>\n> What would you think about adding hooks to the functions I mentioned,\n> if someone wrote an open source extension that could do things with\n> the wait event start/stop times in a preload library?\n\n(I am adding Alexander Korotkov to this email since he worked on wait\nevents.)\n\nThe original goal was to implement wait event reporting in a way that\ncould always be enabled, and that was successful. 
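For example, with nothing but the built-in columns you can already take a
quick snapshot of what the active sessions are waiting on:

    SELECT wait_event_type, wait_event, count(*) AS sessions
    FROM pg_stat_activity
    WHERE state = 'active'
      AND pid <> pg_backend_pid()
    GROUP BY wait_event_type, wait_event
    ORDER BY sessions DESC;

That is cheap enough to leave enabled everywhere, which was the goal.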
I thought trying to\ndo anything more than that in the server by default would add\nunacceptable overhead.\n\nSo the big question is how do we build on the wait events we already\nhave? Do we create an external tool, do it internally in the database,\nor a mix? Is additional wait event detail needed and that can be\noptionally enabled? It would be good to see what other tools are using\nwait events to get an idea of what use-cases there are for Postgres.\n\n> But we could use parameters too, that’s another gap. For example\n> - which buffer, object, etc for buffer_content? Which filenode and\n> block for an IO? Which relation OID for a SQL lock? Then you can find\n> which table, whether the hot block is a root or leaf of a btree,\n> etc. This can be done by extending the wait infra to accept two or\n> three arbitrary “informational” parameters, maybe just numeric for\n> efficiency, or maybe string, and each individual wait event can decide\n> what to do with them. We’d want to pass that info out over the hooks\n> too. This is another reason to support wait event tracing in the DB -\n> sometimes it might be difficult to get all the relevant context with a\n> kernel probe on an external tool.\n\nI think a larger question is what value will such information have for\nPostgres users?\n\n> Sent from my TI-83\n\nI was an SR-52 guy in my teens.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 11:12:00 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "On Thu, Oct 7, 2021 at 11:35:16PM -0400, Mladen Gogala wrote:\n> \n> On 10/7/21 22:15, Jeremy Schneider wrote:\n> There is an extension which does wait event sampling:\n> \n> https://github.com/postgrespro/pg_wait_sampling\n> \n> It's one of the Postgres Pro extensions, I like it a lot. Postgres Pro is\n> getting very popular on the Azure cloud. It's essentially Microsoft response\n> to Aurora. Also EnterpriseDB has the event interface and the views analogous\n> to Oracle: edb$session_wait_history, edb$session_waits and edb$system_waits\n> views are implementing the event interface in Edb. You can look them up in\n> the documentation, the documentation is available on the web. The foundation\n> is already laid, what is needed are the finishing touches, like the detailed\n> event documentation. I am currently engaged in a pilot porting project,\n\nAh, this is exactly what I wanted to know --- what people are using the\nevent waits for. Can you tell if these are done all externally, or if\nthey need internal database changes?\n\n> I agree with you about the logging capacity. Postgres is very loquacious\n> when it comes to logging. I love that feature because pgBadger reports are\n> even better than the AWR reports. Oracle is very loquacious and verbose too.\n\nNice, I had not heard that before.\n\n> As for the \"tracing vs. sampling\" debate, Oracle has both.\n> V$ACTIVE_SESSION_HISTORY is a sampling view. Sampling views are more\n> practical, especially when there are pooled connections. Personally, I would\n> prefer sampling.\n\nYes, slide 101 here:\n\n\thttps://momjian.us/main/writings/pgsql/administration.pdf#page=101\n\nshows the Postgres monitoring options for reporting and\nalterting/aggegation. 
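To make "external aggregation" concrete, a crude sketch (the table name
and the polling interval below are made up purely for illustration) would
be to have an outside scheduler poll the raw columns and then summarize a
time window:

    -- one-time setup
    CREATE TABLE wait_event_samples (
        sample_time     timestamptz,
        pid             int,
        wait_event_type text,
        wait_event      text
    );

    -- run every second or so from cron or a small collector daemon
    INSERT INTO wait_event_samples
    SELECT now(), pid, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE state = 'active';

    -- report on the last ten minutes
    SELECT wait_event_type, wait_event, count(*) AS samples
    FROM wait_event_samples
    WHERE sample_time > now() - interval '10 minutes'
    GROUP BY wait_event_type, wait_event
    ORDER BY samples DESC;

None of that has to live in the server; it only needs the raw columns we
already expose.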
Yes, both are needed for wait event, and right\nnow we really don't have either for wait events --- just the raw\ninformation.\n\nHowever, I also need to ask how the wait event information, whether\ntracing or sampling, can be useful for Postgres because that will drive\nthe solution.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 11:21:32 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "Bruce Momjian schrieb am 08.10.2021 um 17:21:\n> However, I also need to ask how the wait event information, whether\n> tracing or sampling, can be useful for Postgres because that will drive\n> the solution.\n\nI guess everyone will use that information in a different way.\n\nWe typically use the AWR reports as a post-mortem analysis tool if\nsomething goes wrong in our application (=customer specific projects)\n\nE.g. if there was a slowdown \"last monday\" or \"saving something took minutes yesterday morning\",\nthen we usually request an AWR report from the time span in question. Quite frequently\nthis already reveals the culprit. If not, we ask them to poke in more detail into v$session_history.\n\nSo in our case it's not really used for active monitoring, but for\nfinding the root cause after the fact.\n\nI don't know how representative this usage is though.\n\nThomas\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 17:28:37 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Fri, Oct 8, 2021 at 05:28:37PM +0200, Thomas Kellerer wrote:\n> Bruce Momjian schrieb am 08.10.2021 um 17:21:\n> > However, I also need to ask how the wait event information, whether\n> > tracing or sampling, can be useful for Postgres because that will drive\n> > the solution.\n> \n> I guess everyone will use that information in a different way.\n> \n> We typically use the AWR reports as a post-mortem analysis tool if\n> something goes wrong in our application (=customer specific projects)\n> \n> E.g. if there was a slowdown \"last monday\" or \"saving something took minutes yesterday morning\",\n> then we usually request an AWR report from the time span in question. Quite frequently\n> this already reveals the culprit. If not, we ask them to poke in more detail into v$session_history.\n> \n> So in our case it's not really used for active monitoring, but for\n> finding the root cause after the fact.\n> \n> I don't know how representative this usage is though.\n\nOK, that's a good usecase, and something that certainly would apply to\nPostgres. 
Don't you often need more than just wait events to find the\ncause, like system memory usage, total I/O, etc?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 11:40:23 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "On Fri, Oct 8, 2021 at 11:40 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Fri, Oct 8, 2021 at 05:28:37PM +0200, Thomas Kellerer wrote:\n> >\n> > We typically use the AWR reports as a post-mortem analysis tool if\n> > something goes wrong in our application (=customer specific projects)\n> >\n> > E.g. if there was a slowdown \"last monday\" or \"saving something took minutes yesterday morning\",\n> > then we usually request an AWR report from the time span in question. Quite frequently\n> > this already reveals the culprit. If not, we ask them to poke in more detail into v$session_history.\n> >\n> > So in our case it's not really used for active monitoring, but for\n> > finding the root cause after the fact.\n> >\n> > I don't know how representative this usage is though.\n>\n> OK, that's a good usecase, and something that certainly would apply to\n> Postgres. Don't you often need more than just wait events to find the\n> cause, like system memory usage, total I/O, etc?\n\nYou usually need a variety of metrics to be able to find what is\nactually causing $random_incident, so the more you can aggregate in\nyour performance tool the better. Wait events are an important piece\nof that puzzle.\n\nAs a quick example for wait events, I recently had to diagnose some\nperformance issue, which turned out to be some process reaching the 64\nsubtransactions with the well known consequences. I had\npg_wait_sampling aggregated metrics available so it was really easy to\nknow that the slowdown was due to that. Knowing what application\nexactly reached those 64 subtransactions is another story.\n\n\n", "msg_date": "Fri, 8 Oct 2021 23:59:03 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "Bruce Momjian schrieb am 08.10.2021 um 17:40:\n>> I guess everyone will use that information in a different way.\n>>\n>> We typically use the AWR reports as a post-mortem analysis tool if\n>> something goes wrong in our application (=customer specific projects)\n>>\n>> E.g. if there was a slowdown \"last monday\" or \"saving something took minutes yesterday morning\",\n>> then we usually request an AWR report from the time span in question. Quite frequently\n>> this already reveals the culprit. If not, we ask them to poke in more detail into v$session_history.\n>>\n>> So in our case it's not really used for active monitoring, but for\n>> finding the root cause after the fact.\n>>\n>> I don't know how representative this usage is though.\n>\n> OK, that's a good usecase, and something that certainly would apply to\n> Postgres. Don't you often need more than just wait events to find the\n> cause, like system memory usage, total I/O, etc?\n\nYes, the AWR report contains that information as well. e.g. 
sorts that spilled\nto disk, shared memory at the start and end, top 10 statements sorted by\ntotal time, individual time, I/O, number of executions, segments (tables)\nthat received the highest I/O (read and write) and so on.\nIt's really huge.\n\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 18:07:17 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "\nOn 10/8/21 11:21, Bruce Momjian wrote:\n> Ah, this is exactly what I wanted to know --- what people are using the\n> event waits for. Can you tell if these are done all externally, or if\n> they need internal database changes?\nWell, the methodology goes like this: we get the slow queries from \npgBadger report and then run explain (analyze, timing, buffers) on the \nquery. If we still cannot figure out how to improve things, we check the \nevents and see what the query is waiting for. After that we may add an \nindex, partition the table, change index structure or do something like \nthat. Unrelated to this discussion, I discovered Bloom extension. Bloom \nindexes are phenomenally useful. I apologize for the digression.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Fri, 8 Oct 2021 12:38:19 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Sun, Oct 10, 2021 at 11:06 PM Jeff Holt <[email protected]> wrote:\n\n> TLDR; If I spend the time necessary to instrument the many functions that\n> are the equivalent of the Oracle counterparts, would anyone pull those\n> changes and use them? Specifically, for those who know Oracle, I'm talking\n> about implementing:\n>\n>\n> 1. The portion of the ALTER SESSION that enables extended SQL trace\n> 2. Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages\n> 3. Instrument the thousand or so functions that are the equivalent of\n> those found in Oracle's V$EVENT_NAME\n> 4. Dynamic performance view V$DIAG_INFO\n>\n> For the last 35 years, I've made my living helping people solve Oracle\n> performance problems by looking at it, which means:\n>\n> Trace a user experience and profile the trace file to (a) reveal where the\n> time has gone and its algorithm and (b) make it easy to imagine the cost of\n> possible solutions as well as the savings in response time or resources.\n>\n> I've even submitted change requests to improve Oracle's tracing features\n> while working for them and since those glorious five years.\n>\n> Now looking closely at postgreSQL, I see an opportunity to more quickly\n> implement Oracle's current feature list.\n>\n> I've come to this point because I see many roadblocks for users who want\n> to see a detailed \"receipt\" for their response time. The biggest roadblock\n> is that without a *lot* of automation, a user of any kind must log into\n> the server and attempt to get the data that are now traditionally child's\n> play for Oracle. The second biggest roadblock I see is the recompilation\n> that is required for the server components (i.e., postgreSQL, operating\n> system). My initial attempts to get anything useful out of postgreSQL were\n> dismal failures and I think it should be infinitely easier.\n>\n> Running either dtrace and eBPF scripts on the server should not be\n> required. 
The instrumentation and the code being instrumented should be\n> tightly coupled. Doing so will allow *anyone* on *any* platform for *any* PostgreSQL\n> version to get a trace file just as easily as people do for Oracle.\n>\n\nI hope this kind of instrumentation will make its way to PostgreSQL one\nday. Knowing where the time is spent changes the performance\ntroubleshooting approach from guess-and-try to a scientific method. This is\nwhat made Linux a valid OS for enterprises, when instrumentation reached\nthe same level as we got on Unix. There's a demand for it in enterprises:\nfor example, EDB Advanced Server implemented timed wait events. I'm sure\nhaving it in open source postgres will help to understand the performance\nissues encountered by users, then helping to improve the database.\nProfiling where the database time is spent should not be reserved to\ncommercial databases. Having the source code visible is not sufficient to\nunderstand what happens in production. Observability should also be there.\n\nThere is a fear in the postgres community that features are implemented\njust because they exist in oracle, and mentioning oracle is often seen\nsuspicious. Probably because of the risk of adding complexity for no user\nvalue. Here, about instrumentation, I think that looking at what Oracle did\nduring 20 years is a good start. Because instrumentation is not an easy\ntask. Some waits are too short to have meaningful timing (the timing itself\nmay take more cpu cycles than the instrumentation itself). Some tasks are\ncritical to be measured. Looking at what Oracle Support implemented in\norder to solve big customer problems can give a good basis. Of course, all\nthis must be adapted for postgres. For example, a write system call may be\na logical or physical write because there's no direct I/O. At least, a\nprecise timing, aggregated to histograms, will help to distinguish which\nwrites were filesystem hits, or storage cache hits, or went to disk. And on\nthe most common platform, the overhead is minimal because getting the\ntimestamp can be done in userspace.\n\nToday, Linux has many tools that were not there when Oracle had to\nimplement wait events. And people may think the Linux tools are sufficient\ntoday. However, getting system call time is not easy in production (strace\nmust attach to the process) and other tools (perf) are only sampling: gives\nan idea but hides the details. Unfortunately, what we have from the OS\ngives interesting clues (for guess and try) but not enough facts (for\nscientific approach).\n\nSo the proposal is great, but there is also the risk of putting a large\neffort in describing the specification and maybe a patch, and that it is\nrejected. It should probably be discussed in the -hackers list (\nhttps://www.postgresql.org/list/pgsql-hackers/) first. And people will\ndislike it because it mentions Oracle. Or people will dislike it because\nthey think this should be reserved to commercial forks. Or because it may\nintroduce too much dependency on the OS. But some others will see the value\nof it. Discussions are good as long as they stay focused on the value of\nthe community project. 
I don't have skills to contribute to the code, but\nwill be happy to expose the need for this instrumentation (profiling time\nspent in database functions or system calls) as I have many examples for it.\n\nOn Sun, Oct 10, 2021 at 11:06 PM Jeff Holt <[email protected]> wrote:TLDR; If I spend the time necessary to instrument the many functions that are the equivalent of the Oracle counterparts, would anyone pull those changes and use them? Specifically, for those who know Oracle, I'm talking about implementing:The portion of the ALTER SESSION that enables extended SQL traceMost of the DBMS_MONITOR and DBMS_APPLICATION_INFO packagesInstrument the thousand or so functions that are the equivalent of those found in Oracle's V$EVENT_NAMEDynamic performance view V$DIAG_INFOFor the last 35 years, I've made my living helping people solve Oracle performance problems by looking at it, which means:Trace a user experience and profile the trace file to (a) reveal where the time has gone and its algorithm and (b) make it easy to imagine the cost of possible solutions as well as the savings in response time or resources.I've even submitted change requests to improve Oracle's tracing features while working for them and since those glorious five years.Now looking closely at postgreSQL, I see an opportunity to more quickly implement Oracle's current feature list.I've come to this point because I see many roadblocks for users who want to see a detailed \"receipt\" for their response time. The biggest roadblock is that without a lot of automation, a user of any kind must log into the server and attempt to get the data that are now traditionally child's play for Oracle. The second biggest roadblock I see is the recompilation that is required for the server components (i.e., postgreSQL, operating system). My initial attempts to get anything useful out of postgreSQL were dismal failures and I think it should be infinitely easier.Running either dtrace and eBPF scripts on the server should not be required. The instrumentation and the code being instrumented should be tightly coupled. Doing so will allow anyone on any platform for any PostgreSQL version to get a trace file just as easily as people do for Oracle.I hope this kind of instrumentation will make its way to PostgreSQL one day. Knowing where the time is spent changes the performance troubleshooting approach from guess-and-try to a scientific method. This is what made Linux a valid OS for enterprises, when instrumentation reached the same level as we got on Unix. There's a demand for it in enterprises: for example, EDB Advanced Server implemented timed wait events. I'm sure having it in open source postgres will help to understand the performance issues encountered by users, then helping to improve the database. Profiling where the database time is spent should not be reserved to commercial databases. Having the source code visible is not sufficient to understand what happens in production. Observability should also be there.There is a fear in the postgres community that features are implemented just because they exist in oracle, and mentioning oracle is often seen suspicious. Probably because of the risk of adding complexity for no user value. Here, about instrumentation, I think that looking at what Oracle did during 20 years is a good start. Because instrumentation is not an easy task. Some waits are too short to have meaningful timing (the timing itself may take more cpu cycles than the instrumentation itself). Some tasks are critical to be measured. 
Looking at what Oracle Support implemented in order to solve big customer problems can give a good basis. Of course, all this must be adapted for postgres. For example, a write system call may be a logical or physical write because there's no direct I/O. At least, a precise timing, aggregated to histograms, will help to distinguish which writes were filesystem hits, or storage cache hits, or went to disk. And on the most common platform, the overhead is minimal because getting the timestamp can be done in userspace.\nToday, Linux has many tools that were not there when Oracle had to implement wait events. And people may think the Linux tools are sufficient today. However, getting system call time is not easy in production (strace must attach to the process) and other tools (perf) are only sampling: gives an idea but hides the details. Unfortunately, what we have from the OS gives interesting clues (for guess and try) but not enough facts (for scientific approach).\nSo the proposal is great, but there is also the risk of putting a large effort in describing the specification and maybe a patch, and that it is rejected. It should probably be discussed in the \n-hackers list (https://www.postgresql.org/list/pgsql-hackers/) first. And people will dislike it because it mentions Oracle. Or people will dislike it because they think this should be reserved to commercial forks. Or because it may introduce too much dependency on the OS. But some others will see the value of it. Discussions are good as long as they stay focused on the value of the community project. I don't have skills to contribute to the code, but will be happy to expose the need for this instrumentation (profiling time spent in database functions or system calls) as I have many examples for it.", "msg_date": "Mon, 11 Oct 2021 00:09:32 +0200", "msg_from": "Franck Pachot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a similar\n API as Oracle" }, { "msg_contents": "On Mon, 2021-10-11 at 00:09 +0200, Franck Pachot wrote:\n> And people will dislike it because it mentions Oracle.\n\nI don't think so.\nWhile \"Oracle has it\" is not a good enough reason for a feature, it\nis certainly no counter-indication.\n\n> Or people will dislike it because they think this should be reserved to commercial forks.\n\nThat is conceivable, but I think most vendors would prefer to have\nthat in standard PostgreSQL rather than having to maintain it on\ntheir own.\n\n> Or because it may introduce too much dependency on the OS.\n\nThat is possible. But I think gettimeofday(2) is portable enough.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 08:54:36 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" }, { "msg_contents": "On Mon, Oct 11, 2021 at 12:09:32AM +0200, Franck Pachot wrote:\n> So the proposal is great, but there is also the risk of putting a large effort\n> in describing the specification and maybe a patch, and that it is rejected. It\n> should probably be discussed in the -hackers list (https://www.postgresql.org/\n> list/pgsql-hackers/) first. And people will dislike it because it mentions\n> Oracle. Or people will dislike it because they think this should be reserved to\n> commercial forks. Or because it may introduce too much dependency on the OS.\n> But some others will see the value of it. 
Discussions are good as long as they\n> stay focused on the value of the community project. I don't have skills to\n> contribute to the code, but will be happy to expose the need for this\n> instrumentation (profiling time spent in database functions or system calls) as\n> I have many examples for it.\n\nI think there are three issues.\n\nFirst, while the community is not _against_ something just because\nOracle has it, we are not in favor of it only becaues Oracle has it. \nThis means you have to make the case for its usefulness independent of\nOracle.\n\nSecond, while we clearly have successful wait event reporting in the\ndatabase server, it is less clear if we want event aggregation,\nper-session summaries, and alerting in the database server, rather than\nin an external project. Postgres has been successful in developing\nbackup, pooling, and failover tooling outside the database, so it is\npossible the answer to this is to create an external project that does\nthis using the wait events in the server. If more database internal\nsupport is needed for that, we can discuss that option with the\ncommunity.\n\nThird, our normal work process is:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\nNot going in this order often leads to backtracking or failure.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 15:04:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better, consistent instrumentation for postgreSQL using a\n similar API as Oracle" } ]
[ { "msg_contents": "Question:\r\n\r\nHow would one troubleshoot this issue in Postgres as to why the delete was running so long? My background is Oracle and there are various statistics I may look at:\r\n• One could estimate the number of logical reads the delete should do based on expected number of rows to delete, expected logical reads against the table per row, expected logical reads against each index per row.\r\n• One could look in V$SQL and see how many logical reads the query was actually doing.\r\n• One could look at V$SESS_IO and see how many logical reads the session was doing.\r\n\r\nIn this case you would see the query was doing way more logical reads that expected and then try and think of scenarios that would cause that.\r\n\r\nHere is what I could see in Postgres:\r\n• When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\r\n• I could not find the query in pg_stat_statements to see how many shared block reads/hits the query was doing to see if the numbers were extremely high. Based on documentation queries do not show up in pg_stat_statements until after they complete.\r\n• pg_stat_activity showed wait_event_type and wait_event were null for the session every time I looked. So the session was continually using CPU.\r\n\r\nI started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing. I then could see the table being deleted from was a child table with a FK pointing to a parent table. Finally I was able to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent. All makes sense after the fact but I'm looking for a more methodical way to come to that conclusion by looking at database statistics.\r\n\r\nAre there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\r\n\r\nThanks\r\n\r\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html\r\n\n\n\n\n\n\n\n\n\n\nQuestion:\n \nHow would one troubleshoot this issue in Postgres as to why the delete was running so long?  My background is Oracle and there are various statistics I may look at:\n\nOne could estimate the number of logical reads the delete should do based on expected number of rows to delete, expected logical reads against the table per row, expected logical reads against each index per row.  One could look in V$SQL and see how many logical reads the query was actually doing.One could look at V$SESS_IO and see how many logical reads the session was doing.\n \nIn this case you would see the query was doing way more logical reads that expected and then try and think of scenarios that would cause that.\n \nHere is what I could see in Postgres:\n\nWhen I did an explain on the delete I could see it was full scanning the table. 
I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.I could not find the query in pg_stat_statements to see how many shared block reads/hits the query was doing to see if the numbers were extremely high.  Based on documentation queries do not show up in pg_stat_statements until after they complete.pg_stat_activity showed wait_event_type and wait_event were null for the session every time I looked.  So the session was continually using CPU.\n \nI started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing.  I then could see the table being deleted from was a child table with a FK pointing to a parent table.  Finally I was able to see that\r\nthe parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent.  All makes sense after the fact but I'm looking for a more methodical way to come to that conclusion by looking at database\r\nstatistics.\n \nAre there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\n \nThanks\n \nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this\r\ne-mail and any attachments. Certain required legal entity disclosures can be accessed on our website:\r\nhttps://www.thomsonreuters.com/en/resources/disclosures.html", "msg_date": "Wed, 6 Oct 2021 18:00:07 +0000", "msg_from": "\"Dirschel, Steve\" <[email protected]>", "msg_from_op": true, "msg_subject": "Troubleshooting a long running delete statement" }, { "msg_contents": "On 10/6/21 14:00, Dirschel, Steve wrote:\n\n> Question:\n> How would one troubleshoot this issue in Postgres as to why the delete \n> was running so long?  My background is Oracle and there are various \n> statistics I may look at:\n>\n> * One could estimate the number of logical reads the delete should\n> do based on expected number of rows to delete, expected logical\n> reads against the table per row, expected logical reads against\n> each index per row.\n> * One could look in V$SQL and see how many logical reads the query\n> was actually doing.\n> * One could look at V$SESS_IO and see how many logical reads the\n> session was doing.\n>\n> In this case you would see the query was doing way more logical reads \n> that expected and then try and think of scenarios that would cause that.\n> Here is what I could see in Postgres:\n>\n> * When I did an explain on the delete I could see it was full\n> scanning the table. I did a full scan of the table interactively\n> in less than 1 second so the long runtime was not due to the full\n> tablescan.\n> * I could not find the query in pg_stat_statements to see how many\n> shared block reads/hits the query was doing to see if the numbers\n> were extremely high.  Based on documentation queries do not show\n> up in pg_stat_statements until after they complete.\n> * pg_stat_activity showed wait_event_type and wait_event were null\n> for the session every time I looked.  So the session was\n> continually using CPU.\n>\n> I started looking at table definitions (indexes, FK's, etc.) and \n> comparing to Oracle and noticed some indexes missing.  I then could \n> see the table being deleted from was a child table with a FK pointing \n> to a parent table.  
Finally I was able to see that the parent table \n> was missing an index on the FK column so for every row being deleted \n> from the child it was full scanning the parent.  All makes sense after \n> the fact but I'm looking for a more methodical way to come to that \n> conclusion by looking at database statistics.\n> Are there other statistics in Postgres I may have looked at to \n> methodically come to the conclusion that the problem was the missing \n> index on the parent FK column?\n> Thanks\n> This e-mail is for the sole use of the intended recipient and contains \n> information that may be privileged and/or confidential. If you are not \n> an intended recipient, please notify the sender by return e-mail and \n> delete this e-mail and any attachments. Certain required legal entity \n> disclosures can be accessed on our website: \n> https://www.thomsonreuters.com/en/resources/disclosures.html\n\n\nHi Steve,\n\nFirst, check whether you have any triggers on the table. The best way of \ndoing it is to use information_schema.triggers. I have seen triggers \nintroduce some \"mysterious\" functionality in Oracle as well. Second, \ncheck constraints. Is the table you're deleting from the parent table of \na foreign key constraint(s)? If the constraints are defined with \"ON \nDELETE CASCADE\", you maybe deleting more than you think. If it is not \ndefined with \"ON DELETE CASCADE\" or \"ON DELETE SET NULL\", you would get \nan error. If that passes the muster, then check the processes doing the \nmost of IO using \"iotop\" or \"atop\". I like the latter. You can then \ncheck what the busy processes are doing using strace -e trace=file and, \nfor good measure, 'perf top\".\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 10/6/21 14:00, Dirschel, Steve wrote:\n\n\n\n\n\n\n\nQuestion:\n \nHow would one troubleshoot this issue in Postgres as to\n why the delete was running so long?  My background is Oracle\n and there are various statistics I may look at:\n\nOne could estimate the number of logical reads the\n delete should do based on expected number of rows to\n delete, expected logical reads against the table per row,\n expected logical reads against each index per row.  \nOne could look in V$SQL and see how many logical reads\n the query was actually doing.\nOne could look at V$SESS_IO and see how many logical\n reads the session was doing.\n\n \nIn this case you would see the query was doing way more\n logical reads that expected and then try and think of\n scenarios that would cause that.\n \nHere is what I could see in Postgres:\n\nWhen I did an explain on the delete I could see it was\n full scanning the table. I did a full scan of the table\n interactively in less than 1 second so the long runtime\n was not due to the full tablescan.\nI could not find the query in pg_stat_statements to see\n how many shared block reads/hits the query was doing to\n see if the numbers were extremely high.  Based on\n documentation queries do not show up in pg_stat_statements\n until after they complete.\npg_stat_activity showed wait_event_type and wait_event\n were null for the session every time I looked.  So the\n session was continually using CPU.\n\n \nI started looking at table definitions (indexes, FK's,\n etc.) and comparing to Oracle and noticed some indexes\n missing.  I then could see the table being deleted from was\n a child table with a FK pointing to a parent table.  
Finally\n I was able to see that\n the parent table was missing an index on the FK column so\n for every row being deleted from the child it was full\n scanning the parent.  All makes sense after the fact but I'm\n looking for a more methodical way to come to that conclusion\n by looking at database\n statistics.\n \nAre there other statistics in Postgres I may have looked\n at to methodically come to the conclusion that the problem\n was the missing index on the parent FK column?\n \nThanks\n \nThis e-mail is for the sole\n use of the intended recipient and contains information\n that may be privileged and/or confidential. If you are not\n an intended recipient, please notify the sender by return\n e-mail and delete this\n e-mail and any attachments. Certain required legal entity\n disclosures can be accessed on our website:\n https://www.thomsonreuters.com/en/resources/disclosures.html\n\n\n\nHi Steve,\nFirst, check whether\n you have any triggers on the table. The best way of doing it\n is to use information_schema.triggers. I have seen triggers\n introduce some \"mysterious\" functionality in Oracle as well.\n Second, check constraints. Is the table you're deleting from\n the parent table of a foreign key constraint(s)? If the\n constraints are defined with \"ON DELETE CASCADE\", you maybe\n deleting more than you think. If it is not defined with \"ON\n DELETE CASCADE\" or \"ON DELETE SET NULL\", you would get an\n error. If that passes the muster, then check the processes\n doing the most of IO using \"iotop\" or \"atop\". I like the\n latter. You can then check what the busy processes are doing\n using strace -e trace=file and, for good measure, 'perf top\".\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Wed, 6 Oct 2021 14:54:59 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On Wed, Oct 06, 2021 at 06:00:07PM +0000, Dirschel, Steve wrote:\n> • When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\n\n> I started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing. I then could see the table being deleted from was a child table with a FK pointing to a parent table. Finally I was able to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent. 
All makes sense after the fact but I'm looking for a more methodical way to come to that conclusion by looking at database statistics.\n> \n> Are there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\n\nI think explain (analyze on) would've helped you.\n\nIf I understand your scenario, it'd look like this:\n\n|postgres=# explain (analyze) delete from t;\n| Delete on t (cost=0.00..145.00 rows=10000 width=6) (actual time=10.124..10.136 rows=0 loops=1)\n| -> Seq Scan on t (cost=0.00..145.00 rows=10000 width=6) (actual time=0.141..2.578 rows=10000 loops=1)\n| Planning Time: 0.484 ms\n| Trigger for constraint u_i_fkey: time=4075.123 calls=10000\n| Execution Time: 4087.764 ms\n\nYou can see the query plan used for the FK trigger with autoexplain.\n\npostgres=*# SET auto_explain.log_min_duration='0s'; SET client_min_messages=debug; SET auto_explain.log_nested_statements=on;\npostgres=*# explain (analyze) delete from t;\n|...\n|Query Text: DELETE FROM ONLY \"public\".\"u\" WHERE $1 OPERATOR(pg_catalog.=) \"i\"\n|Delete on u (cost=0.00..214.00 rows=1 width=6) (actual rows=0 loops=1)\n| Buffers: shared hit=90\n| -> Seq Scan on u (cost=0.00..214.00 rows=1 width=6) (actual rows=1 loops=1)\n| Filter: ($1 = i)\n| Rows Removed by Filter: 8616\n| Buffers: shared hit=89\n|...\n\n\n", "msg_date": "Wed, 6 Oct 2021 14:20:00 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On Wed, 2021-10-06 at 18:00 +0000, Dirschel, Steve wrote:\n> Are there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\n\nYou could use the query from my article to find the missing indexes:\nhttps://www.cybertec-postgresql.com/en/index-your-foreign-key/\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 06 Oct 2021 21:48:45 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On 10/6/21 14:00, Dirschel, Steve wrote:\r\nQuestion:\r\n\r\nHow would one troubleshoot this issue in Postgres as to why the delete was running so long? My background is Oracle and there are various statistics I may look at:\r\n· One could estimate the number of logical reads the delete should do based on expected number of rows to delete, expected logical reads against the table per row, expected logical reads against each index per row.\r\n· One could look in V$SQL and see how many logical reads the query was actually doing.\r\n· One could look at V$SESS_IO and see how many logical reads the session was doing.\r\n\r\nIn this case you would see the query was doing way more logical reads that expected and then try and think of scenarios that would cause that.\r\n\r\nHere is what I could see in Postgres:\r\n· When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\r\n· I could not find the query in pg_stat_statements to see how many shared block reads/hits the query was doing to see if the numbers were extremely high. 
Based on documentation queries do not show up in pg_stat_statements until after they complete.\r\n· pg_stat_activity showed wait_event_type and wait_event were null for the session every time I looked. So the session was continually using CPU.\r\n\r\nI started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing. I then could see the table being deleted from was a child table with a FK pointing to a parent table. Finally I was able to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent. All makes sense after the fact but I'm looking for a more methodical way to come to that conclusion by looking at database statistics.\r\n\r\nAre there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\r\n\r\nThanks\r\n\r\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html\r\n\r\n\r\n\r\nHi Steve,\r\n\r\nFirst, check whether you have any triggers on the table. The best way of doing it is to use information_schema.triggers. I have seen triggers introduce some \"mysterious\" functionality in Oracle as well. Second, check constraints. Is the table you're deleting from the parent table of a foreign key constraint(s)? If the constraints are defined with \"ON DELETE CASCADE\", you maybe deleting more than you think. If it is not defined with \"ON DELETE CASCADE\" or \"ON DELETE SET NULL\", you would get an error. If that passes the muster, then check the processes doing the most of IO using \"iotop\" or \"atop\". I like the latter. You can then check what the busy processes are doing using strace -e trace=file and, for good measure, 'perf top\".\r\n\r\nRegards\r\n\r\n--\r\n\r\nMladen Gogala\r\n\r\nDatabase Consultant\r\n\r\nTel: (347) 321-1217\r\n\r\nhttps://dbwhisperer.wordpress.com<https://urldefense.com/v3/__https:/dbwhisperer.wordpress.com__;!!GFN0sa3rsbfR8OLyAw!N_47EusVVgJfrjPtfvI46dinpPTLwBfl4RygI-qPX8gLb8p6-A2bQvhm19pFKaZBEU1iQwfOLA$>\r\n\r\n\r\n\r\nThanks for the reply and I hope I’m replying to this e-mail correctly at the bottom of the chain. We are running on AWS aurora postgres. I assume strace -e isn’t an option given we don’t have access to the server or are you aware of a method I could still do that without server access?\r\n\r\n\r\n\r\nRegards\r\n\r\nSteve\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \nOn 10/6/21 14:00, Dirschel, Steve wrote:\n\n\nQuestion:\n\n\n \n\n\nHow would one troubleshoot this issue in Postgres as to why the delete was running so long?  My background is Oracle and there are various statistics I may look at:\n\n\n·        \r\nOne could estimate the number of logical reads the delete should do based on expected number of rows to delete, expected logical reads against the table per row, expected logical reads against each index per row. 
\r\n\n\n·        \r\nOne could look in V$SQL and see how many logical reads the query was actually doing.\n\n·        \r\nOne could look at V$SESS_IO and see how many logical reads the session was doing.\n\n \n\n\nIn this case you would see the query was doing way more logical reads that expected and then try and think of scenarios that would cause that.\n\n\n \n\n\nHere is what I could see in Postgres:\n\n\n·        \r\nWhen I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\n\n·        \r\nI could not find the query in pg_stat_statements to see how many shared block reads/hits the query was doing to see if the numbers were extremely high.  Based on documentation queries do not show up in pg_stat_statements until\r\n after they complete.\n\n·        \r\npg_stat_activity showed wait_event_type and wait_event were null for the session every time I looked.  So the session was continually using CPU.\n\n \n\n\nI started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing.  I then could see the table being deleted from was a child table with a FK pointing to a parent\r\n table.  Finally I was able to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent.  All makes sense after the fact but I'm looking for a more methodical way to come to\r\n that conclusion by looking at database statistics.\n\n\n \n\n\nAre there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\n\n\n \n\n\nThanks\n\n\n \n\n\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient,\r\n please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website:\r\nhttps://www.thomsonreuters.com/en/resources/disclosures.html\n\n\n \nHi Steve,\nFirst, check whether you have any triggers on the table. The best way of doing it is to use information_schema.triggers. I have seen triggers introduce some \"mysterious\"\r\n functionality in Oracle as well. Second, check constraints. Is the table you're deleting from the parent table of a foreign key constraint(s)? If the constraints are defined with \"ON DELETE CASCADE\", you maybe deleting more than you think. If it is not defined\r\n with \"ON DELETE CASCADE\" or \"ON DELETE SET NULL\", you would get an error. If that passes the muster, then check the processes doing the most of IO using \"iotop\" or \"atop\". I like the latter. You can then check what the busy processes are doing using strace\r\n -e trace=file and, for good measure, 'perf top\".\nRegards\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n \nThanks for the reply and I hope I’m replying to this e-mail correctly at the bottom of the chain.  We are running on AWS aurora postgres.  
I assume strace -e isn’t an option given we don’t have access to the server or are you aware of a method I could still do that without server access?\n \nRegards\nSteve", "msg_date": "Wed, 6 Oct 2021 20:26:49 +0000", "msg_from": "\"Dirschel, Steve\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [EXT] Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On Wed, Oct 06, 2021 at 06:00:07PM +0000, Dirschel, Steve wrote:\r\n > • When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\r\n\r\n > I started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing. I then could see the table being deleted from was a child table with a FK pointing to a parent table. Finally I was able to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent. All makes sense after the fact but I'm looking for a more methodical way to come to that conclusion by looking at database statistics.\r\n >\r\n > Are there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\r\n\r\n I think explain (analyze on) would've helped you.\r\n\r\n If I understand your scenario, it'd look like this:\r\n\r\n |postgres=# explain (analyze) delete from t; Delete on t\r\n |(cost=0.00..145.00 rows=10000 width=6) (actual time=10.124..10.136 rows=0 loops=1)\r\n | -> Seq Scan on t (cost=0.00..145.00 rows=10000 width=6) (actual\r\n |time=0.141..2.578 rows=10000 loops=1) Planning Time: 0.484 ms Trigger\r\n |for constraint u_i_fkey: time=4075.123 calls=10000 Execution Time:\r\n |4087.764 ms\r\n\r\n You can see the query plan used for the FK trigger with autoexplain.\r\n\r\n postgres=*# SET auto_explain.log_min_duration='0s'; SET client_min_messages=debug; SET auto_explain.log_nested_statements=on;\r\n postgres=*# explain (analyze) delete from t;\r\n |...\r\n |Query Text: DELETE FROM ONLY \"public\".\"u\" WHERE $1 OPERATOR(pg_catalog.=) \"i\"\r\n |Delete on u (cost=0.00..214.00 rows=1 width=6) (actual rows=0 loops=1)\r\n | Buffers: shared hit=90\r\n | -> Seq Scan on u (cost=0.00..214.00 rows=1 width=6) (actual rows=1 loops=1)\r\n | Filter: ($1 = i)\r\n | Rows Removed by Filter: 8616\r\n | Buffers: shared hit=89\r\n |...\r\n\r\n\r\nThanks for the reply and the info above. My question was more directed at how can you troubleshoot the active session running the query. In the examples above you are actually executing the query. If an application is executing the query I can't go in an re-execute it. I also found by trial and error if I do execute it interactively and CTRL C out if it a message is returned which would give hints where to look next:\r\n\r\nERROR: canceling statement due to user request\r\nCONTEXT: SQL statement \"SELECT 1 FROM ONLY \"cf4\".\"category_page\" x WHERE $1::pg_catalog.text OPERATOR(pg_catalog.=) \"prism_guid\"::pg_catalog.text FOR KEY SHARE OF x\"\r\n\r\nThat query above is hitting the parent table and my delete is against the child so that info would be very helpful, but it was only available if I interactively ran it. 
I’m mostly interested in what stats/info is available while it’s running to troubleshoot it.\r\n\r\nRegards\r\nSteve\r\n\r\n\n\n\n\n\n\n\n\n\n\n \n        On Wed, Oct 06, 2021 at 06:00:07PM +0000, Dirschel, Steve wrote:\n        > •       When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.\n \n        > I started looking at table definitions (indexes, FK's, etc.) and comparing to Oracle and noticed some indexes missing.  I then could see the table being deleted from was a child table with a FK pointing to a        parent table.  Finally I was\r\nable to see that the parent table was missing an index on the FK column so for every row being deleted from the child it was full scanning the parent.  All makes sense after the fact but  I'm looking for a more methodical way to come to that conclusion by looking\r\nat database statistics.\n        > \n        > Are there other statistics in Postgres I may have looked at to methodically come to the conclusion that the problem was the missing index on the parent FK column?\n \n        I think explain (analyze on) would've helped you.\n \n        If I understand your scenario, it'd look like this:\n \n        |postgres=# explain (analyze) delete from t;  Delete on t  \n        |(cost=0.00..145.00 rows=10000 width=6) (actual time=10.124..10.136 rows=0 loops=1)\n        |   ->  Seq Scan on t  (cost=0.00..145.00 rows=10000 width=6) (actual \n        |time=0.141..2.578 rows=10000 loops=1)  Planning Time: 0.484 ms  Trigger \n        |for constraint u_i_fkey: time=4075.123 calls=10000  Execution Time: \n        |4087.764 ms\n \n        You can see the query plan used for the FK trigger with autoexplain.\n \n        postgres=*# SET auto_explain.log_min_duration='0s'; SET client_min_messages=debug; SET auto_explain.log_nested_statements=on;\n        postgres=*# explain (analyze) delete from t;\n        |...\n        |Query Text: DELETE FROM ONLY \"public\".\"u\" WHERE $1 OPERATOR(pg_catalog.=) \"i\"\n        |Delete on u  (cost=0.00..214.00 rows=1 width=6) (actual rows=0 loops=1)\n        |  Buffers: shared hit=90\n        |  ->  Seq Scan on u  (cost=0.00..214.00 rows=1 width=6) (actual rows=1 loops=1)\n        |        Filter: ($1 = i)\n        |        Rows Removed by Filter: 8616\n        |        Buffers: shared hit=89\n        |...\n \n \nThanks for the reply and the info above.  My question was more directed at how can you troubleshoot the active session running the query.  In the examples above you are actually executing the query.  If an application is executing the query I can't go\r\nin an re-execute it.  I also found by trial and error if I do execute it interactively and CTRL C out if it a message is returned which would give hints where to look next:\n \nERROR:  canceling statement due to user request\nCONTEXT:  SQL statement \"SELECT 1 FROM ONLY \"cf4\".\"category_page\" x WHERE $1::pg_catalog.text OPERATOR(pg_catalog.=) \"prism_guid\"::pg_catalog.text FOR KEY SHARE OF x\"\n \nThat query above is hitting the parent table and my delete is against the child so that info would be very helpful, but it was only available if I interactively ran it.  I’m mostly interested in what stats/info is available while it’s running to troubleshoot\r\nit.  
\n \nRegards\nSteve", "msg_date": "Wed, 6 Oct 2021 20:32:14 +0000", "msg_from": "\"Dirschel, Steve\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [EXT] Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On 10/6/21 16:26, Dirschel, Steve wrote:\n> Thanks for the reply and I hope I’m replying to this e-mail correctly \n> at the bottom of the chain.\n\n\nHey, it's not me, it's rules and regulations. And that's incredibly \nimportant on this group, or so I was lead to believe :)\n\n> We are running on AWS aurora postgres.  I assume strace -e isn’t an \n> option given we don’t have access to the server or are you aware of a \n> method I could still do that without server access?\n> Regards\n> Steve\nNo, access to the OS is not an option with RDS. However, RDS comes with \nsupport. You will have to contact support. They may use strace for you.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\n\n\nOn 10/6/21 16:26, Dirschel, Steve\n wrote:\n\n\n \nThanks for the reply and I hope I’m replying to this e-mail correctly at the bottom of the chain.  \n\n\n\nHey, it's not me, it's rules and regulations. And that's\n incredibly important on this group, or so I was lead to believe :)\n\n\nWe are running on AWS aurora postgres.  I assume strace -e isn’t an option given we don’t have access to the server or are you aware of a method I could still do that without server access?\n \nRegards\nSteve\n\n No, access to the OS is not an option with RDS. However, RDS comes\n with support. You will have to contact support. They may use strace\n for you.\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Wed, 6 Oct 2021 21:25:55 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [EXT] Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On 10/6/21 16:32, Dirschel, Steve wrote:\n> postgres=# explain (analyze) delete from t;  Delete on t \n\nI would try explain (analyze, timing, buffers). That would also give you \nthe timing of each step so you can figure which one takes the longes.\n\nRegards\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\n\n\nOn 10/6/21 16:32, Dirschel, Steve\n wrote:\n\npostgres=#\n explain (analyze) delete from t;  Delete on t  \nI would try explain (analyze, timing, buffers). That would also\n give you the timing of each step so you can figure which one takes\n the longes.\nRegards\n\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Wed, 6 Oct 2021 21:30:59 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [EXT] Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On Wed, Oct 6, 2021 at 12:00 PM Dirschel, Steve <\[email protected]> wrote:\n\n> Here is what I could see in Postgres:\n>\n> - When I did an explain on the delete I could see it was full scanning\n> the table. 
I did a full scan of the table interactively in less than 1\n> second so the long runtime was not due to the full tablescan.\n>\n>\nIf finding the rows is fast, but actually deleting them is slow and perhaps\nwon't even finish, I would strongly consider adding a where clause such\nthat a small fraction of the deletes would be done (perhaps in a\ntransaction that gets rolled back) and do the explain (analyze, buffers) on\nthat modified command. Yes, the planner may decide to use an index to find\nwhich rows to delete, but if finding the rows was already fast and it is\nthe non-obvious work that we want to profile, then it should be fine to do\n1% of the deletes and see how it performs and where the time goes.\n\n>\n\nOn Wed, Oct 6, 2021 at 12:00 PM Dirschel, Steve <[email protected]> wrote:\n\n\nHere is what I could see in Postgres:\n\nWhen I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second so the long runtime was not due to the full tablescan.If finding the rows is fast, but actually deleting them is slow and perhaps won't even finish, I would strongly consider adding a where clause such that a small fraction of the deletes would be done (perhaps in a transaction that gets rolled back) and do the explain (analyze, buffers) on that modified command. Yes, the planner may decide to use an index to find which rows to delete, but if finding the rows was already fast and it is the non-obvious work that we want to profile, then it should be fine to do 1% of the deletes and see how it performs and where the time goes.", "msg_date": "Wed, 6 Oct 2021 22:44:17 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Troubleshooting a long running delete statement" }, { "msg_contents": "On Wed, Oct 6, 2021 at 1:32 PM Dirschel, Steve <\[email protected]> wrote:\n\n> Thanks for the reply and the info above. My question was more directed at\n> how can you troubleshoot the active session running the query. In the\n> examples above you are actually executing the query. If an application is\n> executing the query I can't go in an re-execute it.\n>\n\nThis won't be immediately useful, but there's been a patch proposed for\nPostgres 15 to allow logging the plan of a running query [1]. Progress\nseems to have stalled a bit, but it seems like there was a fair amount of\ninterest, so I wouldn't count it out yet. If you have thoughts on the\nproposed functionality, I'm sure thoughtful feedback would be appreciated.\n\nThanks,\nMaciek\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/cf8501bcd95ba4d727cbba886ba9eea8%40oss.nttdata.com\n\n\n>\n\nOn Wed, Oct 6, 2021 at 1:32 PM Dirschel, Steve <[email protected]> wrote:\n\n\nThanks for the reply and the info above.  My question was more directed at how can you troubleshoot the active session running the query.  In the examples above you are actually executing the query.  If an application is executing the query I can't go\nin an re-execute it. This won't be immediately useful, but there's been a patch proposed for Postgres 15 to allow logging the plan of a running query [1]. Progress seems to have stalled a bit, but it seems like there was a fair amount of interest, so I wouldn't count it out yet. 
If you have thoughts on the proposed functionality, I'm sure thoughtful feedback would be appreciated.Thanks,Maciek[1]: https://www.postgresql.org/message-id/flat/cf8501bcd95ba4d727cbba886ba9eea8%40oss.nttdata.com", "msg_date": "Wed, 6 Oct 2021 22:15:36 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [EXT] Re: Troubleshooting a long running delete statement" }, { "msg_contents": ">\n>\n>\n> This won't be immediately useful, but there's been a patch proposed for\n> Postgres 15 to allow logging the plan of a running query [1]. Progress\n> seems to have stalled a bit, but it seems like there was a fair amount of\n> interest, so I wouldn't count it out yet. If you have thoughts on the\n> proposed functionality, I'm sure thoughtful feedback would be appreciated.\n>\n>\n>\nDidn't something get into v14 about doing ctd range scans, which would\nallow you to break up a large update/delete into chunks, so you wouldn't do\na full seq scan, but you also would avoid needing an index as a proxy for\nbatching records?\n\nThis won't be immediately useful, but there's been a patch proposed for Postgres 15 to allow logging the plan of a running query [1]. Progress seems to have stalled a bit, but it seems like there was a fair amount of interest, so I wouldn't count it out yet. If you have thoughts on the proposed functionality, I'm sure thoughtful feedback would be appreciated.Didn't something get into v14 about doing ctd range scans, which would allow you to break up a large update/delete into chunks, so you wouldn't do a full seq scan, but you also would avoid needing an index as a proxy for batching records?", "msg_date": "Thu, 7 Oct 2021 15:30:11 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [EXT] Re: Troubleshooting a long running delete statement" } ]
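A quick sketch tying together the advice in the thread above: Laurenz's link points at finding foreign keys on the referencing table that have no supporting index, which is what turns the per-row FK enforcement trigger into a sequential scan. The query below is not the exact one from the linked article, only an approximation in the same spirit — it lists foreign-key constraints whose referencing columns are not covered by the leading columns of any index on that table (the `@>` prefix test ignores column order, so treat hits as candidates to review rather than a verdict):

```
-- Approximate helper: foreign keys whose referencing columns are not
-- covered by the leading columns of any index on the referencing table.
SELECT c.conrelid::regclass  AS referencing_table,
       c.conname             AS fk_constraint,
       c.confrelid::regclass AS referenced_table
FROM pg_constraint c
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          -- take the first cardinality(conkey) index columns and check
          -- that they include every FK column
          AND (i.indkey::int2[])[0:cardinality(c.conkey) - 1] @> c.conkey
      )
ORDER BY 1, 2;
```

Combined with log_lock_waits and the auto_explain.log_nested_statements setting shown earlier in the thread, this is usually enough to catch unindexed foreign keys before the per-row trigger scans show up as a mystery long-running delete in production.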
[ { "msg_contents": "Dear PostgreSQL community,\n\nwe have noticed a severe decrease in performance reading\npg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines\ncompared to earlier versions.\n\n```\nexplain (analyze, buffers, timing)\nSELECT * from pg_catalog.pg_settings where name =\n'standard_conforming_strings';\n```\n\nOn *PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:*\nFunction Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\nwidth=485) (actual time=343.350..343.356 rows=1 loops=1)\n Filter: (name = 'standard_conforming_strings'::text)\n Rows Removed by Filter: 313\nPlanning Time: 0.079 ms\nExecution Time: 343.397 ms\n\nCompare to* PostgreSQL 11.13, compiled by Visual C++ build 1914, 64-bit*:\nFunction Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\nwidth=485) (actual time=0.723..0.728 rows=1 loops=1)\n Filter: (name = 'standard_conforming_strings'::text)\n Rows Removed by Filter: 289\nPlanning Time: 0.125 ms\nExecution Time: 0.796 ms\n\n\nThis is standard installation, the changed parameters are:\n```\nSELECT name, current_setting(name), source\nFROM pg_settings\nWHERE source NOT IN ('default', 'override');\n```\n\nclient_encoding UTF8 client\nDateStyle ISO, YMD client\ndefault_text_search_config pg_catalog.simple session\ndefault_transaction_isolation read committed session\ndynamic_shared_memory_type windows configuration file\nextra_float_digits 3 session\nlc_messages Lithuanian_Lithuania.1257 configuration file\nlc_monetary Lithuanian_Lithuania.1257 configuration file\nlc_numeric Lithuanian_Lithuania.1257 configuration file\nlc_time Lithuanian_Lithuania.1257 configuration file\nlisten_addresses * configuration file\nlog_destination stderr configuration file\nlog_file_mode 0640 configuration file\nlog_timezone Europe/Helsinki configuration file\nlogging_collector on configuration file\nmax_connections 100 configuration file\nmax_stack_depth 2MB environment variable\nmax_wal_size 1GB configuration file\nmin_wal_size 80MB configuration file\nport 5444 configuration file\nsearch_path \"$user\", public session\nshared_buffers 128MB configuration file\nTimeZone Europe/Helsinki client\n\n\nThe slowing down is observed on *MS Windows 10 machines only*. We have pg12\non linux (PostgreSQL 12.6 (Debian 12.6-1.pgdg100+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit ) that doesn't show any\ndecrease in performance.\n\nI've testet different versions and it seems the problem appeared on PG12,\nearlier versions up to PG11 work ok. PG13 also suffers from low reading\nspeed of pg_settings.\n\nThe behaviour is reproduced on 3 different machines (2 virtual and one\nphysical, different hardware).\n\nWhat is the cause of this? 
How to fix the issue?\n\nRegards,\nJulius Tuskenis\n\nDear PostgreSQL community,we have noticed a severe decrease in performance reading pg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines compared to earlier versions.```explain (analyze, buffers, timing) SELECT * from pg_catalog.pg_settings where name = 'standard_conforming_strings';```On PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=343.350..343.356 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.079 msExecution Time: 343.397 msCompare to PostgreSQL 11.13, compiled by Visual C++ build 1914, 64-bit:Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=0.723..0.728 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 289Planning Time: 0.125 msExecution Time: 0.796 msThis is standard installation, the changed parameters are:```SELECT name, current_setting(name), sourceFROM pg_settingsWHERE source NOT IN ('default', 'override');```client_encoding\tUTF8\tclientDateStyle\tISO, YMD\tclientdefault_text_search_config\tpg_catalog.simple\tsessiondefault_transaction_isolation\tread committed\tsessiondynamic_shared_memory_type\twindows\tconfiguration fileextra_float_digits\t3\tsessionlc_messages\tLithuanian_Lithuania.1257\tconfiguration filelc_monetary\tLithuanian_Lithuania.1257\tconfiguration filelc_numeric\tLithuanian_Lithuania.1257\tconfiguration filelc_time\tLithuanian_Lithuania.1257\tconfiguration filelisten_addresses\t*\tconfiguration filelog_destination\tstderr\tconfiguration filelog_file_mode\t0640\tconfiguration filelog_timezone\tEurope/Helsinki\tconfiguration filelogging_collector\ton\tconfiguration filemax_connections\t100\tconfiguration filemax_stack_depth\t2MB\tenvironment variablemax_wal_size\t1GB\tconfiguration filemin_wal_size\t80MB\tconfiguration fileport\t5444\tconfiguration filesearch_path\t\"$user\", public\tsessionshared_buffers\t128MB\tconfiguration fileTimeZone\tEurope/Helsinki\tclientThe slowing down is observed on MS Windows 10 machines only. We have pg12 on linux (PostgreSQL 12.6 (Debian 12.6-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit ) that doesn't show any decrease in performance.I've testet different versions and it seems the problem appeared on PG12, earlier versions up to PG11 work ok. PG13 also suffers from low reading speed of pg_settings.The behaviour is reproduced on 3 different machines (2 virtual and one physical, different hardware).What is the cause of this? How to fix the issue?Regards,Julius Tuskenis", "msg_date": "Fri, 8 Oct 2021 10:01:33 +0300", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "PG 12 slow selects from pg_settings" }, { "msg_contents": "Em sex., 8 de out. 
de 2021 às 04:01, Julius Tuskenis <\[email protected]> escreveu:\n\n> Dear PostgreSQL community,\n>\n> we have noticed a severe decrease in performance reading\n> pg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines\n> compared to earlier versions.\n>\n> ```\n> explain (analyze, buffers, timing)\n> SELECT * from pg_catalog.pg_settings where name =\n> 'standard_conforming_strings';\n> ```\n>\n> On *PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:*\n> Function Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\n> width=485) (actual time=343.350..343.356 rows=1 loops=1)\n> Filter: (name = 'standard_conforming_strings'::text)\n> Rows Removed by Filter: 313\n> Planning Time: 0.079 ms\n> Execution Time: 343.397 ms\n>\nYou can try 12.8 which is available now, there is a dll related fix that\ncan make some improvement.\n\nregards,\nRanier Vilela\n\nEm sex., 8 de out. de 2021 às 04:01, Julius Tuskenis <[email protected]> escreveu:Dear PostgreSQL community,we have noticed a severe decrease in performance reading pg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines compared to earlier versions.```explain (analyze, buffers, timing) SELECT * from pg_catalog.pg_settings where name = 'standard_conforming_strings';```On PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=343.350..343.356 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.079 msExecution Time: 343.397 msYou can try 12.8 which is available now, there is a dll related fix that can make some improvement.regards,Ranier Vilela", "msg_date": "Fri, 8 Oct 2021 08:00:55 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 12 slow selects from pg_settings" }, { "msg_contents": "Thank you, Ranier,\n\nv12.8 has improved the performance\n\nPostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:\n```\nFunction Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\nwidth=485) (actual time=7.122..7.128 rows=1 loops=1)\n Filter: (name = 'standard_conforming_strings'::text)\n Rows Removed by Filter: 313\nPlanning Time: 0.083 ms\nExecution Time: 7.204 ms\n```\n\nWould you please direct me to the change log or some bug report to read in\ndetail what was causing the problem and how it was fixed?\n\nRegards,\nJulius Tuskenis\n\n2021-10-08, pn, 14:01 Ranier Vilela <[email protected]> rašė:\n\n> Em sex., 8 de out. 
de 2021 às 04:01, Julius Tuskenis <\n> [email protected]> escreveu:\n>\n>> Dear PostgreSQL community,\n>>\n>> we have noticed a severe decrease in performance reading\n>> pg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines\n>> compared to earlier versions.\n>>\n>> ```\n>> explain (analyze, buffers, timing)\n>> SELECT * from pg_catalog.pg_settings where name =\n>> 'standard_conforming_strings';\n>> ```\n>>\n>> On *PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:*\n>> Function Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\n>> width=485) (actual time=343.350..343.356 rows=1 loops=1)\n>> Filter: (name = 'standard_conforming_strings'::text)\n>> Rows Removed by Filter: 313\n>> Planning Time: 0.079 ms\n>> Execution Time: 343.397 ms\n>>\n> You can try 12.8 which is available now, there is a dll related fix that\n> can make some improvement.\n>\n> regards,\n> Ranier Vilela\n>\n\nThank you, \n\nRanier, v12.8 has improved the performancePostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:```Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=7.122..7.128 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.083 msExecution Time: 7.204 ms```Would you please direct me to the change log or some bug report to read in detail what was causing the problem and how it was fixed?Regards,Julius Tuskenis2021-10-08, pn, 14:01 Ranier Vilela <[email protected]> rašė:Em sex., 8 de out. de 2021 às 04:01, Julius Tuskenis <[email protected]> escreveu:Dear PostgreSQL community,we have noticed a severe decrease in performance reading pg_catalog.pg_settings table in PostgreSQL 12 on MS Windows 10 machines compared to earlier versions.```explain (analyze, buffers, timing) SELECT * from pg_catalog.pg_settings where name = 'standard_conforming_strings';```On PostgreSQL 12.5, compiled by Visual C++ build 1914, 64-bit:Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=343.350..343.356 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.079 msExecution Time: 343.397 msYou can try 12.8 which is available now, there is a dll related fix that can make some improvement.regards,Ranier Vilela", "msg_date": "Fri, 8 Oct 2021 15:05:57 +0300", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 12 slow selects from pg_settings" }, { "msg_contents": "Em sex., 8 de out. de 2021 às 09:06, Julius Tuskenis <\[email protected]> escreveu:\n\n> Thank you, Ranier,\n>\n> v12.8 has improved the performance\n>\n> PostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:\n> ```\n> Function Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\n> width=485) (actual time=7.122..7.128 rows=1 loops=1)\n> Filter: (name = 'standard_conforming_strings'::text)\n> Rows Removed by Filter: 313\n> Planning Time: 0.083 ms\n> Execution Time: 7.204 ms\n> ```\n>\n> Would you please direct me to the change log or some bug report to read in\n> detail what was causing the problem and how it was fixed?\n>\nThe history is long, but if you want to read.\nhttps://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.net\n\nregards,\nRanier Vilela\n\nEm sex., 8 de out. 
de 2021 às 09:06, Julius Tuskenis <[email protected]> escreveu:Thank you, \n\nRanier, v12.8 has improved the performancePostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:```Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=7.122..7.128 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.083 msExecution Time: 7.204 ms```Would you please direct me to the change log or some bug report to read in detail what was causing the problem and how it was fixed?The history is long, but if you want to read.https://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.netregards,Ranier Vilela", "msg_date": "Fri, 8 Oct 2021 09:50:03 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 12 slow selects from pg_settings" }, { "msg_contents": "> The history is long, but if you want to read.\n>\nhttps://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.net\n\nThank you, Ranier.\n\nIt's amazing how much effort and work that issue caused! Thank You and all\ninvolved!\n\nRegards,\nJulius Tuskenis\n\n2021-10-08, pn, 15:50 Ranier Vilela <[email protected]> rašė:\n\n> Em sex., 8 de out. de 2021 às 09:06, Julius Tuskenis <\n> [email protected]> escreveu:\n>\n>> Thank you, Ranier,\n>>\n>> v12.8 has improved the performance\n>>\n>> PostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:\n>> ```\n>> Function Scan on pg_show_all_settings a (cost=0.00..12.50 rows=5\n>> width=485) (actual time=7.122..7.128 rows=1 loops=1)\n>> Filter: (name = 'standard_conforming_strings'::text)\n>> Rows Removed by Filter: 313\n>> Planning Time: 0.083 ms\n>> Execution Time: 7.204 ms\n>> ```\n>>\n>> Would you please direct me to the change log or some bug report to read\n>> in detail what was causing the problem and how it was fixed?\n>>\n> The history is long, but if you want to read.\n>\n> https://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.net\n>\n> regards,\n> Ranier Vilela\n>\n\n> The history is long, but if you want to read.> https://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.netThank you, Ranier.It's amazing how much effort and work that issue caused! Thank You and all involved!Regards,Julius Tuskenis2021-10-08, pn, 15:50 Ranier Vilela <[email protected]> rašė:Em sex., 8 de out. de 2021 às 09:06, Julius Tuskenis <[email protected]> escreveu:Thank you, \n\nRanier, v12.8 has improved the performancePostgreSQL 12.8, compiled by Visual C++ build 1914, 64-bit:```Function Scan on pg_show_all_settings a  (cost=0.00..12.50 rows=5 width=485) (actual time=7.122..7.128 rows=1 loops=1)  Filter: (name = 'standard_conforming_strings'::text)  Rows Removed by Filter: 313Planning Time: 0.083 msExecution Time: 7.204 ms```Would you please direct me to the change log or some bug report to read in detail what was causing the problem and how it was fixed?The history is long, but if you want to read.https://www.postgresql.org/message-id/flat/7ff352d4-4879-5181-eb89-8a2046f928e6%40dunslane.netregards,Ranier Vilela", "msg_date": "Fri, 8 Oct 2021 16:49:55 +0300", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 12 slow selects from pg_settings" } ]
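One small footnote to the pg_settings thread above: the slow path is the full pg_show_all_settings() function scan behind the view (all ~300 rows), so when client code only needs a single parameter value it can usually avoid the view entirely. current_setting() (or SHOW) does a direct lookup of one GUC; whether that also dodges the Windows-specific slowdown fixed in 12.8 was not verified in the thread, so this is only a narrower workaround sketch, not a substitute for the minor-version upgrade:

```
-- Single value, no scan of the ~300-row settings set:
SELECT current_setting('standard_conforming_strings');
-- or, at the psql prompt:
-- SHOW standard_conforming_strings;

-- The view is still the place to go when the metadata matters:
SELECT name, setting, source
FROM pg_settings
WHERE name = 'standard_conforming_strings';
```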
[ { "msg_contents": "Hi,\nLock contention observed high in PostgreSQLv13.3\nThe source code compiled with GNC(GCCv11.x)\nPostgreSQL version: 13.3\nOperating system: RHEL8.3\nKernel name:4.18.0-305.10.2.el8_4.x86_64\nRAM Size:512GB\nSSD: 1TB\nThe environment used IBM metal and test benchmark environment HammerDbv4.2\nTest case :TPC-C\n\nPerf data for 24vu(TPC-C)\n--------------------------------\n\n 18.99% postgres postgres [.] LWLockAcquire\n 7.09% postgres postgres [.] _bt_compare\n 8.66% postgres postgres [.] LWLockRelease\n 2.28% postgres postgres [.] GetSnapshotData\n 2.25% postgres postgres [.] hash_search_with_hash_value\n 2.11% postgres postgres [.] XLogInsertRecord\n 1.98% postgres postgres [.] PinBuffer\n\n1.Is there a way to tune the lock contention ?\n2.Is any recommendations to tune/reduce the lock contention via\npostgres.conf\n\nPostgres.conf used in Baremetal\n========================\nshared_buffers = 128GB(1/4 th RAM size)\neffective_cachesize=392 GB(1/3 or 75% of RAM size)\nhuge_pages = on\ntemp_buffers = 4000MB\nwork_mem = 4000MB\nmaintenance_work_mem = 512MB\nautovacuum_work_mem = -1\nmax_stack_depth = 7MB\ndynamic_shared_memory_type = posix\nmax_files_per_process = 4000\neffective_io_concurrency = 32\nwal_level = minimal\nsynchronous_commit = off\nwal_buffers = 512MB\ncheckpoint_timeout = 1h\ncheckpoint_completion_target = 1\ncheckpoint_warning = 0\nlog_min_messages = error\nlog_min_error_statement = error\nlog_timezone = 'GB'\nautovacuum = off\ndatestyle = 'iso, dmy'\ntimezone = 'GB'\nlc_messages = 'en_GB.UTF-8'\nlc_monetary = 'en_GB.UTF-8'\nlc_numeric = 'en_GB.UTF-8'\nlc_time = 'en_GB.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 64\nmax_pred_locks_per_transaction = 64\n\nBest Regards\nAnil\n\nHi,Lock contention observed high in PostgreSQLv13.3The source code compiled with GNC(GCCv11.x)PostgreSQL version: 13.3Operating system:   RHEL8.3Kernel name:4.18.0-305.10.2.el8_4.x86_64RAM Size:512GBSSD: 1TBThe environment used IBM metal and test benchmark environment HammerDbv4.2Test case :TPC-CPerf data for 24vu(TPC-C)--------------------------------      18.99%  postgres  postgres            [.] LWLockAcquire     7.09%  postgres  postgres            [.] _bt_compare     8.66%  postgres  postgres            [.] LWLockRelease     2.28%  postgres  postgres            [.] GetSnapshotData     2.25%  postgres  postgres            [.] hash_search_with_hash_value     2.11%  postgres  postgres            [.] XLogInsertRecord     1.98%  postgres  postgres            [.] 
PinBuffer1.Is there a way to tune the lock contention ?2.Is any recommendations to tune/reduce the lock contention via postgres.confPostgres.conf used  in Baremetal========================shared_buffers = 128GB(1/4 th RAM size)effective_cachesize=392 GB(1/3 or 75% of RAM size)                        huge_pages = on               temp_buffers = 4000MB                 work_mem = 4000MB                     maintenance_work_mem = 512MB           autovacuum_work_mem = -1               max_stack_depth = 7MB                 dynamic_shared_memory_type = posix     max_files_per_process = 4000           effective_io_concurrency = 32         wal_level = minimal                   synchronous_commit = off               wal_buffers = 512MB                            checkpoint_timeout = 1h         checkpoint_completion_target = 1       checkpoint_warning = 0         log_min_messages = error               log_min_error_statement = errorlog_timezone = 'GB'autovacuum = off                       datestyle = 'iso, dmy'timezone = 'GB'lc_messages = 'en_GB.UTF-8'           lc_monetary = 'en_GB.UTF-8'           lc_numeric = 'en_GB.UTF-8'             lc_time = 'en_GB.UTF-8'               default_text_search_config = 'pg_catalog.english'max_locks_per_transaction = 64         max_pred_locks_per_transaction = 64Best RegardsAnil", "msg_date": "Tue, 12 Oct 2021 13:05:12 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Lock contention high" }, { "msg_contents": "Hi,\n\nHow many sockets are on motherboard?\nWhat is CPU model and interconnect type (UPI?)?\nCan you share output of \"lscpu\"?\n\nIf you have more than 1 NUMA node it may be worth to run PostgreSQL in \nsingle NUMA node via taskset. It will eliminate access to remote memory \nand speed up processing.\n\nThanks,\n  Michael.\n\nOn 10/12/21 10:35 AM, Ashkil Dighin wrote:\n>\n> Hi,\n> Lock contention observed high in PostgreSQLv13.3\n> The source code compiled with GNC(GCCv11.x)\n> PostgreSQL version: 13.3\n> Operating system:   RHEL8.3\n> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> RAM Size:512GB\n> SSD: 1TB\n> The environment used IBM metal and test benchmark environment HammerDbv4.2\n> Test case :TPC-C\n>\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n>\n>       18.99%  postgres  postgres            [.] LWLockAcquire\n>      7.09%  postgres  postgres            [.] _bt_compare\n>      8.66%  postgres  postgres            [.] LWLockRelease\n>      2.28%  postgres  postgres            [.] GetSnapshotData\n>      2.25%  postgres  postgres            [.] hash_search_with_hash_value\n>      2.11%  postgres  postgres            [.] XLogInsertRecord\n>      1.98%  postgres  postgres            [.] 
PinBuffer\n>\n> 1.Is there a way to tune the lock contention ?\n> 2.Is any recommendations to tune/reduce the lock contention via \n> postgres.conf\n>\n> Postgres.conf used  in Baremetal\n> ========================\n> shared_buffers = 128GB(1/4 th RAM size)\n> effective_cachesize=392 GB(1/3 or 75% of RAM size)\n> huge_pages = on\n> temp_buffers = 4000MB\n> work_mem = 4000MB\n> maintenance_work_mem = 512MB\n> autovacuum_work_mem = -1\n> max_stack_depth = 7MB\n> dynamic_shared_memory_type = posix\n> max_files_per_process = 4000\n> effective_io_concurrency = 32\n> wal_level = minimal\n> synchronous_commit = off\n> wal_buffers = 512MB\n> checkpoint_timeout = 1h\n> checkpoint_completion_target = 1\n> checkpoint_warning = 0\n> log_min_messages = error\n> log_min_error_statement = error\n> log_timezone = 'GB'\n> autovacuum = off\n> datestyle = 'iso, dmy'\n> timezone = 'GB'\n> lc_messages = 'en_GB.UTF-8'\n> lc_monetary = 'en_GB.UTF-8'\n> lc_numeric = 'en_GB.UTF-8'\n> lc_time = 'en_GB.UTF-8'\n> default_text_search_config = 'pg_catalog.english'\n> max_locks_per_transaction = 64\n> max_pred_locks_per_transaction = 64\n>\n> Best Regards\n> Anil\n>\n\n\n\n\n\n\n\n Hi,\n\n How many sockets are on motherboard? \n What is CPU model and interconnect type (UPI?)? \n Can you share output of \"lscpu\"? \n\n If you have more than 1 NUMA node it may be worth to run PostgreSQL\n in single NUMA node via taskset. It will eliminate access to remote\n memory and speed up processing. \n\n Thanks,\n  Michael.\n\nOn 10/12/21 10:35 AM, Ashkil Dighin\n wrote:\n\n\n\n\n\n\n\nHi,\nLock contention observed high in\n PostgreSQLv13.3\nThe source code compiled with\n GNC(GCCv11.x)\nPostgreSQL version: 13.3\n Operating system:   RHEL8.3\nKernel name:4.18.0-305.10.2.el8_4.x86_64\nRAM Size:512GB\nSSD: 1TB\nThe environment used IBM metal and\n test benchmark environment HammerDbv4.2\n Test case :TPC-C\n\n Perf data for 24vu(TPC-C)\n --------------------------------\n\n       18.99%  postgres  postgres            [.]\n LWLockAcquire\n      7.09%  postgres  postgres            [.] _bt_compare\n      8.66%  postgres  postgres            [.]\n LWLockRelease\n      2.28%  postgres  postgres            [.]\n GetSnapshotData\n      2.25%  postgres  postgres            [.]\n hash_search_with_hash_value\n      2.11%  postgres  postgres            [.]\n XLogInsertRecord\n      1.98%  postgres  postgres            [.] 
PinBuffer\n\n\n1.Is there a way to tune the lock\n contention ?\n 2.Is any recommendations to tune/reduce the lock\n contention via postgres.conf\n\n Postgres.conf used  in Baremetal\n ========================\n shared_buffers = 128GB(1/4 th RAM size)\neffective_cachesize=392 GB(1/3 or\n 75% of RAM size)                        \n\nhuge_pages = on               \n temp_buffers = 4000MB                 \n work_mem = 4000MB                     \n maintenance_work_mem = 512MB           \n autovacuum_work_mem = -1               \n max_stack_depth = 7MB                 \n dynamic_shared_memory_type = posix     \n max_files_per_process = 4000           \n effective_io_concurrency = 32         \n wal_level = minimal                   \n synchronous_commit = off               \n wal_buffers = 512MB                            \n checkpoint_timeout = 1h         \n checkpoint_completion_target = 1       \n checkpoint_warning = 0         \n log_min_messages = error               \n log_min_error_statement = error\n log_timezone = 'GB'\n autovacuum = off                       \n datestyle = 'iso, dmy'\n timezone = 'GB'\n lc_messages = 'en_GB.UTF-8'           \n lc_monetary = 'en_GB.UTF-8'           \n lc_numeric = 'en_GB.UTF-8'             \n lc_time = 'en_GB.UTF-8'               \n default_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 64   \n      \n max_pred_locks_per_transaction = 64\n\n\n\nBest Regards\nAnil", "msg_date": "Tue, 12 Oct 2021 13:29:33 +0300", "msg_from": "Mikhail Zhilin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Tue, 2021-10-12 at 13:05 +0530, Ashkil Dighin wrote:\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n> \n>       18.99%  postgres  postgres            [.] LWLockAcquire\n>      7.09%  postgres  postgres            [.] _bt_compare\n>      8.66%  postgres  postgres            [.] LWLockRelease\n>      2.28%  postgres  postgres            [.] GetSnapshotData\n>      2.25%  postgres  postgres            [.] hash_search_with_hash_value\n>      2.11%  postgres  postgres            [.] XLogInsertRecord\n>      1.98%  postgres  postgres            [.] PinBuffer\n> \n> 1.Is there a way to tune the lock contention ?\n\nHow many concurrent sesions are you running?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 16:39:16 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On 10/12/21 03:35, Ashkil Dighin wrote:\n> 1.Is there a way to tune the lock contention ?\n\nLock contention is usually an application issue. Application processes \nare stepping on each other's toes. I have never seen a situation where \nthe database would be slow with managing locks. Postgres uses an \nin-memory queue manager which is, generally speaking, very fast. \nApplications usually do stupid things. I've seen GUI doing \"SELECT FOR \nUPDATE\". And then the operator decided to have lunch. I'll leave the \nrest to your imagination.\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\n\n\nOn 10/12/21 03:35, Ashkil Dighin wrote:\n\n1.Is there a way to tune the lock contention ?\nLock contention is usually an application issue. Application\n processes are stepping on each other's toes. 
I have never seen a\n situation where the database would be slow with managing locks.\n Postgres uses an in-memory queue manager which is, generally\n speaking, very fast. Applications usually do stupid things. I've\n seen GUI doing \"SELECT FOR UPDATE\". And then the operator decided\n to have lunch. I'll leave the rest to your imagination.\n\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Tue, 12 Oct 2021 11:37:20 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Tue, Oct 12, 2021 at 01:05:12PM +0530, Ashkil Dighin wrote:\n> Hi,\n> Lock contention observed high in PostgreSQLv13.3\n> The source code compiled with GNC(GCCv11.x)\n> PostgreSQL version: 13.3\n> Operating system: RHEL8.3\n> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> RAM Size:512GB\n> SSD: 1TB\n> The environment used IBM metal and test benchmark environment HammerDbv4.2\n> Test case :TPC-C\n> \n> Perf data for 24vu(TPC-C)\n> --------------------------------\n> \n> 18.99% postgres postgres [.] LWLockAcquire\n> 7.09% postgres postgres [.] _bt_compare\n> 8.66% postgres postgres [.] LWLockRelease\n...\n> 1.Is there a way to tune the lock contention ?\n> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n\nI think you'd want to find *which* LW locks are being waited on, to see if it's\nsomething that can be easily tuned.\n\nYou can check pg_stat_activity, or maybe create a cronjob to record its content\nfor later analysis.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:48:05 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "> 1.Is there a way to tune the lock contention ?\n> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf\nI think you'd want to find *which* LW locks are being waited on, to see if it's\nsomething that can be easily tuned.\n\nYou can check pg_stat_activity, or maybe create a cronjob to record its content\nfor later analysis.\n\n\nHello,\n\nAlso turn on log_lock_waits so you can evaluate the actual SQL causing \nthe problems in the PG log files.  Thinking ahead, you may want to \nconsider if using advisory locks from the application side of things \nmight be helpful to manage locks in a more pessimistic way.  Also, join \nwith pg_locks table to find out the specific resources that are in \ncontention.\n\nRegards,\nMichael Vitale\n\n\n\n\n\n\n1.Is there a way to tune the lock contention ?\n2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n\nI think you'd want to find *which* LW locks are being waited on, to see if it's\nsomething that can be easily tuned.\n\nYou can check pg_stat_activity, or maybe create a cronjob to record its content\nfor later analysis.\n\n\nHello,\n\n\nAlso turn on log_lock_waits so \nyou can evaluate the actual SQL causing the problems in the PG log \nfiles.  Thinking ahead, you may want to consider if using advisory locks\n from the application side of things might be helpful to manage locks in\n a more pessimistic way.  
Also, join with pg_locks table to find out the specific resources that are\n in contention.\n\nRegards,\nMichael Vitale", "msg_date": "Wed, 13 Oct 2021 14:15:34 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Are you using PostGIS?\n\nIf so, there is an issue with TOAST table locking having these symptoms.\n\n\n---Paul\n\n\nOn Wed, Oct 13, 2021 at 11:15 AM MichaelDBA <[email protected]> wrote:\n\n> 1.Is there a way to tune the lock contention ?\n> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n>\n> I think you'd want to find *which* LW locks are being waited on, to see if it's\n> something that can be easily tuned.\n>\n> You can check pg_stat_activity, or maybe create a cronjob to record its content\n> for later analysis.\n>\n>\n> Hello,\n>\n> Also turn on log_lock_waits so you can evaluate the actual SQL causing\n> the problems in the PG log files. Thinking ahead, you may want to consider\n> if using advisory locks from the application side of things might be\n> helpful to manage locks in a more pessimistic way. Also, join with\n> pg_locks table to find out the specific resources that are in contention.\n>\n> Regards,\n> Michael Vitale\n>\n>\n>\n\nAre you using PostGIS?If so, there is an issue with TOAST table locking having these symptoms.---PaulOn Wed, Oct 13, 2021 at 11:15 AM MichaelDBA <[email protected]> wrote:\n\n1.Is there a way to tune the lock contention ?\n2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n\nI think you'd want to find *which* LW locks are being waited on, to see if it's\nsomething that can be easily tuned.\n\nYou can check pg_stat_activity, or maybe create a cronjob to record its content\nfor later analysis.\n\n\nHello,\n\n\nAlso turn on log_lock_waits so \nyou can evaluate the actual SQL causing the problems in the PG log \nfiles.  Thinking ahead, you may want to consider if using advisory locks\n from the application side of things might be helpful to manage locks in\n a more pessimistic way.  Also, join with pg_locks table to find out the specific resources that are\n in contention.\n\nRegards,\nMichael Vitale", "msg_date": "Wed, 13 Oct 2021 11:57:56 -0700", "msg_from": "Paul Friedman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin <[email protected]> wrote:\n> Lock contention observed high in PostgreSQLv13.3\n> The source code compiled with GNC(GCCv11.x)\n> PostgreSQL version: 13.3\n> Operating system: RHEL8.3\n> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> RAM Size:512GB\n> SSD: 1TB\n> The environment used IBM metal and test benchmark environment HammerDbv4.2\n> Test case :TPC-C\n\nYou didn't say how many TPC-C warehouses you used. In my experience,\npeople sometimes run TPC-C with relatively few, which will tend to\nresult in extreme contention on certain B-Tree leaf pages. (My\nexperiences are with BenchmarkSQL, but I can't imagine HammerDB is too\nmuch different.)\n\nAssuming that's the case here, for you, then it's not clear that you\nhave a real problem. You're really not supposed to run the benchmark\nin that way, per the TPC-C spec, which strictly limits the number of\ntransactions per minute per warehouse -- for better or worse, valid\nresults generally require that you use lots of warehouses to get a\nvery large database (think terabytes). 
If you run the benchmark with\n100 warehouses or less, on a big server, then the contention you'll\nsee will be out of all proportion to what you're ever likely to see in\nthe real world.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:05:35 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Your settings are interesting, I'm curious what the goal is for this\nparticular hammerdb exercise.\n\nA few comments inline\n\n\nOn 10/12/21 00:35, Ashkil Dighin wrote:\n> \n> Postgres.conf used  in Baremetal\n> ========================\n> maintenance_work_mem = 512MB           \n\nonly a half GB memory for autovac? (it will have a mandatory run as soon\nas you hit 200 mil XIDs, seems like you'd want the full max 1GB for it)\n\n> synchronous_commit = off            \n> checkpoint_timeout = 1h         \n> checkpoint_completion_target = 1       \n> checkpoint_warning = 0         \n\ncurious about this, seems you're just looking to understand how much\nthroughput you can get with a config that would not be used on a real system\n\n> autovacuum = off                       \n\ni assume you understand that autovacuum will still run when you hit 200\nmil XIDs. this setting seems incongruent with the previous settings,\nbecause it seemed like you were going for throughput, which generally\nrequires autovacuum to be more aggressive rather than less aggressive.\nassuming the benchmark runs for a properly sufficient length of time,\nthis setting will slow things down because of accumulating bloat.\n\nJust a few opinions, I might be wrong, hope the feedback is helpful. :)\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Wed, 13 Oct 2021 18:54:22 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Wed, Oct 13, 2021 at 6:54 PM Jeremy Schneider\n<[email protected]> wrote:\n> only a half GB memory for autovac? (it will have a mandatory run as soon\n> as you hit 200 mil XIDs, seems like you'd want the full max 1GB for it)\n\nWhile anti-wraparound vacuums will become a problem for TPC-C (unless\nyou tune for it), it's not too sensitive to mwm. You just don't end up\naccumulating too many TIDs to delete from indexes in practice, even\nthough the overhead from VACUUM is a concern. The new autovacuum\ninstrumentation in Postgres 14 makes this far clearer.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:11:41 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi\nCaptured the concurrent session with Netsat and pg-stat-actvity. 
Is the\nprocedure the right way to capture concurrent sesssions in postgresql?\n\nnetstat -a | grep postgres tcp 0 0 0.0.0.0:postgres 0.0.0.0:* LISTEN tcp 0\n0 :postgres :53984 ESTABLISHED tcp 0 0 :postgres :54012 ESTABLISHED tcp 0\n74 :postgres :53998 ESTABLISHED tcp 0 73 :53986 :postgres ESTABLISHED tcp 0\n0 :54004 :postgres ESTABLISHED tcp 0 75 :53990 :postgres ESTABLISHED tcp 0\n0 :postgres :53994 ESTABLISHED tcp 0 0 :postgres :54004 ESTABLISHED tcp 0\n106 :53978 :postgres ESTABLISHED tcp 0 0 :postgres :53972 ESTABLISHED tcp 0\n90 :54000 :postgres ESTABLISHED tcp 0 0 :postgres :54018 ESTABLISHED tcp 0\n0 :54016 :postgres ESTABLISHED tcp 0 0 :postgres :53986 ESTABLISHED tcp 0\n59 :54006 :postgres ESTABLISHED tcp 0 74 :postgres :53982 ESTABLISHED tcp 0\n75 :53994 :postgres ESTABLISHED tcp 0 0 :53970 :postgres ESTABLISHED tcp 0\n0 :postgres :53974 ESTABLISHED tcp 0 76 :53988 :postgres ESTABLISHED tcp 0\n0 :postgres :54008 ESTABLISHED tcp 0 93 :54014 :postgres ESTABLISHED tcp 0\n74 :54012 :postgres ESTABLISHED tcp 0 75 :53972 :postgres ESTABLISHED tcp 0\n76 :54002 :postgres ESTABLISHED tcp 0 68 :postgres :54006 ESTABLISHED tcp 0\n0 :postgres :53978 ESTABLISHED tcp 0 73 :54008 :postgres ESTABLISHED tcp 0\n0 :postgres :53976 ESTABLISHED tcp 0 93 :53974 :postgres ESTABLISHED tcp 0\n59 :53998 :postgres ESTABLISHED tcp 74 0 :53984 :postgres ESTABLISHED tcp 0\n0 :postgres :54014 ESTABLISHED tcp 0 76 :53982 :postgres ESTABLISHED tcp 0\n0 :postgres :54002 ESTABLISHED tcp 0 76 :53996 :postgres ESTABLISHED tcp 0\n0 :postgres :53990 ESTABLISHED tcp 0 59 :53976 :postgres ESTABLISHED tcp 0\n74 :postgres :53996 ESTABLISHED tcp 0 76 :53992 :postgres ESTABLISHED tcp 0\n0 :postgres :54016 ESTABLISHED tcp 0 0 :postgres :54000 ESTABLISHED tcp 0 0\n:postgres :53980 ESTABLISHED tcp 0 77 :53980 :postgres ESTABLISHED tcp 0 74\n:54018 :postgres ESTABLISHED tcp 0 0 :postgres :53970 ESTABLISHED tcp 0 0\n:postgres :53988 ESTABLISHED tcp 0 104 :54010 :postgres ESTABLISHED tcp 0 0\n:postgres :54010 ESTABLISHED tcp 0 0 :postgres :53992 ESTABLISHED tcp6 0 0\n[::]:postgres\n\nSelect pg_stat_activity\n\n\ndatid | datname | pid | leader_pid | usesysid | usename | application_name\n| client_addr | client_hostname | client_port | backend_start | xact_start\n| query_start | state_change | wait_event_type | wait_event | state |\nbackend_xid | backend_xmin | query | backend_type\n-------+----------+---------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+-------------------------------------------------------------------------------------------------+------------------------------\n| | 2092230 | | 10 | postgres | | | | | 2021-10-13 02:41:12.083391-04 | | |\n| Activity | LogicalLauncherMain | | | | | logical replication launcher 16385\n| tpcc | 2092540 | | 16384 | tpcc | | 127.0.0.1 | | 53970 | 2021-10-13\n02:41:57.336031-04 | | 2021-10-13 02:43:58.97025-04 | 2021-10-13\n02:43:58.971538-04 | Client | ClientRead | idle | | | select\nsum(d_next_o_id) from district | client backend 16385 | tpcc | 2092541 | |\n16384 | tpcc | | 127.0.0.1 | | 53972 | 2021-10-13 02:41:57.836054-04 |\n2021-10-13 02:44:04.649045-04 | 2021-10-13 02:44:04.649054-04 | 2021-10-13\n02:44:04.649055-04 | | | active | 11301598 | 11301493 | prepare delivery\n(INTEGER, INTEGER) AS select delivery($1,$2) | 
client backend 16385 | tpcc\n| 2092548 | | 16384 | tpcc | | 127.0.0.1 | | 53974 | 2021-10-13\n02:41:58.336566-04 | 2021-10-13 02:44:04.649153-04 | 2021-10-13\n02:44:04.649163-04 | 2021-10-13 02:44:04.649163-04 | | | active | 11301611\n| 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER)\nas select neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092549\n| | 16384 | tpcc | | 127.0.0.1 | | 53976 | 2021-10-13 02:41:58.836269-04 |\n2021-10-13 02:44:04.649443-04 | 2021-10-13 02:44:04.649454-04 | 2021-10-13\n02:44:04.649454-04 | | | active | | 11301528 | prepare neword (INTEGER,\nINTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |\nclient backend 16385 | tpcc | 2092556 | | 16384 | tpcc | | 127.0.0.1 | |\n53978 | 2021-10-13 02:41:59.336172-04 | 2021-10-13 02:44:04.648817-04 |\n2021-10-13 02:44:04.648827-04 | 2021-10-13 02:44:04.648828-04 | | | active\n| | 11301493 | prepare slev (INTEGER, INTEGER, INTEGER) AS select\nslev($1,$2,$3) | client backend 16385 | tpcc | 2092557 | | 16384 | tpcc | |\n127.0.0.1 | | 53980 | 2021-10-13 02:41:59.83835-04 | 2021-10-13\n02:44:04.649027-04 | 2021-10-13 02:44:04.649036-04 | 2021-10-13\n02:44:04.649036-04 | | | active | | 11301493 | prepare slev (INTEGER,\nINTEGER, INTEGER) AS select slev($1,$2,$3) | client backend 16385 | tpcc |\n2092564 | | 16384 | tpcc | | 127.0.0.1 | | 53982 | 2021-10-13\n02:42:00.336974-04 | 2021-10-13 02:44:04.649194-04 | 2021-10-13\n02:44:04.649203-04 | 2021-10-13 02:44:04.649203-04 | | | active | 11301619\n| 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER)\nas select neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092565\n| | 16384 | tpcc | | 127.0.0.1 | | 53984 | 2021-10-13 02:42:00.838269-04 |\n2021-10-13 02:44:04.649441-04 | 2021-10-13 02:44:04.649452-04 | 2021-10-13\n02:44:04.649453-04 | | | active | | 11301528 | prepare neword (INTEGER,\nINTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |\nclient backend 16385 | tpcc | 2092572 | | 16384 | tpcc | | 127.0.0.1 | |\n53986 | 2021-10-13 02:42:01.337933-04 | 2021-10-13 02:44:04.648136-04 |\n2021-10-13 02:44:04.648144-04 | 2021-10-13 02:44:04.648144-04 | | | active\n| 11301528 | 11301396 | prepare delivery (INTEGER, INTEGER) AS select\ndelivery($1,$2) | client backend 16385 | tpcc | 2092573 | | 16384 | tpcc |\n| 127.0.0.1 | | 53988 | 2021-10-13 02:42:01.839434-04 | 2021-10-13\n02:44:04.648999-04 | 2021-10-13 02:44:04.649007-04 | 2021-10-13\n02:44:04.649007-04 | LWLock | ProcArray | active | 11301596 | 11301493 |\nprepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092580 | | 16384\n| tpcc | | 127.0.0.1 | | 53990 | 2021-10-13 02:42:02.339335-04 | 2021-10-13\n02:44:04.649463-04 | 2021-10-13 02:44:04.649474-04 | 2021-10-13\n02:44:04.649474-04 | | | active | | 11301528 | prepare neword (INTEGER,\nINTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |\nclient backend 16385 | tpcc | 2092581 | | 16384 | tpcc | | 127.0.0.1 | |\n53992 | 2021-10-13 02:42:02.838867-04 | 2021-10-13 02:44:04.649161-04 |\n2021-10-13 02:44:04.64917-04 | 2021-10-13 02:44:04.64917-04 | | | active |\n11301616 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER,\nINTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc |\n2092588 | | 16384 | tpcc | | 127.0.0.1 | | 53994 | 2021-10-13\n02:42:03.343136-04 | 2021-10-13 02:44:04.64934-04 | 2021-10-13\n02:44:04.649351-04 | 2021-10-13 02:44:04.649352-04 | | 
| active | |\n11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as\nselect neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092589 | |\n16384 | tpcc | | 127.0.0.1 | | 53996 | 2021-10-13 02:42:03.839278-04 |\n2021-10-13 02:44:04.648822-04 | 2021-10-13 02:44:04.648834-04 | 2021-10-13\n02:44:04.648834-04 | | | active | | | prepare neword (INTEGER, INTEGER,\nINTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client\nbackend 16385 | tpcc | 2092596 | | 16384 | tpcc | | 127.0.0.1 | | 53998 |\n2021-10-13 02:42:04.34021-04 | 2021-10-13 02:44:04.649134-04 | 2021-10-13\n02:44:04.649143-04 | 2021-10-13 02:44:04.649144-04 | | | active | 11301614\n| 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER)\nas select neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092597\n| | 16384 | tpcc | | 127.0.0.1 | | 54000 | 2021-10-13 02:42:04.840163-04 |\n2021-10-13 02:44:04.649429-04 | 2021-10-13 02:44:04.649438-04 | 2021-10-13\n02:44:04.649438-04 | | | active | | 11301528 | prepare delivery (INTEGER,\nINTEGER) AS select delivery($1,$2) | client backend 16385 | tpcc | 2092604\n| | 16384 | tpcc | | 127.0.0.1 | | 54002 | 2021-10-13 02:42:05.340832-04 |\n2021-10-13 02:44:04.649156-04 | 2021-10-13 02:44:04.649166-04 | 2021-10-13\n02:44:04.649166-04 | LWLock | WALInsert | active | 11301618 | 11301493 |\nprepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092605 | | 16384\n| tpcc | | 127.0.0.1 | | 54004 | 2021-10-13 02:42:05.841658-04 | 2021-10-13\n02:44:04.649089-04 | 2021-10-13 02:44:04.649099-04 | 2021-10-13\n02:44:04.6491-04 | | | active | 11301608 | 11301493 | prepare neword\n(INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092612 | | 16384\n| tpcc | | 127.0.0.1 | | 54006 | 2021-10-13 02:42:06.342751-04 | 2021-10-13\n02:44:04.649428-04 | 2021-10-13 02:44:04.649437-04 | 2021-10-13\n02:44:04.649437-04 | | | active | | 11301528 | prepare delivery (INTEGER,\nINTEGER) AS select delivery($1,$2) | client backend 16385 | tpcc | 2092613\n| | 16384 | tpcc | | 127.0.0.1 | | 54008 | 2021-10-13 02:42:06.841509-04 |\n2021-10-13 02:44:04.649237-04 | 2021-10-13 02:44:04.649249-04 | 2021-10-13\n02:44:04.649249-04 | | | active | 11301622 | 11301493 | prepare neword\n(INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092620 | | 16384\n| tpcc | | 127.0.0.1 | | 54010 | 2021-10-13 02:42:07.341743-04 | 2021-10-13\n02:44:04.648736-04 | 2021-10-13 02:44:04.648746-04 | 2021-10-13\n02:44:04.648746-04 | | | active | 11301580 | 11301493 | prepare neword\n(INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092621 | | 16384\n| tpcc | | 127.0.0.1 | | 54012 | 2021-10-13 02:42:07.841876-04 | 2021-10-13\n02:44:04.648983-04 | 2021-10-13 02:44:04.648991-04 | 2021-10-13\n02:44:04.648991-04 | | | active | 11301600 | 11301493 | prepare neword\n(INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select\nneword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092628 | | 16384\n| tpcc | | 127.0.0.1 | | 54014 | 2021-10-13 02:42:08.342179-04 | 2021-10-13\n02:44:04.649464-04 | 2021-10-13 02:44:04.649473-04 | 2021-10-13\n02:44:04.649474-04 | | | active | | 11301528 | prepare neword (INTEGER,\nINTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |\nclient backend 16385 | tpcc | 2092629 | | 16384 | tpcc | | 
127.0.0.1 | |\n54016 | 2021-10-13 02:42:08.845321-04 | 2021-10-13 02:44:04.649456-04 |\n2021-10-13 02:44:04.649472-04 | 2021-10-13 02:44:04.649472-04 | | | active\n| | 11301528 | prepare slev (INTEGER, INTEGER, INTEGER) AS select\nslev($1,$2,$3) | client backend 16385 | tpcc | 2092636 | | 16384 | tpcc | |\n127.0.0.1 | | 54018 | 2021-10-13 02:42:09.341768-04 | 2021-10-13\n02:44:04.649394-04 | 2021-10-13 02:44:04.649404-04 | 2021-10-13\n02:44:04.649404-04 | | | active | | 11301528 | prepare neword (INTEGER,\nINTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |\nclient backend 12711 | postgres | 2093365 | | 10 | postgres | psql | | | -1\n| 2021-10-13 02:44:04.64633-04 | 2021-10-13 02:44:04.648186-04 | 2021-10-13\n02:44:04.648186-04 | 2021-10-13 02:44:04.648186-04 | | | active | |\n11301528 | select * from pg_stat_activity; | client backend | | 2092227 | |\n| | | | | | 2021-10-13 02:41:12.082448-04 | | | | Activity | BgWriterMain |\n| | | | background writer | | 2092226 | | | | | | | | 2021-10-13\n02:41:12.081979-04 | | | | Activity | CheckpointerMain | | | | |\ncheckpointer | | 2092228 | | | |\n\n\n\nOn Tuesday, October 12, 2021, Laurenz Albe <[email protected]> wrote:\n\n> On Tue, 2021-10-12 at 13:05 +0530, Ashkil Dighin wrote:\n> > Perf data for 24vu(TPC-C)\n> > --------------------------------\n> >\n> > 18.99% postgres postgres [.] LWLockAcquire\n> > 7.09% postgres postgres [.] _bt_compare\n> > 8.66% postgres postgres [.] LWLockRelease\n> > 2.28% postgres postgres [.] GetSnapshotData\n> > 2.25% postgres postgres [.] hash_search_with_hash_value\n> > 2.11% postgres postgres [.] XLogInsertRecord\n> > 1.98% postgres postgres [.] PinBuffer\n> >\n> > 1.Is there a way to tune the lock contention ?\n>\n> How many concurrent sesions are you running?\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nHiCaptured the concurrent session with Netsat and pg-stat-actvity. 
Is the procedure the right way to capture concurrent sesssions in postgresql?netstat -a | grep postgres\ntcp 0 0 0.0.0.0:postgres 0.0.0.0:* LISTEN\ntcp 0 0 :postgres :53984 ESTABLISHED\ntcp 0 0 :postgres :54012 ESTABLISHED\ntcp 0 74 :postgres :53998 ESTABLISHED\ntcp 0 73 :53986 :postgres ESTABLISHED\ntcp 0 0 :54004 :postgres ESTABLISHED\ntcp 0 75 :53990 :postgres ESTABLISHED\ntcp 0 0 :postgres :53994 ESTABLISHED\ntcp 0 0 :postgres :54004 ESTABLISHED\ntcp 0 106 :53978 :postgres ESTABLISHED\ntcp 0 0 :postgres :53972 ESTABLISHED\ntcp 0 90 :54000 :postgres ESTABLISHED\ntcp 0 0 :postgres :54018 ESTABLISHED\ntcp 0 0 :54016 :postgres ESTABLISHED\ntcp 0 0 :postgres :53986 ESTABLISHED\ntcp 0 59 :54006 :postgres ESTABLISHED\ntcp 0 74 :postgres :53982 ESTABLISHED\ntcp 0 75 :53994 :postgres ESTABLISHED\ntcp 0 0 :53970 :postgres ESTABLISHED\ntcp 0 0 :postgres :53974 ESTABLISHED\ntcp 0 76 :53988 :postgres ESTABLISHED\ntcp 0 0 :postgres :54008 ESTABLISHED\ntcp 0 93 :54014 :postgres ESTABLISHED\ntcp 0 74 :54012 :postgres ESTABLISHED\ntcp 0 75 :53972 :postgres ESTABLISHED\ntcp 0 76 :54002 :postgres ESTABLISHED\ntcp 0 68 :postgres :54006 ESTABLISHED\ntcp 0 0 :postgres :53978 ESTABLISHED\ntcp 0 73 :54008 :postgres ESTABLISHED\ntcp 0 0 :postgres :53976 ESTABLISHED\ntcp 0 93 :53974 :postgres ESTABLISHED\ntcp 0 59 :53998 :postgres ESTABLISHED\ntcp 74 0 :53984 :postgres ESTABLISHED\ntcp 0 0 :postgres :54014 ESTABLISHED\ntcp 0 76 :53982 :postgres ESTABLISHED\ntcp 0 0 :postgres :54002 ESTABLISHED\ntcp 0 76 :53996 :postgres ESTABLISHED\ntcp 0 0 :postgres :53990 ESTABLISHED\ntcp 0 59 :53976 :postgres ESTABLISHED\ntcp 0 74 :postgres :53996 ESTABLISHED\ntcp 0 76 :53992 :postgres ESTABLISHED\ntcp 0 0 :postgres :54016 ESTABLISHED\ntcp 0 0 :postgres :54000 ESTABLISHED\ntcp 0 0 :postgres :53980 ESTABLISHED\ntcp 0 77 :53980 :postgres ESTABLISHED\ntcp 0 74 :54018 :postgres ESTABLISHED\ntcp 0 0 :postgres :53970 ESTABLISHED\ntcp 0 0 :postgres :53988 ESTABLISHED\ntcp 0 104 :54010 :postgres ESTABLISHED\ntcp 0 0 :postgres :54010 ESTABLISHED\ntcp 0 0 :postgres :53992 ESTABLISHED\ntcp6 0 0 [::]:postgres Select pg_stat_activity \ndatid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query | backend_type\n-------+----------+---------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+-------------------------------------------------------------------------------------------------+------------------------------\n | | 2092230 | | 10 | postgres | | | | | 2021-10-13 02:41:12.083391-04 | | | | Activity | LogicalLauncherMain | | | | | logical replication launcher\n 16385 | tpcc | 2092540 | | 16384 | tpcc | | 127.0.0.1 | | 53970 | 2021-10-13 02:41:57.336031-04 | | 2021-10-13 02:43:58.97025-04 | 2021-10-13 02:43:58.971538-04 | Client | ClientRead | idle | | | select sum(d_next_o_id) from district | client backend\n 16385 | tpcc | 2092541 | | 16384 | tpcc | | 127.0.0.1 | | 53972 | 2021-10-13 02:41:57.836054-04 | 2021-10-13 02:44:04.649045-04 | 2021-10-13 02:44:04.649054-04 | 2021-10-13 02:44:04.649055-04 | | | active | 11301598 | 11301493 | prepare delivery (INTEGER, INTEGER) AS select 
delivery($1,$2) | client backend\n 16385 | tpcc | 2092548 | | 16384 | tpcc | | 127.0.0.1 | | 53974 | 2021-10-13 02:41:58.336566-04 | 2021-10-13 02:44:04.649153-04 | 2021-10-13 02:44:04.649163-04 | 2021-10-13 02:44:04.649163-04 | | | active | 11301611 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092549 | | 16384 | tpcc | | 127.0.0.1 | | 53976 | 2021-10-13 02:41:58.836269-04 | 2021-10-13 02:44:04.649443-04 | 2021-10-13 02:44:04.649454-04 | 2021-10-13 02:44:04.649454-04 | | | active | | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092556 | | 16384 | tpcc | | 127.0.0.1 | | 53978 | 2021-10-13 02:41:59.336172-04 | 2021-10-13 02:44:04.648817-04 | 2021-10-13 02:44:04.648827-04 | 2021-10-13 02:44:04.648828-04 | | | active | | 11301493 | prepare slev (INTEGER, INTEGER, INTEGER) AS select slev($1,$2,$3) | client backend\n 16385 | tpcc | 2092557 | | 16384 | tpcc | | 127.0.0.1 | | 53980 | 2021-10-13 02:41:59.83835-04 | 2021-10-13 02:44:04.649027-04 | 2021-10-13 02:44:04.649036-04 | 2021-10-13 02:44:04.649036-04 | | | active | | 11301493 | prepare slev (INTEGER, INTEGER, INTEGER) AS select slev($1,$2,$3) | client backend\n 16385 | tpcc | 2092564 | | 16384 | tpcc | | 127.0.0.1 | | 53982 | 2021-10-13 02:42:00.336974-04 | 2021-10-13 02:44:04.649194-04 | 2021-10-13 02:44:04.649203-04 | 2021-10-13 02:44:04.649203-04 | | | active | 11301619 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092565 | | 16384 | tpcc | | 127.0.0.1 | | 53984 | 2021-10-13 02:42:00.838269-04 | 2021-10-13 02:44:04.649441-04 | 2021-10-13 02:44:04.649452-04 | 2021-10-13 02:44:04.649453-04 | | | active | | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092572 | | 16384 | tpcc | | 127.0.0.1 | | 53986 | 2021-10-13 02:42:01.337933-04 | 2021-10-13 02:44:04.648136-04 | 2021-10-13 02:44:04.648144-04 | 2021-10-13 02:44:04.648144-04 | | | active | 11301528 | 11301396 | prepare delivery (INTEGER, INTEGER) AS select delivery($1,$2) | client backend\n 16385 | tpcc | 2092573 | | 16384 | tpcc | | 127.0.0.1 | | 53988 | 2021-10-13 02:42:01.839434-04 | 2021-10-13 02:44:04.648999-04 | 2021-10-13 02:44:04.649007-04 | 2021-10-13 02:44:04.649007-04 | LWLock | ProcArray | active | 11301596 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092580 | | 16384 | tpcc | | 127.0.0.1 | | 53990 | 2021-10-13 02:42:02.339335-04 | 2021-10-13 02:44:04.649463-04 | 2021-10-13 02:44:04.649474-04 | 2021-10-13 02:44:04.649474-04 | | | active | | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092581 | | 16384 | tpcc | | 127.0.0.1 | | 53992 | 2021-10-13 02:42:02.838867-04 | 2021-10-13 02:44:04.649161-04 | 2021-10-13 02:44:04.64917-04 | 2021-10-13 02:44:04.64917-04 | | | active | 11301616 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092588 | | 16384 | tpcc | | 127.0.0.1 | | 53994 | 2021-10-13 02:42:03.343136-04 | 2021-10-13 02:44:04.64934-04 | 2021-10-13 02:44:04.649351-04 | 2021-10-13 02:44:04.649352-04 | | | active 
| | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092589 | | 16384 | tpcc | | 127.0.0.1 | | 53996 | 2021-10-13 02:42:03.839278-04 | 2021-10-13 02:44:04.648822-04 | 2021-10-13 02:44:04.648834-04 | 2021-10-13 02:44:04.648834-04 | | | active | | | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092596 | | 16384 | tpcc | | 127.0.0.1 | | 53998 | 2021-10-13 02:42:04.34021-04 | 2021-10-13 02:44:04.649134-04 | 2021-10-13 02:44:04.649143-04 | 2021-10-13 02:44:04.649144-04 | | | active | 11301614 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092597 | | 16384 | tpcc | | 127.0.0.1 | | 54000 | 2021-10-13 02:42:04.840163-04 | 2021-10-13 02:44:04.649429-04 | 2021-10-13 02:44:04.649438-04 | 2021-10-13 02:44:04.649438-04 | | | active | | 11301528 | prepare delivery (INTEGER, INTEGER) AS select delivery($1,$2) | client backend\n 16385 | tpcc | 2092604 | | 16384 | tpcc | | 127.0.0.1 | | 54002 | 2021-10-13 02:42:05.340832-04 | 2021-10-13 02:44:04.649156-04 | 2021-10-13 02:44:04.649166-04 | 2021-10-13 02:44:04.649166-04 | LWLock | WALInsert | active | 11301618 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092605 | | 16384 | tpcc | | 127.0.0.1 | | 54004 | 2021-10-13 02:42:05.841658-04 | 2021-10-13 02:44:04.649089-04 | 2021-10-13 02:44:04.649099-04 | 2021-10-13 02:44:04.6491-04 | | | active | 11301608 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092612 | | 16384 | tpcc | | 127.0.0.1 | | 54006 | 2021-10-13 02:42:06.342751-04 | 2021-10-13 02:44:04.649428-04 | 2021-10-13 02:44:04.649437-04 | 2021-10-13 02:44:04.649437-04 | | | active | | 11301528 | prepare delivery (INTEGER, INTEGER) AS select delivery($1,$2) | client backend\n 16385 | tpcc | 2092613 | | 16384 | tpcc | | 127.0.0.1 | | 54008 | 2021-10-13 02:42:06.841509-04 | 2021-10-13 02:44:04.649237-04 | 2021-10-13 02:44:04.649249-04 | 2021-10-13 02:44:04.649249-04 | | | active | 11301622 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092620 | | 16384 | tpcc | | 127.0.0.1 | | 54010 | 2021-10-13 02:42:07.341743-04 | 2021-10-13 02:44:04.648736-04 | 2021-10-13 02:44:04.648746-04 | 2021-10-13 02:44:04.648746-04 | | | active | 11301580 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092621 | | 16384 | tpcc | | 127.0.0.1 | | 54012 | 2021-10-13 02:42:07.841876-04 | 2021-10-13 02:44:04.648983-04 | 2021-10-13 02:44:04.648991-04 | 2021-10-13 02:44:04.648991-04 | | | active | 11301600 | 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092628 | | 16384 | tpcc | | 127.0.0.1 | | 54014 | 2021-10-13 02:42:08.342179-04 | 2021-10-13 02:44:04.649464-04 | 2021-10-13 02:44:04.649473-04 | 2021-10-13 02:44:04.649474-04 | | | active | | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 16385 | tpcc | 2092629 | | 16384 | tpcc | | 127.0.0.1 | | 54016 | 2021-10-13 
02:42:08.845321-04 | 2021-10-13 02:44:04.649456-04 | 2021-10-13 02:44:04.649472-04 | 2021-10-13 02:44:04.649472-04 | | | active | | 11301528 | prepare slev (INTEGER, INTEGER, INTEGER) AS select slev($1,$2,$3) | client backend\n 16385 | tpcc | 2092636 | | 16384 | tpcc | | 127.0.0.1 | | 54018 | 2021-10-13 02:42:09.341768-04 | 2021-10-13 02:44:04.649394-04 | 2021-10-13 02:44:04.649404-04 | 2021-10-13 02:44:04.649404-04 | | | active | | 11301528 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) | client backend\n 12711 | postgres | 2093365 | | 10 | postgres | psql | | | -1 | 2021-10-13 02:44:04.64633-04 | 2021-10-13 02:44:04.648186-04 | 2021-10-13 02:44:04.648186-04 | 2021-10-13 02:44:04.648186-04 | | | active | | 11301528 | select * from pg_stat_activity; | client backend\n | | 2092227 | | | | | | | | 2021-10-13 02:41:12.082448-04 | | | | Activity | BgWriterMain | | | | | background writer\n | | 2092226 | | | | | | | | 2021-10-13 02:41:12.081979-04 | | | | Activity | CheckpointerMain | | | | | checkpointer\n | | 2092228 | | | | On Tuesday, October 12, 2021, Laurenz Albe <[email protected]> wrote:On Tue, 2021-10-12 at 13:05 +0530, Ashkil Dighin wrote:\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n> \n>       18.99%  postgres  postgres            [.] LWLockAcquire\n>      7.09%  postgres  postgres            [.] _bt_compare\n>      8.66%  postgres  postgres            [.] LWLockRelease\n>      2.28%  postgres  postgres            [.] GetSnapshotData\n>      2.25%  postgres  postgres            [.] hash_search_with_hash_value\n>      2.11%  postgres  postgres            [.] XLogInsertRecord\n>      1.98%  postgres  postgres            [.] PinBuffer\n> \n> 1.Is there a way to tune the lock contention ?\n\nHow many concurrent sesions are you running?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Thu, 14 Oct 2021 11:33:58 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Thu, 2021-10-14 at 11:33 +0530, Ashkil Dighin wrote:\n> Captured the concurrent session with Netsat and pg-stat-actvity. Is the procedure the right way to capture concurrent sesssions in postgresql?\n> \n> Select pg_stat_activity \n\n[some two dozen sessions]\n\nThat doesn't look like you would get into trouble just from the\nsheer number of sessions, so it must be something else.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Thu, 14 Oct 2021 08:42:57 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "NUMA node0 CPU(s): 0-63,128-191NUMA node1 CPU(s): 64-127,192-255\nThread(s) per core: 2\nCore(s) per socket: 64\nSocket(s): 2\nNUMA node(s): 2\ncorepinning(ta perf lock contention results for 24,32 vu\n0-63\n 24: 18.03% postgres postgres [.] LWLockAcquire\n 32: 7.02% postgres postgres [.] LWLockAcquire\n64-127\n 24: 17.96% postgres postgres [.] LWLockAcquire\n 32: 7.04% postgres postgres [.] LWLockAcquire\n0-63,128-191(Node0)\n 24: 18.4% postgres postgres [.] LWLockAcquire\n 32: 7.07% postgres postgres [.] LWLockAcquire\n64-127,192-255(Node1)\n 24: 18.3% postgres postgres [.] LWLockAcquire\n 32: 7.06% postgres postgres [.] 
LWLockAcquire\nI do not understand on interconnect type and has restrictions on lscpu .\n\nOn Tuesday, October 12, 2021, Mikhail Zhilin <[email protected]> wrote:\n\n> Hi,\n>\n> How many sockets are on motherboard?\n> What is CPU model and interconnect type (UPI?)?\n> Can you share output of \"lscpu\"?\n>\n> If you have more than 1 NUMA node it may be worth to run PostgreSQL in\n> single NUMA node via taskset. It will eliminate access to remote memory and\n> speed up processing.\n>\n> Thanks,\n> Michael.\n>\n> On 10/12/21 10:35 AM, Ashkil Dighin wrote:\n>\n>\n> Hi,\n> Lock contention observed high in PostgreSQLv13.3\n> The source code compiled with GNC(GCCv11.x)\n> PostgreSQL version: 13.3\n> Operating system: RHEL8.3\n> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> RAM Size:512GB\n> SSD: 1TB\n> The environment used IBM metal and test benchmark environment HammerDbv4.2\n> Test case :TPC-C\n>\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n>\n> 18.99% postgres postgres [.] LWLockAcquire\n> 7.09% postgres postgres [.] _bt_compare\n> 8.66% postgres postgres [.] LWLockRelease\n> 2.28% postgres postgres [.] GetSnapshotData\n> 2.25% postgres postgres [.] hash_search_with_hash_value\n> 2.11% postgres postgres [.] XLogInsertRecord\n> 1.98% postgres postgres [.] PinBuffer\n>\n> 1.Is there a way to tune the lock contention ?\n> 2.Is any recommendations to tune/reduce the lock contention via\n> postgres.conf\n>\n> Postgres.conf used in Baremetal\n> ========================\n> shared_buffers = 128GB(1/4 th RAM size)\n> effective_cachesize=392 GB(1/3 or 75% of RAM size)\n> huge_pages = on\n> temp_buffers = 4000MB\n> work_mem = 4000MB\n> maintenance_work_mem = 512MB\n> autovacuum_work_mem = -1\n> max_stack_depth = 7MB\n> dynamic_shared_memory_type = posix\n> max_files_per_process = 4000\n> effective_io_concurrency = 32\n> wal_level = minimal\n> synchronous_commit = off\n> wal_buffers = 512MB\n> checkpoint_timeout = 1h\n> checkpoint_completion_target = 1\n> checkpoint_warning = 0\n> log_min_messages = error\n> log_min_error_statement = error\n> log_timezone = 'GB'\n> autovacuum = off\n> datestyle = 'iso, dmy'\n> timezone = 'GB'\n> lc_messages = 'en_GB.UTF-8'\n> lc_monetary = 'en_GB.UTF-8'\n> lc_numeric = 'en_GB.UTF-8'\n> lc_time = 'en_GB.UTF-8'\n> default_text_search_config = 'pg_catalog.english'\n> max_locks_per_transaction = 64\n> max_pred_locks_per_transaction = 64\n>\n> Best Regards\n> Anil\n>\n>\n>\n\nNUMA node0 CPU(s):   0-63,128-191NUMA node1 CPU(s):   64-127,192-255\nThread(s) per core:  2\nCore(s) per socket:  64\nSocket(s):           2\nNUMA node(s):        2\ncorepinning(ta perf lock contention results for 24,32 vu\n0-63\n  24: 18.03%  postgres  postgres            [.] LWLockAcquire\n  32: 7.02%  postgres  postgres             [.] LWLockAcquire\n64-127\n  24: 17.96%  postgres  postgres            [.] LWLockAcquire\n  32: 7.04%  postgres  postgres             [.] LWLockAcquire\n0-63,128-191(Node0)\n  24: 18.4%  postgres  postgres            [.] LWLockAcquire\n  32: 7.07%  postgres  postgres            [.] LWLockAcquire\n64-127,192-255(Node1)\n  24: 18.3%  postgres  postgres            [.] LWLockAcquire\n  32: 7.06%  postgres  postgres            [.] LWLockAcquireI do not understand on interconnect type and has restrictions on lscpu .On Tuesday, October 12, 2021, Mikhail Zhilin <[email protected]> wrote:\n\n Hi,\n\n How many sockets are on motherboard? \n What is CPU model and interconnect type (UPI?)? \n Can you share output of \"lscpu\"? 
\n\n If you have more than 1 NUMA node it may be worth to run PostgreSQL\n in single NUMA node via taskset. It will eliminate access to remote\n memory and speed up processing. \n\n Thanks,\n  Michael.\n\nOn 10/12/21 10:35 AM, Ashkil Dighin\n wrote:\n\n\n\n\n\n\nHi,\nLock contention observed high in\n PostgreSQLv13.3\nThe source code compiled with\n GNC(GCCv11.x)\nPostgreSQL version: 13.3\n Operating system:   RHEL8.3\nKernel name:4.18.0-305.10.2.el8_4.x86_64\nRAM Size:512GB\nSSD: 1TB\nThe environment used IBM metal and\n test benchmark environment HammerDbv4.2\n Test case :TPC-C\n\n Perf data for 24vu(TPC-C)\n --------------------------------\n\n       18.99%  postgres  postgres            [.]\n LWLockAcquire\n      7.09%  postgres  postgres            [.] _bt_compare\n      8.66%  postgres  postgres            [.]\n LWLockRelease\n      2.28%  postgres  postgres            [.]\n GetSnapshotData\n      2.25%  postgres  postgres            [.]\n hash_search_with_hash_value\n      2.11%  postgres  postgres            [.]\n XLogInsertRecord\n      1.98%  postgres  postgres            [.] PinBuffer\n\n\n1.Is there a way to tune the lock\n contention ?\n 2.Is any recommendations to tune/reduce the lock\n contention via postgres.conf\n\n Postgres.conf used  in Baremetal\n ========================\n shared_buffers = 128GB(1/4 th RAM size)\neffective_cachesize=392 GB(1/3 or\n 75% of RAM size)                        \n\nhuge_pages = on               \n temp_buffers = 4000MB                 \n work_mem = 4000MB                     \n maintenance_work_mem = 512MB           \n autovacuum_work_mem = -1               \n max_stack_depth = 7MB                 \n dynamic_shared_memory_type = posix     \n max_files_per_process = 4000           \n effective_io_concurrency = 32         \n wal_level = minimal                   \n synchronous_commit = off               \n wal_buffers = 512MB                            \n checkpoint_timeout = 1h         \n checkpoint_completion_target = 1       \n checkpoint_warning = 0         \n log_min_messages = error               \n log_min_error_statement = error\n log_timezone = 'GB'\n autovacuum = off                       \n datestyle = 'iso, dmy'\n timezone = 'GB'\n lc_messages = 'en_GB.UTF-8'           \n lc_monetary = 'en_GB.UTF-8'           \n lc_numeric = 'en_GB.UTF-8'             \n lc_time = 'en_GB.UTF-8'               \n default_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 64   \n      \n max_pred_locks_per_transaction = 64\n\n\n\nBest Regards\nAnil", "msg_date": "Thu, 14 Oct 2021 12:15:19 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Not using PostGIS\n\nOn Thursday, October 14, 2021, Paul Friedman <\[email protected]> wrote:\n\n> Are you using PostGIS?\n>\n> If so, there is an issue with TOAST table locking having these symptoms.\n>\n>\n> ---Paul\n>\n>\n> On Wed, Oct 13, 2021 at 11:15 AM MichaelDBA <[email protected]>\n> wrote:\n>\n>> 1.Is there a way to tune the lock contention ?\n>> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n>>\n>> I think you'd want to find *which* LW locks are being waited on, to see if it's\n>> something that can be easily tuned.\n>>\n>> You can check pg_stat_activity, or maybe create a cronjob to record its content\n>> for later analysis.\n>>\n>>\n>> Hello,\n>>\n>> Also turn on log_lock_waits so you can evaluate the actual SQL causing\n>> the problems in the PG log files. 
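A minimal sketch of that advice in SQL (heavyweight locks only -- the
LWLock waits seen in the perf profile will not show up in these views):

  -- Log any lock wait that lasts longer than deadlock_timeout (1s by default).
  ALTER SYSTEM SET log_lock_waits = on;
  SELECT pg_reload_conf();

  -- Live view: waiting sessions and the pids holding the locks they need.
  SELECT pid,
         pg_blocking_pids(pid) AS blocked_by,
         wait_event_type,
         wait_event,
         left(query, 60) AS query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;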
Thinking ahead, you may want to consider\n>> if using advisory locks from the application side of things might be\n>> helpful to manage locks in a more pessimistic way. Also, join with\n>> pg_locks table to find out the specific resources that are in contention.\n>>\n>> Regards,\n>> Michael Vitale\n>>\n>>\n>>\n\nNot using PostGISOn Thursday, October 14, 2021, Paul Friedman <[email protected]> wrote:Are you using PostGIS?If so, there is an issue with TOAST table locking having these symptoms.---PaulOn Wed, Oct 13, 2021 at 11:15 AM MichaelDBA <[email protected]> wrote:\n\n1.Is there a way to tune the lock contention ?\n2.Is any recommendations to tune/reduce the lock contention via postgres.conf\n\nI think you'd want to find *which* LW locks are being waited on, to see if it's\nsomething that can be easily tuned.\n\nYou can check pg_stat_activity, or maybe create a cronjob to record its content\nfor later analysis.\n\n\nHello,\n\n\nAlso turn on log_lock_waits so \nyou can evaluate the actual SQL causing the problems in the PG log \nfiles.  Thinking ahead, you may want to consider if using advisory locks\n from the application side of things might be helpful to manage locks in\n a more pessimistic way.  Also, join with pg_locks table to find out the specific resources that are\n in contention.\n\nRegards,\nMichael Vitale", "msg_date": "Thu, 14 Oct 2021 12:19:21 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Ashkil,\n\nCan you bind postgres in single NUMA node, for instance:\n  $ taskset -pc 0-63 <POSTMASTER_PID>\n\nThen run your benchmark, compare results in terms of benchmark metrics & \npresence on LWLock(Acquire|Release) in perf top.\n\nBR,\n  Michael.\n\nOn 10/14/21 9:45 AM, Ashkil Dighin wrote:\n>\n> NUMA node0 CPU(s):   0-63,128-191NUMA node1 CPU(s):   64-127,192-255\n> Thread(s) per core:  2\n> Core(s) per socket:  64\n> Socket(s):           2\n> NUMA node(s):        2\n> corepinning(ta perf lock contention results for 24,32 vu\n> 0-63\n>   24: 18.03%  postgres  postgres            [.] LWLockAcquire\n>   32: 7.02%  postgres  postgres             [.] LWLockAcquire\n> 64-127\n>   24: 17.96%  postgres  postgres            [.] LWLockAcquire\n>   32: 7.04%  postgres  postgres             [.] LWLockAcquire\n> 0-63,128-191(Node0)\n>   24: 18.4%  postgres  postgres            [.] LWLockAcquire\n>   32: 7.07%  postgres  postgres            [.] LWLockAcquire\n> 64-127,192-255(Node1)\n>   24: 18.3%  postgres  postgres            [.] LWLockAcquire\n>   32: 7.06%  postgres  postgres            [.] LWLockAcquire\n>\n> I do not understand on interconnect type and has restrictions on lscpu .\n>\n> On Tuesday, October 12, 2021, Mikhail Zhilin <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> How many sockets are on motherboard?\n> What is CPU model and interconnect type (UPI?)?\n> Can you share output of \"lscpu\"?\n>\n> If you have more than 1 NUMA node it may be worth to run\n> PostgreSQL in single NUMA node via taskset. 
It will eliminate\n> access to remote memory and speed up processing.\n>\n> Thanks,\n>  Michael.\n>\n> On 10/12/21 10:35 AM, Ashkil Dighin wrote:\n>>\n>> Hi,\n>> Lock contention observed high in PostgreSQLv13.3\n>> The source code compiled with GNC(GCCv11.x)\n>> PostgreSQL version: 13.3\n>> Operating system:   RHEL8.3\n>> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n>> RAM Size:512GB\n>> SSD: 1TB\n>> The environment used IBM metal and test benchmark\n>> environment HammerDbv4.2\n>> Test case :TPC-C\n>>\n>> Perf data for 24vu(TPC-C)\n>> --------------------------------\n>>\n>>       18.99%  postgres  postgres            [.] LWLockAcquire\n>>      7.09%  postgres  postgres            [.] _bt_compare\n>>      8.66%  postgres  postgres            [.] LWLockRelease\n>>      2.28%  postgres  postgres            [.] GetSnapshotData\n>>      2.25%  postgres  postgres            [.]\n>> hash_search_with_hash_value\n>>      2.11%  postgres  postgres            [.] XLogInsertRecord\n>>      1.98%  postgres  postgres            [.] PinBuffer\n>>\n>> 1.Is there a way to tune the lock contention ?\n>> 2.Is any recommendations to tune/reduce the lock contention via\n>> postgres.conf\n>>\n>> Postgres.conf used  in Baremetal\n>> ========================\n>> shared_buffers = 128GB(1/4 th RAM size)\n>> effective_cachesize=392 GB(1/3 or 75% of RAM size)\n>> huge_pages = on\n>> temp_buffers = 4000MB\n>> work_mem = 4000MB\n>> maintenance_work_mem = 512MB\n>> autovacuum_work_mem = -1\n>> max_stack_depth = 7MB\n>> dynamic_shared_memory_type = posix\n>> max_files_per_process = 4000\n>> effective_io_concurrency = 32\n>> wal_level = minimal\n>> synchronous_commit = off\n>> wal_buffers = 512MB\n>> checkpoint_timeout = 1h\n>> checkpoint_completion_target = 1\n>> checkpoint_warning = 0\n>> log_min_messages = error\n>> log_min_error_statement = error\n>> log_timezone = 'GB'\n>> autovacuum = off\n>> datestyle = 'iso, dmy'\n>> timezone = 'GB'\n>> lc_messages = 'en_GB.UTF-8'\n>> lc_monetary = 'en_GB.UTF-8'\n>> lc_numeric = 'en_GB.UTF-8'\n>> lc_time = 'en_GB.UTF-8'\n>> default_text_search_config = 'pg_catalog.english'\n>> max_locks_per_transaction = 64\n>> max_pred_locks_per_transaction = 64\n>>\n>> Best Regards\n>> Anil\n>>\n>\n\n\n\n\n\n\n\nAshkil,\n\n Can you bind postgres in single NUMA node, for instance:\n  $ taskset -pc 0-63 <POSTMASTER_PID> \n\n Then run your benchmark, compare results in terms of benchmark\n metrics & presence on LWLock(Acquire|Release) in perf top.\n\n BR,\n  Michael.\n\nOn 10/14/21 9:45 AM, Ashkil Dighin\n wrote:\n\n\n\nNUMA\n node0 CPU(s):   0-63,128-191NUMA node1 CPU(s):   64-127,192-255\n Thread(s) per core:  2\n Core(s) per socket:  64\n Socket(s):           2\n NUMA node(s):        2\n corepinning(ta perf lock contention results for 24,32 vu\n 0-63\n   24: 18.03%  postgres  postgres            [.] LWLockAcquire\n   32: 7.02%  postgres  postgres             [.] LWLockAcquire\n 64-127\n   24: 17.96%  postgres  postgres            [.] LWLockAcquire\n   32: 7.04%  postgres  postgres             [.] LWLockAcquire\n 0-63,128-191(Node0)\n   24: 18.4%  postgres  postgres            [.] LWLockAcquire\n   32: 7.07%  postgres  postgres            [.] LWLockAcquire\n 64-127,192-255(Node1)\n   24: 18.3%  postgres  postgres            [.] LWLockAcquire\n   32: 7.06%  postgres  postgres            [.] 
LWLockAcquire\n I do not understand on interconnect type and has restrictions on\n lscpu .\n \n On Tuesday, October 12, 2021, Mikhail Zhilin <[email protected]>\n wrote:\n\n Hi,\n\n How many sockets are on motherboard? \n What is CPU model and interconnect type (UPI?)? \n Can you share output of \"lscpu\"? \n\n If you have more than 1 NUMA node it may be worth to run\n PostgreSQL in single NUMA node via taskset. It will\n eliminate access to remote memory and speed up processing. \n\n Thanks,\n  Michael.\n\nOn 10/12/21 10:35 AM, Ashkil Dighin wrote:\n\n\n\n\n\n\nHi,\nLock contention observed\n high in PostgreSQLv13.3\nThe source code compiled\n with GNC(GCCv11.x)\nPostgreSQL version: 13.3\n Operating system:   RHEL8.3\nKernel\n name:4.18.0-305.10.2.el8_4.x86_64\nRAM Size:512GB\nSSD: 1TB\nThe environment used IBM\n metal and test benchmark environment HammerDbv4.2\n Test case :TPC-C\n\n Perf data for 24vu(TPC-C)\n --------------------------------\n\n       18.99%  postgres  postgres            [.]\n LWLockAcquire\n      7.09%  postgres  postgres            [.]\n _bt_compare\n      8.66%  postgres  postgres            [.]\n LWLockRelease\n      2.28%  postgres  postgres            [.]\n GetSnapshotData\n      2.25%  postgres  postgres            [.]\n hash_search_with_hash_value\n      2.11%  postgres  postgres            [.]\n XLogInsertRecord\n      1.98%  postgres  postgres            [.]\n PinBuffer\n\n\n1.Is there a way to tune\n the lock contention ?\n 2.Is any recommendations to tune/reduce the lock\n contention via postgres.conf\n\n Postgres.conf used  in Baremetal\n ========================\n shared_buffers = 128GB(1/4 th RAM size)\neffective_cachesize=392\n GB(1/3 or 75% of RAM size)                        \n\nhuge_pages = on       \n        \n temp_buffers = 4000MB                 \n work_mem = 4000MB                     \n maintenance_work_mem = 512MB           \n autovacuum_work_mem = -1               \n max_stack_depth = 7MB                 \n dynamic_shared_memory_type = posix     \n max_files_per_process = 4000           \n effective_io_concurrency = 32         \n wal_level = minimal                   \n synchronous_commit = off               \n wal_buffers = 512MB                          \n  \n checkpoint_timeout = 1h         \n checkpoint_completion_target = 1       \n checkpoint_warning = 0         \n log_min_messages = error               \n log_min_error_statement = error\n log_timezone = 'GB'\n autovacuum = off                       \n datestyle = 'iso, dmy'\n timezone = 'GB'\n lc_messages = 'en_GB.UTF-8'           \n lc_monetary = 'en_GB.UTF-8'           \n lc_numeric = 'en_GB.UTF-8'             \n lc_time = 'en_GB.UTF-8'               \n default_text_search_config =\n 'pg_catalog.english'\nmax_locks_per_transaction\n = 64         \n max_pred_locks_per_transaction = 64\n\n\n\nBest Regards\nAnil", "msg_date": "Thu, 14 Oct 2021 10:47:37 +0300", "msg_from": "Mikhail Zhilin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi\nB-tree index used in the postgres environment\nChecked on warehouse different values like 100,800,1600,2400 and 3200 with\nvirtual user 64\nOn different values(warehouse) the lock contention same i.e. approx 17% and\niostat usage is 30-40%\n\n\n\npg_Count_ware=100\n-----------------\n17.76% postgres postgres [.] LWLockAcquire\n4.88% postgres postgres [.] _bt_compare\n3.10% postgres postgres [.] 
LWLockRelease\n\n\n\n\npg_Count_ware=800(previously I used Warehouse 800)\n--------------------------------------------\n17.91% postgres postgres [.] LWLockAcquire\n5.76% postgres postgres [.] _bt_compare\n3.06% postgres postgres [.] LWLockRelease\n\n\n\n\n\npg_Count_ware_1600\n-----------------\n17.80% postgres postgres [.] LWLockAcquire\n5.88% postgres postgres [.] _bt_compare\n2.70% postgres postgres [.] LWLockRelease\n\n\n\n\npg_Count_ware_2400\n------------------\n17.77% postgres postgres [.] LWLockAcquire\n6.01% postgres postgres [.] _bt_compare\n2.71% postgres postgres [.] LWLockRelease\n\n\n\n\npg_Count_ware_3200\n------------------\n17.46% postgres postgres [.] LWLockAcquire\n6.32% postgres postgres [.] _bt_compare\n2.86% postgres postgres [.] hash_search_with_hash_value\n\n\n\n1.Tired different values of lock management values in postgres.conf but it\nnot helped to reduce lock contention.\n deadlock_timeout = 5s\n max_locks_per_transaction = 64\n max_pred_locks_per_transaction = 64\n max_pred_locks_per_relation = -2\n\n max_pred_locks_per_page = 2\n2.Intention to check the postgreSQL scalability and performance or\nthroughput(TPC-C/TPC-H)\n with HammerDB and pgbench with server configuration on tune\nsettings(postgresql.conf)-reduce lock contention\nCPU's :256\nThreadper core: 2\nCore per socket: 64\nSockets: 2\nNUMA node0 : 0-63,128-191\nNUMA node1 : 64-127,192-255\nRAM size :512GB\nSSD :1TB\n\nRef link:\nhttps://www.hammerdb.com/blog/uncategorized/hammerdb-best-practice-for-postgresql-performance-and-scalability/\n\nOn Thursday, October 14, 2021, Peter Geoghegan <[email protected]> wrote:\n\n> On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin <[email protected]>\n> wrote:\n> > Lock contention observed high in PostgreSQLv13.3\n> > The source code compiled with GNC(GCCv11.x)\n> > PostgreSQL version: 13.3\n> > Operating system: RHEL8.3\n> > Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> > RAM Size:512GB\n> > SSD: 1TB\n> > The environment used IBM metal and test benchmark environment\n> HammerDbv4.2\n> > Test case :TPC-C\n>\n> You didn't say how many TPC-C warehouses you used. In my experience,\n> people sometimes run TPC-C with relatively few, which will tend to\n> result in extreme contention on certain B-Tree leaf pages. (My\n> experiences are with BenchmarkSQL, but I can't imagine HammerDB is too\n> much different.)\n>\n> Assuming that's the case here, for you, then it's not clear that you\n> have a real problem. You're really not supposed to run the benchmark\n> in that way, per the TPC-C spec, which strictly limits the number of\n> transactions per minute per warehouse -- for better or worse, valid\n> results generally require that you use lots of warehouses to get a\n> very large database (think terabytes). If you run the benchmark with\n> 100 warehouses or less, on a big server, then the contention you'll\n> see will be out of all proportion to what you're ever likely to see in\n> the real world.\n>\n> --\n> Peter Geoghegan\n>\n\nHi\nB-tree index used in the postgres environment\nChecked on warehouse different values like 100,800,1600,2400 and 3200 with virtual user 64\nOn different values(warehouse) the lock contention same i.e. approx 17% and iostat usage is 30-40%\n \npg_Count_ware=100\n-----------------\n17.76%  postgres  postgres            [.] LWLockAcquire\n4.88%  postgres  postgres            [.] _bt_compare\n3.10%  postgres  postgres            [.] 
LWLockRelease\n \n\npg_Count_ware=800(previously I used Warehouse 800)\n--------------------------------------------\n17.91%  postgres  postgres            [.] LWLockAcquire\n5.76%  postgres  postgres            [.] _bt_compare\n3.06%  postgres  postgres            [.] LWLockRelease\n \n \npg_Count_ware_1600\n-----------------\n17.80%  postgres  postgres            [.] LWLockAcquire\n5.88%  postgres  postgres            [.] _bt_compare\n2.70%  postgres  postgres            [.] LWLockRelease\n \n\npg_Count_ware_2400\n------------------\n17.77%  postgres  postgres            [.] LWLockAcquire\n6.01%  postgres  postgres            [.] _bt_compare\n2.71%  postgres  postgres            [.] LWLockRelease\n \n\npg_Count_ware_3200\n------------------\n17.46%  postgres  postgres            [.] LWLockAcquire\n6.32%  postgres  postgres            [.] _bt_compare\n2.86%  postgres  postgres            [.] hash_search_with_hash_value\n \n1.Tired different values of lock management values in postgres.conf but it not helped to reduce lock contention.\n    deadlock_timeout = 5s\n    max_locks_per_transaction = 64         \n    max_pred_locks_per_transaction = 64    \n    max_pred_locks_per_relation = -2                                           \n    max_pred_locks_per_page = 2   \n2.Intention to check the postgreSQL scalability and performance or throughput(TPC-C/TPC-H) \n with HammerDB and pgbench with  server configuration on tune settings(postgresql.conf)-reduce lock contention\nCPU's :256\nThreadper core:  2\nCore per socket:  64\nSockets:           2\nNUMA node0 :   0-63,128-191\nNUMA node1 :   64-127,192-255\nRAM size :512GB\nSSD :1TB\nRef link:https://www.hammerdb.com/blog/uncategorized/hammerdb-best-practice-for-postgresql-performance-and-scalability/On Thursday, October 14, 2021, Peter Geoghegan <[email protected]> wrote:On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin <[email protected]> wrote:\n> Lock contention observed high in PostgreSQLv13.3\n> The source code compiled with GNC(GCCv11.x)\n> PostgreSQL version: 13.3\n> Operating system:   RHEL8.3\n> Kernel name:4.18.0-305.10.2.el8_4.x86_64\n> RAM Size:512GB\n> SSD: 1TB\n> The environment used IBM metal and test benchmark environment HammerDbv4.2\n> Test case :TPC-C\n\nYou didn't say how many TPC-C warehouses you used. In my experience,\npeople sometimes run TPC-C with relatively few, which will tend to\nresult in extreme contention on certain B-Tree leaf pages. (My\nexperiences are with BenchmarkSQL, but I can't imagine HammerDB is too\nmuch different.)\n\nAssuming that's the case here, for you, then it's not clear that you\nhave a real problem. You're really not supposed to run the benchmark\nin that way, per the TPC-C spec, which strictly limits the number of\ntransactions per minute per warehouse -- for better or worse, valid\nresults generally require that you use lots of warehouses to get a\nvery large database (think terabytes). 
If you run the benchmark with\n100 warehouses or less, on a big server, then the contention you'll\nsee will be out of all proportion to what you're ever likely to see in\nthe real world.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 20 Oct 2021 16:21:38 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi,\n\nOn 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:\n> PostgreSQL version: 13.3\n\nYou could try postgres 14 - that did improve scalability in some areas.\n\n\n\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n> \n> 18.99% postgres postgres [.] LWLockAcquire\n> 7.09% postgres postgres [.] _bt_compare\n> 8.66% postgres postgres [.] LWLockRelease\n> 2.28% postgres postgres [.] GetSnapshotData\n> 2.25% postgres postgres [.] hash_search_with_hash_value\n> 2.11% postgres postgres [.] XLogInsertRecord\n> 1.98% postgres postgres [.] PinBuffer\n\nTo be more useful you'd need to create a profile with 'caller' information\nusing 'perf record --call-graph dwarf', and then check what the important\ncallers are.\n\n\n> Postgres.conf used in Baremetal\n> ========================\n> shared_buffers = 128GB(1/4 th RAM size)\n> effective_cachesize=392 GB(1/3 or 75% of RAM size)\n\nIf your hot data set is actually larger than s_b, I'd recommend trying a\nlarger s_b. It's plausible that a good chunk of lock contention is from that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:35:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "On Mon, Oct 25, 2021, 5:36 PM Andres Freund <[email protected]> wrote:\nIf your hot data set is actually larger than s_b, I'd recommend trying a\nlarger s_b. It's plausible that a good chunk of lock contention is from\nthat.\n\n\nHow much larger might you go? Any write ups on lock contention as it\nrelates to shared buffers? How impactful might huge pages (off, transparent\nor on) be to the use of shared buffers and the related locking mechanism?\n\nOn Mon, Oct 25, 2021, 5:36 PM Andres Freund <[email protected]> wrote:If your hot data set is actually larger than s_b, I'd recommend trying alarger s_b. It's plausible that a good chunk of lock contention is from that.How much larger might you go? Any write ups on lock contention as it relates to shared buffers? How impactful might huge pages (off, transparent or on) be to the use of shared buffers and the related locking mechanism?", "msg_date": "Mon, 25 Oct 2021 18:38:40 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi,\n\nOn 2021-10-25 18:38:40 -0600, Michael Lewis wrote:\n> On Mon, Oct 25, 2021, 5:36 PM Andres Freund <[email protected]> wrote:\n> If your hot data set is actually larger than s_b, I'd recommend trying a\n> larger s_b. It's plausible that a good chunk of lock contention is from\n> that.\n\n> How much larger might you go?\n\nI've seen s_b in the ~700GB range being a considerable speedup over lower\nvalues quite a few years ago. I don't see a clear cut upper boundary. 
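One way to gauge whether a larger shared_buffers would even be used is the
pg_buffercache contrib extension; a rough sketch, assuming the default 8kB
block size, the contrib package installed, and superuser or pg_monitor
access:

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- Usage-count distribution: mostly high-usagecount buffers suggests the
  -- working set no longer fits in shared_buffers.
  SELECT usagecount,
         count(*)                        AS buffers,
         pg_size_pretty(count(*) * 8192) AS approx_size
  FROM pg_buffercache
  GROUP BY usagecount
  ORDER BY usagecount;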
The one\nthing this can regress measurably is the speed of dropping / truncating\ntables.\n\n\n> Any write ups on lock contention as it relates to shared buffers?\n\nI don't have a concrete thing to point you to, but searching for\nNUM_BUFFER_PARTITIONS might point you to some discussions.\n\n\n> How impactful might huge pages (off, transparent or on) be to the use of\n> shared buffers and the related locking mechanism?\n\nUsing huge pages can *hugely* help performance-wise. Not directly by relieving\npostgres-side contention however (it does reduce cache usage somewhat, but\nit's mainly really just the frequency of TLB misses that makes the\ndifference).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Oct 2021 17:43:23 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi,\nYes, lock contention reduced with postgresqlv14.\nLock acquire reduced 18% to 10%\n10.49 %postgres postgres [.] LWLockAcquire\n5.09% postgres postgres [.] _bt_compare\n\nIs lock contention can be reduced to 0-3%?\nOn pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n\n\nOn Tuesday, October 26, 2021, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:\n> > PostgreSQL version: 13.3\n>\n> You could try postgres 14 - that did improve scalability in some areas.\n>\n>\n>\n> > Perf data for 24vu(TPC-C)\n> > --------------------------------\n> >\n> > 18.99% postgres postgres [.] LWLockAcquire\n> > 7.09% postgres postgres [.] _bt_compare\n> > 8.66% postgres postgres [.] LWLockRelease\n> > 2.28% postgres postgres [.] GetSnapshotData\n> > 2.25% postgres postgres [.] hash_search_with_hash_value\n> > 2.11% postgres postgres [.] XLogInsertRecord\n> > 1.98% postgres postgres [.] PinBuffer\n>\n> To be more useful you'd need to create a profile with 'caller' information\n> using 'perf record --call-graph dwarf', and then check what the important\n> callers are.\n>\n>\n> > Postgres.conf used in Baremetal\n> > ========================\n> > shared_buffers = 128GB(1/4 th RAM size)\n> > effective_cachesize=392 GB(1/3 or 75% of RAM size)\n>\n> If your hot data set is actually larger than s_b, I'd recommend trying a\n> larger s_b. It's plausible that a good chunk of lock contention is from\n> that.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi,Yes, lock contention reduced with postgresqlv14.Lock acquire reduced 18% to 10%10.49 %postgres  postgres            [.] LWLockAcquire5.09%  postgres  postgres            [.] _bt_compareIs lock contention can be reduced to 0-3%?On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”On Tuesday, October 26, 2021, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:\n> PostgreSQL version: 13.3\n\nYou could try postgres 14 - that did improve scalability in some areas.\n\n\n\n> Perf data for 24vu(TPC-C)\n> --------------------------------\n> \n>       18.99%  postgres  postgres            [.] LWLockAcquire\n>      7.09%  postgres  postgres            [.] _bt_compare\n>      8.66%  postgres  postgres            [.] LWLockRelease\n>      2.28%  postgres  postgres            [.] GetSnapshotData\n>      2.25%  postgres  postgres            [.] hash_search_with_hash_value\n>      2.11%  postgres  postgres            [.] XLogInsertRecord\n>      1.98%  postgres  postgres            [.] 
PinBuffer\n\nTo be more useful you'd need to create a profile with 'caller' information\nusing 'perf record --call-graph dwarf', and then check what the important\ncallers are.\n\n\n> Postgres.conf used  in Baremetal\n> ========================\n> shared_buffers = 128GB(1/4 th RAM size)\n> effective_cachesize=392 GB(1/3 or 75% of RAM size)\n\nIf your hot data set is actually larger than s_b, I'd recommend trying a\nlarger s_b. It's plausible that a good chunk of lock contention is from that.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 28 Oct 2021 03:14:56 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi, \n\nOn October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <[email protected]> wrote:\n>Hi,\n>Yes, lock contention reduced with postgresqlv14.\n>Lock acquire reduced 18% to 10%\n>10.49 %postgres postgres [.] LWLockAcquire\n>5.09% postgres postgres [.] _bt_compare\n>\n>Is lock contention can be reduced to 0-3%?\n\nProbably not, or at least not easily. Because of the atomic instructions the locking also includes some other costs (e.g. cache misses, serializing store buffers,...).\n\nThere's a good bit we can do to increase the cache efficiency around buffer headers, but it won't get us quite that low I'd guess.\n\n\n>On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n\nWithout knowing what proportion they have to each and to non-waiting backends that unfortunately doesn't help that much..\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Wed, 27 Oct 2021 15:22:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi\nI suspect lock contention and performance issues with __int128. And I would\nlike to check the performance by forcibly disabling int128(Maxalign16bytes)\nand enable like long long(maxlign 8bytes).\n Is it possible to disable int128 in PostgreSQL?\n\nOn Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <\n> [email protected]> wrote:\n> >Hi,\n> >Yes, lock contention reduced with postgresqlv14.\n> >Lock acquire reduced 18% to 10%\n> >10.49 %postgres postgres [.] LWLockAcquire\n> >5.09% postgres postgres [.] _bt_compare\n> >\n> >Is lock contention can be reduced to 0-3%?\n>\n> Probably not, or at least not easily. Because of the atomic instructions\n> the locking also includes some other costs (e.g. cache misses, serializing\n> store buffers,...).\n>\n> There's a good bit we can do to increase the cache efficiency around\n> buffer headers, but it won't get us quite that low I'd guess.\n>\n>\n> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n>\n> Without knowing what proportion they have to each and to non-waiting\n> backends that unfortunately doesn't help that much..\n>\n> Andres\n>\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n\nHiI suspect lock contention and performance issues with __int128. And I would like to check the performance by forcibly disabling int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).  
Is it possible to disable int128 in PostgreSQL?On Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:Hi, \n\nOn October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <[email protected]> wrote:\n>Hi,\n>Yes, lock contention reduced with postgresqlv14.\n>Lock acquire reduced 18% to 10%\n>10.49 %postgres  postgres            [.] LWLockAcquire\n>5.09%  postgres  postgres            [.] _bt_compare\n>\n>Is lock contention can be reduced to 0-3%?\n\nProbably not, or at least not easily. Because of the atomic instructions the locking also includes  some other costs (e.g. cache misses, serializing store buffers,...).\n\nThere's a good bit we can do to increase the cache efficiency around buffer headers, but it won't get us quite that low I'd guess.\n\n\n>On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n\nWithout knowing what proportion they have to each and to non-waiting backends that unfortunately doesn't help that much..\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Fri, 12 Nov 2021 19:42:30 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Ashkil Dighin <[email protected]> writes:\n> I suspect lock contention and performance issues with __int128. And I would\n> like to check the performance by forcibly disabling int128(Maxalign16bytes)\n> and enable like long long(maxlign 8bytes).\n> Is it possible to disable int128 in PostgreSQL?\n\nSure, you can build without it --- easiest way would be to modify\npg_config.h after the configure step. But the idea that it has\nsomething to do with lock contention seems like nonsense.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:25:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "Hi Askhil\n\nPostgreSQL utilizes lightweight locks(LWLocks) to synchronize and control\naccess to the buffer content. A process acquires an LWLock in a shared\nmode to read from the buffer and an exclusive mode to write to the buffer.\nTherefore, while holding an exclusive lock, a process prevents other\nprocesses from acquiring a shared or exclusive lock. Also, a shared lock\ncan be acquired concurrently by other processes. The issue starts when many\nprocesses acquire an exclusive lock on buffer content. As a result,\nLwlockAcquire seen as top hot function in profilng.\nHere need to understand LwlockAcquire is lock contention or cpu time spent\ninside the method/ function(top function in profiling)\n\nIt can analysed log “LwStatus” with parameters like\nex-acquire-count(exclusive mode) , sh-acquire-count , block-count and\nspin-delay-count\n\nTotal lock acquisition request = ex-acquire-count+sh-acquire-count)\nTime lock contention %= block count)/ Total lock acquisition request.\n\nTime lock contention may provide as most of cpu time inside the function\nrather than spinning/ waiting for lock.\n\nOn Friday, November 12, 2021, Ashkil Dighin <[email protected]>\nwrote:\n\n> Hi\n> I suspect lock contention and performance issues with __int128. 
And I\n> would like to check the performance by forcibly disabling\n> int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).\n> Is it possible to disable int128 in PostgreSQL?\n>\n> On Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <\n>> [email protected]> wrote:\n>> >Hi,\n>> >Yes, lock contention reduced with postgresqlv14.\n>> >Lock acquire reduced 18% to 10%\n>> >10.49 %postgres postgres [.] LWLockAcquire\n>> >5.09% postgres postgres [.] _bt_compare\n>> >\n>> >Is lock contention can be reduced to 0-3%?\n>>\n>> Probably not, or at least not easily. Because of the atomic instructions\n>> the locking also includes some other costs (e.g. cache misses, serializing\n>> store buffers,...).\n>>\n>> There's a good bit we can do to increase the cache efficiency around\n>> buffer headers, but it won't get us quite that low I'd guess.\n>>\n>>\n>> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n>>\n>> Without knowing what proportion they have to each and to non-waiting\n>> backends that unfortunately doesn't help that much..\n>>\n>> Andres\n>>\n>> --\n>> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>>\n>\n\nHi AskhilPostgreSQL utilizes  lightweight locks(LWLocks) to synchronize and control access to the buffer content. A process acquires an LWLock in a  shared mode to read from the buffer and an exclusive mode  to write to the buffer. Therefore, while holding an exclusive lock, a process prevents other processes from acquiring a shared or exclusive lock. Also, a shared lock can be acquired concurrently by other processes. The issue starts when many processes acquire an exclusive lock on buffer content. As a result, LwlockAcquire seen as top hot function in profilng. Here  need to understand LwlockAcquire is lock contention or cpu time spent inside the method/ function(top function in profiling)It can analysed log  “LwStatus” with parameters like ex-acquire-count(exclusive mode) , sh-acquire-count , block-count and spin-delay-countTotal lock acquisition request = ex-acquire-count+sh-acquire-count)Time lock contention %= block count)/ Total lock acquisition request.Time lock contention may provide as most of cpu time inside the function rather than spinning/ waiting for lock.On Friday, November 12, 2021, Ashkil Dighin <[email protected]> wrote:HiI suspect lock contention and performance issues with __int128. And I would like to check the performance by forcibly disabling int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).  Is it possible to disable int128 in PostgreSQL?On Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:Hi, \n\nOn October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <[email protected]> wrote:\n>Hi,\n>Yes, lock contention reduced with postgresqlv14.\n>Lock acquire reduced 18% to 10%\n>10.49 %postgres  postgres            [.] LWLockAcquire\n>5.09%  postgres  postgres            [.] _bt_compare\n>\n>Is lock contention can be reduced to 0-3%?\n\nProbably not, or at least not easily. Because of the atomic instructions the locking also includes  some other costs (e.g. 
cache misses, serializing store buffers,...).\n\nThere's a good bit we can do to increase the cache efficiency around buffer headers, but it won't get us quite that low I'd guess.\n\n\n>On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n\nWithout knowing what proportion they have to each and to non-waiting backends that unfortunately doesn't help that much..\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Wed, 17 Nov 2021 12:13:12 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "1. How to check which NUMA node in PostgreSQL process fetching from the\nmemory?\n\n2. Is NUMA configuration is better for PostgreSQL?\n vm.zone_reclaim_mode= 0\n numactl --interleave = all /init.d/ PostgreSQL start\n kernel.numa_balancing= 0\n\n\n\n\n\nOn Wednesday, November 17, 2021, arjun shetty <[email protected]>\nwrote:\n\n> Hi Askhil\n>\n> PostgreSQL utilizes lightweight locks(LWLocks) to synchronize and\n> control access to the buffer content. A process acquires an LWLock in a\n> shared mode to read from the buffer and an exclusive mode to write to\n> the buffer. Therefore, while holding an exclusive lock, a process prevents\n> other processes from acquiring a shared or exclusive lock. Also, a shared\n> lock can be acquired concurrently by other processes. The issue starts when\n> many processes acquire an exclusive lock on buffer content. As a result,\n> LwlockAcquire seen as top hot function in profilng.\n> Here need to understand LwlockAcquire is lock contention or cpu time\n> spent inside the method/ function(top function in profiling)\n>\n> It can analysed log “LwStatus” with parameters like\n> ex-acquire-count(exclusive mode) , sh-acquire-count , block-count and\n> spin-delay-count\n>\n> Total lock acquisition request = ex-acquire-count+sh-acquire-count)\n> Time lock contention %= block count)/ Total lock acquisition request.\n>\n> Time lock contention may provide as most of cpu time inside the function\n> rather than spinning/ waiting for lock.\n>\n> On Friday, November 12, 2021, Ashkil Dighin <[email protected]>\n> wrote:\n>\n>> Hi\n>> I suspect lock contention and performance issues with __int128. And I\n>> would like to check the performance by forcibly disabling\n>> int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).\n>> Is it possible to disable int128 in PostgreSQL?\n>>\n>> On Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <\n>>> [email protected]> wrote:\n>>> >Hi,\n>>> >Yes, lock contention reduced with postgresqlv14.\n>>> >Lock acquire reduced 18% to 10%\n>>> >10.49 %postgres postgres [.] LWLockAcquire\n>>> >5.09% postgres postgres [.] _bt_compare\n>>> >\n>>> >Is lock contention can be reduced to 0-3%?\n>>>\n>>> Probably not, or at least not easily. Because of the atomic instructions\n>>> the locking also includes some other costs (e.g. cache misses, serializing\n>>> store buffers,...).\n>>>\n>>> There's a good bit we can do to increase the cache efficiency around\n>>> buffer headers, but it won't get us quite that low I'd guess.\n>>>\n>>>\n>>> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n>>>\n>>> Without knowing what proportion they have to each and to non-waiting\n>>> backends that unfortunately doesn't help that much..\n>>>\n>>> Andres\n>>>\n>>> --\n>>> Sent from my Android device with K-9 Mail. 
Please excuse my brevity.\n>>>\n>>\n\n1. How to check which NUMA node in PostgreSQL process fetching from the memory?2. Is NUMA configuration is better for PostgreSQL?      vm.zone_reclaim_mode= 0       numactl --interleave = all  /init.d/ PostgreSQL start        kernel.numa_balancing= 0On Wednesday, November 17, 2021, arjun shetty <[email protected]> wrote:Hi AskhilPostgreSQL utilizes  lightweight locks(LWLocks) to synchronize and control access to the buffer content. A process acquires an LWLock in a  shared mode to read from the buffer and an exclusive mode  to write to the buffer. Therefore, while holding an exclusive lock, a process prevents other processes from acquiring a shared or exclusive lock. Also, a shared lock can be acquired concurrently by other processes. The issue starts when many processes acquire an exclusive lock on buffer content. As a result, LwlockAcquire seen as top hot function in profilng. Here  need to understand LwlockAcquire is lock contention or cpu time spent inside the method/ function(top function in profiling)It can analysed log  “LwStatus” with parameters like ex-acquire-count(exclusive mode) , sh-acquire-count , block-count and spin-delay-countTotal lock acquisition request = ex-acquire-count+sh-acquire-count)Time lock contention %= block count)/ Total lock acquisition request.Time lock contention may provide as most of cpu time inside the function rather than spinning/ waiting for lock.On Friday, November 12, 2021, Ashkil Dighin <[email protected]> wrote:HiI suspect lock contention and performance issues with __int128. And I would like to check the performance by forcibly disabling int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).  Is it possible to disable int128 in PostgreSQL?On Thursday, October 28, 2021, Andres Freund <[email protected]> wrote:Hi, \n\nOn October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <[email protected]> wrote:\n>Hi,\n>Yes, lock contention reduced with postgresqlv14.\n>Lock acquire reduced 18% to 10%\n>10.49 %postgres  postgres            [.] LWLockAcquire\n>5.09%  postgres  postgres            [.] _bt_compare\n>\n>Is lock contention can be reduced to 0-3%?\n\nProbably not, or at least not easily. Because of the atomic instructions the locking also includes  some other costs (e.g. cache misses, serializing store buffers,...).\n\nThere's a good bit we can do to increase the cache efficiency around buffer headers, but it won't get us quite that low I'd guess.\n\n\n>On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n\nWithout knowing what proportion they have to each and to non-waiting backends that unfortunately doesn't help that much..\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Mon, 29 Nov 2021 18:09:43 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" }, { "msg_contents": "В Чт, 28/10/2021 в 03:14 +0530, Ashkil Dighin пишет:\n> Hi,\n> Yes, lock contention reduced with postgresqlv14.\n> Lock acquire reduced 18% to 10%\n> 10.49 %postgres postgres [.] LWLockAcquire\n> 5.09% postgres postgres [.] 
_bt_compare\n> \n> Is lock contention can be reduced to 0-3%?\n> On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”\n> \n> \n> On Tuesday, October 26, 2021, Andres Freund <[email protected]> wrote:\n> > Hi,\n> > \n> > On 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:\n> > > PostgreSQL version: 13.3\n> > \n> > You could try postgres 14 - that did improve scalability in some areas.\n> > \n> > \n> > \n> > > Perf data for 24vu(TPC-C)\n> > > --------------------------------\n> > > \n> > > 18.99% postgres postgres [.] LWLockAcquire\n> > > 7.09% postgres postgres [.] _bt_compare\n> > > 8.66% postgres postgres [.] LWLockRelease\n> > > 2.28% postgres postgres [.] GetSnapshotData\n> > > 2.25% postgres postgres [.] hash_search_with_hash_value\n> > > 2.11% postgres postgres [.] XLogInsertRecord\n> > > 1.98% postgres postgres [.] PinBuffer\n> > \n> > To be more useful you'd need to create a profile with 'caller' information\n> > using 'perf record --call-graph dwarf', and then check what the important\n> > callers are.\n> > \n> > \n> > > Postgres.conf used in Baremetal\n> > > ========================\n> > > shared_buffers = 128GB(1/4 th RAM size)\n> > > effective_cachesize=392 GB(1/3 or 75% of RAM size)\n> > \n> > If your hot data set is actually larger than s_b, I'd recommend trying a\n> > larger s_b. It's plausible that a good chunk of lock contention is from that.\n> > \n\nCould you try attached patch?\nIt reduces lock contention in buffer manager by not acquiring\ntwo locks simultaneously on buffer eviction.\n\nv1-0001-* - it is file for postgresql 14 and master branch\nvpg13v1-0001-* - this file for postgresql 13\n\nCorresponding (not so loud) discussion:\nhttps://postgr.es/m/flat/1edbb61981fe1d99c3f20e3d56d6c88999f4227c.camel%40postgrespro.ru\n\n--------\n\nregards,\n\nYura Sokolov\[email protected]\[email protected]", "msg_date": "Tue, 21 Dec 2021 08:45:52 +0300", "msg_from": "Yura Sokolov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock contention high" } ]
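Andres's question above about what proportion of backends are actually waiting on those LWLocks, as opposed to running on CPU, can be estimated by sampling pg_stat_activity. A minimal sketch using only the standard wait_event columns; repeating the sample with \watch and the one-second interval are assumptions added here, not something from the thread:

-- One sample of what every active backend is doing right now.
-- Re-run it many times (e.g. \watch 1 in psql) and aggregate the samples to
-- estimate how much time goes to LWLock waits such as BufferContent or
-- WALInsert versus plain CPU work.
SELECT coalesce(wait_event_type, 'Running') AS wait_event_type,
       coalesce(wait_event, 'CPU')          AS wait_event,
       count(*)                             AS backends
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid()
GROUP BY 1, 2
ORDER BY backends DESC;

Across many samples, the share of LWLock rows relative to the 'Running' rows gives roughly the proportion Andres is asking for.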
[ { "msg_contents": "Hi,\nI am running the below query. Table has 21 million records. I get an Out Of\nMemory error after a while.(from both pgadmin and psql). Can someone review\nDB parameters given below.\n\nselect t.*,g.column,a.column from\ngk_staging g, transaction t,account a\nwhere\ng.accountcodeis not null AND\ng.accountcode::text <> '' AND\nlength(g.accountcode)=13 AND\ng.closeid::text=t.transactionid::text AND\nsubsrting(g.accountcode::text,8)=a.mask_code::text\n\nBelow are system parameters.\nshared_buffers=3GB\nwork_mem=2GB\neffective_cache_size=10GB\nmaintenance_work_mem=1GB\nmax_connections=250\n\nI am unable to paste explain plan here due to security concerns.\n\nRegards,\nAditya.\n\nHi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Mon, 18 Oct 2021 22:12:52 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Query out of memory" }, { "msg_contents": "Database has 20GB RAM.\n\nOn Mon, Oct 18, 2021 at 10:12 PM aditya desai <[email protected]> wrote:\n\n> Hi,\n> I am running the below query. Table has 21 million records. I get an Out\n> Of Memory error after a while.(from both pgadmin and psql). Can someone\n> review DB parameters given below.\n>\n> select t.*,g.column,a.column from\n> gk_staging g, transaction t,account a\n> where\n> g.accountcodeis not null AND\n> g.accountcode::text <> '' AND\n> length(g.accountcode)=13 AND\n> g.closeid::text=t.transactionid::text AND\n> subsrting(g.accountcode::text,8)=a.mask_code::text\n>\n> Below are system parameters.\n> shared_buffers=3GB\n> work_mem=2GB\n> effective_cache_size=10GB\n> maintenance_work_mem=1GB\n> max_connections=250\n>\n> I am unable to paste explain plan here due to security concerns.\n>\n> Regards,\n> Aditya.\n>\n>\n\nDatabase has 20GB RAM.On Mon, Oct 18, 2021 at 10:12 PM aditya desai <[email protected]> wrote:Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Mon, 18 Oct 2021 22:13:33 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Work memory 2 GB ?\nIs this intentional?\nHow many max active connections do you see?\n\nif you have too many connections. 
You can try toning it down to\nhttps://pgtune.leopard.in.ua/ to start with.\n\n\nOn Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:\n\n> Hi,\n> I am running the below query. Table has 21 million records. I get an Out\n> Of Memory error after a while.(from both pgadmin and psql). Can someone\n> review DB parameters given below.\n>\n> select t.*,g.column,a.column from\n> gk_staging g, transaction t,account a\n> where\n> g.accountcodeis not null AND\n> g.accountcode::text <> '' AND\n> length(g.accountcode)=13 AND\n> g.closeid::text=t.transactionid::text AND\n> subsrting(g.accountcode::text,8)=a.mask_code::text\n>\n> Below are system parameters.\n> shared_buffers=3GB\n> work_mem=2GB\n> effective_cache_size=10GB\n> maintenance_work_mem=1GB\n> max_connections=250\n>\n> I am unable to paste explain plan here due to security concerns.\n>\n> Regards,\n> Aditya.\n>\n>\n\nWork memory 2 GB ?Is this intentional?How many max active connections do you see?if you have too many connections. You can try toning it down to https://pgtune.leopard.in.ua/ to start with.On Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Mon, 18 Oct 2021 22:18:23 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Not many active connections. Only 30-40.\n\nOn Monday, October 18, 2021, Vijaykumar Jain <\[email protected]> wrote:\n\n> Work memory 2 GB ?\n> Is this intentional?\n> How many max active connections do you see?\n>\n> if you have too many connections. You can try toning it down to\n> https://pgtune.leopard.in.ua/ to start with.\n>\n>\n> On Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:\n>\n>> Hi,\n>> I am running the below query. Table has 21 million records. I get an Out\n>> Of Memory error after a while.(from both pgadmin and psql). Can someone\n>> review DB parameters given below.\n>>\n>> select t.*,g.column,a.column from\n>> gk_staging g, transaction t,account a\n>> where\n>> g.accountcodeis not null AND\n>> g.accountcode::text <> '' AND\n>> length(g.accountcode)=13 AND\n>> g.closeid::text=t.transactionid::text AND\n>> subsrting(g.accountcode::text,8)=a.mask_code::text\n>>\n>> Below are system parameters.\n>> shared_buffers=3GB\n>> work_mem=2GB\n>> effective_cache_size=10GB\n>> maintenance_work_mem=1GB\n>> max_connections=250\n>>\n>> I am unable to paste explain plan here due to security concerns.\n>>\n>> Regards,\n>> Aditya.\n>>\n>>\n\nNot many active connections. Only 30-40.On Monday, October 18, 2021, Vijaykumar Jain <[email protected]> wrote:Work memory 2 GB ?Is this intentional?How many max active connections do you see?if you have too many connections. 
You can try toning it down to https://pgtune.leopard.in.ua/ to start with.On Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Tue, 19 Oct 2021 00:33:27 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Le lun. 18 oct. 2021 à 21:03, aditya desai <[email protected]> a écrit :\n\n> Not many active connections. Only 30-40.\n>\n\nThis means you can consume up to 60-80 GB. Way much more than the available\nRAM. You should lower your work_mem value.\n\n\nOn Monday, October 18, 2021, Vijaykumar Jain <\n> [email protected]> wrote:\n>\n>> Work memory 2 GB ?\n>> Is this intentional?\n>> How many max active connections do you see?\n>>\n>> if you have too many connections. You can try toning it down to\n>> https://pgtune.leopard.in.ua/ to start with.\n>>\n>>\n>> On Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:\n>>\n>>> Hi,\n>>> I am running the below query. Table has 21 million records. I get an Out\n>>> Of Memory error after a while.(from both pgadmin and psql). Can someone\n>>> review DB parameters given below.\n>>>\n>>> select t.*,g.column,a.column from\n>>> gk_staging g, transaction t,account a\n>>> where\n>>> g.accountcodeis not null AND\n>>> g.accountcode::text <> '' AND\n>>> length(g.accountcode)=13 AND\n>>> g.closeid::text=t.transactionid::text AND\n>>> subsrting(g.accountcode::text,8)=a.mask_code::text\n>>>\n>>> Below are system parameters.\n>>> shared_buffers=3GB\n>>> work_mem=2GB\n>>> effective_cache_size=10GB\n>>> maintenance_work_mem=1GB\n>>> max_connections=250\n>>>\n>>> I am unable to paste explain plan here due to security concerns.\n>>>\n>>> Regards,\n>>> Aditya.\n>>>\n>>>\n\nLe lun. 18 oct. 2021 à 21:03, aditya desai <[email protected]> a écrit :Not many active connections. Only 30-40.This means you can consume up to 60-80 GB. Way much more than the available RAM. You should lower your work_mem value.On Monday, October 18, 2021, Vijaykumar Jain <[email protected]> wrote:Work memory 2 GB ?Is this intentional?How many max active connections do you see?if you have too many connections. You can try toning it down to https://pgtune.leopard.in.ua/ to start with.On Mon, Oct 18, 2021, 10:13 PM aditya desai <[email protected]> wrote:Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). 
Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Mon, 18 Oct 2021 23:05:35 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "This is how I received a query from the App Team. They have migrated from\nOracle to Postgres. I see in Oracle where it is working fine and also has\nthe same joins. Some of the developers increased the work_mem. I will try\nand tone it down. Will get back to you. Thanks.\n\nOn Tue, Oct 19, 2021 at 1:28 AM Geri Wright <[email protected]> wrote:\n\n> Hi,\n> It looks like you have Cartesian joins in the query. Try updating your\n> where clause to include\n>\n> And g.columnname = t.columnname\n> And t.columnname2 = a.columnname2\n>\n> On Mon, Oct 18, 2021, 12:43 PM aditya desai <[email protected]> wrote:\n>\n>> Hi,\n>> I am running the below query. Table has 21 million records. I get an Out\n>> Of Memory error after a while.(from both pgadmin and psql). Can someone\n>> review DB parameters given below.\n>>\n>> select t.*,g.column,a.column from\n>> gk_staging g, transaction t,account a\n>> where\n>> g.accountcodeis not null AND\n>> g.accountcode::text <> '' AND\n>> length(g.accountcode)=13 AND\n>> g.closeid::text=t.transactionid::text AND\n>> subsrting(g.accountcode::text,8)=a.mask_code::text\n>>\n>> Below are system parameters.\n>> shared_buffers=3GB\n>> work_mem=2GB\n>> effective_cache_size=10GB\n>> maintenance_work_mem=1GB\n>> max_connections=250\n>>\n>> I am unable to paste explain plan here due to security concerns.\n>>\n>> Regards,\n>> Aditya.\n>>\n>>\n\nThis is how I received a query from the App Team. They have migrated from Oracle to Postgres. I see in Oracle where it is working fine and also has the same joins. Some of the developers increased the work_mem. I will try and tone it down. Will get back to you. Thanks.On Tue, Oct 19, 2021 at 1:28 AM Geri Wright <[email protected]> wrote:Hi,It looks like you have Cartesian joins in the query.   Try updating your where clause to includeAnd g.columnname = t.columnname And t.columnname2 = a.columnname2On Mon, Oct 18, 2021, 12:43 PM aditya desai <[email protected]> wrote:Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Tue, 19 Oct 2021 09:37:48 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Sending to a performance group instead of PLPGSQL.\n\n.\n.\nHi,\nI am running the below query. 
Table has 21 million records. I get an Out Of\nMemory error after a while.(from both pgadmin and psql). Can someone review\nDB parameters given below.\n\nselect t.*,g.column,a.column from\ngk_staging g, transaction t,account a\nwhere\ng.accountcodeis not null AND\ng.accountcode::text <> '' AND\nlength(g.accountcode)=13 AND\ng.closeid::text=t.transactionid::text AND\nsubsrting(g.accountcode::text,8)=a.mask_code::text\n\nBelow are system parameters.\nshared_buffers=3GB\nwork_mem=2GB\neffective_cache_size=10GB\nmaintenance_work_mem=1GB\nmax_connections=250\n\nI am unable to paste explain plan here due to security concerns.\n\nRegards,\nAditya.\n\nSending to a performance group instead of PLPGSQL...Hi,I am running the below query. Table has 21 million records. I get an Out Of Memory error after a while.(from both pgadmin and psql). Can someone review DB parameters given below.select t.*,g.column,a.column fromgk_staging g, transaction t,account awhereg.accountcodeis not null ANDg.accountcode::text <> '' ANDlength(g.accountcode)=13 ANDg.closeid::text=t.transactionid::text ANDsubsrting(g.accountcode::text,8)=a.mask_code::textBelow are system parameters.shared_buffers=3GBwork_mem=2GBeffective_cache_size=10GBmaintenance_work_mem=1GBmax_connections=250I am unable to paste explain plan here due to security concerns.Regards,Aditya.", "msg_date": "Tue, 19 Oct 2021 11:28:46 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Query out of memory" }, { "msg_contents": "That work_mem value could be way too high depending on how much ram your\nserver has...which would be a very important bit of information to help\nfigure this out. Also, what Postgres / OS versions?\n\nThat work_mem value could be way too high depending on how much ram your server has...which would be a very important bit of information to help figure this out. Also, what Postgres / OS versions?", "msg_date": "Tue, 19 Oct 2021 05:54:05 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "On Tue, 19 Oct 2021 at 05:54, Adam Brusselback <[email protected]>\nwrote:\n\n> That work_mem value could be way too high depending on how much ram your\n> server has...which would be a very important bit of information to help\n> figure this out. Also, what Postgres / OS versions?\n>\n\nWORK_MEM is definitely too high. With 250 connections there is no way you\ncould allocate 2G to each one of them if needed\n\n\nDave Cramer\nwww.postgres.rocks\n\nOn Tue, 19 Oct 2021 at 05:54, Adam Brusselback <[email protected]> wrote:That work_mem value could be way too high depending on how much ram your server has...which would be a very important bit of information to help figure this out. Also, what Postgres / OS versions?WORK_MEM is definitely too high. With 250 connections there is no way you could allocate 2G to each one of them if neededDave Cramerwww.postgres.rocks", "msg_date": "Tue, 19 Oct 2021 05:57:37 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "On Tue, Oct 19, 2021 at 11:28:46AM +0530, aditya desai wrote:\n> I am running the below query. Table has 21 million records. I get an Out Of\n> Memory error after a while.(from both pgadmin and psql). 
Can someone review\n\nIs the out of memory error on the client side ?\nThen you've simply returned more rows than the client can support.\n\nIn that case, you can run it with \"explain analyze\" to prove that the server\nside can run the query. That returns no data rows to the client, but shows the\nnumber of rows which would normally be returned.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 Oct 2021 05:26:13 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Query out of memory" }, { "msg_contents": "Hi Justin,\nOut of memory on pgadmin and psql. I executed it with explain analyze.\nStill going out of memory.\n\n Also currently 250 user connections are not being made. There are hardly\n10 connections to database. When I run thi query it is going out of memory.\n\nAlso this query is part of a view that gets referred in a\nprocedure.Transaction table is partitioned table but due to business\nrequirements partition key is not part of where clause.\n\nRegards,\nAditya.\n\nOn Tuesday, October 19, 2021, Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Oct 19, 2021 at 11:28:46AM +0530, aditya desai wrote:\n> > I am running the below query. Table has 21 million records. I get an Out\n> Of\n> > Memory error after a while.(from both pgadmin and psql). Can someone\n> review\n>\n> Is the out of memory error on the client side ?\n> Then you've simply returned more rows than the client can support.\n>\n> In that case, you can run it with \"explain analyze\" to prove that the\n> server\n> side can run the query. That returns no data rows to the client, but\n> shows the\n> number of rows which would normally be returned.\n>\n> --\n> Justin\n>\n\nHi Justin,Out of memory on pgadmin and psql. I executed it with explain analyze. Still going out of memory. Also currently 250 user connections are not being made. There are hardly 10 connections to database. When I run thi query it is going out of memory. Also this query is part of a view that gets referred in a procedure.Transaction table is partitioned table but due to business requirements partition key is not part of where clause.Regards,Aditya.On Tuesday, October 19, 2021, Justin Pryzby <[email protected]> wrote:On Tue, Oct 19, 2021 at 11:28:46AM +0530, aditya desai wrote:\n> I am running the below query. Table has 21 million records. I get an Out Of\n> Memory error after a while.(from both pgadmin and psql). Can someone review\n\nIs the out of memory error on the client side ?\nThen you've simply returned more rows than the client can support.\n\nIn that case, you can run it with \"explain analyze\" to prove that the server\nside can run the query.  That returns no data rows to the client, but shows the\nnumber of rows which would normally be returned.\n\n-- \nJustin", "msg_date": "Tue, 19 Oct 2021 16:16:46 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "På tirsdag 19. oktober 2021 kl. 07:58:46, skrev aditya desai <\[email protected] <mailto:[email protected]>>: \nSending to a performance group instead of PLPGSQL.\n\n\n. \n. \nHi, \nI am running the below query. Table has 21 million records. I get an Out Of \nMemory error after a while.(from both pgadmin and psql). Can someone review DB \nparameters given below. 
\n\nselect t.*,g.column,a.column from \ngk_staging g, transaction t,account a \nwhere \ng.accountcodeis not null AND \ng.accountcode::text <> '' AND \nlength(g.accountcode)=13 AND \ng.closeid::text=t.transactionid::text AND \nsubsrting(g.accountcode::text,8)=a.mask_code::text \n\nBelow are system parameters. \nshared_buffers=3GB \nwork_mem=2GB \neffective_cache_size=10GB \nmaintenance_work_mem=1GB \nmax_connections=250 \n\nI am unable to paste explain plan here due to security concerns. \n\nYou have not provided schema, explain-output nor the error-message. \nWithout this it's pretty much guess-work... \n\n\n\n--\n Andreas Joseph Krogh", "msg_date": "Tue, 19 Oct 2021 13:08:03 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Fwd: Query out of memory" }, { "msg_contents": "Check explain plan, change work mem to 100MBs and then check explain plan\nagain. If it changed, then try explain analyze.\n\nWork mem is limit is used per node in the plan, so especially with\npartitioned tables, that limit is way too high.\n\nCheck explain plan, change work mem to 100MBs and then check explain plan again. If it changed, then try explain analyze.Work mem is limit is used per node in the plan, so especially with partitioned tables, that limit is way too high.", "msg_date": "Tue, 19 Oct 2021 07:39:34 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Thanks Michael. I will check this further.\n\nOn Tue, Oct 19, 2021 at 7:09 PM Michael Lewis <[email protected]> wrote:\n\n> Check explain plan, change work mem to 100MBs and then check explain plan\n> again. If it changed, then try explain analyze.\n>\n> Work mem is limit is used per node in the plan, so especially with\n> partitioned tables, that limit is way too high.\n>\n\nThanks Michael. I will check this further.On Tue, Oct 19, 2021 at 7:09 PM Michael Lewis <[email protected]> wrote:Check explain plan, change work mem to 100MBs and then check explain plan again. If it changed, then try explain analyze.Work mem is limit is used per node in the plan, so especially with partitioned tables, that limit is way too high.", "msg_date": "Tue, 19 Oct 2021 19:18:04 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query out of memory" }, { "msg_contents": "Do you see any issue in PostgreSQL log files?\n\n\nRegards,\nNinad Shah\n\nOn Tue, 19 Oct 2021 at 16:17, aditya desai <[email protected]> wrote:\n\n> Hi Justin,\n> Out of memory on pgadmin and psql. I executed it with explain analyze.\n> Still going out of memory.\n>\n> Also currently 250 user connections are not being made. There are hardly\n> 10 connections to database. When I run thi query it is going out of memory.\n>\n> Also this query is part of a view that gets referred in a\n> procedure.Transaction table is partitioned table but due to business\n> requirements partition key is not part of where clause.\n>\n> Regards,\n> Aditya.\n>\n> On Tuesday, October 19, 2021, Justin Pryzby <[email protected]> wrote:\n>\n>> On Tue, Oct 19, 2021 at 11:28:46AM +0530, aditya desai wrote:\n>> > I am running the below query. Table has 21 million records. I get an\n>> Out Of\n>> > Memory error after a while.(from both pgadmin and psql). 
Can someone\n>> review\n>>\n>> Is the out of memory error on the client side ?\n>> Then you've simply returned more rows than the client can support.\n>>\n>> In that case, you can run it with \"explain analyze\" to prove that the\n>> server\n>> side can run the query. That returns no data rows to the client, but\n>> shows the\n>> number of rows which would normally be returned.\n>>\n>> --\n>> Justin\n>>\n>\n\nDo you see any issue in PostgreSQL log files?Regards,Ninad ShahOn Tue, 19 Oct 2021 at 16:17, aditya desai <[email protected]> wrote:Hi Justin,Out of memory on pgadmin and psql. I executed it with explain analyze. Still going out of memory. Also currently 250 user connections are not being made. There are hardly 10 connections to database. When I run thi query it is going out of memory. Also this query is part of a view that gets referred in a procedure.Transaction table is partitioned table but due to business requirements partition key is not part of where clause.Regards,Aditya.On Tuesday, October 19, 2021, Justin Pryzby <[email protected]> wrote:On Tue, Oct 19, 2021 at 11:28:46AM +0530, aditya desai wrote:\n> I am running the below query. Table has 21 million records. I get an Out Of\n> Memory error after a while.(from both pgadmin and psql). Can someone review\n\nIs the out of memory error on the client side ?\nThen you've simply returned more rows than the client can support.\n\nIn that case, you can run it with \"explain analyze\" to prove that the server\nside can run the query.  That returns no data rows to the client, but shows the\nnumber of rows which would normally be returned.\n\n-- \nJustin", "msg_date": "Fri, 22 Oct 2021 20:11:17 +0530", "msg_from": "Ninad Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query out of memory" } ]
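Michael's suggestion above can be tried without touching postgresql.conf by lowering work_mem for the current session only and comparing the plans. A sketch, not a definitive fix: the 100MB value comes from his message, the select list uses g.accountcode and a.mask_code as stand-ins because the original post redacts the real output columns, and the substring/spacing typos in the posted query are corrected here:

SET work_mem = '100MB';                          -- session-only change

EXPLAIN (ANALYZE, BUFFERS)
select t.*, g.accountcode, a.mask_code           -- stand-in output columns
from gk_staging g, transaction t, account a
where g.accountcode is not null
  and g.accountcode::text <> ''
  and length(g.accountcode) = 13
  and g.closeid::text = t.transactionid::text
  and substring(g.accountcode::text, 8) = a.mask_code::text;

RESET work_mem;

Comparing the plans at the two work_mem settings shows whether the join strategy or spilling changes, which is the per-node memory question Michael raised; whether the client can hold the full result set is the separate point Justin made.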
[ { "msg_contents": "Hi,\n\nWe are trying to use the postgres view to accommodate some of the complex\nworkflow related operations, we perform we saw like using union in a where\nclause inside a view actually pushed the where clause to both subqueries\nand we get good performance using the index , but when used in a join it\ndoes a full scan and filter of the table instead of pushing the filter\ncolumn as a where clause. we also found that when used without any\njoin/where in the union clause (*i.e.,* *select ... from template union all\nselect ... from template_staging)* works with joins just fine , i think the\nonly problem is when we try to use both union and where/join the issue\nstarts to happen is there any specific flag or release planned to address\nthis issue.\n\nPostgres version: PostgreSQL 12.7 (Debian 12.7-1.pgdg100+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\n\n*SQL Steps:*\n\ncreate table template\n(\n id int primary key,\n name varchar(30) unique,\n description varchar(30)\n);\n\ncreate table template_staging\n(\n id int primary key,\n name varchar(30) unique,\n description varchar(30),\n is_deleted bool\n);\n\ninsert into template (id, name, description)\nvalues (1, 'test1', 'hello'),\n (2, 'test2', 'hello world 2'),\n (3, 'test3', 'hello world 3');\ninsert into template_staging (id, name, description, is_deleted)\nvalues (3, 'test3', 'revert hello world 3', false),\n (4, 'test4', 'hello world 2', false),\n (5, 'test5', 'hello world 3', false);\n\ncreate view template_view (id, name, description, is_staged) as\nselect t.id,t.name, t.description, false as is_staged\nfrom template t\n left join template_staging ts on t.name = ts.name and ts.name is null\nUNION ALL\nselect t.id, t.name, t.description, true as is_stage\nfrom template_staging t\nwhere is_deleted is false;\n\ncreate table tester(\n id int primary key,\n template_id int\n);\ninsert into tester (id, template_id)\nvalues (1, 1),\n (2, 2),\n (3, 3),(4, 4);\n\n\n*Analysis:*\n\n*EXPLAIN ANALYZE select * from template_view where id=1;*\n\nAppend (cost=0.15..16.36 rows=2 width=161) (actual time=0.012..0.015\nrows=1 loops=1)\n -> Index Scan using template_pkey on template t (cost=0.15..8.17\nrows=1 width=161) (actual time=0.011..0.012 rows=1 loops=1)\n Index Cond: (id = 1)\n -> Index Scan using template_staging_pkey on template_staging t_1\n(cost=0.15..8.17 rows=1 width=161) (actual time=0.002..0.002 rows=0\nloops=1)\n Index Cond: (id = 1)\n Filter: (is_deleted IS FALSE)\n\n\n*EXPLAIN ANALYZE select * from template_view where name='test1';*\n\nAppend (cost=0.15..16.36 rows=2 width=157) (actual time=0.012..0.015\nrows=1 loops=1)\n -> Index Scan using template_name_key on template t (cost=0.15..8.17\nrows=1 width=157) (actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: ((name)::text = 'test1'::text)\n -> Index Scan using template_staging_name_key on template_staging t_1\n (cost=0.15..8.17 rows=1 width=157) (actual time=0.002..0.002 rows=0\nloops=1)\n Index Cond: ((name)::text = 'test1'::text)\n Filter: (is_deleted IS FALSE)\n\n\n\n*EXPLAIN ANALYZE select * from tester t inner join template_view tv on\ntv.id <http://tv.id> = t.template_idwhere t.id <http://t.id>=1;*\n\nHash Join (cost=8.18..48.19 rows=3 width=169) (actual\ntime=0.024..0.032 rows=1 loops=1)\n Hash Cond: (t_1.id = t.template_id)\n -> Append (cost=0.00..38.27 rows=645 width=161) (actual\ntime=0.008..0.014 rows=6 loops=1)\n -> Seq Scan on template t_1 (cost=0.00..14.30 rows=430\nwidth=161) (actual time=0.008..0.009 rows=3 
loops=1)\n -> Seq Scan on template_staging t_2 (cost=0.00..14.30\nrows=215 width=161) (actual time=0.003..0.004 rows=3 loops=1)\n Filter: (is_deleted IS FALSE)\n -> Hash (cost=8.17..8.17 rows=1 width=8) (actual time=0.011..0.011\nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Index Scan using tester_pkey on tester t (cost=0.15..8.17\nrows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)\n Index Cond: (id = 1)\n\n\n*EXPLAIN (ANALYZE, BUFFERS) select * from template_view where id=1;*\n\nAppend (cost=0.15..16.36 rows=2 width=161) (actual time=0.011..0.015\nrows=1 loops=1)\n Buffers: shared hit=3\n -> Index Scan using template_pkey on template t (cost=0.15..8.17\nrows=1 width=161) (actual time=0.011..0.011 rows=1 loops=1)\n Index Cond: (id = 1)\n Buffers: shared hit=2\n -> Index Scan using template_staging_pkey on template_staging t_1\n(cost=0.15..8.17 rows=1 width=161) (actual time=0.002..0.002 rows=0\nloops=1)\n Index Cond: (id = 1)\n Filter: (is_deleted IS FALSE)\n Buffers: shared hit=1\n\n\n\n*EXPLAIN (ANALYZE, BUFFERS) select * from tester t inner join\ntemplate_view tv on tv.id <http://tv.id> = t.template_idwhere t.id\n<http://t.id>=1;*\n\nHash Join (cost=8.18..48.19 rows=3 width=169) (actual\ntime=0.019..0.025 rows=1 loops=1)\n Hash Cond: (t_1.id = t.template_id)\n Buffers: shared hit=4\n -> Append (cost=0.00..38.27 rows=645 width=161) (actual\ntime=0.007..0.011 rows=6 loops=1)\n Buffers: shared hit=2\n -> Seq Scan on template t_1 (cost=0.00..14.30 rows=430\nwidth=161) (actual time=0.006..0.007 rows=3 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on template_staging t_2 (cost=0.00..14.30\nrows=215 width=161) (actual time=0.002..0.003 rows=3 loops=1)\n Filter: (is_deleted IS FALSE)\n Buffers: shared hit=1\n -> Hash (cost=8.17..8.17 rows=1 width=8) (actual time=0.008..0.009\nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=2\n -> Index Scan using tester_pkey on tester t (cost=0.15..8.17\nrows=1 width=8) (actual time=0.006..0.007 rows=1 loops=1)\n Index Cond: (id = 1)\n Buffers: shared hit=2\n\n\nPlease let me know if you need more info.\n\n\nThanks,\n\nMithran\n\nHi,We are trying to use the postgres view to accommodate some of the complex workflow related operations, we perform we saw like using union in a where clause inside a view actually pushed the where clause to both subqueries and we get good performance using the index , but when used in a join it does a full scan and filter of the table instead of pushing the filter column as a where clause. we also found that when used without any join/where in the union clause (i.e., select ... from template union all select ... 
from template_staging) works with joins just fine , i think the only problem is when we try to use both union and where/join the issue starts to happen is there any specific flag or release planned to address this issue.Postgres version: PostgreSQL 12.7 (Debian 12.7-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bitSQL Steps:create table template( id int primary key, name varchar(30) unique, description varchar(30));create table template_staging( id int primary key, name varchar(30) unique, description varchar(30), is_deleted bool);insert into template (id, name, description)values (1, 'test1', 'hello'), (2, 'test2', 'hello world 2'), (3, 'test3', 'hello world 3');insert into template_staging (id, name, description, is_deleted)values (3, 'test3', 'revert hello world 3', false), (4, 'test4', 'hello world 2', false), (5, 'test5', 'hello world 3', false);create view template_view (id, name, description, is_staged) asselect t.id,t.name, t.description, false as is_stagedfrom template t left join template_staging ts on t.name = ts.name and ts.name is nullUNION ALLselect t.id, t.name, t.description, true as is_stagefrom template_staging twhere is_deleted is false;create table tester( id int primary key, template_id int);insert into tester (id, template_id)values (1, 1), (2, 2), (3, 3),(4, 4);Analysis:EXPLAIN ANALYZE select * from template_view where id=1;Append  (cost=0.15..16.36 rows=2 width=161) (actual time=0.012..0.015 rows=1 loops=1)  ->  Index Scan using template_pkey on template t  (cost=0.15..8.17 rows=1 width=161) (actual time=0.011..0.012 rows=1 loops=1)        Index Cond: (id = 1)  ->  Index Scan using template_staging_pkey on template_staging t_1  (cost=0.15..8.17 rows=1 width=161) (actual time=0.002..0.002 rows=0 loops=1)        Index Cond: (id = 1)        Filter: (is_deleted IS FALSE)EXPLAIN ANALYZE select * from template_view where name='test1';Append  (cost=0.15..16.36 rows=2 width=157) (actual time=0.012..0.015 rows=1 loops=1)  ->  Index Scan using template_name_key on template t  (cost=0.15..8.17 rows=1 width=157) (actual time=0.012..0.012 rows=1 loops=1)        Index Cond: ((name)::text = 'test1'::text)  ->  Index Scan using template_staging_name_key on template_staging t_1  (cost=0.15..8.17 rows=1 width=157) (actual time=0.002..0.002 rows=0 loops=1)        Index Cond: ((name)::text = 'test1'::text)        Filter: (is_deleted IS FALSE)EXPLAIN ANALYZE select * from tester t inner join template_view tv on tv.id = t.template_idwhere t.id=1;\n\nHash Join  (cost=8.18..48.19 rows=3 width=169) (actual time=0.024..0.032 rows=1 loops=1)  Hash Cond: (t_1.id = t.template_id)  ->  Append  (cost=0.00..38.27 rows=645 width=161) (actual time=0.008..0.014 rows=6 loops=1)        ->  Seq Scan on template t_1  (cost=0.00..14.30 rows=430 width=161) (actual time=0.008..0.009 rows=3 loops=1)        ->  Seq Scan on template_staging t_2  (cost=0.00..14.30 rows=215 width=161) (actual time=0.003..0.004 rows=3 loops=1)              Filter: (is_deleted IS FALSE)  ->  Hash  (cost=8.17..8.17 rows=1 width=8) (actual time=0.011..0.011 rows=1 loops=1)        Buckets: 1024  Batches: 1  Memory Usage: 9kB        ->  Index Scan using tester_pkey on tester t  (cost=0.15..8.17 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)              Index Cond: (id = 1)\nEXPLAIN (ANALYZE, BUFFERS) select * from template_view where id=1;Append  (cost=0.15..16.36 rows=2 width=161) (actual time=0.011..0.015 rows=1 loops=1)  Buffers: shared hit=3  ->  Index Scan using template_pkey on 
template t  (cost=0.15..8.17 rows=1 width=161) (actual time=0.011..0.011 rows=1 loops=1)        Index Cond: (id = 1)        Buffers: shared hit=2  ->  Index Scan using template_staging_pkey on template_staging t_1  (cost=0.15..8.17 rows=1 width=161) (actual time=0.002..0.002 rows=0 loops=1)        Index Cond: (id = 1)        Filter: (is_deleted IS FALSE)        Buffers: shared hit=1EXPLAIN (ANALYZE, BUFFERS) select * from tester t inner join template_view tv on tv.id = t.template_idwhere t.id=1;Hash Join  (cost=8.18..48.19 rows=3 width=169) (actual time=0.019..0.025 rows=1 loops=1)  Hash Cond: (t_1.id = t.template_id)  Buffers: shared hit=4  ->  Append  (cost=0.00..38.27 rows=645 width=161) (actual time=0.007..0.011 rows=6 loops=1)        Buffers: shared hit=2        ->  Seq Scan on template t_1  (cost=0.00..14.30 rows=430 width=161) (actual time=0.006..0.007 rows=3 loops=1)              Buffers: shared hit=1        ->  Seq Scan on template_staging t_2  (cost=0.00..14.30 rows=215 width=161) (actual time=0.002..0.003 rows=3 loops=1)              Filter: (is_deleted IS FALSE)              Buffers: shared hit=1  ->  Hash  (cost=8.17..8.17 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)        Buckets: 1024  Batches: 1  Memory Usage: 9kB        Buffers: shared hit=2        ->  Index Scan using tester_pkey on tester t  (cost=0.15..8.17 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=1)              Index Cond: (id = 1)              Buffers: shared hit=2Please let me know if you need more info. Thanks,Mithran", "msg_date": "Tue, 19 Oct 2021 14:47:05 -0700", "msg_from": "Mithran Kulasekaran <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres views cannot use both union and join/where" }, { "msg_contents": "On Tue, Oct 19, 2021 at 2:48 PM Mithran Kulasekaran <\[email protected]> wrote:\n\n> i think the only problem is when we try to use both union and where/join\n> the issue starts to happen\n>\n\nI'm unconvinced this is actually an issue based upon what is presented\nhere. All I'm seeing is two decidedly different queries resulting in\ndifferent query plans. That the \"problem one\" isn't using an index isn't\nsurprising given the volume of data involved and the change from specifying\na literal value in the where clause to letting a join determine which\nresults to return.\n\nAssuming you have a real scenario you are testing with being able to\ndemonstrate (probably through the use of the query planner GUCs) that\nPostgreSQL can produce a better plan but doesn't by default would be a more\ncompelling case. More generally, you probably need to either use your real\nscenario's data to help demonstrate the issue or create a self-contained\ntest that is at least closer to what it produces (this approach still\nbenefits from seeing what is happening for real).\n\nDavid J.\n\nOn Tue, Oct 19, 2021 at 2:48 PM Mithran Kulasekaran <[email protected]> wrote:i think the only problem is when we try to use both union and where/join the issue starts to happenI'm unconvinced this is actually an issue based upon what is presented here.  All I'm seeing is two decidedly different queries resulting in different query plans.  
That the \"problem one\" isn't using an index isn't surprising given the volume of data involved and the change from specifying a literal value in the where clause to letting a join determine which results to return.Assuming you have a real scenario you are testing with being able to demonstrate (probably through the use of the query planner GUCs) that PostgreSQL can produce a better plan but doesn't by default would be a more compelling case.  More generally, you probably need to either use your real scenario's data to help demonstrate the issue or create a self-contained test that is at least closer to what it produces (this approach still benefits from seeing what is happening for real).David J.", "msg_date": "Tue, 19 Oct 2021 18:32:10 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" }, { "msg_contents": "I thought a union mashed together two queries. The where clause can appear\nin both. But the execution plan will almost certainly run the first query\nand the second query. It should throw an error if the types don't match or\nthe number of columns don't match.\n\nThere are so few use cases for unions that can't get fixed with better\nschema designs. I ran into a few over the years.\n\nOn Tue, Oct 19, 2021, 9:32 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Tue, Oct 19, 2021 at 2:48 PM Mithran Kulasekaran <\n> [email protected]> wrote:\n>\n>> i think the only problem is when we try to use both union and where/join\n>> the issue starts to happen\n>>\n>\n> I'm unconvinced this is actually an issue based upon what is presented\n> here. All I'm seeing is two decidedly different queries resulting in\n> different query plans. That the \"problem one\" isn't using an index isn't\n> surprising given the volume of data involved and the change from specifying\n> a literal value in the where clause to letting a join determine which\n> results to return.\n>\n> Assuming you have a real scenario you are testing with being able to\n> demonstrate (probably through the use of the query planner GUCs) that\n> PostgreSQL can produce a better plan but doesn't by default would be a more\n> compelling case. More generally, you probably need to either use your real\n> scenario's data to help demonstrate the issue or create a self-contained\n> test that is at least closer to what it produces (this approach still\n> benefits from seeing what is happening for real).\n>\n> David J.\n>\n>\n>\n\nI thought a union mashed together two queries. The where clause can appear in both. But the execution plan will almost certainly run the first query and the second query. It should throw an error if the types don't match or the number of columns don't match. There are so few use cases for unions that can't get fixed with better schema designs. I ran into a few over the years.On Tue, Oct 19, 2021, 9:32 PM David G. Johnston <[email protected]> wrote:On Tue, Oct 19, 2021 at 2:48 PM Mithran Kulasekaran <[email protected]> wrote:i think the only problem is when we try to use both union and where/join the issue starts to happenI'm unconvinced this is actually an issue based upon what is presented here.  All I'm seeing is two decidedly different queries resulting in different query plans.  
That the \"problem one\" isn't using an index isn't surprising given the volume of data involved and the change from specifying a literal value in the where clause to letting a join determine which results to return.Assuming you have a real scenario you are testing with being able to demonstrate (probably through the use of the query planner GUCs) that PostgreSQL can produce a better plan but doesn't by default would be a more compelling case.  More generally, you probably need to either use your real scenario's data to help demonstrate the issue or create a self-contained test that is at least closer to what it produces (this approach still benefits from seeing what is happening for real).David J.", "msg_date": "Tue, 19 Oct 2021 22:36:42 -0400", "msg_from": "Benedict Holland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" }, { "msg_contents": "On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <\[email protected]> wrote:\n\n> create view template_view (id, name, description, is_staged) as\n> select t.id,t.name, t.description, false as is_staged\n> from template t\n> left join template_staging ts on t.name = ts.name and ts.name is null\n>\n>\nDoes that work? I've only seen that type of logic written as-\n\nleft join template_staging ts on t.name = ts.name\nwhere ts.name is null\n\nOn Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <[email protected]> wrote:create view template_view (id, name, description, is_staged) asselect t.id,t.name, t.description, false as is_stagedfrom template t left join template_staging ts on t.name = ts.name and ts.name is nullDoes that work? I've only seen that type of logic written as-left join template_staging ts on t.name = ts.namewhere ts.name is null", "msg_date": "Tue, 19 Oct 2021 20:56:33 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" }, { "msg_contents": "On Tuesday, October 19, 2021, Michael Lewis <[email protected]> wrote:\n\n> On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <\n> [email protected]> wrote:\n>\n>> create view template_view (id, name, description, is_staged) as\n>> select t.id,t.name, t.description, false as is_staged\n>> from template t\n>> left join template_staging ts on t.name = ts.name and ts.name is null\n>>\n>>\n> Does that work? I've only seen that type of logic written as-\n>\n> left join template_staging ts on t.name = ts.name\n> where ts.name is null\n>\n\nThe are functionally equivalent, though the timing of the expression\nevaluation differs slightly.\n\nIt could also be written as an anti-join:\n\nSelect * from template as t where not exists (select 1 from\ntemplate_staging as ts where t.name = ts.name)\n\nDavid J.\n\nOn Tuesday, October 19, 2021, Michael Lewis <[email protected]> wrote:On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <[email protected]> wrote:create view template_view (id, name, description, is_staged) asselect t.id,t.name, t.description, false as is_stagedfrom template t left join template_staging ts on t.name = ts.name and ts.name is nullDoes that work? 
I've only seen that type of logic written as-left join template_staging ts on t.name = ts.namewhere ts.name is nullThe are functionally equivalent, though the timing of the expression evaluation differs slightly.It could also be written as an anti-join:Select * from template as t where not exists (select 1 from template_staging as ts where t.name = ts.name)David J.", "msg_date": "Tue, 19 Oct 2021 20:38:40 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Tuesday, October 19, 2021, Michael Lewis <[email protected]> wrote:\n>> On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <\n>> [email protected]> wrote:\n>>> create view template_view (id, name, description, is_staged) as\n>>> select t.id,t.name, t.description, false as is_staged\n>>> from template t\n>>> left join template_staging ts on t.name = ts.name and ts.name is null\n\n>> Does that work? I've only seen that type of logic written as-\n>> left join template_staging ts on t.name = ts.name\n>> where ts.name is null\n\n> The are functionally equivalent, though the timing of the expression\n> evaluation differs slightly.\n\nNo, not at all. Michael's version correctly implements an anti-join,\nwhere the first version does not. The reason is that the WHERE clause\n\"sees\" the column value post-JOIN, whereas the JOIN/ON clause \"sees\"\nvalues pre-JOIN.\n\nAssuming that the '=' operator is strict, the first query's ON clause\nreally reduces to constant false, so that you just get a null-extended\nimage of the left table. That's almost surely not what's wanted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Oct 2021 09:58:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" }, { "msg_contents": "On Wed, Oct 20, 2021 at 6:58 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Tuesday, October 19, 2021, Michael Lewis <[email protected]> wrote:\n> >> On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <\n> >> [email protected]> wrote:\n> >>> create view template_view (id, name, description, is_staged) as\n> >>> select t.id,t.name, t.description, false as is_staged\n> >>> from template t\n> >>> left join template_staging ts on t.name = ts.name and ts.name is null\n>\n> >> Does that work? I've only seen that type of logic written as-\n> >> left join template_staging ts on t.name = ts.name\n> >> where ts.name is null\n>\n> > The are functionally equivalent, though the timing of the expression\n> > evaluation differs slightly.\n>\n> No, not at all. Michael's version correctly implements an anti-join,\n> where the first version does not. The reason is that the WHERE clause\n> \"sees\" the column value post-JOIN, whereas the JOIN/ON clause \"sees\"\n> values pre-JOIN.\n>\n\nYeah, my bad. I was actually thinking this but then figured the OP\nwouldn't have written an anti-join that didn't actually work.\n\nMy original email was going to be:\n\nAdding the single table expression to the ON clause is shorthand for\nwriting:\n\nSELECT t.* FROM template AS t LEFT JOIN (SELECT * FROM template_staging\nWHERE template_staging.name IS NULL) AS ts ON t.name = ts.name;\n\nDavid J.\n\nOn Wed, Oct 20, 2021 at 6:58 AM Tom Lane <[email protected]> wrote:\"David G. 
Johnston\" <[email protected]> writes:\n> On Tuesday, October 19, 2021, Michael Lewis <[email protected]> wrote:\n>> On Tue, Oct 19, 2021 at 3:48 PM Mithran Kulasekaran <\n>> [email protected]> wrote:\n>>> create  view template_view (id, name, description, is_staged) as\n>>> select t.id,t.name, t.description, false as is_staged\n>>> from template t\n>>> left join template_staging ts on t.name = ts.name and ts.name is null\n\n>> Does that work? I've only seen that type of logic written as-\n>> left join template_staging ts on t.name = ts.name\n>> where ts.name is null\n\n> The are functionally equivalent, though the timing of the expression\n> evaluation differs slightly.\n\nNo, not at all.  Michael's version correctly implements an anti-join,\nwhere the first version does not.  The reason is that the WHERE clause\n\"sees\" the column value post-JOIN, whereas the JOIN/ON clause \"sees\"\nvalues pre-JOIN.Yeah, my bad.  I was actually thinking this but then figured the OP wouldn't have written an anti-join that didn't actually work.My original email was going to be:Adding the single table expression to the ON clause is shorthand for writing:SELECT t.* FROM template AS t LEFT JOIN (SELECT * FROM template_staging WHERE template_staging.name IS NULL) AS ts ON t.name = ts.name;David J.", "msg_date": "Wed, 20 Oct 2021 07:29:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres views cannot use both union and join/where" } ]
[ { "msg_contents": "Why does the planner not use an index when a view is involved?\n\n1) A description of what you are trying to achieve and what results you\nexpect.\nWhy don't plans use indexes when views are involved? A similar query on\nthe underlying table leverages the appropriate index.\n\n== Point 1. The following query leverages the pipl10n_object_name_1 index.\ntc=# EXPLAIN ANALYZE select substr(pval_0, 49, 128) from pl10n_object_name\nwhere substr(pval_0, 49, 128) = 'xxxx';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on pl10n_object_name (cost=4.48..32.15 rows=7 width=32)\n(actual time=0.040..0.040 rows=0 loops=1)\n Recheck Cond: (substr((pval_0)::text, 49, 128) = 'xxxx'::text)\n -> *Bitmap Index Scan on pipl10n_object_name_1* (cost=0.00..4.48\nrows=7 width=0) (actual time=0.039..*0.039* rows=0 loops=1)\n Index Cond: (substr((pval_0)::text, 49, 128) = 'xxxx'::text)\n Planning Time: 0.153 ms\n Execution Time: 0.056 ms\n(6 rows)\n\n== Point 2. The equivalent query on the VL10N_OBJECT_NAME view executes a\nSeq Scan on the underlying pl10n_object_name. Why?\ntc=# EXPLAIN ANALYZE select pval_0 from VL10N_OBJECT_NAME where pval_0 =\n'xxxx';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on vl10n_object_name (cost=0.00..323818.92 rows=5228\nwidth=32) (actual time=2851.799..2851.801 rows=0 loops=1)\n Filter: (vl10n_object_name.pval_0 = 'xxxx'::text)\n Rows Removed by Filter: 1043308\n -> Append (cost=0.00..310749.58 rows=1045547 width=208) (actual\ntime=0.046..2777.167 rows=1043308 loops=1)\n -> *Seq Scan on pl10n_object_name* (cost=0.00..252460.06\nrows=870536 width=175) (actual time=0.046..*2389.282* rows=870645 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..44356.42\nrows=175011 width=175) (actual time=0.019..313.357 rows=172663 loops=1)\n -> Seq Scan on pworkspaceobject (cost=0.00..42168.79\nrows=175011 width=134) (actual time=0.016..291.661 rows=172663 loops=1)\n Filter: ((pobject_name IS NOT NULL) AND (vla_764_24 =\n0))\n Rows Removed by Filter: 870629\n Planning Time: 0.204 ms\n Execution Time: 2851.830 ms\n(11 rows)\n\n== Additional Information ==\n== View definition:\ntc=# \\d+ VL10N_OBJECT_NAME\n View \"public.vl10n_object_name\"\n Column | Type | Collation | Nullable | Default |\nStorage | Description\n-------------+-----------------------+-----------+----------+---------+----------+-------------\n puid | character varying(15) | | | |\nextended |\n locale | text | | | |\nextended |\n preference | text | | | |\nextended |\n status | text | | | |\nextended |\n sequence_no | numeric | | | |\nmain |\n pval_0 | text | | | |\nextended |\nView definition:\n SELECT pl10n_object_name.puid,\n substr(pl10n_object_name.pval_0::text, 1, 5) AS locale,\n substr(pl10n_object_name.pval_0::text, 7, 1) AS preference,\n substr(pl10n_object_name.pval_0::text, 9, 1) AS status,\n tc_to_number(substr(pl10n_object_name.pval_0::text, 11, 4)::character\nvarying) AS sequence_no,\n substr(pl10n_object_name.pval_0::text, 49, 128) AS pval_0\n FROM pl10n_object_name\nUNION ALL\n SELECT pworkspaceobject.puid,\n 'NONE'::text AS locale,\n 'M'::text AS preference,\n 'M'::text AS status,\n 0 AS sequence_no,\n pworkspaceobject.pobject_name AS pval_0\n FROM pworkspaceobject\n WHERE pworkspaceobject.pobject_name IS NOT NULL AND\npworkspaceobject.vla_764_24 = 
0;\n\n== Table definition:\ntc=# \\d+ pl10n_object_name\n Table \"public.pl10n_object_name\"\n Column | Type | Collation | Nullable | Default | Storage\n | Stats target | Description\n--------+------------------------+-----------+----------+---------+----------+--------------+-------------\n puid | character varying(15) | | not null | |\nextended | |\n pseq | integer | | not null | | plain\n | |\n pval_0 | character varying(176) | | | |\nextended | |\nIndexes:\n \"pipl10n_object_name\" PRIMARY KEY, btree (puid, pseq) DEFERRABLE\nINITIALLY DEFERRED\n \"pipl10n_object_name_0\" btree (pval_0)\n \"pipl10n_object_name_1\" btree (substr(pval_0::text, 49, 128))\n \"pipl10n_object_name_2\" btree (upper(substr(pval_0::text, 49, 128)))\n \"pipl10n_object_name_3\" btree (substr(pval_0::text, 1, 5))\n \"pipl10n_object_name_4\" btree (upper(substr(pval_0::text, 1, 5)))\n \"pipl10n_object_name_t1\" btree (substr(pval_0::text, 1, 5),\nsubstr(pval_0::text, 9, 1))\nAccess method: heap\nOptions: autovacuum_analyze_scale_factor=0.0,\nautovacuum_analyze_threshold=1000\n\n** Any help would be greatly appreciated. **\n\n2) The EXACT PostgreSQL version you are running\ntc=# SELECT version();\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-39), 64-bit\n(1 row)\n\n3) How you installed PostgreSQL\nUnsure... IT department installed it.\n\n4) Changes made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.\ntc=# SELECT name, current_setting(name), source\ntc-# FROM pg_settings\ntc-# WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n------------------------------+--------------------+----------------------\n application_name | psql | client\n checkpoint_completion_target | 0.75 | configuration file\n checkpoint_timeout | 30min | configuration file\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n dynamic_shared_memory_type | posix | configuration file\n effective_cache_size | 48GB | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_destination | stderr | configuration file\n log_directory | log | configuration file\n log_filename | postgresql-%a.log | configuration file\n log_line_prefix | %m [%p] | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 0 | configuration file\n log_timezone | America/Detroit | configuration file\n log_truncate_on_rotation | on | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 512MB | configuration file\n max_connections | 200 | configuration file\n max_locks_per_transaction | 6400 | configuration file\n max_stack_depth | 2MB | environment variable\n max_wal_size | 1GB | configuration file\n min_wal_size | 80MB | configuration file\n port | 5432 | configuration file\n shared_buffers | 16GB | configuration file\n temp_buffers | 256MB | configuration file\n TimeZone | America/Detroit | configuration file\n wal_buffers | 2MB | configuration file\n work_mem | 128MB | configuration file\n(34 rows)\n\n5) Operating system and version\n# uname -a\nLinux vcl6006 
3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT\n2021 x86_64 x86_64 x86_64 GNU/Linux\n\n6) For questions about any kind of error:\nNo error.\n\n7) What program you're using to connect to PostgreSQL\npsql\n\n8) Is there anything remotely unusual in the PostgreSQL server logs?\nNothing obvious\n\nWhy does the planner not use an index when a view is involved? 1) A description of what you are trying to achieve and what results you expect.Why don't plans use indexes when views are involved?  A similar query on the underlying table leverages the appropriate index. == Point 1. The following query leverages the pipl10n_object_name_1 index. tc=# EXPLAIN ANALYZE select substr(pval_0, 49, 128) from pl10n_object_name where substr(pval_0, 49, 128) = 'xxxx';                                                          QUERY PLAN------------------------------------------------------------------------------------------------------------------------------ Bitmap Heap Scan on pl10n_object_name  (cost=4.48..32.15 rows=7 width=32) (actual time=0.040..0.040 rows=0 loops=1)   Recheck Cond: (substr((pval_0)::text, 49, 128) = 'xxxx'::text)   ->  Bitmap Index Scan on pipl10n_object_name_1  (cost=0.00..4.48 rows=7 width=0) (actual time=0.039..0.039 rows=0 loops=1)         Index Cond: (substr((pval_0)::text, 49, 128) = 'xxxx'::text) Planning Time: 0.153 ms Execution Time: 0.056 ms(6 rows)== Point 2. The equivalent query on the VL10N_OBJECT_NAME view executes a Seq Scan on the underlying pl10n_object_name. Why?tc=# EXPLAIN ANALYZE select pval_0 from VL10N_OBJECT_NAME where pval_0 = 'xxxx';                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Subquery Scan on vl10n_object_name  (cost=0.00..323818.92 rows=5228 width=32) (actual time=2851.799..2851.801 rows=0 loops=1)   Filter: (vl10n_object_name.pval_0 = 'xxxx'::text)   Rows Removed by Filter: 1043308   ->  Append  (cost=0.00..310749.58 rows=1045547 width=208) (actual time=0.046..2777.167 rows=1043308 loops=1)         ->  Seq Scan on pl10n_object_name  (cost=0.00..252460.06 rows=870536 width=175) (actual time=0.046..2389.282 rows=870645 loops=1)         ->  Subquery Scan on \"*SELECT* 2\"  (cost=0.00..44356.42 rows=175011 width=175) (actual time=0.019..313.357 rows=172663 loops=1)               ->  Seq Scan on pworkspaceobject  (cost=0.00..42168.79 rows=175011 width=134) (actual time=0.016..291.661 rows=172663 loops=1)                     Filter: ((pobject_name IS NOT NULL) AND (vla_764_24 = 0))                     Rows Removed by Filter: 870629 Planning Time: 0.204 ms Execution Time: 2851.830 ms(11 rows)== Additional Information ==== View definition:tc=# \\d+ VL10N_OBJECT_NAME                                View \"public.vl10n_object_name\"   Column    |         Type          | Collation | Nullable | Default | Storage  | Description-------------+-----------------------+-----------+----------+---------+----------+------------- puid        | character varying(15) |           |          |         | extended | locale      | text                  |           |          |         | extended | preference  | text                  |           |          |         | extended | status      | text                  |           |          |         | extended | sequence_no | numeric               |           |          |         | main     | pval_0      | text                  |           |          |         | 
extended |View definition: SELECT pl10n_object_name.puid,    substr(pl10n_object_name.pval_0::text, 1, 5) AS locale,    substr(pl10n_object_name.pval_0::text, 7, 1) AS preference,    substr(pl10n_object_name.pval_0::text, 9, 1) AS status,    tc_to_number(substr(pl10n_object_name.pval_0::text, 11, 4)::character varying) AS sequence_no,    substr(pl10n_object_name.pval_0::text, 49, 128) AS pval_0   FROM pl10n_object_nameUNION ALL SELECT pworkspaceobject.puid,    'NONE'::text AS locale,    'M'::text AS preference,    'M'::text AS status,    0 AS sequence_no,    pworkspaceobject.pobject_name AS pval_0   FROM pworkspaceobject  WHERE pworkspaceobject.pobject_name IS NOT NULL AND pworkspaceobject.vla_764_24 = 0;== Table definition:tc=# \\d+ pl10n_object_name                                     Table \"public.pl10n_object_name\" Column |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description--------+------------------------+-----------+----------+---------+----------+--------------+------------- puid   | character varying(15)  |           | not null |         | extended |              | pseq   | integer                |           | not null |         | plain    |              | pval_0 | character varying(176) |           |          |         | extended |              |Indexes:    \"pipl10n_object_name\" PRIMARY KEY, btree (puid, pseq) DEFERRABLE INITIALLY DEFERRED    \"pipl10n_object_name_0\" btree (pval_0)    \"pipl10n_object_name_1\" btree (substr(pval_0::text, 49, 128))    \"pipl10n_object_name_2\" btree (upper(substr(pval_0::text, 49, 128)))    \"pipl10n_object_name_3\" btree (substr(pval_0::text, 1, 5))    \"pipl10n_object_name_4\" btree (upper(substr(pval_0::text, 1, 5)))    \"pipl10n_object_name_t1\" btree (substr(pval_0::text, 1, 5), substr(pval_0::text, 9, 1))Access method: heapOptions: autovacuum_analyze_scale_factor=0.0, autovacuum_analyze_threshold=1000** Any help would be greatly appreciated. **2) The EXACT PostgreSQL version you are runningtc=# SELECT version();                                                 version--------------------------------------------------------------------------------------------------------- PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit(1 row)3) How you installed PostgreSQLUnsure... IT department installed it. 
4) Changes made to the settings in the postgresql.conf file: see Server Configuration for a quick way to list them all.tc=# SELECT name, current_setting(name), sourcetc-#   FROM pg_settingstc-#   WHERE source NOT IN ('default', 'override');             name             |  current_setting   |        source------------------------------+--------------------+---------------------- application_name             | psql               | client checkpoint_completion_target | 0.75               | configuration file checkpoint_timeout           | 30min              | configuration file client_encoding              | UTF8               | client DateStyle                    | ISO, MDY           | configuration file default_text_search_config   | pg_catalog.english | configuration file dynamic_shared_memory_type   | posix              | configuration file effective_cache_size         | 48GB               | configuration file lc_messages                  | en_US.UTF-8        | configuration file lc_monetary                  | en_US.UTF-8        | configuration file lc_numeric                   | en_US.UTF-8        | configuration file lc_time                      | en_US.UTF-8        | configuration file listen_addresses             | *                  | configuration file log_destination              | stderr             | configuration file log_directory                | log                | configuration file log_filename                 | postgresql-%a.log  | configuration file log_line_prefix              | %m [%p]            | configuration file log_rotation_age             | 1d                 | configuration file log_rotation_size            | 0                  | configuration file log_timezone                 | America/Detroit    | configuration file log_truncate_on_rotation     | on                 | configuration file logging_collector            | on                 | configuration file maintenance_work_mem         | 512MB              | configuration file max_connections              | 200                | configuration file max_locks_per_transaction    | 6400               | configuration file max_stack_depth              | 2MB                | environment variable max_wal_size                 | 1GB                | configuration file min_wal_size                 | 80MB               | configuration file port                         | 5432               | configuration file shared_buffers               | 16GB               | configuration file temp_buffers                 | 256MB              | configuration file TimeZone                     | America/Detroit    | configuration file wal_buffers                  | 2MB                | configuration file work_mem                     | 128MB              | configuration file(34 rows)5) Operating system and version# uname -aLinux vcl6006 3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux6) For questions about any kind of error:No error.7) What program you're using to connect to PostgreSQLpsql8) Is there anything remotely unusual in the PostgreSQL server logs?Nothing obvious", "msg_date": "Wed, 27 Oct 2021 21:31:00 -0500", "msg_from": "Tim Slechta <[email protected]>", "msg_from_op": true, "msg_subject": "Views don't seem to use indexes?" }, { "msg_contents": "On Wed, Oct 27, 2021 at 7:31 PM Tim Slechta <[email protected]> wrote:\n\n>\n> == Point 2. The equivalent query on the VL10N_OBJECT_NAME view executes a\n> Seq Scan on the underlying pl10n_object_name. 
Why?\n> tc=# EXPLAIN ANALYZE select pval_0 from VL10N_OBJECT_NAME where pval_0 =\n> 'xxxx';\n>\n\nJust to confirm and simplify, the question boils down to:\n\nWhy does:\n\nSELECT * FROM view WHERE view.view_column = ?;\n\nAnd view is:\n\nCREATE VIEW AS\nSELECT ..., view_column\nFROM tbl1\nUNION ALL\nSELECT ..., view_column\nFROM tbl2\n;\n\nWhere tbl1 has an index on view_column AND tbl2 does not have an index on\nview_column\n\nResult in a plan where both tb11 and tbl2 are sequentially scanned and the\nfilter applied to the unioned result\n\nInstead of a plan where the index lookup rows of tbl1 are supplied to the\nunion and only tbl2 is sequentially scanned\n\n?\n\nI don't have an answer to offer up here. I'm pretty sure we do handle\npredicate pushdown into UNION ALL generally. I'm unclear exactly what the\nequivalently rewritten query would be in this case - but demonstrating that\na query that doesn't use union all applies the index while the direct\naccess of the view doesn't isn't sufficient to narrow down the problem. It\ncan still either be the rule processing or the union processing that is\nseeming to make a wrong plan choice.\n\nThat isn't meant to discount the possibility that this case is actually\ncorrect - or at least the best we do presently for one or more technical\nreasons that I'm not familiar with...\n\nDavid J.\n\nOn Wed, Oct 27, 2021 at 7:31 PM Tim Slechta <[email protected]> wrote:== Point 2. The equivalent query on the VL10N_OBJECT_NAME view executes a Seq Scan on the underlying pl10n_object_name. Why?tc=# EXPLAIN ANALYZE select pval_0 from VL10N_OBJECT_NAME where pval_0 = 'xxxx';Just to confirm and simplify, the question boils down to:Why does:SELECT * FROM view WHERE view.view_column = ?;And view is:CREATE VIEW ASSELECT ..., view_columnFROM tbl1UNION ALLSELECT ..., view_columnFROM tbl2;Where tbl1 has an index on view_column AND tbl2 does not have an index on view_columnResult in a plan where both tb11 and tbl2 are sequentially scanned and the filter applied to the unioned resultInstead of a plan where the index lookup rows of tbl1 are supplied to the union and only tbl2 is sequentially scanned?I don't have an answer to offer up here.  I'm pretty sure we do handle predicate pushdown into UNION ALL generally.  I'm unclear exactly what the equivalently rewritten query would be in this case - but demonstrating that a query that doesn't use union all applies the index while the direct access of the view doesn't isn't sufficient to narrow down the problem.  It can still either be the rule processing or the union processing that is seeming to make a wrong plan choice.  That isn't meant to discount the possibility that this case is actually correct - or at least the best we do presently for one or more technical reasons that I'm not familiar with...David J.", "msg_date": "Wed, 27 Oct 2021 21:54:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views don't seem to use indexes?" }, { "msg_contents": "Tim Slechta <[email protected]> writes:\n> Why does the planner not use an index when a view is involved?\n\nIt's not about a \"view\" ... you'd get the same results if you wrote\nout the UNION ALL construct in-line as a sub-select.\n\nI think you may be shooting yourself in the foot by not making sure that\nthe UNION ALL arms match in data type. You did not show us the definition\nof pworkspaceobject, but if pworkspaceobject.pobject_name isn't of type\ntext (maybe it's varchar?) 
then the type mismatch would prevent pushing\ndown a condition on that column. The source code says:\n\n * For subqueries using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can\n * push quals into each component query, but the quals can only reference\n * subquery columns that suffer no type coercions in the set operation.\n * Otherwise there are possible semantic gotchas.\n\nI'm too tired to reconstruct an example of the semantic issues...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Oct 2021 02:15:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views don't seem to use indexes?" }, { "msg_contents": "Tom, David,\n\nThank you for the time and information.\n\nI lost my system this morning, so I need to re-establish a system and do\nsome additional homework.\n\nThanks again.\n\n-Tim\n\nBTW: here is the definition of the pworkspaceobject table.\n\ntc=# \\d+ pworkspaceobject\n\n Table \"public.pworkspaceobject\"\n Column | Type | Collation |\nNullable | Default | Storage | Stats target | Description\n-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n puid | character varying(15) | |\nnot null | | extended | |\n pobject_name | character varying(128) | |\nnot null | | extended | |\n pobject_desc | character varying(240) | |\n | | extended | |\n pobject_type | character varying(32) | |\nnot null | | extended | |\n pobject_application | character varying(32) | |\nnot null | | extended | |\n vla_764_7 | integer | |\nnot null | 0 | plain | |\n pip_classification | character varying(128) | |\n | | extended | |\n vla_764_10 | integer | |\nnot null | 0 | plain | |\n pgov_classification | character varying(128) | |\n | | extended | |\n vla_764_12 | integer | |\nnot null | 0 | plain | |\n pfnd0revisionid | character varying(32) | |\n | | extended | |\n vla_764_18 | integer | |\nnot null | 0 | plain | |\n vla_764_20 | integer | |\nnot null | 0 | plain | |\n rwso_threadu | character varying(15) | |\n | | extended | |\n rwso_threadc | integer | |\n | | plain | |\n prevision_limit | integer | |\nnot null | | plain | |\n prevision_number | integer | |\nnot null | | plain | |\n rowning_organizationu | character varying(15) | |\n | | extended | |\n rowning_organizationc | integer | |\n | | plain | |\n pactive_seq | integer | |\n | | plain | |\n rowning_projectu | character varying(15) | |\n | | extended | |\n rowning_projectc | integer | |\n | | plain | |\n pfnd0maturity | integer | |\n | | plain | |\n pdate_released | timestamp without time zone | |\n | | plain | |\n pfnd0isrevisiondiscontinued | smallint | |\n | | plain | |\n pfnd0inprocess | smallint | |\n | | plain | |\n aoid | character varying(15) | |\nnot null | NULL::character varying | extended | |\n arev_category | integer | |\nnot null | 48 | plain | |\n aspace_uid | character varying(15) | |\n | NULL::character varying | extended | |\n avalid_from | timestamp without time zone | |\nnot null | to_timestamp('1900/01/02 00:00:00'::text, 'YYYY/MM/DD\nHH24:MI:SS'::text)::timestamp without time zone | plain | |\n avalid_to | timestamp without time zone | |\n | | plain | |\n vla_764_26 | integer | |\nnot null | 0 | plain | |\n pawp0issuspect | smallint | |\n | | plain | |\n vla_764_24 | integer | |\nnot null | 0 | plain | |\n vla_764_23 | integer | |\nnot null | 0 | plain | |\nIndexes:\n \"pipworkspaceobject\" PRIMARY KEY, btree (puid)\n \"pipworkspaceobject_0\" btree (aoid)\n \"pipworkspaceobject_1\" btree 
(upper(pobject_type::text))\n \"pipworkspaceobject_2\" btree (upper(pobject_name::text))\n \"pipworkspaceobject_3\" btree (pobject_type)\n \"pipworkspaceobject_4\" btree (pobject_name)\n \"pipworkspaceobject_5\" btree (rwso_threadu)\n \"pipworkspaceobject_6\" btree (rowning_projectu)\nAccess method: heap\nOptions: autovacuum_analyze_scale_factor=0.0,\nautovacuum_analyze_threshold=500\n\n\nOn Thu, Oct 28, 2021 at 1:15 AM Tom Lane <[email protected]> wrote:\n\n> Tim Slechta <[email protected]> writes:\n> > Why does the planner not use an index when a view is involved?\n>\n> It's not about a \"view\" ... you'd get the same results if you wrote\n> out the UNION ALL construct in-line as a sub-select.\n>\n> I think you may be shooting yourself in the foot by not making sure that\n> the UNION ALL arms match in data type. You did not show us the definition\n> of pworkspaceobject, but if pworkspaceobject.pobject_name isn't of type\n> text (maybe it's varchar?) then the type mismatch would prevent pushing\n> down a condition on that column. The source code says:\n>\n> * For subqueries using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can\n> * push quals into each component query, but the quals can only reference\n> * subquery columns that suffer no type coercions in the set operation.\n> * Otherwise there are possible semantic gotchas.\n>\n> I'm too tired to reconstruct an example of the semantic issues...\n>\n> regards, tom lane\n>\n\nTom, David, Thank you for the time and information. I lost my system this morning, so I need to re-establish a system and do some additional homework. Thanks again. -Tim BTW:  here is the definition of the pworkspaceobject table. tc=# \\d+ pworkspaceobject                                                                                                 Table \"public.pworkspaceobject\"           Column            |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+------------- puid                        | character varying(15)       |           | not null |   | extended |              | pobject_name                | character varying(128)      |           | not null |   | extended |              | pobject_desc                | character varying(240)      |           |          |   | extended |              | pobject_type                | character varying(32)       |           | not null |   | extended |              | pobject_application         | character varying(32)       |           | not null |   | extended |              | vla_764_7                   | integer                     |           | not null | 0 | plain    |              | pip_classification          | character varying(128)      |           |          |   | extended |              | vla_764_10                  | integer                     |           | not null | 0 | plain    |              | pgov_classification         | character varying(128)      |           |          |   | extended |              | vla_764_12                  | integer                     |           | not null | 0 | plain    |              | pfnd0revisionid             | character varying(32)       |           |          |   | extended |              | vla_764_18                  | integer                     |           | not null | 0 | plain    |              | vla_764_20                  | integer                     |           | not null | 0 | plain    |  
            | rwso_threadu                | character varying(15)       |           |          |   | extended |              | rwso_threadc                | integer                     |           |          |   | plain    |              | prevision_limit             | integer                     |           | not null |   | plain    |              | prevision_number            | integer                     |           | not null |   | plain    |              | rowning_organizationu       | character varying(15)       |           |          |   | extended |              | rowning_organizationc       | integer                     |           |          |   | plain    |              | pactive_seq                 | integer                     |           |          |   | plain    |              | rowning_projectu            | character varying(15)       |           |          |   | extended |              | rowning_projectc            | integer                     |           |          |   | plain    |              | pfnd0maturity               | integer                     |           |          |   | plain    |              | pdate_released              | timestamp without time zone |           |          |   | plain    |              | pfnd0isrevisiondiscontinued | smallint                    |           |          |   | plain    |              | pfnd0inprocess              | smallint                    |           |          |   | plain    |              | aoid                        | character varying(15)       |           | not null | NULL::character varying | extended |              | arev_category               | integer                     |           | not null | 48 | plain    |              | aspace_uid                  | character varying(15)       |           |          | NULL::character varying | extended |              | avalid_from                 | timestamp without time zone |           | not null | to_timestamp('1900/01/02 00:00:00'::text, 'YYYY/MM/DD HH24:MI:SS'::text)::timestamp without time zone | plain    |              | avalid_to                   | timestamp without time zone |           |          |   | plain    |              | vla_764_26                  | integer                     |           | not null | 0 | plain    |              | pawp0issuspect              | smallint                    |           |          |   | plain    |              | vla_764_24                  | integer                     |           | not null | 0 | plain    |              | vla_764_23                  | integer                     |           | not null | 0 | plain    |              |Indexes:    \"pipworkspaceobject\" PRIMARY KEY, btree (puid)    \"pipworkspaceobject_0\" btree (aoid)    \"pipworkspaceobject_1\" btree (upper(pobject_type::text))    \"pipworkspaceobject_2\" btree (upper(pobject_name::text))    \"pipworkspaceobject_3\" btree (pobject_type)    \"pipworkspaceobject_4\" btree (pobject_name)    \"pipworkspaceobject_5\" btree (rwso_threadu)    \"pipworkspaceobject_6\" btree (rowning_projectu)Access method: heapOptions: autovacuum_analyze_scale_factor=0.0, autovacuum_analyze_threshold=500On Thu, Oct 28, 2021 at 1:15 AM Tom Lane <[email protected]> wrote:Tim Slechta <[email protected]> writes:\r\n> Why does the planner not use an index when a view is involved?\n\r\nIt's not about a \"view\" ... 
you'd get the same results if you wrote\r\nout the UNION ALL construct in-line as a sub-select.\n\r\nI think you may be shooting yourself in the foot by not making sure that\r\nthe UNION ALL arms match in data type.  You did not show us the definition\r\nof pworkspaceobject, but if pworkspaceobject.pobject_name isn't of type\r\ntext (maybe it's varchar?) then the type mismatch would prevent pushing\r\ndown a condition on that column.  The source code says:\n\r\n * For subqueries using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can\r\n * push quals into each component query, but the quals can only reference\r\n * subquery columns that suffer no type coercions in the set operation.\r\n * Otherwise there are possible semantic gotchas.\n\r\nI'm too tired to reconstruct an example of the semantic issues...\n\r\n                        regards, tom lane", "msg_date": "Thu, 28 Oct 2021 10:00:31 -0500", "msg_from": "Tim Slechta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Views don't seem to use indexes?" } ]
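A sketch of the kind of change Tom Lane's reply points at, assuming the definitions quoted earlier in the thread (pval_0 is varchar(176) in pl10n_object_name, pobject_name is varchar(128) in pworkspaceobject, while the first UNION ALL arm produces text). Casting the second arm to text keeps both arms the same type for the pval_0 output column, which is the precondition for pushing the pval_0 = '...' qual into each arm; whether the expression index pipl10n_object_name_1 is then actually used should be confirmed with EXPLAIN on the real data.

CREATE OR REPLACE VIEW vl10n_object_name AS
SELECT p.puid,
       substr(p.pval_0::text, 1, 5)    AS locale,
       substr(p.pval_0::text, 7, 1)    AS preference,
       substr(p.pval_0::text, 9, 1)    AS status,
       tc_to_number(substr(p.pval_0::text, 11, 4)::character varying) AS sequence_no,
       substr(p.pval_0::text, 49, 128) AS pval_0
  FROM pl10n_object_name p
UNION ALL
SELECT w.puid,
       'NONE'::text,
       'M'::text,
       'M'::text,
       0::numeric,            -- match the first arm's type for this column too
       w.pobject_name::text   -- explicit cast: no coercion left at the set operation
  FROM pworkspaceobject w
 WHERE w.pobject_name IS NOT NULL
   AND w.vla_764_24 = 0;

-- Check whether the filter is now applied inside each arm instead of on the
-- appended result:
EXPLAIN (ANALYZE, BUFFERS)
SELECT pval_0 FROM vl10n_object_name WHERE pval_0 = 'xxxx';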
[ { "msg_contents": "Hi\nPostgreSQLv14 source code build with GCCv11.2 and Clangv12(without JIT)\nwith optimisation flags like O3 and tested with HammerDB\nObserved TPC-H , GCC performance better than Clang(without JIT). The\nperformance difference ~22% and also noticed the assembly code difference\nGCC vs Clang( e.g. GCC inlined functionality compared to Clang).\n\nEnvironment details:\n————————-\nOS :RHEL8.4\nBare metal : Apple/AMD EPYC/IBM\nTest(TPC-H) Benchmark Environment:HammerDB\n\nIs the performance difference mainly because of below points ?\n1 data over flow and calculations like int128(int128.c) and C arithmetic\noperations(functions include in float.h e.g float4_mul)\n\nAnd please suggest is any another functionality or code points need to\ncheck on the performance difference\n\nHi PostgreSQLv14 source code build  with GCCv11.2 and Clangv12(without JIT) with  optimisation flags like O3 and tested with HammerDBObserved TPC-H , GCC performance better than Clang(without JIT). The performance difference ~22% and also noticed the assembly code difference GCC vs Clang( e.g. GCC inlined functionality compared to Clang). Environment details:————————-OS :RHEL8.4Bare metal : Apple/AMD EPYC/IBMTest(TPC-H) Benchmark Environment:HammerDBIs the performance difference mainly because of below points ?1 data over flow and calculations like int128(int128.c) and C arithmetic operations(functions include in float.h e.g float4_mul)   And please suggest is any another functionality or code points need to check on the performance difference", "msg_date": "Tue, 2 Nov 2021 22:43:22 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "> .. optimisation flags like O3\n> And please suggest ... to check on the performance difference\n\nThe Phoronix has been tested the PostgreSQL 13 with Clang 12 + GCC 11.1 On\nXeon Ice Lake\n* \"The CFLAGS/CXXFLAGS set throughout testing were \"-O3 -march=native\n-flto\" *\n* as would be common for HPC systems when building performance sensitive\ncode.\"*\n*and the results:*\n\nhttps://www.phoronix.com/scan.php?page=article&item=clang12-gcc11-icelake&num=4\n( see ~ bottom of the page )\nonly the Postgres ( GCC 11 vs. LLVM Clang 12 Benchmarks On Xeon Ice Lake )\n\nhttps://openbenchmarking.org/result/2105299-IB-COMPILERT91&sgm=1&ppt=D&sor&sgm=1&ppt=D&oss=Postgresql\n maybe you can replicate the Phoronix results ( but this is only gcc11.1\n! )\n \"Compare your own system(s) to this result file with the Phoronix Test\nSuite\n by running the command: phoronix-test-suite benchmark\n2105299-IB-COMPILERT91\"\n\nRegards.\n Imre\n\narjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 2.,\nK, 18:13):\n\n> Hi\n> PostgreSQLv14 source code build with GCCv11.2 and Clangv12(without JIT)\n> with optimisation flags like O3 and tested with HammerDB\n> Observed TPC-H , GCC performance better than Clang(without JIT). The\n> performance difference ~22% and also noticed the assembly code difference\n> GCC vs Clang( e.g. 
GCC inlined functionality compared to Clang).\n>\n> Environment details:\n> ————————-\n> OS :RHEL8.4\n> Bare metal : Apple/AMD EPYC/IBM\n> Test(TPC-H) Benchmark Environment:HammerDB\n>\n> Is the performance difference mainly because of below points ?\n> 1 data over flow and calculations like int128(int128.c) and C arithmetic\n> operations(functions include in float.h e.g float4_mul)\n>\n> And please suggest is any another functionality or code points need to\n> check on the performance difference\n>\n\n> .. optimisation flags like O3> And please suggest ...  to check on the performance difference The Phoronix has been tested the PostgreSQL 13 with Clang 12 + GCC 11.1 On Xeon Ice Lake  \"The CFLAGS/CXXFLAGS set throughout testing were \"-O3 -march=native -flto\"   as would be common for HPC systems when building performance sensitive code.\"and the results:  https://www.phoronix.com/scan.php?page=article&item=clang12-gcc11-icelake&num=4 ( see ~ bottom of the page )only the Postgres ( GCC 11 vs. LLVM Clang 12 Benchmarks On Xeon Ice Lake )   https://openbenchmarking.org/result/2105299-IB-COMPILERT91&sgm=1&ppt=D&sor&sgm=1&ppt=D&oss=Postgresql  maybe you can replicate the Phoronix results  ( but this is only gcc11.1 ! )  \"Compare your own system(s) to this result file with the Phoronix Test Suite     by running the command: phoronix-test-suite benchmark 2105299-IB-COMPILERT91\"Regards.  Imrearjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 2., K, 18:13):Hi PostgreSQLv14 source code build  with GCCv11.2 and Clangv12(without JIT) with  optimisation flags like O3 and tested with HammerDBObserved TPC-H , GCC performance better than Clang(without JIT). The performance difference ~22% and also noticed the assembly code difference GCC vs Clang( e.g. GCC inlined functionality compared to Clang). Environment details:————————-OS :RHEL8.4Bare metal : Apple/AMD EPYC/IBMTest(TPC-H) Benchmark Environment:HammerDBIs the performance difference mainly because of below points ?1 data over flow and calculations like int128(int128.c) and C arithmetic operations(functions include in float.h e.g float4_mul)   And please suggest is any another functionality or code points need to check on the performance difference", "msg_date": "Tue, 2 Nov 2021 20:29:23 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "Hi\n\n@imre : Thank you sharing the links on “ Phoronix has been tested the\nPostgreSQL 13”.\nI compared my test results with Phoronix test suit” . It has too\ndeviations(may be hardware environment and PostgreSQL version)\nI think PostgreSQLv13 may have issues with Auto vacuum and currently I’m\nusing with PostgreSQLv14\n\n\nIn my environment GCC performs better than Clang(llvm) the reason would be\n“int128”performance better in GCC compared to Clang.\n1.Clang(__int128) require 4 additional functions like “__divti3 , __modti3,\n__udivti3, __umodti3” and these additional not required in GCC . So it may\nlead performance drop in Clang.\n2.__int128 aligned 16 bytes boundaries (MAXALIGN) supported in GCC and may\nthis in not support in Clang\n\n@postgresql- performance: kindly let know your view on those two points.\n\n\n\n\n\nOn Wednesday, November 3, 2021, Imre Samu <[email protected]> wrote:\n\n> > .. optimisation flags like O3\n> > And please suggest ... 
to check on the performance difference\n>\n> The Phoronix has been tested the PostgreSQL 13 with Clang 12 + GCC 11.1 On\n> Xeon Ice Lake\n> * \"The CFLAGS/CXXFLAGS set throughout testing were \"-O3 -march=native\n> -flto\" *\n> * as would be common for HPC systems when building performance sensitive\n> code.\"*\n> *and the results:*\n> https://www.phoronix.com/scan.php?page=article&item=clang12-\n> gcc11-icelake&num=4 ( see ~ bottom of the page )\n> only the Postgres ( GCC 11 vs. LLVM Clang 12 Benchmarks On Xeon Ice Lake )\n> https://openbenchmarking.org/result/2105299-IB-COMPILERT91&\n> sgm=1&ppt=D&sor&sgm=1&ppt=D&oss=Postgresql\n> maybe you can replicate the Phoronix results ( but this is only gcc11.1\n> ! )\n> \"Compare your own system(s) to this result file with the Phoronix Test\n> Suite\n> by running the command: phoronix-test-suite benchmark\n> 2105299-IB-COMPILERT91\"\n>\n> Regards.\n> Imre\n>\n> arjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 2.,\n> K, 18:13):\n>\n>> Hi\n>> PostgreSQLv14 source code build with GCCv11.2 and Clangv12(without JIT)\n>> with optimisation flags like O3 and tested with HammerDB\n>> Observed TPC-H , GCC performance better than Clang(without JIT). The\n>> performance difference ~22% and also noticed the assembly code difference\n>> GCC vs Clang( e.g. GCC inlined functionality compared to Clang).\n>>\n>> Environment details:\n>> ————————-\n>> OS :RHEL8.4\n>> Bare metal : Apple/AMD EPYC/IBM\n>> Test(TPC-H) Benchmark Environment:HammerDB\n>>\n>> Is the performance difference mainly because of below points ?\n>> 1 data over flow and calculations like int128(int128.c) and C arithmetic\n>> operations(functions include in float.h e.g float4_mul)\n>>\n>> And please suggest is any another functionality or code points need to\n>> check on the performance difference\n>>\n>\n\nHi @imre : Thank you sharing the links on “ Phoronix has been tested the PostgreSQL 13”.I compared my test results with Phoronix test suit” . It has too deviations(may be hardware environment and PostgreSQL version) I think PostgreSQLv13 may have issues with Auto vacuum and currently I’m using with PostgreSQLv14 In my environment GCC performs better than Clang(llvm) the reason would  be “int128”performance better in GCC compared to Clang.1.Clang(__int128) require 4 additional functions like “__divti3 , __modti3, __udivti3, __umodti3” and these additional not required in GCC . So it may lead performance drop in Clang.2.__int128 aligned 16 bytes boundaries (MAXALIGN) supported in GCC and may this in not support in Clang@postgresql- performance: kindly let know your view on those two points.On Wednesday, November 3, 2021, Imre Samu <[email protected]> wrote:> .. optimisation flags like O3> And please suggest ...  to check on the performance difference The Phoronix has been tested the PostgreSQL 13 with Clang 12 + GCC 11.1 On Xeon Ice Lake  \"The CFLAGS/CXXFLAGS set throughout testing were \"-O3 -march=native -flto\"   as would be common for HPC systems when building performance sensitive code.\"and the results:  https://www.phoronix.com/scan.php?page=article&item=clang12-gcc11-icelake&num=4 ( see ~ bottom of the page )only the Postgres ( GCC 11 vs. LLVM Clang 12 Benchmarks On Xeon Ice Lake )   https://openbenchmarking.org/result/2105299-IB-COMPILERT91&sgm=1&ppt=D&sor&sgm=1&ppt=D&oss=Postgresql  maybe you can replicate the Phoronix results  ( but this is only gcc11.1 ! 
)  \"Compare your own system(s) to this result file with the Phoronix Test Suite     by running the command: phoronix-test-suite benchmark 2105299-IB-COMPILERT91\"Regards.  Imrearjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 2., K, 18:13):Hi PostgreSQLv14 source code build  with GCCv11.2 and Clangv12(without JIT) with  optimisation flags like O3 and tested with HammerDBObserved TPC-H , GCC performance better than Clang(without JIT). The performance difference ~22% and also noticed the assembly code difference GCC vs Clang( e.g. GCC inlined functionality compared to Clang). Environment details:————————-OS :RHEL8.4Bare metal : Apple/AMD EPYC/IBMTest(TPC-H) Benchmark Environment:HammerDBIs the performance difference mainly because of below points ?1 data over flow and calculations like int128(int128.c) and C arithmetic operations(functions include in float.h e.g float4_mul)   And please suggest is any another functionality or code points need to check on the performance difference", "msg_date": "Fri, 5 Nov 2021 17:37:40 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "Hi,\n\nIMO this thread provides so little information it's almost impossible to \nanswer the question. There's almost no information about the hardware, \nscale of the test, configuration of the Postgres instance, the exact \nbuild flags, differences in generated asm code, etc.\n\nI find it hard to believe merely switching from clang to gcc yields 22% \nspeedup - that's way higher than any differences we've seen in the past.\n\nIn my experience, the speedup is unlikely to be \"across the board\". \nThere will be a handful of affected queries, while most remaining \nqueries will be about the same. In that case you need to focus on those \nqueries, see if the plans are the same, do some profiling, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 5 Nov 2021 18:29:37 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "Yes, currently focusing affects queries as well.\nIn meanwhile on analysis(hardware level) and sample examples noticed\n1. GCC performance better than Clang on int128 .\n2. Clang performance better than GCC on long long\n the reference example\nhttps://stackoverflow.com/questions/63029428/why-is-int128-t-faster-than-long-long-on-x86-64-gcc\n\n3.GCC enabled with “ fexcess-precision=standard” (precision cast for\nfloating point ).\n\nIs these 3 points can make performance difference GCC vs Clang in\nPostgreSQLv14 in Apple/AMD/()environment(intel environment need to check).\nIn these environment int128 enabled wrt PostgreSQLv14.\n\nOn Friday, November 5, 2021, Tomas Vondra <[email protected]>\nwrote:\n\n> Hi,\n>\n> IMO this thread provides so little information it's almost impossible to\n> answer the question. There's almost no information about the hardware,\n> scale of the test, configuration of the Postgres instance, the exact build\n> flags, differences in generated asm code, etc.\n>\n> I find it hard to believe merely switching from clang to gcc yields 22%\n> speedup - that's way higher than any differences we've seen in the past.\n>\n> In my experience, the speedup is unlikely to be \"across the board\". 
There\n> will be a handful of affected queries, while most remaining queries will be\n> about the same. In that case you need to focus on those queries, see if the\n> plans are the same, do some profiling, etc.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nYes, currently focusing affects queries as well.In meanwhile on analysis(hardware level) and sample examples noticed1. GCC performance  better than Clang on int128 . 2. Clang performance better than GCC on long long  the reference example https://stackoverflow.com/questions/63029428/why-is-int128-t-faster-than-long-long-on-x86-64-gcc3.GCC enabled with “ fexcess-precision=standard” (precision cast for floating point ).Is these 3 points can make performance  difference GCC vs Clang in PostgreSQLv14 in Apple/AMD/()environment(intel environment need to check). In these environment int128 enabled wrt PostgreSQLv14.On Friday, November 5, 2021, Tomas Vondra <[email protected]> wrote:Hi,\n\nIMO this thread provides so little information it's almost impossible to answer the question. There's almost no information about the hardware, scale of the test, configuration of the Postgres instance, the exact build flags, differences in generated asm code, etc.\n\nI find it hard to believe merely switching from clang to gcc yields 22% speedup - that's way higher than any differences we've seen in the past.\n\nIn my experience, the speedup is unlikely to be \"across the board\". There will be a handful of affected queries, while most remaining queries will be about the same. In that case you need to focus on those queries, see if the plans are the same, do some profiling, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 16 Nov 2021 15:40:24 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "> GCC vs Clang\n\nrelated:\nAs I see - with LLVM/Clang 14.0 ( X86_64 -O3 ) ~12% performance increase\nexpected with the new optimisation ( probably adapted from gcc )\n- https://twitter.com/djtodoro/status/1466808507240386560\n-\nhttps://www.phoronix.com/scan.php?page=news_item&px=LLVM-Clang-14-Hoist-Load\n\nregards,\n Imre\n\n\n\narjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 16.,\nK, 11:10):\n\n> Yes, currently focusing affects queries as well.\n> In meanwhile on analysis(hardware level) and sample examples noticed\n> 1. GCC performance better than Clang on int128 .\n> 2. Clang performance better than GCC on long long\n> the reference example\n> https://stackoverflow.com/questions/63029428/why-is-int128-t-faster-than-long-long-on-x86-64-gcc\n>\n> 3.GCC enabled with “ fexcess-precision=standard” (precision cast for\n> floating point ).\n>\n> Is these 3 points can make performance difference GCC vs Clang in\n> PostgreSQLv14 in Apple/AMD/()environment(intel environment need to check).\n> In these environment int128 enabled wrt PostgreSQLv14.\n>\n> On Friday, November 5, 2021, Tomas Vondra <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> IMO this thread provides so little information it's almost impossible to\n>> answer the question. 
There's almost no information about the hardware,\n>> scale of the test, configuration of the Postgres instance, the exact build\n>> flags, differences in generated asm code, etc.\n>>\n>> I find it hard to believe merely switching from clang to gcc yields 22%\n>> speedup - that's way higher than any differences we've seen in the past.\n>>\n>> In my experience, the speedup is unlikely to be \"across the board\". There\n>> will be a handful of affected queries, while most remaining queries will be\n>> about the same. In that case you need to focus on those queries, see if the\n>> plans are the same, do some profiling, etc.\n>>\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n\n> GCC vs Clang related: As I see - with LLVM/Clang 14.0 ( X86_64 -O3 )   ~12% performance increase expected with the new optimisation ( probably adapted from gcc  )  - https://twitter.com/djtodoro/status/1466808507240386560- https://www.phoronix.com/scan.php?page=news_item&px=LLVM-Clang-14-Hoist-Loadregards, Imrearjun shetty <[email protected]> ezt írta (időpont: 2021. nov. 16., K, 11:10):Yes, currently focusing affects queries as well.In meanwhile on analysis(hardware level) and sample examples noticed1. GCC performance  better than Clang on int128 . 2. Clang performance better than GCC on long long  the reference example https://stackoverflow.com/questions/63029428/why-is-int128-t-faster-than-long-long-on-x86-64-gcc3.GCC enabled with “ fexcess-precision=standard” (precision cast for floating point ).Is these 3 points can make performance  difference GCC vs Clang in PostgreSQLv14 in Apple/AMD/()environment(intel environment need to check). In these environment int128 enabled wrt PostgreSQLv14.On Friday, November 5, 2021, Tomas Vondra <[email protected]> wrote:Hi,\n\nIMO this thread provides so little information it's almost impossible to answer the question. There's almost no information about the hardware, scale of the test, configuration of the Postgres instance, the exact build flags, differences in generated asm code, etc.\n\nI find it hard to believe merely switching from clang to gcc yields 22% speedup - that's way higher than any differences we've seen in the past.\n\nIn my experience, the speedup is unlikely to be \"across the board\". There will be a handful of affected queries, while most remaining queries will be about the same. 
In that case you need to focus on those queries, see if the plans are the same, do some profiling, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 10 Dec 2021 19:21:26 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQLv14 TPC-H performance GCC vs Clang" }, { "msg_contents": "Hi All,\n\nI checked with LLVM/CLang 14.0 on arch x86-64-O3 in the Mac/AMD EPYC\nenvironment , but I see GCC performs better than Clang14.\nClang14-https://github.com/llvm/llvm-project(main branch and pull or\ncommitID:3f3fe4a5cfa1797..)\n[image: image.png]\npre analysis GCC vs Clang\n (1) GCC more inlined functionality compared to Clang in PostgreSQL\n (2) in few functions GCC are not inlined but Clang consider inline\n postgresqlv14/src/include/utlis/float.h: float8_mul(),float8_div\n(arithmetic functions).v\n postgresqlv14/src/backend/adt/geo_ops.c : point_xxx().\n(3) GCC performs better than clang on datatype Int128(need to cross check\non instruction level/assembly code on Hardware).\n(4) as point(2) without inline(remove inline in source code ) on those\nfunctions in file's float.h and geo_ops.c and observed performance\nimprovement 6% compared to within inline in Clang.\n\nregards,\nArjun\n\n\nOn Fri, Dec 10, 2021 at 11:51 PM Imre Samu <[email protected]> wrote:\n\n> > GCC vs Clang\n>\n> related:\n> As I see - with LLVM/Clang 14.0 ( X86_64 -O3 ) ~12% performance increase\n> expected with the new optimisation ( probably adapted from gcc )\n> - https://twitter.com/djtodoro/status/1466808507240386560\n> -\n> https://www.phoronix.com/scan.php?page=news_item&px=LLVM-Clang-14-Hoist-Load\n>\n> regards,\n> Imre\n>\n>\n>\n> arjun shetty <[email protected]> ezt írta (időpont: 2021. nov.\n> 16., K, 11:10):\n>\n>> Yes, currently focusing affects queries as well.\n>> In meanwhile on analysis(hardware level) and sample examples noticed\n>> 1. GCC performance better than Clang on int128 .\n>> 2. Clang performance better than GCC on long long\n>> the reference example\n>> https://stackoverflow.com/questions/63029428/why-is-int128-t-faster-than-long-long-on-x86-64-gcc\n>>\n>> 3.GCC enabled with “ fexcess-precision=standard” (precision cast for\n>> floating point ).\n>>\n>> Is these 3 points can make performance difference GCC vs Clang in\n>> PostgreSQLv14 in Apple/AMD/()environment(intel environment need to check).\n>> In these environment int128 enabled wrt PostgreSQLv14.\n>>\n>> On Friday, November 5, 2021, Tomas Vondra <[email protected]>\n>> wrote:\n>>\n>>> Hi,\n>>>\n>>> IMO this thread provides so little information it's almost impossible to\n>>> answer the question. There's almost no information about the hardware,\n>>> scale of the test, configuration of the Postgres instance, the exact build\n>>> flags, differences in generated asm code, etc.\n>>>\n>>> I find it hard to believe merely switching from clang to gcc yields 22%\n>>> speedup - that's way higher than any differences we've seen in the past.\n>>>\n>>> In my experience, the speedup is unlikely to be \"across the board\".\n>>> There will be a handful of affected queries, while most remaining queries\n>>> will be about the same. 
In that case you need to focus on those queries,\n>>> see if the plans are the same, do some profiling, etc.\n>>>\n>>>\n>>> regards\n>>>\n>>> --\n>>> Tomas Vondra\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>> The Enterprise PostgreSQL Company\n>>>\n>>", "msg_date": "Tue, 18 Jan 2022 19:52:18 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQLv14 TPC-H performance GCC vs Clang" } ]
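Following the advice above to narrow the gap down to specific queries and code paths, one way to take HammerDB and TPC-H out of the picture is a small aggregate-only test run against both builds. The table and column names below are made up for illustration; sum() and avg() over bigint go through the 128-bit integer accumulator on builds where int128 is available, and the numeric column exercises the general numeric arithmetic paths, so timing this one statement on the GCC build and the Clang build (with identical plans) isolates the code this thread is speculating about.

-- throwaway test data (names are illustrative only)
CREATE TABLE agg_bench AS
SELECT g::bigint                 AS v_big,
       (g % 1000)::numeric(12,2) AS v_num
  FROM generate_series(1, 5000000) AS g;

VACUUM ANALYZE agg_bench;

-- run on each build and compare the actual times; EXPLAIN confirms the plans
-- match, so any difference comes from the compiled executor/aggregate code
EXPLAIN (ANALYZE, BUFFERS)
SELECT sum(v_big), avg(v_big), sum(v_num), avg(v_num)
  FROM agg_bench;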
[ { "msg_contents": "Hi guys,\n\nPostgreSQL 13.4 for SUSE linux  (should be same behavior on red hat)_________________________________________________________\n We installed the llvmJIT community RPM => postgresql13-llvmjit-13.4-1PGDG.sles15.x86_64.rpm   \n\n=> It requires libLLVM11 : https://opensuse.pkgs.org/15.3/opensuse-oss-aarch64/libLLVM11-11.0.1-1.26.x86_64.rpm.html\n=> Also requires llvm11 : https://opensuse.pkgs.org/15.3/opensuse-oss-aarch64/llvm11-11.0.1-1.26.x86_64.rpm.html\n  For llvm11  we faced a security issue for installation in production because this lib (llvm11) seems to be a kind of compiler.\nSo we installed *only* libLLVM11 (not llvm11) and tried the example test in JIT documentation : https://www.postgresql.org/docs/11/jit-decision.html  \n\nit works...\n => Could you please confirm if this library is really needed: 'llvm11'  or we can  live only with 'libLLVM11' library installed and using PostgreSQL JIT feature \n *without* any penalty on performances ?\n\nThank you very much\n\nBest regards\n\n\nHi guys,PostgreSQL 13.4 for SUSE linux  (should be same behavior on red hat)_________________________________________________________ We installed the llvmJIT community RPM => postgresql13-llvmjit-13.4-1PGDG.sles15.x86_64.rpm   => It requires libLLVM11 : https://opensuse.pkgs.org/15.3/opensuse-oss-aarch64/libLLVM11-11.0.1-1.26.x86_64.rpm.html=> Also requires llvm11 : https://opensuse.pkgs.org/15.3/opensuse-oss-aarch64/llvm11-11.0.1-1.26.x86_64.rpm.html  For llvm11  we faced a security issue for installation in production because this lib (llvm11) seems to be a kind of compiler.So we installed *only* libLLVM11 (not llvm11) and tried the example test in JIT documentation : https://www.postgresql.org/docs/11/jit-decision.html  it works... => Could you please confirm if this library is really needed: 'llvm11'  or we can  live only with 'libLLVM11' library installed and using PostgreSQL JIT feature  *without* any penalty on performances ?Thank you very muchBest regards", "msg_date": "Thu, 4 Nov 2021 20:56:32 +0000 (UTC)", "msg_from": "ANASTACIO Tiago <[email protected]>", "msg_from_op": true, "msg_subject": "JIT llvm11 package" } ]
[ { "msg_contents": "A description of what you are trying to achieve and what results you\nexpect.:\nI have two equivalent queries, one with an EXISTS clause by itself and one\nwrapped in a (SELECT EXISTS) and the \"naked\" exists is much slower.\nI would expect both to be the same speed / have same execution plan.\n\n-- slow\nexplain (analyze, buffers)\nSELECT\n parent.*,\n EXISTS (SELECT * FROM child WHERE child.parent_id=parent.parent_id) AS\nchild_exists\nFROM parent\nORDER BY parent_id LIMIT 10;\n\n-- fast\nexplain (analyze, buffers)\nSELECT\n parent.*,\n (SELECT EXISTS (SELECT * FROM child WHERE\nchild.parent_id=parent.parent_id)) AS child_exists\nFROM parent\nORDER BY parent_id LIMIT 10;\n\n-- slow\nhttps://explain.depesz.com/s/DzcK\n\n-- fast\nhttps://explain.depesz.com/s/EftS\n\nSetup:\nCREATE TABLE parent(parent_id BIGSERIAL PRIMARY KEY, name text);\nCREATE TABLE child(child_id BIGSERIAL PRIMARY KEY, parent_id bigint\nreferences parent(parent_id), name text);\n\n-- random name and sequential primary key for 100 thousand parents.\nINSERT INTO parent\n SELECT\n nextval('parent_parent_id_seq'),\n md5(random()::text)\n FROM generate_series(1, 100000);\n\n-- 1 million children.\n-- set every odd id parent to have children. even id parent gets none.\nINSERT INTO child\n SELECT\n nextval('child_child_id_seq'),\n ((generate_series/2*2) % 100000)::bigint + 1,\n md5(random()::text)\n FROM generate_series(1, 1000000);\n\nCREATE INDEX ON child(parent_id);\nVACUUM ANALYZE parent, child;\n\nBoth queries return the same results - I have taken a md5 of both queries\nwithout the LIMIT clause to confirm.\nTables have been vacuumed and analyzed.\nNo other queries are being executed.\nReproducible with LIMIT 1 or LIMIT 100 or LIMIT 500.\nChanging work_mem makes no difference.\n\n-[ RECORD 1 ]--+---------\nrelname | parent\nrelpages | 935\nreltuples | 100000\nrelallvisible | 935\nrelkind | r\nrelnatts | 2\nrelhassubclass | f\nreloptions |\npg_table_size | 7700480\n-[ RECORD 2 ]--+---------\nrelname | child\nrelpages | 10310\nreltuples | 1e+06\nrelallvisible | 10310\nrelkind | r\nrelnatts | 3\nrelhassubclass | f\nreloptions |\npg_table_size | 84516864\n\nPostgreSQL version number you are running:\nPostgreSQL 13.4 on arm-apple-darwin20.5.0, compiled by Apple clang version\n12.0.5 (clang-1205.0.22.9), 64-bit\n\nHow you installed PostgreSQL:\nUsing homebrew for mac.\nbrew install postgres\n\nChanges made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.\ncheckpoint_completion_target | 0.9 | configuration file\ncheckpoint_timeout | 30min | configuration file\nclient_encoding | UTF8 | client\ncpu_tuple_cost | 0.03 | configuration file\neffective_cache_size | 4GB | configuration file\nlog_directory | log | configuration file\nlog_min_duration_statement | 25ms | configuration file\nlog_statement | none | configuration file\nlog_temp_files | 0 | configuration file\nlog_timezone | America/Anchorage | configuration file\nmaintenance_work_mem | 512MB | configuration file\nmax_parallel_maintenance_workers | 2 | configuration file\nmax_parallel_workers | 4 | configuration file\nmax_parallel_workers_per_gather | 4 | configuration file\nmax_stack_depth | 2MB | environment\nvariable\nmax_wal_size | 10GB | configuration file\nmax_worker_processes | 4 | configuration file\nmin_wal_size | 80MB | configuration file\nrandom_page_cost | 1.1 | configuration file\nshared_buffers | 512MB | configuration file\nshared_preload_libraries | auto_explain | configuration 
file\ntrack_io_timing | on | configuration file\nvacuum_cost_limit | 1000 | configuration file\nwal_buffers | 64MB | configuration file\nwal_compression | on | configuration file\nwork_mem | 128MB | configuration file\n\nOperating system and version:\nmacOS Big Sur 11.2.3\nI have confirmed this to happen on ubuntu linux however.\n\nWhat program you're using to connect to PostgreSQL:\npsql\n\nIs there anything relevant or unusual in the PostgreSQL server logs?:\nno\n\nHardware specs:\nMacBook Air10,1 M1\n8GB RAM\nAPPLE SSD AP0512Q 500.28GB", "msg_date": "Sun, 7 Nov 2021 14:27:14 -0800", "msg_from": "Jimmy A <[email protected]>", "msg_from_op": true, "msg_subject": "EXISTS by itself vs SELECT EXISTS much slower in query." }, { "msg_contents": "postgresql 14, linux\nwith:\nCREATE TABLE child(child_id bigint generated always as identity\nPRIMARY KEY, parent_id bigint references parent(parent_id), name\ntext);\nCREATE TABLE child(child_id bigint generated always as identity\nPRIMARY KEY, parent_id bigint references parent(parent_id), name\ntext);\n---------\nINSERT INTO parent(name)\n SELECT\n md5(random()::text)\n FROM generate_series(1, 100000);\n---------\nINSERT INTO child(parent_id, name)\n SELECT\n ((generate_series/2*2) % 100000)::bigint + 1,\n md5(random()::text)\n FROM generate_series(1, 1000000);\n---------\n CREATE INDEX ON child(parent_id);\nVACUUM ANALYZE parent, child;\n\nslow:\nexplain (analyze, buffers)\nSELECT\n parent.*,\n EXISTS (SELECT * FROM child WHERE\nchild.parent_id=parent.parent_id) AS child_exists\nFROM parent\nORDER BY parent_id LIMIT 10;\nhttps://explain.depesz.com/s/Sx9t\nfast:\nexplain (analyze, buffers)\nSELECT\n parent.*,\n (SELECT EXISTS (SELECT * FROM child WHERE\nchild.parent_id=parent.parent_id)) AS child_exists\nFROM parent\nORDER BY parent_id LIMIT 10;\n\nhttps://explain.depesz.com/s/mIXR\n\n-------\n\nso, this looks strange.\n\nOn 11/8/21, Jimmy A <[email protected]> wrote:\n> A description of what you are trying to achieve and what results you\n> expect.:\n> I have two equivalent queries, one with an EXISTS clause by itself and one\n> wrapped in a (SELECT EXISTS) and the \"naked\" exists is much slower.\n> I would expect both to be the same speed / have same execution plan.\n>\n> -- slow\n> explain (analyze, buffers)\n> SELECT\n> parent.*,\n> EXISTS (SELECT * FROM child WHERE child.parent_id=parent.parent_id) AS\n> child_exists\n> FROM parent\n> ORDER BY parent_id LIMIT 10;\n>\n> -- fast\n> explain (analyze, buffers)\n> SELECT\n> parent.*,\n> (SELECT EXISTS (SELECT * FROM child WHERE\n> child.parent_id=parent.parent_id)) AS child_exists\n> FROM parent\n> ORDER BY parent_id LIMIT 10;\n>\n> -- slow\n> https://explain.depesz.com/s/DzcK\n>\n> -- fast\n> https://explain.depesz.com/s/EftS\n>\n> Setup:\n> CREATE TABLE parent(parent_id BIGSERIAL PRIMARY KEY, name text);\n> CREATE TABLE child(child_id BIGSERIAL PRIMARY KEY, parent_id bigint\n> references parent(parent_id), name text);\n>\n> -- random name and sequential primary key for 100 thousand parents.\n> INSERT INTO parent\n> SELECT\n> nextval('parent_parent_id_seq'),\n> md5(random()::text)\n> FROM generate_series(1, 100000);\n>\n> -- 1 million children.\n> -- set every odd id parent to have children. 
even id parent gets none.\n> INSERT INTO child\n> SELECT\n> nextval('child_child_id_seq'),\n> ((generate_series/2*2) % 100000)::bigint + 1,\n> md5(random()::text)\n> FROM generate_series(1, 1000000);\n>\n> CREATE INDEX ON child(parent_id);\n> VACUUM ANALYZE parent, child;\n>\n> Both queries return the same results - I have taken a md5 of both queries\n> without the LIMIT clause to confirm.\n> Tables have been vacuumed and analyzed.\n> No other queries are being executed.\n> Reproducible with LIMIT 1 or LIMIT 100 or LIMIT 500.\n> Changing work_mem makes no difference.\n>\n> -[ RECORD 1 ]--+---------\n> relname | parent\n> relpages | 935\n> reltuples | 100000\n> relallvisible | 935\n> relkind | r\n> relnatts | 2\n> relhassubclass | f\n> reloptions |\n> pg_table_size | 7700480\n> -[ RECORD 2 ]--+---------\n> relname | child\n> relpages | 10310\n> reltuples | 1e+06\n> relallvisible | 10310\n> relkind | r\n> relnatts | 3\n> relhassubclass | f\n> reloptions |\n> pg_table_size | 84516864\n>\n> PostgreSQL version number you are running:\n> PostgreSQL 13.4 on arm-apple-darwin20.5.0, compiled by Apple clang version\n> 12.0.5 (clang-1205.0.22.9), 64-bit\n>\n> How you installed PostgreSQL:\n> Using homebrew for mac.\n> brew install postgres\n>\n> Changes made to the settings in the postgresql.conf file: see Server\n> Configuration for a quick way to list them all.\n> checkpoint_completion_target | 0.9 | configuration\n> file\n> checkpoint_timeout | 30min | configuration\n> file\n> client_encoding | UTF8 | client\n> cpu_tuple_cost | 0.03 | configuration\n> file\n> effective_cache_size | 4GB | configuration\n> file\n> log_directory | log | configuration\n> file\n> log_min_duration_statement | 25ms | configuration\n> file\n> log_statement | none | configuration\n> file\n> log_temp_files | 0 | configuration\n> file\n> log_timezone | America/Anchorage | configuration\n> file\n> maintenance_work_mem | 512MB | configuration\n> file\n> max_parallel_maintenance_workers | 2 | configuration\n> file\n> max_parallel_workers | 4 | configuration\n> file\n> max_parallel_workers_per_gather | 4 | configuration\n> file\n> max_stack_depth | 2MB | environment\n> variable\n> max_wal_size | 10GB | configuration\n> file\n> max_worker_processes | 4 | configuration\n> file\n> min_wal_size | 80MB | configuration\n> file\n> random_page_cost | 1.1 | configuration\n> file\n> shared_buffers | 512MB | configuration\n> file\n> shared_preload_libraries | auto_explain | configuration\n> file\n> track_io_timing | on | configuration\n> file\n> vacuum_cost_limit | 1000 | configuration\n> file\n> wal_buffers | 64MB | configuration\n> file\n> wal_compression | on | configuration\n> file\n> work_mem | 128MB | configuration\n> file\n>\n> Operating system and version:\n> macOS Big Sur 11.2.3\n> I have confirmed this to happen on ubuntu linux however.\n>\n> What program you're using to connect to PostgreSQL:\n> psql\n>\n> Is there anything relevant or unusual in the PostgreSQL server logs?:\n> no\n>\n> Hardware specs:\n> MacBook Air10,1 M1\n> 8GB RAM\n> APPLE SSD AP0512Q 500.28GB\n>\n\n\n-- \n\nRespectfully,\nBoytsov Vasya\n\n\n", "msg_date": "Mon, 8 Nov 2021 12:22:13 +0300", "msg_from": "Vasya Boytsov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS by itself vs SELECT EXISTS much slower in query." 
}, { "msg_contents": "Jimmy A <[email protected]> writes:\n> I have two equivalent queries, one with an EXISTS clause by itself and one\n> wrapped in a (SELECT EXISTS) and the \"naked\" exists is much slower.\n> I would expect both to be the same speed / have same execution plan.\n\nThat is a dangerous assumption. In general, wrapping (SELECT ...) around\nsomething has a significant performance impact, because it pushes Postgres\nto try to decouple the sub-select's execution from the outer query.\nAs an example,\n\npostgres=# select x, random() from generate_series(1,3) x;\n x | random \n---+---------------------\n 1 | 0.08595356832524814\n 2 | 0.6444265043474005\n 3 | 0.6878852071694332\n(3 rows)\n\npostgres=# select x, (select random()) from generate_series(1,3) x;\n x | random \n---+--------------------\n 1 | 0.7028987801136708\n 2 | 0.7028987801136708\n 3 | 0.7028987801136708\n(3 rows)\n\nThat's not a bug: it's expected that the second query will evaluate\nrandom() only once.\n\nIn the case at hand, I suspect you're getting a \"hashed subplan\"\nin one query and not the other. The depesz.com display doesn't\nreally show that, but EXPLAIN VERBOSE would.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Nov 2021 15:35:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS by itself vs SELECT EXISTS much slower in query." }, { "msg_contents": "I see, I never knew that.\n\nIndeed there is a hashed subplan for the EXISTS by itself. So that explains\nit.\n\nThanks Tom.\n\n\nOn Mon, Nov 8, 2021 at 12:35 PM Tom Lane <[email protected]> wrote:\n\n> Jimmy A <[email protected]> writes:\n> > I have two equivalent queries, one with an EXISTS clause by itself and\n> one\n> > wrapped in a (SELECT EXISTS) and the \"naked\" exists is much slower.\n> > I would expect both to be the same speed / have same execution plan.\n>\n> That is a dangerous assumption. In general, wrapping (SELECT ...) around\n> something has a significant performance impact, because it pushes Postgres\n> to try to decouple the sub-select's execution from the outer query.\n> As an example,\n>\n> postgres=# select x, random() from generate_series(1,3) x;\n> x | random\n> ---+---------------------\n> 1 | 0.08595356832524814\n> 2 | 0.6444265043474005\n> 3 | 0.6878852071694332\n> (3 rows)\n>\n> postgres=# select x, (select random()) from generate_series(1,3) x;\n> x | random\n> ---+--------------------\n> 1 | 0.7028987801136708\n> 2 | 0.7028987801136708\n> 3 | 0.7028987801136708\n> (3 rows)\n>\n> That's not a bug: it's expected that the second query will evaluate\n> random() only once.\n>\n> In the case at hand, I suspect you're getting a \"hashed subplan\"\n> in one query and not the other. The depesz.com display doesn't\n> really show that, but EXPLAIN VERBOSE would.\n>\n> regards, tom lane\n>\n\nI see, I never knew that.Indeed there is a hashed subplan for the EXISTS by itself. So that explains it.Thanks Tom.On Mon, Nov 8, 2021 at 12:35 PM Tom Lane <[email protected]> wrote:Jimmy A <[email protected]> writes:\n> I have two equivalent queries, one with an EXISTS clause by itself and one\n> wrapped in a (SELECT EXISTS) and the \"naked\" exists is much slower.\n> I would expect both to be the same speed / have same execution plan.\n\nThat is a dangerous assumption.  In general, wrapping (SELECT ...) 
around\nsomething has a significant performance impact, because it pushes Postgres\nto try to decouple the sub-select's execution from the outer query.\nAs an example,\n\npostgres=# select x, random() from generate_series(1,3) x;\n x |       random        \n---+---------------------\n 1 | 0.08595356832524814\n 2 |  0.6444265043474005\n 3 |  0.6878852071694332\n(3 rows)\n\npostgres=# select x, (select random()) from generate_series(1,3) x;\n x |       random       \n---+--------------------\n 1 | 0.7028987801136708\n 2 | 0.7028987801136708\n 3 | 0.7028987801136708\n(3 rows)\n\nThat's not a bug: it's expected that the second query will evaluate\nrandom() only once.\n\nIn the case at hand, I suspect you're getting a \"hashed subplan\"\nin one query and not the other.  The depesz.com display doesn't\nreally show that, but EXPLAIN VERBOSE would.\n\n                        regards, tom lane", "msg_date": "Mon, 8 Nov 2021 17:08:50 -0800", "msg_from": "Jimmy A <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXISTS by itself vs SELECT EXISTS much slower in query." } ]
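As the last replies suggest, EXPLAIN VERBOSE labels each SubPlan and marks the ones the planner decided to hash, which makes the difference between the two spellings visible. A minimal sketch against the parent/child tables defined earlier in the thread:

-- Bare EXISTS: the planner may turn this into a hashed SubPlan, which scans
-- and hashes child once even though only 10 parent rows are returned.
EXPLAIN (VERBOSE, ANALYZE, BUFFERS)
SELECT p.*,
       EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.parent_id) AS child_exists
FROM parent p
ORDER BY p.parent_id
LIMIT 10;

-- Wrapped form: the extra (SELECT ...) typically keeps it a plain correlated
-- SubPlan, probed through the child(parent_id) index once per returned row.
EXPLAIN (VERBOSE, ANALYZE, BUFFERS)
SELECT p.*,
       (SELECT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.parent_id)) AS child_exists
FROM parent p
ORDER BY p.parent_id
LIMIT 10;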
[ { "msg_contents": "Hi folks,\n\nwe have found that (probably after VACUUM ANALYZE) one analytical query\nstarts to be slow on our production DB. Moreover, more or less the same\nplan is used on our testing data (how to restore our testing data is\ndescribed at the end of this email), or better to say the same problem\nexists in both (production vs testing data) scenarios: nested loop scanning\nCTE several thousand times is used due to the bad estimates:\nhttps://explain.dalibo.com/plan/sER#plan/node/87 (query is included on\ndalibo).\n\nWe improved the query guided by some intuitive thoughts about how it works\nand get a much faster (120x) plan\nhttps://explain.dalibo.com/plan/M21#plan/node/68. We continued with further\nimprovement/simplification of the query but we get again a similar plan\nhttps://explain.dalibo.com/plan/nLb#plan/node/72 with nested loop and with\noriginal inferior performance. I realized that the success of the\nintermediate plan (M21) is somewhat random as is based on bad estimates as\nwell.\n\nFurther, I tried version forcing to not materialize CTE\nhttps://explain.dalibo.com/plan/0Tp#plan and version using PG default CTE\nmaterialization policy https://explain.dalibo.com/plan/g7M#plan/node/68.\nBoth with no success.\n\nDo you have any idea how to get HASH JOINS in the CTE w_1p_data instead of\nNESTED LOOPs?\n* Add some statistics to not get bad estimates on \"lower-level\" CTEs?\n* Some PG configuration (I am currently only disabling JIT [1])?\n* Rewrite that query into several smaller pieces and use PL/pgSQL to put it\ntogether?\n* In a slightly more complicated function I used temporary tables to be\nable to narrow statistics [2] but I am afraid of system table bloating\nbecause of the huge amount of usage of this function on the production\n(hundred thousand of calls by day when data are to be analyzed).\n\n-------------------------------------------------------------------\nhow to restore data\n===============\nERD of the schema is also available [3].\n\ntesting data as a part of an extension\n---------------------------------------\nIt is possible to install [4] the extension\nhttps://gitlab.com/nfiesta/nfiesta_pg and run regression tests [5] (make\ninstallcheck-all). This will create database contrib_regression_fst_1p\n(besides other DBs) and populate this DB with the testing data. The\nregression test fst_1p_data is in fact testing functionality/code, which I\nam experimenting with.\n\nusing DB dump (without extension)\n----------------------------------------------\nIt is also possible to create mentioned testing DB by simply downloading DB\ndumps from the link\nhttps://drive.google.com/drive/folders/1OVJEISpfuvbxPQG1ArDmSQxZByNZN0xG?usp=sharing\nfollowed by creating DB with postgis extension and restoring dumps:\n* perf_test.sql (format plain) to be used with psql \\i\n* perf_test.dump to be used with pg_restore...\n\nThank you for possible suggestions, Jiří.\n\n[1] https://gitlab.com/nfiesta/nfiesta_pg/-/blob/master/.gitlab-ci.yml#L10\n[2]\nhttps://gitlab.com/nfiesta/nfiesta_pg/-/blob/master/functions/extschema/fn_2p_data.sql#L79\n[3] https://gitlab.com/nfiesta/nfiesta_pg/-/wikis/Data-Storage#v25x.\n[4] https://gitlab.com/nfiesta/nfiesta_pg/-/wikis/Installation\n[5] https://gitlab.com/nfiesta/nfiesta_pg/-/jobs/1762550188\n\nHi folks,we have found that (probably after VACUUM ANALYZE) one analytical query starts to be slow on our production DB. 
Moreover, more or less the same plan is used on our testing data (how to restore our testing data is described at the end of this email), or better to say the same problem exists in both (production vs testing data) scenarios: nested loop scanning CTE several thousand times is used due to the bad estimates: https://explain.dalibo.com/plan/sER#plan/node/87 (query is included on dalibo).We improved the query guided by some intuitive thoughts about how it works and get a much faster (120x) plan https://explain.dalibo.com/plan/M21#plan/node/68. We continued with further improvement/simplification of the query but we get again a similar plan https://explain.dalibo.com/plan/nLb#plan/node/72 with nested loop and with original inferior performance. I realized that the success of the intermediate plan (M21) is somewhat random as is based on bad estimates as well.Further, I tried version forcing to not materialize CTE https://explain.dalibo.com/plan/0Tp#plan and version using PG default CTE materialization policy https://explain.dalibo.com/plan/g7M#plan/node/68. Both with no success.Do you have any idea how to get HASH JOINS in the CTE w_1p_data instead of NESTED LOOPs? * Add some statistics to not get bad estimates on \"lower-level\" CTEs? * Some PG configuration (I am currently only disabling JIT [1])? * Rewrite that query into several smaller pieces and use PL/pgSQL to put it together? * In a slightly more complicated function I used temporary tables to be able to narrow statistics [2] but I am afraid of system table bloating because of the huge amount of usage of this function on the production (hundred thousand of calls by day when data are to be analyzed).-------------------------------------------------------------------how to restore data===============ERD of the schema is also available [3].testing data as a part of an extension---------------------------------------It is possible to install [4] the extension https://gitlab.com/nfiesta/nfiesta_pg and run regression tests [5] (make installcheck-all). This will create database contrib_regression_fst_1p (besides other DBs) and populate this DB with the testing data. The regression test fst_1p_data is in fact testing functionality/code, which I am experimenting with. using DB dump (without extension)----------------------------------------------It is also possible to create mentioned testing DB by simply downloading DB dumps from the linkhttps://drive.google.com/drive/folders/1OVJEISpfuvbxPQG1ArDmSQxZByNZN0xG?usp=sharingfollowed by creating DB with postgis extension and restoring dumps:* perf_test.sql (format plain) to be used with psql \\i  * perf_test.dump to be used with pg_restore... Thank you for possible suggestions, Jiří.\n[1] https://gitlab.com/nfiesta/nfiesta_pg/-/blob/master/.gitlab-ci.yml#L10\n\n[2] https://gitlab.com/nfiesta/nfiesta_pg/-/blob/master/functions/extschema/fn_2p_data.sql#L79\n\n[3] https://gitlab.com/nfiesta/nfiesta_pg/-/wikis/Data-Storage#v25x.\n\n\n[4] https://gitlab.com/nfiesta/nfiesta_pg/-/wikis/Installation\n\n[5] https://gitlab.com/nfiesta/nfiesta_pg/-/jobs/1762550188", "msg_date": "Thu, 11 Nov 2021 20:20:57 +0100", "msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "performance of analytical query" }, { "msg_contents": "On Thu, Nov 11, 2021 at 08:20:57PM +0100, Jiří Fejfar wrote:\n> Hi folks,\n> \n> we have found that (probably after VACUUM ANALYZE) one analytical query\n> starts to be slow on our production DB. 
Moreover, more or less the same\n> plan is used on our testing data (how to restore our testing data is\n> described at the end of this email), or better to say the same problem\n> exists in both (production vs testing data) scenarios: nested loop scanning\n> CTE several thousand times is used due to the bad estimates:\n> https://explain.dalibo.com/plan/sER#plan/node/87 (query is included on\n> dalibo).\n\n> Do you have any idea how to get HASH JOINS in the CTE w_1p_data instead of\n> NESTED LOOPs?\n> * Add some statistics to not get bad estimates on \"lower-level\" CTEs?\n\nDo you know why the estimates are bad ?\n\nIndex Scan using t_map_plot_cell__cell_gid__idx on cm_plot2cell_mapping cm_plot2cell_mapping (cost=0.29..18.59 rows=381 width=12) (actual time=0.015..2.373 rows=3,898 loops=1)\n Index Cond: (cm_plot2cell_mapping.estimation_cell = f_a_cell.estimation_cell)\n Buffers: shared hit=110\n\nI don't know, but is the estimate for this portion of the plan improved by doing:\n| ALTER TABLE f_a_cell ALTER estimation_cell SET STATISTICS 500; ANALYZE f_a_cell;\n\n> * In a slightly more complicated function I used temporary tables to be\n> able to narrow statistics [2] but I am afraid of system table bloating\n> because of the huge amount of usage of this function on the production\n> (hundred thousand of calls by day when data are to be analyzed).\n\nI would try this for sure - I think hundreds of calls per day would be no\nproblem. If you're concerned, you could add manual calls to do (for example)\nVACUUM pg_attribute; after dropping the temp tables.\n\nBTW, we disable nested loops for the our analytic report queries. I have never\nbeen able to avoid pathological plans any other way.\n\n\n", "msg_date": "Thu, 11 Nov 2021 20:41:51 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of analytical query" }, { "msg_contents": "On Thu, Nov 11, 2021 at 7:42 PM Justin Pryzby <[email protected]> wrote:\n\n> BTW, we disable nested loops for the our analytic report queries. I have\n> never\n> been able to avoid pathological plans any other way.\n>\n\nCurious, do you see any problems from that? Are there certain nodes that\nreally are best suited to a nested loop like a lateral subquery?\n\nOn Thu, Nov 11, 2021 at 7:42 PM Justin Pryzby <[email protected]> wrote:BTW, we disable nested loops for the our analytic report queries.  I have never\nbeen able to avoid pathological plans any other way.Curious, do you see any problems from that? Are there certain nodes that really are best suited to a nested loop like a lateral subquery?", "msg_date": "Fri, 12 Nov 2021 10:55:53 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of analytical query" }, { "msg_contents": "On Fri, Nov 12, 2021 at 10:55:53AM -0700, Michael Lewis wrote:\n> On Thu, Nov 11, 2021 at 7:42 PM Justin Pryzby <[email protected]> wrote:\n> \n> > BTW, we disable nested loops for the our analytic report queries. I have\n> > never\n> > been able to avoid pathological plans any other way.\n> \n> Curious, do you see any problems from that? Are there certain nodes that\n> really are best suited to a nested loop like a lateral subquery?\n\nWhen I first disabled it years ago, I did it for the entire database, and it\ncaused issues with a more interactive, non-analytic query, on a non-partitioned\ntable.\n\nSo my second attempt was to disable nested loops only during report queries,\nand I have not looked back. 
For our report queries on partitioned tables, the\noverhead of hashing a handful of rows is of no significance. Any query that\nfinishes in 1sec would be exceptionally fast.\n\nBTW, Jiří's inquiry caused me to look at the source of one of our historic\nmis-estimates, and to realize that it's resolved in pg14:\nhttps://www.postgresql.org/message-id/20211112173102.GI17618%40telsasoft.com\n\nI doubt that's enough to avoid catastrophic nested loop plans in every case\n(especially CTEs on top of CTEs).\n\nThere was a discussion about discouraging nested loop plans that weren't\nprovably \"safe\" (due to returning at most one row, due to a unique index).\nhttps://www.postgresql.org/message-id/CA%2BTgmoYtWXNpj6D92XxUfjT_YFmi2dWq1XXM9EY-CRcr2qmqbg%40mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Nov 2021 12:33:50 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of analytical query" }, { "msg_contents": "On Fri, 12 Nov 2021 at 03:41, Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Nov 11, 2021 at 08:20:57PM +0100, Jiří Fejfar wrote:\n> > Hi folks,\n> >\n> > we have found that (probably after VACUUM ANALYZE) one analytical query\n> > starts to be slow on our production DB. Moreover, more or less the same\n> > plan is used on our testing data (how to restore our testing data is\n> > described at the end of this email), or better to say the same problem\n> > exists in both (production vs testing data) scenarios: nested loop\n> scanning\n> > CTE several thousand times is used due to the bad estimates:\n> > https://explain.dalibo.com/plan/sER#plan/node/87 (query is included on\n> > dalibo).\n>\n> > Do you have any idea how to get HASH JOINS in the CTE w_1p_data instead\n> of\n> > NESTED LOOPs?\n> > * Add some statistics to not get bad estimates on \"lower-level\" CTEs?\n>\n> Do you know why the estimates are bad ?\n>\n> I have no clear insight at the moment... problem is probably with bad\nestimates which chain along the whole tree of nodes... one bad estimate was\nafter aggregation for example... probably, I would need to explore\ncarefully whole execution plan and identify sources of unprecise estimates\nand correct it with additional, more precise statistics when possible,\nright?\n\n\n> Index Scan using t_map_plot_cell__cell_gid__idx on cm_plot2cell_mapping\n> cm_plot2cell_mapping (cost=0.29..18.59 rows=381 width=12) (actual\n> time=0.015..2.373 rows=3,898 loops=1)\n> Index Cond: (cm_plot2cell_mapping.estimation_cell =\n> f_a_cell.estimation_cell)\n> Buffers: shared hit=110\n>\n> I don't know, but is the estimate for this portion of the plan improved by\n> doing:\n> | ALTER TABLE f_a_cell ALTER estimation_cell SET STATISTICS 500; ANALYZE\n> f_a_cell;\n>\n> this does not help to the plan as a whole... but I am thinking about\nincreasing this parameter (size of sample) at the DB level\n\n\n> > * In a slightly more complicated function I used temporary tables to be\n> > able to narrow statistics [2] but I am afraid of system table bloating\n> > because of the huge amount of usage of this function on the production\n> > (hundred thousand of calls by day when data are to be analyzed).\n>\n> I would try this for sure - I think hundreds of calls per day would be no\n> problem. If you're concerned, you could add manual calls to do (for\n> example)\n> VACUUM pg_attribute; after dropping the temp tables.\n>\n> it is hundreds of thousands of calls (10^5) ... 
but yes I got some hints\nhow to avoid bloating (basically use temp tables longer and truncate them\ninstead of deleting when possible)\n\n\n> BTW, we disable nested loops for the our analytic report queries. I have\n> never\n> been able to avoid pathological plans any other way.\n>\n\nI will think about that.\n\nAND\n\nwe further simplified the query and get again one good execution plan\nhttps://explain.dalibo.com/plan/tCk :-)\n\nI have some thoughts now:\n\n* I know that PG is focused on OLTP rather then analytics, but we are happy\nwith it at all and do not wish to use another engine for analytical\nqueries... isn't somewhere some \"PG analytical best practice\" available?\n* It seems that the the form / style of query has great impact on execution\nplans... I was very happy with writing queries as CTEs on top of other CTEs\nor layering VIEWS because you can really focus on the semantics of the\nproblem and I hoped that planner will somehow magically \"compile\" my code\nand get something good enough with respect to performance. Of course, I\nhave to not use materialized CTEs, but it was not possible with NOT\nMATERIALIZED version as performance was bad and I was not able even to get\noriented in exec. plan...\n\nThank you for your ideas! J.\n\nOn Fri, 12 Nov 2021 at 03:41, Justin Pryzby <[email protected]> wrote:On Thu, Nov 11, 2021 at 08:20:57PM +0100, Jiří Fejfar wrote:\n> Hi folks,\n> \n> we have found that (probably after VACUUM ANALYZE) one analytical query\n> starts to be slow on our production DB. Moreover, more or less the same\n> plan is used on our testing data (how to restore our testing data is\n> described at the end of this email), or better to say the same problem\n> exists in both (production vs testing data) scenarios: nested loop scanning\n> CTE several thousand times is used due to the bad estimates:\n> https://explain.dalibo.com/plan/sER#plan/node/87 (query is included on\n> dalibo).\n\n> Do you have any idea how to get HASH JOINS in the CTE w_1p_data instead of\n> NESTED LOOPs?\n> * Add some statistics to not get bad estimates on \"lower-level\" CTEs?\n\nDo you know why the estimates are bad ?\nI have no clear insight at the moment... problem is probably with bad estimates which chain along the whole tree of nodes... one bad estimate\nwas \n\n after aggregation for example... probably, I would need to explore carefully whole execution plan and identify sources of unprecise estimates and correct it with additional, more precise statistics when possible, right? \nIndex Scan using t_map_plot_cell__cell_gid__idx on cm_plot2cell_mapping cm_plot2cell_mapping (cost=0.29..18.59 rows=381 width=12) (actual time=0.015..2.373 rows=3,898 loops=1)\n    Index Cond: (cm_plot2cell_mapping.estimation_cell = f_a_cell.estimation_cell)\n    Buffers: shared hit=110\n\nI don't know, but is the estimate for this portion of the plan improved by doing:\n| ALTER TABLE f_a_cell ALTER estimation_cell SET STATISTICS 500; ANALYZE f_a_cell;\nthis does not help to the plan as a whole... but I am thinking about increasing this parameter (size of sample) at the DB level \n> * In a slightly more complicated function I used temporary tables to be\n> able to narrow statistics [2] but I am afraid of system table bloating\n> because of the huge amount of usage of this function on the production\n> (hundred thousand of calls by day when data are to be analyzed).\n\nI would try this for sure - I think hundreds of calls per day would be no\nproblem.  
If you're concerned, you could add manual calls to do (for example)\nVACUUM pg_attribute; after dropping the temp tables.\nit is hundreds of thousands of calls (10^5) ... but yes I got some hints how to avoid bloating (basically use temp tables longer and truncate them instead of deleting when possible)  \nBTW, we disable nested loops for the our analytic report queries.  I have never\nbeen able to avoid pathological plans any other way.I will think about that.ANDwe further simplified the query and get again one good execution plan https://explain.dalibo.com/plan/tCk :-)I have some thoughts now:* I know that PG is focused on OLTP rather then analytics, but we are happy with it at all and do not wish to use another engine for analytical queries... isn't somewhere some \"PG analytical best practice\" available?* It seems that the the form / style of query has great impact on execution plans... I was very happy with writing queries as CTEs on top of other CTEs or layering VIEWS because you can really focus on the semantics of the problem and I hoped that planner will somehow magically \"compile\" my code and get something good enough with respect to performance. Of course, I have to not use materialized CTEs, but it was not possible with NOT MATERIALIZED version as performance was bad and I was not able even to get oriented in exec. plan...Thank you for your ideas! J.", "msg_date": "Fri, 12 Nov 2021 21:12:38 +0100", "msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of analytical query" }, { "msg_contents": "On Fri, Nov 12, 2021 at 09:12:38PM +0100, Jiří Fejfar wrote:\n> * I know that PG is focused on OLTP rather then analytics, but we are happy\n> with it at all and do not wish to use another engine for analytical\n> queries... isn't somewhere some \"PG analytical best practice\" available?\n\nIt's a good question. Here's some ideas:\n\nI don't think we know what version you're using - that's important, and there's\nother ideas here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou said that your query was slow \"probably after VACUUM ANALYZE\".\nIs it really faster without stats ? You can do this to see if there was really\na better plan \"before\":\n| begin; DELETE FROM pg_statistic WHERE starelid='thetable'::regclass; explain analyze ...; rollback;\n\nTry enable_nestloop=off for analytic queries;\n\nTest whether jit=off helps you or hurts you (you said that it's already disabled);\n\nYou can do other things that can improve estimates, by sacrificing planning time\n(which for an analytic query is a small component of the total query time, and\npays off at runtime if you can get a btter plan):\n - FKs can help with estimates since pg9.6;\n - CREATE STATISTICS;\n - ALTER SET STATISTICS or increase default_statistics_target;\n - increase from_collapse_limit and join_collapse_limit. 
But I don't think it\n will help your current query plan.\n - partitioning data increases planning time, and (if done well) can allow\n improved execution plans;\n\nYou can REINDEX or maybe CLUSTER during \"off hours\" to optimize indexes/tables.\n\nBRIN indexes (WITH autoanalyze) are very successful for us, here.\n\nYou can monitor your slow queries using auto_explain and/or pg_stat_statements.\n\nYou can reduce autovacuum_analyze_threshold to analyze more often.\n\nI'd be interested to hear if others have more suggestions.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 23 Nov 2021 13:42:35 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of analytical query" } ]
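Most of the suggestions collected above are one-line settings or DDL, so trying them is cheap. A minimal sketch of what that looks like; f_a_cell and estimation_cell come from the plans in this thread, while the second column in CREATE STATISTICS is a placeholder rather than a real column from the nfiesta schema:

-- Larger sample for the column whose estimate feeds the bad join choice.
ALTER TABLE f_a_cell ALTER COLUMN estimation_cell SET STATISTICS 500;
ANALYZE f_a_cell;

-- Extended statistics when two columns are correlated (second column is hypothetical).
CREATE STATISTICS f_a_cell_dep_stats (dependencies, ndistinct)
    ON estimation_cell, panel FROM f_a_cell;
ANALYZE f_a_cell;

-- Session-only switch for the reporting query, per the enable_nestloop advice above.
SET enable_nestloop = off;
-- ... run the analytic query ...
RESET enable_nestloop;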
[ { "msg_contents": "I was trying to upgrade my test 13.4 instance on Oracle Linux 8.4 \n(x86_64) to 13.5. I can't upgrade postgresql13-llvm jit because Oracle's \nand Red Hat repositories still don't have the required version of llvm \n(12.1.0.2):\n\nroot@postgres mgogala]# rpm -qa|grep llvm\nllvm-libs-11.0.1-2.0.1.module+el8.4.0+20397+f876858a.x86_64\nllvm7.0-libs-7.0.1-7.el8.x86_64\nllvm-test-11.0.1-2.0.1.module+el8.4.0+20397+f876858a.x86_64\nllvm-11.0.1-2.0.1.module+el8.4.0+20397+f876858a.x86_64\nllvm-static-11.0.1-2.0.1.module+el8.4.0+20397+f876858a.x86_64\nllvm-devel-11.0.1-2.0.1.module+el8.4.0+20397+f876858a.x86_64\n[root@postgres mgogala]#\n\nI am getting the following error:\n\npostgresql13-llvm jit-13.5-1PGDG.rhel8.x86_64 requires \nlibLLVM-12.so()(64bit), but none of the providers can be installed.\n\nThere is a CentOS8-stream version which solves the problem but I cannot \nuse that in the office. I will probably have to wait for another month \nbefore OL8 has everything that I need in its repositories. Now, the \nquestion is what kind of an impact will running without llvm-jit have? \nAccording to the links below, llvm-jit effects are quite spectacular:\n\nhttps://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n\nhttps://www.pgcon.org/2017/schedule/events/1092.en.html\n\nNow, the question is whether anyone on this list can quantify the \ndifference? What would be a better option? To wait for the repos to \nreceive the necessary packages or to run without llvm-jit? In the office \nI have some rather large databases and CREATE INDEX CONCURRENTLY and \nREINDEX CONCURRENTLY  fixes in 13.5 are highly desired but not at the \ncost of the overall application performance.\n\nRegards\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Sun, 14 Nov 2021 23:21:54 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql13-llvm jit-13.5-1PGDG.rhel8.x86_64" }, { "msg_contents": "Hi\n\n\n> There is a CentOS8-stream version which solves the problem but I cannot\n> use that in the office. I will probably have to wait for another month\n> before OL8 has everything that I need in its repositories. Now, the\n> question is what kind of an impact will running without llvm-jit have?\n> According to the links below, llvm-jit effects are quite spectacular:\n>\n> https://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n>\n> https://www.pgcon.org/2017/schedule/events/1092.en.html\n\n\nWhen JIT was used on very large query with a lot of CASE expr, then JIT has\na positive effect about 50%. On usual large queries, the effect of JIT was\nabout 20%. Unfortunately, JIT is sensitive to estimation, and the JIT\nsometimes increases seconds to queries, although without JIT this query is\nexecuted in ms. When you use a query that can be well calculated in\nparallel, then positive effect of JIT is less.\n\nRegards\n\nPavel\n\n\n\n>\n>\n>\n> --\n> Mladen Gogala\n> Database Consultant\n> Tel: (347) 321-1217\n> https://dbwhisperer.wordpress.com\n>\n>\n>\n>\n\nHi\n\nThere is a CentOS8-stream version which solves the problem but I cannot \nuse that in the office. I will probably have to wait for another month \nbefore OL8 has everything that I need in its repositories. Now, the \nquestion is what kind of an impact will running without llvm-jit have? 
\nAccording to the links below, llvm-jit effects are quite spectacular:\n\nhttps://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n\nhttps://www.pgcon.org/2017/schedule/events/1092.en.htmlWhen JIT was used on very large query with a lot of CASE expr, then JIT has a positive effect about 50%. On usual large queries, the effect of JIT was about 20%. Unfortunately, JIT is sensitive to estimation, and the JIT sometimes increases seconds to queries, although without JIT this query is executed in ms. When you use a query that can be well calculated in parallel, then positive effect of JIT is less. RegardsPavel \n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Mon, 15 Nov 2021 06:04:47 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql13-llvm jit-13.5-1PGDG.rhel8.x86_64" }, { "msg_contents": "On 11/15/21 00:04, Pavel Stehule wrote:\n>\n> Hi\n>\n>\n> There is a CentOS8-stream version which solves the problem but I\n> cannot\n> use that in the office. I will probably have to wait for another\n> month\n> before OL8 has everything that I need in its repositories. Now, the\n> question is what kind of an impact will running without llvm-jit\n> have?\n> According to the links below, llvm-jit effects are quite spectacular:\n>\n> https://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n>\n> https://www.pgcon.org/2017/schedule/events/1092.en.html\n>\n>\n> When JIT was used on very large query with a lot of CASE expr, then \n> JIT has a positive effect about 50%. On usual large queries, the \n> effect of JIT was about 20%. Unfortunately, JIT is sensitive to \n> estimation, and the JIT sometimes increases seconds to queries, \n> although without JIT this query is executed in ms. When you use a \n> query that can be well calculated in parallel, then positive effect of \n> JIT is less.\n>\n> Regards\n>\n> Pavel\n\nThanks Pavel, you answered my question. I'll wait with the upgrade.\n\nRegards\n\n-- \n\nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\n\n\nOn 11/15/21 00:04, Pavel Stehule wrote:\n\n\n\n\n\n\nHi\n\n\n\n\n\n\n There is a CentOS8-stream version which solves the problem\n but I cannot \n use that in the office. I will probably have to wait for\n another month \n before OL8 has everything that I need in its repositories.\n Now, the \n question is what kind of an impact will running without\n llvm-jit have? \n According to the links below, llvm-jit effects are quite\n spectacular:\n\nhttps://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n\nhttps://www.pgcon.org/2017/schedule/events/1092.en.html\n\n\nWhen JIT was used on very large query with a lot of\n CASE expr, then JIT has a positive effect about 50%. On\n usual large queries, the effect of JIT was about 20%.\n Unfortunately, JIT is sensitive to estimation, and the JIT\n sometimes increases seconds to queries, although without\n JIT this query is executed in ms. When you use a query\n that can be well calculated in parallel, then positive\n effect of JIT is less. \n\n\n\nRegards\n\n\nPavel\n\n\n\n\n\n\nThanks Pavel, you answered my question. 
I'll wait with the\n upgrade.\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Mon, 15 Nov 2021 08:56:54 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql13-llvm jit-13.5-1PGDG.rhel8.x86_64" }, { "msg_contents": "On Mon, Nov 15, 2021 at 8:57 AM Mladen Gogala <[email protected]>\nwrote:\n\n>\n> On 11/15/21 00:04, Pavel Stehule wrote:\n>\n>\n> Hi\n>\n>\n>> There is a CentOS8-stream version which solves the problem but I cannot\n>> use that in the office. I will probably have to wait for another month\n>> before OL8 has everything that I need in its repositories. Now, the\n>> question is what kind of an impact will running without llvm-jit have?\n>> According to the links below, llvm-jit effects are quite spectacular:\n>>\n>> https://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n>>\n>> https://www.pgcon.org/2017/schedule/events/1092.en.html\n>\n>\n> When JIT was used on very large query with a lot of CASE expr, then JIT\n> has a positive effect about 50%. On usual large queries, the effect of JIT\n> was about 20%. Unfortunately, JIT is sensitive to estimation, and the JIT\n> sometimes increases seconds to queries, although without JIT this query is\n> executed in ms. When you use a query that can be well calculated in\n> parallel, then positive effect of JIT is less.\n>\n> Regards\n>\n> Pavel\n>\n>\n> Thanks Pavel, you answered my question. I'll wait with the upgrade.\n>\n>\n>\nFWIW, there was a lively discussion on the postgresql subreddit over the\nweekend on JIT:\nhttps://www.reddit.com/r/PostgreSQL/comments/qtsif5/cascade_of_doom_jit_and_how_a_postgres_update_led/\n\n(lively for that subreddit anyway)\n\nOn Mon, Nov 15, 2021 at 8:57 AM Mladen Gogala <[email protected]> wrote:\n\n\n\nOn 11/15/21 00:04, Pavel Stehule wrote:\n\n\n\n\n\nHi\n\n\n\n\n\n\n There is a CentOS8-stream version which solves the problem\n but I cannot \n use that in the office. I will probably have to wait for\n another month \n before OL8 has everything that I need in its repositories.\n Now, the \n question is what kind of an impact will running without\n llvm-jit have? \n According to the links below, llvm-jit effects are quite\n spectacular:\n\nhttps://llvm.org/devmtg/2016-09/slides/Melnik-PostgreSQLLLVM.pdf\n\nhttps://www.pgcon.org/2017/schedule/events/1092.en.html\n\n\nWhen JIT was used on very large query with a lot of\n CASE expr, then JIT has a positive effect about 50%. On\n usual large queries, the effect of JIT was about 20%.\n Unfortunately, JIT is sensitive to estimation, and the JIT\n sometimes increases seconds to queries, although without\n JIT this query is executed in ms. When you use a query\n that can be well calculated in parallel, then positive\n effect of JIT is less. \n\n\n\nRegards\n\n\nPavel\n\n\n\n\n\n\nThanks Pavel, you answered my question. I'll wait with the\n upgrade.\nFWIW, there was a lively discussion on the postgresql subreddit over the weekend on JIT:  https://www.reddit.com/r/PostgreSQL/comments/qtsif5/cascade_of_doom_jit_and_how_a_postgres_update_led/ (lively for that subreddit anyway)", "msg_date": "Mon, 15 Nov 2021 09:17:14 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql13-llvm jit-13.5-1PGDG.rhel8.x86_64" } ]
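Since the JIT regressions described in this thread show up mostly on mis-estimated plans, a common middle ground while waiting for matching packages is to keep JIT installed but control when it fires. A minimal sketch of the knobs involved; the values and the database name are illustrative, not recommendations:

-- Per session, for a query known to regress under JIT:
SET jit = off;

-- Raise the bar so only genuinely expensive plans get compiled
-- (jit_above_cost defaults to 100000):
ALTER SYSTEM SET jit_above_cost = 1000000;
ALTER SYSTEM SET jit_optimize_above_cost = 2000000;
SELECT pg_reload_conf();

-- Or switch it off for a single analytics database only
-- (the database name here is hypothetical):
ALTER DATABASE reporting SET jit = off;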
[ { "msg_contents": "Hello all,\r\n\r\nA description of what you are trying to achieve and what results you expect.:\r\n\r\nWe’re executing the following copy to fill a table with approximately 5k records, then repeating for a total of 250k records. Normally, this copy executes < 1 second, with the entire set taking a couple of minutes. The problem is not reproducible on command, but usually within a couple of hours of starting some test runs.\r\n\r\nCOPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS ‘|’\r\n\r\nBut, occasionally we get into a huge performance bottleneck for about 2 hours, where these copy operations are taking 140 seconds or so\r\n\r\nNov 15 22:25:49 sm4u-34 postgres[5799]: [381-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 145326.293 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\n\r\nOne CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:\r\n\r\nSELECT pid,\r\n client_port,\r\n now() - query_start AS \"runtime\",\r\n query_start,\r\n datname,\r\n state,\r\n wait_event_type,\r\n query,\r\n usename\r\nFROM pg_stat_activity\r\nWHERE query !~ 'pg_stat_activity' AND\r\n state != 'idle'\r\nORDER BY state, runtime DESC;\r\n\r\n# pid,client_port,runtime,query_start,datname,state,wait_event_type,query,usename\r\n5799,27136,0 years 0 mons 0 days 0 hours 2 mins 15.534339 secs,2021-11-15 22:23:23.932988 +00:00,tapesystem,active,,\"COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\",Administrator\r\n\r\nI’m logging statements with pgbadger monitoring the logs. There are no apparent auto-vacuum’s running, nor any vacuums, nor anything at all really. Other select queries around that time frame are executing normally.\r\n\r\nWe’re coming from PostgreSQL 9.6 on FreeBSD 11 where we did not see this problem, but have a major release upgrade happening. 
I’m checking to see if this machine was updated or was a fresh install.\r\n\r\nFrom pbgadger:\r\n\r\n[cid:26C88B9B-2CBD-4C70-B07B-3847B2E57C4E]\r\n\r\nPostgreSQL version number you are running:\r\n\r\n PostgreSQL 13.2 on amd64-portbld-freebsd13.0, compiled by FreeBSD clang version 11.0.1 ([email protected]<mailto:[email protected]>:llvm/llvm-project.git llvmorg-11.0.1-0-g43ff75f2c3fe), 64-bit\r\n\r\nHow you installed PostgreSQL:\r\n\r\nPorts tree, compiled from source.\r\n\r\nChanges made to the settings in the postgresql.conf file\r\n\r\n name | current_setting | source\r\n---------------------------------+---------------------------------+--------------------\r\n application_name | psql | client\r\n autovacuum_analyze_scale_factor | 0.05 | configuration file\r\n autovacuum_analyze_threshold | 5000 | configuration file\r\n autovacuum_max_workers | 8 | configuration file\r\n autovacuum_vacuum_cost_delay | 5ms | configuration file\r\n autovacuum_vacuum_scale_factor | 0.1 | configuration file\r\n autovacuum_vacuum_threshold | 5000 | configuration file\r\n checkpoint_completion_target | 0.9 | configuration file\r\n checkpoint_timeout | 30min | configuration file\r\n checkpoint_warning | 5min | configuration file\r\n client_encoding | UTF8 | client\r\n commit_delay | 1000 | configuration file\r\n DateStyle | ISO, MDY | configuration file\r\n default_text_search_config | pg_catalog.english | configuration file\r\n dynamic_shared_memory_type | posix | configuration file\r\n effective_cache_size | 58076MB | configuration file\r\n effective_io_concurrency | 200 | configuration file\r\n full_page_writes | off | configuration file\r\n hot_standby | off | configuration file\r\n lc_messages | C | configuration file\r\n lc_monetary | C | configuration file\r\n lc_numeric | C | configuration file\r\n lc_time | C | configuration file\r\n listen_addresses | * | configuration file\r\n log_autovacuum_min_duration | 1s | configuration file\r\n log_checkpoints | on | configuration file\r\n log_connections | on | configuration file\r\n log_destination | syslog | configuration file\r\n log_disconnections | on | configuration file\r\n log_duration | off | configuration file\r\n log_line_prefix | db=%d,user=%u,app=%a,client=%h | configuration file\r\n log_lock_waits | on | configuration file\r\n log_min_duration_sample | 100ms | configuration file\r\n log_min_duration_statement | 1ms | configuration file\r\n log_statement_sample_rate | 0.01 | configuration file\r\n log_temp_files | 0 | configuration file\r\n log_timezone | UTC | configuration file\r\n maintenance_work_mem | 3927MB | configuration file\r\n max_connections | 250 | configuration file\r\n max_parallel_workers_per_gather | 8 | configuration file\r\n max_replication_slots | 0 | configuration file\r\n max_stack_depth | 32MB | configuration file\r\n max_wal_senders | 0 | configuration file\r\n max_wal_size | 50GB | configuration file\r\n max_worker_processes | 8 | configuration file\r\n random_page_cost | 2 | configuration file\r\n shared_buffers | 21679MB | configuration file\r\n synchronous_commit | off | configuration file\r\n temp_buffers | 1309MB | configuration file\r\n TimeZone | UTC | configuration file\r\n track_activities | on | configuration file\r\n track_counts | on | configuration file\r\n update_process_title | off | configuration file\r\n vacuum_cost_delay | 1ms | configuration file\r\n wal_init_zero | off | configuration file\r\n wal_level | minimal | configuration file\r\n wal_recycle | off | configuration file\r\n wal_skip_threshold | 
20MB | configuration file\r\n wal_sync_method | fsync | configuration file\r\n wal_writer_delay | 500ms | configuration file\r\n wal_writer_flush_after | 10MB | configuration file\r\n work_mem | 1309MB | configuration file\r\n\r\n\r\nOperating system and version:\r\n\r\nFreeBSD sm4u-34 13.0-STABLE FreeBSD 13.0-STABLE #0: Mon Sep 13 10:11:57 MDT 2021\r\n\r\nWhat program you're using to connect to PostgreSQL:\r\n\r\nIn house Java 1.8 program, JDBC 42.2.20\r\n\r\nIs there anything relevant or unusual in the PostgreSQL server logs?:\r\n\r\nNope\r\n\r\nOther info\r\n\r\ntapesystem=# SELECT * FROM pg_config();\r\n name | setting\r\n-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n BINDIR | /usr/local/bin\r\n DOCDIR | /usr/local/share/doc/postgresql\r\n HTMLDIR | /usr/local/share/doc/postgresql\r\n INCLUDEDIR | /usr/local/include\r\n PKGINCLUDEDIR | /usr/local/include/postgresql\r\n INCLUDEDIR-SERVER | /usr/local/include/postgresql/server\r\n LIBDIR | /usr/local/lib\r\n PKGLIBDIR | /usr/local/lib/postgresql\r\n LOCALEDIR | /usr/local/share/locale\r\n MANDIR | /usr/local/man\r\n SHAREDIR | /usr/local/share/postgresql\r\n SYSCONFDIR | /usr/local/etc/postgresql\r\n PGXS | /usr/local/lib/postgresql/pgxs/src/makefiles/pgxs.mk\r\n CONFIGURE | '--with-libraries=/usr/local/lib' '--with-includes=/usr/local/include' '--enable-thread-safety' '--with-icu' '--disable-debug' '--disable-dtrace' '--without-gssapi' '--without-ldap' '--disable-nls' '--without-pam' '--with-openssl' '--with-system-tzdata=/usr/share/zoneinfo' '--without-libxml' '--with-llvm' '--prefix=/usr/local' '--localstatedir=/var' '--mandir=/usr/local/man' '--infodir=/usr/local/share/info/' '--build=amd64-portbld-freebsd13.0' 'build_alias=amd64-portbld-freebsd13.0' 'CC=cc' 'CFLAGS=-O2 -pipe -fstack-protector-strong -fno-strict-aliasing ' 'LDFLAGS= -L/usr/local/lib -lpthread -L/usr/local/lib -fstack-protector-strong ' 'LIBS=' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -pipe -fstack-protector-strong -fno-strict-aliasing ' 'LLVM_CONFIG=/usr/local/bin/llvm-config11' 'CPP=cpp' 'PKG_CONFIG=pkgconf' 'LDFLAGS_SL='\r\n CC | cc\r\n CPPFLAGS | -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include\r\n CFLAGS | -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -pipe -fstack-protector-strong -fno-strict-aliasing\r\n CFLAGS_SL | -fPIC -DPIC\r\n LDFLAGS | -L/usr/local/lib -lpthread -L/usr/local/lib -fstack-protector-strong -L/usr/local/llvm11/lib -L/usr/local/lib -Wl,--as-needed -Wl,-R'/usr/local/lib'\r\n LDFLAGS_EX |\r\n LDFLAGS_SL |\r\n LIBS | 
-lpgcommon -lpgport -lssl -lcrypto -lz -lreadline -lexecinfo -lm\r\n VERSION | PostgreSQL 13.2\r\n\r\n hw.model: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz\r\nhw.machine: amd64\r\nhw.ncpu: 24\r\n\r\nhw.physmem: 137287901184\r\n\r\n2 mirrored ZFS SSD disks. Just for fun, we briefly broke the mirror during one of these slowdowns, nothing changed.\r\n\r\nThese are the system calls made over 30 seconds from Postgres during a slowdown.\r\n\r\ngetrusage 1\r\n access 3\r\n exit 3\r\n fork 3\r\n getrandom 3\r\n pipe2 3\r\n procctl 3\r\n setsid 3\r\n thr_self 3\r\n __sysctl 5\r\n mmap 6\r\n wait4 6\r\n kill 11\r\n select 14\r\n sigreturn 14\r\n rename 18\r\n getpid 21\r\n fsync 27\r\n pwrite 27\r\n openat 28\r\n sigaction 33\r\n write 50\r\n fstat 56\r\n open 56\r\n sigprocmask 98\r\n read 143\r\n getppid 163\r\n kqueue 163\r\n fcntl 175\r\n close 249\r\n sendto 629\r\n kevent 1069\r\n recvfrom 1192\r\n lseek 3604\r\n fstatat 15894\r\n\r\nThanks for any assistance.\r\n\r\nBest,\r\nRobert", "msg_date": "Tue, 16 Nov 2021 04:43:25 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Need help identifying a periodic performance issue." }, { "msg_contents": "On Tue, Nov 16, 2021 at 04:43:25AM +0000, Robert Creager wrote:\n> We’re executing the following copy to fill a table with approximately 5k records, then repeating for a total of 250k records. Normally, this copy executes < 1 second, with the entire set taking a couple of minutes. The problem is not reproducible on command, but usually within a couple of hours of starting some test runs.\n> \n> COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS ‘|’\n> \n> But, occasionally we get into a huge performance bottleneck for about 2 hours, where these copy operations are taking 140 seconds or so\n> \n> Nov 15 22:25:49 sm4u-34 postgres[5799]: [381-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 145326.293 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\n\n> I’m logging statements with pgbadger monitoring the logs. There are no apparent auto-vacuum’s running, nor any vacuums, nor anything at all really. Other select queries around that time frame are executing normally.\n\nWhat about checkpoints ?\n\nWould you show the \"^checkpoint starting\" and \"^checkpoint complete\" logs\nsurrounding a slow COPY ?\n\n> We’re coming from PostgreSQL 9.6 on FreeBSD 11 where we did not see this problem, but have a major release upgrade happening. 
I’m checking to see if this machine was updated or was a fresh install.\n> PostgreSQL 13.2 on amd64-portbld-freebsd13.0, compiled by FreeBSD clang version 11.0.1 ([email protected]<mailto:[email protected]>:llvm/llvm-project.git llvmorg-11.0.1-0-g43ff75f2c3fe), 64-bit\n> \n> Changes made to the settings in the postgresql.conf file\n> checkpoint_timeout | 30min | configuration file\n> log_checkpoints | on | configuration file\n> log_lock_waits | on | configuration file\n...\n> shared_buffers | 21679MB | configuration file\n\n> Operating system and version:\n> FreeBSD sm4u-34 13.0-STABLE FreeBSD 13.0-STABLE #0: Mon Sep 13 10:11:57 MDT 2021\n\n> These are the system calls made over 30 seconds from Postgres during a slowdown.\n...\n> fsync 27\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 15 Nov 2021 23:29:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Tue, Nov 16, 2021 at 5:43 PM Robert Creager <[email protected]> wrote:\n> One CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:\n\nSo PostgreSQL is eating 100% CPU, with no value shown in\nwait_event_type, and small numbers of system calls are counted. In\nthat case, is there an interesting user stack that jumps out with a\nprofiler during the slowdown (or the kernel version, stack())?\n\nsudo dtrace -n 'profile-99 /arg0/ { @[ustack()] = count(); } tick-10s\n{ exit(0); }'\n\n\n", "msg_date": "Tue, 16 Nov 2021 18:50:18 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 15, 2021, at 10:29 PM, Justin Pryzby <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nOn Tue, Nov 16, 2021 at 04:43:25AM +0000, Robert Creager wrote:\r\n> We’re executing the following copy to fill a table with approximately 5k records, then repeating for a total of 250k records. Normally, this copy executes < 1 second, with the entire set taking a couple of minutes. The problem is not reproducible on command, but usually within a couple of hours of starting some test runs.\r\n>\r\n> COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS ‘|’\r\n>\r\n> But, occasionally we get into a huge performance bottleneck for about 2 hours, where these copy operations are taking 140 seconds or so\r\n>\r\n> Nov 15 22:25:49 sm4u-34 postgres[5799]: [381-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1<http://127.0.0.1> LOG: duration: 145326.293 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\n\r\n> I’m logging statements with pgbadger monitoring the logs. There are no apparent auto-vacuum’s running, nor any vacuums, nor anything at all really. Other select queries around that time frame are executing normally.\r\n\r\nWhat about checkpoints ?\r\n\r\nWould you show the \"^checkpoint starting\" and \"^checkpoint complete\" logs\r\nsurrounding a slow COPY ?\r\n\r\nSorry, it was late last night, I meant to include the checkpoint info. I didn’t have enough logs around the one I pointed out above, my tail got aborted by a reboot. 
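\r\nSide note: to help rule checkpoint pressure in or out, it can be useful to snapshot the background writer counters before and after a test run and compare the deltas. A minimal sketch (column names as documented for pg_stat_bgwriter in PostgreSQL 13; the psql invocation itself is only illustrative):\r\n\r\npsql -U Administrator -d tapesystem -c \"SELECT now(), checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend_fsync FROM pg_stat_bgwriter;\"\r\n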
Working on a log server… From an earlier one:\r\n\r\nNov 5 03:56:28 sm4u-34 postgres[4934]: [2679-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 247 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=26.279 s, sync=0.002 s, total=26.323 s; sync files=142, longest=0.001 s, average=0.001 s; distance=592 kB, estimate=279087 kB\r\nNov 5 04:26:03 sm4u-34 postgres[4934]: [2680-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 04:26:14 sm4u-34 postgres[4934]: [2681-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 115 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=11.880 s, sync=0.003 s, total=11.885 s; sync files=75, longest=0.001 s, average=0.001 s; distance=541 kB, estimate=251232 kB\r\nNov 5 04:56:03 sm4u-34 postgres[4934]: [2682-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 04:56:15 sm4u-34 postgres[4934]: [2683-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 103 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=11.011 s, sync=0.002 s, total=11.015 s; sync files=74, longest=0.001 s, average=0.001 s; distance=528 kB, estimate=226162 kB\r\nNov 5 05:15:28 sm4u-34 postgres[59442]: [24-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 1.059 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 05:26:03 sm4u-34 postgres[4934]: [2684-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 05:26:27 sm4u-34 postgres[4934]: [2685-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 226 buffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=24.000 s, sync=0.006 s, total=24.037 s; sync files=122, longest=0.001 s, average=0.001 s; distance=583 kB, estimate=203604 kB\r\nNov 5 05:56:03 sm4u-34 postgres[4934]: [2686-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 05:56:24 sm4u-34 postgres[4934]: [2687-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 199 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=21.212 s, sync=0.004 s, total=21.218 s; sync files=122, longest=0.001 s, average=0.001 s; distance=580 kB, estimate=183302 kB\r\nNov 5 06:26:03 sm4u-34 postgres[4934]: [2688-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 06:26:22 sm4u-34 postgres[4934]: [2689-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 178 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=19.078 s, sync=0.005 s, total=19.084 s; sync files=120, longest=0.001 s, average=0.001 s; distance=563 kB, estimate=165028 kB\r\nNov 5 06:32:27 sm4u-34 postgres[7728]: [213-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 143318.661 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:34:56 sm4u-34 postgres[7728]: [214-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149175.227 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:37:27 sm4u-34 postgres[7728]: [215-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 150440.140 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:39:56 sm4u-34 postgres[7728]: [216-1] 
db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149521.024 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:42:26 sm4u-34 postgres[7728]: [217-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149182.715 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:44:59 sm4u-34 postgres[7728]: [218-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 153734.718 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:47:26 sm4u-34 postgres[7728]: [219-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 146371.043 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:49:59 sm4u-34 postgres[7728]: [220-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 152996.005 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:52:29 sm4u-34 postgres[7728]: [221-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 150094.597 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:55:03 sm4u-34 postgres[7728]: [222-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 154446.475 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 06:56:03 sm4u-34 postgres[4934]: [2690-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 06:57:33 sm4u-34 postgres[7728]: [223-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149823.562 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:00:06 sm4u-34 postgres[7728]: [224-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 152262.349 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:02:37 sm4u-34 postgres[7728]: [225-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 151812.262 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:05:11 sm4u-34 postgres[7728]: [226-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 152992.509 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:07:46 sm4u-34 postgres[7728]: [227-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 155094.565 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:10:19 sm4u-34 postgres[7728]: [228-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 153728.503 ms statement: COPY ds3.blob 
(byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:12:53 sm4u-34 postgres[7728]: [229-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 153031.260 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:15:26 sm4u-34 postgres[7728]: [230-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 153722.550 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:17:49 sm4u-34 postgres[4934]: [2691-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 12310 buffers (0.4%); 0 WAL file(s) added, 10 removed, 0 recycled; write=1305.144 s, sync=0.001 s, total=1305.178 s; sync files=92, longest=0.001 s, average=0.001 s; distance=172759 kB, estimate=172759 kB\r\nNov 5 07:18:00 sm4u-34 postgres[7728]: [231-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 153736.774 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:20:25 sm4u-34 postgres[7728]: [232-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 145263.582 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:22:53 sm4u-34 postgres[7728]: [233-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 147632.451 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:25:22 sm4u-34 postgres[7728]: [234-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149081.218 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:26:03 sm4u-34 postgres[4934]: [2692-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 07:27:51 sm4u-34 postgres[7728]: [235-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 148655.719 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:30:20 sm4u-34 postgres[7728]: [236-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 148677.766 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:32:49 sm4u-34 postgres[7728]: [237-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 149493.666 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:34:15 sm4u-34 postgres[7728]: [238-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 85751.267 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 07:40:50 sm4u-34 postgres[4934]: [2693-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 8356 buffers (0.3%); 0 WAL file(s) added, 3 removed, 0 recycled; write=887.648 s, sync=0.001 s, total=887.660 s; sync files=51, longest=0.001 s, average=0.001 s; distance=47063 
kB, estimate=160189 kB\r\nNov 5 07:56:03 sm4u-34 postgres[4934]: [2694-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 08:12:46 sm4u-34 postgres[4934]: [2695-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 9436 buffers (0.3%); 0 WAL file(s) added, 5 removed, 0 recycled; write=1002.606 s, sync=0.002 s, total=1002.627 s; sync files=87, longest=0.001 s, average=0.001 s; distance=77658 kB, estimate=151936 kB\r\nNov 5 08:26:03 sm4u-34 postgres[4934]: [2696-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 5 08:38:03 sm4u-34 postgres[7728]: [317-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 206.436 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:04 sm4u-34 postgres[7728]: [318-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 222.790 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:04 sm4u-34 postgres[7728]: [319-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 225.146 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:04 sm4u-34 postgres[7728]: [320-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 217.768 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:04 sm4u-34 postgres[7728]: [321-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 221.421 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:05 sm4u-34 postgres[7728]: [322-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 280.209 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:05 sm4u-34 postgres[7728]: [323-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 224.838 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:05 sm4u-34 postgres[7728]: [324-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 232.646 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:06 sm4u-34 postgres[7728]: [325-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 261.916 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:06 sm4u-34 postgres[7728]: [326-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 252.336 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:06 sm4u-34 postgres[7728]: [327-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 231.334 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 
08:38:06 sm4u-34 postgres[7728]: [328-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 221.924 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\nNov 5 08:38:07 sm4u-34 postgres[7728]: [329-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 222.862 ms statement: COPY ds3.blob (byte_offset, checksum, checksum_type, id, length, object_id) FROM STDIN WITH DELIMITER AS '|'\r\n\r\n\r\n\r\n> We’re coming from PostgreSQL 9.6 on FreeBSD 11 where we did not see this problem, but have a major release upgrade happening. I’m checking to see if this machine was updated or was a fresh install.\r\n> PostgreSQL 13.2 on amd64-portbld-freebsd13.0, compiled by FreeBSD clang version 11.0.1 ([email protected]<mailto:[email protected]><mailto:[email protected]>:llvm/llvm-project.git llvmorg-11.0.1-0-g43ff75f2c3fe), 64-bit\r\n>\r\n> Changes made to the settings in the postgresql.conf file\r\n> checkpoint_timeout | 30min | configuration file\r\n> log_checkpoints | on | configuration file\r\n> log_lock_waits | on | configuration file\r\n...\r\n> shared_buffers | 21679MB | configuration file\r\n\r\n> Operating system and version:\r\n> FreeBSD sm4u-34 13.0-STABLE FreeBSD 13.0-STABLE #0: Mon Sep 13 10:11:57 MDT 2021\r\n\r\n> These are the system calls made over 30 seconds from Postgres during a slowdown.\r\n...\r\n> fsync 27\r\n\r\n--\r\nJustin", "msg_date": "Tue, 16 Nov 2021 17:40:09 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 15, 2021, at 10:50 PM, Thomas Munro <[email protected]<mailto:[email protected]>> wrote:\n\nThis message originated outside your organization.\n\nOn Tue, Nov 16, 2021 at 5:43 PM Robert Creager <[email protected]<mailto:[email protected]>> wrote:\nOne CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:\n\nSo PostgreSQL is eating 100% CPU, with no value shown in\nwait_event_type, and small numbers of system calls are counted. In\nthat case, is there an interesting user stack that jumps out with a\nprofiler during the slowdown (or the kernel version, stack())?\n\nsudo dtrace -n 'profile-99 /arg0/ { @[ustack()] = count(); } tick-10s\n{ exit(0); }\n\nI setup a monitoring script to do the dtrace stack sampler you sent once a minute on the top CPU consuming Postgres process. 
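\nIt might also be worth grabbing pg_stat_activity for the same backend alongside each sample, so each stack can be lined up with what the backend says it is waiting on. A rough sketch (columns are from the PostgreSQL 13 pg_stat_activity view; ${PID} refers to the variable in the script below):\n\npsql -d tapesystem -c \"SELECT pid, state, wait_event_type, wait_event, query_start FROM pg_stat_activity WHERE pid = ${PID};\"\n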
Now I wait until we reproduce it.\n\n#!/usr/local/bin/bash\n\nwhile [[ true ]]; do\n DATE=$(date \"+%d-%H:%M:%S\")\n PID=$(top -b | grep postgres | head -n 1 | awk '{print $1}')\n echo \"${DATE} ${PID}\"\n dtrace -n 'profile-99 /pid == '$PID'/ { @[ustack()] = count(); } tick-10s { exit(0); }' > dtrace/dtrace_${DATE}.txt\n sleep 60\ndone\n\nPresuming this is the type of output you are expecting:\n\nCPU ID FUNCTION:NAME\n 0 58709 :tick-10s\n\n\n postgres`AtEOXact_LargeObject+0x11\n postgres`CommitTransaction+0x127\n postgres`CommitTransactionCommand+0xf2\n postgres`PostgresMain+0x1fef\n postgres`process_startup_packet_die\n postgres`0x73055b\n postgres`PostmasterMain+0xf36\n postgres`0x697837\n postgres`_start+0x100\n `0x80095f008\n 1\n\n postgres`printtup+0xf3\n postgres`standard_ExecutorRun+0x136\n postgres`PortalRunSelect+0x10f\n postgres`PortalRun+0x1c8\n postgres`PostgresMain+0x1f94\n postgres`process_startup_packet_die\n postgres`0x73055b\n postgres`PostmasterMain+0xf36\n postgres`0x697837\n postgres`_start+0x100\n `0x80095f008\n 1\n...", "msg_date": "Tue, 16 Nov 2021 22:40:43 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Wed, Nov 17, 2021 at 11:40 AM Robert Creager\n<[email protected]> wrote:\n> Presuming this is the type of output you are expecting:\n>\n> CPU ID FUNCTION:NAME\n> 0 58709 :tick-10s\n>\n>\n> postgres`AtEOXact_LargeObject+0x11\n> postgres`CommitTransaction+0x127\n> postgres`CommitTransactionCommand+0xf2\n> postgres`PostgresMain+0x1fef\n> postgres`process_startup_packet_die\n> postgres`0x73055b\n> postgres`PostmasterMain+0xf36\n> postgres`0x697837\n> postgres`_start+0x100\n> `0x80095f008\n> 1\n\nIt's the right output format, but isn't /pid == '$PID'/ only going to\nmatch one single process called \"postgres\"? Maybe /execname ==\n\"postgres\"/ to catch them all? Hopefully it'll be obvious what's\ngoing on from an outlier stack with a high sample count. Can also be\nuseful to convert the output to flamegraph format if CPU time is\ndistributed over many distinct stacks.", "msg_date": "Wed, 17 Nov 2021 11:51:46 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Wed, Nov 17, 2021 at 11:51 AM Thomas Munro <[email protected]> wrote:\n> It's the right output format, but isn't /pid == '$PID'/ only going to\n> match one single process called \"postgres\"? Maybe /execname ==\n> \"postgres\"/ to catch them all?\n\nOh, duh, it's the top CPU one. Makes sense. Never mind :-)", "msg_date": "Wed, 17 Nov 2021 11:52:53 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." 
}, { "msg_contents": "On Nov 15, 2021, at 10:50 PM, Thomas Munro <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nOn Tue, Nov 16, 2021 at 5:43 PM Robert Creager <[email protected]<mailto:[email protected]>> wrote:\r\nOne CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:\r\n\r\nSo PostgreSQL is eating 100% CPU, with no value shown in\r\nwait_event_type, and small numbers of system calls are counted. In\r\nthat case, is there an interesting user stack that jumps out with a\r\nprofiler during the slowdown (or the kernel version, stack())?\r\n\r\nsudo dtrace -n 'profile-99 /arg0/ { @[ustack()] = count(); } tick-10s\r\n{ exit(0); }\r\n\r\nOk, here is the logs around a dtrace included. I have dtaces every 1m10s, and the two I looked at, the last entry, were the same.\r\n\r\negrep \"( checkpoint |COPY ds3.job_entry|COPY ds3.blob|vacuum)” postgres.log\r\n...\r\nNov 17 06:19:55 sm2u-10 postgres[71885]: [57-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 5843 buffers (0.2%); 1 WAL file(s) added, 2 removed, 0 recycled; write=617.324 s, sync=0.029 s, total=617.42\r\n3 s; sync files=62, longest=0.028 s, average=0.001 s; distance=33368 kB, estimate=213837 kB\r\nNov 17 06:22:07 sm2u-10 postgres[72628]: [29-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 199171.920 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:24:38 sm2u-10 postgres[71885]: [58-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 17 06:25:21 sm2u-10 postgres[72628]: [30-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 193812.868 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:28:39 sm2u-10 postgres[72628]: [31-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 197402.732 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:31:53 sm2u-10 postgres[72628]: [32-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 194173.569 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:35:09 sm2u-10 postgres[72628]: [33-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 195687.516 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:35:46 sm2u-10 postgres[71885]: [59-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 6314 buffers (0.2%); 0 WAL file(s) added, 2 removed, 0 recycled; write=667.531 s, sync=0.260 s, total=667.86\r\n3 s; sync files=54, longest=0.192 s, average=0.005 s; distance=27410 kB, estimate=195194 kB\r\nNov 17 06:38:22 sm2u-10 postgres[72628]: [34-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 193470.003 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, jo\r\nb_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:39:38 sm2u-10 postgres[71885]: [60-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 17 06:41:38 sm2u-10 postgres[72628]: [35-1] 
db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 195634.058 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:44:51 sm2u-10 postgres[72628]: [36-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 192194.098 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:48:01 sm2u-10 postgres[72628]: [37-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 190761.032 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:51:12 sm2u-10 postgres[72628]: [38-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 190892.036 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:52:54 sm2u-10 postgres[71885]: [61-1] db=,user=,app=,client= LOG: checkpoint complete: wrote 7530 buffers (0.3%); 1 WAL file(s) added, 2 removed, 0 recycled; write=795.559 s, sync=0.018 s, total=795.647 s; sync files=59, longest=0.018 s, average=0.001 s; distance=33884 kB, estimate=179063 kB\r\nNov 17 06:53:07 sm2u-10 postgres[72628]: [39-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG: duration: 114084.629 ms statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'\r\nNov 17 06:53:24 sm2u-10 postgres[22492]: [7-1] db=,user=,app=,client= LOG: automatic vacuum of table \"tapesystem.ds3.s3_object\": index scans: 1\r\nNov 17 06:53:27 sm2u-10 postgres[22492]: [8-1] db=,user=,app=,client= LOG: automatic vacuum of table \"tapesystem.ds3.job_entry\": index scans: 1\r\nNov 17 06:54:38 sm2u-10 postgres[71885]: [62-1] db=,user=,app=,client= LOG: checkpoint starting: time\r\nNov 17 06:55:29 sm2u-10 postgres[22844]: [7-1] db=,user=,app=,client= LOG: automatic vacuum of table \"tapesystem.ds3.s3_object\": index scans: 1\r\nNov 17 06:55:33 sm2u-10 postgres[22844]: [8-1] db=,user=,app=,client= LOG: automatic vacuum of table \"tapesystem.ds3.blob\": index scans: 1\r\nNov 17 06:55:37 sm2u-10 postgres[22844]: [9-1] db=,user=,app=,client= LOG: automatic vacuum of table \"tapesystem.ds3.job_entry\": index scans: 1\r\n...\r\n\r\nThe dtrace is from 17 6:43:13. 
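\r\nOn the flamegraph suggestion: the per minute captures should feed straight into the usual collapse and render steps. A minimal sketch, assuming Brendan Gregg's FlameGraph scripts (stackcollapse.pl, flamegraph.pl) are on hand and that the input is the capture file named by the ${DATE} pattern in the monitoring script:\r\n\r\n./stackcollapse.pl dtrace/dtrace_17-06:43:13.txt > slowdown.folded\r\n./flamegraph.pl slowdown.folded > slowdown.svg\r\n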
Wasn’t sure if I should attack files, so it’s pasted in it’s entirety.\r\n\r\nCPU ID FUNCTION:NAME\r\n 0 58712 :tick-10s\r\n\r\n\r\n postgres`CheckForSerializableConflictOutNeeded+0x10\r\n postgres`HeapCheckForSerializableConflictOut+0x35\r\n postgres`heapgetpage+0x255\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`SeqNext+0x80\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`heapgettup_pagemode+0x150\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`LWLockAcquire+0x1\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`LWLockAcquire+0x1\r\n postgres`ReadBufferExtended+0x9c\r\n postgres`heapgetpage+0x5a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n 
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
                1

              postgres`heapgetpage+0x292
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
                1

              postgres`tts_buffer_heap_getsomeattrs+0x165
              postgres`slot_getsomeattrs_int+0x27
              postgres`ExecInterpExpr+0x140
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
                1

              postgres`ReadBufferExtended+0xb3
              postgres`heap_fetch+0x2c
              postgres`heapam_fetch_row_version+0x3e
              postgres`afterTriggerInvokeEvents+0x39b
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
              postgres`exec_simple_query+0x623
              postgres`PostgresMain+0x49c
              postgres`process_startup_packet_die
              postgres`0x73055b
              postgres`PostmasterMain+0xf36
              postgres`0x697837
              postgres`_start+0x100
              `0x80095f008
                1

              postgres`PinBuffer+0x5b
              postgres`ReadBuffer_common+0x142
              postgres`ReadBufferExtended+0x9c
              postgres`heapgetpage+0x5a
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
                1

[... the remaining samples, each with a count of 1, all fall on the same
CopyFrom -> AfterTriggerEndQuery -> RI_FKey_check -> ri_PerformCheck ->
SPI_execute_snapshot -> ExecLockRows -> SeqNext / heapgetpage call path,
differing only in the innermost frame ...]

              postgres`ExecEvalParamExtern+0x36
              postgres`ExecInterpExpr+0xb0b
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`SeqNext+0x6\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`heapgetpage+0x21a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`ReadBuffer_common+0x1b\r\n postgres`ReadBufferExtended+0x9c\r\n postgres`heapgetpage+0x5a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xec\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x4c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n 
postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`hash_search_with_hash_value+0x643\r\n postgres`ReadBuffer_common+0x116\r\n postgres`ReadBufferExtended+0x9c\r\n postgres`heapgetpage+0x5a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n 1\r\n\r\n postgres`heapgetpage+0x224\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`ResourceArrayRemove+0x7\r\n postgres`ResourceOwnerForgetBuffer+0x19\r\n postgres`UnpinBuffer+0xd9\r\n postgres`ExecStoreBufferHeapTuple+0x79\r\n postgres`heap_getnextslot+0x49\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`HeapTupleSatisfiesVisibility+0x2b\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 1\r\n\r\n postgres`ExecStoreBufferHeapTuple+0x9d\r\n postgres`heap_getnextslot+0x49\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n 
postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`HeapTupleSatisfiesVisibility+0x2e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x5f\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`ExecEvalParamExtern+0x52\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`SeqNext+0x23\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`uuid_eq+0x4\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n 
postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`heap_getnextslot+0x4\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`GetCachedPlan+0x1a5\r\n postgres`_SPI_execute_plan+0x1df\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n postgres`process_startup_packet_die\r\n postgres`0x73055b\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x66\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`heap_getnextslot+0x7\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x8\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n 
postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x108\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xa\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`hash_search_with_hash_value+0x5b\r\n postgres`BufTableLookup+0x1a\r\n postgres`ReadBuffer_common+0x116\r\n postgres`ReadBufferExtended+0x9c\r\n postgres`heapgetpage+0x5a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n 1\r\n\r\n postgres`ExecEvalParamExtern+0x5d\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x6d\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n 
postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x570\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x1\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 1\r\n\r\n postgres`heap_getnextslot+0x11\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0x4\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`heapgettup_pagemode+0x4\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n 
postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`ReleaseBuffer+0x24\r\n postgres`ExecStoreBufferHeapTuple+0x79\r\n postgres`heap_getnextslot+0x49\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`heapgetpage+0x246\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`ExecEvalParamExtern+0x67\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`uuid_eq+0x18\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0x8\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n 
postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x8\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`heapgetpage+0x249\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0xb\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x57b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0xc\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n 
postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`heapgetpage+0x24c\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`heapgettup_pagemode+0xc\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0xd\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x7f\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`uuid_eq+0x24\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n 
postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x584\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x2a\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`ExecEvalParamExtern+0x7a\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`heapgetpage+0x25c\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n 
postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x1e\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x22\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 1\r\n\r\n postgres`ExecScan+0xb4\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`SeqNext+0x57\r\n postgres`ExecScan+0xc9\r\n 
postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`ExecScan+0xb9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n 1\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x3a\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 1\r\n\r\n postgres`ExecEvalParamExtern+0x8c\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`HeapTupleSatisfiesVisibility+0x6c\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 1\r\n\r\n postgres`ExecInterpExpr+0x9c\r\n 
postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n postgres`SeqNext+0x61\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 1\r\n\r\n libc.so.7`bsearch+0x1\r\n postgres`ExecInterpExprStillValid+0x18\r\n postgres`ExecReScanIndexScan+0xa5\r\n postgres`ExecReScan+0x1ff\r\n postgres`ExecIndexScan+0x23\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 1\r\n\r\n postgres`heap_getnextslot+0x41\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0x33\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 1\r\n\r\n postgres`slot_getsomeattrs_int+0x35\r\n 
[Remaining low-frequency samples, 1 to 2 hits each. All of them sit on the
same call path, the FK-check triggers fired at the end of the COPY:

    CopyFrom -> AfterTriggerEndQuery -> afterTriggerInvokeEvents
      -> ExecCallTriggerFunc -> RI_FKey_check_ins -> RI_FKey_check
      -> ri_PerformCheck -> SPI_execute_snapshot -> _SPI_execute_plan
      -> standard_ExecutorRun -> ExecLockRows -> ExecScan (seq scan)

and differ only in the leaf frame that was on-CPU when the sample was taken.
A representative stack:

              postgres`heapgettup_pagemode+0x339
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
                1

Leaf frames seen in this tail: SeqNext, heap_getnextslot,
heapgettup_pagemode, heapgetpage, HeapTupleSatisfiesVisibility,
HeapTupleHeaderGetCmin, TransactionIdIsCurrentTransactionId,
HeapCheckForSerializableConflictOut, CheckForSerializableConflictOutNeeded,
ExecStoreBufferHeapTuple, ExecScanFetch, ExecScan, ExecInterpExpr,
ExecEvalParamExtern, slot_getsomeattrs_int, tts_buffer_heap_getsomeattrs,
LWLockRelease, and hash_search_with_hash_value (via BufTableLookup /
ReadBuffer_common / ReadBufferExtended under heapgetpage).]

              postgres`SeqNext+0x6e
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`ExecScanFetch+0x4\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`ExecScanFetch+0x104\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`heapgettup_pagemode+0x45\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 2\r\n\r\n postgres`SeqNext+0x79\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`ExecScanFetch+0xa\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n 
postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`ExecScanFetch+0xe\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 2\r\n\r\n postgres`ExecScan+0xe4\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n 3\r\n\r\n postgres`HeapTupleHeaderGetCmin+0x7\r\n postgres`HeapTupleSatisfiesVisibility+0x8e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 3\r\n\r\n postgres`heapgettup_pagemode+0x65c\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x280\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n 
postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`CheckForSerializableConflictOutNeeded+0x49\r\n postgres`HeapCheckForSerializableConflictOut+0x35\r\n postgres`heapgetpage+0x255\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 3\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x8d\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 3\r\n\r\n postgres`ExecInterpExpr+0x800\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`MemoryContextReset+0x45\r\n postgres`ExecScan+0xb9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`heapgetpage+0x6\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n 
postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 3\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xdf\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xe9\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`HeapCheckForSerializableConflictOut+0x169\r\n postgres`heapgetpage+0x255\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 3\r\n\r\n postgres`heapgetpage+0x228\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 3\r\n\r\n postgres`ExecStoreBufferHeapTuple+0x99\r\n postgres`heap_getnextslot+0x49\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n 
postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`ExecEvalParamExtern+0x4b\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 3\r\n\r\n postgres`uuid_eq\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xc\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`uuid_eq+0x10\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 3\r\n\r\n postgres`slot_getsomeattrs_int\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n 
postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`heapgetpage+0x242\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 3\r\n\r\n postgres`slot_getsomeattrs_int+0x6\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 3\r\n\r\n postgres`heapgettup_pagemode+0x6\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x19\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`uuid_eq+0x1c\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n 
postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 3\r\n\r\n postgres`heapgettup_pagemode+0x14\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`ExecEvalParamExtern+0x78\r\n postgres`ExecInterpExpr+0xb0b\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 3\r\n\r\n postgres`heapgetpage+0x258\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 3\r\n\r\n postgres`ExecInterpExpr+0x91\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`PinBuffer+0x26\r\n postgres`ReadBuffer_common+0x142\r\n postgres`ReadBufferExtended+0x9c\r\n 
postgres`heapgetpage+0x5a\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n 3\r\n\r\n postgres`heapgettup_pagemode+0x328\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`ExecEvalParamExtern+0x91\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`ExecScanFetch+0xfd\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`SeqNext+0x73\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 3\r\n\r\n postgres`heapgettup_pagemode+0x14c\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n 
postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 3\r\n\r\n postgres`SeqNext+0x81\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 4\r\n\r\n postgres`heapgettup_pagemode+0x5d\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x74\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x379\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`heapgetpage+0x1b1\r\n 
postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x386\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`HeapCheckForSerializableConflictOut+0xa\r\n postgres`heapgetpage+0x255\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 4\r\n\r\n postgres`TransactionIdIsCurrentTransactionId+0x86\r\n postgres`HeapTupleSatisfiesVisibility+0x7e\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x9f\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 
postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`ExecInterpExpr+0x1\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n 4\r\n\r\n postgres`HeapTupleSatisfiesVisibility+0xbd9\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 4\r\n\r\n postgres`ExecInterpExpr+0x14\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 4\r\n\r\n postgres`heapgetpage+0x1ed\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 4\r\n\r\n postgres`ExecInterpExpr+0x830\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n 
postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 4\r\n\r\n postgres`SeqNext+0x1\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n postgres`PostgresMain+0x49c\r\n 4\r\n\r\n postgres`SeqNext+0x4\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xee\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xf1\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n 
postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0x1\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 4\r\n\r\n postgres`heap_getnextslot+0x1\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n postgres`exec_simple_query+0x623\r\n 4\r\n\r\n postgres`heap_getnextslot+0x6\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n postgres`PortalRun+0x1a0\r\n 4\r\n\r\n postgres`tts_buffer_heap_getsomeattrs+0xd\r\n postgres`slot_getsomeattrs_int+0x27\r\n postgres`ExecInterpExpr+0x140\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n postgres`PortalRunUtility+0x66\r\n postgres`PortalRunMulti+0x13c\r\n 4\r\n\r\n postgres`uuid_eq+0x14\r\n postgres`ExecInterpExpr+0x58c\r\n postgres`ExecScan+0x100\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n 
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
                4

              postgres`uuid_eq+0x27
              postgres`ExecInterpExpr+0x58c
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
                4

              [... several dozen further stacks, sample counts 4-17, all on the same
               SeqNext/heap_getnextslot/ExecScan/ExecLockRows scan path under
               ri_PerformCheck -> RI_FKey_check_ins -> CopyFrom ...]

              postgres`tts_buffer_heap_getsomeattrs+0x36c
              postgres`slot_getsomeattrs_int+0x27
              postgres`ExecInterpExpr+0x140
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
               34

              postgres`HeapTupleSatisfiesVisibility+0x42
              postgres`heapgetpage+0x237
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
               55
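Every stack sampled above sits inside the AFTER-trigger foreign-key check that COPY fires for each inserted row, and the SeqNext/heap_getnextslot frames suggest the referenced table is being walked with a sequential scan for every probe. A minimal sketch of how one might check what plan that per-row probe gets, assuming blob_id in ds3.job_entry references ds3.blob(id) and the key is a uuid (table and column names are guesses taken from the COPY statement and autovacuum log lines below, not confirmed):

-- Hypothetical stand-in for the query the RI trigger issues per inserted row;
-- substitute the real referenced table and key column.
EXPLAIN
SELECT 1 FROM ONLY ds3.blob x
 WHERE x.id = '00000000-0000-0000-0000-000000000000'::uuid
   FOR KEY SHARE OF x;
-- An index scan on the referenced key is what one would normally expect here;
-- a Seq Scan would line up with the profile above.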
On Nov 15, 2021, at 10:50 PM, Thomas Munro <[email protected]> wrote:

> On Tue, Nov 16, 2021 at 5:43 PM Robert Creager <[email protected]> wrote:
>> One CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:
>
> So PostgreSQL is eating 100% CPU, with no value shown in
> wait_event_type, and small numbers of system calls are counted.  In
> that case, is there an interesting user stack that jumps out with a
> profiler during the slowdown (or the kernel version, stack())?
>
> sudo dtrace -n 'profile-99 /arg0/ { @[ustack()] = count(); } tick-10s
> { exit(0); }'
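The "select" referred to above is not included in the quote; presumably it was a pg_stat_activity check of roughly this shape (a sketch only, the exact statement is an assumption):

SELECT pid, state, wait_event_type, wait_event, query
  FROM pg_stat_activity
 WHERE state = 'active';
-- A backend pegging a CPU with wait_event_type/wait_event both NULL is
-- actually executing, not blocked on a lock or an I/O wait.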
Ok, here are the logs from around a dtrace run.  I have dtraces every 1m10s, and for the two I looked at the last entry was the same.

egrep "( checkpoint |COPY ds3.job_entry|COPY ds3.blob|vacuum)" postgres.log

...

Nov 17 06:19:55 sm2u-10 postgres[71885]: [57-1] db=,user=,app=,client= LOG:  checkpoint complete: wrote 5843 buffers (0.2%); 1 WAL file(s) added, 2 removed, 0 recycled; write=617.324 s, sync=0.029 s, total=617.423 s; sync files=62, longest=0.028 s, average=0.001 s; distance=33368 kB, estimate=213837 kB

Nov 17 06:22:07 sm2u-10 postgres[72628]: [29-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 199171.920 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:24:38 sm2u-10 postgres[71885]: [58-1] db=,user=,app=,client= LOG:  checkpoint starting: time

Nov 17 06:25:21 sm2u-10 postgres[72628]: [30-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 193812.868 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:28:39 sm2u-10 postgres[72628]: [31-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 197402.732 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:31:53 sm2u-10 postgres[72628]: [32-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 194173.569 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:35:09 sm2u-10 postgres[72628]: [33-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 195687.516 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:35:46 sm2u-10 postgres[71885]: [59-1] db=,user=,app=,client= LOG:  checkpoint complete: wrote 6314 buffers (0.2%); 0 WAL file(s) added, 2 removed, 0 recycled; write=667.531 s, sync=0.260 s, total=667.863 s; sync files=54, longest=0.192 s, average=0.005 s; distance=27410 kB, estimate=195194 kB

Nov 17 06:38:22 sm2u-10 postgres[72628]: [34-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 193470.003 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:39:38 sm2u-10 postgres[71885]: [60-1] db=,user=,app=,client= LOG:  checkpoint starting: time

Nov 17 06:41:38 sm2u-10 postgres[72628]: [35-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 195634.058 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:44:51 sm2u-10 postgres[72628]: [36-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 192194.098 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:48:01 sm2u-10 postgres[72628]: [37-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 190761.032 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:51:12 sm2u-10 postgres[72628]: [38-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 190892.036 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:52:54 sm2u-10 postgres[71885]: [61-1] db=,user=,app=,client= LOG:  checkpoint complete: wrote 7530 buffers (0.3%); 1 WAL file(s) added, 2 removed, 0 recycled; write=795.559 s, sync=0.018 s, total=795.647 s; sync files=59, longest=0.018 s, average=0.001 s; distance=33884 kB, estimate=179063 kB

Nov 17 06:53:07 sm2u-10 postgres[72628]: [39-1] db=tapesystem,user=Administrator,app=PostgreSQL JDBC Driver,client=127.0.0.1 LOG:  duration: 114084.629 ms  statement: COPY ds3.job_entry (blob_id, chunk_id, id, job_id, order_index) FROM STDIN WITH DELIMITER AS '|'

Nov 17 06:53:24 sm2u-10 postgres[22492]: [7-1] db=,user=,app=,client= LOG:  automatic vacuum of table "tapesystem.ds3.s3_object": index scans: 1

Nov 17 06:53:27 sm2u-10 postgres[22492]: [8-1] db=,user=,app=,client= LOG:  automatic vacuum of table "tapesystem.ds3.job_entry": index scans: 1

Nov 17 06:54:38 sm2u-10 postgres[71885]: [62-1] db=,user=,app=,client= LOG:  checkpoint starting: time

Nov 17 06:55:29 sm2u-10 postgres[22844]: [7-1] db=,user=,app=,client= LOG:  automatic vacuum of table "tapesystem.ds3.s3_object": index scans: 1

Nov 17 06:55:33 sm2u-10 postgres[22844]: [8-1] db=,user=,app=,client= LOG:  automatic vacuum of table "tapesystem.ds3.blob": index scans: 1

Nov 17 06:55:37 sm2u-10 postgres[22844]: [9-1] db=,user=,app=,client= LOG:  automatic vacuum of table "tapesystem.ds3.job_entry": index scans: 1

...
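To line the checkpoint/COPY/autovacuum timestamps above up with what autovacuum and autoanalyze last did to the tables involved, something along these lines could be run as well (a sketch only; the schema name is taken from the log, the rest is standard pg_stat_user_tables output):

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE schemaname = 'ds3'
 ORDER BY relname;
-- n_dead_tup and the last_auto* timestamps show whether a vacuum/analyze pass
-- landed between two of the long COPY statements above.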
The dtrace below is from Nov 17 06:43:13.  Wasn’t sure if I should attach files, so it’s pasted in its entirety.

CPU     ID                    FUNCTION:NAME
  0  58712                        :tick-10s


              postgres`CheckForSerializableConflictOutNeeded+0x10
              postgres`HeapCheckForSerializableConflictOut+0x35
              postgres`heapgetpage+0x255
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
                1


              postgres`SeqNext+0x80
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
              postgres`exec_simple_query+0x623
                1


              [... a few dozen more single-sample stacks, nearly all on the same
               SeqNext/heap_getnextslot scan path under ri_PerformCheck ->
               RI_FKey_check_ins -> CopyFrom ...]


              postgres`tts_buffer_heap_getsomeattrs+0x177
              postgres`slot_getsomeattrs_int+0x27
              postgres`ExecInterpExpr+0x140
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
                1

 postgres`tts_buffer_heap_getsomeattrs+0x278\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecScanFetch+0x29\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x19\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`LWLockRelease+0x39\n              postgres`heapgetpage+0x271\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              
postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x37d\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`heapgetpage+0x1ae\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`ExecScanFetch+0x30\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x284\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              
postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecInterpExpr+0x1e4\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`hash_search_with_hash_value+0xd5\n              postgres`BufTableLookup+0x1a\n              postgres`ReadBuffer_common+0x116\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x26\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x6\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              
postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecInterpExpr+0x1de9\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`ReadBufferExtended+0xcb\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecInterpExpr+0x1ec\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n        
      postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`RecoveryInProgress\n              postgres`heapgetpage+0x84\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`heapam_slot_callbacks+0x1\n              postgres`EvalPlanQualSlot+0x46\n              postgres`ExecLockRows+0xdd\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`TransactionIdIsCurrentTransactionId+0x82\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x93\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              
postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x16\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ResourceOwnerForgetBuffer+0x6\n              postgres`UnpinBuffer+0xd9\n              postgres`ExecStoreBufferHeapTuple+0x79\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`CheckForSerializableConflictOutNeeded+0x4a\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x1c\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              
postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x9d\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x1f\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`heapgettup_pagemode+0x18f\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              
libc.so.7`memcmp\n              postgres`BufTableLookup+0x1a\n              postgres`ReadBuffer_common+0x116\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n                1\n\n\n              postgres`ReadBuffer_common+0xd2\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`BufTableLookup+0x23\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x2a4\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              
postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`LWLockAcquire+0x44\n              postgres`ReadBuffer_common+0x10a\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n                1\n\n\n              postgres`ExecInterpExpr+0x805\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`heapgetpage+0xd6\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x26\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              
postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x47\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`MemoryContextReset+0x4a\n              postgres`ExecScan+0xb9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`MemoryContextReset+0x4b\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xac\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n 
             postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x2ac\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x4d\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`heapgetpage+0x1df\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              
postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`int4hashfast+0x20\n              postgres`CatalogCacheComputeHashValue+0x6e\n              postgres`SearchCatCacheInternal+0x73\n              postgres`TupleDescInitEntry+0xcb\n              postgres`ExecTypeFromTLInternal+0xa9\n              postgres`ExecInitJunkFilter+0x12\n              postgres`standard_ExecutorStart+0x644\n              postgres`_SPI_execute_plan+0x50f\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x51\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`GetPrivateRefCountEntry+0x12\n              postgres`IncrBufferRefCount+0x24\n              postgres`ExecStoreBufferHeapTuple+0x8a\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecEvalParamExtern+0x4\n              postgres`ExecInterpExpr+0xb0b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              
postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`PinBuffer+0xa5\n              postgres`ReadBuffer_common+0x142\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n                1\n\n\n              postgres`ExecStoreBufferHeapTuple+0x58\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecEvalParamExtern+0x9\n              postgres`ExecInterpExpr+0xb0b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`ExecInterpExpr+0x819\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n    
          postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`ExecInterpExpr+0x1b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`HeapTupleSatisfiesVisibility+0xbed\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecInterpExpr+0x1f\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`heapgetpage+0x1f1\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n    
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
                1


              postgres`spi_printtup+0x32
              postgres`standard_ExecutorRun+0x136
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
              postgres`exec_simple_query+0x623
              postgres`PostgresMain+0x49c
              postgres`process_startup_packet_die
                1


              postgres`HeapTupleSatisfiesVisibility+0xbf2
              postgres`heapgetpage+0x237
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
                1


              postgres`tts_buffer_heap_getsomeattrs+0xc3
              postgres`slot_getsomeattrs_int+0x27
              postgres`ExecInterpExpr+0x140
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
                1


              postgres`ResourceArrayEnlarge+0x233
              postgres`ReadBuffer_common+0x53
              postgres`ReadBufferExtended+0x9c
              postgres`heapgetpage+0x5a
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
                1


              libc.so.7`memset+0x37
              postgres`heap_form_tuple+0xa1
              postgres`spi_printtup+0x66
              postgres`standard_ExecutorRun+0x136
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
              postgres`exec_simple_query+0x623
                1


              postgres`uuid_eq+0x4
              postgres`ExecInterpExpr+0x58c
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
                1


              postgres`GetCachedPlan+0x1a5
              postgres`_SPI_execute_plan+0x1df
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
              postgres`PortalRunMulti+0x13c
              postgres`PortalRun+0x1a0
              postgres`exec_simple_query+0x623
              postgres`PostgresMain+0x49c
              postgres`process_startup_packet_die
              postgres`0x73055b
                1


              postgres`ExecInterpExpr+0x58c
              postgres`ExecScan+0x100
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`TransactionIdIsCurrentTransactionId+0x1e\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`TransactionIdIsCurrentTransactionId+0x22\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`ExecScan+0xb4\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              
postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`SeqNext+0x57\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`ExecScan+0xb9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x3a\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecEvalParamExtern+0x8c\n              postgres`ExecInterpExpr+0xb0b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              
postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x6c\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecInterpExpr+0x9c\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`SeqNext+0x61\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              
postgres`exec_simple_query+0x623\n                1\n\n\n              libc.so.7`bsearch+0x1\n              postgres`ExecInterpExprStillValid+0x18\n              postgres`ExecReScanIndexScan+0xa5\n              postgres`ExecReScan+0x1ff\n              postgres`ExecIndexScan+0x23\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`heap_getnextslot+0x41\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`slot_getsomeattrs_int+0x33\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`slot_getsomeattrs_int+0x35\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n         
     postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`slot_getsomeattrs_int+0x38\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`SeqNext+0x68\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`ExecScanFetch+0xf9\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`heapgettup_pagemode+0x339\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              
postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecInterpExpr+0x5a9\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`heapgettup_pagemode+0x33d\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`heap_getnextslot+0x4d\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                1\n\n\n              postgres`ExecScanFetch+0xfe\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              
postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`ExecScanFetch\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`CheckForSerializableConflictOutNeeded\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              
postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`heapgetpage+0x180\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                1\n\n\n              postgres`ExecScanFetch+0x1\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`CheckForSerializableConflictOutNeeded+0x1\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x254\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              
postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`CheckForSerializableConflictOutNeeded+0x5\n              postgres`HeapCheckForSerializableConflictOut+0x35\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`ExecScanFetch+0x6\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`CheckForSerializableConflictOutNeeded+0x6\n              postgres`HeapCheckForSerializableConflictOut+0x35\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                1\n\n\n              postgres`ExecScanFetch+0x107\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n   
           postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`ExecScan+0xd7\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                1\n\n\n              postgres`ExecScanFetch+0x8\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x258\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`heapgettup_pagemode+0x49\n              
postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x5a\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`ExecInterpExpr+0x1dbb\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x25b\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n            
  postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x15d\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`heapgettup_pagemode+0x64d\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x5f\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                1\n\n\n              postgres`HeapTupleHeaderGetCmin\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              
postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                2\n\n\n              postgres`ExecStoreBufferHeapTuple+0x6\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                2\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x98\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                2\n\n\n              postgres`HeapTupleHeaderGetCmin+0xb\n              postgres`HeapTupleSatisfiesVisibility+0x8e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                2\n\n\n              postgres`heapgetpage+0x1a5\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n          
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
                2


The remaining samples from the COPY backend (2 hits each) all follow the same
call path: the after-row INSERT trigger fires the foreign-key check,
ri_PerformCheck runs its lookup on the referenced table (the
SELECT ... FOR KEY SHARE query) through SPI, and that query is executed as a
sequential scan under the LockRows node. A representative sample:

              postgres`heapgetpage+0x1ab
              postgres`heapgettup_pagemode+0x5ad
              postgres`heap_getnextslot+0x52
              postgres`SeqNext+0x71
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
              postgres`PortalRunUtility+0x66
                2

Where the frame limit did not cut the stack off, the deeper frames continue
through PortalRunMulti -> PortalRun -> exec_simple_query -> PostgresMain.

The other samples (2 hits apiece) differ only in the leaf frames below
ExecScan (instruction offsets elided): tts_buffer_heap_getsomeattrs,
slot_getsomeattrs_int, ExecInterpExpr, ExecEvalParamExtern, ExecScanFetch,
SeqNext, heap_getnextslot, heapgettup_pagemode, heapgetpage,
HeapTupleSatisfiesVisibility, TransactionIdIsCurrentTransactionId,
HeapCheckForSerializableConflictOut, CheckForSerializableConflictOutNeeded,
LWLockRelease, and hash_search_with_hash_value / BufTableLookup /
ReadBuffer_common / ReadBufferExtended -- all reached through the same
RI_FKey_check_ins -> ri_PerformCheck -> ExecLockRows -> SeqNext path.

              postgres`ExecScanFetch+0xa
              postgres`ExecScan+0xc9
              postgres`ExecLockRows+0x7b
              postgres`standard_ExecutorRun+0x10a
              postgres`_SPI_execute_plan+0x524
              postgres`SPI_execute_snapshot+0x116
              postgres`ri_PerformCheck+0x29e
              postgres`RI_FKey_check+0x5d3
              postgres`RI_FKey_check_ins+0x21
              postgres`ExecCallTriggerFunc+0x105
              postgres`afterTriggerInvokeEvents+0x605
              postgres`AfterTriggerEndQuery+0x7a
              postgres`CopyFrom+0xaca
              postgres`DoCopy+0x553
              postgres`standard_ProcessUtility+0x5f9
              postgres`ProcessUtility+0x28
  postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                2\n\n\n              postgres`ExecScanFetch+0xe\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                2\n\n\n              postgres`ExecScan+0xe4\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                3\n\n\n              postgres`HeapTupleHeaderGetCmin+0x7\n              postgres`HeapTupleSatisfiesVisibility+0x8e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                3\n\n\n              postgres`heapgettup_pagemode+0x65c\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              
postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x280\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`CheckForSerializableConflictOutNeeded+0x49\n              postgres`HeapCheckForSerializableConflictOut+0x35\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                3\n\n\n              postgres`TransactionIdIsCurrentTransactionId+0x8d\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                3\n\n\n              postgres`ExecInterpExpr+0x800\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              
postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`MemoryContextReset+0x45\n              postgres`ExecScan+0xb9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`heapgetpage+0x6\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                3\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xdf\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              
postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xe9\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x169\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                3\n\n\n              postgres`heapgetpage+0x228\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                3\n\n\n              postgres`ExecStoreBufferHeapTuple+0x99\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              
postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`ExecEvalParamExtern+0x4b\n              postgres`ExecInterpExpr+0xb0b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                3\n\n\n              postgres`uuid_eq\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xc\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`uuid_eq+0x10\n              postgres`ExecInterpExpr+0x58c\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              
postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                3\n\n\n              postgres`slot_getsomeattrs_int\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`heapgetpage+0x242\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                3\n\n\n              postgres`slot_getsomeattrs_int+0x6\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                3\n\n\n              postgres`heapgettup_pagemode+0x6\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              
postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x19\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`uuid_eq+0x1c\n              postgres`ExecInterpExpr+0x58c\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                3\n\n\n              postgres`heapgettup_pagemode+0x14\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              
postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`ExecEvalParamExtern+0x78\n              postgres`ExecInterpExpr+0xb0b\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                3\n\n\n              postgres`heapgetpage+0x258\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                3\n\n\n              postgres`ExecInterpExpr+0x91\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`PinBuffer+0x26\n              postgres`ReadBuffer_common+0x142\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              
postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n                3\n\n\n              postgres`heapgettup_pagemode+0x328\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`ExecEvalParamExtern+0x91\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`ExecScanFetch+0xfd\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`SeqNext+0x73\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              
postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                3\n\n\n              postgres`heapgettup_pagemode+0x14c\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                3\n\n\n              postgres`SeqNext+0x81\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                4\n\n\n              postgres`heapgettup_pagemode+0x5d\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x74\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              
postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x379\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`heapgetpage+0x1b1\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x386\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              
postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`HeapCheckForSerializableConflictOut+0xa\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                4\n\n\n              postgres`TransactionIdIsCurrentTransactionId+0x86\n              postgres`HeapTupleSatisfiesVisibility+0x7e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x9f\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`ExecInterpExpr+0x1\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              
postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                4\n\n\n              postgres`HeapTupleSatisfiesVisibility+0xbd9\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                4\n\n\n              postgres`ExecInterpExpr+0x14\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                4\n\n\n              postgres`heapgetpage+0x1ed\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                4\n\n\n              postgres`ExecInterpExpr+0x830\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              
postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                4\n\n\n              postgres`SeqNext+0x1\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                4\n\n\n              postgres`SeqNext+0x4\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xee\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n            
  postgres`tts_buffer_heap_getsomeattrs+0xf1\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x1\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`heap_getnextslot+0x1\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n   
           postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                4\n\n\n              postgres`heap_getnextslot+0x6\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xd\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`uuid_eq+0x14\n              postgres`ExecInterpExpr+0x58c\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`uuid_eq+0x27\n              postgres`ExecInterpExpr+0x58c\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              
postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`heap_getnextslot+0x39\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`heapgettup_pagemode+0x646\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                4\n\n\n              postgres`heap_getnextslot+0x57\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                4\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x8e\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n      
        postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                4\n\n\n              postgres`ExecStoreBufferHeapTuple+0x1\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                5\n\n\n              postgres`ExecInterpExpr+0x1dcc\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                5\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x36f\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              
postgres`ExecStoreBufferHeapTuple+0x22\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`ExecInterpExpr+0x808\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                5\n\n\n              postgres`HeapTupleSatisfiesVisibility+0xbea\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                5\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x15e\n              postgres`heapgetpage+0x255\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n            
  postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                5\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xf4\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`slot_getsomeattrs_int+0x1\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                5\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x44\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                5\n\n\n              postgres`slot_getsomeattrs_int+0x2b\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              
postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n                5\n\n\n              postgres`heapgettup_pagemode+0x32c\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`ExecScan+0xc4\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                5\n\n\n              postgres`heapgettup_pagemode+0x38\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`heapgettup_pagemode+0x346\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              
postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x159\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                5\n\n\n              postgres`ExecStoreBufferHeapTuple+0xf\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                6\n\n\n              postgres`ExecInterpExpr+0x1d0\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n            
  postgres`exec_simple_query+0x623\n                6\n\n\n              postgres`HeapCheckForSerializableConflictOut+0x1\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                6\n\n\n              postgres`ExecStoreBufferHeapTuple+0x5f\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                6\n\n\n              postgres`SeqNext+0x9\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                6\n\n\n              postgres`heap_getnextslot\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              
postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                6\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x106\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                6\n\n\n              postgres`heapgetpage+0x23b\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n                6\n\n\n              postgres`heapgettup_pagemode+0x332\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                6\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x75\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n        
      postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                6\n\n\n              postgres`hash_search_with_hash_value+0xa5\n              postgres`BufTableLookup+0x1a\n              postgres`ReadBuffer_common+0x116\n              postgres`ReadBufferExtended+0x9c\n              postgres`heapgetpage+0x5a\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n                6\n\n\n              postgres`heap_page_prune_opt+0xc8\n              postgres`heapgetpage+0x84\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                6\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x365\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                7\n\n\n              postgres`ExecStoreBufferHeapTuple+0x64\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              
postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                7\n\n\n              postgres`ExecInterpExpr+0x575\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                7\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x3d\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                7\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xd2\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              
postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                8\n\n\n              postgres`SeqNext+0x5e\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                8\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x79\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n                8\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x369\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n                9\n\n\n              postgres`ExecScan+0x107\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              
postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n              postgres`PostgresMain+0x49c\n                9\n\n\n              postgres`uuid_eq+0x1\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                9\n\n\n              postgres`SeqNext+0x5a\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n              postgres`exec_simple_query+0x623\n                9\n\n\n              postgres`heap_getnextslot+0x3e\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n              postgres`PortalRun+0x1a0\n               10\n\n\n              postgres`HeapTupleHeaderGetCmin+0x1\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              
postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n               11\n\n\n              postgres`heapgetpage+0x1b7\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n               13\n\n\n              postgres`ExecStoreBufferHeapTuple+0x5c\n              postgres`heap_getnextslot+0x49\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n               16\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0xfd\n              postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n               17\n\n\n              postgres`tts_buffer_heap_getsomeattrs+0x36c\n              
postgres`slot_getsomeattrs_int+0x27\n              postgres`ExecInterpExpr+0x140\n              postgres`ExecScan+0x100\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n              postgres`PortalRunUtility+0x66\n              postgres`PortalRunMulti+0x13c\n               34\n\n\n              postgres`HeapTupleSatisfiesVisibility+0x42\n              postgres`heapgetpage+0x237\n              postgres`heapgettup_pagemode+0x5ad\n              postgres`heap_getnextslot+0x52\n              postgres`SeqNext+0x71\n              postgres`ExecScan+0xc9\n              postgres`ExecLockRows+0x7b\n              postgres`standard_ExecutorRun+0x10a\n              postgres`_SPI_execute_plan+0x524\n              postgres`SPI_execute_snapshot+0x116\n              postgres`ri_PerformCheck+0x29e\n              postgres`RI_FKey_check+0x5d3\n              postgres`RI_FKey_check_ins+0x21\n              postgres`ExecCallTriggerFunc+0x105\n              postgres`afterTriggerInvokeEvents+0x605\n              postgres`AfterTriggerEndQuery+0x7a\n              postgres`CopyFrom+0xaca\n              postgres`DoCopy+0x553\n              postgres`standard_ProcessUtility+0x5f9\n              postgres`ProcessUtility+0x28\n               55", "msg_date": "Wed, 17 Nov 2021 17:51:05 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 17, 2021, at 10:51 AM, Robert Creager <[email protected]<mailto:[email protected]>> wrote:\n\n\n\nOn Nov 15, 2021, at 10:50 PM, Thomas Munro <[email protected]<mailto:[email protected]>> wrote:\n\nThis message originated outside your organization.\n\nOn Tue, Nov 16, 2021 at 5:43 PM Robert Creager <[email protected]<mailto:[email protected]>> wrote:\nOne CPU is pegged, the data has been sent over STDIN, so Postgres is not waiting for more, there are no other queries running using this select:\n\nSo PostgreSQL is eating 100% CPU, with no value shown in\nwait_event_type, and small numbers of system calls are counted. In\nthat case, is there an interesting user stack that jumps out with a\nprofiler during the slowdown (or the kernel version, stack())?\n\nsudo dtrace -n 'profile-99 /arg0/ { @[ustack()] = count(); } tick-10s\n{ exit(0); }\n\nOk, here is the logs around a dtrace included. 
I have dtraces every 1m10s, and the two I looked at, the last entry, were the same.\n\nAlso, the WAL settings are different on this machine; I had changed them on the original email submittal to see if it was related.\n\ntapesystem=# SELECT name, current_setting(name), source\n  FROM pg_settings\n  WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n---------------------------------+---------------------------------+--------------------\n application_name | psql | client\n autovacuum_analyze_scale_factor | 0.05 | configuration file\n autovacuum_analyze_threshold | 5000 | configuration file\n autovacuum_max_workers | 8 | configuration file\n autovacuum_vacuum_cost_delay | 5ms | configuration file\n autovacuum_vacuum_scale_factor | 0.1 | configuration file\n autovacuum_vacuum_threshold | 5000 | configuration file\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_timeout | 15min | configuration file\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n dynamic_shared_memory_type | posix | configuration file\n effective_cache_size | 6546MB | configuration file\n effective_io_concurrency | 200 | configuration file\n full_page_writes | off | configuration file\n hot_standby | off | configuration file\n lc_messages | C | configuration file\n lc_monetary | C | configuration file\n lc_numeric | C | configuration file\n lc_time | C | configuration file\n listen_addresses | * | configuration file\n log_autovacuum_min_duration | 1s | configuration file\n log_checkpoints | on | configuration file\n log_connections | on | configuration file\n log_destination | syslog | configuration file\n log_disconnections | on | configuration file\n log_duration | off | configuration file\n log_line_prefix | db=%d,user=%u,app=%a,client=%h | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_sample | 500ms | configuration file\n log_min_duration_statement | 1s | configuration file\n log_statement_sample_rate | 0.01 | configuration file\n log_temp_files | 0 | configuration file\n log_timezone | UTC | configuration file\n maintenance_work_mem | 1964MB | configuration file\n max_connections | 250 | configuration file\n max_parallel_workers_per_gather | 8 | configuration file\n max_replication_slots | 0 | configuration file\n max_stack_depth | 32MB | configuration file\n max_wal_senders | 0 | configuration file\n max_wal_size | 10GB | configuration file\n max_worker_processes | 8 | configuration file\n random_page_cost | 2 | configuration file\n shared_buffers | 22064MB | configuration file\n synchronous_commit | off | configuration file\n temp_buffers | 654MB | configuration file\n TimeZone | UTC | configuration file\n track_activities | on | configuration file\n track_counts | on | configuration file\n update_process_title | off | configuration file\n vacuum_cost_delay | 1ms | configuration file\n wal_init_zero | off | configuration file\n wal_level | minimal | configuration file\n wal_recycle | off | configuration file\n work_mem | 654MB | configuration file
", "msg_date": "Wed, 17 Nov 2021 17:55:38 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue."
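[Editor's note, not part of the original thread: the exchange above is about inspecting the stuck backend while COPY pegs one CPU with no wait event reported. A minimal sketch of such a pg_stat_activity snapshot follows; it uses only stock columns from that view and also surfaces the long-running or idle-in-transaction sessions asked about in the next message. The query is illustrative, not quoted from the thread.]

-- Hypothetical snapshot of non-idle backends while the COPY appears stuck.
SELECT pid, state, wait_event_type, wait_event,
       now() - xact_start  AS xact_age,
       now() - query_start AS query_age,
       left(query, 60)     AS query
  FROM pg_stat_activity
 WHERE state <> 'idle'
 ORDER BY xact_start NULLS LAST;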
}, { "msg_contents": "On Wed, Nov 17, 2021 at 05:51:05PM +0000, Robert Creager wrote:\n> postgres`HeapTupleSatisfiesVisibility+0x42\n> postgres`heapgetpage+0x237\n> postgres`heapgettup_pagemode+0x5ad\n> postgres`heap_getnextslot+0x52\n> postgres`SeqNext+0x71\n> postgres`ExecScan+0xc9\n> postgres`ExecLockRows+0x7b\n> postgres`standard_ExecutorRun+0x10a\n> postgres`_SPI_execute_plan+0x524\n> postgres`SPI_execute_snapshot+0x116\n> postgres`ri_PerformCheck+0x29e\n> postgres`RI_FKey_check+0x5d3\n> postgres`RI_FKey_check_ins+0x21\n> postgres`ExecCallTriggerFunc+0x105\n> postgres`afterTriggerInvokeEvents+0x605\n> postgres`AfterTriggerEndQuery+0x7a\n> postgres`CopyFrom+0xaca\n> postgres`DoCopy+0x553\n> postgres`standard_ProcessUtility+0x5f9\n> postgres`ProcessUtility+0x28\n> 55\n\nIt shows that the process is running FK triggers.\nWould you show \\d for the table which is the destination of COPY, and for other\ntables to which it has FK constraints.\n\nAlso, do you have any long-running transactions ?\nIn your first message, you showed no other queries except \"idle\" ones (not\nidle-in-transaction) but I figured I'd ask anyway.\n\nDoes your COPY job run in a transaction block ?\n\nYou're running pg13.2, so it would be interesting to know if the problem exists\nunder 13.5.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 13:00:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> It shows that the process is running FK triggers.\n\nIndeed, and doing a seqscan therein. Normally I'd suppose that\nthis reflects a lack of an index, but RI_FKey_check should always\nbe doing something that matches the referenced table's unique\nconstraint, so why isn't it using that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:28:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 17, 2021, at 12:00 PM, Justin Pryzby <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nOn Wed, Nov 17, 2021 at 05:51:05PM +0000, Robert Creager wrote:\r\n postgres`HeapTupleSatisfiesVisibility+0x42\r\n postgres`heapgetpage+0x237\r\n postgres`heapgettup_pagemode+0x5ad\r\n postgres`heap_getnextslot+0x52\r\n postgres`SeqNext+0x71\r\n postgres`ExecScan+0xc9\r\n postgres`ExecLockRows+0x7b\r\n postgres`standard_ExecutorRun+0x10a\r\n postgres`_SPI_execute_plan+0x524\r\n postgres`SPI_execute_snapshot+0x116\r\n postgres`ri_PerformCheck+0x29e\r\n postgres`RI_FKey_check+0x5d3\r\n postgres`RI_FKey_check_ins+0x21\r\n postgres`ExecCallTriggerFunc+0x105\r\n postgres`afterTriggerInvokeEvents+0x605\r\n postgres`AfterTriggerEndQuery+0x7a\r\n postgres`CopyFrom+0xaca\r\n postgres`DoCopy+0x553\r\n postgres`standard_ProcessUtility+0x5f9\r\n postgres`ProcessUtility+0x28\r\n 55\r\n\r\nIt shows that the process is running FK triggers.\r\nWould you show \\d for the table which is the destination of COPY, and for other\r\ntables to which it has FK constraints.\r\n\r\nTwo tables being copied into. I chased the first FK tables from the job_entry. I can do the entire thing if you want. 
There are bunches...\r\n\r\ntapesystem=# \\d ds3.job_entry\r\n Table \"ds3.job_entry\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------+---------+-----------+----------+---------\r\n blob_id | uuid | | not null |\r\n chunk_id | uuid | | not null |\r\n id | uuid | | not null |\r\n job_id | uuid | | not null |\r\n order_index | integer | | not null |\r\nIndexes:\r\n \"job_entry_pkey\" PRIMARY KEY, btree (id)\r\n \"job_entry_blob_id_idx\" btree (blob_id)\r\n \"job_entry_chunk_id_idx\" btree (chunk_id)\r\n \"job_entry_job_id_blob_id_key\" UNIQUE CONSTRAINT, btree (job_id, blob_id)\r\n \"job_entry_job_id_idx\" btree (job_id)\r\n \"job_entry_order_index_chunk_id_key\" UNIQUE CONSTRAINT, btree (order_index, chunk_id)\r\nForeign-key constraints:\r\n \"job_entry_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n \"job_entry_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n \"job_entry_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n\r\ntapesystem=# \\d ds3.job_chunk\r\n Table \"ds3.job_chunk\"\r\n Column | Type | Collation | Nullable | Default\r\n---------------------------+--------------------------------+-----------+----------+---------\r\n blob_store_state | ds3.job_chunk_blob_store_state | | not null |\r\n chunk_number | integer | | not null |\r\n id | uuid | | not null |\r\n job_id | uuid | | not null |\r\n node_id | uuid | | |\r\n pending_target_commit | boolean | | not null |\r\n read_from_azure_target_id | uuid | | |\r\n read_from_ds3_target_id | uuid | | |\r\n read_from_pool_id | uuid | | |\r\n read_from_s3_target_id | uuid | | |\r\n read_from_tape_id | uuid | | |\r\nIndexes:\r\n \"job_chunk_pkey\" PRIMARY KEY, btree (id)\r\n \"job_chunk_blob_store_state_idx\" btree (blob_store_state)\r\n \"job_chunk_chunk_number_job_id_key\" UNIQUE CONSTRAINT, btree (chunk_number, job_id)\r\n \"job_chunk_job_id_idx\" btree (job_id)\r\n \"job_chunk_node_id_idx\" btree (node_id)\r\n \"job_chunk_read_from_azure_target_id_idx\" btree (read_from_azure_target_id)\r\n \"job_chunk_read_from_ds3_target_id_idx\" btree (read_from_ds3_target_id)\r\n \"job_chunk_read_from_pool_id_idx\" btree (read_from_pool_id)\r\n \"job_chunk_read_from_s3_target_id_idx\" btree (read_from_s3_target_id)\r\n \"job_chunk_read_from_tape_id_idx\" btree (read_from_tape_id)\r\nForeign-key constraints:\r\n \"job_chunk_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n \"job_chunk_node_id_fkey\" FOREIGN KEY (node_id) REFERENCES ds3.node(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n \"job_chunk_read_from_azure_target_id_fkey\" FOREIGN KEY (read_from_azure_target_id) REFERENCES target.azure_target(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n \"job_chunk_read_from_ds3_target_id_fkey\" FOREIGN KEY (read_from_ds3_target_id) REFERENCES target.ds3_target(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n \"job_chunk_read_from_pool_id_fkey\" FOREIGN KEY (read_from_pool_id) REFERENCES pool.pool(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n \"job_chunk_read_from_s3_target_id_fkey\" FOREIGN KEY (read_from_s3_target_id) REFERENCES target.s3_target(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n \"job_chunk_read_from_tape_id_fkey\" FOREIGN KEY (read_from_tape_id) REFERENCES tape.tape(id) ON UPDATE CASCADE ON DELETE SET NULL\r\nReferenced by:\r\n TABLE \"ds3.job_chunk_azure_target\" CONSTRAINT \"job_chunk_azure_target_chunk_id_fkey\" FOREIGN KEY 
(chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_chunk_ds3_target\" CONSTRAINT \"job_chunk_ds3_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_chunk_persistence_target\" CONSTRAINT \"job_chunk_persistence_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_chunk_s3_target\" CONSTRAINT \"job_chunk_s3_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n\r\ntapesystem=# \\d ds3.job\r\n Table \"ds3.job\"\r\n Column | Type | Collation | Nullable | Default\r\n-----------------------------------------+-------------------------------------------------+-----------+----------+---------\r\n bucket_id | uuid | | not null |\r\n cached_size_in_bytes | bigint | | not null |\r\n chunk_client_processing_order_guarantee | ds3.job_chunk_client_processing_order_guarantee | | not null |\r\n completed_size_in_bytes | bigint | | not null |\r\n created_at | timestamp without time zone | | not null |\r\n id | uuid | | not null |\r\n original_size_in_bytes | bigint | | not null |\r\n priority | ds3.blob_store_task_priority | | not null |\r\n request_type | ds3.job_request_type | | not null |\r\n user_id | uuid | | not null |\r\n truncated | boolean | | not null |\r\n rechunked | timestamp without time zone | | |\r\n error_message | character varying | | |\r\n naked | boolean | | not null |\r\n name | character varying | | not null |\r\n aggregating | boolean | | not null |\r\n minimize_spanning_across_media | boolean | | not null |\r\n truncated_due_to_timeout | boolean | | not null |\r\n implicit_job_id_resolution | boolean | | not null |\r\n verify_after_write | boolean | | not null |\r\n replicating | boolean | | not null |\r\n dead_job_cleanup_allowed | boolean | | not null |\r\n restore | ds3.job_restore | | not null |\r\nIndexes:\r\n \"job_pkey\" PRIMARY KEY, btree (id)\r\n \"ds3_job__bucket_id\" btree (bucket_id)\r\n \"ds3_job__created_at\" btree (created_at)\r\n \"ds3_job__name\" btree (name)\r\n \"ds3_job__user_id\" btree (user_id)\r\nForeign-key constraints:\r\n \"job_bucket_id_fkey\" FOREIGN KEY (bucket_id) REFERENCES ds3.bucket(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n \"job_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES ds3.\"user\"(id) ON UPDATE CASCADE\r\nReferenced by:\r\n TABLE \"ds3.data_migration\" CONSTRAINT \"data_migration_get_job_id_fkey\" FOREIGN KEY (get_job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n TABLE \"ds3.data_migration\" CONSTRAINT \"data_migration_put_job_id_fkey\" FOREIGN KEY (put_job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE SET NULL\r\n TABLE \"ds3.job_chunk\" CONSTRAINT \"job_chunk_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"notification.job_completed_notification_registration\" CONSTRAINT \"job_completed_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"notification.s3_object_cached_notification_registration\" CONSTRAINT 
\"s3_object_cached_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"notification.s3_object_persisted_notification_registration\" CONSTRAINT \"s3_object_persisted_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n\r\ntapesystem=# \\d ds3.blob\r\n Table \"ds3.blob\"\r\n Column | Type | Collation | Nullable | Default\r\n---------------+------------------------+-----------+----------+---------\r\n byte_offset | bigint | | not null |\r\n checksum | character varying | | |\r\n checksum_type | security.checksum_type | | |\r\n id | uuid | | not null |\r\n length | bigint | | not null |\r\n object_id | uuid | | not null |\r\nIndexes:\r\n \"blob_pkey\" PRIMARY KEY, btree (id)\r\n \"blob_byte_offset_object_id_key\" UNIQUE CONSTRAINT, btree (byte_offset, object_id)\r\n \"ds3_blob__object_id\" btree (object_id)\r\nForeign-key constraints:\r\n \"blob_object_id_fkey\" FOREIGN KEY (object_id) REFERENCES ds3.s3_object(id) ON UPDATE CASCADE\r\nReferenced by:\r\n TABLE \"target.blob_azure_target\" CONSTRAINT \"blob_azure_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"temp.blob_azure_target_to_verify\" CONSTRAINT \"blob_azure_target_to_verify_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"pool.blob_pool\" CONSTRAINT \"blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"target.blob_s3_target\" CONSTRAINT \"blob_s3_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"temp.blob_s3_target_to_verify\" CONSTRAINT \"blob_s3_target_to_verify_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"tape.blob_tape\" CONSTRAINT \"blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"target.blob_ds3_target\" CONSTRAINT \"blob_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.degraded_blob\" CONSTRAINT \"degraded_blob_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.multi_part_upload_part\" CONSTRAINT \"multi_part_upload_part_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"ds3.multi_part_upload\" CONSTRAINT \"multi_part_upload_placeholder_blob_id_fkey\" FOREIGN KEY (placeholder_blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"pool.obsolete_blob_pool\" CONSTRAINT \"obsolete_blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"tape.obsolete_blob_tape\" CONSTRAINT \"obsolete_blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"target.suspect_blob_azure_target\" CONSTRAINT \"suspect_blob_azure_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"pool.suspect_blob_pool\" CONSTRAINT \"suspect_blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) 
REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"target.suspect_blob_s3_target\" CONSTRAINT \"suspect_blob_s3_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"tape.suspect_blob_tape\" CONSTRAINT \"suspect_blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n TABLE \"target.suspect_blob_ds3_target\" CONSTRAINT \"suspect_blob_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\r\n\r\n\r\n\r\nAlso, do you have any long-running transactions ?\r\n\r\nNot at the time this is happening.\r\n\r\nIn your first message, you showed no other queries except \"idle\" ones (not\r\nidle-in-transaction) but I figured I'd ask anyway.\r\n\r\nDoes your COPY job run in a transaction block ?\r\n\r\nAuto-commit is enabled for that connection, so each COPY should be in its own transaction.\r\n\r\n\r\nYou're running pg13.2, so it would be interesting to know if the problem exists\r\nunder 13.5.\r\n\r\nI’d have to see what it would take to get to 13.5\r\n\r\n\r\n--\r\nJustin\r\n\r\n\n\n\n\n\n\n\n\n\nOn Nov 17, 2021, at 12:00 PM, Justin Pryzby <[email protected]> wrote:\n\nThis\r\n message originated outside your organization.\n\nOn\r\n Wed, Nov 17, 2021 at 05:51:05PM +0000, Robert Creager wrote:\n\r\n             postgres`HeapTupleSatisfiesVisibility+0x42\r\n             postgres`heapgetpage+0x237\r\n             postgres`heapgettup_pagemode+0x5ad\r\n             postgres`heap_getnextslot+0x52\r\n             postgres`SeqNext+0x71\r\n             postgres`ExecScan+0xc9\r\n             postgres`ExecLockRows+0x7b\r\n             postgres`standard_ExecutorRun+0x10a\r\n             postgres`_SPI_execute_plan+0x524\r\n             postgres`SPI_execute_snapshot+0x116\r\n             postgres`ri_PerformCheck+0x29e\r\n             postgres`RI_FKey_check+0x5d3\r\n             postgres`RI_FKey_check_ins+0x21\r\n             postgres`ExecCallTriggerFunc+0x105\r\n             postgres`afterTriggerInvokeEvents+0x605\r\n             postgres`AfterTriggerEndQuery+0x7a\r\n             postgres`CopyFrom+0xaca\r\n             postgres`DoCopy+0x553\r\n             postgres`standard_ProcessUtility+0x5f9\r\n             postgres`ProcessUtility+0x28\r\n              55\n\n\nIt\r\n shows that the process is running FK triggers.\nWould\r\n you show \\d for the table which is the destination of COPY, and for other\ntables\r\n to which it has FK constraints.\n\n\n\n\nTwo tables being copied into. I chased the first FK tables from the job_entry.  I can do the entire thing if you want.  
There are bunches...\n\n\n\n\ntapesystem=# \\d ds3.job_entry\n\n                 Table \"ds3.job_entry\"\n\n   Column    |  Type   | Collation | Nullable | Default \n\n-------------+---------+-----------+----------+---------\n\n blob_id     | uuid    |           | not null | \n\n chunk_id    | uuid    |           | not null | \n\n id          | uuid    |           | not null | \n\n job_id      | uuid    |           | not null | \n\n order_index | integer |           | not null | \n\nIndexes:\n\n    \"job_entry_pkey\" PRIMARY KEY, btree (id)\n\n    \"job_entry_blob_id_idx\" btree (blob_id)\n\n    \"job_entry_chunk_id_idx\" btree (chunk_id)\n\n    \"job_entry_job_id_blob_id_key\" UNIQUE CONSTRAINT, btree (job_id, blob_id)\n\n    \"job_entry_job_id_idx\" btree (job_id)\n\n    \"job_entry_order_index_chunk_id_key\" UNIQUE CONSTRAINT, btree (order_index, chunk_id)\n\nForeign-key constraints:\n\n    \"job_entry_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    \"job_entry_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    \"job_entry_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\n\ntapesystem=# \\d ds3.job_chunk\n\n                                    Table \"ds3.job_chunk\"\n\n          Column           |              Type              | Collation | Nullable | Default \n\n---------------------------+--------------------------------+-----------+----------+---------\n\n blob_store_state          | ds3.job_chunk_blob_store_state |           | not null | \n\n chunk_number              | integer                        |           | not null | \n\n id                        | uuid                           |           | not null | \n\n job_id                    | uuid                           |           | not null | \n\n node_id                   | uuid                           |           |          | \n\n pending_target_commit     | boolean                        |           | not null | \n\n read_from_azure_target_id | uuid                           |           |          | \n\n read_from_ds3_target_id   | uuid                           |           |          | \n\n read_from_pool_id         | uuid                           |           |          | \n\n read_from_s3_target_id    | uuid                           |           |          | \n\n read_from_tape_id         | uuid                           |           |          | \n\nIndexes:\n\n    \"job_chunk_pkey\" PRIMARY KEY, btree (id)\n\n    \"job_chunk_blob_store_state_idx\" btree (blob_store_state)\n\n    \"job_chunk_chunk_number_job_id_key\" UNIQUE CONSTRAINT, btree (chunk_number, job_id)\n\n    \"job_chunk_job_id_idx\" btree (job_id)\n\n    \"job_chunk_node_id_idx\" btree (node_id)\n\n    \"job_chunk_read_from_azure_target_id_idx\" btree (read_from_azure_target_id)\n\n    \"job_chunk_read_from_ds3_target_id_idx\" btree (read_from_ds3_target_id)\n\n    \"job_chunk_read_from_pool_id_idx\" btree (read_from_pool_id)\n\n    \"job_chunk_read_from_s3_target_id_idx\" btree (read_from_s3_target_id)\n\n    \"job_chunk_read_from_tape_id_idx\" btree (read_from_tape_id)\n\nForeign-key constraints:\n\n    \"job_chunk_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    \"job_chunk_node_id_fkey\" FOREIGN KEY (node_id) REFERENCES ds3.node(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    \"job_chunk_read_from_azure_target_id_fkey\" FOREIGN KEY 
(read_from_azure_target_id) REFERENCES target.azure_target(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    \"job_chunk_read_from_ds3_target_id_fkey\" FOREIGN KEY (read_from_ds3_target_id) REFERENCES target.ds3_target(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    \"job_chunk_read_from_pool_id_fkey\" FOREIGN KEY (read_from_pool_id) REFERENCES pool.pool(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    \"job_chunk_read_from_s3_target_id_fkey\" FOREIGN KEY (read_from_s3_target_id) REFERENCES target.s3_target(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    \"job_chunk_read_from_tape_id_fkey\" FOREIGN KEY (read_from_tape_id) REFERENCES tape.tape(id) ON UPDATE CASCADE ON DELETE SET NULL\n\nReferenced by:\n\n    TABLE \"ds3.job_chunk_azure_target\" CONSTRAINT \"job_chunk_azure_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.job_chunk_ds3_target\" CONSTRAINT \"job_chunk_ds3_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.job_chunk_persistence_target\" CONSTRAINT \"job_chunk_persistence_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.job_chunk_s3_target\" CONSTRAINT \"job_chunk_s3_target_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_chunk_id_fkey\" FOREIGN KEY (chunk_id) REFERENCES ds3.job_chunk(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\n\n\ntapesystem=# \\d ds3.job\n\n                                                      Table \"ds3.job\"\n\n                 Column                  |                      Type                       | Collation | Nullable | Default \n\n-----------------------------------------+-------------------------------------------------+-----------+----------+---------\n\n bucket_id                               | uuid                                            |           | not null | \n\n cached_size_in_bytes                    | bigint                                          |           | not null | \n\n chunk_client_processing_order_guarantee | ds3.job_chunk_client_processing_order_guarantee |           | not null | \n\n completed_size_in_bytes                 | bigint                                          |           | not null | \n\n created_at                              | timestamp without time zone                     |           | not null | \n\n id                                      | uuid                                            |           | not null | \n\n original_size_in_bytes                  | bigint                                          |           | not null | \n\n priority                                | ds3.blob_store_task_priority                    |           | not null | \n\n request_type                            | ds3.job_request_type                            |           | not null | \n\n user_id                                 | uuid                                            |           | not null | \n\n truncated                               | boolean                                         |           | not null | \n\n rechunked                               | timestamp without time zone                     |           |          | \n\n error_message                           | character varying                               |           |          | \n\n naked                
                   | boolean                                         |           | not null | \n\n name                                    | character varying                               |           | not null | \n\n aggregating                             | boolean                                         |           | not null | \n\n minimize_spanning_across_media          | boolean                                         |           | not null | \n\n truncated_due_to_timeout                | boolean                                         |           | not null | \n\n implicit_job_id_resolution              | boolean                                         |           | not null | \n\n verify_after_write                      | boolean                                         |           | not null | \n\n replicating                             | boolean                                         |           | not null | \n\n dead_job_cleanup_allowed                | boolean                                         |           | not null | \n\n restore                                 | ds3.job_restore                                 |           | not null | \n\nIndexes:\n\n    \"job_pkey\" PRIMARY KEY, btree (id)\n\n    \"ds3_job__bucket_id\" btree (bucket_id)\n\n    \"ds3_job__created_at\" btree (created_at)\n\n    \"ds3_job__name\" btree (name)\n\n    \"ds3_job__user_id\" btree (user_id)\n\nForeign-key constraints:\n\n    \"job_bucket_id_fkey\" FOREIGN KEY (bucket_id) REFERENCES ds3.bucket(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    \"job_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES ds3.\"user\"(id) ON UPDATE CASCADE\n\nReferenced by:\n\n    TABLE \"ds3.data_migration\" CONSTRAINT \"data_migration_get_job_id_fkey\" FOREIGN KEY (get_job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    TABLE \"ds3.data_migration\" CONSTRAINT \"data_migration_put_job_id_fkey\" FOREIGN KEY (put_job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE SET NULL\n\n    TABLE \"ds3.job_chunk\" CONSTRAINT \"job_chunk_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"notification.job_completed_notification_registration\" CONSTRAINT \"job_completed_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE\r\n ON DELETE CASCADE\n\n    TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"notification.s3_object_cached_notification_registration\" CONSTRAINT \"s3_object_cached_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE\r\n CASCADE ON DELETE CASCADE\n\n    TABLE \"notification.s3_object_persisted_notification_registration\" CONSTRAINT \"s3_object_persisted_notification_registration_job_id_fkey\" FOREIGN KEY (job_id) REFERENCES ds3.job(id) ON UPDATE\r\n CASCADE ON DELETE CASCADE\n\n\n\n\n\ntapesystem=# \\d ds3.blob\n\n                            Table \"ds3.blob\"\n\n    Column     |          Type          | Collation | Nullable | Default \n\n---------------+------------------------+-----------+----------+---------\n\n byte_offset   | bigint                 |           | not null | \n\n checksum      | character varying      |           |          | \n\n checksum_type | security.checksum_type |           |          | \n\n id            | uuid                   |           | not null | \n\n length        | bigint                 |           | 
not null | \n\n object_id     | uuid                   |           | not null | \n\nIndexes:\n\n    \"blob_pkey\" PRIMARY KEY, btree (id)\n\n    \"blob_byte_offset_object_id_key\" UNIQUE CONSTRAINT, btree (byte_offset, object_id)\n\n    \"ds3_blob__object_id\" btree (object_id)\n\nForeign-key constraints:\n\n    \"blob_object_id_fkey\" FOREIGN KEY (object_id) REFERENCES ds3.s3_object(id) ON UPDATE CASCADE\n\nReferenced by:\n\n    TABLE \"target.blob_azure_target\" CONSTRAINT \"blob_azure_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"temp.blob_azure_target_to_verify\" CONSTRAINT \"blob_azure_target_to_verify_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"pool.blob_pool\" CONSTRAINT \"blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"target.blob_s3_target\" CONSTRAINT \"blob_s3_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"temp.blob_s3_target_to_verify\" CONSTRAINT \"blob_s3_target_to_verify_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"tape.blob_tape\" CONSTRAINT \"blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"target.blob_ds3_target\" CONSTRAINT \"blob_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.degraded_blob\" CONSTRAINT \"degraded_blob_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.job_entry\" CONSTRAINT \"job_entry_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.multi_part_upload_part\" CONSTRAINT \"multi_part_upload_part_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"ds3.multi_part_upload\" CONSTRAINT \"multi_part_upload_placeholder_blob_id_fkey\" FOREIGN KEY (placeholder_blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"pool.obsolete_blob_pool\" CONSTRAINT \"obsolete_blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"tape.obsolete_blob_tape\" CONSTRAINT \"obsolete_blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"target.suspect_blob_azure_target\" CONSTRAINT \"suspect_blob_azure_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"pool.suspect_blob_pool\" CONSTRAINT \"suspect_blob_pool_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"target.suspect_blob_s3_target\" CONSTRAINT \"suspect_blob_s3_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"tape.suspect_blob_tape\" CONSTRAINT \"suspect_blob_tape_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n    TABLE \"target.suspect_blob_ds3_target\" CONSTRAINT \"suspect_blob_target_blob_id_fkey\" FOREIGN KEY (blob_id) REFERENCES ds3.blob(id) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\n\n\n\n\nAlso,\r\n do you have any long-running 
transactions ?\n\n\n\n\nNot at the time this is happening.\n\n\nIn\r\n your first message, you showed no other queries except \"idle\" ones (not\nidle-in-transaction)\r\n but I figured I'd ask anyway.\n\nDoes\r\n your COPY job run in a transaction block ?\n\n\n\n\nAuto-commit is enabled for that connection, so each COPY should be in its own transaction.\n\n\n\nYou're\r\n running pg13.2, so it would be interesting to know if the problem exists\nunder\r\n 13.5.\n\n\n\n\nI’d have to see what it would take to get to 13.5\n\n\n\n-- \nJustin", "msg_date": "Wed, 17 Nov 2021 19:51:49 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Thu, Nov 18, 2021 at 8:28 AM Tom Lane <[email protected]> wrote:\n> Justin Pryzby <[email protected]> writes:\n> > It shows that the process is running FK triggers.\n>\n> Indeed, and doing a seqscan therein. Normally I'd suppose that\n> this reflects a lack of an index, but RI_FKey_check should always\n> be doing something that matches the referenced table's unique\n> constraint, so why isn't it using that?\n\nI wonder if the reference tables are empty sometimes, and there's an\nunlucky sequence of events that results in cached RI plans with seq\nscans being used later in the same session after the tables are\npopulated.\n\n\n", "msg_date": "Thu, 18 Nov 2021 10:01:18 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 17, 2021, at 2:01 PM, Thomas Munro <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nOn Thu, Nov 18, 2021 at 8:28 AM Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\nJustin Pryzby <[email protected]<mailto:[email protected]>> writes:\r\nIt shows that the process is running FK triggers.\r\n\r\nIndeed, and doing a seqscan therein. Normally I'd suppose that\r\nthis reflects a lack of an index, but RI_FKey_check should always\r\nbe doing something that matches the referenced table's unique\r\nconstraint, so why isn't it using that?\r\n\r\nI wonder if the reference tables are empty sometimes, and there's an\r\nunlucky sequence of events that results in cached RI plans with seq\r\nscans being used later in the same session after the tables are\r\npopulated.\r\n\r\nWe are able to move up to Postgres 13.5, in our ports tree, if that would help. We used pg_upgrade to get from 9.6 to 13.3, so that should work fine going instead to 13.5. We’re almost branching/releasing our code, so it’s not a good time, but if it may help with this problem, we’ll deal with it.\r\n\r\nIt seems to be important (so far) that we delete a ‘bucket’ in the re-creation of this problem. I’ve included a graphical copy of the schema courtesy of DataGrip. We’re trying to get the problem reproducible more quickly, but at the moment, it takes hours.\r\n\r\n[cid:1EA8210A-B9EE-438E-BD9C-216E2BB534D3]", "msg_date": "Wed, 17 Nov 2021 21:54:14 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Wed, Nov 17, 2021 at 09:54:14PM +0000, Robert Creager wrote:\n> We are able to move up to Postgres 13.5, in our ports tree, if that would help. We used pg_upgrade to get from 9.6 to 13.3, so that should work fine going instead to 13.5. 
We’re almost branching/releasing our code, so it’s not a good time, but if it may help with this problem, we’ll deal with it.\n\nTo be clear, I have no specfic reason to believe it would help.\nBut it would be silly to chase down a problem that someone already fixed 10\nmonths ago (the source of this problem, or something else that comes up).\n\nIn fact I suspect it won't help, and there's an issue with your schema, or\nautovacuum, or postgres.\n\nNote that since v10, the version scheme uses only two components, and 13.3 to\n13.5 is a minor release, similar to 9.6.3 to 9.6.5. So you don't need to use\npg_upgrade - just update the binaries.\n\nhttps://www.postgresql.org/docs/13/release-13-5.html\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 17:18:15 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "\r\n\r\n> On Nov 17, 2021, at 4:18 PM, Justin Pryzby <[email protected]> wrote:\r\n> \r\n> This message originated outside your organization.\r\n> \r\n> On Wed, Nov 17, 2021 at 09:54:14PM +0000, Robert Creager wrote:\r\n> > We are able to move up to Postgres 13.5, in our ports tree, if that would help. We used pg_upgrade to get from 9.6 to 13.3, so that should work fine going instead to 13.5. We’re almost branching/releasing our code, so it’s not a good time, but if it may help with this problem, we’ll deal with it.\r\n> \r\n> To be clear, I have no specfic reason to believe it would help.\r\n\r\nI figured as much, and told our gang that also. Was looking through the release notes a little, need to finish with 4 and look at 5.\r\n\r\n> But it would be silly to chase down a problem that someone already fixed 10\r\n> months ago (the source of this problem, or something else that comes up).\r\n\r\nYeah, trying to figure out how feasible it is for us to do quickly. Ok, we have approval, and will be working on an upgraded build in the morning.\r\n\r\n> \r\n> In fact I suspect it won't help, and there's an issue with your schema, or\r\n> autovacuum, or postgres.\r\n\r\nWell, if it’s our schema, 9.6 didn’t care about it, doesn’t mean there wasn’t one though, and I understand that completely.\r\n\r\nSo, how do I go about capturing more information for the big brains (you guys) to help figure this out? I have all our resources at mine (and hence your) disposal.\r\n\r\n> \r\n> Note that since v10, the version scheme uses only two components, and 13.3 to\r\n> 13.5 is a minor release, similar to 9.6.3 to 9.6.5. So you don't need to use\r\n> pg_upgrade - just update the binaries.\r\n\r\nGood, but the previous release of our product was at 9.6, so we’re currently using the pg_upgrade to do that, automated (storage appliance). Just talking out loud, that switching to 13.5 shouldn’t cause us any upgrade issues other than figuring out builds and dependencies and some upgrade testing. And a straight re-run with 13.5 on top of the db should work just fine. When I talk out loud, I sometimes catch the stupid things I’m missing, and it allows others to point out the stupid things I’m missing...\r\n\r\nBest,\r\nRobert\r\n\r\n", "msg_date": "Thu, 18 Nov 2021 00:18:22 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." 
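(One cheap way to capture more information here is to test Thomas's earlier theory — that the referenced tables looked empty at the time their statistics were taken, leaving seqscan RI plans cached in the long-lived pooled connections — by comparing what the planner believes about those tables with what the stats collector sees now. A minimal sketch; the three table names are taken from the \d output above and may need adjusting:)

SELECT c.oid::regclass AS rel,
       c.relpages, c.reltuples,            -- what the planner sees
       s.n_live_tup,                       -- what the stats collector counts now
       s.last_analyze, s.last_autoanalyze
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.oid IN ('ds3.blob'::regclass, 'ds3.job'::regclass, 'ds3.job_chunk'::regclass);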
}, { "msg_contents": "On Thu, Nov 18, 2021 at 1:18 PM Robert Creager <[email protected]> wrote:\n> So, how do I go about capturing more information for the big brains (you guys) to help figure this out? I have all our resources at mine (and hence your) disposal.\n\nAs a workaround, does it help if you issue DISCARD PLANS before your\nCOPY jobs, or alternatively start with a fresh connection? I'm\nguessing that something like this is happening.\n\n-- set up the auto_explain extension to show the internal foreign key\ncheck queries' plans\nload 'auto_explain';\nset auto_explain.log_nested_statements = true;\nset auto_explain.log_min_duration = 0;\nset auto_explain.log_analyze = true;\n\ndrop table if exists r, s cascade;\ncreate table r (i int primary key);\ncreate table s (i int references r(i));\n\n-- collect stats showing r as empty\nanalyze r;\n\n-- execute RI query 6 times to lock the plan (inserts fail, log shows seq scan)\ninsert into s values (42);\ninsert into s values (42);\ninsert into s values (42);\ninsert into s values (42);\ninsert into s values (42);\ninsert into s values (42);\n\ninsert into r select generate_series(1, 1000000);\n\n-- once more, we still get a seq scan, which is by now a bad idea\ninsert into s values (42);\n\ndiscard plans;\n\n-- once more, now we get an index scan\ninsert into s values (42);\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:39:42 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Thu, Nov 18, 2021 at 04:39:42PM +1300, Thomas Munro wrote:\n> On Thu, Nov 18, 2021 at 1:18 PM Robert Creager <[email protected]> wrote:\n> > So, how do I go about capturing more information for the big brains (you guys) to help figure this out? I have all our resources at mine (and hence your) disposal.\n> \n> As a workaround, does it help if you issue DISCARD PLANS before your\n> COPY jobs, or alternatively start with a fresh connection? I'm\n> guessing that something like this is happening.\n> \n> -- set up the auto_explain extension to show the internal foreign key check queries' plans\n> load 'auto_explain';\n> set auto_explain.log_nested_statements = true;\n> set auto_explain.log_min_duration = 0;\n> set auto_explain.log_analyze = true;\n\n..and SET client_min_messages=debug;\n\n> drop table if exists r, s cascade;\n> create table r (i int primary key);\n> create table s (i int references r(i));\n> \n> -- collect stats showing r as empty\n> analyze r;\n> \n> -- execute RI query 6 times to lock the plan (inserts fail, log shows seq scan)\n> insert into s values (42);\n> insert into s values (42);\n> insert into s values (42);\n> insert into s values (42);\n> insert into s values (42);\n> insert into s values (42);\n> \n> insert into r select generate_series(1, 1000000);\n> \n> -- once more, we still get a seq scan, which is by now a bad idea\n> insert into s values (42);\n> \n> discard plans;\n>\n> -- once more, now we get an index scan\n> insert into s values (42);\n\nIt also seems to work if one does SET plan_cache_mode=force_custom_plan;\n\nRobert might try that, either in postresql.conf, or SET in the client that's\ndoing COPY.\n\nRobert is using jdbc, which (as I recall) has this problem more often than\nother clients. 
But, in this case, I think JDBC isn't causing the problem.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 23:42:21 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "\n\n> On Nov 17, 2021, at 10:42 PM, Justin Pryzby <[email protected]> wrote:\n> \n> This message originated outside your organization.\n> \n> On Thu, Nov 18, 2021 at 04:39:42PM +1300, Thomas Munro wrote:\n>> On Thu, Nov 18, 2021 at 1:18 PM Robert Creager <[email protected]> wrote:\n>>> So, how do I go about capturing more information for the big brains (you guys) to help figure this out? I have all our resources at mine (and hence your) disposal.\n>> \n>> As a workaround, does it help if you issue DISCARD PLANS before your\n>> COPY jobs, or alternatively start with a fresh connection? I'm\n\nI can certainly give that a try.\n\n> It also seems to work if one does SET plan_cache_mode=force_custom_plan;\n> \n> Robert might try that, either in postresql.conf, or SET in the client that's\n> doing COPY.\n\nWhich would be better? Discard plans or forcing custom plans? Seems like wrapping a copy might be better than the Postgres.conf change as that would affect all statements. What kind of performance hit would we be taking with that do you estimate? Microseconds per statement? Yeah, hard to say, depends on hardware and such. Would there be any benefit overall to doing that? Forcing the replan?\n\nBest,\nRobert\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:03:08 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Fri, Nov 19, 2021 at 6:03 AM Robert Creager <[email protected]> wrote:\n> Which would be better? Discard plans or forcing custom plans? Seems like wrapping a copy might be better than the Postgres.conf change as that would affect all statements. What kind of performance hit would we be taking with that do you estimate? Microseconds per statement? Yeah, hard to say, depends on hardware and such. Would there be any benefit overall to doing that? Forcing the replan?\n\nJust to understand what's going on, it'd be interesting to know if the\nproblem goes away if you *just* inject the DISCARD PLANS statement\nbefore running your COPYs, but if that doesn't help it'd also be\ninteresting to know what happens if you ANALYZE each table after each\nCOPY. Are you running any explicit ANALYZE commands? How long do\nyour sessions/connections live for?\n\nI'm wondering if the thing that changed between 9.6 and 13 might be\nthe heuristics for when auto vacuum's background ANALYZE is triggered,\ncreating the unlucky timing required to get your system to this state\noccasionally.\n\nFor a while now I have been wondering how we could teach the\nplanner/stats system about \"volatile\" tables (as DB2 calls them), that\nis, ones that are frequently empty, which often come up in job queue\nworkloads. I've seen problems like this with user queries (I used to\nwork on big job queue systems across different relational database\nvendors, which is why I finished up writing the SKIP LOCKED patch for\n9.5), but this is the first time I've contemplated FK check queries\nbeing negatively affected by this kind of stats problem. 
I don't have\na good concrete idea, though (various dumb ideas: don't let auto\nanalyze run on an empty table if it's marked VOLATILE, or ignore\napparently empty stats on tables marked VOLATILE (and use what?),\n...).\n\n\n", "msg_date": "Fri, 19 Nov 2021 10:08:02 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I'm wondering if the thing that changed between 9.6 and 13 might be\n> the heuristics for when auto vacuum's background ANALYZE is triggered,\n> creating the unlucky timing required to get your system to this state\n> occasionally.\n\n> For a while now I have been wondering how we could teach the\n> planner/stats system about \"volatile\" tables (as DB2 calls them), that\n> is, ones that are frequently empty, which often come up in job queue\n> workloads. I've seen problems like this with user queries (I used to\n> work on big job queue systems across different relational database\n> vendors, which is why I finished up writing the SKIP LOCKED patch for\n> 9.5), but this is the first time I've contemplated FK check queries\n> being negatively affected by this kind of stats problem. I don't have\n> a good concrete idea, though (various dumb ideas: don't let auto\n> analyze run on an empty table if it's marked VOLATILE, or ignore\n> apparently empty stats on tables marked VOLATILE (and use what?),\n> ...).\n\nHmm. If this complaint were about v14 rather than v13, I'd be\nwondering whether 3d351d916 was what made things worse. But\nin v13, if the table does go to empty (zero length) and ANALYZE\nhappens to see that state, we should end up back at the planner's\n\"minimum ten pages\" heuristic, which likely would be enough to\nprevent choice of a seqscan. OTOH, if the analyzed state is\n\"empty but has a couple of pages\", it looks like that could\nprovoke a seqscan.\n\nThis is all guesswork though, since we don't know quite what's\nhappening on Robert's system. It might be worth setting\n\"log_autovacuum_min_duration = 0\" (either globally, or as a\nreloption on the relevant tables), and seeing if there seems\nto be any correlation between autovacuum/autoanalyze activity\nand the occurrences of poor plan choices.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:42:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 18, 2021, at 2:08 PM, Thomas Munro <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nOn Fri, Nov 19, 2021 at 6:03 AM Robert Creager <[email protected]<mailto:[email protected]>> wrote:\r\nWhich would be better? Discard plans or forcing custom plans? Seems like wrapping a copy might be better than the Postgres.conf change as that would affect all statements. What kind of performance hit would we be taking with that do you estimate? Microseconds per statement? Yeah, hard to say, depends on hardware and such. Would there be any benefit overall to doing that? 
Forcing the replan?\r\n\r\nJust to understand what's going on, it'd be interesting to know if the\r\nproblem goes away if you *just* inject the DISCARD PLANS statement\r\nbefore running your COPYs, but if that doesn't help it'd also be\r\n\r\nI’m doing that now, “SET plan_cache_mode=force_custom_plan” before the copy, then auto after the copy.\r\n\r\ninteresting to know what happens if you ANALYZE each table after each\r\nCOPY. Are you running any explicit ANALYZE commands? How long do\r\nyour sessions/connections live for?\r\n\r\nNo explicit analyze happening. I’m not super familiar with this code base, but a bit of looking confirmed what I thought, they served via connection pool and appear to live for the life of the app, which could be months.\r\n\r\nAfter this test (tomorrow likely) I can try the explicit ANALYZE after the copy completes.\r\n\r\nBest,\r\nRobert\r\n\n\n\n\n\n\n\n\n\nOn Nov 18, 2021, at 2:08 PM, Thomas Munro <[email protected]> wrote:\n\n\nThis message originated outside your organization.\n\r\nOn Fri, Nov 19, 2021 at 6:03 AM Robert Creager <[email protected]> wrote:\nWhich would be better?  Discard plans or forcing custom plans?  Seems like wrapping a copy might be better than the Postgres.conf change as that would affect all statements.  What kind of performance hit would we be taking with\r\n that do you estimate?  Microseconds per statement?  Yeah, hard to say, depends on hardware and such.  Would there be any benefit overall to doing that?  Forcing the replan?\n\n\r\nJust to understand what's going on, it'd be interesting to know if the\r\nproblem goes away if you *just* inject the DISCARD PLANS statement\r\nbefore running your COPYs, but if that doesn't help it'd also be\n\n\n\n\n\nI’m doing that now, “SET plan_cache_mode=force_custom_plan” before the copy, then auto after the copy.\n\n\n\ninteresting to know what happens if you ANALYZE each table after each\r\nCOPY.  Are you running any explicit ANALYZE commands?  How long do\r\nyour sessions/connections live for?\n\n\n\n\n\nNo explicit analyze happening.  I’m not super familiar with this code base, but a bit of looking confirmed what I thought, they served via connection pool and appear to live for the life of the app, which could be months.\n\n\nAfter this test (tomorrow likely) I can try the explicit ANALYZE after the copy completes.\n\n\nBest,\nRobert", "msg_date": "Thu, 18 Nov 2021 22:28:38 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "\r\n\r\n> On Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]> wrote:\r\n> \r\n> This is all guesswork though, since we don't know quite what's\r\n> happening on Robert's system. It might be worth setting\r\n> \"log_autovacuum_min_duration = 0\" (either globally, or as a\r\n> reloption on the relevant tables), and seeing if there seems\r\n> to be any correlation between autovacuum/autoanalyze activity\r\n> and the occurrences of poor plan choices.\r\n\r\nI’ve changed the log duration globally, and am also using the plan_cache_mode=force_custom_plan suggested by Justin.\r\n\r\nBest,\r\nRobert", "msg_date": "Thu, 18 Nov 2021 22:30:39 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." 
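(Pulling the suggestions from the last few messages together, the per-connection workaround being tried looks roughly like this. It is a sketch only — the application's real COPY statements and target tables are not shown in the thread; ds3.job_entry and ds3.blob are used purely as examples:)

SET plan_cache_mode = force_custom_plan;   -- or: DISCARD PLANS; before each batch
COPY ds3.job_entry FROM STDIN;             -- the application's bulk load
ANALYZE ds3.job_entry;                     -- Thomas's suggestion: refresh stats after the load
RESET plan_cache_mode;                     -- back to the default ('auto')

-- Tom's logging suggestion, applied as a per-table reloption instead of globally:
ALTER TABLE ds3.blob SET (log_autovacuum_min_duration = 0);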
}, { "msg_contents": "On Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nThomas Munro <[email protected]<mailto:[email protected]>> writes:\r\n\r\nThis is all guesswork though, since we don't know quite what's\r\nhappening on Robert's system. It might be worth setting\r\n\"log_autovacuum_min_duration = 0\" (either globally, or as a\r\nreloption on the relevant tables), and seeing if there seems\r\nto be any correlation between autovacuum/autoanalyze activity\r\nand the occurrences of poor plan choices.\r\n\r\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to fix the problem. We’re going to run it over the weekend to make sure.\r\n\r\nSo, I thank you very much for all your help.\r\n\r\nI have logs with autovacuum=0 and dtrace output every minute, but suspect that won’t help you now. Would you like me to remove the fix next week and reproduce the issue with the same config to provide more information for trouble shooting? I may be able get a SSH session into a live system, I’d have to check with IT to see if that’s possible/allowed.\r\n\r\nBest,\r\nRobert\r\n\r\n\n\n\n\n\n\n\n\n\nOn Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]> wrote:\n\nThis\r\n message originated outside your organization.\n\nThomas\r\n Munro <[email protected]>\r\n writes:\n\nThis\r\n is all guesswork though, since we don't know quite what's\nhappening\r\n on Robert's system.  It might be worth setting\n\"log_autovacuum_min_duration\r\n = 0\" (either globally, or as a\nreloption\r\n on the relevant tables), and seeing if there seems\nto\r\n be any correlation between autovacuum/autoanalyze activity\nand\r\n the occurrences of poor plan choices.\n\n\n\n\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to fix the\r\n problem.  We’re going to run it over the weekend to make sure.\n\n\nSo, I thank you very much for all your help.\n\n\nI have logs with autovacuum=0 and dtrace output every minute, but suspect that won’t help you now.  Would you like me to remove the fix next week and reproduce the issue with the same config to provide more information for trouble shooting?  I may be able\r\n get a SSH session into a live system, I’d have to check with IT to see if that’s possible/allowed.\n\n\nBest,\nRobert", "msg_date": "Fri, 19 Nov 2021 18:47:45 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Nov 19, 2021, at 11:47 AM, Robert Creager <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\n\r\nOn Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nThomas Munro <[email protected]<mailto:[email protected]>> writes:\r\n\r\nThis is all guesswork though, since we don't know quite what's\r\nhappening on Robert's system. It might be worth setting\r\n\"log_autovacuum_min_duration = 0\" (either globally, or as a\r\nreloption on the relevant tables), and seeing if there seems\r\nto be any correlation between autovacuum/autoanalyze activity\r\nand the occurrences of poor plan choices.\r\n\r\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to fix the problem. We’re going to run it over the weekend to make sure.\r\n\r\nWe are at it again. 
I have a DELETE operation that’s taking 48 minutes so far. I had set plan_cache_mode = force_custom_plan for the entire server before this happened, as we started seeing the COPY slowdown again. I have dtrace information again, but primarily shows the nested scan operation.\r\n\r\npid,client_port,runtime,query_start,datname,state,wait_event_type,query,usename\r\n40665,15978,0 years 0 mons 0 days 0 hours 48 mins 49.62347 secs,2021-11-24 20:13:30.017188 +00:00,tapesystem,active,,DELETE FROM ds3.blob WHERE EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = $1)),Administrator\r\n\r\nSo how do we avoid this query plan? Do we need to start doing explicit analyzes after every delete?\r\n\r\n\r\nEXPLAIN DELETE\r\nFROM ds3.blob\r\nWHERE EXISTS(SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'));\r\n\r\n250k objects in blob\r\n256k objects in s3_object\r\n\r\nQUERY PLAN\r\nDelete on blob (cost=10117.05..16883.09 rows=256002 width=12)\r\n -> Hash Join (cost=10117.05..16883.09 rows=256002 width=12)\r\n Hash Cond: (blob.object_id = s3_object.id<http://s3_object.id>)\r\n -> Seq Scan on blob (cost=0.00..6094.02 rows=256002 width=22)\r\n -> Hash (cost=6917.02..6917.02 rows=256002 width=22)\r\n -> Seq Scan on s3_object (cost=0.00..6917.02 rows=256002 width=22)\r\n Filter: (bucket_id = '8a988c6c-ef98-465e-a148-50054c739212'::uuid)\r\n\r\n’Normal’ explain, very few objects with that bucket.\r\n\r\nQUERY PLAN\r\nDelete on blob (cost=0.71..6.76 rows=1 width=12)\r\n -> Nested Loop (cost=0.71..6.76 rows=1 width=12)\r\n -> Index Scan using ds3_s3_object__bucket_id on s3_object (cost=0.29..2.31 rows=1 width=22)\r\n Index Cond: (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'::uuid)\r\n -> Index Scan using ds3_blob__object_id on blob (cost=0.42..4.44 rows=1 width=22)\r\n Index Cond: (object_id = s3_object.id<http://s3_object.id>)\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nOn Nov 19, 2021, at 11:47 AM, Robert Creager <[email protected]> wrote:\n\n\n\n\n\n\nOn Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]> wrote:\n\nThis\r\n message originated outside your organization.\n\nThomas\r\n Munro <[email protected]>\r\n writes:\n\nThis\r\n is all guesswork though, since we don't know quite what's\nhappening\r\n on Robert's system.  It might be worth setting\n\"log_autovacuum_min_duration\r\n = 0\" (either globally, or as a\nreloption\r\n on the relevant tables), and seeing if there seems\nto\r\n be any correlation between autovacuum/autoanalyze activity\nand\r\n the occurrences of poor plan choices.\n\n\n\n\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to\r\n fix the problem.  We’re going to run it over the weekend to make sure.\n\n\n\n\n\nWe are at it again.  I have a DELETE operation that’s taking 48 minutes so far.  I had set plan_cache_mode = force_custom_plan for the entire server before this happened, as we started seeing the COPY slowdown again.  I have dtrace information again, but\r\n primarily shows the nested scan operation.\n\n\n\npid,client_port,runtime,query_start,datname,state,wait_event_type,query,usename\n40665,15978,0 years 0 mons 0 days 0 hours 48 mins 49.62347 secs,2021-11-24 20:13:30.017188 +00:00,tapesystem,active,,DELETE FROM ds3.blob WHERE EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = $1)),Administrator\n\n\nSo how do we avoid this query plan? 
Do we need to start doing explicit analyzes after every delete?\n\n\n\n\nEXPLAIN DELETEFROM ds3.blobWHERE EXISTS(SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'));250k objects in blob256k objects in s3_objectQUERY PLANDelete on blob (cost=10117.05..16883.09 rows=256002 width=12) -> Hash Join (cost=10117.05..16883.09 rows=256002 width=12) Hash Cond: (blob.object_id = s3_object.id) -> Seq Scan on blob (cost=0.00..6094.02 rows=256002 width=22) -> Hash (cost=6917.02..6917.02 rows=256002 width=22) -> Seq Scan on s3_object (cost=0.00..6917.02 rows=256002 width=22) Filter: (bucket_id = '8a988c6c-ef98-465e-a148-50054c739212'::uuid)’Normal’ explain, very few objects with that bucket.QUERY PLANDelete on blob (cost=0.71..6.76 rows=1 width=12) -> Nested Loop (cost=0.71..6.76 rows=1 width=12) -> Index Scan using ds3_s3_object__bucket_id on s3_object (cost=0.29..2.31 rows=1 width=22) Index Cond: (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'::uuid) -> Index Scan using ds3_blob__object_id on blob (cost=0.42..4.44 rows=1 width=22) Index Cond: (object_id = s3_object.id)", "msg_date": "Wed, 24 Nov 2021 21:13:27 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "I forgot, I had reloaded postgres, but had not re-started our app, so the connections wouldn’t have that plan setting on them. Re-doing now.\r\n\r\nOn Nov 24, 2021, at 2:13 PM, Robert Creager <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\n\r\nOn Nov 19, 2021, at 11:47 AM, Robert Creager <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\n\r\nOn Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nThis message originated outside your organization.\r\n\r\nThomas Munro <[email protected]<mailto:[email protected]>> writes:\r\n\r\nThis is all guesswork though, since we don't know quite what's\r\nhappening on Robert's system. It might be worth setting\r\n\"log_autovacuum_min_duration = 0\" (either globally, or as a\r\nreloption on the relevant tables), and seeing if there seems\r\nto be any correlation between autovacuum/autoanalyze activity\r\nand the occurrences of poor plan choices.\r\n\r\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to fix the problem. We’re going to run it over the weekend to make sure.\r\n\r\nWe are at it again. I have a DELETE operation that’s taking 48 minutes so far. I had set plan_cache_mode = force_custom_plan for the entire server before this happened, as we started seeing the COPY slowdown again. I have dtrace information again, but primarily shows the nested scan operation.\r\n\r\npid,client_port,runtime,query_start,datname,state,wait_event_type,query,usename\r\n40665,15978,0 years 0 mons 0 days 0 hours 48 mins 49.62347 secs,2021-11-24 20:13:30.017188 +00:00,tapesystem,active,,DELETE FROM ds3.blob WHERE EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = $1)),Administrator\r\n\r\nSo how do we avoid this query plan? 
Do we need to start doing explicit analyzes after every delete?\r\n\r\n\r\nEXPLAIN DELETE\r\nFROM ds3.blob\r\nWHERE EXISTS(SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'));\r\n\r\n250k objects in blob\r\n256k objects in s3_object\r\n\r\nQUERY PLAN\r\nDelete on blob (cost=10117.05..16883.09 rows=256002 width=12)\r\n -> Hash Join (cost=10117.05..16883.09 rows=256002 width=12)\r\n Hash Cond: (blob.object_id = s3_object.id<http://s3_object.id/>)\r\n -> Seq Scan on blob (cost=0.00..6094.02 rows=256002 width=22)\r\n -> Hash (cost=6917.02..6917.02 rows=256002 width=22)\r\n -> Seq Scan on s3_object (cost=0.00..6917.02 rows=256002 width=22)\r\n Filter: (bucket_id = '8a988c6c-ef98-465e-a148-50054c739212'::uuid)\r\n\r\n’Normal’ explain, very few objects with that bucket.\r\n\r\nQUERY PLAN\r\nDelete on blob (cost=0.71..6.76 rows=1 width=12)\r\n -> Nested Loop (cost=0.71..6.76 rows=1 width=12)\r\n -> Index Scan using ds3_s3_object__bucket_id on s3_object (cost=0.29..2.31 rows=1 width=22)\r\n Index Cond: (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'::uuid)\r\n -> Index Scan using ds3_blob__object_id on blob (cost=0.42..4.44 rows=1 width=22)\r\n Index Cond: (object_id = s3_object.id<http://s3_object.id/>)\r\n\r\n\n\n\n\n\n\r\nI forgot, I had reloaded postgres, but had not re-started our app, so the connections wouldn’t have that plan setting on them. Re-doing now.\n\n\nOn Nov 24, 2021, at 2:13 PM, Robert Creager <[email protected]> wrote:\n\n\n\n\n\n\nOn Nov 19, 2021, at 11:47 AM, Robert Creager <[email protected]> wrote:\n\n\n\n\n\n\nOn Nov 18, 2021, at 2:42 PM, Tom Lane <[email protected]> wrote:\n\nThis\r\n message originated outside your organization.\n\nThomas\r\n Munro <[email protected]>\r\n writes:\n\nThis\r\n is all guesswork though, since we don't know quite what's\nhappening\r\n on Robert's system.  It might be worth setting\n\"log_autovacuum_min_duration\r\n = 0\" (either globally, or as a\nreloption\r\n on the relevant tables), and seeing if there seems\nto\r\n be any correlation between autovacuum/autoanalyze activity\nand\r\n the occurrences of poor plan choices.\n\n\n\n\nOk, doing a SET plan_cache_mode=force_custom_plan before the COPY and resetting it after appears to\r\n fix the problem.  We’re going to run it over the weekend to make sure.\n\n\n\n\n\n\r\nWe are at it again.  I have a DELETE operation that’s taking 48 minutes so far.  I had set plan_cache_mode = force_custom_plan for the entire server before this happened, as we started seeing the COPY slowdown again.  I have dtrace information again, but primarily\r\n shows the nested scan operation.\n\n\n\n\npid,client_port,runtime,query_start,datname,state,wait_event_type,query,usename\n40665,15978,0 years 0 mons 0 days 0 hours 48 mins 49.62347 secs,2021-11-24 20:13:30.017188 +00:00,tapesystem,active,,DELETE FROM ds3.blob WHERE EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = $1)),Administrator\n\n\nSo how do we avoid this query plan? 
Do we need to start doing explicit analyzes after every delete?\n\n\n\n\n\nEXPLAIN DELETEFROM ds3.blobWHERE EXISTS(SELECT * FROM ds3.s3_object WHERE id = ds3.blob.object_id AND (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'));250k objects in blob256k objects in s3_objectQUERY PLANDelete on blob (cost=10117.05..16883.09 rows=256002 width=12) -> Hash Join (cost=10117.05..16883.09 rows=256002 width=12) Hash Cond: (blob.object_id = s3_object.id) -> Seq Scan on blob (cost=0.00..6094.02 rows=256002 width=22) -> Hash (cost=6917.02..6917.02 rows=256002 width=22) -> Seq Scan on s3_object (cost=0.00..6917.02 rows=256002 width=22) Filter: (bucket_id = '8a988c6c-ef98-465e-a148-50054c739212'::uuid)’Normal’ explain, very few objects with that bucket.QUERY PLANDelete on blob (cost=0.71..6.76 rows=1 width=12) -> Nested Loop (cost=0.71..6.76 rows=1 width=12) -> Index Scan using ds3_s3_object__bucket_id on s3_object (cost=0.29..2.31 rows=1 width=22) Index Cond: (bucket_id = '85b9e793-2141-455c-a752-90c2346cdfe1'::uuid) -> Index Scan using ds3_blob__object_id on blob (cost=0.42..4.44 rows=1 width=22) Index Cond: (object_id = s3_object.id)", "msg_date": "Wed, 24 Nov 2021 22:44:12 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "On Wed, Nov 24, 2021 at 10:44:12PM +0000, Robert Creager wrote:\n> I forgot, I had reloaded postgres, but had not re-started our app, so the connections wouldn’t have that plan setting on them. Re-doing now.\n\nAre you sure? GUC changes should be applied for existing sessions, right ?\n\nWould you send the logs surrounding the slow COPY ?\nSpecifically including the autovacuum logs.\n\n> We are at it again. I have a DELETE operation that’s taking 48 minutes so far.\n\nBefore, you had slow COPY due to FKs. Now you have a slow DELETE, which you\nonly alluded to before.\n\n> So how do we avoid this query plan? Do we need to start doing explicit analyzes after every delete?\n\nIf your DELETE is deleting the entire table, then I think you should VACUUM\nanyway (or else the next inserts will bloat the table).\n\nOr (preferably) use TRUNCATE instead, which will set relpages=0 and (one\nsupposes) avoid the bad plans. But read the NOTE about non-mvcc behavior of\nTRUNCATE, in case that matters to you.\n\nBut first, I believe Thomas was suggesting to put plan_cache_mode back to its\ndefault, and (for testing purposes) try using issue DISCARD PLANS.\n\nOn Fri, Nov 19, 2021 at 10:08:02AM +1300, Thomas Munro wrote:\n> Just to understand what's going on, it'd be interesting to know if the\n> problem goes away if you *just* inject the DISCARD PLANS statement\n> before running your COPYs, but if that doesn't help it'd also be\n> interesting to know what happens if you ANALYZE each table after each\n> COPY. Are you running any explicit ANALYZE commands? How long do\n> your sessions/connections live for?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 24 Nov 2021 17:15:42 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help identifying a periodic performance issue." }, { "msg_contents": "> On Nov 24, 2021, at 4:15 PM, Justin Pryzby <[email protected]> wrote:\r\n>\r\n> This message originated outside your organization.\r\n>\r\n> On Wed, Nov 24, 2021 at 10:44:12PM +0000, Robert Creager wrote:\r\n>> I forgot, I had reloaded postgres, but had not re-started our app, so the connections wouldn’t have that plan setting on them. 
Re-doing now.\r\n>\r\n> Are you sure? GUC changes should be applied for existing sessions, right ?\r\n>\r\n> Would you send the logs surrounding the slow COPY ?\r\n> Specifically including the autovacuum logs.\r\n\r\nHere are the log lines 5 minutes leading up to the 2min copy operation happening. There is no vacuum activity. The previous auto vacuum happened 20 minutes earlier on a different table.\r\n\r\n\r\n\r\n>\r\n>> We are at it again. I have a DELETE operation that’s taking 48 minutes so far.\r\n>\r\n> Before, you had slow COPY due to FKs. Now you have a slow DELETE, which you\r\n> only alluded to before.\r\n\r\nYeah, I had not been able to reproduce it previously with logging/dtracing enabled. And I was able to look at the query plan as I saw it happening.\r\n\r\nAnd we’ve run across another problem query, which is also hitting that ds3.blob table.\r\n\r\nINFO Nov 25 05:30:05,787 [WorkLogger] | Still in progress after 30 minutes: [IomDriverWorker] SQL: SELECT * FROM ds3.s3_object_property WHERE (key = 'x-amz-meta-o-spectra-backup-start-date' AND EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.s3_object_property.object_id AND ((EXISTS (SELECT * FROM ds3.bucket WHERE id = ds3.s3_object.bucket_id AND (name LIKE 'Spectra%')) AND NOT EXISTS (SELECT * FROM ds3.blob WHERE object_id = ds3.s3_object.id AND (EXISTS (SELECT * FROM ds3.job... (MonitoredWorkManager$WorkLogger.run:84)\r\n\r\n>\r\n>> So how do we avoid this query plan? Do we need to start doing explicit analyzes after every delete?\r\n>\r\n> If your DELETE is deleting the entire table, then I think you should VACUUM\r\n> anyway (or else the next inserts will bloat the table).\r\n\r\nWe’re not deleting the entire table necessarily, we don’t know, customer driven thing. In general, the COPY table used will not see a lot of deletes, this in from the test group, which is deleting a lot of data.\r\n\r\n>\r\n> But first, I believe Thomas was suggesting to put plan_cache_mode back to its\r\n> default, and (for testing purposes) try using issue DISCARD PLANS.\r\n\r\nOk, I’ll do that now and see what happens with the COPY.\r\n\r\n>\r\n> On Fri, Nov 19, 2021 at 10:08:02AM +1300, Thomas Munro wrote:\r\n>> Just to understand what's going on, it'd be interesting to know if the\r\n>> problem goes away if you *just* inject the DISCARD PLANS statement\r\n>> before running your COPYs, but if that doesn't help it'd also be\r\n>> interesting to know what happens if you ANALYZE each table after each\r\n>> COPY. Are you running any explicit ANALYZE commands? How long do\r\n>> your sessions/connections live for?\r\n>\r\n> --\r\n> Justin\r\n>", "msg_date": "Mon, 29 Nov 2021 23:04:30 +0000", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help identifying a periodic performance issue." } ]
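A minimal SQL sketch of the workarounds discussed in the thread above (Thomas's DISCARD PLANS test, the post-bulk-operation ANALYZE, and the plan_cache_mode setting Robert tried), using the ds3 table names from Robert's report; where exactly these statements fit into the application's session handling is an assumption:

-- Drop cached generic plans so the next execution of the prepared
-- DELETE/COPY statements is re-planned against current statistics.
DISCARD PLANS;

-- Refresh statistics after a bulk DELETE so the planner sees the
-- post-delete row counts instead of the stale ones behind the
-- unexpected plan choice.
ANALYZE ds3.blob;
ANALYZE ds3.s3_object;

-- Alternative tried in the thread: force custom plans for this session.
SET plan_cache_mode = force_custom_plan;

If the DELETE really empties the table, Justin's note about VACUUM or TRUNCATE applies instead of (or in addition to) the ANALYZE above.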
[ { "msg_contents": "Hi,\nIn a trigger function I am creating a temp table . When an update on a\ntable is executed for say 10k rows. I get the below error.\n\nERROR: out of shared memory\nHINT:You might need to increase max_locks_per_transaction\nCONTEXT: SQL Statement \"created temp table changedinfo(colName\nvarchar(100), oldValue varchar(4000), newValue varchar(4000)\n\nCurrent value of max_locks_per_transaction is 64. Do I have to increase\nthis?\n\nRegards,\nAditya.\n\nHi,In a trigger function I am creating a temp table . When an update on a table is executed for say 10k rows. I get the below error.ERROR: out of shared memoryHINT:You might need to increase max_locks_per_transactionCONTEXT: SQL Statement \"created temp table changedinfo(colName varchar(100), oldValue varchar(4000), newValue varchar(4000)Current value of  max_locks_per_transaction is 64. Do I have to increase this?Regards,Aditya.", "msg_date": "Wed, 24 Nov 2021 10:57:37 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Out of memory error" }, { "msg_contents": "aditya desai <[email protected]> writes:\n> In a trigger function I am creating a temp table . When an update on a\n> table is executed for say 10k rows. I get the below error.\n\n> ERROR: out of shared memory\n> HINT:You might need to increase max_locks_per_transaction\n> CONTEXT: SQL Statement \"created temp table changedinfo(colName\n> varchar(100), oldValue varchar(4000), newValue varchar(4000)\n\n[ raised eyebrow ... ] If you are concerned about performance,\nI'd start by not creating a temp table per row of the outer update.\nThat's costing probably 100x to 1000x as much as the row update itself.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 00:52:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "Thanks Tom. However I could not find any solution to achieve the given\nrequirement. I have to take all values in the temp table and assign it to\nan array variable to pass it to the audit procedure as shown below. Can you\nplease advise ?\n\nCREATE OR REPLACE FUNCTION call_insert_info(\n\n) RETURNS void AS $$\n DECLARE\n v_message r_log_message[];\nOLDVALUE1 varchar(4000);\n BEGIN\n drop table if exists changedinfo\n create temp table changedinfo(colName varchar(100), oldValue\nvarchar(4000), newValue varchar(4000));\n insert into changed infot select 'empName', OLD.empName,\nNEW.empName from employee;\n insert into changed infot select 'location', OLD.location,\nNEW.location from employee;\n\n\nv_message:= array(select '(' || columname || ',' || oldvalue || ',' ||\nnewvalue ||')' from changedinfo);\n perform insert_info(v_message);\n raise notice '%',v_message;\n END;\n$$ LANGUAGE plpgsql;\n\nRegards,\nAD.\n\n\nOn Wed, Nov 24, 2021 at 11:22 AM Tom Lane <[email protected]> wrote:\n\n> aditya desai <[email protected]> writes:\n> > In a trigger function I am creating a temp table . When an update on a\n> > table is executed for say 10k rows. I get the below error.\n>\n> > ERROR: out of shared memory\n> > HINT:You might need to increase max_locks_per_transaction\n> > CONTEXT: SQL Statement \"created temp table changedinfo(colName\n> > varchar(100), oldValue varchar(4000), newValue varchar(4000)\n>\n> [ raised eyebrow ... 
] If you are concerned about performance,\n> I'd start by not creating a temp table per row of the outer update.\n> That's costing probably 100x to 1000x as much as the row update itself.\n>\n> regards, tom lane\n>\n\nThanks Tom. However I could not find any solution to achieve the given requirement. I have to take all values in the temp table and assign it to an array variable to pass it to the audit procedure as shown below. Can you please advise ? CREATE OR REPLACE FUNCTION call_insert_info(    ) RETURNS void AS $$    DECLARE        v_message r_log_message[]; OLDVALUE1 varchar(4000);    BEGIN            drop table if exists changedinfo     create temp table changedinfo(colName varchar(100), oldValue varchar(4000), newValue varchar(4000));            insert into changed infot select 'empName', OLD.empName, NEW.empName from employee;            insert into changed infot select 'location', OLD.location, NEW.location from employee;                     v_message:=   array(select '(' || columname || ',' || oldvalue || ',' || newvalue ||')' from changedinfo);        perform insert_info(v_message);        raise notice '%',v_message;    END;$$ LANGUAGE plpgsql;Regards,AD.On Wed, Nov 24, 2021 at 11:22 AM Tom Lane <[email protected]> wrote:aditya desai <[email protected]> writes:\n> In a trigger function I am creating a temp table . When an update on a\n> table is executed for say 10k rows. I get the below error.\n\n> ERROR: out of shared memory\n> HINT:You might need to increase max_locks_per_transaction\n> CONTEXT: SQL Statement \"created temp table changedinfo(colName\n> varchar(100), oldValue varchar(4000), newValue varchar(4000)\n\n[ raised eyebrow ... ]  If you are concerned about performance,\nI'd start by not creating a temp table per row of the outer update.\nThat's costing probably 100x to 1000x as much as the row update itself.\n\n                        regards, tom lane", "msg_date": "Wed, 24 Nov 2021 11:55:50 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "aditya desai schrieb am 24.11.2021 um 07:25:\n> Thanks Tom. However I could not find any solution to achieve the given requirement. I have to take all values in the temp table and assign it to an array variable to pass it to the audit procedure as shown below. Can you please advise ? \n>\n> CREATE OR REPLACE FUNCTION call_insert_info(\n>     \n> ) RETURNS void AS $$\n>     DECLARE\n>         v_message r_log_message[];\n> OLDVALUE1 varchar(4000);\n>     BEGIN\n>             drop table if exists changedinfo\n>     create temp table changedinfo(colName varchar(100), oldValue varchar(4000), newValue varchar(4000));\n>             insert into changed infot select 'empName', OLD.empName, NEW.empName from employee;\n>             insert into changed infot select 'location', OLD.location, NEW.location from employee;\n>             \n>         \n> v_message:=   array(select '(' || columname || ',' || oldvalue || ',' || newvalue ||')' from changedinfo);\n>         perform insert_info(v_message);\n>         raise notice '%',v_message;\n>     END;\n> $$ LANGUAGE plpgsql;\n\n\nYou don't need a temp table for that. 
You can create the array directly from the new and old records:\n\n v_message := array[concat_ws(',', 'empName', old.empname, new.empname), concat_ws(',', 'location', old.location, new.location)];\n\nAlthough nowadays I would probably pass such an \"structure\" as JSON though, not as a comma separated list.\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 07:31:08 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "Ok. Let me try this. Thanks!!\n\nOn Wed, Nov 24, 2021 at 12:01 PM Thomas Kellerer <[email protected]> wrote:\n\n> aditya desai schrieb am 24.11.2021 um 07:25:\n> > Thanks Tom. However I could not find any solution to achieve the given\n> requirement. I have to take all values in the temp table and assign it to\n> an array variable to pass it to the audit procedure as shown below. Can you\n> please advise ?\n> >\n> > CREATE OR REPLACE FUNCTION call_insert_info(\n> >\n> > ) RETURNS void AS $$\n> > DECLARE\n> > v_message r_log_message[];\n> > OLDVALUE1 varchar(4000);\n> > BEGIN\n> > drop table if exists changedinfo\n> > create temp table changedinfo(colName varchar(100), oldValue\n> varchar(4000), newValue varchar(4000));\n> > insert into changed infot select 'empName', OLD.empName,\n> NEW.empName from employee;\n> > insert into changed infot select 'location', OLD.location,\n> NEW.location from employee;\n> >\n> >\n> > v_message:= array(select '(' || columname || ',' || oldvalue || ',' ||\n> newvalue ||')' from changedinfo);\n> > perform insert_info(v_message);\n> > raise notice '%',v_message;\n> > END;\n> > $$ LANGUAGE plpgsql;\n>\n>\n> You don't need a temp table for that. You can create the array directly\n> from the new and old records:\n>\n> v_message := array[concat_ws(',', 'empName', old.empname,\n> new.empname), concat_ws(',', 'location', old.location, new.location)];\n>\n> Although nowadays I would probably pass such an \"structure\" as JSON\n> though, not as a comma separated list.\n>\n>\n>\n>\n\nOk. Let me try this. Thanks!!On Wed, Nov 24, 2021 at 12:01 PM Thomas Kellerer <[email protected]> wrote:aditya desai schrieb am 24.11.2021 um 07:25:\n> Thanks Tom. However I could not find any solution to achieve the given requirement. I have to take all values in the temp table and assign it to an array variable to pass it to the audit procedure as shown below. Can you please advise ? \n>\n> CREATE OR REPLACE FUNCTION call_insert_info(\n>     \n> ) RETURNS void AS $$\n>     DECLARE\n>         v_message r_log_message[];\n> OLDVALUE1 varchar(4000);\n>     BEGIN\n>             drop table if exists changedinfo\n>     create temp table changedinfo(colName varchar(100), oldValue varchar(4000), newValue varchar(4000));\n>             insert into changed infot select 'empName', OLD.empName, NEW.empName from employee;\n>             insert into changed infot select 'location', OLD.location, NEW.location from employee;\n>             \n>         \n> v_message:=   array(select '(' || columname || ',' || oldvalue || ',' || newvalue ||')' from changedinfo);\n>         perform insert_info(v_message);\n>         raise notice '%',v_message;\n>     END;\n> $$ LANGUAGE plpgsql;\n\n\nYou don't need a temp table for that. 
You can create the array directly from the new and old records:\n\n    v_message := array[concat_ws(',', 'empName', old.empname, new.empname), concat_ws(',', 'location', old.location, new.location)];\n\nAlthough nowadays I would probably pass such an \"structure\" as JSON though, not as a comma separated list.", "msg_date": "Wed, 24 Nov 2021 12:06:20 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "It seems like that function has some syntax errors, and also doesn't do\nwhat you want since I presume the \"from employee\" bit would mean you get\nmany rows inserted into that temp table for all the existing data and not\nthe one row you are operating on at the moment the trigger fires.\n\nIt is worth noting also that if bulk operations are at all common for this\ntable then writing this as an after statement trigger will likely be\nhelpful for performance.\n\nFor full context, we'd need to see how the function insert_info is defined.\n\nIt seems like that function has some syntax errors, and also doesn't do what you want since I presume the \"from employee\" bit would mean you get many rows inserted into that temp table for all the existing data and not the one row you are operating on at the moment the trigger fires.It is worth noting also that if bulk operations are at all common for this table then writing this as an after statement trigger will likely be helpful for performance.For full context, we'd need to see how the function insert_info is defined.", "msg_date": "Tue, 23 Nov 2021 23:45:52 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "H Michael,\nPlease see insert_info function below. Also r_log_message is composite data\ntype and it's definition is also given below.\n\nCREATE OR REPLACE FUNCTION insert_info(\n info_array r_log_message[]\n) RETURNS varchar AS $$\n DECLARE\n info_element r_log_message;\n BEGIN\n FOREACH info_element IN ARRAY info_array\n LOOP\n INSERT INTO testaditya(\n columname,\n oldvalue,\n newvalue\n ) VALUES(\n info_element.column_name,\n info_element.oldvalue,\n info_element.newvalue\n );\n END LOOP;\n RETURN 'OK';\n END;\n$$ LANGUAGE plpgsql;\n\n\npostgres=# \\d r_log_message;\n Composite type \"public.r_log_message\"\n Column | Type | Collation | Nullable | Default\n-------------+-------------------------+-----------+----------+---------\n column_name | character varying(30) | | |\n oldvalue | character varying(4000) | | |\n newvalue | character varying(4000) | | |\n\nRegards,\nAditya.\n\n\n\nOn Wed, Nov 24, 2021 at 12:16 PM Michael Lewis <[email protected]> wrote:\n\n> It seems like that function has some syntax errors, and also doesn't do\n> what you want since I presume the \"from employee\" bit would mean you get\n> many rows inserted into that temp table for all the existing data and not\n> the one row you are operating on at the moment the trigger fires.\n>\n> It is worth noting also that if bulk operations are at all common for this\n> table then writing this as an after statement trigger will likely be\n> helpful for performance.\n>\n> For full context, we'd need to see how the function insert_info is defined.\n>\n\nH Michael,Please see insert_info function below. 
Also r_log_message is composite data type and it's definition is also given below.CREATE OR REPLACE FUNCTION insert_info(    info_array  r_log_message[]) RETURNS varchar AS $$    DECLARE        info_element  r_log_message;    BEGIN        FOREACH info_element IN ARRAY info_array        LOOP            INSERT INTO testaditya(                columname,                oldvalue,                newvalue            ) VALUES(                info_element.column_name,                info_element.oldvalue,                info_element.newvalue            );        END LOOP;        RETURN 'OK';    END;$$ LANGUAGE plpgsql;postgres=# \\d r_log_message;                 Composite type \"public.r_log_message\"   Column    |          Type           | Collation | Nullable | Default-------------+-------------------------+-----------+----------+--------- column_name | character varying(30)   |           |          | oldvalue    | character varying(4000) |           |          | newvalue    | character varying(4000) |           |          |Regards,Aditya.On Wed, Nov 24, 2021 at 12:16 PM Michael Lewis <[email protected]> wrote:It seems like that function has some syntax errors, and also doesn't do what you want since I presume the \"from employee\" bit would mean you get many rows inserted into that temp table for all the existing data and not the one row you are operating on at the moment the trigger fires.It is worth noting also that if bulk operations are at all common for this table then writing this as an after statement trigger will likely be helpful for performance.For full context, we'd need to see how the function insert_info is defined.", "msg_date": "Wed, 24 Nov 2021 13:01:18 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "Hi Thomas,\nv_message is of composite data type r_log_message and it's definition is as\nshown below.\n\npostgres=# \\d r_log_message;\n Composite type \"public.r_log_message\"\n Column | Type | Collation | Nullable | Default\n-------------+-------------------------+-----------+----------+---------\n column_name | character varying(30) | | |\n oldvalue | character varying(4000) | | |\n newvalue | character varying(4000) | | |\n\nRegards,\nAditya.\n\nOn Wed, Nov 24, 2021 at 12:01 PM Thomas Kellerer <[email protected]> wrote:\n\n> aditya desai schrieb am 24.11.2021 um 07:25:\n> > Thanks Tom. However I could not find any solution to achieve the given\n> requirement. I have to take all values in the temp table and assign it to\n> an array variable to pass it to the audit procedure as shown below. Can you\n> please advise ?\n> >\n> > CREATE OR REPLACE FUNCTION call_insert_info(\n> >\n> > ) RETURNS void AS $$\n> > DECLARE\n> > v_message r_log_message[];\n> > OLDVALUE1 varchar(4000);\n> > BEGIN\n> > drop table if exists changedinfo\n> > create temp table changedinfo(colName varchar(100), oldValue\n> varchar(4000), newValue varchar(4000));\n> > insert into changed infot select 'empName', OLD.empName,\n> NEW.empName from employee;\n> > insert into changed infot select 'location', OLD.location,\n> NEW.location from employee;\n> >\n> >\n> > v_message:= array(select '(' || columname || ',' || oldvalue || ',' ||\n> newvalue ||')' from changedinfo);\n> > perform insert_info(v_message);\n> > raise notice '%',v_message;\n> > END;\n> > $$ LANGUAGE plpgsql;\n>\n>\n> You don't need a temp table for that. 
You can create the array directly\n> from the new and old records:\n>\n> v_message := array[concat_ws(',', 'empName', old.empname,\n> new.empname), concat_ws(',', 'location', old.location, new.location)];\n>\n> Although nowadays I would probably pass such an \"structure\" as JSON\n> though, not as a comma separated list.\n>\n>\n>\n>\n\nHi Thomas,v_message is of composite data type r_log_message and it's definition is as shown below.postgres=# \\d r_log_message;                 Composite type \"public.r_log_message\"   Column    |          Type           | Collation | Nullable | Default-------------+-------------------------+-----------+----------+--------- column_name | character varying(30)   |           |          | oldvalue    | character varying(4000) |           |          | newvalue    | character varying(4000) |           |          |Regards,Aditya.On Wed, Nov 24, 2021 at 12:01 PM Thomas Kellerer <[email protected]> wrote:aditya desai schrieb am 24.11.2021 um 07:25:\n> Thanks Tom. However I could not find any solution to achieve the given requirement. I have to take all values in the temp table and assign it to an array variable to pass it to the audit procedure as shown below. Can you please advise ? \n>\n> CREATE OR REPLACE FUNCTION call_insert_info(\n>     \n> ) RETURNS void AS $$\n>     DECLARE\n>         v_message r_log_message[];\n> OLDVALUE1 varchar(4000);\n>     BEGIN\n>             drop table if exists changedinfo\n>     create temp table changedinfo(colName varchar(100), oldValue varchar(4000), newValue varchar(4000));\n>             insert into changed infot select 'empName', OLD.empName, NEW.empName from employee;\n>             insert into changed infot select 'location', OLD.location, NEW.location from employee;\n>             \n>         \n> v_message:=   array(select '(' || columname || ',' || oldvalue || ',' || newvalue ||')' from changedinfo);\n>         perform insert_info(v_message);\n>         raise notice '%',v_message;\n>     END;\n> $$ LANGUAGE plpgsql;\n\n\nYou don't need a temp table for that. 
You can create the array directly from the new and old records:\n\n    v_message := array[concat_ws(',', 'empName', old.empname, new.empname), concat_ws(',', 'location', old.location, new.location)];\n\nAlthough nowadays I would probably pass such an \"structure\" as JSON though, not as a comma separated list.", "msg_date": "Wed, 24 Nov 2021 13:05:14 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "aditya desai schrieb am 24.11.2021 um 08:35:\n> Hi Thomas,\n> v_message is of composite data type r_log_message and it's definition is as shown below.\n>\n> postgres=# \\d r_log_message;\n>                  Composite type \"public.r_log_message\"\n>    Column    |          Type           | Collation | Nullable | Default\n> -------------+-------------------------+-----------+----------+---------\n>  column_name | character varying(30)   |           |          |\n>  oldvalue    | character varying(4000) |           |          |\n>  newvalue    | character varying(4000) |           |          |\n>\n> Regards,\n> Aditya.\n\nSorry, didn't see that.\n\nThen you need to create records of that type in the array:\n\n v_message := array[('empName', old.empname, new.empname)::r_log_message, ('location', old.location, new.location)::r_log_message];\n\nor an array of that type:\n\n v_message := array[('empName', old.empname, new.empname), ('location', old.location, new.location)]::r_log_message[];\n\n\nBtw: why don't you use `text` instead of varchar(4000).\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:10:45 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "aditya desai schrieb am 24.11.2021 um 08:31:\n> H Michael,\n> Please see insert_info function below. Also r_log_message is composite data type and it's definition is also given below.\n>\n> CREATE OR REPLACE FUNCTION insert_info(\n>     info_array  r_log_message[]\n> ) RETURNS varchar AS $$\n>     DECLARE\n>         info_element  r_log_message;\n>     BEGIN\n>         FOREACH info_element IN ARRAY info_array\n>         LOOP\n>             INSERT INTO testaditya(\n>                 columname,\n>                 oldvalue,\n>                 newvalue\n>             ) VALUES(\n>                 info_element.column_name,\n>                 info_element.oldvalue,\n>                 info_element.newvalue\n>             );\n>         END LOOP;\n>         RETURN 'OK';\n>     END;\n> $$ LANGUAGE plpgsql;\n\nYou don't need a loop for that. This can be done more efficiently using unnest()\n\n\n INSERT INTO testaditya(columname,oldvalue,newvalue)\n select u.*\n from unnest(info_array) as u;\n\n\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:12:14 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error" }, { "msg_contents": "Thanks Thomas! 
Sorry to say this but ,this was migrated from Oracle to PG\n:) and the app team just wants to keep the data type as it is :(\n\nOn Wed, Nov 24, 2021 at 5:40 PM Thomas Kellerer <[email protected]> wrote:\n\n> aditya desai schrieb am 24.11.2021 um 08:35:\n> > Hi Thomas,\n> > v_message is of composite data type r_log_message and it's definition is\n> as shown below.\n> >\n> > postgres=# \\d r_log_message;\n> > Composite type \"public.r_log_message\"\n> > Column | Type | Collation | Nullable | Default\n> > -------------+-------------------------+-----------+----------+---------\n> > column_name | character varying(30) | | |\n> > oldvalue | character varying(4000) | | |\n> > newvalue | character varying(4000) | | |\n> >\n> > Regards,\n> > Aditya.\n>\n> Sorry, didn't see that.\n>\n> Then you need to create records of that type in the array:\n>\n> v_message := array[('empName', old.empname,\n> new.empname)::r_log_message, ('location', old.location,\n> new.location)::r_log_message];\n>\n> or an array of that type:\n>\n> v_message := array[('empName', old.empname, new.empname), ('location',\n> old.location, new.location)]::r_log_message[];\n>\n>\n> Btw: why don't you use `text` instead of varchar(4000).\n>\n>\n>\n\nThanks Thomas!  Sorry to say this but ,this was migrated from Oracle to PG  :) and the app team just wants to keep the data type as it is  :(On Wed, Nov 24, 2021 at 5:40 PM Thomas Kellerer <[email protected]> wrote:aditya desai schrieb am 24.11.2021 um 08:35:\n> Hi Thomas,\n> v_message is of composite data type r_log_message and it's definition is as shown below.\n>\n> postgres=# \\d r_log_message;\n>                  Composite type \"public.r_log_message\"\n>    Column    |          Type           | Collation | Nullable | Default\n> -------------+-------------------------+-----------+----------+---------\n>  column_name | character varying(30)   |           |          |\n>  oldvalue    | character varying(4000) |           |          |\n>  newvalue    | character varying(4000) |           |          |\n>\n> Regards,\n> Aditya.\n\nSorry, didn't see that.\n\nThen you need to create records of that type in the array:\n\n   v_message := array[('empName', old.empname, new.empname)::r_log_message, ('location', old.location, new.location)::r_log_message];\n\nor an array of that type:\n\n   v_message := array[('empName', old.empname, new.empname), ('location', old.location, new.location)]::r_log_message[];\n\n\nBtw: why don't you use `text` instead of varchar(4000).", "msg_date": "Wed, 24 Nov 2021 20:02:59 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error" } ]
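Putting Thomas Kellerer's two suggestions from this thread together, a sketch of what the trigger-side function and insert_info() could look like without the per-row temp table and without the FOREACH loop. The employee columns and the testaditya/r_log_message definitions are taken from the thread; the RETURNS trigger signature and RETURN NEW are assumptions about how the original row-level trigger is wired up:

CREATE OR REPLACE FUNCTION call_insert_info() RETURNS trigger AS $$
DECLARE
    v_message r_log_message[];
BEGIN
    -- Build the array straight from OLD/NEW; no temp table means no extra
    -- relation locks, so max_locks_per_transaction is no longer exhausted.
    v_message := array[
        ('empName',  OLD.empName,  NEW.empName)::r_log_message,
        ('location', OLD.location, NEW.location)::r_log_message
    ];
    PERFORM insert_info(v_message);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION insert_info(info_array r_log_message[]) RETURNS varchar AS $$
BEGIN
    -- Set-based insert via unnest() instead of looping over the array.
    INSERT INTO testaditya(columname, oldvalue, newvalue)
    SELECT u.column_name, u.oldvalue, u.newvalue
    FROM unnest(info_array) AS u;
    RETURN 'OK';
END;
$$ LANGUAGE plpgsql;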
[ { "msg_contents": "Software/Hardware used:\n===================\nPostgresV14.v\nOS: RHELv8.4\nBenchmark:HammerDB v4.3\nHardware used: Apple/AMD Ryzen.\nRAM size: 256 GB\nSSD/HDD: 1TB\nCPU(s): 256(0-255)\nThread(s) per core: 2\nCore(s) per socket: 64\nSocket(s): 2\nNUMA node(s): 8\n\nCommand used to count process: ps -eaf | grep postgres\n\nCase1: AutoVaccum=on\nvu GCC Clang\n32 43 42\n64 76 74\n192 203 202\n250 262 262\nCase2:AutoVaccum=off\nvu GCC Clang\n32 40 40\n64 72 72\n192 200 200\n250 261 263\nIn Case1 why is the process different in Clang vs GCC.\nIn postgresql process dependent wrt compiler GCC/Clang?\nIs any recommendation or suggestion to check on this in Postgresv14\n\nSoftware/Hardware used:===================PostgresV14.vOS: RHELv8.4Benchmark:HammerDB v4.3Hardware used: Apple/AMD Ryzen.RAM size: 256 GBSSD/HDD: 1TBCPU(s): 256(0-255)Thread(s) per core:  2Core(s) per socket:  64Socket(s):           2NUMA node(s):        8Command used to count process: ps -eaf | grep postgresCase1: AutoVaccum=onvuGCCClang324342647674192203202250262262Case2:AutoVaccum=offvuGCCClang324040647272192200200250261263In Case1 why is the process different in Clang vs GCC.In postgresql process dependent wrt compiler GCC/Clang?Is any recommendation or suggestion to check on this in Postgresv14", "msg_date": "Wed, 24 Nov 2021 17:35:50 +0530", "msg_from": "hpc researcher_mspk <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres process count GCC vs Clang is Different on autovaccum=on" }, { "msg_contents": "\n\nOn 11/24/21 13:05, hpc researcher_mspk wrote:\n> Software/Hardware used:\n> ===================\n> PostgresV14.v\n> OS: RHELv8.4\n> Benchmark:HammerDB v4.3\n> Hardware used: Apple/AMD Ryzen.\n> RAM size: 256 GB\n> SSD/HDD: 1TB\n> CPU(s): 256(0-255)\n> Thread(s) per core:  2\n> Core(s) per socket:  64\n> Socket(s):           2\n> NUMA node(s):        8\n> \n> Command used to count process: ps -eaf | grep postgres\n> \n> Case1: AutoVaccum=on\n> vu\tGCC\tClang\n> 32\t43\t42\n> 64\t76\t74\n> 192\t203\t202\n> 250\t262\t262\n> \n> \n> Case2:AutoVaccum=off\n> vu\tGCC\tClang\n> 32\t40\t40\n> 64\t72\t72\n> 192\t200\t200\n> 250\t261\t263\n> \n> \n> In Case1 why is the process different in Clang vs GCC.\n> In postgresql process dependent wrt compiler GCC/Clang?\n\nNo, it's not. The most likely explanation is that you're seeing \ndifferent number of autovacuum workers. Those are dynamic, i.e. may \nappear/disappear. Or maybe there are more connections to the DB.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:22:00 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres process count GCC vs Clang is Different on autovaccum=on" } ]
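A quick way to check Tomas's explanation from inside the server, instead of counting processes with ps, is to group pg_stat_activity by backend type (available in PostgreSQL 10 and later); autovacuum workers and parallel workers come and go, so repeated samples during the HammerDB run should show the count moving regardless of which compiler built the binaries:

-- One row per backend type (client backend, autovacuum worker, walwriter, ...).
SELECT backend_type, count(*) AS processes
FROM pg_stat_activity
GROUP BY backend_type
ORDER BY processes DESC;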
[ { "msg_contents": "Hi Team,\n\nPlease suggest how I can ensure pg_dump backup has completed successfully ?\nI don't think there is any view like Oracle which helps with\ndba_datampump_jobs etc.\n\nThanks,\n\nHi Team,Please suggest how I can ensure pg_dump backup has completed successfully ?I don't think there is any view like Oracle which helps with dba_datampump_jobs etc.Thanks,", "msg_date": "Thu, 25 Nov 2021 14:41:34 +0530", "msg_from": "Daulat <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump backup verification" }, { "msg_contents": "Hi,\r\n\r\nmaybe switching on verbose output with -v or -- verbose is a\r\nquick and easy help for this? At least you can see something\r\nhappening.\r\n\r\nCheers\r\n\r\nDirk\r\n\r\nVon: Daulat <[email protected]>\r\nGesendet: Donnerstag, 25. November 2021 10:12\r\nAn: [email protected]\r\nBetreff: [External] pg_dump backup verification\r\n\r\nHi Team,\r\n\r\nPlease suggest how I can ensure pg_dump backup has completed successfully ?\r\nI don't think there is any view like Oracle which helps with dba_datampump_jobs etc.\r\n\r\nThanks,\r\n\n\n\n\n\n\n\n\n\nHi,\n \nmaybe switching on verbose output with -v or  -- verbose is a\r\n\nquick and easy help for this? At least you can see something\nhappening.\n \nCheers\n \nDirk\n \n\nVon: Daulat <[email protected]>\r\n\nGesendet: Donnerstag, 25. November 2021 10:12\nAn: [email protected]\nBetreff: [External] pg_dump backup verification\n\n \n\n\n\nHi Team,\n\n\n \n\n\nPlease suggest how I can ensure pg_dump backup has completed successfully ?\n\n\nI don't think there is any view like Oracle which helps with dba_datampump_jobs etc.\n\n\n \n\n\nThanks,", "msg_date": "Thu, 25 Nov 2021 09:21:22 +0000", "msg_from": "Dirk Krautschick <[email protected]>", "msg_from_op": false, "msg_subject": "AW: [External] pg_dump backup verification" }, { "msg_contents": "On Thu, Nov 25, 2021 at 02:41:34PM +0530, Daulat wrote:\n> Please suggest how I can ensure pg_dump backup has completed successfully ?\n> I don't think there is any view like Oracle which helps with\n> dba_datampump_jobs etc.\n\n1) Check its exit status. If it's nonzero, then surely there's a problem\n(typically detailed indicated by output to stderr).\n\n2) You can also run pg_restore -l to output a TOC for the backup. From\nexperience, this can be a good secondary test. You could add to your backup\nscript \"pg_restore -l ./thebackup >/dev/null\" to check that pg_restore itself\nexits with a zero exit status.\n\n3) If your backup job is a shell script, you should use \"set -e\", to be sure\nthat a command which fails causes the script to exit rather than plowing ahead\nas if it had succeeded. This is important for any shell script that's more\nthan 1 line long.\n\n4) It's usually a good idea to write first to a \"*.new\" file, and then rename\nit only if the pg_dump succeeds. Avoid \"clobbering\" a pre-existing file (or\nelse you have no backup at all until the backup finishes, successfully). Avoid\npiping pg_dump to another command, since pipes only preserve the exit status of\nthe final command in the pipeline.\n\nFor example:\n\n#! 
/bin/sh\nset -e\nf=/srv/otherfs/thebackup\nrm -f \"$f.new\" # Remove a previous, failed backup, if any\npg_dump -Fc -d ourdatabase >\"$f.new\"\npg_restore -l \"$f.new\" >/dev/null\nmv \"$f.new\" \"$f\"\nexit 0 # In case the previous line is a conditional like \"if\" or \"&&\" or \"||\".\n\n5) You can monitor the age of ./thebackup.\n\n6) Log the output of the script; do not let its output get redirected to\n/var/mail/postgres, or somewhere else nobody looks at.\n\n7) It's nice to dump to a separate filesystem; not only because FS corruption\nwould affect both the live DB but also its backup. But also because the\nbackups could overflow the FS, causing the main DB to fail queries or crash.\n\n8) Keep a few backups rotated weekly and a few rotated monthly. Even if it's\nnever needed to restore a 2 month old backup, it can be valuable to help\ndiagnose issues to see when some data changed.\n\n9) Also save output from pg_dumpall -g, or else your backup will probably spew\nout lots of errors, which are themselves important, but might also obscure\nother, even more important problems.\n\n10) Perhaps most importantly, test your backups. Having backups is of little\nuse if you don't know how to restore them. This should be a periodic\nprocedure, not something you do once to be able to say that you did.\nAre you confident you can run using the restored DB ? \n\n-- \nJustin\n\n\n", "msg_date": "Thu, 25 Nov 2021 11:09:03 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump backup verification" } ]
[ { "msg_contents": "Hello,\n\nrecently I wrote a query that provides suggestions from a Postgres table.\nIt should be able to work despite smaller typos and thus I chose to use \nthe pg_trgm extension (https://www.postgresql.org/docs/current/pgtrgm.html).\nWhen measuring the performance, I observed great differences in the \nquery time, depending on the input string.\nAnalysis showed that Postgres sometimes used the created indexes and \nsometimes it didn't, even though it would provide a considerable speedup.\n\nIn the included test case the degradation occurs for all input strings \nof length 8 or longer, for shorter strings the index is used.\n\nMy questions:\n\tWhy doesn't the query planner choose to use the index?\n\tCan I make Postgres use the index, and if so, how?\nI understand that trying to outsmart the planner is generally a bad \nidea. Maybe the query can be rewritten or there are some parameters that \ncould be tweaked.\n\n\n## Setup Information\n\nHardware: Intel i5-8250U, 8GB RAM, encrypted SSD, no RAID\n$ uname -a\nLinux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 UTC \n2021 x86_64 x86_64 x86_64 GNU/Linux\n\nSoftware:\nOS: Ubuntu 20.04\nPostgres: PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on \nx86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, \n64-bit\nThe Postgres docker image was used.\nDocker: Docker version 20.10.5, build 55c4c88\nImage used: postgres:14.1\n\nConfiguration:\nThe config file was not changed.\n name | current_setting | source\n----------------------------+--------------------+----------------------\n application_name | psql | client\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n dynamic_shared_memory_type | posix | configuration file\n enable_seqscan | off | session\n lc_messages | en_US.utf8 | configuration file\n lc_monetary | en_US.utf8 | configuration file\n lc_numeric | en_US.utf8 | configuration file\n lc_time | en_US.utf8 | configuration file\n listen_addresses | * | configuration file\n log_timezone | Etc/UTC | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n max_wal_size | 1GB | configuration file\n min_wal_size | 80MB | configuration file\n shared_buffers | 128MB | configuration file\n TimeZone | Etc/UTC | configuration file\n\n\n## Test Case\nThe test case creates a simple table and fills it with 10000 identical \nentries.\nThe query is executed twice with an 8 character string, once with \nsequential scans enabled, and once with sequential scans disabled.\nThe first query does not use the index, even if the second query shows \nthat it would be much faster.\n\ndocker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres:14.1\ndocker exec -it postgres bash\npsql -U postgres\n\nCREATE EXTENSION pg_trgm;\n\nCREATE TABLE song (\n artist varchar(20),\n title varchar(20)\n);\n\nINSERT INTO song (artist, title)\nSELECT 'artist','title'\nFROM generate_series(1,10000);\n\nCREATE INDEX artist_trgm ON song USING GIN (artist gin_trgm_ops);\nCREATE INDEX title_trgm ON song USING GIN (title gin_trgm_ops);\n\n-- Tips from https://wiki.postgresql.org/wiki/Slow_Query_Questions\nANALYZE;\nVACUUM;\nREINDEX TABLE song;\n\n\\set query '12345678'\n\n-- This query is slow\nEXPLAIN ANALYZE\nSELECT song.artist, song.title\nFROM song\nWHERE (song.artist %> :'query' OR song.title %> :'query')\n;\n\nset enable_seqscan=off;\n\n-- This query is fast\nEXPLAIN 
ANALYZE\nSELECT song.artist, song.title\nFROM song\nWHERE (song.artist %> :'query' OR song.title %> :'query')\n;\n\n\n## Additional Test Case Info\n\nSchemata:\n Table \"public.song\"\n Column | Type | Collation | Nullable | Default | \nStorage | Compression | Stats target | Description\n--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n artist | character varying(20) | | | | \nextended | | |\n title | character varying(20) | | | | \nextended | | |\nIndexes:\n \"artist_trgm\" gin (artist gin_trgm_ops)\n \"title_trgm\" gin (title gin_trgm_ops)\nAccess method: heap\n Index \"public.artist_trgm\"\n Column | Type | Key? | Definition | Storage | Stats target\n--------+---------+------+------------+---------+--------------\n artist | integer | yes | artist | plain |\ngin, for table \"public.song\"\n Index \"public.title_trgm\"\n Column | Type | Key? | Definition | Storage | Stats target\n--------+---------+------+------------+---------+--------------\n title | integer | yes | title | plain |\ngin, for table \"public.song\"\n\nTable Metadata:\npostgres=# SELECT relname, relpages, reltuples, relallvisible, relkind, \nrelnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class \nWHERE relname='song';\n relname | relpages | reltuples | relallvisible | relkind | relnatts | \nrelhassubclass | reloptions | pg_table_size\n---------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n song | 55 | 10000 | 55 | r | 2 | \nf | | 483328\n\nEXPLAIN ANALYZE of the \"slow\" query\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on song (cost=0.00..205.00 rows=1 width=13) (actual \ntime=68.896..68.897 rows=0 loops=1)\n Filter: (((artist)::text %> '12345678'::text) OR ((title)::text %> \n'12345678'::text))\n Rows Removed by Filter: 10000\n Planning Time: 0.304 ms\n Execution Time: 68.928 ms\n\nEXPLAIN ANALYZE of the \"fast\" query\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on song (cost=288.00..292.02 rows=1 width=13) \n(actual time=0.023..0.024 rows=0 loops=1)\n Recheck Cond: (((artist)::text %> '12345678'::text) OR \n((title)::text %> '12345678'::text))\n -> BitmapOr (cost=288.00..288.00 rows=1 width=0) (actual \ntime=0.022..0.023 rows=0 loops=1)\n -> Bitmap Index Scan on artist_trgm (cost=0.00..144.00 \nrows=1 width=0) (actual time=0.013..0.014 rows=0 loops=1)\n Index Cond: ((artist)::text %> '12345678'::text)\n -> Bitmap Index Scan on title_trgm (cost=0.00..144.00 rows=1 \nwidth=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: ((title)::text %> '12345678'::text)\n Planning Time: 0.224 ms\n Execution Time: 0.052 ms\n\nThe behaviour is identical when using similarity instead of word_similarity.\nGIN indexes were chosen because the table is queried far more often than \nit is updated.\nI tried increasing shared_buffers, effective_cache_size or work_mem to \nno avail.\n\nAny help would be greatly appreciated.\n\n\nRegards\nJonathan\n\n\n", "msg_date": "Tue, 30 Nov 2021 22:38:32 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pg_trgm word_similarity query does not use index for input strings\n longer than 8 characters" }, { "msg_contents": "On Tue, 2021-11-30 at 22:38 +0100, [email protected] wrote:\n> ## Setup Information\n> Hardware: Intel i5-8250U, 
8GB RAM, encrypted SSD, no RAID\n> [...]\n>\n> Configuration:\n> The config file was not changed.\n> [...]\n>\n> ## Test Case\n> [...]\n> CREATE EXTENSION pg_trgm;\n> \n> CREATE TABLE song (\n>      artist      varchar(20),\n>      title       varchar(20)\n> );\n> \n> INSERT INTO song (artist, title)\n> SELECT 'artist','title'\n> FROM generate_series(1,10000);\n> \n> CREATE INDEX artist_trgm ON song USING GIN (artist gin_trgm_ops);\n> CREATE INDEX title_trgm ON song USING GIN (title gin_trgm_ops);\n> \n> -- Tips from https://wiki.postgresql.org/wiki/Slow_Query_Questions\n> ANALYZE;\n> VACUUM;\n> REINDEX TABLE song;\n> \n> \\set query '12345678'\n> \n> -- This query is slow\n> EXPLAIN ANALYZE\n> SELECT song.artist, song.title\n> FROM song\n> WHERE (song.artist %> :'query' OR song.title %> :'query')\n> ;\n> \n> set enable_seqscan=off;\n> \n> -- This query is fast\n> EXPLAIN ANALYZE\n> SELECT song.artist, song.title\n> FROM song\n> WHERE (song.artist %> :'query' OR song.title %> :'query')\n> ;\n\nThe table is quite small; with a bigger table, the test would be more meaningful.\n\nSince you have SSDs, you should tune \"random_page_cost = 1.1\".\nThis makes the planner prefer index scans, and it leads to the index scan\nbeing chosen in your case.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 07 Dec 2021 04:10:20 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm word_similarity query does not use index for input\n strings longer than 8 characters" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Tue, 2021-11-30 at 22:38 +0100, [email protected] wrote:\n>> INSERT INTO song (artist, title)\n>> SELECT 'artist','title'\n>> FROM generate_series(1,10000);\n>>\n>> \\set query '12345678'\n>> \n>> -- This query is slow\n>> EXPLAIN ANALYZE\n>> SELECT song.artist, song.title\n>> FROM song\n>> WHERE (song.artist %> :'query' OR song.title %> :'query')\n>> ;\n\n> The table is quite small; with a bigger table, the test would be more meaningful.\n\nYeah, this test case seems very unrealistic, both as to table size\nand as to the lack of variability of the table entries. I think the\nlatter is causing the indexscans to take less time than they otherwise\nmight, because none of the extracted trigrams find any matches.\n\n> Since you have SSDs, you should tune \"random_page_cost = 1.1\".\n\nRight. Poking at gincostestimate a bit, I see that for this\noperator the indexscan cost estimate is basically driven by the\nnumber of trigrams extracted from the query string (nine in this\ntest case) and the index size; those lead to a predicted number\nof index page fetches that's then scaled by random_page_cost.\nThat's coming out to make it look more expensive than the seqscan.\nIt's actually not more expensive, but that's partially because\npage fetch costs are really zero in this test case (everything\nwill stay in shared buffers the whole time), and partially because\nthe unrealistic data pattern is leading to not having to look at\nas much of the index as gincostestimate expected.\n\nIn general, it appears correct that longer query strings lead to a\nhigher index cost estimate, because they produce more trigrams so\nthere's more work for the index match to do. 
(At some level, a\nlonger query means more work in the seqscan case too; but our cost\nmodels are inadequate to predict that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 12:08:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm word_similarity query does not use index for input\n strings longer than 8 characters" }, { "msg_contents": "Thank you both a lot for the insights and your input.\n\n > Yeah, this test case seems very unrealistic, both as to table size\n > and as to the lack of variability of the table entries.\n\nThe example was based on real data with a more complicated query which\nprompted me to investigate the issue. The distinction between slow and\nfast queries is not as clear cut as with the generated data, but the\ngeneral problem remains.\n\n >> Since you have SSDs, you should tune \"random_page_cost = 1.1\".\n\nI tested different values of random_page_cost with various queries. Too\nsmall values increased the execution time again, due to too eager index\nusage. I identified the optimum for my use case at 1.4. This solved my\nproblem, thanks.\n\nRegards\nJonathan\n\nOn 07.12.21 18:08, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n>> On Tue, 2021-11-30 at 22:38 +0100, [email protected] wrote:\n>>> INSERT INTO song (artist, title)\n>>> SELECT 'artist','title'\n>>> FROM generate_series(1,10000);\n>>>\n>>> \\set query '12345678'\n>>>\n>>> -- This query is slow\n>>> EXPLAIN ANALYZE\n>>> SELECT song.artist, song.title\n>>> FROM song\n>>> WHERE (song.artist %> :'query' OR song.title %> :'query')\n>>> ;\n> \n>> The table is quite small; with a bigger table, the test would be more meaningful.\n> \n> Yeah, this test case seems very unrealistic, both as to table size\n> and as to the lack of variability of the table entries. I think the\n> latter is causing the indexscans to take less time than they otherwise\n> might, because none of the extracted trigrams find any matches.\n> \n>> Since you have SSDs, you should tune \"random_page_cost = 1.1\".\n> \n> Right. Poking at gincostestimate a bit, I see that for this\n> operator the indexscan cost estimate is basically driven by the\n> number of trigrams extracted from the query string (nine in this\n> test case) and the index size; those lead to a predicted number\n> of index page fetches that's then scaled by random_page_cost.\n> That's coming out to make it look more expensive than the seqscan.\n> It's actually not more expensive, but that's partially because\n> page fetch costs are really zero in this test case (everything\n> will stay in shared buffers the whole time), and partially because\n> the unrealistic data pattern is leading to not having to look at\n> as much of the index as gincostestimate expected.\n> \n> In general, it appears correct that longer query strings lead to a\n> higher index cost estimate, because they produce more trigrams so\n> there's more work for the index match to do. (At some level, a\n> longer query means more work in the seqscan case too; but our cost\n> models are inadequate to predict that.)\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Wed, 8 Dec 2021 19:43:03 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pg_trgm word_similarity query does not use index for input\n strings longer than 8 characters" } ]
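For anyone reproducing this, a sketch of the tuning step that resolved it; 1.1 is the value commonly suggested for SSDs, and 1.4 is simply where the reporter's own workload ended up, so the right number for other data sets has to be found by testing:

-- Try it per session first, re-running the thread's test query:
SET random_page_cost = 1.4;
EXPLAIN ANALYZE
SELECT song.artist, song.title
FROM song
WHERE song.artist %> '12345678' OR song.title %> '12345678';

-- If the bitmap index scans now win, persist the setting:
ALTER SYSTEM SET random_page_cost = 1.4;
SELECT pg_reload_conf();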
[ { "msg_contents": "Hi\nThe performance bottleneck in LWLockRelease()method goes through an array\none by one to see which lock was released with O(N). As soon as the lock is\nfound it performs an array to remove the lock.\nAs linear search and compaction delays the release of the lock forcing the\nother Postgres instances WAiting for the lock to be released\nIs any possible solution like\n1. LWLockRelease() releases the lock first and then remove held lock from\nthe array\n2. Binary search (like non-linear structure) to reduce on high searching\nand remove all held locks\n\nHi The performance bottleneck in LWLockRelease()method goes through an array one by one to see which lock was released with O(N). As soon as the lock is found it performs an array to remove the lock.As linear search and compaction delays the release of the lock forcing the other Postgres instances WAiting for the lock to be released Is any possible solution like 1. LWLockRelease() releases the lock first and then remove held lock from the array2. Binary search (like non-linear structure) to reduce on high searching and remove all held locks", "msg_date": "Wed, 1 Dec 2021 19:56:11 +0530", "msg_from": "Ashkil Dighin <[email protected]>", "msg_from_op": true, "msg_subject": "LwLockRelease performance" }, { "msg_contents": "Ashkil Dighin <[email protected]> writes:\n> The performance bottleneck in LWLockRelease()method goes through an array\n> one by one to see which lock was released with O(N). As soon as the lock is\n> found it performs an array to remove the lock.\n\nTypically, such locks are released in LIFO order. Do you have any\nactual evidence of a performance problem here?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Dec 2021 17:19:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LwLockRelease performance" }, { "msg_contents": "Hi,\n\nOn 2021-12-01 19:56:11 +0530, Ashkil Dighin wrote:\n> The performance bottleneck in LWLockRelease()method goes through an array\n> one by one to see which lock was released with O(N). As soon as the lock is\n> found it performs an array to remove the lock.\n> As linear search and compaction delays the release of the lock forcing the\n> other Postgres instances WAiting for the lock to be released\n> Is any possible solution like\n> 1. LWLockRelease() releases the lock first and then remove held lock from\n> the array\n\nYou currently can't really - we don't know the to-be-released lockmode without\nthe lookup in the array. You could try to reason it based on the current state\nof the lock, but that doesn't strike me as a great idea either.\n\n\n> 2. Binary search (like non-linear structure) to reduce on high searching\n> and remove all held locks\n\nThat'd likely lead to considerably worse performance. Due to the unpredictable\nbranches binary search is worse than linear search at low-ish array values\n(obviously depends on the cost of a comparison etc).\n\n\nDo you have a workload where this is a significant issue? Most of the time we\ndo not hold enough lwlocks for it to be a problem. I've seen it become a\nbottleneck during my AIO work (*) - but IIRC I worked around it.\n\nIIRC I observed the shifting of the locks to be the bigger issue than the\nsearch for the locks themselves. 
It might be worth experimenting with not\nshifting all the subsequent locks, but instead just swapping in the\ncurrently-last lock.\n\nGreetings,\n\nAndres Freund\n\n(*) IIRC the issue is when writing back we try to write back multiple buffers\nat once (using conditional lock acquisition to avoid deadlocks). Those then\nare likely released in FIFO order. I think it's now not a problem anymore\nbecause I ended up introducing the concept of locks that are owned by the AIO\nsubsystem for other reasons.\n\n\n", "msg_date": "Thu, 2 Dec 2021 15:50:44 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LwLockRelease performance" } ]
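Before concluding that the linear search in LWLockRelease() is the bottleneck, it may help to confirm how much the workload actually waits on lightweight locks, as Andres asks in the thread above. A minimal sketch using the wait-event columns of pg_stat_activity (present since 9.6); repeated sampling under load gives a rough picture of LWLock contention, though it cannot show the CPU cost of the release path itself, for which a profiler is needed:

-- Sessions currently waiting on a lightweight lock.
SELECT pid, wait_event, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock';

-- Which individual LWLocks dominate the waits.
SELECT wait_event, count(*) AS waiters
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock'
GROUP BY wait_event
ORDER BY waiters DESC;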
[ { "msg_contents": "Hello,\n\nI hope this email finds you all well!\n\nI have a data warehouse with a fairly complex ETL process that has been running for years now across PG 9.6, 11.2 and now 13.4 for the past couple of months. I have been getting the error \"An I/O error occurred while sending to the backend\" quite often under load in 13.4 which I never used to get on 11.2. I have applied some tricks, particularly with the socketTimeout JDBC configuration.\n\nSo my first question is whether anyone has any idea why this is happening? My hardware and general PG configuration have not changed between 11.2 and 13.4 and I NEVER experienced this on 11.2 in about 2y of production.\n\nSecond, I have one stored procedure that takes a very long time to run (40mn more or less), so obviously, I'd need to set socketTimeout to something like 1h in order to call it and not timeout. That doesn't seem reasonable?\n\nI understand that there is not just Postgres 13.4, but also the JDBC driver. I ran production for a several days on V42.2.19 (which had run with PG11.2 fine) to try and got the error a couple of times, i.e., the same as with 42.2.24, so I am not sure this has to do with the JDBC Driver.\n\nSo I am not sure what to do now. I do not know if there are some related configuration options since 11.2 that could trigger this issue that I missed, or some other phenomenon going on. I have always had a few \"long running\" queries in the system (i.e., > 20mn) and never experienced this on 11.2, and experiencing this maybe once or twice a week on 13.4, seemingly randomly. So sometimes, the queries run fine, and others, they time out. Weird.\n\nThanks,\nLaurent.\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI hope this email finds you all well!\n \nI have a data warehouse with a fairly complex ETL process that has been running for years now across PG 9.6, 11.2 and now 13.4 for the past couple of months. I have been getting the error “An I/O error occurred while sending to the backend”\n quite often under load in 13.4 which I never used to get on 11.2. I have applied some tricks, particularly with the socketTimeout JDBC configuration.\n \nSo my first question is whether anyone has any idea why this is happening? My hardware and general PG configuration have not changed between 11.2 and 13.4 and I NEVER experienced this on 11.2 in about 2y of production.\n \nSecond, I have one stored procedure that takes a very long time to run (40mn more or less), so obviously, I’d need to set socketTimeout to something like 1h in order to call it and not timeout. That doesn’t seem reasonable?\n \nI understand that there is not just Postgres 13.4, but also the JDBC driver. I ran production for a several days on V42.2.19 (which had run with PG11.2 fine) to try and got the error a couple of times, i.e., the same as with 42.2.24, so\n I am not sure this has to do with the JDBC Driver.\n \nSo I am not sure what to do now. I do not know if there are some related configuration options since 11.2 that could trigger this issue that I missed, or some other phenomenon going on. I have always had a few “long running” queries in\n the system (i.e., > 20mn) and never experienced this on 11.2, and experiencing this maybe once or twice a week on 13.4, seemingly randomly. So sometimes, the queries run fine, and others, they time out. 
Weird.\n \nThanks,\nLaurent.", "msg_date": "Sat, 4 Dec 2021 17:32:10 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Sat, Dec 04, 2021 at 05:32:10PM +0000, [email protected] wrote:\n> I have a data warehouse with a fairly complex ETL process that has been running for years now across PG 9.6, 11.2 and now 13.4 for the past couple of months. I have been getting the error \"An I/O error occurred while sending to the backend\" quite often under load in 13.4 which I never used to get on 11.2. I have applied some tricks, particularly with the socketTimeout JDBC configuration.\n> \n> So my first question is whether anyone has any idea why this is happening? My hardware and general PG configuration have not changed between 11.2 and 13.4 and I NEVER experienced this on 11.2 in about 2y of production.\n> \n> Second, I have one stored procedure that takes a very long time to run (40mn more or less), so obviously, I'd need to set socketTimeout to something like 1h in order to call it and not timeout. That doesn't seem reasonable?\n\nIs the DB server local or remote (TCP/IP) to the client?\n\nCould you collect the corresponding postgres query logs when this happens ?\n\nIt'd be nice to see a network trace for this too. Using tcpdump or wireshark.\nPreferably from the client side.\n\nFWIW, I suspect the JDBC socketTimeout is a bad workaround.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 4 Dec 2021 11:59:26 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\n > -----Original Message-----\n > From: Justin Pryzby <[email protected]>\n > Sent: Saturday, December 4, 2021 12:59\n > To: [email protected]\n > Cc: [email protected]\n > Subject: Re: An I/O error occurred while sending to the backend (PG\n > 13.4)\n > \n > On Sat, Dec 04, 2021 at 05:32:10PM +0000, [email protected]\n > wrote:\n > > I have a data warehouse with a fairly complex ETL process that has\n > been running for years now across PG 9.6, 11.2 and now 13.4 for the\n > past couple of months. I have been getting the error \"An I/O error\n > occurred while sending to the backend\" quite often under load in 13.4\n > which I never used to get on 11.2. I have applied some tricks, particularly\n > with the socketTimeout JDBC configuration.\n > >\n > > So my first question is whether anyone has any idea why this is\n > happening? My hardware and general PG configuration have not\n > changed between 11.2 and 13.4 and I NEVER experienced this on 11.2 in\n > about 2y of production.\n > >\n > > Second, I have one stored procedure that takes a very long time to run\n > (40mn more or less), so obviously, I'd need to set socketTimeout to\n > something like 1h in order to call it and not timeout. That doesn't seem\n > reasonable?\n > \n > Is the DB server local or remote (TCP/IP) to the client?\n > \n > Could you collect the corresponding postgres query logs when this\n > happens ?\n > \n > It'd be nice to see a network trace for this too. Using tcpdump or\n > wireshark.\n > Preferably from the client side.\n > \n > FWIW, I suspect the JDBC socketTimeout is a bad workaround.\n > \n > --\n > Justin\n\nIt's a remote server, but all on a local network. Network performance is I am sure not the issue. Also, the system is on Windows Server. What are you expecting to see out of a tcpdump? 
I'll try to get PG logs on the failing query.\n\nThank you,\nLaurent.\n\n\n\n\n\n", "msg_date": "Sat, 4 Dec 2021 19:18:06 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Sat, Dec 04, 2021 at 07:18:06PM +0000, [email protected] wrote:\n> It's a remote server, but all on a local network. Network performance is I am sure not the issue. Also, the system is on Windows Server. What are you expecting to see out of a tcpdump? I'll try to get PG logs on the failing query.\n\nI'd want to know if postgres sent anything to the client, like TCP RST, or if\nthe client decided on its own that there had been an error.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 4 Dec 2021 13:53:20 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\n\n > -----Original Message-----\n > From: [email protected] <[email protected]>\n > Sent: Saturday, December 4, 2021 14:18\n > To: Justin Pryzby <[email protected]>\n > Cc: [email protected]\n > Subject: RE: An I/O error occurred while sending to the backend (PG 13.4)\n > \n > \n > > -----Original Message-----\n > > From: Justin Pryzby <[email protected]>\n > > Sent: Saturday, December 4, 2021 12:59\n > > To: [email protected]\n > > Cc: [email protected]\n > > Subject: Re: An I/O error occurred while sending to the backend (PG\n > > 13.4)\n > >\n > > On Sat, Dec 04, 2021 at 05:32:10PM +0000, [email protected]\n > > wrote:\n > > > I have a data warehouse with a fairly complex ETL process that has\n > > been running for years now across PG 9.6, 11.2 and now 13.4 for the\n > > past couple of months. I have been getting the error \"An I/O error\n > > occurred while sending to the backend\" quite often under load in 13.4\n > > which I never used to get on 11.2. I have applied some tricks,\n > particularly\n > > with the socketTimeout JDBC configuration.\n > > >\n > > > So my first question is whether anyone has any idea why this is\n > > happening? My hardware and general PG configuration have not\n > > changed between 11.2 and 13.4 and I NEVER experienced this on 11.2\n > in\n > > about 2y of production.\n > > >\n > > > Second, I have one stored procedure that takes a very long time to\n > run\n > > (40mn more or less), so obviously, I'd need to set socketTimeout to\n > > something like 1h in order to call it and not timeout. That doesn't seem\n > > reasonable?\n > >\n > > Is the DB server local or remote (TCP/IP) to the client?\n > >\n > > Could you collect the corresponding postgres query logs when this\n > > happens ?\n > >\n > > It'd be nice to see a network trace for this too. Using tcpdump or\n > > wireshark.\n > > Preferably from the client side.\n > >\n > > FWIW, I suspect the JDBC socketTimeout is a bad workaround.\n > >\n > > --\n > > Justin\n > \n > It's a remote server, but all on a local network. Network performance is I\n > am sure not the issue. Also, the system is on Windows Server. What are you\n > expecting to see out of a tcpdump? I'll try to get PG logs on the failing query.\n > \n > Thank you,\n > Laurent.\n > \n > \n > \n > \n\nHello Justin,\n\nIt has been ages! The issue has been happening a bit more often recently, as much as once every 10 days or so. As a reminder, the set up is Postgres 13.4 on Windows Server with 16cores and 64GB memory. 
The scenario where this occurs is an ETL tool called Pentaho Kettle (V7) connecting to the DB for DataWarehouse workloads. The tool is Java-based and connects via JDBC using postgresql-42.2.5.jar. There are no particular settings besides the socketTimeout setting mentioned above.\n\nThe workload has some steps being lots of quick transactions for dimension tables for example, but some fact table calculations, especially large pivots, can make queries run for 40mn up to over an hour (a few of those).\n\nI caught these in the logs at the time of a failure but unsure what to make of that:\n\n\n2022-02-21 02:08:16.214 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:29.347 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.371 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.463 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.596 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.687 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.786 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.873 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:30.976 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:31.050 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:31.131 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:31.240 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:31.906 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:31.988 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:33.068 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:08:34.850 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:10:43.596 EST [836] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n\n\t\n2022-02-21 02:10:43.598 EST [8616] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n\n\t\n2022-02-21 02:10:43.598 EST [8616] LOG: unexpected EOF on client connection with an open transaction\n2022-02-21 02:10:43.605 EST [7000] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n\n\t\n2022-02-21 02:10:43.605 EST [7000] LOG: unexpected EOF on client connection with an open transaction\n2022-02-21 02:10:43.605 EST [1368] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n\n\t\n2022-02-21 02:10:43.605 EST [1368] LOG: unexpected EOF on client connection with an open transaction\n2022-02-21 02:10:43.605 EST [3304] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n\n\t\n2022-02-21 02:10:43.605 EST [3304] LOG: unexpected EOF on client connection with an open transaction\n2022-02-21 02:31:38.808 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:31:38.817 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:31:38.825 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:31:38.834 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:31:38.845 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n2022-02-21 02:34:32.112 EST [1704] LOG: 
setsockopt(TCP_USER_TIMEOUT) not supported\n\nThank you,\nLaurent.\n\n\n", "msg_date": "Thu, 24 Feb 2022 00:47:42 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Thu, Feb 24, 2022 at 12:47:42AM +0000, [email protected] wrote:\n> On Sat, Dec 04, 2021 at 05:32:10PM +0000, [email protected] wrote:\n> > I have a data warehouse with a fairly complex ETL process that has\n> > been running for years now across PG 9.6, 11.2 and now 13.4 for the\n> > past couple of months. I have been getting the error \"An I/O error\n> > occurred while sending to the backend\" quite often under load in 13.4\n> > which I never used to get on 11.2. I have applied some tricks, particularly\n> > with the socketTimeout JDBC configuration.\n\n> It'd be nice to see a network trace for this too. Using tcpdump or\n> wireshark. Preferably from the client side.\n> \n> Hello Justin,\n> \n> It has been ages! The issue has been happening a bit more often recently, as much as once every 10 days or so. As a reminder, the set up is Postgres 13.4 on Windows Server with 16cores and 64GB memory. The scenario where this occurs is an ETL tool called Pentaho Kettle (V7) connecting to the DB for DataWarehouse workloads. The tool is Java-based and connects via JDBC using postgresql-42.2.5.jar. There are no particular settings besides the socketTimeout setting mentioned above.\n> \n> The workload has some steps being lots of quick transactions for dimension tables for example, but some fact table calculations, especially large pivots, can make queries run for 40mn up to over an hour (a few of those).\n> \n> I caught these in the logs at the time of a failure but unsure what to make of that:\n> \n> 2022-02-21 02:10:43.605 EST [1368] LOG: unexpected EOF on client connection with an open transaction\n> 2022-02-21 02:10:43.605 EST [3304] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\n> \t\n> 2022-02-21 02:10:43.605 EST [3304] LOG: unexpected EOF on client connection with an open transaction\n> 2022-02-21 02:31:38.808 EST [1704] LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n\nI suggest to enable CSV logging, which has many more columns of data.\nSome of them might provide an insight - I'm not sure.\nlog_destination=csvlog (in addition to whatever else you have set).\n\nAnd the aforementioned network trace. You could set a capture filter on TCP\nSYN|RST so it's not absurdly large. From my notes, it might look like this:\n(tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 23 Feb 2022 19:04:15 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "You originally mailed about an error on the client, and now you found\ncorresponding server logs, which suggests a veritable network issue.\n\nAre the postgres clients and server on the same subnet ? If not, what are the\nintermediate routers ? Is there any NAT happening ? Do those devices have any\ninteresting logs that correspond with the server/client connection failures ?\n\nHave you tried enabling TCP keepalives ? 
This might help to convince a NAT\ndevice not to forget about your connection.\n\nhttps://www.postgresql.org/docs/current/runtime-config-connection.html\ntcp_keepalives_idle=9\ntcp_keepalives_interval=9\ntcp_keepalives_count=0\ntcp_user_timeout=0 -- You apparently have this set, but it cannot work on windows, so just generates noise.\n\nOn linux, you can check the keepalive counters in \"netstat -not\" to be sure\nthat it's enabled. A similar switch hopefully exists for windows.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 23 Feb 2022 20:00:05 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em qua., 23 de fev. de 2022 às 21:47, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> > -----Original Message-----\n> > From: [email protected] <[email protected]>\n> > Sent: Saturday, December 4, 2021 14:18\n> > To: Justin Pryzby <[email protected]>\n> > Cc: [email protected]\n> > Subject: RE: An I/O error occurred while sending to the backend (PG\n> 13.4)\n> >\n> >\n> > > -----Original Message-----\n> > > From: Justin Pryzby <[email protected]>\n> > > Sent: Saturday, December 4, 2021 12:59\n> > > To: [email protected]\n> > > Cc: [email protected]\n> > > Subject: Re: An I/O error occurred while sending to the\n> backend (PG\n> > > 13.4)\n> > >\n> > > On Sat, Dec 04, 2021 at 05:32:10PM +0000,\n> [email protected]\n> > > wrote:\n> > > > I have a data warehouse with a fairly complex ETL process\n> that has\n> > > been running for years now across PG 9.6, 11.2 and now 13.4\n> for the\n> > > past couple of months. I have been getting the error \"An I/O\n> error\n> > > occurred while sending to the backend\" quite often under load\n> in 13.4\n> > > which I never used to get on 11.2. I have applied some tricks,\n> > particularly\n> > > with the socketTimeout JDBC configuration.\n> > > >\n> > > > So my first question is whether anyone has any idea why this\n> is\n> > > happening? My hardware and general PG configuration have not\n> > > changed between 11.2 and 13.4 and I NEVER experienced this on\n> 11.2\n> > in\n> > > about 2y of production.\n> > > >\n> > > > Second, I have one stored procedure that takes a very long\n> time to\n> > run\n> > > (40mn more or less), so obviously, I'd need to set\n> socketTimeout to\n> > > something like 1h in order to call it and not timeout. That\n> doesn't seem\n> > > reasonable?\n> > >\n> > > Is the DB server local or remote (TCP/IP) to the client?\n> > >\n> > > Could you collect the corresponding postgres query logs when\n> this\n> > > happens ?\n> > >\n> > > It'd be nice to see a network trace for this too. Using\n> tcpdump or\n> > > wireshark.\n> > > Preferably from the client side.\n> > >\n> > > FWIW, I suspect the JDBC socketTimeout is a bad workaround.\n> > >\n> > > --\n> > > Justin\n> >\n> > It's a remote server, but all on a local network. Network\n> performance is I\n> > am sure not the issue. Also, the system is on Windows Server. What\n> are you\n> > expecting to see out of a tcpdump? I'll try to get PG logs on the\n> failing query.\n> >\n> > Thank you,\n> > Laurent.\n> >\n> >\n> >\n> >\n>\n> Hello Justin,\n>\n> It has been ages! The issue has been happening a bit more often recently,\n> as much as once every 10 days or so. 
As a reminder, the set up is Postgres\n> 13.4 on Windows Server with 16cores and 64GB memory.\n\nI can't understand why you are still using 13.4?\n[1] There is a long discussion about the issue with 13.4, the project was\nmade to fix a DLL bottleneck.\n\nWhy you not use 13.6?\n\nregards,\nRanier Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/MN2PR15MB2560BBB3EC911D973C2FE3F885A89%40MN2PR15MB2560.namprd15.prod.outlook.com\n\nEm qua., 23 de fev. de 2022 às 21:47, [email protected] <[email protected]> escreveu:\n\n   >  -----Original Message-----\n   >  From: [email protected] <[email protected]>\n   >  Sent: Saturday, December 4, 2021 14:18\n   >  To: Justin Pryzby <[email protected]>\n   >  Cc: [email protected]\n   >  Subject: RE: An I/O error occurred while sending to the backend (PG 13.4)\n   >  \n   >  \n   >     >  -----Original Message-----\n   >     >  From: Justin Pryzby <[email protected]>\n   >     >  Sent: Saturday, December 4, 2021 12:59\n   >     >  To: [email protected]\n   >     >  Cc: [email protected]\n   >     >  Subject: Re: An I/O error occurred while sending to the backend (PG\n   >     >  13.4)\n   >     >\n   >     >  On Sat, Dec 04, 2021 at 05:32:10PM +0000, [email protected]\n   >     >  wrote:\n   >     >  > I have a data warehouse with a fairly complex ETL process that has\n   >     >  been running for years now across PG 9.6, 11.2 and now 13.4 for the\n   >     >  past couple of months. I have been getting the error \"An I/O error\n   >     >  occurred while sending to the backend\" quite often under load in 13.4\n   >     >  which I never used to get on 11.2. I have applied some tricks,\n   >  particularly\n   >     >  with the socketTimeout JDBC configuration.\n   >     >  >\n   >     >  > So my first question is whether anyone has any idea why this is\n   >     >  happening? My hardware and general PG configuration have not\n   >     >  changed between 11.2 and 13.4 and I NEVER experienced this on 11.2\n   >  in\n   >     >  about 2y of production.\n   >     >  >\n   >     >  > Second, I have one stored procedure that takes a very long time to\n   >  run\n   >     >  (40mn more or less), so obviously, I'd need to set socketTimeout to\n   >     >  something like 1h in order to call it and not timeout. That doesn't seem\n   >     >  reasonable?\n   >     >\n   >     >  Is the DB server local or remote (TCP/IP) to the client?\n   >     >\n   >     >  Could you collect the corresponding postgres query logs when this\n   >     >  happens ?\n   >     >\n   >     >  It'd be nice to see a network trace for this too.  Using tcpdump or\n   >     >  wireshark.\n   >     >  Preferably from the client side.\n   >     >\n   >     >  FWIW, I suspect the JDBC socketTimeout is a bad workaround.\n   >     >\n   >     >  --\n   >     >  Justin\n   >  \n   >  It's a remote server, but all on a local network. Network performance is I\n   >  am sure not the issue. Also, the system is on Windows Server. What are you\n   >  expecting to see out of a tcpdump? I'll try to get PG logs on the failing query.\n   >  \n   >  Thank you,\n   >  Laurent.\n   >  \n   >  \n   >  \n   >  \n\nHello Justin,\n\nIt has been ages! The issue has been happening a bit more often recently, as much as once every 10 days or so. As a reminder, the set up is Postgres 13.4 on Windows Server with 16cores and 64GB memory. 
I can't understand why you are still using 13.4?[1] There is a long discussion about the issue with 13.4, the project was made to fix a DLL bottleneck.Why you not use 13.6?regards,Ranier Vilela[1] https://www.postgresql.org/message-id/MN2PR15MB2560BBB3EC911D973C2FE3F885A89%40MN2PR15MB2560.namprd15.prod.outlook.com", "msg_date": "Thu, 24 Feb 2022 08:50:45 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\n> I can't understand why you are still using 13.4?\n> [1] There is a long discussion about the issue with 13.4, the project was\n> made to fix a DLL bottleneck.\n> \n> Why you not use 13.6?\n\nThat other problem (and its fix) were in the windows build environment, and not\nan issue in some postgres version. It's still a good idea to schedule an\nupdate.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 24 Feb 2022 06:59:45 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em qui., 24 de fev. de 2022 às 09:59, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\n> > I can't understand why you are still using 13.4?\n> > [1] There is a long discussion about the issue with 13.4, the project was\n> > made to fix a DLL bottleneck.\n> >\n> > Why you not use 13.6?\n>\n> That other problem (and its fix) were in the windows build environment,\n> and not\n> an issue in some postgres version.\n\nYeah, correct.\nBut I think that it was very clear in the other thread that version 13.4,\non Windows, may have a slowdown, because of the DLL problem.\nSo it would be better to use the latest available version\nthat has this specific fix and many others.\n\nregards,\nRanier Vilela\n\nEm qui., 24 de fev. de 2022 às 09:59, Justin Pryzby <[email protected]> escreveu:On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\n> I can't understand why you are still using 13.4?\n> [1] There is a long discussion about the issue with 13.4, the project was\n> made to fix a DLL bottleneck.\n> \n> Why you not use 13.6?\n\nThat other problem (and its fix) were in the windows build environment, and not\nan issue in some postgres version.Yeah, correct.But I think that it was very clear in the other thread that version 13.4, on Windows, may have a slowdown, because of the DLL problem.So it would be better to use the latest available version that has this specific fix and many others.regards,Ranier Vilela", "msg_date": "Thu, 24 Feb 2022 10:46:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Wed, Feb 23, 2022 at 07:04:15PM -0600, Justin Pryzby wrote:\n> And the aforementioned network trace. You could set a capture filter on TCP\n> SYN|RST so it's not absurdly large. From my notes, it might look like this:\n> (tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)\n\nI'd also add '|| icmp'. 
My hunch is that you'll see some ICMP (not \"ping\")\nbeing sent by an intermediate gateway, resulting in the connection being reset.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 25 Feb 2022 07:02:13 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\r\n\r\n>From: Ranier Vilela <[email protected]> \r\n>Sent: Thursday, February 24, 2022 08:46\r\n>To: Justin Pryzby <[email protected]>\r\n>Cc: [email protected]; [email protected]\r\n>Subject: Re: An I/O error occurred while sending to the backend (PG 13.4) \r\n>\r\n>Em qui., 24 de fev. de 2022 às 09:59, Justin Pryzby <mailto:[email protected]> escreveu:\r\n>On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\r\n>> I can't understand why you are still using 13.4?\r\n>> [1] There is a long discussion about the issue with 13.4, the project was\r\n>> made to fix a DLL bottleneck.\r\n>> \r\n>> Why you not use 13.6?\r\n>\r\n>That other problem (and its fix) were in the windows build environment, and not\r\n>an issue in some postgres version.\r\n>Yeah, correct.\r\n>But I think that it was very clear in the other thread that version 13.4, \r\n>on Windows, may have a slowdown, because of the DLL problem.\r\n>So it would be better to use the latest available version \r\n>that has this specific fix and many others.\r\n>\r\n>regards,\r\n>Ranier Vilela\r\n\r\n\r\nOK, absolutely. I was thinking about even moving to 14. I know migrations within a release are painless, but my experience with upgrading across releases has also been quite good (short of bugs that were found of course). Any opinion on 14.2?\r\n\r\nThank you, Laurent.\r\n\r\n\r\n", "msg_date": "Mon, 28 Feb 2022 16:50:54 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\n > -----Original Message-----\n > From: Justin Pryzby <[email protected]>\n > Sent: Friday, February 25, 2022 08:02\n > To: [email protected]\n > Cc: [email protected]\n > Subject: Re: An I/O error occurred while sending to the backend (PG 13.4)\n > \n > On Wed, Feb 23, 2022 at 07:04:15PM -0600, Justin Pryzby wrote:\n > > And the aforementioned network trace. You could set a capture filter\n > > on TCP\n > > SYN|RST so it's not absurdly large. From my notes, it might look like this:\n > > (tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)\n > \n > I'd also add '|| icmp'. My hunch is that you'll see some ICMP (not \"ping\")\n > being sent by an intermediate gateway, resulting in the connection being\n > reset.\n > \n > --\n > Justin\n\n\nHello Justin,\n\nI am so sorry but I do not understand what you are asking me to do. I am unfamiliar with these commands. Is this a postgres configuration file? Is this something I just do once or something I leave on to hopefully catch it when the issue occurs? Is this something to do on the DB machine or the ETL machine? 
FYI:\n\n - My ETL machine is on 10.64.17.211\n - My DB machine is on 10.64.17.210\n - Both on Windows Server 2012 R2, x64\n\nSo sorry for the bother,\nLaurent.\n\n\n\n\n", "msg_date": "Mon, 28 Feb 2022 21:43:09 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Mon, Feb 28, 2022 at 09:43:09PM +0000, [email protected] wrote:\n> On Wed, Feb 23, 2022 at 07:04:15PM -0600, Justin Pryzby wrote:\n> > > And the aforementioned network trace. You could set a capture filter on TCP\n> > > SYN|RST so it's not absurdly large. From my notes, it might look like this:\n> > > (tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)\n> > \n> > I'd also add '|| icmp'. My hunch is that you'll see some ICMP (not \"ping\")\n> > being sent by an intermediate gateway, resulting in the connection being\n> > reset.\n> \n> I am so sorry but I do not understand what you are asking me to do. I am unfamiliar with these commands. Is this a postgres configuration file? Is this something I just do once or something I leave on to hopefully catch it when the issue occurs? Is this something to do on the DB machine or the ETL machine? FYI:\n\nIt's no problem.\n\nI suggest that you run wireshark with a capture filter to try to show *why* the\nconnections are failing. I think the capture filter might look like:\n\n(icmp || (tcp[tcpflags] & (tcp-rst|tcp-syn|tcp-fin)!=0)) && host 10.64.17.211\n\nWith the \"host\" filtering for the IP address of the *remote* machine.\n\nYou could run that on whichever machine is more convenient and leave it running\nfor however long it takes for that error to happen. You'll be able to save a\n.pcap file for inspection. I suppose it'll show either a TCP RST or an ICMP.\nWhichever side sent that is where the problem is. I still suspect the issue\nisn't in postgres.\n\n> - My ETL machine is on 10.64.17.211\n> - My DB machine is on 10.64.17.210\n> - Both on Windows Server 2012 R2, x64\n\nThese network details make my theory unlikely.\n\nThey're on the same subnet with no intermediate gateways, and communicate\ndirectly via a hub/switch/crossover cable. If that's true, then both will have\neach other's hardware address in ARP after pinging from one to the other.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 28 Feb 2022 16:05:03 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Justin Pryzby <[email protected]>\r\n > Sent: Monday, February 28, 2022 17:05\r\n > To: [email protected]\r\n > Cc: [email protected]\r\n > Subject: Re: An I/O error occurred while sending to the backend (PG 13.4)\r\n > \r\n > On Mon, Feb 28, 2022 at 09:43:09PM +0000, [email protected]\r\n > wrote:\r\n > > On Wed, Feb 23, 2022 at 07:04:15PM -0600, Justin Pryzby wrote:\r\n > > > > And the aforementioned network trace. You could set a capture\r\n > filter on TCP\r\n > > > > SYN|RST so it's not absurdly large. From my notes, it might look like\r\n > this:\r\n > > > > (tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)\r\n > > >\r\n > > > I'd also add '|| icmp'. My hunch is that you'll see some ICMP (not\r\n > \"ping\")\r\n > > > being sent by an intermediate gateway, resulting in the connection\r\n > being\r\n > > > reset.\r\n > >\r\n > > I am so sorry but I do not understand what you are asking me to do. 
I am\r\n > unfamiliar with these commands. Is this a postgres configuration file? Is this\r\n > something I just do once or something I leave on to hopefully catch it when\r\n > the issue occurs? Is this something to do on the DB machine or the ETL\r\n > machine? FYI:\r\n > \r\n > It's no problem.\r\n > \r\n > I suggest that you run wireshark with a capture filter to try to show *why*\r\n > the connections are failing. I think the capture filter might look like:\r\n > \r\n > (icmp || (tcp[tcpflags] & (tcp-rst|tcp-syn|tcp-fin)!=0)) && host\r\n > 10.64.17.211\r\n > \r\n > With the \"host\" filtering for the IP address of the *remote* machine.\r\n > \r\n > You could run that on whichever machine is more convenient and leave it\r\n > running for however long it takes for that error to happen. You'll be able to\r\n > save a .pcap file for inspection. I suppose it'll show either a TCP RST or an\r\n > ICMP.\r\n > Whichever side sent that is where the problem is. I still suspect the issue\r\n > isn't in postgres.\r\n > \r\n > > - My ETL machine is on 10.64.17.211\r\n > > - My DB machine is on 10.64.17.210\r\n > > - Both on Windows Server 2012 R2, x64\r\n > \r\n > These network details make my theory unlikely.\r\n > \r\n > They're on the same subnet with no intermediate gateways, and\r\n > communicate directly via a hub/switch/crossover cable. If that's true, then\r\n > both will have each other's hardware address in ARP after pinging from one\r\n > to the other.\r\n > \r\n > --\r\n > Justin\r\n\r\nYes, the machines ARE on the same subnet. They actually even are on the same physical rack as per what I have been told. When I run a tracert, I get this:\r\n\r\nTracing route to PRODDB.xxx.int [10.64.17.210] over a maximum of 30 hops:\r\n 1 1 ms <1 ms <1 ms PRODDB.xxx.int [10.64.17.210]\r\nTrace complete.\r\n\r\nNow, there is an additional component I think... Storage is on an array and I am not getting a clear answer as to where it is 😊 Is it possible that something is happening at the storage layer? Could that be reported as a network issue vs a storage issue for Postgres?\r\n\r\nAlso, both machines are actually VMs. I forgot to mention that and not sure if that's relevant.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n", "msg_date": "Tue, 1 Mar 2022 16:28:31 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Tue, Mar 01, 2022 at 04:28:31PM +0000, [email protected] wrote:\n> Now, there is an additional component I think... Storage is on an array and I am not getting a clear answer as to where it is 😊 Is it possible that something is happening at the storage layer? Could that be reported as a network issue vs a storage issue for Postgres?\n\nNo. If there were an error with storage, it'd be reported as a local error,\nand the query would fail, rather than failing with client-server communication.\n\n> Also, both machines are actually VMs. I forgot to mention that and not sure if that's relevant.\n\nAre they running on the same hypervisor ? Is that hyperv ?\nLacking other good hypotheses, that does seem relevant.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 1 Mar 2022 13:26:41 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em seg., 28 de fev. 
de 2022 às 13:50, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> >From: Ranier Vilela <[email protected]>\n> >Sent: Thursday, February 24, 2022 08:46\n> >To: Justin Pryzby <[email protected]>\n> >Cc: [email protected]; [email protected]\n> >Subject: Re: An I/O error occurred while sending to the backend (PG 13.4)\n> >\n> >Em qui., 24 de fev. de 2022 às 09:59, Justin Pryzby <mailto:\n> [email protected]> escreveu:\n> >On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\n> >> I can't understand why you are still using 13.4?\n> >> [1] There is a long discussion about the issue with 13.4, the project\n> was\n> >> made to fix a DLL bottleneck.\n> >>\n> >> Why you not use 13.6?\n> >\n> >That other problem (and its fix) were in the windows build environment,\n> and not\n> >an issue in some postgres version.\n> >Yeah, correct.\n> >But I think that it was very clear in the other thread that version 13.4,\n> >on Windows, may have a slowdown, because of the DLL problem.\n> >So it would be better to use the latest available version\n> >that has this specific fix and many others.\n> >\n> >regards,\n> >Ranier Vilela\n>\n>\n> OK, absolutely. I was thinking about even moving to 14. I know migrations\n> within a release are painless, but my experience with upgrading across\n> releases has also been quite good (short of bugs that were found of\n> course). Any opinion on 14.2?\n>\nOf course, 14.2 would be better than 13.6, but I think that there are\nchances that this specific problem is not beneficial.\nAnd both 13.6 and 14.2 still suffer from a Windows version specific issue\n[1].\nA solution has been proposed which has not yet been accepted.\n\nBut in general terms, you will benefit from adopting 14.2 for sure.\n\nregards,\nRanie Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/CAEudQAovOEM0haC4NbWZaYGW4ESmAE1j6_yr93tS8Xo8i7%2B54A%40mail.gmail.com\n\nEm seg., 28 de fev. de 2022 às 13:50, [email protected] <[email protected]> escreveu:\n\n>From: Ranier Vilela <[email protected]> \n>Sent: Thursday, February 24, 2022 08:46\n>To: Justin Pryzby <[email protected]>\n>Cc: [email protected]; [email protected]\n>Subject: Re: An I/O error occurred while sending to the backend (PG 13.4) \n>\n>Em qui., 24 de fev. de 2022 às 09:59, Justin Pryzby <mailto:[email protected]> escreveu:\n>On Thu, Feb 24, 2022 at 08:50:45AM -0300, Ranier Vilela wrote:\n>> I can't understand why you are still using 13.4?\n>> [1] There is a long discussion about the issue with 13.4, the project was\n>> made to fix a DLL bottleneck.\n>> \n>> Why you not use 13.6?\n>\n>That other problem (and its fix) were in the windows build environment, and not\n>an issue in some postgres version.\n>Yeah, correct.\n>But I think that it was very clear in the other thread that version 13.4, \n>on Windows, may have a slowdown, because of the DLL problem.\n>So it would be better to use the latest available version \n>that has this specific fix and many others.\n>\n>regards,\n>Ranier Vilela\n\n\nOK, absolutely. I was thinking about even moving to 14. I know migrations within a release are painless, but my experience with upgrading across releases has also been quite good (short of bugs that were found of course). 
Any opinion on 14.2?Of course, 14.2 would be better than 13.6, but I think that there arechances that this specific problem is not beneficial.And both 13.6 and 14.2 still suffer from a Windows version specific issue [1].A solution has been proposed which has not yet been accepted.But in general terms, you will benefit from adopting 14.2 for sure.regards,Ranie Vilela[1] https://www.postgresql.org/message-id/CAEudQAovOEM0haC4NbWZaYGW4ESmAE1j6_yr93tS8Xo8i7%2B54A%40mail.gmail.com", "msg_date": "Tue, 1 Mar 2022 22:15:49 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "\r\n\r\n > -----Original Message-----\r\n > From: Justin Pryzby <[email protected]>\r\n > Sent: Tuesday, March 1, 2022 14:27\r\n > To: [email protected]\r\n > Cc: [email protected]\r\n > Subject: Re: An I/O error occurred while sending to the backend (PG 13.4)\r\n > \r\n > On Tue, Mar 01, 2022 at 04:28:31PM +0000, [email protected]\r\n > wrote:\r\n > > Now, there is an additional component I think... Storage is on an array\r\n > and I am not getting a clear answer as to where it is 😊 Is it possible that\r\n > something is happening at the storage layer? Could that be reported as a\r\n > network issue vs a storage issue for Postgres?\r\n > \r\n > No. If there were an error with storage, it'd be reported as a local error,\r\n > and the query would fail, rather than failing with client-server\r\n > communication.\r\n > \r\n > > Also, both machines are actually VMs. I forgot to mention that and not\r\n > sure if that's relevant.\r\n > \r\n > Are they running on the same hypervisor ? Is that hyperv ?\r\n > Lacking other good hypotheses, that does seem relevant.\r\n > \r\n > --\r\n > Justin\r\n\r\nIssue happened again last night. 
I did implement your recommendations but it didn't seem to prevent the issue:\r\n\r\ntcp_keepalives_idle=9\t\t# TCP_KEEPIDLE, in seconds;\r\n\t\t\t\t\t# 0 selects the system default\r\ntcp_keepalives_interval=9\t\t# TCP_KEEPINTVL, in seconds;\r\n\t\t\t\t\t# 0 selects the system default\r\ntcp_keepalives_count=0\t\t# TCP_KEEPCNT;\r\n\t\t\t\t\t# 0 selects the system default\r\n#tcp_user_timeout = 0\t\t# TCP_USER_TIMEOUT, in milliseconds;\r\n\t\t\t\t\t# 0 selects the system default\r\n\r\nOn the client application, the exceptions are:\r\n\r\n2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error\r\n2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleStepException: \r\n2022/03/03 01:04:56 - Upsert2.0 - Error in step, asking everyone to stop because of:\r\n2022/03/03 01:04:56 - Upsert2.0 - \r\n2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\r\n2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to the backend.\r\n2022/03/03 01:04:56 - Upsert2.0 - \r\n2022/03/03 01:04:56 - Upsert2.0 - \r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:313)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)\r\n2022/03/03 01:04:56 - Upsert2.0 - at java.lang.Thread.run(Thread.java:745)\r\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: org.pentaho.di.core.exception.KettleDatabaseException: \r\n2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\r\n2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to the backend.\r\n2022/03/03 01:04:56 - Upsert2.0 - \r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1321)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1245)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1233)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:163)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)\r\n2022/03/03 01:04:56 - Upsert2.0 - ... 2 more\r\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:382)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:166)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:134)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1288)\r\n2022/03/03 01:04:56 - Upsert2.0 - ... 
6 more\r\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: java.net.SocketException: Connection reset\r\n2022/03/03 01:04:56 - Upsert2.0 - at java.net.SocketInputStream.read(SocketInputStream.java:209)\r\n2022/03/03 01:04:56 - Upsert2.0 - at java.net.SocketInputStream.read(SocketInputStream.java:141)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.PGStream.receiveChar(PGStream.java:453)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2119)\r\n2022/03/03 01:04:56 - Upsert2.0 - at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)\r\n2022/03/03 01:04:56 - Upsert2.0 - ... 11 more\r\n\r\nOn the DB:\r\n\r\n2022-03-03 01:04:40 EST [21228] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\r\n2022-03-03 01:04:40 EST [21228] LOG: unexpected EOF on client connection with an open transaction\r\n2022-03-03 01:04:40 EST [21228] LOG: disconnection: session time: 0:02:07.570 user=postgres database=Pepper host=10.64.17.211 port=63686\r\n2022-03-03 01:04:41 EST [21160] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\r\n2022-03-03 01:04:41 EST [21160] LOG: unexpected EOF on client connection with an open transaction\r\n2022-03-03 01:04:41 EST [21160] LOG: disconnection: session time: 0:02:07.730 user=postgres database=Pepper host=10.64.17.211 port=63688\r\n\r\nI don't know if that is meaningful, but I see a 15s delay between the timestamp on the database and on the application. The servers are synchronized properly. I have asked the IT team to look at the VMs and see if anything strange is happening. They are not too happy with installing WireShark to do more analysis given the \"complexity of the tools and size of the logs\" 😊 I keep on pushing.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n", "msg_date": "Thu, 3 Mar 2022 14:55:40 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em qui., 3 de mar. de 2022 às 11:55, [email protected] <\[email protected]> escreveu:\n\n>\n>\n> > -----Original Message-----\n> > From: Justin Pryzby <[email protected]>\n> > Sent: Tuesday, March 1, 2022 14:27\n> > To: [email protected]\n> > Cc: [email protected]\n> > Subject: Re: An I/O error occurred while sending to the backend (PG\n> 13.4)\n> >\n> > On Tue, Mar 01, 2022 at 04:28:31PM +0000, [email protected]\n> > wrote:\n> > > Now, there is an additional component I think... Storage is on an\n> array\n> > and I am not getting a clear answer as to where it is 😊 Is it\n> possible that\n> > something is happening at the storage layer? Could that be reported\n> as a\n> > network issue vs a storage issue for Postgres?\n> >\n> > No. 
If there were an error with storage, it'd be reported as a\n> local error,\n> > and the query would fail, rather than failing with client-server\n> > communication.\n> >\n> > > Also, both machines are actually VMs. I forgot to mention that and\n> not\n> > sure if that's relevant.\n> >\n> > Are they running on the same hypervisor ? Is that hyperv ?\n> > Lacking other good hypotheses, that does seem relevant.\n> >\n> > --\n> > Justin\n>\n> Issue happened again last night. I did implement your recommendations but\n> it didn't seem to prevent the issue:\n>\n> tcp_keepalives_idle=9 # TCP_KEEPIDLE, in seconds;\n> # 0 selects the system default\n> tcp_keepalives_interval=9 # TCP_KEEPINTVL, in seconds;\n> # 0 selects the system default\n> tcp_keepalives_count=0 # TCP_KEEPCNT;\n> # 0 selects the system default\n> #tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;\n> # 0 selects the system default\n>\n> On the client application, the exceptions are:\n>\n> 2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from\n> 2017-05-16 17.18.02 by buildguy) : Unexpected error\n> 2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from\n> 2017-05-16 17.18.02 by buildguy) :\n> org.pentaho.di.core.exception.KettleStepException:\n> 2022/03/03 01:04:56 - Upsert2.0 - Error in step, asking everyone to stop\n> because of:\n> 2022/03/03 01:04:56 - Upsert2.0 -\n> 2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\n> 2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to\n> the backend.\n> 2022/03/03 01:04:56 - Upsert2.0 -\n> 2022/03/03 01:04:56 - Upsert2.0 -\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:313)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> java.lang.Thread.run(Thread.java:745)\n> 2022/03/03 01:04:56 - Upsert2.0 - Caused by:\n> org.pentaho.di.core.exception.KettleDatabaseException:\n> 2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\n> 2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to\n> the backend.\n> 2022/03/03 01:04:56 - Upsert2.0 -\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.core.database.Database.insertRow(Database.java:1321)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.core.database.Database.insertRow(Database.java:1245)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.core.database.Database.insertRow(Database.java:1233)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:163)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)\n> 2022/03/03 01:04:56 - Upsert2.0 - ... 
2 more\n> 2022/03/03 01:04:56 - Upsert2.0 - Caused by:\n> org.postgresql.util.PSQLException: An I/O error occurred while sending to\n> the backend.\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:382)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:166)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:134)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.pentaho.di.core.database.Database.insertRow(Database.java:1288)\n> 2022/03/03 01:04:56 - Upsert2.0 - ... 6 more\n> 2022/03/03 01:04:56 - Upsert2.0 - Caused by: java.net.SocketException:\n> Connection reset\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> java.net.SocketInputStream.read(SocketInputStream.java:209)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> java.net.SocketInputStream.read(SocketInputStream.java:141)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.PGStream.receiveChar(PGStream.java:453)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2119)\n> 2022/03/03 01:04:56 - Upsert2.0 - at\n> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)\n> 2022/03/03 01:04:56 - Upsert2.0 - ... 11 more\n>\n> On the DB:\n>\n> 2022-03-03 01:04:40 EST [21228] LOG: could not receive data from client:\n> An existing connection was forcibly closed by the remote host.\n> 2022-03-03 01:04:40 EST [21228] LOG: unexpected EOF on client connection\n> with an open transaction\n>\nSorry, but this is much more on the client side.\nFollowing the logs, it is understood that the client is dropping the\nconnection.\nSo most likely the error could be from Pentaho or JDBC.\n\nhttps://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\n\" This *SocketException* occurs on the server-side when the client closed\nthe socket connection before the response could be returned over the\nsocket.\"\n\nI suggest moving this thread to the Pentaho or JDBC support.\n\nregards,\nRanier Vilela\n\nEm qui., 3 de mar. de 2022 às 11:55, [email protected] <[email protected]> escreveu:\n\n   >  -----Original Message-----\n   >  From: Justin Pryzby <[email protected]>\n   >  Sent: Tuesday, March 1, 2022 14:27\n   >  To: [email protected]\n   >  Cc: [email protected]\n   >  Subject: Re: An I/O error occurred while sending to the backend (PG 13.4)\n   >  \n   >  On Tue, Mar 01, 2022 at 04:28:31PM +0000, [email protected]\n   >  wrote:\n   >  > Now, there is an additional component I think... 
Storage is on an array\n   >  and I am not getting a clear answer as to where it is 😊 Is it possible that\n   >  something is happening at the storage layer? Could that be reported as a\n   >  network issue vs a storage issue for Postgres?\n   >  \n   >  No.  If there were an error with storage, it'd be reported as a local error,\n   >  and the query would fail, rather than failing with client-server\n   >  communication.\n   >  \n   >  > Also, both machines are actually VMs. I forgot to mention that and not\n   >  sure if that's relevant.\n   >  \n   >  Are they running on the same hypervisor ?  Is that hyperv ?\n   >  Lacking other good hypotheses, that does seem relevant.\n   >  \n   >  --\n   >  Justin\n\nIssue happened again last night. I did implement your recommendations but it didn't seem to prevent the issue:\n\ntcp_keepalives_idle=9           # TCP_KEEPIDLE, in seconds;\n                                        # 0 selects the system default\ntcp_keepalives_interval=9               # TCP_KEEPINTVL, in seconds;\n                                        # 0 selects the system default\ntcp_keepalives_count=0          # TCP_KEEPCNT;\n                                        # 0 selects the system default\n#tcp_user_timeout = 0           # TCP_USER_TIMEOUT, in milliseconds;\n                                        # 0 selects the system default\n\nOn the client application, the exceptions are:\n\n2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error\n2022/03/03 01:04:56 - Upsert2.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleStepException: \n2022/03/03 01:04:56 - Upsert2.0 - Error in step, asking everyone to stop because of:\n2022/03/03 01:04:56 - Upsert2.0 - \n2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\n2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to the backend.\n2022/03/03 01:04:56 - Upsert2.0 - \n2022/03/03 01:04:56 - Upsert2.0 - \n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:313)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)\n2022/03/03 01:04:56 - Upsert2.0 -    at java.lang.Thread.run(Thread.java:745)\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: org.pentaho.di.core.exception.KettleDatabaseException: \n2022/03/03 01:04:56 - Upsert2.0 - Error inserting/updating row\n2022/03/03 01:04:56 - Upsert2.0 - An I/O error occurred while sending to the backend.\n2022/03/03 01:04:56 - Upsert2.0 - \n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.core.database.Database.insertRow(Database.java:1321)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.core.database.Database.insertRow(Database.java:1245)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.core.database.Database.insertRow(Database.java:1233)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:163)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)\n2022/03/03 01:04:56 - Upsert2.0 -    ... 
2 more\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:382)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:166)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:134)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.pentaho.di.core.database.Database.insertRow(Database.java:1288)\n2022/03/03 01:04:56 - Upsert2.0 -    ... 6 more\n2022/03/03 01:04:56 - Upsert2.0 - Caused by: java.net.SocketException: Connection reset\n2022/03/03 01:04:56 - Upsert2.0 -    at java.net.SocketInputStream.read(SocketInputStream.java:209)\n2022/03/03 01:04:56 - Upsert2.0 -    at java.net.SocketInputStream.read(SocketInputStream.java:141)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.PGStream.receiveChar(PGStream.java:453)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2119)\n2022/03/03 01:04:56 - Upsert2.0 -    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)\n2022/03/03 01:04:56 - Upsert2.0 -    ... 
11 more\n\nOn the DB:\n\n2022-03-03 01:04:40 EST [21228] LOG:  could not receive data from client: An existing connection was forcibly closed by the remote host.\n2022-03-03 01:04:40 EST [21228] LOG:  unexpected EOF on client connection with an open transactionSorry, but this is much more on the client side.Following the logs, it is understood that the client is dropping the connection.So most likely the error could be from Pentaho or JDBC.https://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\"\nThis SocketException occurs on the server-side when the client closed the socket connection before the response could be returned over the socket.\"\nI suggest moving this thread to the Pentaho or JDBC support.\nregards,Ranier Vilela", "msg_date": "Thu, 3 Mar 2022 13:33:08 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Thu, Mar 03, 2022 at 01:33:08PM -0300, Ranier Vilela wrote:\n> Sorry, but this is much more on the client side.\n\nThe client is reporting the problem, as is the server.\n\n> Following the logs, it is understood that the client is dropping the\n> connection.\n\nThe logs show that the client's connection *was* dropped.\nAnd on the server, the same.\n\n> So most likely the error could be from Pentaho or JDBC.\n> \n> https://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\n> \" This *SocketException* occurs on the server-side when the client closed\n> the socket connection before the response could be returned over the\n> socket.\"\n> \n> I suggest moving this thread to the Pentaho or JDBC support.\n\nWe don't know the source of the problem. I still doubt it's in postgres, but I\ndon't think it's helpful to blame the client, just because the client reported\nthe problem. If the server were to disconnect abruptly, I'd expect the client\nto report that, too.\n\nLaurent would just have to start the conversation over (and probably collect\nthe same diagnostic information anyway). The client projects could blame\npostgres with as much rationale as there is for us to blame the client.\n\nPlease don't add confusion here. I made suggestions for how to collect more\ninformation to better understand the source of the problem, and there's\nprobably not much else to say without that.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 3 Mar 2022 10:46:04 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em qui., 3 de mar. 
de 2022 às 13:46, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Thu, Mar 03, 2022 at 01:33:08PM -0300, Ranier Vilela wrote:\n> > Sorry, but this is much more on the client side.\n>\n> The client is reporting the problem, as is the server.\n>\n Are you read the server log?\n\" 2022-03-03 01:04:40 EST [21228] LOG: could not receive data from client:\nAn existing connection was forcibly closed by the remote host.\n2022-03-03 01:04:40 EST [21228] LOG: unexpected EOF on client connection\nwith an open transaction\"\n\n\n> > Following the logs, it is understood that the client is dropping the\n> > connection.\n>\n> The logs show that the client's connection *was* dropped.\n> And on the server, the same.\n>\nNo, the log server shows that the client dropped the connection.\n\n\n>\n> > So most likely the error could be from Pentaho or JDBC.\n> >\n> >\n> https://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\n> > \" This *SocketException* occurs on the server-side when the client closed\n> > the socket connection before the response could be returned over the\n> > socket.\"\n> >\n> > I suggest moving this thread to the Pentaho or JDBC support.\n>\n> We don't know the source of the problem.\n\nYeah, but it is much more likely to be on the client.\n\n\n> I still doubt it's in postgres,\n\nEverything indicates not.\n\nbut I\n> don't think it's helpful to blame the client, just because the client\n> reported\n> the problem. If the server were to disconnect abruptly, I'd expect the\n> client\n> to report that, too.\n>\n> Laurent would just have to start the conversation over (and probably\n> collect\n> the same diagnostic information anyway). The client projects could blame\n> postgres with as much rationale as there is for us to blame the client.\n>\n> Please don't add confusion here.\n\nI just suggested, this is not an order.\n\nregards,\nRanier Vilela\n\nEm qui., 3 de mar. de 2022 às 13:46, Justin Pryzby <[email protected]> escreveu:On Thu, Mar 03, 2022 at 01:33:08PM -0300, Ranier Vilela wrote:\n> Sorry, but this is much more on the client side.\n\nThe client is reporting the problem, as is the server. Are you read the server log?\"\n2022-03-03 01:04:40 EST [21228] LOG:  could not receive data from \nclient: An existing connection was forcibly closed by the remote host.\n2022-03-03 01:04:40 EST [21228] LOG:  unexpected EOF on client connection with an open transaction\"\n\n> Following the logs, it is understood that the client is dropping the\n> connection.\n\nThe logs show that the client's connection *was* dropped.\nAnd on the server, the same.No, the log server shows that the client dropped the connection. \n\n> So most likely the error could be from Pentaho or JDBC.\n> \n> https://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\n> \" This *SocketException* occurs on the server-side when the client closed\n> the socket connection before the response could be returned over the\n> socket.\"\n> \n> I suggest moving this thread to the Pentaho or JDBC support.\n\nWe don't know the source of the problem.Yeah, but it is much more likely to be on the client.   I still doubt it's in postgres, Everything indicates not. but I\ndon't think it's helpful to blame the client, just because the client reported\nthe problem.  If the server were to disconnect abruptly, I'd expect the client\nto report that, too.\n\nLaurent would just have to start the conversation over (and probably collect\nthe same diagnostic information anyway).  
The client projects could blame\npostgres with as much rationale as there is for us to blame the client.\n\nPlease don't add confusion here. I just suggested, this is not an order. regards,Ranier Vilela", "msg_date": "Thu, 3 Mar 2022 13:56:51 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "I am also starting to feel that the issue being on the database’s side is less and less likely. There is something happening in between, or possibly on the client.\r\n\r\nRanier, the only reason I was focusing on this at the PG level is that this issue started to show up several months ago shortly after I updated to PG13 from PG11. Had run PG11 for 2 years without ever seeing that issue at all. The ETL itself hasn’t changed either, except for upgrading the JDBC driver… But I did revert back to an older JDBC driver and the issue still did occur eventually.\r\n\r\nOf course, other things could have changed in the client’s IT infrastructure that I am not aware of, so I am pushing that angle as well more aggressively now. I am also pushing for WireShark to monitor the network more closely. Stay tuned!\r\n\r\nThank you so much all for your support but at this time, I think the ball is in my camp and working out with it on some plan.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\nFrom: Ranier Vilela <[email protected]>\r\nSent: Thursday, March 3, 2022 11:57\r\nTo: Justin Pryzby <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: An I/O error occurred while sending to the backend (PG 13.4)\r\n\r\n\r\nEm qui., 3 de mar. de 2022 às 13:46, Justin Pryzby <[email protected]<mailto:[email protected]>> escreveu:\r\nOn Thu, Mar 03, 2022 at 01:33:08PM -0300, Ranier Vilela wrote:\r\n> Sorry, but this is much more on the client side.\r\n\r\nThe client is reporting the problem, as is the server.\r\n Are you read the server log?\r\n\" 2022-03-03 01:04:40 EST [21228] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\r\n2022-03-03 01:04:40 EST [21228] LOG: unexpected EOF on client connection with an open transaction\"\r\n\r\n\r\n> Following the logs, it is understood that the client is dropping the\r\n> connection.\r\n\r\nThe logs show that the client's connection *was* dropped.\r\nAnd on the server, the same.\r\nNo, the log server shows that the client dropped the connection.\r\n\r\n\r\n> So most likely the error could be from Pentaho or JDBC.\r\n>\r\n> https://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\r\n> \" This *SocketException* occurs on the server-side when the client closed\r\n> the socket connection before the response could be returned over the\r\n> socket.\"\r\n>\r\n> I suggest moving this thread to the Pentaho or JDBC support.\r\n\r\nWe don't know the source of the problem.\r\nYeah, but it is much more likely to be on the client.\r\n\r\n I still doubt it's in postgres,\r\nEverything indicates not.\r\n\r\nbut I\r\ndon't think it's helpful to blame the client, just because the client reported\r\nthe problem. If the server were to disconnect abruptly, I'd expect the client\r\nto report that, too.\r\n\r\nLaurent would just have to start the conversation over (and probably collect\r\nthe same diagnostic information anyway). 
The client projects could blame\r\npostgres with as much rationale as there is for us to blame the client.\r\n\r\nPlease don't add confusion here.\r\nI just suggested, this is not an order.\r\n\r\nregards,\r\nRanier Vilela\r\n\n\n\n\n\n\n\n\n\nI am also starting to feel that the issue being on the database’s side is less and less likely. There is something happening in between, or possibly on the client.\n \nRanier, the only reason I was focusing on this at the PG level is that this issue started to show up several months ago shortly after I updated to PG13 from PG11. Had run PG11 for 2 years without ever seeing that issue at all. The ETL itself\r\n hasn’t changed either, except for upgrading the JDBC driver… But I did revert back to an older JDBC driver and the issue still did occur eventually.\n \nOf course, other things could have changed in the client’s IT infrastructure that I am not aware of, so I am pushing that angle as well more aggressively now. I am also pushing for WireShark to monitor the network more closely. Stay tuned!\n \nThank you so much all for your support but at this time, I think the ball is in my camp and working out with it on some plan.\n \nThank you,\nLaurent.\n \n \n\n\n\nFrom: Ranier Vilela <[email protected]> \nSent: Thursday, March 3, 2022 11:57\nTo: Justin Pryzby <[email protected]>\nCc: [email protected]; [email protected]\nSubject: Re: An I/O error occurred while sending to the backend (PG 13.4)\n\n\n \n\n \n\n\nEm qui., 3 de mar. de 2022 às 13:46, Justin Pryzby <[email protected]> escreveu:\n\n\nOn Thu, Mar 03, 2022 at 01:33:08PM -0300, Ranier Vilela wrote:\r\n> Sorry, but this is much more on the client side.\n\r\nThe client is reporting the problem, as is the server.\n\n\n Are you read the server log?\n\n\n\" 2022-03-03 01:04:40 EST [21228] LOG:  could not receive data from client: An existing connection was forcibly closed by the remote host.\r\n2022-03-03 01:04:40 EST [21228] LOG:  unexpected EOF on client connection with an open transaction\"\n\n\n \n\n\n\r\n> Following the logs, it is understood that the client is dropping the\r\n> connection.\n\r\nThe logs show that the client's connection *was* dropped.\r\nAnd on the server, the same.\n\n\nNo, the log server shows that the client dropped the connection.\n\n\n \n\n\n\r\n> So most likely the error could be from Pentaho or JDBC.\r\n> \r\n> \r\nhttps://www.geeksforgeeks.org/java-net-socketexception-in-java-with-examples/\r\n> \" This *SocketException* occurs on the server-side when the client closed\r\n> the socket connection before the response could be returned over the\r\n> socket.\"\r\n> \r\n> I suggest moving this thread to the Pentaho or JDBC support.\n\r\nWe don't know the source of the problem.\n\n\nYeah, but it is much more likely to be on the client.\n\n\n \n\n\n  I still doubt it's in postgres, \n\n\nEverything indicates not.\n\n\n \n\n\nbut I\r\ndon't think it's helpful to blame the client, just because the client reported\r\nthe problem.  If the server were to disconnect abruptly, I'd expect the client\r\nto report that, too.\n\r\nLaurent would just have to start the conversation over (and probably collect\r\nthe same diagnostic information anyway).  The client projects could blame\r\npostgres with as much rationale as there is for us to blame the client.\n\r\nPlease don't add confusion here. 
\n\n\nI just suggested, this is not an order.\n\n\n \n\nregards,\n\n\nRanier Vilela", "msg_date": "Thu, 3 Mar 2022 18:19:27 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "Em qui., 3 de mar. de 2022 às 15:19, [email protected] <\[email protected]> escreveu:\n\n> I am also starting to feel that the issue being on the database’s side is\n> less and less likely. There is something happening in between, or possibly\n> on the client.\n>\n>\n>\n> Ranier, the only reason I was focusing on this at the PG level is that\n> this issue started to show up several months ago shortly after I updated to\n> PG13 from PG11. Had run PG11 for 2 years without ever seeing that issue at\n> all. The ETL itself hasn’t changed either, except for upgrading the JDBC\n> driver… But I did revert back to an older JDBC driver and the issue still\n> did occur eventually.\n>\n>\n>\n> Of course, other things could have changed in the client’s IT\n> infrastructure that I am not aware of, so I am pushing that angle as well\n> more aggressively now. I am also pushing for WireShark to monitor the\n> network more closely. Stay tuned!\n>\n>\n>\n> Thank you so much all for your support but at this time, I think the ball\n> is in my camp and working out with it on some plan.\n>\nYou are welcome.\n\nregards,\nRanier Vilela\n\nEm qui., 3 de mar. de 2022 às 15:19, [email protected] <[email protected]> escreveu:\n\n\nI am also starting to feel that the issue being on the database’s side is less and less likely. There is something happening in between, or possibly on the client.\n \nRanier, the only reason I was focusing on this at the PG level is that this issue started to show up several months ago shortly after I updated to PG13 from PG11. Had run PG11 for 2 years without ever seeing that issue at all. The ETL itself\n hasn’t changed either, except for upgrading the JDBC driver… But I did revert back to an older JDBC driver and the issue still did occur eventually.\n \nOf course, other things could have changed in the client’s IT infrastructure that I am not aware of, so I am pushing that angle as well more aggressively now. I am also pushing for WireShark to monitor the network more closely. Stay tuned!\n \nThank you so much all for your support but at this time, I think the ball is in my camp and working out with it on some plan.You are welcome.regards,Ranier Vilela", "msg_date": "Thu, 3 Mar 2022 15:22:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": ">From: Ranier Vilela [email protected]<mailto:[email protected]>\r\n>Sent: Thursday, March 03, 2022 13:22\r\n>\r\n>\r\n>You are welcome.\r\n>\r\n>regards,\r\n>Ranier Vilela\r\n\r\n\r\n\r\nHello all,\r\n\r\nAfter a lot of back and forth, someone in IT informed us that the database VM is under a backup schedule using Veeam. Apparently, during the backup window, Veeam creates a snapshot and that takes the VM offline for a couple of minutes… And of course, they scheduled this right at the busiest time of the day for this machine which is during our nightly ETL. 
Their backup doesn’t perform very week either, which explained why the failure seemed to randomly happen at various points during our ETL (which takes about 2h30mn).\r\n\r\nThey moved the schedule out and the issue has not happened again over the past 3 weeks. This looks like it was the root cause and would explain (I think) how the database and the client simultaneously reported a connection timeout.\r\n\r\nThank you so much for all your help in trying to figure this out and exonerate Postgres.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n>From: Ranier Vilela [email protected]\n\r\n>Sent: Thursday, March 03, 2022 13:22\r\n>\n> \n>You are welcome.\n> \n>regards,\n>Ranier Vilela\n\n \n \n\n \nHello all,\n \nAfter a lot of back and forth, someone in IT informed us that the database VM is under a backup schedule using Veeam. Apparently, during the backup window, Veeam creates a snapshot and that takes the VM offline for a couple of minutes…\r\n And of course, they scheduled this right at the busiest time of the day for this machine which is during our nightly ETL. Their backup doesn’t perform very week either, which explained why the failure seemed to randomly happen at various points during our\r\n ETL (which takes about 2h30mn).\n \nThey moved the schedule out and the issue has not happened again over the past 3 weeks. This looks like it was the root cause and would explain (I think) how the database and the client simultaneously reported a connection timeout.\n \nThank you so much for all your help in trying to figure this out and exonerate Postgres.\n \nThank you,\nLaurent.", "msg_date": "Wed, 13 Apr 2022 15:36:19 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: An I/O error occurred while sending to the backend (PG 13.4)" }, { "msg_contents": "On Wed, Apr 13, 2022 at 03:36:19PM +0000, [email protected] wrote:\n> After a lot of back and forth, someone in IT informed us that the database VM is under a backup schedule using Veeam. Apparently, during the backup window, Veeam creates a snapshot and that takes the VM offline for a couple of minutes… And of course, they scheduled this right at the busiest time of the day for this machine which is during our nightly ETL. Their backup doesn’t perform very week either, which explained why the failure seemed to randomly happen at various points during our ETL (which takes about 2h30mn).\n> \n> They moved the schedule out and the issue has not happened again over the past 3 weeks. This looks like it was the root cause and would explain (I think) how the database and the client simultaneously reported a connection timeout.\n> \n> Thank you so much for all your help in trying to figure this out and exonerate Postgres.\n\nGreat, thanks for letting us know.\nThis time it wasn't postgres' fault; you're 2 for 3 ;)\n\nOne issue I've seen is if a vmware snapshot is taken and then saved for a long\ntime. It can be okay if VEEM takes a transient snapshot, copies its data, and\nthen destroys the snapshot. But it can be bad if multiple snapshots are taken\nand then left around for a long time to use as a backup themselves.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 13 Apr 2022 10:43:19 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An I/O error occurred while sending to the backend (PG 13.4)" } ]
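A guest freeze like the Veeam snapshot window described above can be confirmed from inside the database with a heartbeat table. This is only a sketch, assuming a small scratch table and an external scheduler (cron or a client loop) driving the insert; the names are made up for illustration:

-- heartbeat table: one row per tick
CREATE TABLE IF NOT EXISTS heartbeat (ts timestamptz NOT NULL DEFAULT now());

-- run this every few seconds from the scheduler
INSERT INTO heartbeat DEFAULT VALUES;

-- gaps much larger than the tick interval mark windows where the VM or the server was stalled
SELECT ts, gap
FROM (
    SELECT ts, ts - lag(ts) OVER (ORDER BY ts) AS gap
    FROM heartbeat
) t
WHERE gap > interval '1 minute'
ORDER BY ts;

If the gaps line up with the backup schedule, the stall is outside PostgreSQL, which matches the resolution reported above.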
[ { "msg_contents": "Hi ,\n\nPostgreSQLv14 source build/compiled with GCCv11.1 and bin's run different\nmachine like single machine and client-server machine.\n\nobserved Single Milan machine, the NOPM is more or less half with the\nClient-Server method.\n\nAnd checked the network bandwidth on Client-Server machine, it is similar\nbandwidth(transmit request and receive) and tcp/udp ports same bandwidth.\n\nOnly the difference in Client-Server is RAM size and Cache(L1/L2/L3). is\nthis cause drop in NOPM?\n\nIs another recommend configurations or parameters need to check via\nHammerDBv4.x\n\nIn Client-server model(HammerDBv4.x run in Client and PostgreSQLv14 run in\nServer Model)\n\n12 VU:NOPM 431811)\n\nOn Single or Sole Machine (both HammerDBv4.x & PostgreSQLv14 run same\nmachine )\n\n12 VU: NOPM:728825\n\nHi ,PostgreSQLv14 source build/compiled with GCCv11.1 and bin's run different machine like single machine and client-server machine.observed Single Milan machine, the NOPM is more or less half with the Client-Server method.And checked the network bandwidth on Client-Server machine, it is similar bandwidth(transmit request and receive) and tcp/udp ports same bandwidth.Only the difference in Client-Server is RAM size and Cache(L1/L2/L3). is this cause drop in NOPM?Is another recommend configurations or parameters need to check via HammerDBv4.xIn Client-server model(HammerDBv4.x run in Client and PostgreSQLv14 run in Server Model)12 VU:NOPM 431811)On Single or Sole Machine (both HammerDBv4.x & PostgreSQLv14 run same machine )12 VU: NOPM:728825", "msg_date": "Thu, 16 Dec 2021 21:05:03 +0530", "msg_from": "arjun shetty <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQLv14 performance client-server-HammerDB" } ]
[ { "msg_contents": "First of all, here is the version of PostgreSQL I'm using:\nPostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc\n(GCC) 7.4.0, 64-bit\n\nI'm new to PostgreSQL, and I'm deciding if I should make columns in my\ndatabase nullable or not.\n\nI have no need to distinguish between blank/zero and null. But I have\nnoticed that using NULL for unused values does save disk space, as opposed\nto using zero/blank default values.\n\nIn my table I have 142 columns and 18,508,470 rows. Using NULLs instead of\nzero/blank reduces the table storage space from 11.462 GB down to 9.120 GB.\nThat's a 20% reduction in size, and from what I know about the data it's\nabout right that 20% of the values in the database are unused.\n\nI would think that any full table scans would run faster against the table\nthat has the null values since there are less data pages to read. But,\nactually, the opposite is true, and by quite a large margin (about 50%\nslower!).\n\nIn the \"Slow Query Questions\" guide, it says to mention if the table\n\n - has a large proportion of NULLs in several columns\n\nYes, it does, I would estimate that about 20% of the time a column's value\nis null. Why does this matter? Is this a known thing about PostgreSQL\nperformance? If so, where do I read about it?\n\nThe table does not contain large objects, has been freshly loaded (so not a\nlot of UPDATE/DELETEs), is not growing, only has the 1 primary index,\nand does not use triggers.\n\nAnyway, below are the query results. The field being selected\n(creation_user) is not in any index, which forces a full table scan:\n\n--> 18.844 sec to execute when all columns defined NOT NULL WITH DEFAULT,\ntable size is 11.462 GB\nselect creation_user, count(*)\n from eu.royalty_no_null\n group by creation_user;\n\ncreation_user|count |\n-------------+--------+\n[BLANK] | 84546|\nBACOND | 10|\nBALUN | 2787|\nFOGGOL | 109|\nTRBATCH |18421018|\nQUERY PLAN\nFinalize GroupAggregate (cost=1515478.96..1515479.72 rows=3 width=15)\n(actual time=11133.324..11135.311 rows=5 loops=1)\n Group Key: creation_user\n I/O Timings: read=1884365.335\n -> Gather Merge (cost=1515478.96..1515479.66 rows=6 width=15) (actual\ntime=11133.315..11135.300 rows=13 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n I/O Timings: read=1884365.335\n -> Sort (cost=1514478.94..1514478.95 rows=3 width=15) (actual\ntime=11127.396..11127.398 rows=4 loops=3)\n Sort Key: creation_user\n Sort Method: quicksort Memory: 25kB\n I/O Timings: read=1884365.335\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n -> Partial HashAggregate (cost=1514478.89..1514478.92\nrows=3 width=15) (actual time=11127.370..11127.372 rows=4 loops=3)\n Group Key: creation_user\n Batches: 1 Memory Usage: 24kB\n I/O Timings: read=1884365.335\n Worker 0: Batches: 1 Memory Usage: 40kB\n Worker 1: Batches: 1 Memory Usage: 40kB\n -> Parallel Seq Scan on royalty_no_null\n (cost=0.00..1475918.59 rows=7712059 width=7) (actual time=0.006..9339.296\nrows=6169490 loops=3)\n I/O Timings: read=1884365.335\nSettings: effective_cache_size = '21553496kB', maintenance_io_concurrency =\n'1', search_path = 'public, public, \"$user\"'\nPlanning Time: 0.098 ms\nExecution Time: 11135.368 ms\n\n\n--> 30.57 sec to execute when all columns are nullable instead of\ndefaulting to zero/blank, table size is 9.120 GB:\nselect creation_user, count(*)\n from eu.royalty_with_null\n group by creation_user;\n\ncreation_user|count |\n-------------+--------+\nBACOND | 
10|\nBALUN | 2787|\nFOGGOL | 109|\nTRBATCH |18421018|\n[NULL] | 84546|\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------+\nFinalize GroupAggregate (cost=1229649.93..1229650.44 rows=2 width=15)\n(actual time=25404.925..25407.262 rows=5 loops=1)\n Group Key: creation_user\n I/O Timings: read=17141420.771\n -> Gather Merge (cost=1229649.93..1229650.40 rows=4 width=15) (actual\ntime=25404.917..25407.249 rows=12 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n I/O Timings: read=17141420.771\n -> Sort (cost=1228649.91..1228649.91 rows=2 width=15) (actual\ntime=25398.004..25398.006 rows=4 loops=3)\n Sort Key: creation_user\n Sort Method: quicksort Memory: 25kB\n I/O Timings: read=17141420.771\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n -> Partial HashAggregate (cost=1228649.88..1228649.90\nrows=2 width=15) (actual time=25397.918..25397.920 rows=4 loops=3)\n Group Key: creation_user\n Batches: 1 Memory Usage: 24kB\n I/O Timings: read=17141420.771\n Worker 0: Batches: 1 Memory Usage: 40kB\n Worker 1: Batches: 1 Memory Usage: 40kB\n -> Parallel Seq Scan on royalty_with_null\n (cost=0.00..1190094.92 rows=7710992 width=7) (actual time=1.063..21481.517\nrows=6169490 loops=3)\n I/O Timings: read=17141420.771\nSettings: effective_cache_size = '21553496kB', maintenance_io_concurrency =\n'1', search_path = 'public, public, \"$user\"'\nPlanning Time: 0.112 ms\nExecution Time: 25407.318 ms\n\nThe query runs about 50% longer even though I would think there are 20%\nless disk pages to read!\n\nIn this particular column creation_user very few values are unused, but in\nother columns in the table many more of the rows have an unused value\n(blank/zero or NULL).\n\nIt seems to make no difference if the column selected is near the beginning\nof the row or the end, results are about the same.\n\nWhat is it about null values in the table that slows down the full table\nscan?\n\nIf I populate blank/zero for all of the unused values in columns that are\nNULLable, the query is fast again. 
So just defining the columns as NULLable\nisn't what slows it down -- it's actually the NULL values in the rows that\nseems to degrade performance.\n\nDo other operations (besides full table scan) get slowed down by null\nvalues as well?\n\nHere is the table definition with no nulls, the other table is the same\nexcept that all columns are NULLable.\nCREATE TABLE eu.royalty_no_null (\nisn int4 NOT NULL,\ncontract_key numeric(19) NOT NULL DEFAULT 0,\nco_code numeric(7) NOT NULL DEFAULT 0,\nrec_status numeric(1) NOT NULL DEFAULT 0,\nrecord_type numeric(1) NOT NULL DEFAULT 0,\ncontract_no numeric(6) NOT NULL DEFAULT 0,\nsubcon_no numeric(4) NOT NULL DEFAULT 0,\nsub_division varchar(6) NOT NULL DEFAULT '',\ntop_price_perc numeric(3) NOT NULL DEFAULT 0,\nbar_code_ind varchar(1) NOT NULL DEFAULT '',\nprocess_step_no varchar(1) NOT NULL DEFAULT '',\nmain_code_group varchar(1) NOT NULL DEFAULT '',\ncondition varchar(1) NOT NULL DEFAULT '',\nneg_sales_ind varchar(1) NOT NULL DEFAULT '',\ncon_type_ind varchar(1) NOT NULL DEFAULT '',\nexchg_tape_ind varchar(1) NOT NULL DEFAULT '',\nbagatelle_ind varchar(1) NOT NULL DEFAULT '',\nrestrict_terr_ind varchar(1) NOT NULL DEFAULT '',\nsys_esc_ind varchar(1) NOT NULL DEFAULT '',\nequiv_prc_ind varchar(1) NOT NULL DEFAULT '',\nscal_fact_ind varchar(1) NOT NULL DEFAULT '',\nsell_off_ind varchar(1) NOT NULL DEFAULT '',\ncut_rate_ind varchar(1) NOT NULL DEFAULT '',\ngross_nett_ind varchar(1) NOT NULL DEFAULT '',\nsleeve_ind varchar(1) NOT NULL DEFAULT '',\nrate_ind varchar(1) NOT NULL DEFAULT '',\nprice_basis_calc varchar(1) NOT NULL DEFAULT '',\nsource_price_basis varchar(1) NOT NULL DEFAULT '',\nesc_ind varchar(1) NOT NULL DEFAULT '',\npayment_period varchar(1) NOT NULL DEFAULT '',\nsubcon_reserve_ind varchar(1) NOT NULL DEFAULT '',\nrelease_reason_code varchar(1) NOT NULL DEFAULT '',\nbagatelle_qty_val_ind varchar(1) NOT NULL DEFAULT '',\npayable_ind varchar(1) NOT NULL DEFAULT '',\nrecord_sequence numeric(1) NOT NULL DEFAULT 0,\nsales_type numeric(1) NOT NULL DEFAULT 0,\nrate_index numeric(1) NOT NULL DEFAULT 0,\nparticipation numeric(7, 4) NOT NULL DEFAULT 0,\ncontract_co numeric(7) NOT NULL DEFAULT 0,\nreporting_co numeric(7) NOT NULL DEFAULT 0,\narticle_no varchar(13) NOT NULL DEFAULT '',\nart_cat_adm numeric(7) NOT NULL DEFAULT 0,\nequiv_config varchar(2) NOT NULL DEFAULT '',\nmusic_class numeric(2) NOT NULL DEFAULT 0,\nsales_reference_no numeric(7) NOT NULL DEFAULT 0,\nsales_processing_no numeric(2) NOT NULL DEFAULT 0,\nsales_trans_code numeric(4) NOT NULL DEFAULT 0,\nterr_combination numeric(6) NOT NULL DEFAULT 0,\nrate_no numeric(7) NOT NULL DEFAULT 0,\nrate varchar(20) NOT NULL DEFAULT '',\nrate_normal numeric(7, 3) NOT NULL DEFAULT 0,\nrate_esc_1 numeric(7, 3) NOT NULL DEFAULT 0,\nsales_process_qty numeric(9) NOT NULL DEFAULT 0,\nroy_price numeric(11, 4) NOT NULL DEFAULT 0,\nroyalty_fee numeric(11, 2) NOT NULL DEFAULT 0,\nunit_fee numeric(9, 4) NOT NULL DEFAULT 0,\narticle_release_date numeric(6) NOT NULL DEFAULT 0,\nroyalty_rate numeric(7, 3) NOT NULL DEFAULT 0,\norig_article_no varchar(13) NOT NULL DEFAULT '',\nno_of_records_in_set numeric(3) NOT NULL DEFAULT 0,\nsales_end_period_date numeric(7) NOT NULL DEFAULT 0,\nsales_settlement_period numeric(5) NOT NULL DEFAULT 0,\nsales_processing_date numeric(7) NOT NULL DEFAULT 0,\nsales_date numeric(7) NOT NULL DEFAULT 0,\nsales_record_no numeric(9) NOT NULL DEFAULT 0,\nequiv_prc_perc numeric(5, 2) NOT NULL DEFAULT 0,\nscal_fact_perc numeric(5, 2) NOT NULL DEFAULT 0,\nsell_off_perc numeric(5, 2) NOT NULL 
DEFAULT 0,\nprice_adj numeric(5, 2) NOT NULL DEFAULT 0,\nsleeve_allowance numeric(5, 2) NOT NULL DEFAULT 0,\nsales_percentage numeric(5, 2) NOT NULL DEFAULT 0,\nprice_basis_perc numeric(5, 2) NOT NULL DEFAULT 0,\nqtr_date_yyyyq numeric(5) NOT NULL DEFAULT 0,\norig_subcon_no numeric(4) NOT NULL DEFAULT 0,\norig_sales_channel varchar(2) NOT NULL DEFAULT '',\norig_sales_terr numeric(3) NOT NULL DEFAULT 0,\nsubcon_reserve_code numeric(4) NOT NULL DEFAULT 0,\ngross_sales_qty numeric(9) NOT NULL DEFAULT 0,\nnet_sales_qty numeric(9) NOT NULL DEFAULT 0,\nsales_reserved_qty numeric(9) NOT NULL DEFAULT 0,\nsales_reserved_perc numeric(5, 2) NOT NULL DEFAULT 0,\nreleased_qty numeric(9) NOT NULL DEFAULT 0,\nsource_tax_perc numeric(5, 2) NOT NULL DEFAULT 0,\nsource_tax_amount numeric(11, 2) NOT NULL DEFAULT 0,\nincome_amount numeric(11, 2) NOT NULL DEFAULT 0,\ngross_income numeric(11, 2) NOT NULL DEFAULT 0,\nexchange_rate numeric(13, 7) NOT NULL DEFAULT 0,\nsales_origin_idc varchar(1) NOT NULL DEFAULT '',\nblack_box varchar(30) NOT NULL DEFAULT '',\ncontract_expiry_date numeric(6) NOT NULL DEFAULT 0,\ncontract_expiry_period numeric(3) NOT NULL DEFAULT 0,\nbagatelle_amount numeric(9) NOT NULL DEFAULT 0,\nto_be_released_date numeric(6) NOT NULL DEFAULT 0,\nsales_start_period_date numeric(7) NOT NULL DEFAULT 0,\nselling_company numeric(7) NOT NULL DEFAULT 0,\nroyalty_amount numeric(11, 2) NOT NULL DEFAULT 0,\ncurrency_code numeric(3) NOT NULL DEFAULT 0,\nroy_price_curr numeric(11, 4) NOT NULL DEFAULT 0,\nroyalty_fee_curr numeric(11, 2) NOT NULL DEFAULT 0,\nunit_fee_curr numeric(9, 4) NOT NULL DEFAULT 0,\nsource_tax_amount_curr numeric(11, 2) NOT NULL DEFAULT 0,\nroyalty_amount_curr numeric(11, 2) NOT NULL DEFAULT 0,\nchange_description varchar(40) NOT NULL DEFAULT '',\nerror_no numeric(2) NOT NULL DEFAULT 0,\nfirst_rel_ind varchar(1) NOT NULL DEFAULT '',\nmax50_percentage numeric(5, 2) NOT NULL DEFAULT 0,\nmax50_compare_ind varchar(1) NOT NULL DEFAULT '',\nppd_fin_curr numeric(11, 4) NOT NULL DEFAULT 0,\nproject_ref_nr varchar(15) NOT NULL DEFAULT '',\nreserve_priority_ind varchar(1) NOT NULL DEFAULT '',\nesca_seqno numeric(7) NOT NULL DEFAULT 0,\ncut_rate_perc numeric(5, 2) NOT NULL DEFAULT 0,\ncp_details_c int2 NOT NULL DEFAULT 0,\nprice_info_c int2 NOT NULL DEFAULT 0,\narts_con_recording_c int2 NOT NULL DEFAULT 0,\nprocessing_company numeric(7) NOT NULL DEFAULT 0,\ndate_time_change_p numeric(13) NOT NULL DEFAULT 0,\nlocked_ind_b bytea NOT NULL DEFAULT '\\x00',\ndate_time_cleanup_p numeric(13) NOT NULL DEFAULT 0,\nmin_price_perc numeric(5, 2) NOT NULL DEFAULT 0,\nreserve_sale_type varchar(1) NOT NULL DEFAULT '',\nuplift_perc numeric(5, 2) NOT NULL DEFAULT 0,\ndouble_ind bytea NOT NULL DEFAULT '\\x00',\nmin_unit_fee numeric(9, 4) NOT NULL DEFAULT 0,\nsales_batch_type varchar(1) NOT NULL DEFAULT '',\nprev_esca_qty numeric(9) NOT NULL DEFAULT 0,\narts_con_recording_2_c int2 NOT NULL DEFAULT 0,\naif_share_percentage numeric(5, 2) NOT NULL DEFAULT 0,\naif_rate_percentage numeric(5, 2) NOT NULL DEFAULT 0,\noriginal_recording_id varchar(12) NOT NULL DEFAULT '',\ninter_companied_line bytea NOT NULL DEFAULT '\\x00',\nlast_chg_date_time timestamp(6) NOT NULL DEFAULT '0001-01-01',\nlast_chg_by_id varchar(8) NOT NULL DEFAULT '',\ncreation_date timestamp(6) NOT NULL DEFAULT '0001-01-01',\ncreation_user varchar(8) NOT NULL DEFAULT '',\naif_sublic_ind varchar(1) NOT NULL DEFAULT '',\nsublicensee numeric(7) NOT NULL DEFAULT 0,\nscal_fact_qty numeric(5) NOT NULL DEFAULT 0,\npay_delay numeric(3) NOT NULL DEFAULT 
0,\nrecalc_date timestamp(6) NOT NULL DEFAULT '0001-01-01',\nrecalc_userid varchar(8) NOT NULL DEFAULT '',\nsent_to_sap_ind varchar(1) NOT NULL DEFAULT '',\nCONSTRAINT royalty_no_null_pkey PRIMARY KEY (isn)\n);\n\nFirst of all, here is the version of PostgreSQL I'm using:PostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bitI'm new to PostgreSQL, and I'm deciding if I should make columns in my database nullable or not.I have no need to distinguish between blank/zero and null. But I have noticed that using NULL for unused values does save disk space, as opposed to using zero/blank default values.In my table I have 142 columns and 18,508,470 rows. Using NULLs instead of zero/blank reduces the table storage space from 11.462 GB down to 9.120 GB. That's a 20% reduction in size, and from what I know about the data it's about right that 20% of the values in the database are unused.I would think that any full table scans would run faster against the table that has the null values since there are less data pages to read. But, actually, the opposite is true, and by quite a large margin (about 50% slower!).In the \"Slow Query Questions\" guide, it says to mention if the tablehas a large proportion of NULLs in several columnsYes, it does, I would estimate that about 20% of the time a column's value is null. Why does this matter? Is this a known thing about PostgreSQL performance? If so, where do I read about it?The table does not contain large objects, has been freshly loaded (so not a lot of UPDATE/DELETEs), is not growing, only has the 1 primary index, and does not use triggers.Anyway, below are the query results. The field being selected (creation_user) is not in any index, which forces a full table scan:--> 18.844 sec to execute when all columns defined NOT NULL WITH DEFAULT, table size is 11.462 GBselect creation_user, count(*)   from eu.royalty_no_null group by creation_user;creation_user|count   |-------------+--------+[BLANK]      |   84546|BACOND       |      10|BALUN        |    2787|FOGGOL       |     109|TRBATCH      |18421018|QUERY PLANFinalize GroupAggregate  (cost=1515478.96..1515479.72 rows=3 width=15) (actual time=11133.324..11135.311 rows=5 loops=1)  Group Key: creation_user  I/O Timings: read=1884365.335  ->  Gather Merge  (cost=1515478.96..1515479.66 rows=6 width=15) (actual time=11133.315..11135.300 rows=13 loops=1)        Workers Planned: 2        Workers Launched: 2        I/O Timings: read=1884365.335        ->  Sort  (cost=1514478.94..1514478.95 rows=3 width=15) (actual time=11127.396..11127.398 rows=4 loops=3)              Sort Key: creation_user              Sort Method: quicksort  Memory: 25kB              I/O Timings: read=1884365.335              Worker 0:  Sort Method: quicksort  Memory: 25kB              Worker 1:  Sort Method: quicksort  Memory: 25kB              ->  Partial HashAggregate  (cost=1514478.89..1514478.92 rows=3 width=15) (actual time=11127.370..11127.372 rows=4 loops=3)                    Group Key: creation_user                    Batches: 1  Memory Usage: 24kB                    I/O Timings: read=1884365.335                    Worker 0:  Batches: 1  Memory Usage: 40kB                    Worker 1:  Batches: 1  Memory Usage: 40kB                    ->  Parallel Seq Scan on royalty_no_null  (cost=0.00..1475918.59 rows=7712059 width=7) (actual time=0.006..9339.296 rows=6169490 loops=3)                          I/O Timings: read=1884365.335Settings: effective_cache_size = '21553496kB', maintenance_io_concurrency = 
'1', search_path = 'public, public, \"$user\"'Planning Time: 0.098 msExecution Time: 11135.368 ms--> 30.57 sec to execute when all columns are nullable instead of defaulting to zero/blank, table size is 9.120 GB:select creation_user, count(*)   from eu.royalty_with_null group by creation_user;creation_user|count   |-------------+--------+BACOND       |      10|BALUN        |    2787|FOGGOL       |     109|TRBATCH      |18421018|[NULL]       |   84546|QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------------+Finalize GroupAggregate  (cost=1229649.93..1229650.44 rows=2 width=15) (actual time=25404.925..25407.262 rows=5 loops=1)  Group Key: creation_user  I/O Timings: read=17141420.771  ->  Gather Merge  (cost=1229649.93..1229650.40 rows=4 width=15) (actual time=25404.917..25407.249 rows=12 loops=1)        Workers Planned: 2        Workers Launched: 2        I/O Timings: read=17141420.771        ->  Sort  (cost=1228649.91..1228649.91 rows=2 width=15) (actual time=25398.004..25398.006 rows=4 loops=3)              Sort Key: creation_user              Sort Method: quicksort  Memory: 25kB              I/O Timings: read=17141420.771              Worker 0:  Sort Method: quicksort  Memory: 25kB              Worker 1:  Sort Method: quicksort  Memory: 25kB              ->  Partial HashAggregate  (cost=1228649.88..1228649.90 rows=2 width=15) (actual time=25397.918..25397.920 rows=4 loops=3)                    Group Key: creation_user                    Batches: 1  Memory Usage: 24kB                    I/O Timings: read=17141420.771                    Worker 0:  Batches: 1  Memory Usage: 40kB                    Worker 1:  Batches: 1  Memory Usage: 40kB                    ->  Parallel Seq Scan on royalty_with_null  (cost=0.00..1190094.92 rows=7710992 width=7) (actual time=1.063..21481.517 rows=6169490 loops=3)                          I/O Timings: read=17141420.771Settings: effective_cache_size = '21553496kB', maintenance_io_concurrency = '1', search_path = 'public, public, \"$user\"'Planning Time: 0.112 msExecution Time: 25407.318 msThe query runs about 50% longer even though I would think there are 20% less disk pages to read!In this particular column creation_user very few values are unused, but in other columns in the table many more of the rows have an unused value (blank/zero or NULL).It seems to make no difference if the column selected is near the beginning of the row or the end, results are about the same.What is it about null values in the table that slows down the full table scan?If I populate blank/zero for all of the unused values in columns that are NULLable, the query is fast again. 
So just defining the columns as NULLable isn't what slows it down -- it's actually the NULL values in the rows that seems to degrade performance.Do other operations (besides full table scan) get slowed down by null values as well?Here is the table definition with no nulls, the other table is the same except that all columns are NULLable.CREATE TABLE eu.royalty_no_null (\tisn int4 NOT NULL,\tcontract_key numeric(19) NOT NULL DEFAULT 0,\tco_code numeric(7) NOT NULL DEFAULT 0,\trec_status numeric(1) NOT NULL DEFAULT 0,\trecord_type numeric(1) NOT NULL DEFAULT 0,\tcontract_no numeric(6) NOT NULL DEFAULT 0,\tsubcon_no numeric(4) NOT NULL DEFAULT 0,\tsub_division varchar(6) NOT NULL DEFAULT '',\ttop_price_perc numeric(3) NOT NULL DEFAULT 0,\tbar_code_ind varchar(1) NOT NULL DEFAULT '',\tprocess_step_no varchar(1) NOT NULL DEFAULT '',\tmain_code_group varchar(1) NOT NULL DEFAULT '',\tcondition varchar(1) NOT NULL DEFAULT '',\tneg_sales_ind varchar(1) NOT NULL DEFAULT '',\tcon_type_ind varchar(1) NOT NULL DEFAULT '',\texchg_tape_ind varchar(1) NOT NULL DEFAULT '',\tbagatelle_ind varchar(1) NOT NULL DEFAULT '',\trestrict_terr_ind varchar(1) NOT NULL DEFAULT '',\tsys_esc_ind varchar(1) NOT NULL DEFAULT '',\tequiv_prc_ind varchar(1) NOT NULL DEFAULT '',\tscal_fact_ind varchar(1) NOT NULL DEFAULT '',\tsell_off_ind varchar(1) NOT NULL DEFAULT '',\tcut_rate_ind varchar(1) NOT NULL DEFAULT '',\tgross_nett_ind varchar(1) NOT NULL DEFAULT '',\tsleeve_ind varchar(1) NOT NULL DEFAULT '',\trate_ind varchar(1) NOT NULL DEFAULT '',\tprice_basis_calc varchar(1) NOT NULL DEFAULT '',\tsource_price_basis varchar(1) NOT NULL DEFAULT '',\tesc_ind varchar(1) NOT NULL DEFAULT '',\tpayment_period varchar(1) NOT NULL DEFAULT '',\tsubcon_reserve_ind varchar(1) NOT NULL DEFAULT '',\trelease_reason_code varchar(1) NOT NULL DEFAULT '',\tbagatelle_qty_val_ind varchar(1) NOT NULL DEFAULT '',\tpayable_ind varchar(1) NOT NULL DEFAULT '',\trecord_sequence numeric(1) NOT NULL DEFAULT 0,\tsales_type numeric(1) NOT NULL DEFAULT 0,\trate_index numeric(1) NOT NULL DEFAULT 0,\tparticipation numeric(7, 4) NOT NULL DEFAULT 0,\tcontract_co numeric(7) NOT NULL DEFAULT 0,\treporting_co numeric(7) NOT NULL DEFAULT 0,\tarticle_no varchar(13) NOT NULL DEFAULT '',\tart_cat_adm numeric(7) NOT NULL DEFAULT 0,\tequiv_config varchar(2) NOT NULL DEFAULT '',\tmusic_class numeric(2) NOT NULL DEFAULT 0,\tsales_reference_no numeric(7) NOT NULL DEFAULT 0,\tsales_processing_no numeric(2) NOT NULL DEFAULT 0,\tsales_trans_code numeric(4) NOT NULL DEFAULT 0,\tterr_combination numeric(6) NOT NULL DEFAULT 0,\trate_no numeric(7) NOT NULL DEFAULT 0,\trate varchar(20) NOT NULL DEFAULT '',\trate_normal numeric(7, 3) NOT NULL DEFAULT 0,\trate_esc_1 numeric(7, 3) NOT NULL DEFAULT 0,\tsales_process_qty numeric(9) NOT NULL DEFAULT 0,\troy_price numeric(11, 4) NOT NULL DEFAULT 0,\troyalty_fee numeric(11, 2) NOT NULL DEFAULT 0,\tunit_fee numeric(9, 4) NOT NULL DEFAULT 0,\tarticle_release_date numeric(6) NOT NULL DEFAULT 0,\troyalty_rate numeric(7, 3) NOT NULL DEFAULT 0,\torig_article_no varchar(13) NOT NULL DEFAULT '',\tno_of_records_in_set numeric(3) NOT NULL DEFAULT 0,\tsales_end_period_date numeric(7) NOT NULL DEFAULT 0,\tsales_settlement_period numeric(5) NOT NULL DEFAULT 0,\tsales_processing_date numeric(7) NOT NULL DEFAULT 0,\tsales_date numeric(7) NOT NULL DEFAULT 0,\tsales_record_no numeric(9) NOT NULL DEFAULT 0,\tequiv_prc_perc numeric(5, 2) NOT NULL DEFAULT 0,\tscal_fact_perc numeric(5, 2) NOT NULL DEFAULT 0,\tsell_off_perc numeric(5, 2) NOT NULL DEFAULT 
0,\tprice_adj numeric(5, 2) NOT NULL DEFAULT 0,\tsleeve_allowance numeric(5, 2) NOT NULL DEFAULT 0,\tsales_percentage numeric(5, 2) NOT NULL DEFAULT 0,\tprice_basis_perc numeric(5, 2) NOT NULL DEFAULT 0,\tqtr_date_yyyyq numeric(5) NOT NULL DEFAULT 0,\torig_subcon_no numeric(4) NOT NULL DEFAULT 0,\torig_sales_channel varchar(2) NOT NULL DEFAULT '',\torig_sales_terr numeric(3) NOT NULL DEFAULT 0,\tsubcon_reserve_code numeric(4) NOT NULL DEFAULT 0,\tgross_sales_qty numeric(9) NOT NULL DEFAULT 0,\tnet_sales_qty numeric(9) NOT NULL DEFAULT 0,\tsales_reserved_qty numeric(9) NOT NULL DEFAULT 0,\tsales_reserved_perc numeric(5, 2) NOT NULL DEFAULT 0,\treleased_qty numeric(9) NOT NULL DEFAULT 0,\tsource_tax_perc numeric(5, 2) NOT NULL DEFAULT 0,\tsource_tax_amount numeric(11, 2) NOT NULL DEFAULT 0,\tincome_amount numeric(11, 2) NOT NULL DEFAULT 0,\tgross_income numeric(11, 2) NOT NULL DEFAULT 0,\texchange_rate numeric(13, 7) NOT NULL DEFAULT 0,\tsales_origin_idc varchar(1) NOT NULL DEFAULT '',\tblack_box varchar(30) NOT NULL DEFAULT '',\tcontract_expiry_date numeric(6) NOT NULL DEFAULT 0,\tcontract_expiry_period numeric(3) NOT NULL DEFAULT 0,\tbagatelle_amount numeric(9) NOT NULL DEFAULT 0,\tto_be_released_date numeric(6) NOT NULL DEFAULT 0,\tsales_start_period_date numeric(7) NOT NULL DEFAULT 0,\tselling_company numeric(7) NOT NULL DEFAULT 0,\troyalty_amount numeric(11, 2) NOT NULL DEFAULT 0,\tcurrency_code numeric(3) NOT NULL DEFAULT 0,\troy_price_curr numeric(11, 4) NOT NULL DEFAULT 0,\troyalty_fee_curr numeric(11, 2) NOT NULL DEFAULT 0,\tunit_fee_curr numeric(9, 4) NOT NULL DEFAULT 0,\tsource_tax_amount_curr numeric(11, 2) NOT NULL DEFAULT 0,\troyalty_amount_curr numeric(11, 2) NOT NULL DEFAULT 0,\tchange_description varchar(40) NOT NULL DEFAULT '',\terror_no numeric(2) NOT NULL DEFAULT 0,\tfirst_rel_ind varchar(1) NOT NULL DEFAULT '',\tmax50_percentage numeric(5, 2) NOT NULL DEFAULT 0,\tmax50_compare_ind varchar(1) NOT NULL DEFAULT '',\tppd_fin_curr numeric(11, 4) NOT NULL DEFAULT 0,\tproject_ref_nr varchar(15) NOT NULL DEFAULT '',\treserve_priority_ind varchar(1) NOT NULL DEFAULT '',\tesca_seqno numeric(7) NOT NULL DEFAULT 0,\tcut_rate_perc numeric(5, 2) NOT NULL DEFAULT 0,\tcp_details_c int2 NOT NULL DEFAULT 0,\tprice_info_c int2 NOT NULL DEFAULT 0,\tarts_con_recording_c int2 NOT NULL DEFAULT 0,\tprocessing_company numeric(7) NOT NULL DEFAULT 0,\tdate_time_change_p numeric(13) NOT NULL DEFAULT 0,\tlocked_ind_b bytea NOT NULL DEFAULT '\\x00',\tdate_time_cleanup_p numeric(13) NOT NULL DEFAULT 0,\tmin_price_perc numeric(5, 2) NOT NULL DEFAULT 0,\treserve_sale_type varchar(1) NOT NULL DEFAULT '',\tuplift_perc numeric(5, 2) NOT NULL DEFAULT 0,\tdouble_ind bytea NOT NULL DEFAULT '\\x00',\tmin_unit_fee numeric(9, 4) NOT NULL DEFAULT 0,\tsales_batch_type varchar(1) NOT NULL DEFAULT '',\tprev_esca_qty numeric(9) NOT NULL DEFAULT 0,\tarts_con_recording_2_c int2 NOT NULL DEFAULT 0,\taif_share_percentage numeric(5, 2) NOT NULL DEFAULT 0,\taif_rate_percentage numeric(5, 2) NOT NULL DEFAULT 0,\toriginal_recording_id varchar(12) NOT NULL DEFAULT '',\tinter_companied_line bytea NOT NULL DEFAULT '\\x00',\tlast_chg_date_time timestamp(6) NOT NULL DEFAULT '0001-01-01',\tlast_chg_by_id varchar(8) NOT NULL DEFAULT '',\tcreation_date timestamp(6) NOT NULL DEFAULT '0001-01-01',\tcreation_user varchar(8) NOT NULL DEFAULT '',\taif_sublic_ind varchar(1) NOT NULL DEFAULT '',\tsublicensee numeric(7) NOT NULL DEFAULT 0,\tscal_fact_qty numeric(5) NOT NULL DEFAULT 0,\tpay_delay numeric(3) NOT NULL DEFAULT 0,\trecalc_date 
timestamp(6) NOT NULL DEFAULT '0001-01-01',\trecalc_userid varchar(8) NOT NULL DEFAULT '',\tsent_to_sap_ind varchar(1) NOT NULL DEFAULT '',\tCONSTRAINT royalty_no_null_pkey PRIMARY KEY (isn));", "msg_date": "Mon, 20 Dec 2021 17:23:54 -0800", "msg_from": "Lars Bergeson <[email protected]>", "msg_from_op": true, "msg_subject": "Query is slower with a large proportion of NULLs in several columns" }, { "msg_contents": "On Monday, December 20, 2021, Lars Bergeson <[email protected]> wrote:\n\n>\n> What is it about null values in the table that slows down the full table\n> scan?\n>\n> If I populate blank/zero for all of the unused values in columns that are\n> NULLable, the query is fast again. So just defining the columns as NULLable\n> isn't what slows it down -- it's actually the NULL values in the rows that\n> seems to degrade performance.\n>\n\nThe presence or absence of the constraint has zero effect on the contents\nof the page/tuple. As soon as you have a single null in a row you are\nadding a null bitmap [1] to the stored tuple. And now for every single\ncolumn the system has to check whether a specific column’s value is null or\nnot. Given the number of columns in your table, that this is noticeable is\nnot surprising.\n\nDavid J.\n\n[1] https://www.postgresql.org/docs/current/storage-page-layout.html\n\nOn Monday, December 20, 2021, Lars Bergeson <[email protected]> wrote:What is it about null values in the table that slows down the full table scan?If I populate blank/zero for all of the unused values in columns that are NULLable, the query is fast again. So just defining the columns as NULLable isn't what slows it down -- it's actually the NULL values in the rows that seems to degrade performance.The presence or absence of the constraint has zero effect on the contents of the page/tuple.  As soon as you have a single null in a row you are adding a null bitmap [1] to the stored tuple.  And now for every single column the system has to check whether a specific column’s value is null or not.  Given the number of columns in your table, that this is noticeable is not surprising.David J.[1]  https://www.postgresql.org/docs/current/storage-page-layout.html", "msg_date": "Mon, 20 Dec 2021 18:49:00 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "Lars Bergeson <[email protected]> writes:\n> What is it about null values in the table that slows down the full table\n> scan?\n\nIf a row has any nulls, then it contains a \"nulls bitmap\" [1] that says\nwhich columns are null, and that bitmap has to be consulted while\nwalking through the row contents. So the most obvious theory here\nis that that adds overhead that's significant in your case. But there\nare some holes in that theory, mainly that the I/O timings you are\nshowing don't seem very consistent:\n\nno nulls:\n> I/O Timings: read=1884365.335\n> Execution Time: 11135.368 ms\n\nwith nulls:\n> I/O Timings: read=17141420.771\n> Execution Time: 25407.318 ms\n\nRegardless of CPU time required, it should not take 10X less I/O\ntime to read a physically larger table. So there's something\nfairly bogus going on there. 
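One way to cross-check that is to repeat the query with BUFFERS, so the
page counts are visible independently of the timing instrumentation.
As a sketch, using the table and column names from upthread:

EXPLAIN (ANALYZE, BUFFERS)
SELECT creation_user, count(*) FROM eu.royalty_no_null GROUP BY creation_user;

EXPLAIN (ANALYZE, BUFFERS)
SELECT creation_user, count(*) FROM eu.royalty_with_null GROUP BY creation_user;

-- and the physical sizes, to confirm which table is actually larger on disk
SELECT c.relname, pg_size_pretty(pg_table_size(c.oid))
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'eu'
  AND c.relname IN ('royalty_no_null', 'royalty_with_null');

If the nullable table really is read from fewer pages yet reports far more
read time, the timing itself is suspect.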
One thing you might try is disabling\nparallelism (set max_parallel_workers_per_gather = 0) to see if\nthat's confusing the numbers somehow.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-TUPLE-LAYOUT\n\n\n", "msg_date": "Mon, 20 Dec 2021 20:51:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "ok, here are results after I did:\nset max_parallel_workers_per_gather = 0;\n\nno nulls table is 11.462 GB:\nQUERY PLAN\nHashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual\ntime=19908.343..19908.345 rows=5 loops=1)\n Group Key: roys_creation_user\n Batches: 1 Memory Usage: 24kB\n I/O Timings: read=532369.898\n -> Seq Scan on royalty_no_null (cost=0.00..1583887.42 rows=18508942\nwidth=7) (actual time=0.013..16705.734 rows=18508470 loops=1)\n I/O Timings: read=532369.898\nSettings: effective_cache_size = '21553496kB', maintenance_io_concurrency =\n'1', max_parallel_workers_per_gather = '0', search_path = 'public, public,\n\"$user\"'\nPlanning Time: 0.056 ms\nExecution Time: 19908.383 ms\n\nwith nulls table is 9.120 GB:\nQUERY PLAN\nHashAggregate (cost=1390580.70..1390580.72 rows=2 width=15) (actual\ntime=30369.758..30369.761 rows=5 loops=1)\n Group Key: roys_creation_user\n Batches: 1 Memory Usage: 24kB\n I/O Timings: read=6440851.540\n -> Seq Scan on royalty_with_null (cost=0.00..1298048.80 rows=18506380\nwidth=7) (actual time=0.015..25525.104 rows=18508470 loops=1)\n I/O Timings: read=6440851.540\nSettings: effective_cache_size = '21553496kB', maintenance_io_concurrency =\n'1', max_parallel_workers_per_gather = '0', search_path = 'public, public,\n\"$user\"'\nPlanning Time: 0.060 ms\nExecution Time: 30369.796 ms\n\nStill taking 10X more I/O to read the smaller table. Very odd.\n\nRegarding the earlier comment from David Johnston: If I put null values in\njust one of the columns for all rows, it should force a null bitmap to be\ncreated for every row, with the same amount of checking of the bitmap\nrequired. However, the query still runs faster even though the table is\nlarger:\nwith nulls table is 11.604 GB when all values are filled except 1 column\nhas mostly nulls. The extra 0.14 GB (11.604 GB - 11.462 GB) is probably\nspace consumed by null bitmaps:\nQUERY PLAN\nHashAggregate (cost=1693765.03..1693765.06 rows=3 width=15) (actual\ntime=26452.653..26452.655 rows=5 loops=1)\n Group Key: roys_creation_user\n Batches: 1 Memory Usage: 24kB\n I/O Timings: read=2706123.209\n -> Seq Scan on royalty_with_null_cols_filled (cost=0.00..1601218.02\nrows=18509402 width=7) (actual time=0.014..22655.366 rows=18508470 loops=1)\n I/O Timings: read=2706123.209\nSettings: effective_cache_size = '21553496kB', maintenance_io_concurrency =\n'1', max_parallel_workers_per_gather = '0', search_path = 'public, public,\n\"$user\"'\nPlanning Time: 0.068 ms\nExecution Time: 26452.691 ms\n\nIt seems to be the actual presence of null values that slows things down,\neven when the same sized null bitmap exists for each row.\n\nOn Mon, Dec 20, 2021 at 5:51 PM Tom Lane <[email protected]> wrote:\n\n> Lars Bergeson <[email protected]> writes:\n> > What is it about null values in the table that slows down the full table\n> > scan?\n>\n> If a row has any nulls, then it contains a \"nulls bitmap\" [1] that says\n> which columns are null, and that bitmap has to be consulted while\n> walking through the row contents. 
So the most obvious theory here\n> is that that adds overhead that's significant in your case. But there\n> are some holes in that theory, mainly that the I/O timings you are\n> showing don't seem very consistent:\n>\n> no nulls:\n> > I/O Timings: read=1884365.335\n> > Execution Time: 11135.368 ms\n>\n> with nulls:\n> > I/O Timings: read=17141420.771\n> > Execution Time: 25407.318 ms\n>\n> Regardless of CPU time required, it should not take 10X less I/O\n> time to read a physically larger table. So there's something\n> fairly bogus going on there. One thing you might try is disabling\n> parallelism (set max_parallel_workers_per_gather = 0) to see if\n> that's confusing the numbers somehow.\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-TUPLE-LAYOUT\n>\n\nok, here are results after I did:set max_parallel_workers_per_gather = 0;no nulls table is 11.462 GB:QUERY PLANHashAggregate  (cost=1676432.13..1676432.16 rows=3 width=15) (actual time=19908.343..19908.345 rows=5 loops=1)  Group Key: roys_creation_user  Batches: 1  Memory Usage: 24kB  I/O Timings: read=532369.898  ->  Seq Scan on royalty_no_null  (cost=0.00..1583887.42 rows=18508942 width=7) (actual time=0.013..16705.734 rows=18508470 loops=1)        I/O Timings: read=532369.898Settings: effective_cache_size = '21553496kB', maintenance_io_concurrency = '1', max_parallel_workers_per_gather = '0', search_path = 'public, public, \"$user\"'Planning Time: 0.056 msExecution Time: 19908.383 mswith nulls table is 9.120 GB:QUERY PLANHashAggregate  (cost=1390580.70..1390580.72 rows=2 width=15) (actual time=30369.758..30369.761 rows=5 loops=1)  Group Key: roys_creation_user  Batches: 1  Memory Usage: 24kB  I/O Timings: read=6440851.540  ->  Seq Scan on royalty_with_null  (cost=0.00..1298048.80 rows=18506380 width=7) (actual time=0.015..25525.104 rows=18508470 loops=1)        I/O Timings: read=6440851.540Settings: effective_cache_size = '21553496kB', maintenance_io_concurrency = '1', max_parallel_workers_per_gather = '0', search_path = 'public, public, \"$user\"'Planning Time: 0.060 msExecution Time: 30369.796 msStill taking 10X more I/O to read the smaller table. Very odd.Regarding the earlier comment from David Johnston: If I put null values in just one of the columns for all rows, it should force a null bitmap to be created for every row, with the same amount of checking of the bitmap required. However, the query still runs faster even though the table is larger:with nulls table is 11.604 GB when all values are filled except 1 column has mostly nulls. 
The extra 0.14 GB (11.604 GB - 11.462 GB) is probably space consumed by null bitmaps:QUERY PLANHashAggregate  (cost=1693765.03..1693765.06 rows=3 width=15) (actual time=26452.653..26452.655 rows=5 loops=1)  Group Key: roys_creation_user  Batches: 1  Memory Usage: 24kB  I/O Timings: read=2706123.209  ->  Seq Scan on royalty_with_null_cols_filled  (cost=0.00..1601218.02 rows=18509402 width=7) (actual time=0.014..22655.366 rows=18508470 loops=1)        I/O Timings: read=2706123.209Settings: effective_cache_size = '21553496kB', maintenance_io_concurrency = '1', max_parallel_workers_per_gather = '0', search_path = 'public, public, \"$user\"'Planning Time: 0.068 msExecution Time: 26452.691 msIt seems to be the actual presence of null values that slows things down, even when the same sized null bitmap exists for each row.On Mon, Dec 20, 2021 at 5:51 PM Tom Lane <[email protected]> wrote:Lars Bergeson <[email protected]> writes:\n> What is it about null values in the table that slows down the full table\n> scan?\n\nIf a row has any nulls, then it contains a \"nulls bitmap\" [1] that says\nwhich columns are null, and that bitmap has to be consulted while\nwalking through the row contents.  So the most obvious theory here\nis that that adds overhead that's significant in your case.  But there\nare some holes in that theory, mainly that the I/O timings you are\nshowing don't seem very consistent:\n\nno nulls:\n>   I/O Timings: read=1884365.335\n> Execution Time: 11135.368 ms\n\nwith nulls:\n>   I/O Timings: read=17141420.771\n> Execution Time: 25407.318 ms\n\nRegardless of CPU time required, it should not take 10X less I/O\ntime to read a physically larger table.  So there's something\nfairly bogus going on there.  One thing you might try is disabling\nparallelism (set max_parallel_workers_per_gather = 0) to see if\nthat's confusing the numbers somehow.\n\n                        regards, tom lane\n\n[1] https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-TUPLE-LAYOUT", "msg_date": "Mon, 20 Dec 2021 20:11:42 -0800", "msg_from": "Lars Bergeson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n> ok, here are results after I did:\n> set max_parallel_workers_per_gather = 0;\n> \n> HashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual time=19908.343..19908.345 rows=5 loops=1)\n> I/O Timings: read=532369.898\n> Execution Time: 19908.383 ms\n\n> HashAggregate (cost=1390580.70..1390580.72 rows=2 width=15) (actual time=30369.758..30369.761 rows=5 loops=1)\n> I/O Timings: read=6440851.540\n> Execution Time: 30369.796 ms\n\n> Still taking 10X more I/O to read the smaller table. Very odd.\n\nIf I'm not wrong, it's even worse than that ?\nIt takes 20 or 30sec to run the query - but it says the associated I/O times\nare ~500sec or ~6000sec ?\n\nWhat architecture and OS/version are you running ?\nHow did you install postgres? 
From a package or compiled from source ?\n\nIt might be interesting to know the output from something like this command,\ndepending on whether and where the headers like pg_config_x86_64.h are installed.\n\ngrep -r HAVE_CLOCK_GETTIME /usr/pgsql-13/include\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 20 Dec 2021 22:51:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n>> Still taking 10X more I/O to read the smaller table. Very odd.\n\n> If I'm not wrong, it's even worse than that ?\n> It takes 20 or 30sec to run the query - but it says the associated I/O times\n> are ~500sec or ~6000sec ?\n\nIt would help if somebody had labeled the units of I/O Time\n... but I'm guessing those are microsec vs. the millisec\nof the other times, because otherwise it's completely wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Dec 2021 00:01:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "On Monday, December 20, 2021, Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n> > ok, here are results after I did:\n> > set max_parallel_workers_per_gather = 0;\n> >\n> > HashAggregate (cost=1676432.13..1676432.16 rows=3 width=15) (actual\n> time=19908.343..19908.345 rows=5 loops=1)\n> > I/O Timings: read=532369.898\n> > Execution Time: 19908.383 ms\n>\n> > HashAggregate (cost=1390580.70..1390580.72 rows=2 width=15) (actual\n> time=30369.758..30369.761 rows=5 loops=1)\n> > I/O Timings: read=6440851.540\n> > Execution Time: 30369.796 ms\n>\n> > Still taking 10X more I/O to read the smaller table. Very odd.\n>\n> If I'm not wrong, it's even worse than that ?\n> It takes 20 or 30sec to run the query - but it says the associated I/O\n> times\n> are ~500sec or ~6000sec ?\n>\n> What architecture and OS/version are you running ?\n> How did you install postgres? From a package or compiled from source ?\n>\n\nThe docs indicate you’ll only see I/O Timing information if using EXPLAIN\nBUFFERS but I’m not seeing any of the other buffer-related information in\nthese plans. Thoughts?\n\nDavid J.\n\nOn Monday, December 20, 2021, Justin Pryzby <[email protected]> wrote:On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n> ok, here are results after I did:\n> set max_parallel_workers_per_gather = 0;\n> \n> HashAggregate  (cost=1676432.13..1676432.16 rows=3 width=15) (actual time=19908.343..19908.345 rows=5 loops=1)\n>   I/O Timings: read=532369.898\n> Execution Time: 19908.383 ms\n\n> HashAggregate  (cost=1390580.70..1390580.72 rows=2 width=15) (actual time=30369.758..30369.761 rows=5 loops=1)\n>   I/O Timings: read=6440851.540\n> Execution Time: 30369.796 ms\n\n> Still taking 10X more I/O to read the smaller table. Very odd.\n\nIf I'm not wrong, it's even worse than that ?\nIt takes 20 or 30sec to run the query - but it says the associated I/O times\nare ~500sec or ~6000sec ?\n\nWhat architecture and OS/version are you running ?\nHow did you install postgres?  From a package or compiled from source ?\nThe docs indicate you’ll only see I/O Timing information if using EXPLAIN BUFFERS but I’m not seeing any of the other buffer-related information in these plans.  
Thoughts?David J.", "msg_date": "Mon, 20 Dec 2021 22:07:01 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "On Monday, December 20, 2021, Tom Lane <[email protected]> wrote:\n\n> Justin Pryzby <[email protected]> writes:\n> > On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n> >> Still taking 10X more I/O to read the smaller table. Very odd.\n>\n> > If I'm not wrong, it's even worse than that ?\n> > It takes 20 or 30sec to run the query - but it says the associated I/O\n> times\n> > are ~500sec or ~6000sec ?\n>\n> It would help if somebody had labeled the units of I/O Time\n> ... but I'm guessing those are microsec vs. the millisec\n> of the other times, because otherwise it's completely wrong.\n>\n>\nRelated to my preceding observation, from the explain (buffers) docs:\n\n“…and the time spent reading and writing data file blocks (in milliseconds)\nif track_io_timing\n<https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING>\nis\nenabled.“\n\nDavid J.\n\nOn Monday, December 20, 2021, Tom Lane <[email protected]> wrote:Justin Pryzby <[email protected]> writes:\n> On Mon, Dec 20, 2021 at 08:11:42PM -0800, Lars Bergeson wrote:\n>> Still taking 10X more I/O to read the smaller table. Very odd.\n\n> If I'm not wrong, it's even worse than that ?\n> It takes 20 or 30sec to run the query - but it says the associated I/O times\n> are ~500sec or ~6000sec ?\n\nIt would help if somebody had labeled the units of I/O Time\n... but I'm guessing those are microsec vs. the millisec\nof the other times, because otherwise it's completely wrong.\nRelated to my preceding observation, from the explain (buffers) docs:“…and the time spent reading and writing data file blocks (in milliseconds) if track_io_timing is enabled.“David J.", "msg_date": "Mon, 20 Dec 2021 22:08:59 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Monday, December 20, 2021, Tom Lane <[email protected]> wrote:\n>> It would help if somebody had labeled the units of I/O Time\n>> ... but I'm guessing those are microsec vs. the millisec\n>> of the other times, because otherwise it's completely wrong.\n\n> Related to my preceding observation, from the explain (buffers) docs:\n> “…and the time spent reading and writing data file blocks (in milliseconds)\n> if track_io_timing\n> <https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING>\n> is enabled.“\n\nHmm ... 
the code sure looks like it's supposed to be millisec:\n\n appendStringInfoString(es->str, \"I/O Timings:\");\n if (!INSTR_TIME_IS_ZERO(usage->blk_read_time))\n appendStringInfo(es->str, \" read=%0.3f\",\n INSTR_TIME_GET_MILLISEC(usage->blk_read_time));\n if (!INSTR_TIME_IS_ZERO(usage->blk_write_time))\n appendStringInfo(es->str, \" write=%0.3f\",\n INSTR_TIME_GET_MILLISEC(usage->blk_write_time));\n\nAnd when I try some cases here, I get I/O timing numbers that are\nconsistent with the overall time reported by EXPLAIN, for example\n\n Seq Scan on foo (cost=0.00..843334.10 rows=11000010 width=508) (actual time=0.\n015..1897.492 rows=11000000 loops=1)\n Buffers: shared hit=15874 read=717460\n I/O Timings: read=1184.638\n Planning:\n Buffers: shared hit=5 read=2\n I/O Timings: read=0.025\n Planning Time: 0.229 ms\n Execution Time: 2151.529 ms\n\nSo now we have a real mystery about what is happening on Lars'\nsystem. Those numbers can't be right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Dec 2021 00:33:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "On Tue, Dec 21, 2021 at 12:33:06AM -0500, Tom Lane wrote:\n> So now we have a real mystery about what is happening on Lars'\n> system. Those numbers can't be right.\n\nI realized Lars said it was x86_64/Linux, but I'm hoping to hear back with more\ndetails:\n\nWhat OS version?\nIs it a VM of some type ?\nHow did you install postgres? From a package or compiled from source?\ngrep -r HAVE_CLOCK_GETTIME /usr/pgsql-13/include\nSend the exact command and output you used to run the query?\nWhy does your explain output have IO timing but not Buffers: hit/read ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Dec 2021 15:13:37 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "Justin,\n\nThanks for your continued interest.\n\nI'm running PostgreSQL under AWS Aurora, and I didn't set it up or install\nit, so I'm not sure about the OS version.\n\nI can't run the grep command since I don't know how to get down to the\ncommand line on the actual box running Aurora. I just connect to PostgreSQL\nfrom either my desktop or an EC2 Linux instance.\n\nSQL I entered was:\nset max_parallel_workers_per_gather = 0;\nexplain (analyze, buffers, settings)\nselect roys_creation_user, count(*)\n from eu.royalty_with_null\n group by roys_creation_user;\n\nThe output is shown earlier in this thread.\n\nI have no idea why I have IO timings but not buffers hit/read.\n\nOn Tue, Dec 21, 2021 at 1:13 PM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Dec 21, 2021 at 12:33:06AM -0500, Tom Lane wrote:\n> > So now we have a real mystery about what is happening on Lars'\n> > system. Those numbers can't be right.\n>\n> I realized Lars said it was x86_64/Linux, but I'm hoping to hear back with\n> more\n> details:\n>\n> What OS version?\n> Is it a VM of some type ?\n> How did you install postgres? 
From a package or compiled from source?\n> grep -r HAVE_CLOCK_GETTIME /usr/pgsql-13/include\n> Send the exact command and output you used to run the query?\n> Why does your explain output have IO timing but not Buffers: hit/read ?\n>\n> --\n> Justin\n>\n\nJustin,Thanks for your continued interest.I'm running PostgreSQL under AWS Aurora, and I didn't set it up or install it, so I'm not sure about the OS version.I can't run the grep command since I don't know how to get down to the command line on the actual box running Aurora. I just connect to PostgreSQL from either my desktop or an EC2 Linux instance.SQL I entered was:set max_parallel_workers_per_gather = 0;explain (analyze, buffers, settings)select roys_creation_user, count(*)  from eu.royalty_with_null group by roys_creation_user;The output is shown earlier in this thread.I have no idea why I have IO timings but not buffers hit/read.On Tue, Dec 21, 2021 at 1:13 PM Justin Pryzby <[email protected]> wrote:On Tue, Dec 21, 2021 at 12:33:06AM -0500, Tom Lane wrote:\n> So now we have a real mystery about what is happening on Lars'\n> system.  Those numbers can't be right.\n\nI realized Lars said it was x86_64/Linux, but I'm hoping to hear back with more\ndetails:\n\nWhat OS version?\nIs it a VM of some type ?\nHow did you install postgres?  From a package or compiled from source?\ngrep -r HAVE_CLOCK_GETTIME /usr/pgsql-13/include\nSend the exact command and output you used to run the query?\nWhy does your explain output have IO timing but not Buffers: hit/read ?\n\n-- \nJustin", "msg_date": "Tue, 21 Dec 2021 14:53:20 -0800", "msg_from": "Lars Bergeson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "Lars Bergeson <[email protected]> writes:\n> I'm running PostgreSQL under AWS Aurora, and I didn't set it up or install\n> it, so I'm not sure about the OS version.\n\nOh! Aurora is not Postgres. My admittedly-not-well-informed\nunderstanding is that they stuck a Postgres front end on their\nexisting storage engine, so it's not surprising if storage-level\nbehaviors are quite different from stock Postgres.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Dec 2021 18:07:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" }, { "msg_contents": "On Tue, Dec 21, 2021 at 4:07 PM Tom Lane <[email protected]> wrote:\n\n> Lars Bergeson <[email protected]> writes:\n> > I'm running PostgreSQL under AWS Aurora, and I didn't set it up or\n> install\n> > it, so I'm not sure about the OS version.\n>\n> Oh! Aurora is not Postgres. My admittedly-not-well-informed\n> understanding is that they stuck a Postgres front end on their\n> existing storage engine, so it's not surprising if storage-level\n> behaviors are quite different from stock Postgres.\n>\n>\nI do wish Amazon would be more considerate and modify what version()\noutputs to include \"AWS Aurora\" somewhere in the human readable string.\nThough the lack really isn't an excuse for reports of this nature to omit\nsuch a crucial hardware/hosting detail. 
The rest of the problem statement,\neven with the \"newbie to PostgreSQL\" qualifier, was written well enough I\nhadn't really considered that it would be anything but stock PostgreSQL on\na personal VM setup for testing.\n\nDavid J.\n\nOn Tue, Dec 21, 2021 at 4:07 PM Tom Lane <[email protected]> wrote:Lars Bergeson <[email protected]> writes:\n> I'm running PostgreSQL under AWS Aurora, and I didn't set it up or install\n> it, so I'm not sure about the OS version.\n\nOh!  Aurora is not Postgres.  My admittedly-not-well-informed\nunderstanding is that they stuck a Postgres front end on their\nexisting storage engine, so it's not surprising if storage-level\nbehaviors are quite different from stock Postgres.I do wish Amazon would be more considerate and modify what version() outputs to include \"AWS Aurora\" somewhere in the human readable string.  Though the lack really isn't an excuse for reports of this nature to omit such a crucial hardware/hosting detail.  The rest of the problem statement, even with the \"newbie to PostgreSQL\" qualifier, was written well enough I hadn't really considered that it would be anything but stock PostgreSQL on a personal VM setup for testing.David J.", "msg_date": "Tue, 21 Dec 2021 17:45:06 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slower with a large proportion of NULLs in several\n columns" } ]
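A minimal sketch for reproducing the comparison from the thread above on stock PostgreSQL. The table and column names below are invented for illustration only; the sketch shows how to measure the on-disk size difference and the scan cost of NULL-bearing rows (NULL columns store no data, only a per-row null bitmap), and it also illustrates the point raised in the thread that EXPLAIN (ANALYZE, BUFFERS) reports read= timings in milliseconds only when track_io_timing is enabled. On Aurora the storage layer differs from community PostgreSQL, so I/O numbers measured there are not directly comparable to these.

-- two tables with identical data: one stores NULLs, the other empty strings
CREATE TABLE t_with_null (id int PRIMARY KEY, a text, b text, c text);
CREATE TABLE t_no_null   (id int PRIMARY KEY,
                          a text NOT NULL DEFAULT '',
                          b text NOT NULL DEFAULT '',
                          c text NOT NULL DEFAULT '');

INSERT INTO t_with_null
SELECT g, 'user' || (g % 5), NULL, NULL FROM generate_series(1, 1000000) g;
INSERT INTO t_no_null
SELECT g, 'user' || (g % 5), '', ''     FROM generate_series(1, 1000000) g;

VACUUM ANALYZE t_with_null;
VACUUM ANALYZE t_no_null;

-- compare on-disk heap sizes; in the thread the NULL-bearing table was the smaller one
SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname IN ('t_with_null', 't_no_null');

-- compare scan cost; BUFFERS adds shared hit/read counts, and read= timings appear
-- once track_io_timing is on (changing it requires superuser)
SET track_io_timing = on;
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE, BUFFERS) SELECT a, count(*) FROM t_with_null GROUP BY a;
EXPLAIN (ANALYZE, BUFFERS) SELECT a, count(*) FROM t_no_null   GROUP BY a;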
[ { "msg_contents": "Hi,\n\nI have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The\ninstances are configured to work in master <-> synchronous standby setup\nand both run in docker containers with pgdata volume mounted from host.\nWhen master is restarted the synchronous standby automatically takes the\nrole of master and master starts operating as synchronous replica.\n\nEverything works fine but whenever I restart master instance it creates a\nnew wal file in pg_xlog/ however old wal files are not cleaned up. They\npile up and in my reproduction environment, where there are no operations\non the database, they currently occupy:\ndu -sh pg_xlog/\n17G pg_xlog/\n\nand the number of the files is more or less:\nls pg_xlog/ | grep -v history | wc -l\n1024\n\nI've searched through the mailing lists and articles like\nhttps://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the\nmain problems mentioned in most of the places are:\n1. failing archive command\n2. abandoned (inactive) replication slot\n3. issues with checkpoints\n3. too big wal_keep_segments value\n\nHowever, none of those problems seem to apply to my deployment:\n1. I have `archive_mode` set to `off`\n1a. I even tried enabling it and setting the `archive_command` to\n'/bin/true' just to confirm a suggestion found in one of the post on the\nmailing list (that didn't improve anything)\n2. the only replication slot in `pg_replication_slots` is the one related\nto the synchronous replica and it is active\n3. I've enabled `log_checkpoint` but doesn't see any errors or warnings\nrelated to checkpoints triggered either automatically or manually via\n`CHECKPOINT;`\n4. `wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GB\n\nIs there anything that I should check that could shed some light on this?\n\nA few configuration options taken from `pg_settings`:\narchive_command = (disabled)\narchive_mode = off\narchive_timeout = 0\ncheck_function_bodies = on\ncheckpoint_completion_target = 0.5\ncheckpoint_flush_after = 32\ncheckpoint_timeout = 300\nhot_standby = on\nhot_standby_feedback = off\nmax_replication_slots = 10\nmax_wal_senders = 10\nmax_wal_size = 64\nmin_wal_size = 5\nsynchronous_commit = on\nsynchronous_standby_names = patroni1\nwal_block_size = 8192\nwal_buffers = 512\nwal_compression = off\nwal_keep_segments = 8\nwal_level = replica\nwal_log_hints = on\nwal_receiver_status_interval = 10\nwal_receiver_timeout = 60000\nwal_retrieve_retry_interval = 5000\nwal_segment_size = 2048\nwal_sender_timeout = 60000\nwal_sync_method = fdatasync\nwal_writer_delay = 200\nwal_writer_flush_after = 128\n\nKind regards\n\nHi,I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The instances are configured to work in master <-> synchronous standby setup and both run in docker containers with pgdata volume mounted from host. When master is restarted the synchronous standby automatically takes the role of master and master starts operating as synchronous replica.Everything works fine but whenever I restart master instance it creates a new wal file in pg_xlog/ however old wal files are not cleaned up. 
They pile up and in my reproduction environment, where there are no operations on the database, they currently occupy:du -sh pg_xlog/17G     pg_xlog/and the number of the files is more or less:ls pg_xlog/ | grep -v history | wc -l1024I've searched through the mailing lists and articles like https://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the main problems mentioned in most of the places are:1. failing archive command2. abandoned (inactive) replication slot3. issues with checkpoints3. too big wal_keep_segments valueHowever, none of those problems seem to apply to my deployment:1. I have `archive_mode` set to `off`1a. I even tried enabling it and setting the `archive_command` to '/bin/true' just to confirm a suggestion found in one of the post on the mailing list (that didn't improve anything)2. the only replication slot in `pg_replication_slots` is the one related to the synchronous replica and it is active3. I've enabled `log_checkpoint` but doesn't see any errors or warnings related to checkpoints triggered either automatically or manually via `CHECKPOINT;`4. `wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GBIs there anything that I should check that could shed some light on this?A few configuration options taken from `pg_settings`:archive_command = (disabled)archive_mode = offarchive_timeout = 0check_function_bodies = oncheckpoint_completion_target = 0.5checkpoint_flush_after = 32checkpoint_timeout = 300hot_standby = onhot_standby_feedback = offmax_replication_slots = 10max_wal_senders = 10max_wal_size = 64min_wal_size = 5synchronous_commit = onsynchronous_standby_names = patroni1wal_block_size = 8192wal_buffers = 512wal_compression = offwal_keep_segments = 8wal_level = replicawal_log_hints = onwal_receiver_status_interval = 10wal_receiver_timeout = 60000wal_retrieve_retry_interval = 5000wal_segment_size = 2048wal_sender_timeout = 60000wal_sync_method = fdatasyncwal_writer_delay = 200wal_writer_flush_after = 128Kind regards", "msg_date": "Wed, 22 Dec 2021 16:34:19 +0100", "msg_from": "Zbigniew Kostrzewa <[email protected]>", "msg_from_op": true, "msg_subject": "WAL files keep piling up" }, { "msg_contents": "A stupid question. How many .ready files are there?\n\n\nRegards,\nNinad Shah\n\nOn Wed, 22 Dec 2021 at 21:04, Zbigniew Kostrzewa <[email protected]>\nwrote:\n\n> Hi,\n>\n> I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The\n> instances are configured to work in master <-> synchronous standby setup\n> and both run in docker containers with pgdata volume mounted from host.\n> When master is restarted the synchronous standby automatically takes the\n> role of master and master starts operating as synchronous replica.\n>\n> Everything works fine but whenever I restart master instance it creates a\n> new wal file in pg_xlog/ however old wal files are not cleaned up. They\n> pile up and in my reproduction environment, where there are no operations\n> on the database, they currently occupy:\n> du -sh pg_xlog/\n> 17G pg_xlog/\n>\n> and the number of the files is more or less:\n> ls pg_xlog/ | grep -v history | wc -l\n> 1024\n>\n> I've searched through the mailing lists and articles like\n> https://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the\n> main problems mentioned in most of the places are:\n> 1. failing archive command\n> 2. abandoned (inactive) replication slot\n> 3. issues with checkpoints\n> 3. too big wal_keep_segments value\n>\n> However, none of those problems seem to apply to my deployment:\n> 1. 
I have `archive_mode` set to `off`\n> 1a. I even tried enabling it and setting the `archive_command` to\n> '/bin/true' just to confirm a suggestion found in one of the post on the\n> mailing list (that didn't improve anything)\n> 2. the only replication slot in `pg_replication_slots` is the one related\n> to the synchronous replica and it is active\n> 3. I've enabled `log_checkpoint` but doesn't see any errors or warnings\n> related to checkpoints triggered either automatically or manually via\n> `CHECKPOINT;`\n> 4. `wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GB\n>\n> Is there anything that I should check that could shed some light on this?\n>\n> A few configuration options taken from `pg_settings`:\n> archive_command = (disabled)\n> archive_mode = off\n> archive_timeout = 0\n> check_function_bodies = on\n> checkpoint_completion_target = 0.5\n> checkpoint_flush_after = 32\n> checkpoint_timeout = 300\n> hot_standby = on\n> hot_standby_feedback = off\n> max_replication_slots = 10\n> max_wal_senders = 10\n> max_wal_size = 64\n> min_wal_size = 5\n> synchronous_commit = on\n> synchronous_standby_names = patroni1\n> wal_block_size = 8192\n> wal_buffers = 512\n> wal_compression = off\n> wal_keep_segments = 8\n> wal_level = replica\n> wal_log_hints = on\n> wal_receiver_status_interval = 10\n> wal_receiver_timeout = 60000\n> wal_retrieve_retry_interval = 5000\n> wal_segment_size = 2048\n> wal_sender_timeout = 60000\n> wal_sync_method = fdatasync\n> wal_writer_delay = 200\n> wal_writer_flush_after = 128\n>\n> Kind regards\n>\n\nA stupid question. How many .ready files are there?Regards,Ninad ShahOn Wed, 22 Dec 2021 at 21:04, Zbigniew Kostrzewa <[email protected]> wrote:Hi,I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The instances are configured to work in master <-> synchronous standby setup and both run in docker containers with pgdata volume mounted from host. When master is restarted the synchronous standby automatically takes the role of master and master starts operating as synchronous replica.Everything works fine but whenever I restart master instance it creates a new wal file in pg_xlog/ however old wal files are not cleaned up. They pile up and in my reproduction environment, where there are no operations on the database, they currently occupy:du -sh pg_xlog/17G     pg_xlog/and the number of the files is more or less:ls pg_xlog/ | grep -v history | wc -l1024I've searched through the mailing lists and articles like https://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the main problems mentioned in most of the places are:1. failing archive command2. abandoned (inactive) replication slot3. issues with checkpoints3. too big wal_keep_segments valueHowever, none of those problems seem to apply to my deployment:1. I have `archive_mode` set to `off`1a. I even tried enabling it and setting the `archive_command` to '/bin/true' just to confirm a suggestion found in one of the post on the mailing list (that didn't improve anything)2. the only replication slot in `pg_replication_slots` is the one related to the synchronous replica and it is active3. I've enabled `log_checkpoint` but doesn't see any errors or warnings related to checkpoints triggered either automatically or manually via `CHECKPOINT;`4. 
`wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GBIs there anything that I should check that could shed some light on this?A few configuration options taken from `pg_settings`:archive_command = (disabled)archive_mode = offarchive_timeout = 0check_function_bodies = oncheckpoint_completion_target = 0.5checkpoint_flush_after = 32checkpoint_timeout = 300hot_standby = onhot_standby_feedback = offmax_replication_slots = 10max_wal_senders = 10max_wal_size = 64min_wal_size = 5synchronous_commit = onsynchronous_standby_names = patroni1wal_block_size = 8192wal_buffers = 512wal_compression = offwal_keep_segments = 8wal_level = replicawal_log_hints = onwal_receiver_status_interval = 10wal_receiver_timeout = 60000wal_retrieve_retry_interval = 5000wal_segment_size = 2048wal_sender_timeout = 60000wal_sync_method = fdatasyncwal_writer_delay = 200wal_writer_flush_after = 128Kind regards", "msg_date": "Wed, 22 Dec 2021 23:31:42 +0530", "msg_from": "Ninad Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL files keep piling up" }, { "msg_contents": "Thanks for responding. On current master it looks like so:\n\nls pg_xlog/archive_status/ | grep ready | wc -l\n0\n\nls pg_xlog/archive_status/ | grep done | wc -l\n501\n\nKind regards.\n\n\nśr., 22 gru 2021 o 19:01 Ninad Shah <[email protected]> napisał(a):\n\n> A stupid question. How many .ready files are there?\n>\n>\n> Regards,\n> Ninad Shah\n>\n> On Wed, 22 Dec 2021 at 21:04, Zbigniew Kostrzewa <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The\n>> instances are configured to work in master <-> synchronous standby setup\n>> and both run in docker containers with pgdata volume mounted from host.\n>> When master is restarted the synchronous standby automatically takes the\n>> role of master and master starts operating as synchronous replica.\n>>\n>> Everything works fine but whenever I restart master instance it creates a\n>> new wal file in pg_xlog/ however old wal files are not cleaned up. They\n>> pile up and in my reproduction environment, where there are no operations\n>> on the database, they currently occupy:\n>> du -sh pg_xlog/\n>> 17G pg_xlog/\n>>\n>> and the number of the files is more or less:\n>> ls pg_xlog/ | grep -v history | wc -l\n>> 1024\n>>\n>> I've searched through the mailing lists and articles like\n>> https://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the\n>> main problems mentioned in most of the places are:\n>> 1. failing archive command\n>> 2. abandoned (inactive) replication slot\n>> 3. issues with checkpoints\n>> 3. too big wal_keep_segments value\n>>\n>> However, none of those problems seem to apply to my deployment:\n>> 1. I have `archive_mode` set to `off`\n>> 1a. I even tried enabling it and setting the `archive_command` to\n>> '/bin/true' just to confirm a suggestion found in one of the post on the\n>> mailing list (that didn't improve anything)\n>> 2. the only replication slot in `pg_replication_slots` is the one related\n>> to the synchronous replica and it is active\n>> 3. I've enabled `log_checkpoint` but doesn't see any errors or warnings\n>> related to checkpoints triggered either automatically or manually via\n>> `CHECKPOINT;`\n>> 4. 
`wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GB\n>>\n>> Is there anything that I should check that could shed some light on this?\n>>\n>> A few configuration options taken from `pg_settings`:\n>> archive_command = (disabled)\n>> archive_mode = off\n>> archive_timeout = 0\n>> check_function_bodies = on\n>> checkpoint_completion_target = 0.5\n>> checkpoint_flush_after = 32\n>> checkpoint_timeout = 300\n>> hot_standby = on\n>> hot_standby_feedback = off\n>> max_replication_slots = 10\n>> max_wal_senders = 10\n>> max_wal_size = 64\n>> min_wal_size = 5\n>> synchronous_commit = on\n>> synchronous_standby_names = patroni1\n>> wal_block_size = 8192\n>> wal_buffers = 512\n>> wal_compression = off\n>> wal_keep_segments = 8\n>> wal_level = replica\n>> wal_log_hints = on\n>> wal_receiver_status_interval = 10\n>> wal_receiver_timeout = 60000\n>> wal_retrieve_retry_interval = 5000\n>> wal_segment_size = 2048\n>> wal_sender_timeout = 60000\n>> wal_sync_method = fdatasync\n>> wal_writer_delay = 200\n>> wal_writer_flush_after = 128\n>>\n>> Kind regards\n>>\n>\n\nThanks for responding. On current master it looks like so:ls pg_xlog/archive_status/ | grep ready | wc -l0ls pg_xlog/archive_status/ | grep done | wc -l501Kind regards.śr., 22 gru 2021 o 19:01 Ninad Shah <[email protected]> napisał(a):A stupid question. How many .ready files are there?Regards,Ninad ShahOn Wed, 22 Dec 2021 at 21:04, Zbigniew Kostrzewa <[email protected]> wrote:Hi,I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The instances are configured to work in master <-> synchronous standby setup and both run in docker containers with pgdata volume mounted from host. When master is restarted the synchronous standby automatically takes the role of master and master starts operating as synchronous replica.Everything works fine but whenever I restart master instance it creates a new wal file in pg_xlog/ however old wal files are not cleaned up. They pile up and in my reproduction environment, where there are no operations on the database, they currently occupy:du -sh pg_xlog/17G     pg_xlog/and the number of the files is more or less:ls pg_xlog/ | grep -v history | wc -l1024I've searched through the mailing lists and articles like https://dataegret.com/2018/04/pg_wal-is-too-big-whats-going-on/ and the main problems mentioned in most of the places are:1. failing archive command2. abandoned (inactive) replication slot3. issues with checkpoints3. too big wal_keep_segments valueHowever, none of those problems seem to apply to my deployment:1. I have `archive_mode` set to `off`1a. I even tried enabling it and setting the `archive_command` to '/bin/true' just to confirm a suggestion found in one of the post on the mailing list (that didn't improve anything)2. the only replication slot in `pg_replication_slots` is the one related to the synchronous replica and it is active3. I've enabled `log_checkpoint` but doesn't see any errors or warnings related to checkpoints triggered either automatically or manually via `CHECKPOINT;`4. 
`wal_keep_segments` is set to 8 and `max_wal_size` is set to 1GBIs there anything that I should check that could shed some light on this?A few configuration options taken from `pg_settings`:archive_command = (disabled)archive_mode = offarchive_timeout = 0check_function_bodies = oncheckpoint_completion_target = 0.5checkpoint_flush_after = 32checkpoint_timeout = 300hot_standby = onhot_standby_feedback = offmax_replication_slots = 10max_wal_senders = 10max_wal_size = 64min_wal_size = 5synchronous_commit = onsynchronous_standby_names = patroni1wal_block_size = 8192wal_buffers = 512wal_compression = offwal_keep_segments = 8wal_level = replicawal_log_hints = onwal_receiver_status_interval = 10wal_receiver_timeout = 60000wal_retrieve_retry_interval = 5000wal_segment_size = 2048wal_sender_timeout = 60000wal_sync_method = fdatasyncwal_writer_delay = 200wal_writer_flush_after = 128Kind regards", "msg_date": "Wed, 22 Dec 2021 19:11:22 +0100", "msg_from": "Zbigniew Kostrzewa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL files keep piling up" }, { "msg_contents": "Zbigniew Kostrzewa <[email protected]> writes:\n> Thanks for responding. On current master it looks like so:\n> ls pg_xlog/archive_status/ | grep ready | wc -l\n> 0\n> ls pg_xlog/archive_status/ | grep done | wc -l\n> 501\n\nHmm, if you've got archiving turned off, I wonder why you have\nany .done files at all. Perhaps they are leftover from a time\nwhen you did have archiving on, and for some reason they are\nconfusing the non-archive-mode cleanup logic.\n\nAnyway, you could certainly manually remove the .done files and\nthe corresponding WAL segment files, and then see what happens.\n\nBTW, I'm sure you realize that 9.6.15 is not exactly current.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Dec 2021 13:18:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL files keep piling up" }, { "msg_contents": "I've just checked my second reproduction cluster (also Patroni but this\ntime on K8s). It also has non-empty `archive_status/` directory:\n\nls pg_xlog/ | grep -v history | wc -l\n165\n\nls pg_xlog/archive_status/ | wc -l\n81\nls pg_xlog/archive_status/ | grep done | wc -l\n81\n\nbut on this cluster I did not enable `archive_mode` at any time:\n\npostgres=# select name,setting from pg_settings where name like 'archive_%';\n name | setting\n-----------------+------------\n archive_command | (disabled)\n archive_mode | off\n archive_timeout | 0\n\nYes, I am aware 9.6 is pretty old, soon I will be replacing it with 11.x.\nThanks.\n\nKind regards.\n\n\nśr., 22 gru 2021 o 19:18 Tom Lane <[email protected]> napisał(a):\n\n> Zbigniew Kostrzewa <[email protected]> writes:\n> > Thanks for responding. On current master it looks like so:\n> > ls pg_xlog/archive_status/ | grep ready | wc -l\n> > 0\n> > ls pg_xlog/archive_status/ | grep done | wc -l\n> > 501\n>\n> Hmm, if you've got archiving turned off, I wonder why you have\n> any .done files at all. Perhaps they are leftover from a time\n> when you did have archiving on, and for some reason they are\n> confusing the non-archive-mode cleanup logic.\n>\n> Anyway, you could certainly manually remove the .done files and\n> the corresponding WAL segment files, and then see what happens.\n>\n> BTW, I'm sure you realize that 9.6.15 is not exactly current.\n>\n> regards, tom lane\n>\n\nI've just checked my second reproduction cluster (also Patroni but this time on K8s). 
It also has non-empty `archive_status/` directory:ls pg_xlog/ | grep -v history | wc -l165ls pg_xlog/archive_status/ | wc -l81ls pg_xlog/archive_status/ | grep done | wc -l81but on this cluster I did not enable `archive_mode` at any time:postgres=# select name,setting from pg_settings where name like 'archive_%';      name       |  setting   -----------------+------------ archive_command | (disabled) archive_mode    | off archive_timeout | 0Yes, I am aware 9.6 is pretty old, soon I will be replacing it with 11.x. Thanks.Kind regards.śr., 22 gru 2021 o 19:18 Tom Lane <[email protected]> napisał(a):Zbigniew Kostrzewa <[email protected]> writes:\n> Thanks for responding. On current master it looks like so:\n> ls pg_xlog/archive_status/ | grep ready | wc -l\n> 0\n> ls pg_xlog/archive_status/ | grep done | wc -l\n> 501\n\nHmm, if you've got archiving turned off, I wonder why you have\nany .done files at all.  Perhaps they are leftover from a time\nwhen you did have archiving on, and for some reason they are\nconfusing the non-archive-mode cleanup logic.\n\nAnyway, you could certainly manually remove the .done files and\nthe corresponding WAL segment files, and then see what happens.\n\nBTW, I'm sure you realize that 9.6.15 is not exactly current.\n\n                        regards, tom lane", "msg_date": "Wed, 22 Dec 2021 20:26:22 +0100", "msg_from": "Zbigniew Kostrzewa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL files keep piling up" }, { "msg_contents": "On Wed, 2021-12-22 at 20:26 +0100, Zbigniew Kostrzewa wrote:\n> Yes, I am aware 9.6 is pretty old, soon I will be replacing it with 11.x. Thanks.\n\nv11 is old as well. I suggest v14.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 11:04:09 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL files keep piling up" }, { "msg_contents": ">\n> I have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The\n> instances are configured to work in master <-> synchronous standby setup\n> and both run in docker containers with pgdata volume mounted from host.\n> When master is restarted the synchronous standby automatically takes the\n> role of master and master starts operating as synchronous replica.\n>\n\nSo it seems that Patroni starts Postgres in two phases. First it starts\nboth instances in standby mode and then, once it establishes who is the\nleader, it promotes one of the instances. It seems that when promoting a\nstandby to a leader a new timeline is created which causes new WAL file to\nbe created in pg_xlog/. In a result each restart results in a new timeline\nand a new WAL file. However, nothing cleans up the WAL files stamped with\nprevious timelines. Is there a way to figure out which WAL files can be\nsafely removed?\n\nI have a PostgreSQL (9.6.15) two node cluster setup with Patroni. The instances are configured to work in master <-> synchronous standby setup and both run in docker containers with pgdata volume mounted from host. When master is restarted the synchronous standby automatically takes the role of master and master starts operating as synchronous replica.So it seems that Patroni starts Postgres in two phases. First it starts both instances in standby mode and then, once it establishes who is the leader, it promotes one of the instances. It seems that when promoting a standby to a leader a new timeline is created which causes new WAL file to be created in pg_xlog/. 
In a result each restart results in a new timeline and a new WAL file. However, nothing cleans up the WAL files stamped with previous timelines. Is there a way to figure out which WAL files can be safely removed?", "msg_date": "Mon, 3 Jan 2022 23:48:14 +0100", "msg_from": "Zbigniew Kostrzewa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL files keep piling up" } ]
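For anyone hitting the same situation, a sketch of how to work out which segments in pg_xlog are still needed before removing anything by hand. The function names are the 9.6 spellings used in this thread (from version 10 onward they are renamed, e.g. pg_current_wal_lsn / pg_walfile_name); the data directory path and the segment name passed to pg_archivecleanup are placeholders, and any manual cleanup should only happen after confirming no replica or backup still needs the files.

-- the segment currently being written, and the oldest segment each replication slot still pins
SELECT pg_xlogfile_name(pg_current_xlog_location()) AS current_segment;

SELECT slot_name, active, restart_lsn,
       pg_xlogfile_name(restart_lsn) AS oldest_segment_needed
FROM pg_replication_slots;

-- the last checkpoint's REDO segment must also be kept; from the shell:
--   pg_controldata /path/to/pgdata | grep 'REDO WAL file'

-- anything older than all of the above (plus wal_keep_segments) is a candidate for removal;
-- pg_archivecleanup with -n only prints what it would delete, which is a safer first step:
--   pg_archivecleanup -n /path/to/pgdata/pg_xlog 000000150000005400000023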
[ { "msg_contents": "Hi,\n\nI am looking at a postgres 9.6 on rh7\nI see that checkpoint_write_time is huge which looks quite strange as the\naverage amount of data written is not that big.\nFor example in 5:30 hours today, data from pg_stat_bgwriter view, comparing\nvalues at 11AM and 4h30PM :\ncheckpoint_write_time 6986324\nbuffers_checkpoint 182447\n\nso, to my understanding, it takes almost 2 hours to write 1.6 GB of data.\n\nCan someone either correct my understanding, or shed some light on what can\ncause this ??\n\nthanks,\n\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\nHi,I am looking at a postgres 9.6 on rh7 I see that checkpoint_write_time is huge which looks quite strange as the average amount of data written is not that big.For example in 5:30 hours today, data from pg_stat_bgwriter view, comparing values at 11AM and 4h30PM :checkpoint_write_time 6986324buffers_checkpoint 182447so, to my understanding, it takes almost 2 hours to write 1.6 GB of data.Can someone either correct my understanding, or shed some light on what can cause this ??thanks,Marc MILLASSenior Architect+33607850334www.mokadb.com", "msg_date": "Tue, 28 Dec 2021 17:52:25 +0100", "msg_from": "Marc Millas <[email protected]>", "msg_from_op": true, "msg_subject": "9.6 write time" }, { "msg_contents": "Marc Millas <[email protected]> writes:\n> I am looking at a postgres 9.6 on rh7\n> I see that checkpoint_write_time is huge which looks quite strange as the\n> average amount of data written is not that big.\n\ncheckpoint_write_time is not the amount of time spent doing I/O;\nit's the elapsed wall-clock time in the write phase. If the I/O\nis being throttled because of an un-aggressive checkpoint completion\ntarget, it could be a lot more than the actual I/O time. What have\nyou got your checkpoint parameters set to?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 12:46:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 write time" }, { "msg_contents": "Hi Tom,\n\ncheckpoint completion target is 0.9\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Tue, Dec 28, 2021 at 6:46 PM Tom Lane <[email protected]> wrote:\n\n> Marc Millas <[email protected]> writes:\n> > I am looking at a postgres 9.6 on rh7\n> > I see that checkpoint_write_time is huge which looks quite strange as the\n> > average amount of data written is not that big.\n>\n> checkpoint_write_time is not the amount of time spent doing I/O;\n> it's the elapsed wall-clock time in the write phase. If the I/O\n> is being throttled because of an un-aggressive checkpoint completion\n> target, it could be a lot more than the actual I/O time. What have\n> you got your checkpoint parameters set to?\n>\n> regards, tom lane\n>\n\nHi Tom,checkpoint completion target is 0.9Marc MILLASSenior Architect+33607850334www.mokadb.comOn Tue, Dec 28, 2021 at 6:46 PM Tom Lane <[email protected]> wrote:Marc Millas <[email protected]> writes:\n> I am looking at a postgres 9.6 on rh7\n> I see that checkpoint_write_time is huge which looks quite strange as the\n> average amount of data written is not that big.\n\ncheckpoint_write_time is not the amount of time spent doing I/O;\nit's the elapsed wall-clock time in the write phase.  If the I/O\nis being throttled because of an un-aggressive checkpoint completion\ntarget, it could be a lot more than the actual I/O time.  
What have\nyou got your checkpoint parameters set to?\n\n                        regards, tom lane", "msg_date": "Tue, 28 Dec 2021 18:55:47 +0100", "msg_from": "Marc Millas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 write time" }, { "msg_contents": "Marc Millas <[email protected]> writes:\n> checkpoint completion target is 0.9\n\ncheckpoint_timeout is the more interesting number here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 13:34:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 write time" }, { "msg_contents": "Default ie. 5 minutes\n\nLe mar. 28 déc. 2021 à 19:34, Tom Lane <[email protected]> a écrit :\n\n> Marc Millas <[email protected]> writes:\n> > checkpoint completion target is 0.9\n>\n> checkpoint_timeout is the more interesting number here.\n>\n> regards, tom lane\n>\n\nDefault ie. 5 minutesLe mar. 28 déc. 2021 à 19:34, Tom Lane <[email protected]> a écrit :Marc Millas <[email protected]> writes:\n> checkpoint completion target is 0.9\n\ncheckpoint_timeout is the more interesting number here.\n\n                        regards, tom lane", "msg_date": "Tue, 28 Dec 2021 19:59:29 +0100", "msg_from": "Marc Millas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 write time" } ]
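A small sketch of how to read those counters as per-checkpoint averages rather than raw totals, which makes the throttling effect Tom describes visible. The column names are the ones pg_stat_bgwriter has in 9.6, and the expected-duration remark assumes timed checkpoints.

SELECT checkpoints_timed + checkpoints_req AS checkpoints,
       round((checkpoint_write_time / 1000.0
              / nullif(checkpoints_timed + checkpoints_req, 0))::numeric, 1)
           AS avg_write_phase_seconds,
       round(buffers_checkpoint::numeric
             / nullif(checkpoints_timed + checkpoints_req, 0), 0)
           AS avg_buffers_written,
       stats_reset
FROM pg_stat_bgwriter;

With checkpoint_timeout = 300 and checkpoint_completion_target = 0.9, a timed checkpoint deliberately spreads its writes over roughly 270 seconds even when only a few hundred buffers are dirty, so a large checkpoint_write_time together with a small buffers_checkpoint is expected rather than a sign of slow I/O. Setting log_checkpoints = on prints the same write/sync/total breakdown per checkpoint in the server log, which is an easy way to confirm it.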
[ { "msg_contents": "I have PostgreSQL Version 10.7 on AIX 7.1 set up with streaming\nreplication. Replication appears to be working fine and database contents\nare staying current.\n\n*ps -ef |grep sender*\n> postgres 54854022 30212254 0 10:10:29 - 0:00 postgres: wal sender\n> process postgres 10.253.15.123(47852) streaming 54/BB631A30\n>\n\n\n> *ps -ef |grep receiver*\n> postgres 34079622 9897420 0 10:10:29 - 0:00 postgres: wal\n> receiver process streaming 54/BB631A30\n\n\nThe problem I have is related to the wal sender process. The AUTOVACUUM and\nVACUUM are not cleaning up dead tuples in the tables because it is\nreporting that they are \"nonremovable\" due to the backend_xmin that is not\nchanging. This has resulted in queries on some tables taking seconds or\nminutes to return under 100 tuples that should take 5ms or less.\n\n*VACUUM VERBOSE scttlk_tbl;*\n\nINFO: \"scttlk_tbl\": found 0 removable, 149715 nonremovable row versions in\n3322\nout of 12152 pages\nDETAIL: 149699 dead row versions cannot be removed yet, oldest xmin:\n340818216\nThere were 21246 unused item pointers.\nSkipped 0 pages due to buffer pins, 8830 frozen pages.\n\nWhen I check the backend_xmin that is indicated as preventing the dead\ntuples from being removed, the PID it points to is the wal sender.\n\n\n*SELECT pid, datname, usename, state, backend_xid, backend_xminFROM\npg_stat_activity WHERE backend_xmin = 340818216;*\n\n pid | datname | usename | state | backend_xid | backend_xmin\n----------+--------------+----------+--------+-------------+--------------\n54854022 | | postgres | active | | 340818216\n\nI have determined that if I shut down the replication database, the wal\nsender process will shut down. When I do this and run my VACUUM, it is then\nable to remove the dead tuples that were nonremovable prior. However, when\nI restart the replication database, the wal sender becomes active again and\ntries to pick up where it left off, at the same backend_xmin.\n\nI believe the issue may be related to another product we are using as part\nof the replication process called \"Attunity\". But we have shut that down\nand restarted it to make sure it did not have any long running queries or\nother hooks that may be affecting the wal sender and preventing the\nbackend_xmin from moving forward. It just does not seem to do so.\n\nMy questions are as follows:\n\n1) Is there anything I can do short of shutting down and restarting the\nprimary (production system) that would allow the backend_xmin to move\nforward?\n\n2) Is it possible to \"kill\" the WAL sender process? I know it's possible,\nbut what I mean is will it crash Postgres doing that? Or will it simply\nrespawn?\n\nUltimately, the goal is to get backend_xmin to be caught up to work being\ndone today and not waiting on something from days or weeks ago to release\nso the autovacuum can take place.\n\nHope I'm explaining myself right! Please let me know any advice you may\nhave on this, and thanks in advance for any tips on where to look or how to\naddress this.\n\nRegards,\n\nSteve N.\n\nI have PostgreSQL Version 10.7 on AIX 7.1 set up with streaming replication. 
Replication appears to be working fine and database contents are staying current.ps -ef |grep senderpostgres 54854022 30212254   0 10:10:29      -  0:00 postgres: wal sender process postgres 10.253.15.123(47852) streaming 54/BB631A30 ps -ef |grep receiverpostgres 34079622  9897420   0 10:10:29      -  0:00 postgres: wal receiver process   streaming 54/BB631A30The problem I have is related to the wal sender process. The AUTOVACUUM and VACUUM are not cleaning up dead tuples in the tables because it is reporting that they are \"nonremovable\" due to the backend_xmin that is not changing. This has resulted in queries on some tables taking seconds or minutes to return under 100 tuples that should take 5ms or less.VACUUM VERBOSE scttlk_tbl;INFO:  \"scttlk_tbl\": found 0 removable, 149715 nonremovable row versions in 3322out of 12152 pagesDETAIL: 149699 dead row versions cannot be removed yet, oldest xmin: 340818216There were 21246 unused item pointers.Skipped 0 pages due to buffer pins, 8830 frozen pages.When I check the backend_xmin that is indicated as preventing the dead tuples from being removed, the PID it points to is the wal sender.SELECT pid, datname, usename, state, backend_xid, backend_xminFROM pg_stat_activity WHERE backend_xmin = 340818216;   pid    |   datname    | usename  | state  | backend_xid | backend_xmin----------+--------------+----------+--------+-------------+--------------54854022 | | postgres | active | | 340818216I have determined that if I shut down the replication database, the wal sender process will shut down. When I do this and run my VACUUM, it is then able to remove the dead tuples that were nonremovable prior. However, when I restart the replication database, the wal sender becomes active again and tries to pick up where it left off, at the same backend_xmin. I believe the issue may be related to another product we are using as part of the replication process called \"Attunity\". But we have shut that down and restarted it to make sure it did not have any long running queries or other hooks that may be affecting the wal sender and preventing the backend_xmin from moving forward. It just does not seem to do so.My questions are as follows:1) Is there anything I can do short of shutting down and restarting the primary (production system) that would allow the backend_xmin to move forward?2)  Is it possible to \"kill\" the WAL sender process? I know it's possible, but what I mean is will it crash Postgres doing that? Or will it simply respawn?Ultimately, the goal is to get backend_xmin to be caught up to work being done today and not waiting on something from days or weeks ago to release so the autovacuum can take place.Hope I'm explaining myself right! Please let me know any advice you may have on this, and thanks in advance for any tips on where to look or how to address this.Regards,Steve N.", "msg_date": "Tue, 4 Jan 2022 12:16:00 -0500", "msg_from": "Steve Nixon <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM: Nonremovable rows due to wal sender process" }, { "msg_contents": "Hello\nThis is exactly the reason why you need to track the age of the oldest transaction on the primary itself and on every replica that has hot_standby_feedback = on. By default hot_standby_feedback is disabled.\n\n> Is there anything I can do short of shutting down and restarting the primary (production system) that would allow the backend_xmin to move forward?\n\nYou need to investigate this replica. Not a primary database. What transactions are in progress? 
Is it reasonable? Is hot_standby_feedback really needed here and is it reasonable to pay for its impact across the entire cluster?\nIn my practice, hot_standby_feedback = on is only needed on replicas intended for fast OLTP queries. And where any long requests are prohibited. \n\nregards, Sergei\n\n\n", "msg_date": "Tue, 04 Jan 2022 23:17:52 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re:VACUUM: Nonremovable rows due to wal sender process" }, { "msg_contents": "Thank you for the quick reply. You are correct that hot_standby_feedback is\nindeed on. I'm trying to find out why at the moment because we are not\nusing the replication for any queries that would need that turned on. I was\njust made aware of that after posting my question, and I am looking to get\npermission to turn it off. I have access to the primary and the streaming\nreplication, but I do not have access to the replication being done by this\n\"Attunity\" product. Our parent company is managing that.\n\nThe AUTOVACUUM appears to have stopped working sometime around NOV 22. If I\nlook on the replication server I have access to, one of the\npg_stat_activity entries are older than today. Based on that, I suspect\nthat the culprit long running transaction may be on the corporate\nreplicated database that I do not have direct access to.\n\nselect pid, backend_xmin, backend_start, backend_type from pg_stat_activity;\n\n-[ RECORD 1 ]-+------------------------------\npid | 63111452\nbackend_xmin | 661716178\nbackend_start | 2022-01-04 15:52:42.269666-05\nbackend_type | client backend\n-[ RECORD 2 ]-+------------------------------\npid | 46400004\nbackend_xmin |\nbackend_start | 2022-01-04 11:10:28.939006-05\nbackend_type | startup\n-[ RECORD 3 ]-+------------------------------\npid | 46270090\nbackend_xmin |\nbackend_start | 2022-01-04 11:10:28.979557-05\nbackend_type | background writer\n-[ RECORD 4 ]-+------------------------------\npid | 918684\nbackend_xmin |\nbackend_start | 2022-01-04 11:10:28.978996-05\nbackend_type | checkpointer\n-[ RECORD 5 ]-+------------------------------\npid | 34079622\nbackend_xmin |\nbackend_start | 2022-01-04 11:10:29.172959-05\nbackend_type | walreceiver\n\nThanks again. At least it helped me figure out where I should be looking.\n\nSteve Nixon\n\n\n\nOn Tue, 4 Jan 2022 at 15:17, Sergei Kornilov <[email protected]> wrote:\n\n> Hello\n> This is exactly the reason why you need to track the age of the oldest\n> transaction on the primary itself and on every replica that has\n> hot_standby_feedback = on. By default hot_standby_feedback is disabled.\n>\n> > Is there anything I can do short of shutting down and restarting the\n> primary (production system) that would allow the backend_xmin to move\n> forward?\n>\n> You need to investigate this replica. Not a primary database. What\n> transactions are in progress? Is it reasonable? Is hot_standby_feedback\n> really needed here and is it reasonable to pay for its impact across the\n> entire cluster?\n> In my practice, hot_standby_feedback = on is only needed on replicas\n> intended for fast OLTP queries. And where any long requests are prohibited.\n>\n> regards, Sergei\n>\n\nThank you for the quick reply. You are correct that hot_standby_feedback is indeed on. I'm trying to find out why at the moment because we are not using the replication for any queries that would need that turned on. I was just made aware of that after posting my question, and I am looking to get permission to turn it off. 
I have access to the primary and the streaming replication, but I do not have access to the replication being done by this \"Attunity\" product. Our parent company is managing that. The AUTOVACUUM appears to have stopped working sometime around NOV 22. If I look on the replication server I have access to, one of the pg_stat_activity entries are older than today. Based on that, I suspect that the culprit long running transaction may be on the corporate replicated database  that I do not have direct access to.select pid, backend_xmin, backend_start, backend_type from pg_stat_activity;-[ RECORD 1 ]-+------------------------------pid | 63111452backend_xmin | 661716178backend_start | 2022-01-04 15:52:42.269666-05backend_type  | client backend-[ RECORD 2 ]-+------------------------------pid | 46400004backend_xmin  | backend_start | 2022-01-04 11:10:28.939006-05backend_type  | startup-[ RECORD 3 ]-+------------------------------pid | 46270090backend_xmin  | backend_start | 2022-01-04 11:10:28.979557-05backend_type  | background writer-[ RECORD 4 ]-+------------------------------pid           | 918684backend_xmin  | backend_start | 2022-01-04 11:10:28.978996-05backend_type  | checkpointer-[ RECORD 5 ]-+------------------------------pid | 34079622backend_xmin  | backend_start | 2022-01-04 11:10:29.172959-05backend_type  | walreceiverThanks again. At least it helped me figure out where I should be looking. Steve NixonOn Tue, 4 Jan 2022 at 15:17, Sergei Kornilov <[email protected]> wrote:Hello\nThis is exactly the reason why you need to track the age of the oldest transaction on the primary itself and on every replica that has hot_standby_feedback = on. By default hot_standby_feedback is disabled.\n\n> Is there anything I can do short of shutting down and restarting the primary (production system) that would allow the backend_xmin to move forward?\n\nYou need to investigate this replica. Not a primary database. What transactions are in progress? Is it reasonable? Is hot_standby_feedback really needed here and is it reasonable to pay for its impact across the entire cluster?\nIn my practice, hot_standby_feedback = on is only needed on replicas intended for fast OLTP queries. And where any long requests are prohibited. \n\nregards, Sergei", "msg_date": "Tue, 4 Jan 2022 16:01:11 -0500", "msg_from": "Steve Nixon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM: Nonremovable rows due to wal sender process" } ]
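The two numbered questions in this thread are only partly answered, so a hedged sketch of the usual approach. On the primary, pg_stat_replication shows which wal sender is carrying a feedback xmin; the application_name column should separate the ordinary streaming standby from the Attunity-driven connection, assuming each client sets one:

  SELECT pid, application_name, client_addr, state, backend_xmin
  FROM pg_stat_replication
  ORDER BY age(backend_xmin) DESC NULLS LAST;    -- the sender pinning the oldest xmin first

If the external product uses a replication slot, pg_replication_slots.xmin and catalog_xmin on the primary are worth checking for the same reason. As for killing the sender: SELECT pg_terminate_backend(<pid>) will not crash the server, but the standby normally reconnects within seconds and, with feedback still enabled, reports the same xmin again, which matches the behaviour already observed when the replica was shut down and restarted.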
[ { "msg_contents": "Hi\n\nI have postgres_fdw table called tbl_link. The source table is 2.5 GB in size with 122 lines (some lines has 70MB bytea column, but not the ones I select in the example)\nI noticed that when I put the specific ids in the list \"where id in (140,144,148)\" it works fast (few ms), but when I put the same list as select \"where id in (select 140 as id union select 144 union select 148)\" it takes 50 seconds. This select union is just for the example, I obviously have a different select (which by itself takes few ms but cause the whole insert query to take 10000x more time)\n\nWhy is that? How can I still use regular select and still get reasonable response time?\n\nThanks\n\n\nFAST:\nselect lnk.*\ninto local_1\nfrom tbl_link lnk\nwhere id in (140,144,148)\n\n\"Foreign Scan on tbl_link lnk (cost=100.00..111.61 rows=3 width=700) (actual time=4.161..4.167 rows=3 loops=1)\"\n\"Planning Time: 0.213 ms\"\n\"Execution Time: 16.251 ms\"\n\n\n\nSLOW:\nselect lnk.*\ninto local_1\nfrom tbl_link lnk\nwhere id in (select 140 as id union select 144 union select 148)\n\n\n\"Hash Join (cost=100.18..113.88 rows=3 width=700) (actual time=45398.721..46812.100 rows=3 loops=1)\"\n\" Hash Cond: (lnk.id = (140))\"\n\" -> Foreign Scan on tbl_link lnk (cost=100.00..113.39 rows=113 width=700) (actual time=45398.680..46812.026 rows=112 loops=1)\"\n\" -> Hash (cost=0.14..0.14 rows=3 width=4) (actual time=0.023..0.026 rows=3 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\n\" -> HashAggregate (cost=0.08..0.11 rows=3 width=4) (actual time=0.017..0.021 rows=3 loops=1)\"\n\" Group Key: (140)\"\n\" Batches: 1 Memory Usage: 24kB\"\n\" -> Append (cost=0.00..0.07 rows=3 width=4) (actual time=0.005..0.009 rows=3 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.001 rows=1 loops=1)\"\n\"Planning Time: 0.541 ms\"\n\"Execution Time: 46827.945 ms\"\n\n\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. If you are not the intended recipient, please inform the sender immediately and delete this email: you should not copy or use this e-mail for any purpose nor disclose its contents to any person.\n\n\n\n\n\n\n\n\n\nHi \n\n \nI have postgres_fdw table called tbl_link.  The source table is 2.5 GB in size with 122 lines (some lines has 70MB bytea column, but not the ones I select in the example)\nI noticed that when I put the specific ids in the list \"where id in (140,144,148)\" it works fast (few ms), but when I put the same list as select \"where id in (select 140 as id union\n select 144  union select 148)\" it takes 50 seconds.  This select union is just for the example, I obviously have a different select (which by itself takes few ms but cause the whole insert query to take 10000x more time)\n \nWhy is that?  
How can I still use regular select and still get reasonable response time?\n \nThanks\n \n \nFAST:\nselect lnk.*\n\ninto local_1\nfrom tbl_link lnk\nwhere id in (140,144,148)\n \n\"Foreign Scan on tbl_link lnk  (cost=100.00..111.61 rows=3 width=700) (actual time=4.161..4.167 rows=3 loops=1)\"\n\"Planning Time: 0.213 ms\"\n\"Execution Time: 16.251 ms\"\n \n \n \nSLOW:\nselect lnk.*\n\ninto local_1\nfrom tbl_link lnk\nwhere id in (select 140 as id union select 144  union select 148)\n \n \n\"Hash Join  (cost=100.18..113.88 rows=3 width=700) (actual time=45398.721..46812.100 rows=3 loops=1)\"\n\"  Hash Cond: (lnk.id = (140))\"\n\"  ->  Foreign Scan on tbl_link lnk  (cost=100.00..113.39 rows=113 width=700) (actual time=45398.680..46812.026 rows=112 loops=1)\"\n\"  ->  Hash  (cost=0.14..0.14 rows=3 width=4) (actual time=0.023..0.026 rows=3 loops=1)\"\n\"        Buckets: 1024  Batches: 1  Memory Usage: 9kB\"\n\"        ->  HashAggregate  (cost=0.08..0.11 rows=3 width=4) (actual time=0.017..0.021 rows=3 loops=1)\"\n\"              Group Key: (140)\"\n\"              Batches: 1  Memory Usage: 24kB\"\n\"              ->  Append  (cost=0.00..0.07 rows=3 width=4) (actual time=0.005..0.009 rows=3 loops=1)\"\n\"                    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\"\n\"                    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)\"\n\"                    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.001 rows=1 loops=1)\"\n\"Planning Time: 0.541 ms\"\n\"Execution Time: 46827.945 ms\"\n \n \n\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. If you are not the intended recipient, please inform the sender immediately and delete this email: you\n should not copy or use this e-mail for any purpose nor disclose its contents to any person.", "msg_date": "Thu, 6 Jan 2022 07:43:46 +0000", "msg_from": "Avi Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Same query 10000x More Time" }, { "msg_contents": "On Thu, 6 Jan 2022 at 13:13, Avi Weinberg <[email protected]> wrote:\n\n> Hi\n>\n>\n>\n> I have postgres_fdw table called tbl_link. The source table is 2.5 GB in\n> size with 122 lines (some lines has 70MB bytea column, but not the ones I\n> select in the example)\n>\n> I noticed that when I put the specific ids in the list \"where id in\n> (140,144,148)\" it works fast (few ms), but when I put the same list as\n> select \"where id in (select 140 as id union select 144 union select 148)\"\n> it takes 50 seconds. This select union is just for the example, I\n> obviously have a different select (which by itself takes few ms but cause\n> the whole insert query to take 10000x more time)\n>\n>\n>\n> Why is that? How can I still use regular select and still get reasonable\n> response time?\n>\n>\n>\n> Thanks\n>\n>\n>\n\ncouple of things:\nPostgreSQL: Documentation: 14: F.35. 
postgres_fdw\n<https://www.postgresql.org/docs/current/postgres-fdw.html>\n<https://www.postgresql.org/docs/current/postgres-fdw.html>when you set\nyour foreign server what are your\nuse_remote_estimate\nfetch_size\nparams for the foreign server.\n\nyou need to know there are certain restrictions on what gets pushed down to\nthe remote server\ni generally use postgres/postgres_fdw.sql at master · postgres/postgres\n(github.com)\n<https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/sql/postgres_fdw.sql>\nas\na reference\nif you predicates are not pushed down, it will bring all the rows from the\nforeign server to your local server (and fetch_size value and network io\nwill add to delay)\nand given you used select * , it will be a lot of io, so maybe restrict\nonly to columns needed after being filtered would help.\n\n\nyou can try by running\nexplain (verbose,analyze) query and then also enabling log_statement =\n'all' / log_min_duration_statement = 0\non the foreign server to see the actual plan for the foreign scan.\n\nThat might help in trouble shooting.\n\n\nas always, i have little production exposure. If i am wrong, i can be\ncorrected.\n\nOn Thu, 6 Jan 2022 at 13:13, Avi Weinberg <[email protected]> wrote:\n\n\nHi \n\n \nI have postgres_fdw table called tbl_link.  The source table is 2.5 GB in size with 122 lines (some lines has 70MB bytea column, but not the ones I select in the example)\nI noticed that when I put the specific ids in the list \"where id in (140,144,148)\" it works fast (few ms), but when I put the same list as select \"where id in (select 140 as id union\n select 144  union select 148)\" it takes 50 seconds.  This select union is just for the example, I obviously have a different select (which by itself takes few ms but cause the whole insert query to take 10000x more time)\n \nWhy is that?  How can I still use regular select and still get reasonable response time?\n \nThanks\n couple of things:PostgreSQL: Documentation: 14: F.35. postgres_fdwwhen you set your foreign server what are youruse_remote_estimatefetch_size params for the foreign server.you need to know there are certain restrictions on what gets pushed down to the remote serveri generally use postgres/postgres_fdw.sql at master · postgres/postgres (github.com) as a referenceif you predicates are not pushed down, it will bring all the rows from the foreign server to your local server (and fetch_size value and network io will add to delay)and given you used select * , it will be a lot of io, so maybe restrict only to columns needed after being filtered would help.you can try by runningexplain (verbose,analyze) query  and then also enabling log_statement = 'all' / log_min_duration_statement = 0 on the foreign server to see the actual plan for the foreign scan.That might help in trouble shooting.as always, i have little production exposure. If i am wrong, i can be corrected.", "msg_date": "Thu, 6 Jan 2022 13:50:55 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query 10000x More Time" }, { "msg_contents": "At Thu, 6 Jan 2022 13:50:55 +0530, Vijaykumar Jain <[email protected]> wrote in \n> On Thu, 6 Jan 2022 at 13:13, Avi Weinberg <[email protected]> wrote:\n> \n> > Hi\n> >\n> >\n> >\n> > I have postgres_fdw table called tbl_link. 
The source table is 2.5 GB in\n> > size with 122 lines (some lines has 70MB bytea column, but not the ones I\n> > select in the example)\n> >\n> > I noticed that when I put the specific ids in the list \"where id in\n> > (140,144,148)\" it works fast (few ms), but when I put the same list as\n> > select \"where id in (select 140 as id union select 144 union select 148)\"\n> > it takes 50 seconds. This select union is just for the example, I\n> > obviously have a different select (which by itself takes few ms but cause\n> > the whole insert query to take 10000x more time)\n> >\n> >\n> >\n> > Why is that? How can I still use regular select and still get reasonable\n> > response time?\n> >\n> >\n> >\n> > Thanks\n> >\n> >\n> >\n> \n> couple of things:\n> PostgreSQL: Documentation: 14: F.35. postgres_fdw\n> <https://www.postgresql.org/docs/current/postgres-fdw.html>\n> <https://www.postgresql.org/docs/current/postgres-fdw.html>when you set\n> your foreign server what are your\n> use_remote_estimate\n> fetch_size\n> params for the foreign server.\n> \n> you need to know there are certain restrictions on what gets pushed down to\n> the remote server\n> i generally use postgres/postgres_fdw.sql at master · postgres/postgres\n> (github.com)\n> <https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/sql/postgres_fdw.sql>\n> as\n> a reference\n> if you predicates are not pushed down, it will bring all the rows from the\n> foreign server to your local server (and fetch_size value and network io\n> will add to delay)\n> and given you used select * , it will be a lot of io, so maybe restrict\n> only to columns needed after being filtered would help.\n> \n> \n> you can try by running\n> explain (verbose,analyze) query and then also enabling log_statement =\n> 'all' / log_min_duration_statement = 0\n> on the foreign server to see the actual plan for the foreign scan.\n> \n> That might help in trouble shooting.\n> \n> \n> as always, i have little production exposure. If i am wrong, i can be\n> corrected.\n\nIn this specific case, the FAST query doesn't contain a join and its\npredicate can be pushed down to remote. On the other hand the SLOW\none contains a join. The planner considers remote join only when the\nboth hands of a join are on the same foreign server. Tthis is not the\ncase since the inner subquery is not even a foreign scan. The planner\ndoesn't consider the possibility that a subquery is executable\nanywhere.\n\nAs the result, the local inevitably draw all rows from remote table to\njoin with the result of the subquery on-local, which should be quite\nslow.\n\nIt could be improved, but I don't think we are going to consider that\ncase because the SLOW query seems like a kind of bad query, which can\nbe improved by rewriting to the FAST one.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 06 Jan 2022 18:39:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query 10000x More Time" }, { "msg_contents": "Hi Kyotaro Horiguchi and Vijaykumar Jain,\n\nThanks for your quick reply!\n\nI understand that the fact the slow query has a join caused this problem. 
However, why can't Postgres evaluate the table of the \"IN\" clause (select 140 as id union select 144 union select 148) and based on its size decide what is more optimal.\nPush the local table to the linked server to perform the join on the linked server\nPull the linked server table to local to perform the join on the local.\n\nIn my case the table size of the local is million times smaller than the table size of the remote.\n\n\n\nselect lnk.*\ninto local_1\nfrom tbl_link lnk\nwhere id in (select 140 as id union select 144 union select 148)\n\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi [mailto:[email protected]]\nSent: Thursday, January 6, 2022 11:39 AM\nTo: [email protected]\nCc: Avi Weinberg <[email protected]>; [email protected]\nSubject: Re: Same query 10000x More Time\n\nAt Thu, 6 Jan 2022 13:50:55 +0530, Vijaykumar Jain <[email protected]> wrote in\n> On Thu, 6 Jan 2022 at 13:13, Avi Weinberg <[email protected]> wrote:\n>\n> > Hi\n> >\n> >\n> >\n> > I have postgres_fdw table called tbl_link. The source table is 2.5\n> > GB in size with 122 lines (some lines has 70MB bytea column, but not\n> > the ones I select in the example)\n> >\n> > I noticed that when I put the specific ids in the list \"where id in\n> > (140,144,148)\" it works fast (few ms), but when I put the same list\n> > as select \"where id in (select 140 as id union select 144 union select 148)\"\n> > it takes 50 seconds. This select union is just for the example, I\n> > obviously have a different select (which by itself takes few ms but\n> > cause the whole insert query to take 10000x more time)\n> >\n> >\n> >\n> > Why is that? How can I still use regular select and still get\n> > reasonable response time?\n> >\n> >\n> >\n> > Thanks\n> >\n> >\n> >\n>\n> couple of things:\n> PostgreSQL: Documentation: 14: F.35. 
postgres_fdw\n> <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww\n> .postgresql.org%2Fdocs%2Fcurrent%2Fpostgres-fdw.html&amp;data=04%7C01%\n> 7Caviw%40gilat.com%7Cc8585d2ddbeb4a09e3e208d9d0f8684c%7C7300b1a3573a40\n> 1092a61c65cd85e927%7C0%7C0%7C637770587595033327%7CUnknown%7CTWFpbGZsb3\n> d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7\n> C3000&amp;sdata=bVBCIOkXrVkkI%2BDH44QmAZmm%2FJLz%2FWYp5Wda%2FrJRfDA%3D\n> &amp;reserved=0>\n> <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww\n> .postgresql.org%2Fdocs%2Fcurrent%2Fpostgres-fdw.html&amp;data=04%7C01%\n> 7Caviw%40gilat.com%7Cc8585d2ddbeb4a09e3e208d9d0f8684c%7C7300b1a3573a40\n> 1092a61c65cd85e927%7C0%7C0%7C637770587595033327%7CUnknown%7CTWFpbGZsb3\n> d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=bVBCIOkXrVkkI%2BDH44QmAZmm%2FJLz%2FWYp5Wda%2FrJRfDA%3D&amp;reserved=0>when you set your foreign server what are your use_remote_estimate fetch_size params for the foreign server.\n>\n> you need to know there are certain restrictions on what gets pushed\n> down to the remote server i generally use postgres/postgres_fdw.sql at\n> master * postgres/postgres\n> (github.com)\n> <https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit\n> hub.com%2Fpostgres%2Fpostgres%2Fblob%2Fmaster%2Fcontrib%2Fpostgres_fdw\n> %2Fsql%2Fpostgres_fdw.sql&amp;data=04%7C01%7Caviw%40gilat.com%7Cc8585d\n> 2ddbeb4a09e3e208d9d0f8684c%7C7300b1a3573a401092a61c65cd85e927%7C0%7C0%\n> 7C637770587595033327%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQI\n> joiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=TzqeuCMrThZ\n> RUkq9m%2F97N8bRgm9wu3VFjTnoZpt%2BA7w%3D&amp;reserved=0>\n> as\n> a reference\n> if you predicates are not pushed down, it will bring all the rows from\n> the foreign server to your local server (and fetch_size value and\n> network io will add to delay) and given you used select * , it will be\n> a lot of io, so maybe restrict only to columns needed after being\n> filtered would help.\n>\n>\n> you can try by running\n> explain (verbose,analyze) query and then also enabling log_statement\n> = 'all' / log_min_duration_statement = 0 on the foreign server to see\n> the actual plan for the foreign scan.\n>\n> That might help in trouble shooting.\n>\n>\n> as always, i have little production exposure. If i am wrong, i can be\n> corrected.\n\nIn this specific case, the FAST query doesn't contain a join and its predicate can be pushed down to remote. On the other hand the SLOW one contains a join. The planner considers remote join only when the both hands of a join are on the same foreign server. Tthis is not the case since the inner subquery is not even a foreign scan. The planner doesn't consider the possibility that a subquery is executable anywhere.\n\nAs the result, the local inevitably draw all rows from remote table to join with the result of the subquery on-local, which should be quite slow.\n\nIt could be improved, but I don't think we are going to consider that case because the SLOW query seems like a kind of bad query, which can be improved by rewriting to the FAST one.\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. 
If you are not the intended recipient, please inform the sender immediately and delete this email: you should not copy or use this e-mail for any purpose nor disclose its contents to any person.\n\n\n", "msg_date": "Thu, 6 Jan 2022 10:20:49 +0000", "msg_from": "Avi Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Same query 10000x More Time" }, { "msg_contents": "On Thu, Jan 6, 2022, 3:50 PM Avi Weinberg <[email protected]> wrote:\n\n> Hi Kyotaro Horiguchi and Vijaykumar Jain,\n>\n> Thanks for your quick reply!\n>\n> I understand that the fact the slow query has a join caused this problem.\n> However, why can't Postgres evaluate the table of the \"IN\" clause (select\n> 140 as id union select 144 union select 148) and based on its size decide\n> what is more optimal.\n> Push the local table to the linked server to perform the join on the\n> linked server\n> Pull the linked server table to local to perform the join on the local.\n>\n> In my case the table size of the local is million times smaller than the\n> table size of the remote.\n\n\nI understand when the optimizer makes a decision it uses stats to use the\nleast expensive plan to get the result.\nI can reply but I am pretty sure making an analogy to a local setup of big\nand small table is not the same as small local table and a big remote table.\nI would leave it to the experts here unless you are open to read the src\nfor postgres_fdw extension.\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c\n\n\nThere must be a reason if that is beyond cost calculation as to why this\nhappens.\nElse if this is all just cost based, you can try tweaking the cost params\nand see if you can get a better plan.\n\nFor exp, if you force parallel cost to 0 on the foreign server, it may use\nparallel workers and do some speed up, but given my exp, fighting optimizer\nis mostly asking for trouble :)\n\nOn Thu, Jan 6, 2022, 3:50 PM Avi Weinberg <[email protected]> wrote:Hi Kyotaro Horiguchi and Vijaykumar Jain,\n\nThanks for your quick reply!\n\nI understand that the fact the slow query has a join caused this problem.  
However, why can't Postgres evaluate the table of the \"IN\" clause (select 140 as id union select 144  union select 148) and based on its size decide what is more optimal.\nPush the local table to the linked server to perform the join on the linked server\nPull the linked server table to local to perform the join on the local.\n\nIn my case the table size of the local is million times smaller than the table size of the remote.I understand when the optimizer makes a decision it uses stats to use the least expensive plan to get the result.I can reply but I am pretty sure making an analogy to a local setup of big and small table is not the same as small local table and a big remote table.I would leave it to the experts here unless you  are open to read the src for postgres_fdw extension.https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.cThere must be a reason if that is beyond cost calculation as to why this happens.Else if this is all just cost based, you can try tweaking the cost params and see if you can get a better plan.For exp, if you force parallel cost to 0 on the foreign server, it may use parallel workers and do some speed up, but given my exp, fighting optimizer is mostly asking for trouble :)", "msg_date": "Thu, 6 Jan 2022 18:22:56 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query 10000x More Time" }, { "msg_contents": "Thanks for the input\n\npostgres_fdw seems to bring the entire table even if all I use in the join is just the id from the remote table. I know it is possible to query for the missing ids and then perform the delete, but I wonder why all types of joins are so inefficient.\n\n\n DELETE FROM tbl_local lcl\n WHERE NOT EXISTS (\n SELECT id FROM tbl_link lnk\n WHERE lnk.id = lcl.id );\n\n\n\"Delete on tbl_local lcl (cost=114.59..122.14 rows=3 width=730) (actual time=62153.636..62153.639 rows=0 loops=1)\"\n\" -> Hash Anti Join (cost=114.59..122.14 rows=3 width=730) (actual time=62153.633..62153.636 rows=0 loops=1)\"\n\" Hash Cond: (lcl.id = lnk.id)\"\n\" -> Seq Scan on tbl_local lcl (cost=0.00..7.11 rows=111 width=14) (actual time=0.022..0.062 rows=111 loops=1)\"\n\" -> Hash (cost=113.24..113.24 rows=108 width=732) (actual time=55984.489..55984.490 rows=112 loops=1)\"\n\" Buckets: 1024 (originally 1024) Batches: 32 (originally 1) Memory Usage: 240024kB\"\n\" -> Foreign Scan on tbl_link lnk (cost=100.00..113.24 rows=108 width=732) (actual time=48505.926..51893.668 rows=112 loops=1)\"\n\"Planning Time: 0.237 ms\"\n\"Execution Time: 62184.253 ms\"\n\nFrom: Vijaykumar Jain [mailto:[email protected]]\nSent: Thursday, January 6, 2022 2:53 PM\nTo: Avi Weinberg <[email protected]>\nCc: Kyotaro Horiguchi <[email protected]>; pgsql-performa. <[email protected]>\nSubject: Re: Same query 10000x More Time\n\n\nOn Thu, Jan 6, 2022, 3:50 PM Avi Weinberg <[email protected]<mailto:[email protected]>> wrote:\nHi Kyotaro Horiguchi and Vijaykumar Jain,\n\nThanks for your quick reply!\n\nI understand that the fact the slow query has a join caused this problem. 
However, why can't Postgres evaluate the table of the \"IN\" clause (select 140 as id union select 144 union select 148) and based on its size decide what is more optimal.\nPush the local table to the linked server to perform the join on the linked server\nPull the linked server table to local to perform the join on the local.\n\nIn my case the table size of the local is million times smaller than the table size of the remote.\n\nI understand when the optimizer makes a decision it uses stats to use the least expensive plan to get the result.\nI can reply but I am pretty sure making an analogy to a local setup of big and small table is not the same as small local table and a big remote table.\nI would leave it to the experts here unless you are open to read the src for postgres_fdw extension.\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c<https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fpostgres%2Fpostgres%2Fblob%2Fmaster%2Fcontrib%2Fpostgres_fdw%2Fpostgres_fdw.c&data=04%7C01%7Caviw%40gilat.com%7Cd57dda52c4594051c3fe08d9d1138309%7C7300b1a3573a401092a61c65cd85e927%7C0%7C0%7C637770704011750683%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=KMdiTS%2FaX%2Fi%2B4I80DjL2g2xbmY3kFCUyMli%2BNpWwlBM%3D&reserved=0>\n\n\nThere must be a reason if that is beyond cost calculation as to why this happens.\nElse if this is all just cost based, you can try tweaking the cost params and see if you can get a better plan.\n\nFor exp, if you force parallel cost to 0 on the foreign server, it may use parallel workers and do some speed up, but given my exp, fighting optimizer is mostly asking for trouble :)\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. If you are not the intended recipient, please inform the sender immediately and delete this email: you should not copy or use this e-mail for any purpose nor disclose its contents to any person.\n\n\n\n\n\n\n\n\n\nThanks for the input\n \npostgres_fdw seems to bring the entire table even if all I use in the join is just the id from the remote table.  I know it is possible to query for the missing ids and then perform the delete, but I wonder why all types of joins are so\n inefficient.\n \n \n   DELETE FROM tbl_local lcl\n   WHERE  NOT EXISTS (\n   SELECT id FROM tbl_link lnk\n   WHERE lnk.id = lcl.id   );\n \n \n\"Delete on tbl_local lcl  (cost=114.59..122.14 rows=3 width=730) (actual time=62153.636..62153.639 rows=0 loops=1)\"\n\"  ->  Hash Anti Join  (cost=114.59..122.14 rows=3 width=730) (actual time=62153.633..62153.636 rows=0 loops=1)\"\n\"        Hash Cond: (lcl.id = lnk.id)\"\n\"        ->  Seq Scan on tbl_local lcl  (cost=0.00..7.11 rows=111 width=14) (actual time=0.022..0.062 rows=111 loops=1)\"\n\"        ->  Hash  (cost=113.24..113.24 rows=108 width=732) (actual time=55984.489..55984.490 rows=112 loops=1)\"\n\"              Buckets: 1024 (originally 1024)  Batches: 32 (originally 1)  Memory Usage: 240024kB\"\n\"              ->  Foreign Scan on tbl_link lnk  (cost=100.00..113.24 rows=108 width=732) (actual time=48505.926..51893.668 rows=112 loops=1)\"\n\"Planning Time: 0.237 ms\"\n\"Execution Time: 62184.253 ms\"\n \nFrom: Vijaykumar Jain [mailto:[email protected]]\n\nSent: Thursday, January 6, 2022 2:53 PM\nTo: Avi Weinberg <[email protected]>\nCc: Kyotaro Horiguchi <[email protected]>; pgsql-performa. 
<[email protected]>\nSubject: Re: Same query 10000x More Time\n \n\n\n \n\n\nOn Thu, Jan 6, 2022, 3:50 PM Avi Weinberg <[email protected]> wrote:\n\n\nHi Kyotaro Horiguchi and Vijaykumar Jain,\n\nThanks for your quick reply!\n\nI understand that the fact the slow query has a join caused this problem.  However, why can't Postgres evaluate the table of the \"IN\" clause (select 140 as id union select 144  union select 148) and based on its size decide what is more optimal.\nPush the local table to the linked server to perform the join on the linked server\nPull the linked server table to local to perform the join on the local.\n\nIn my case the table size of the local is million times smaller than the table size of the remote.\n\n\n\n\n \n\n\nI understand when the optimizer makes a decision it uses stats to use the least expensive plan to get the result.\n\n\nI can reply but I am pretty sure making an analogy to a local setup of big and small table is not the same as small local table and a big remote table.\n\n\nI would leave it to the experts here unless you  are open to read the src for postgres_fdw extension.\n\n\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c\n\n\n \n\n\n \n\n\nThere must be a reason if that is beyond cost calculation as to why this happens.\n\n\nElse if this is all just cost based, you can try tweaking the cost params and see if you can get a better plan.\n\n\n \n\n\nFor exp, if you force parallel cost to 0 on the foreign server, it may use parallel workers and do some speed up, but given my exp, fighting optimizer is mostly asking for trouble :)\n\n\n\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. If you are not the intended recipient, please inform the sender immediately and delete this email: you\n should not copy or use this e-mail for any purpose nor disclose its contents to any person.", "msg_date": "Thu, 6 Jan 2022 14:31:46 +0000", "msg_from": "Avi Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Same query 10000x More Time" }, { "msg_contents": "On Thu, 6 Jan 2022 at 20:01, Avi Weinberg <[email protected]> wrote:\n\n> Thanks for the input\n>\n>\n>\n> postgres_fdw seems to bring the entire table even if all I use in the join\n> is just the id from the remote table. 
I know it is possible to query for\n> the missing ids and then perform the delete, but I wonder why all types of\n> joins are so inefficient.\n>\n>\n>\n\njust for fun, please do not do this.\nI tried out multiple options where we join a small local table to a huge\nremote table with multiple plan skip settings.\n\n\npostgres@db:~/playground$ psql\npsql (14beta1)\nType \"help\" for help.\n\npostgres=# \\c localdb\nYou are now connected to database \"localdb\" as user \"postgres\".\nlocaldb=# \\x\nExpanded display is on.\nlocaldb=# table pg_foreign_server;\n-[ RECORD 1\n]-----------------------------------------------------------------------------------------------\noid | 85462\nsrvname | remote_server\nsrvowner | 10\nsrvfdw | 85458\nsrvtype |\nsrvversion |\nsrvacl |\nsrvoptions |\n{dbname=remotedb,use_remote_estimate=true,fdw_startup_cost=0,fdw_tuple_cost=0,fetch_size=10000}\n\nlocaldb=# \\x\nExpanded display is off.\nlocaldb=# \\dt\n List of relations\n Schema | Name | Type | Owner\n--------+------+-------+----------\n public | t | table | postgres\n(1 row)\n\nlocaldb=# \\det remote_schema.remote_table;\n List of foreign tables\n Schema | Table | Server\n---------------+--------------+---------------\n remote_schema | remote_table | remote_server\n(1 row)\n\nlocaldb=# \\c remotedb;\nYou are now connected to database \"remotedb\" as user \"postgres\".\nremotedb=# \\dt\n List of relations\n Schema | Name | Type | Owner\n--------+--------------+-------+----------\n public | remote_table | table | postgres\n(1 row)\n\nremotedb=# select count(1) from remote_table;\n count\n--------\n 100000\n(1 row)\n\nremotedb=# \\c localdb\nYou are now connected to database \"localdb\" as user \"postgres\".\nlocaldb=# select count(1) from t;\n count\n-------\n 10\n(1 row)\n\n*# all the set options are forcing the optmizer to skip that plan route*\nlocaldb=# explain (analyze, verbose) select * from t join\nremote_schema.remote_table r on (t.t_id = r.t_id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.23..2817.97 rows=100000 width=16) (actual\ntime=5.814..63.310 rows=90000 loops=1)\n Output: t.t_id, t.t_col, r.rt_id, r.t_id\n Inner Unique: true\n Hash Cond: (r.t_id = t.t_id)\n -> Foreign Scan on remote_schema.remote_table r (cost=0.00..2443.00\nrows=100000 width=8) (actual time=5.797..47.329 rows=100000 loops=1)\n Output: r.rt_id, r.t_id\n *Remote SQL: SELECT rt_id, t_id FROM public.remote_table*\n -> Hash (cost=1.10..1.10 rows=10 width=8) (actual time=0.009..0.010\nrows=10 loops=1)\n Output: t.t_id, t.t_col\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on public.t (cost=0.00..1.10 rows=10 width=8)\n(actual time=0.005..0.006 rows=10 loops=1)\n Output: t.t_id, t.t_col\n Planning Time: 4.464 ms\n Execution Time: 65.995 ms\n(14 rows)\n\nlocaldb=# set enable_seqscan TO 0;\nSET\nlocaldb=# explain (analyze, verbose) select * from t join\nremote_schema.remote_table r on (t.t_id = r.t_id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=12.41..2829.16 rows=100000 width=16) (actual\ntime=5.380..61.028 rows=90000 loops=1)\n Output: t.t_id, t.t_col, r.rt_id, r.t_id\n Inner Unique: true\n Hash Cond: (r.t_id = t.t_id)\n -> Foreign Scan on remote_schema.remote_table r (cost=0.00..2443.00\nrows=100000 width=8) (actual time=5.362..45.625 rows=100000 loops=1)\n 
Output: r.rt_id, r.t_id\n *Remote SQL: SELECT rt_id, t_id FROM public.remote_table*\n -> Hash (cost=12.29..12.29 rows=10 width=8) (actual time=0.011..0.011\nrows=10 loops=1)\n Output: t.t_id, t.t_col\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Index Scan using t_pkey on public.t (cost=0.14..12.29 rows=10\nwidth=8) (actual time=0.005..0.008 rows=10 loops=1)\n Output: t.t_id, t.t_col\n Planning Time: 0.696 ms\n Execution Time: 63.666 ms\n(14 rows)\n\nlocaldb=# set enable_hashjoin TO 0;\nSET\nlocaldb=# explain (analyze, verbose) select * from t join\nremote_schema.remote_table r on (t.t_id = r.t_id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.15..4821.93 rows=100000 width=16) (actual\ntime=5.199..75.817 rows=90000 loops=1)\n Output: t.t_id, t.t_col, r.rt_id, r.t_id\n Inner Unique: true\n -> Foreign Scan on remote_schema.remote_table r (cost=0.00..2443.00\nrows=100000 width=8) (actual time=5.186..46.152 rows=100000 loops=1)\n Output: r.rt_id, r.t_id\n *Remote SQL: SELECT rt_id, t_id FROM public.remote_table*\n -> Result Cache (cost=0.15..0.16 rows=1 width=8) (actual\ntime=0.000..0.000 rows=1 loops=100000)\n Output: t.t_id, t.t_col\n Cache Key: r.t_id\n Hits: 99990 Misses: 10 Evictions: 0 Overflows: 0 Memory Usage:\n2kB\n -> Index Scan using t_pkey on public.t (cost=0.14..0.15 rows=1\nwidth=8) (actual time=0.001..0.001 rows=1 loops=10)\n Output: t.t_id, t.t_col\n Index Cond: (t.t_id = r.t_id)\n Planning Time: 0.692 ms\n Execution Time: 78.512 ms\n(15 rows)\n\nlocaldb=# set enable_resultcache TO 0;\nSET\nlocaldb=# explain (analyze, verbose) select * from t join\nremote_schema.remote_table r on (t.t_id = r.t_id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=479.10..5847.98 rows=100000 width=16) (actual\ntime=12.855..66.094 rows=90000 loops=1)\n Output: t.t_id, t.t_col, r.rt_id, r.t_id\n Inner Unique: true\n Merge Cond: (r.t_id = t.t_id)\n -> Foreign Scan on remote_schema.remote_table r (cost=0.29..4586.89\nrows=100000 width=8) (actual time=6.235..55.329 rows=100000 loops=1)\n Output: r.rt_id, r.t_id\n *Remote SQL: SELECT rt_id, t_id FROM public.remote_table ORDER BY\nt_id ASC NULLS LAST*\n -> Index Scan using t_pkey on public.t (cost=0.14..12.29 rows=10\nwidth=8) (actual time=0.006..0.024 rows=9 loops=1)\n Output: t.t_id, t.t_col\n Planning Time: 0.704 ms\n Execution Time: 68.724 ms\n(11 rows)\n\nlocaldb=# set enable_mergejoin TO 0;\nSET\nlocaldb=# explain (analyze, verbose) select * from t join\nremote_schema.remote_table r on (t.t_id = r.t_id);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=113.94..8830.28 rows=100000 width=16) (actual\ntime=11.576..89.465 rows=90000 loops=1)\n Output: t.t_id, t.t_col, r.rt_id, r.t_id\n -> Index Scan using t_pkey on public.t (cost=0.14..12.29 rows=10\nwidth=8) (actual time=0.009..0.034 rows=10 loops=1)\n Output: t.t_id, t.t_col\n -> Foreign Scan on remote_schema.remote_table r (cost=113.80..781.80\nrows=10000 width=8) (actual time=7.648..8.108 rows=9000 loops=10)\n Output: r.rt_id, r.t_id\n *Remote SQL: SELECT rt_id, t_id FROM public.remote_table WHERE\n(($1::integer = t_id))*\n Planning Time: 0.667 ms\n Execution Time: 92.131 ms\n(9 rows)\n\nfrom the logs for the 
last case: (it has open a new cursor everytime for\neach matching id) and is still the *slowest.*\n\n2022-01-06 22:10:48.665 IST [2318] LOG: execute <unnamed>: DECLARE c1\nCURSOR FOR\n SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))\n2022-01-06 22:10:48.665 IST [2318] DETAIL: parameters: $1 = '1'\n2022-01-06 22:10:48.665 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.679 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.679 IST [2318] LOG: statement: CLOSE c1\n2022-01-06 22:10:48.679 IST [2318] LOG: execute <unnamed>: DECLARE c1\nCURSOR FOR\n SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))\n2022-01-06 22:10:48.679 IST [2318] DETAIL: parameters: $1 = '2'\n2022-01-06 22:10:48.679 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.686 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.687 IST [2318] LOG: statement: CLOSE c1\n2022-01-06 22:10:48.687 IST [2318] LOG: execute <unnamed>: DECLARE c1\nCURSOR FOR\n SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))\n2022-01-06 22:10:48.687 IST [2318] DETAIL: parameters: $1 = '3'\n2022-01-06 22:10:48.687 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.698 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.698 IST [2318] LOG: statement: CLOSE c1\n2022-01-06 22:10:48.698 IST [2318] LOG: execute <unnamed>: DECLARE c1\nCURSOR FOR\n SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))\n2022-01-06 22:10:48.698 IST [2318] DETAIL: parameters: $1 = '4'\n2022-01-06 22:10:48.698 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.705 IST [2318] LOG: statement: FETCH 10000 FROM c1\n2022-01-06 22:10:48.705 IST [2318] LOG: statement: CLOSE c1\n\n\nso i think, i just trust the optimizer, or rewrite my so as to gather the\npredicate first locally and then pass them to remote, or use\nmaterialized views to maintain\na stale copy of the remote table on my local db etc.\n\nOn Thu, 6 Jan 2022 at 20:01, Avi Weinberg <[email protected]> wrote:\n\n\nThanks for the input\n \npostgres_fdw seems to bring the entire table even if all I use in the join is just the id from the remote table.  
I know it is possible to query for the missing ids and then perform the delete, but I wonder why all types of joins are so\n inefficient.\n just for fun, please do not do this.I tried out multiple options where we join a small local table to a huge remote table with multiple plan skip settings.postgres@db:~/playground$ psqlpsql (14beta1)Type \"help\" for help.postgres=# \\c localdbYou are now connected to database \"localdb\" as user \"postgres\".localdb=# \\xExpanded display is on.localdb=# table pg_foreign_server;-[ RECORD 1 ]-----------------------------------------------------------------------------------------------oid        | 85462srvname    | remote_serversrvowner   | 10srvfdw     | 85458srvtype    |srvversion |srvacl     |srvoptions | {dbname=remotedb,use_remote_estimate=true,fdw_startup_cost=0,fdw_tuple_cost=0,fetch_size=10000}localdb=# \\xExpanded display is off.localdb=# \\dt        List of relations Schema | Name | Type  |  Owner--------+------+-------+---------- public | t    | table | postgres(1 row)localdb=# \\det remote_schema.remote_table;            List of foreign tables    Schema     |    Table     |    Server---------------+--------------+--------------- remote_schema | remote_table | remote_server(1 row)localdb=# \\c remotedb;You are now connected to database \"remotedb\" as user \"postgres\".remotedb=# \\dt            List of relations Schema |     Name     | Type  |  Owner--------+--------------+-------+---------- public | remote_table | table | postgres(1 row)remotedb=# select count(1) from remote_table; count-------- 100000(1 row)remotedb=# \\c localdbYou are now connected to database \"localdb\" as user \"postgres\".localdb=# select count(1) from t; count-------    10(1 row)# all the set options are forcing the optmizer to skip that plan routelocaldb=# explain (analyze, verbose) select * from t join remote_schema.remote_table r on (t.t_id = r.t_id);                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=1.23..2817.97 rows=100000 width=16) (actual time=5.814..63.310 rows=90000 loops=1)   Output: t.t_id, t.t_col, r.rt_id, r.t_id   Inner Unique: true   Hash Cond: (r.t_id = t.t_id)   ->  Foreign Scan on remote_schema.remote_table r  (cost=0.00..2443.00 rows=100000 width=8) (actual time=5.797..47.329 rows=100000 loops=1)         Output: r.rt_id, r.t_id         Remote SQL: SELECT rt_id, t_id FROM public.remote_table   ->  Hash  (cost=1.10..1.10 rows=10 width=8) (actual time=0.009..0.010 rows=10 loops=1)         Output: t.t_id, t.t_col         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  Seq Scan on public.t  (cost=0.00..1.10 rows=10 width=8) (actual time=0.005..0.006 rows=10 loops=1)               Output: t.t_id, t.t_col Planning Time: 4.464 ms Execution Time: 65.995 ms(14 rows)localdb=# set enable_seqscan TO 0;SETlocaldb=# explain (analyze, verbose) select * from t join remote_schema.remote_table r on (t.t_id = r.t_id);                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=12.41..2829.16 rows=100000 width=16) (actual time=5.380..61.028 rows=90000 loops=1)   Output: t.t_id, t.t_col, r.rt_id, r.t_id   Inner Unique: true   Hash Cond: (r.t_id = t.t_id)   ->  Foreign Scan on remote_schema.remote_table r  
(cost=0.00..2443.00 rows=100000 width=8) (actual time=5.362..45.625 rows=100000 loops=1)         Output: r.rt_id, r.t_id         Remote SQL: SELECT rt_id, t_id FROM public.remote_table   ->  Hash  (cost=12.29..12.29 rows=10 width=8) (actual time=0.011..0.011 rows=10 loops=1)         Output: t.t_id, t.t_col         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  Index Scan using t_pkey on public.t  (cost=0.14..12.29 rows=10 width=8) (actual time=0.005..0.008 rows=10 loops=1)               Output: t.t_id, t.t_col Planning Time: 0.696 ms Execution Time: 63.666 ms(14 rows)localdb=# set enable_hashjoin TO 0;SETlocaldb=# explain (analyze, verbose) select * from t join remote_schema.remote_table r on (t.t_id = r.t_id);                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.15..4821.93 rows=100000 width=16) (actual time=5.199..75.817 rows=90000 loops=1)   Output: t.t_id, t.t_col, r.rt_id, r.t_id   Inner Unique: true   ->  Foreign Scan on remote_schema.remote_table r  (cost=0.00..2443.00 rows=100000 width=8) (actual time=5.186..46.152 rows=100000 loops=1)         Output: r.rt_id, r.t_id         Remote SQL: SELECT rt_id, t_id FROM public.remote_table   ->  Result Cache  (cost=0.15..0.16 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=100000)         Output: t.t_id, t.t_col         Cache Key: r.t_id         Hits: 99990  Misses: 10  Evictions: 0  Overflows: 0  Memory Usage: 2kB         ->  Index Scan using t_pkey on public.t  (cost=0.14..0.15 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=10)               Output: t.t_id, t.t_col               Index Cond: (t.t_id = r.t_id) Planning Time: 0.692 ms Execution Time: 78.512 ms(15 rows)localdb=# set enable_resultcache TO 0;SETlocaldb=# explain (analyze, verbose) select * from t join remote_schema.remote_table r on (t.t_id = r.t_id);                                                                  QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=479.10..5847.98 rows=100000 width=16) (actual time=12.855..66.094 rows=90000 loops=1)   Output: t.t_id, t.t_col, r.rt_id, r.t_id   Inner Unique: true   Merge Cond: (r.t_id = t.t_id)   ->  Foreign Scan on remote_schema.remote_table r  (cost=0.29..4586.89 rows=100000 width=8) (actual time=6.235..55.329 rows=100000 loops=1)         Output: r.rt_id, r.t_id         Remote SQL: SELECT rt_id, t_id FROM public.remote_table ORDER BY t_id ASC NULLS LAST   ->  Index Scan using t_pkey on public.t  (cost=0.14..12.29 rows=10 width=8) (actual time=0.006..0.024 rows=9 loops=1)         Output: t.t_id, t.t_col Planning Time: 0.704 ms Execution Time: 68.724 ms(11 rows)localdb=# set enable_mergejoin TO 0;SETlocaldb=# explain (analyze, verbose) select * from t join remote_schema.remote_table r on (t.t_id = r.t_id);                                                                 QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=113.94..8830.28 rows=100000 width=16) (actual time=11.576..89.465 rows=90000 loops=1)   Output: t.t_id, t.t_col, r.rt_id, r.t_id   ->  Index Scan using t_pkey on public.t  (cost=0.14..12.29 rows=10 width=8) (actual time=0.009..0.034 rows=10 loops=1)         Output: t.t_id, 
t.t_col   ->  Foreign Scan on remote_schema.remote_table r  (cost=113.80..781.80 rows=10000 width=8) (actual time=7.648..8.108 rows=9000 loops=10)         Output: r.rt_id, r.t_id         Remote SQL: SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id)) Planning Time: 0.667 ms Execution Time: 92.131 ms(9 rows)from the logs for the last case: (it has open a new cursor everytime for each matching id)  and is still the slowest.2022-01-06 22:10:48.665 IST [2318] LOG:  execute <unnamed>: DECLARE c1 CURSOR FOR  SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))2022-01-06 22:10:48.665 IST [2318] DETAIL:  parameters: $1 = '1'2022-01-06 22:10:48.665 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.679 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.679 IST [2318] LOG:  statement: CLOSE c12022-01-06 22:10:48.679 IST [2318] LOG:  execute <unnamed>: DECLARE c1 CURSOR FOR  SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))2022-01-06 22:10:48.679 IST [2318] DETAIL:  parameters: $1 = '2'2022-01-06 22:10:48.679 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.686 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.687 IST [2318] LOG:  statement: CLOSE c12022-01-06 22:10:48.687 IST [2318] LOG:  execute <unnamed>: DECLARE c1 CURSOR FOR  SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))2022-01-06 22:10:48.687 IST [2318] DETAIL:  parameters: $1 = '3'2022-01-06 22:10:48.687 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.698 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.698 IST [2318] LOG:  statement: CLOSE c12022-01-06 22:10:48.698 IST [2318] LOG:  execute <unnamed>: DECLARE c1 CURSOR FOR  SELECT rt_id, t_id FROM public.remote_table WHERE (($1::integer = t_id))2022-01-06 22:10:48.698 IST [2318] DETAIL:  parameters: $1 = '4'2022-01-06 22:10:48.698 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.705 IST [2318] LOG:  statement: FETCH 10000 FROM c12022-01-06 22:10:48.705 IST [2318] LOG:  statement: CLOSE c1so i think, i just trust the optimizer, or rewrite my so as to gather the predicate first locally and then pass them to remote, or use materialized views to maintaina stale copy of the remote table on my local db etc.", "msg_date": "Thu, 6 Jan 2022 22:20:13 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query 10000x More Time" } ]
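Following the closing advice in this thread (gather the predicate locally first, then hand the remote side a form it can ship), here is a minimal sketch. local_id_source stands in for whatever local query produces the ids (it is a placeholder, not a table from the thread), and the constant-array filter is expected to be pushed down much like the literal IN list in the fast plan:

  DO $$
  DECLARE
      ids int[];
  BEGIN
      SELECT array_agg(id) INTO ids FROM local_id_source;   -- step 1: run the subquery locally
      EXECUTE format(
          'CREATE TEMP TABLE local_1 AS
               SELECT lnk.* FROM tbl_link lnk
               WHERE lnk.id = ANY (%L::int[])', ids);        -- step 2: constant array, shippable filter
  END
  $$;

Checking the generated statement with EXPLAIN (VERBOSE) should show the ANY condition inside the Remote SQL line, so only the matching rows cross the wire instead of the whole 2.5 GB table; keeping a local materialized copy of the remote table, as also suggested above, is the other way around the problem.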
[ { "msg_contents": "Hello everyone,\n\nI am currently running queries with the same table structures in 2\ndifferent virtual machines and 2 different versions. and I get results like\nbelow.\n\n\nExecution Query:\n\nselect d.device_id from ats_devices d inner join ats_device_detays dd on\ndd.device_id=d.device_id;\n\nRESULTS:\n\npostgres v10\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.69..7398.76 rows=2325 width=8) (actual\ntime=0.023..5.877 rows=2325 loops=1)\n\n\n\n -> Index Only Scan using ats_device_detays_device_id_idx on\nats_device_detays det (cost=0.28..91.16 rows=2325 width=8) (actual\ntime=0.006..0.483 rows=2325\n Heap Fetches: 373\n -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1\nloops=2325)\n Index Cond: (device_id = det.device_id)\n Heap Fetches: 528\n Planning time: 0.180 ms\n Execution time: 6.006 ms\n(8 rows)\n\n###########################################################################################################################################################\n\npostgres v14\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.56..174.59 rows=2279 width=8) (actual\ntime=0.065..2.264 rows=2304 loops=1)\n Merge Cond: (d.device_id = det.device_id)\n -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304\nloops=1)\n Heap Fetches: 0\n -> Index Only Scan using ats_device_detays_pkey on ats_device_detays\ndet (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506\nrows=2304 loops=1)\n Heap Fetches: 0\n Planning Time: 0.666 ms\n Execution Time: 2.519 ms\n\nAs a result of that;\n\nAccording to the result og explain analyzer, Although the performance of\nthe machine on which Postgres v14 is installed is better than the\nperformance of the machine on which v10 is installed and their\nconfigurations are the same, in reality it seems to be the opposite. I\nwould appreciate it if you could let me know what could be the cause of\nthis and which parameters I should look?\n\nHello everyone,I am currently running queries with the same table structures in 2 different virtual machines and 2 different versions. 
and I get results like below.Execution Query:select d.device_id from ats_devices d inner join ats_device_detays dd on dd.device_id=d.device_id;RESULTS:postgres v10                                                                               QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.69..7398.76 rows=2325 width=8) (actual time=0.023..5.877 rows=2325 loops=1)      ->  Index Only Scan using ats_device_detays_device_id_idx on ats_device_detays det  (cost=0.28..91.16 rows=2325 width=8) (actual time=0.006..0.483 rows=2325         Heap Fetches: 373   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=2325)         Index Cond: (device_id = det.device_id)         Heap Fetches: 528 Planning time: 0.180 ms Execution time: 6.006 ms(8 rows)###########################################################################################################################################################postgres v14                                                                           QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=0.56..174.59 rows=2279 width=8) (actual time=0.065..2.264 rows=2304 loops=1)   Merge Cond: (d.device_id = det.device_id)   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304 loops=1)         Heap Fetches: 0   ->  Index Only Scan using ats_device_detays_pkey on ats_device_detays det  (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506 rows=2304 loops=1)         Heap Fetches: 0 Planning Time: 0.666 ms Execution Time: 2.519 msAs a result of that;According to the result og explain analyzer, Although the performance of the machine on which Postgres v14 is installed is better than the performance of the machine on which v10 is installed and their configurations are the same, in reality it seems to be the opposite. I would appreciate it if you could let me know what could be the cause of this and which parameters I should look?", "msg_date": "Tue, 11 Jan 2022 11:41:05 +0300", "msg_from": "=?UTF-8?Q?H=C3=BCseyin_Ellezer?= <[email protected]>", "msg_from_op": true, "msg_subject": "About Query Performaces Problem" }, { "msg_contents": "út 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]>\nnapsal:\n\n> Hello everyone,\n>\n> I am currently running queries with the same table structures in 2\n> different virtual machines and 2 different versions. 
and I get results like\n> below.\n>\n>\n> Execution Query:\n>\n> select d.device_id from ats_devices d inner join ats_device_detays dd on\n> dd.device_id=d.device_id;\n>\n> RESULTS:\n>\n> postgres v10\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.69..7398.76 rows=2325 width=8) (actual\n> time=0.023..5.877 rows=2325 loops=1)\n>\n>\n>\n> -> Index Only Scan using ats_device_detays_device_id_idx on\n> ats_device_detays det (cost=0.28..91.16 rows=2325 width=8) (actual\n> time=0.006..0.483 rows=2325\n> Heap Fetches: 373\n> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n> (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1\n> loops=2325)\n> Index Cond: (device_id = det.device_id)\n> Heap Fetches: 528\n> Planning time: 0.180 ms\n> Execution time: 6.006 ms\n> (8 rows)\n>\n>\n> ###########################################################################################################################################################\n>\n> postgres v14\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=0.56..174.59 rows=2279 width=8) (actual\n> time=0.065..2.264 rows=2304 loops=1)\n> Merge Cond: (d.device_id = det.device_id)\n> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n> (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304\n> loops=1)\n> Heap Fetches: 0\n> -> Index Only Scan using ats_device_detays_pkey on ats_device_detays\n> det (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506\n> rows=2304 loops=1)\n> Heap Fetches: 0\n> Planning Time: 0.666 ms\n> Execution Time: 2.519 ms\n>\n> As a result of that;\n>\n> According to the result og explain analyzer, Although the performance of\n> the machine on which Postgres v14 is installed is better than the\n> performance of the machine on which v10 is installed and their\n> configurations are the same, in reality it seems to be the opposite. I\n> would appreciate it if you could let me know what could be the cause of\n> this and which parameters I should look?\n>\n\n???\n\nPostgreSQL 10 - execution time 6 ms\nPostgreSQL 14 - execution time 2.5 ms\n\nPostgres 14 is about 2x faster\n\nRegards\n\nPavel\n\nút 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]> napsal:Hello everyone,I am currently running queries with the same table structures in 2 different virtual machines and 2 different versions. 
and I get results like below.Execution Query:select d.device_id from ats_devices d inner join ats_device_detays dd on dd.device_id=d.device_id;RESULTS:postgres v10                                                                               QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.69..7398.76 rows=2325 width=8) (actual time=0.023..5.877 rows=2325 loops=1)      ->  Index Only Scan using ats_device_detays_device_id_idx on ats_device_detays det  (cost=0.28..91.16 rows=2325 width=8) (actual time=0.006..0.483 rows=2325         Heap Fetches: 373   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=2325)         Index Cond: (device_id = det.device_id)         Heap Fetches: 528 Planning time: 0.180 ms Execution time: 6.006 ms(8 rows)###########################################################################################################################################################postgres v14                                                                           QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=0.56..174.59 rows=2279 width=8) (actual time=0.065..2.264 rows=2304 loops=1)   Merge Cond: (d.device_id = det.device_id)   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304 loops=1)         Heap Fetches: 0   ->  Index Only Scan using ats_device_detays_pkey on ats_device_detays det  (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506 rows=2304 loops=1)         Heap Fetches: 0 Planning Time: 0.666 ms Execution Time: 2.519 msAs a result of that;According to the result og explain analyzer, Although the performance of the machine on which Postgres v14 is installed is better than the performance of the machine on which v10 is installed and their configurations are the same, in reality it seems to be the opposite. I would appreciate it if you could let me know what could be the cause of this and which parameters I should look????PostgreSQL 10 - execution time 6 msPostgreSQL 14 - execution time 2.5 msPostgres 14 is about 2x fasterRegardsPavel", "msg_date": "Tue, 11 Jan 2022 14:31:07 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About Query Performaces Problem" }, { "msg_contents": "I mean, despite the execution times shown here PostgreSQL 10 is working\nfaster compared to PostgreSQL 14. Is this speed performance about the\ncached or disk data? How can we see where the data comes from?\n\nBest regards\n\nPavel Stehule <[email protected]>, 11 Oca 2022 Sal, 16:31 tarihinde\nşunu yazdı:\n\n>\n>\n> út 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]>\n> napsal:\n>\n>> Hello everyone,\n>>\n>> I am currently running queries with the same table structures in 2\n>> different virtual machines and 2 different versions. 
and I get results like\n>> below.\n>>\n>>\n>> Execution Query:\n>>\n>> select d.device_id from ats_devices d inner join ats_device_detays dd on\n>> dd.device_id=d.device_id;\n>>\n>> RESULTS:\n>>\n>> postgres v10\n>>\n>> QUERY PLAN\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop (cost=0.69..7398.76 rows=2325 width=8) (actual\n>> time=0.023..5.877 rows=2325 loops=1)\n>>\n>>\n>>\n>> -> Index Only Scan using ats_device_detays_device_id_idx on\n>> ats_device_detays det (cost=0.28..91.16 rows=2325 width=8) (actual\n>> time=0.006..0.483 rows=2325\n>> Heap Fetches: 373\n>> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n>> (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1\n>> loops=2325)\n>> Index Cond: (device_id = det.device_id)\n>> Heap Fetches: 528\n>> Planning time: 0.180 ms\n>> Execution time: 6.006 ms\n>> (8 rows)\n>>\n>>\n>> ###########################################################################################################################################################\n>>\n>> postgres v14\n>>\n>> QUERY PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Merge Join (cost=0.56..174.59 rows=2279 width=8) (actual\n>> time=0.065..2.264 rows=2304 loops=1)\n>> Merge Cond: (d.device_id = det.device_id)\n>> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n>> (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304\n>> loops=1)\n>> Heap Fetches: 0\n>> -> Index Only Scan using ats_device_detays_pkey on ats_device_detays\n>> det (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506\n>> rows=2304 loops=1)\n>> Heap Fetches: 0\n>> Planning Time: 0.666 ms\n>> Execution Time: 2.519 ms\n>>\n>> As a result of that;\n>>\n>> According to the result og explain analyzer, Although the performance of\n>> the machine on which Postgres v14 is installed is better than the\n>> performance of the machine on which v10 is installed and their\n>> configurations are the same, in reality it seems to be the opposite. I\n>> would appreciate it if you could let me know what could be the cause of\n>> this and which parameters I should look?\n>>\n>\n> ???\n>\n> PostgreSQL 10 - execution time 6 ms\n> PostgreSQL 14 - execution time 2.5 ms\n>\n> Postgres 14 is about 2x faster\n>\n> Regards\n>\n> Pavel\n>\n\nI mean, despite the execution times shown here PostgreSQL 10 is working faster compared to PostgreSQL 14. Is this speed performance about the cached or disk data? How can we see where the data comes from?Best regardsPavel Stehule <[email protected]>, 11 Oca 2022 Sal, 16:31 tarihinde şunu yazdı:út 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]> napsal:Hello everyone,I am currently running queries with the same table structures in 2 different virtual machines and 2 different versions. 
and I get results like below.Execution Query:select d.device_id from ats_devices d inner join ats_device_detays dd on dd.device_id=d.device_id;RESULTS:postgres v10                                                                               QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.69..7398.76 rows=2325 width=8) (actual time=0.023..5.877 rows=2325 loops=1)      ->  Index Only Scan using ats_device_detays_device_id_idx on ats_device_detays det  (cost=0.28..91.16 rows=2325 width=8) (actual time=0.006..0.483 rows=2325         Heap Fetches: 373   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=2325)         Index Cond: (device_id = det.device_id)         Heap Fetches: 528 Planning time: 0.180 ms Execution time: 6.006 ms(8 rows)###########################################################################################################################################################postgres v14                                                                           QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=0.56..174.59 rows=2279 width=8) (actual time=0.065..2.264 rows=2304 loops=1)   Merge Cond: (d.device_id = det.device_id)   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304 loops=1)         Heap Fetches: 0   ->  Index Only Scan using ats_device_detays_pkey on ats_device_detays det  (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506 rows=2304 loops=1)         Heap Fetches: 0 Planning Time: 0.666 ms Execution Time: 2.519 msAs a result of that;According to the result og explain analyzer, Although the performance of the machine on which Postgres v14 is installed is better than the performance of the machine on which v10 is installed and their configurations are the same, in reality it seems to be the opposite. I would appreciate it if you could let me know what could be the cause of this and which parameters I should look????PostgreSQL 10 - execution time 6 msPostgreSQL 14 - execution time 2.5 msPostgres 14 is about 2x fasterRegardsPavel", "msg_date": "Wed, 12 Jan 2022 11:23:33 +0300", "msg_from": "=?UTF-8?Q?H=C3=BCseyin_Ellezer?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About Query Performaces Problem" }, { "msg_contents": "st 12. 1. 2022 v 9:23 odesílatel Hüseyin Ellezer <[email protected]>\nnapsal:\n\n> I mean, despite the execution times shown here PostgreSQL 10 is working\n> faster compared to PostgreSQL 14. Is this speed performance about the\n> cached or disk data? How can we see where the data comes from?\n>\n\nuse EXPLAIN (ANALYZE, BUFFERS) SELECT ...\n\nhttps://www.postgresql.org/docs/current/sql-explain.html\n\nRegards\n\nPavel\n\n\n> Best regards\n>\n> Pavel Stehule <[email protected]>, 11 Oca 2022 Sal, 16:31 tarihinde\n> şunu yazdı:\n>\n>>\n>>\n>> út 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]>\n>> napsal:\n>>\n>>> Hello everyone,\n>>>\n>>> I am currently running queries with the same table structures in 2\n>>> different virtual machines and 2 different versions. 
and I get results like\n>>> below.\n>>>\n>>>\n>>> Execution Query:\n>>>\n>>> select d.device_id from ats_devices d inner join ats_device_detays dd on\n>>> dd.device_id=d.device_id;\n>>>\n>>> RESULTS:\n>>>\n>>> postgres v10\n>>>\n>>> QUERY PLAN\n>>>\n>>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Nested Loop (cost=0.69..7398.76 rows=2325 width=8) (actual\n>>> time=0.023..5.877 rows=2325 loops=1)\n>>>\n>>>\n>>>\n>>> -> Index Only Scan using ats_device_detays_device_id_idx on\n>>> ats_device_detays det (cost=0.28..91.16 rows=2325 width=8) (actual\n>>> time=0.006..0.483 rows=2325\n>>> Heap Fetches: 373\n>>> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n>>> (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1\n>>> loops=2325)\n>>> Index Cond: (device_id = det.device_id)\n>>> Heap Fetches: 528\n>>> Planning time: 0.180 ms\n>>> Execution time: 6.006 ms\n>>> (8 rows)\n>>>\n>>>\n>>> ###########################################################################################################################################################\n>>>\n>>> postgres v14\n>>>\n>>> QUERY PLAN\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Merge Join (cost=0.56..174.59 rows=2279 width=8) (actual\n>>> time=0.065..2.264 rows=2304 loops=1)\n>>> Merge Cond: (d.device_id = det.device_id)\n>>> -> Index Only Scan using ats_devices_pkey1 on ats_devices d\n>>> (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304\n>>> loops=1)\n>>> Heap Fetches: 0\n>>> -> Index Only Scan using ats_device_detays_pkey on ats_device_detays\n>>> det (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506\n>>> rows=2304 loops=1)\n>>> Heap Fetches: 0\n>>> Planning Time: 0.666 ms\n>>> Execution Time: 2.519 ms\n>>>\n>>> As a result of that;\n>>>\n>>> According to the result og explain analyzer, Although the performance of\n>>> the machine on which Postgres v14 is installed is better than the\n>>> performance of the machine on which v10 is installed and their\n>>> configurations are the same, in reality it seems to be the opposite. I\n>>> would appreciate it if you could let me know what could be the cause of\n>>> this and which parameters I should look?\n>>>\n>>\n>> ???\n>>\n>> PostgreSQL 10 - execution time 6 ms\n>> PostgreSQL 14 - execution time 2.5 ms\n>>\n>> Postgres 14 is about 2x faster\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n\nst 12. 1. 2022 v 9:23 odesílatel Hüseyin Ellezer <[email protected]> napsal:I mean, despite the execution times shown here PostgreSQL 10 is working faster compared to PostgreSQL 14. Is this speed performance about the cached or disk data? How can we see where the data comes from?use EXPLAIN (ANALYZE, BUFFERS) SELECT ...https://www.postgresql.org/docs/current/sql-explain.htmlRegardsPavel Best regardsPavel Stehule <[email protected]>, 11 Oca 2022 Sal, 16:31 tarihinde şunu yazdı:út 11. 1. 2022 v 9:41 odesílatel Hüseyin Ellezer <[email protected]> napsal:Hello everyone,I am currently running queries with the same table structures in 2 different virtual machines and 2 different versions. 
and I get results like below.Execution Query:select d.device_id from ats_devices d inner join ats_device_detays dd on dd.device_id=d.device_id;RESULTS:postgres v10                                                                               QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.69..7398.76 rows=2325 width=8) (actual time=0.023..5.877 rows=2325 loops=1)      ->  Index Only Scan using ats_device_detays_device_id_idx on ats_device_detays det  (cost=0.28..91.16 rows=2325 width=8) (actual time=0.006..0.483 rows=2325         Heap Fetches: 373   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.41..3.14 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=2325)         Index Cond: (device_id = det.device_id)         Heap Fetches: 528 Planning time: 0.180 ms Execution time: 6.006 ms(8 rows)###########################################################################################################################################################postgres v14                                                                           QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=0.56..174.59 rows=2279 width=8) (actual time=0.065..2.264 rows=2304 loops=1)   Merge Cond: (d.device_id = det.device_id)   ->  Index Only Scan using ats_devices_pkey1 on ats_devices d  (cost=0.28..70.18 rows=2260 width=8) (actual time=0.033..0.603 rows=2304 loops=1)         Heap Fetches: 0   ->  Index Only Scan using ats_device_detays_pkey on ats_device_detays det  (cost=0.28..70.47 rows=2279 width=8) (actual time=0.024..0.506 rows=2304 loops=1)         Heap Fetches: 0 Planning Time: 0.666 ms Execution Time: 2.519 msAs a result of that;According to the result og explain analyzer, Although the performance of the machine on which Postgres v14 is installed is better than the performance of the machine on which v10 is installed and their configurations are the same, in reality it seems to be the opposite. I would appreciate it if you could let me know what could be the cause of this and which parameters I should look????PostgreSQL 10 - execution time 6 msPostgreSQL 14 - execution time 2.5 msPostgres 14 is about 2x fasterRegardsPavel", "msg_date": "Wed, 12 Jan 2022 09:31:20 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About Query Performaces Problem" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 12, 2022 at 11:23:33AM +0300, H�seyin Ellezer wrote:\n> I mean, despite the execution times shown here PostgreSQL 10 is working\n> faster compared to PostgreSQL 14.\n\nPlease don't top-post here, see https://wiki.postgresql.org/wiki/Mailing_Lists\nfor more details.\n\n> Is this speed performance about the\n> cached or disk data? How can we see where the data comes from?\n\nWe have no way to know unless you show us some data about queries actually\nbeing slower on your new environment. It could even be something else, like\nthe new server having slower network.\n\nYou should refer to https://wiki.postgresql.org/wiki/Slow_Query_Questions to\nprovide more details, especially the EXPLAIN (ANALYZE, BUFFERS) section which\nwill show how much of the data comes from postgres internal cache. 
There's\nunfortunately no option to distinguish OS cache access from disk access using\nEXPLAIN.\n\n\n", "msg_date": "Wed, 12 Jan 2022 16:35:23 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About Query Performaces Problem" } ]
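A minimal sketch of the EXPLAIN (ANALYZE, BUFFERS) suggestion from the thread above, reusing the exact join being compared between v10 and v14 (table names are the ones from the report):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT d.device_id
    FROM ats_devices d
    INNER JOIN ats_device_detays dd ON dd.device_id = d.device_id;

In the output, each node's "Buffers: shared hit=... read=..." line shows how many pages were already in shared_buffers versus had to be read in; as noted in the reply, reads served by the OS page cache and real disk reads are not distinguished.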
[ { "msg_contents": "Hi Team ,\n\nWe are getting below error while running pgbench -\n\ncould not connect to server: Cannot assign requested address\nIs the server running on host \"\ntushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com\"\n(100.10.11.4) and accepting\nTCP/IP connections on port 5432?\nclient 75 aborted while establishing connection\nconnection to database \"pgbenchtest\" failed:\ncould not connect to server: Cannot assign requested address\nIs the server running on host \"i\"\ntushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com\"\n(100.10.11.4) and accepting\nTCP/IP connections on port 5432?\nclient 38 aborted while establishing connection\nconnection to database \"pgbenchtest\" failed:\ncould not connect to server: Cannot assign requested address\n\n*Pgbench command - *\n\n/usr/pgsql-11/bin/pgbench -U postgres -h\ntushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com -p\n5432 -c 100 -C -j 5 -T 21600 pgbenchtest -s 10 -r -P 5 -f\n/var/lib/pgsql/pgbench/test_insert.sql\n\n\nCan you please help us with the expected cause and solution .\n\nNote : we are seeing this error when we are using the *\" -C \" *option only\n* .*\n\n\n-\nThanks & Regards,\n\nTushar K Takate .\nMob-No : +91-860-030-2404\nLinkedIn : Tushar Takate <https://in.linkedin.com/in/tushar-takate-93660867>\n\n-- \n-\nThanks & Regards,\n\nTushar K Takate .\nMob-No : +91-860-030-2404\nLinkedIn : Tushar Takate <https://in.linkedin.com/in/tushar-takate-93660867>\nMy-Blogs : Tushar Blogspot <http://tushar-postgresql.blogspot.in/>\n\nHi Team ,We are getting below error while running pgbench - could not connect to server: Cannot assign requested address\tIs the server running on host \"tushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com\" (100.10.11.4) and accepting\tTCP/IP connections on port 5432?client 75 aborted while establishing connectionconnection to database \"pgbenchtest\" failed:could not connect to server: Cannot assign requested address\tIs the server running on host \"i\"tushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com\" (100.10.11.4) and accepting\tTCP/IP connections on port 5432?client 38 aborted while establishing connectionconnection to database \"pgbenchtest\" failed:could not connect to server: Cannot assign requested addressPgbench command - /usr/pgsql-11/bin/pgbench -U postgres -h tushartest-ist-pgec2-a7-prod-lb-4637832643276542.elb.us-east-1.amazonaws.com -p 5432 -c 100 -C -j 5 -T 21600 pgbenchtest -s 10 -r -P 5 -f /var/lib/pgsql/pgbench/test_insert.sqlCan you please help us with the expected cause and solution .Note : we are seeing this error when we are using the \" -C \" option only .-Thanks & Regards,Tushar K Takate .Mob-No : +91-860-030-2404LinkedIn : Tushar Takate\n-- -Thanks & Regards,Tushar K Takate .Mob-No : +91-860-030-2404LinkedIn : Tushar TakateMy-Blogs : Tushar Blogspot", "msg_date": "Thu, 13 Jan 2022 07:17:49 +0530", "msg_from": "T T <[email protected]>", "msg_from_op": true, "msg_subject": "PGBench connection issue with -C option only" } ]
[ { "msg_contents": "Postgres version is 13.5, platform is Oracle Linux 8.5, x86_64. Here is \nthe problem:\n\nmgogala=# create table test1(col1 integer,col2 varchar(10));\nCREATE TABLE\nmgogala=# alter table test1 add constraint test1_uq unique(col1,col2);\nALTER TABLE\nmgogala=# insert into test1 values(1,null);\nINSERT 0 1\nmgogala=# insert into test1 values(1,null);\nINSERT 0 1\nmgogala=# select * from test1;\n  col1 | col2\n------+------\n     1 |\n     1 |\n(2 rows)\n\nSo, my unique constraint doesn't work if one of the columns is null. \nBruce Momjian to the rescue: \nhttps://blog.toadworld.com/2017/07/12/allowing-only-one-null\n\nLet's see what happens:\n\nmgogala=# truncate table test1;\nTRUNCATE TABLE\nmgogala=# alter table test1 drop constraint test1_uq;\nALTER TABLE\nmgogala=# create unique index test1_uq on test1(col1,(col2 is null)) \nwhere col2 is null;\nCREATE INDEX\nmgogala=# insert into test1 values(1,null);\nINSERT 0 1\nmgogala=# insert into test1 values(1,null);\nERROR:  duplicate key value violates unique constraint \"test1_uq\"\nDETAIL:  Key (col1, (col2 IS NULL))=(1, t) already exists.\n\n\nSo, this allows only a single NULL value, just what I wanted. However, \nthere is a minor issue: this doesn't work for the general case:\n\nmgogala=# insert into test1 values(1,'test1');\nINSERT 0 1\nmgogala=# insert into test1 values(1,'test1');\nINSERT 0 1\nmgogala=# select * from test1;\n  col1 | col2\n------+-------\n     1 |\n     1 | test1\n     1 | test1\n(3 rows)\n\nI can insert the same row twice, which defeats the purpose. So, let's \nmake the 3d modification:\n\nmgogala=# truncate table test1;\nTRUNCATE TABLE\nmgogala=# drop index test1_uq;\nDROP INDEX\nmgogala=# create unique index test1_uq on test1(col1,coalesce(col2,'*** \nEMPTY ***'));\n\nUsing \"coalesce\" enforces the constraint just the way I need:\n\nmgogala=# insert into test1 values(1,null);\nINSERT 0 1\nmgogala=# insert into test1 values(1,null);\nERROR:  duplicate key value violates unique constraint \"test1_uq\"\nDETAIL:  Key (col1, COALESCE(col2, '*** EMPTY ***'::character \nvarying))=(1, *** EMPTY ***) already exists.\nmgogala=# insert into test1 values(1,'test1');\nINSERT 0 1\nmgogala=# insert into test1 values(1,'test1');\nERROR:  duplicate key value violates unique constraint \"test1_uq\"\nDETAIL:  Key (col1, COALESCE(col2, '*** EMPTY ***'::character \nvarying))=(1, test1) already exists.\nmgogala=#\n\nNow comes the greatest mystery of them all:\n\nexplain (analyze,verbose) select * from test1 where col1=1 and col2='test1';\n                                                    QUERY PLAN\n\n--------------------------------------------------------------------------------\n---------------------------------\n  Bitmap Heap Scan on mgogala.test1  (cost=1.70..7.52 rows=1 width=42) \n(actual ti\nme=0.023..0.024 rows=1 loops=1)\n    Output: col1, col2\n    Recheck Cond: (test1.col1 = 1)\n    Filter: ((test1.col2)::text = 'test1'::text)\n    Rows Removed by Filter: 1\n    Heap Blocks: exact=1\n    ->  Bitmap Index Scan on test1_uq  (cost=0.00..1.70 rows=6 width=0) \n(actual t\nime=0.015..0.016 rows=2 loops=1)\n          Index Cond: (test1.col1 = 1)\n  Planning Time: 1.184 ms\n  Execution Time: 0.407 ms\n(10 rows)\n\nHow come that the index is used for search without the \"coalesce\" \nfunction? The unique index is a function based index and, in theory, it \nshouldn't be usable for searches without the function. I don't \nunderstand why is this working. 
I am porting application from Oracle to \nPostgres and Oracle behaves like this:\n\nSQLcl: Release 21.3 Production on Tue Jan 18 11:39:43 2022\n\nCopyright (c) 1982, 2022, Oracle.  All rights reserved.\n\nConnected to:\nOracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production\nVersion 19.13.0.0.0\n\nElapsed: 00:00:00.001\nSQL> create table test1(col1 integer,col2 varchar2(10));\n\nTable TEST1 created.\n\nElapsed: 00:00:00.050\n\nSQL> alter table test1 add constraint test1_uq unique(col1,col2);\n\nTable TEST1 altered.\n\nElapsed: 00:00:00.139\nSQL> insert into test1 values(1,null);\n\n1 row inserted.\n\nElapsed: 00:00:00.026\nSQL> insert into test1 values(1,null);\n\nError starting at line : 1 in command -\ninsert into test1 values(1,null)\nError report -\nORA-00001: unique constraint (SCOTT.TEST1_UQ) violated\n\nElapsed: 00:00:00.033\n\nOracle is rejecting the same row twice, regardless of whether it \ncontains NULL values or not. As in  Postgres, the resulting index can be \nused for searches. However, Oracle index is not a function-based index \nbecause it doesn't contain the coalesce function.\n\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n", "msg_date": "Tue, 18 Jan 2022 12:13:27 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Unique constraint blues" }, { "msg_contents": "On Tue, Jan 18, 2022 at 10:13 AM Mladen Gogala <[email protected]>\nwrote:\n\n>\n> mgogala=# create unique index test1_uq on test1(col1,coalesce(col2,'***\n> EMPTY ***'));\n>\n> -> Bitmap Index Scan on test1_uq (cost=0.00..1.70 rows=6 width=0)\n\n .......\n> Index Cond: (test1.col1 = 1)\n>\n> How come that the index is used for search without the \"coalesce\"\n> function?\n\n\nOnly the second column is an expression. The first (leading) column is\nperfectly usable all by itself. It is less efficient, hence the parent\nnode's:\n\n Recheck Cond: (test1.col1 = 1)\n Filter: ((test1.col2)::text = 'test1'::text)\n\nbut usable.\n\nIf you are willing to create partial unique indexes you probably should\njust create two of them. One where col2 is null and one where it isn't.\n\nIf the coalesce version is acceptable you should consider declaring the\ncolumn not null and put the sentinel value directly into the record.\n\nDavid J.\n\nOn Tue, Jan 18, 2022 at 10:13 AM Mladen Gogala <[email protected]> wrote:\nmgogala=# create unique index test1_uq on test1(col1,coalesce(col2,'*** \nEMPTY ***'));\n    ->  Bitmap Index Scan on test1_uq  (cost=0.00..1.70 rows=6 width=0)           .......               Index Cond: (test1.col1 = 1)\n\nHow come that the index is used for search without the \"coalesce\" \nfunction?Only the second column is an expression.  The first (leading) column is perfectly usable all by itself.  It is less efficient, hence the parent node's:    Recheck Cond: (test1.col1 = 1)    Filter: ((test1.col2)::text = 'test1'::text)but usable.If you are willing to create partial unique indexes you probably should just create two of them.  One where col2 is null and one where it isn't.If the coalesce version is acceptable you should consider declaring the column not null and put the sentinel value directly into the record.David J.", "msg_date": "Tue, 18 Jan 2022 10:29:10 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique constraint blues" } ]
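A short sketch of the two-partial-unique-index alternative suggested in the reply, written against the same test1 table from the thread; the index names here are made up for illustration:

    -- at most one row per col1 value with col2 NULL
    CREATE UNIQUE INDEX test1_uq_col2_null ON test1 (col1) WHERE col2 IS NULL;
    -- ordinary uniqueness of (col1, col2) for non-NULL col2
    CREATE UNIQUE INDEX test1_uq_col2_notnull ON test1 (col1, col2) WHERE col2 IS NOT NULL;

Together these reject both a duplicate (1, NULL) and a duplicate (1, 'test1') insert, without the coalesce() expression and its sentinel value.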
[ { "msg_contents": "Hello:\n\nI ran 2 same queries on PostgreSQL 12.8 machine running in AWS RDS, the first time I ran the query\n\nthe query plan was:\n\nGroupAggregate  (cost=455652.07..455664.99 rows=340 width=16) (actual time=124047.119..124048.777 rows=294 loops=1)\n   Group Key: dvh.history_date\n   ->  Sort  (cost=455652.07..455655.24 rows=1269 width=8) (actual time=124046.989..124047.857 rows=7780 loops=1)\n         Sort Key: dvh.history_date\n         Sort Method: quicksort  Memory: 557kB\n         ->  Nested Loop  (cost=13708.92..455586.66 rows=1269 width=8) (actual time=12.228..124039.597 rows=7780 loops=1)\n               ->  HashAggregate  (cost=13708.35..13746.45 rows=3810 width=4) (actual time=12.210..21.629 rows=6119 loops=1)\n                     Group Key: pdv.id\n                     ->  Nested Loop  (cost=0.84..13698.83 rows=3810 width=4) (actual time=0.038..10.650 rows=6119 loops=1)\n                           ->  Index Scan using idx_pc_device_acct_num on pc_devices  (cost=0.42..565.32 rows=506 width=4) (actual time=0.014..0.193 rows=396 loops=1)\n                                 Index Cond: ((account_number)::text = 'AB32823833'::text)\n                           ->  Index Scan using pdv_uindex on pdv  (cost=0.42..25.73 rows=23 width=8) (actual time=0.007..0.024 rows=15 loops=396)\n                                 Index Cond: (pc_device_id = pc_devices.id)\n               ->  Index Scan using idx_pc_dvh_dvh_id on pdv_history dvh  (cost=0.57..115.96 rows=1 width=12) (actual time=0.342..20.265 rows=1 loops=6119)\n                     Index Cond: (pdv_id = pdv.id)\n                     Filter: (status_changed AND (status_id = 1))\n                     Rows Removed by Filter: 187\nPlanning Time: 1.050 ms\nExecution Time: 124048.850 ms\n\n\nAfter that, I reran the same query again. 
The plan is basically the same:\n\n\nGroupAggregate  (cost=455652.07..455664.99 rows=340 width=16) (actual time=12180.624..12182.286 rows=294 loops=1)\n   Group Key: dvh.history_date\n   ->  Sort  (cost=455652.07..455655.24 rows=1269 width=8) (actual time=12180.493..12181.363 rows=7780 loops=1)\n         Sort Key: dvh.history_date\n         Sort Method: quicksort  Memory: 557kB\n         ->  Nested Loop  (cost=13708.92..455586.66 rows=1269 width=8) (actual time=1709.341..12177.249 rows=7780 loops=1)\n               ->  HashAggregate  (cost=13708.35..13746.45 rows=3810 width=4) (actual time=1709.319..1713.171 rows=6119 loops=1)\n                     Group Key: pdv.id\n                     ->  Nested Loop  (cost=0.84..13698.83 rows=3810 width=4) (actual time=0.379..1706.606 rows=6119 loops=1)\n                           ->  Index Scan using idx_pc_device_acct_num on pc_devices  (cost=0.42..565.32 rows=506 width=4) (actual time=0.013..0.279 rows=396 loops=1)\n                                 Index Cond: ((account_number)::text = 'AB32823833'::text)\n                           ->  Index Scan using pdv_uindex on pdv  (cost=0.42..25.73 rows=23 width=8) (actual time=0.289..4.306 rows=15 loops=396)\n                                 Index Cond: (pc_device_id = pc_devices.id)\n               ->  Index Scan using idx_pc_dvh_dvh_id on pdv_history dvh  (cost=0.57..115.96 rows=1 width=12) (actual time=0.063..1.709 rows=1 loops=6119)\n                     Index Cond: (pdv_id = pdv.id)\n                     Filter: (status_changed AND (status_id = 1))\n                     Rows Removed by Filter: 187\nPlanning Time: 1.262 ms\nExecution Time: 12182.361 ms\n\n\nBut the gap in the execution time between the two same queries is quite huge : 2 minutes vs 12 seconds.\n\nI noticed that different is actually in Nested Loop join. One is taking 2 minutes, other is taking 12 seconds. I find this puzzling as I assume the nested loop should be done in memory.\n\nThe disk is gp2 SDD so I'm even more baffled by this. What could be the factors that affect the speed of nested loop. I notice for that both loops the rows is 7780 and loops is 1. I don't think those are big numbers\n\nIt was only after the running the 2 queries that I realize I could do EXPLAIN (ANALYZE, BUFFERS), but I couldn't reproduce the slowness.\n\nBelow are other information that might be relevant:\n\nThe database has been vacuum analyzed before running the queries.\n\nPlatform : AWS RDS\nPG version : 12.8\neffective_cache_size : 7.9 GB (7935800kB)\nshared_buffers : 3.9 GB (3967896kB)\nwork_mem : 64MB\nrandom_page_cost : 1.1\nInstance type : db.m6g.xlarge (4 vCPUs / 32 GB RAM)\nDatabase is idle. I'm the only one running the query\nversion : PostgreSQL 12.8 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-6), 64-bit\n\nThanks,\nLudwig\n\n\n", "msg_date": "Wed, 19 Jan 2022 13:52:55 +0000 (UTC)", "msg_from": "Ludwig Isaac Lim <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 12.8 Same Query Same Execution Plan Different Time" }, { "msg_contents": "On Wed, Jan 19, 2022 at 7:59 AM Ludwig Isaac Lim <[email protected]> wrote:\n\n>\n> I noticed that different is actually in Nested Loop join. One is taking 2\n> minutes, other is taking 12 seconds. I find this puzzling as I assume the\n> nested loop should be done in memory.\n>\n\nEverything is done in memory, but the data has to get there first (hence\nBUFFERS as you figured out below).\n\n\n> The disk is gp2 SDD so I'm even more baffled by this. 
What could be the\n> factors that affect the speed of nested loop. I notice for that both loops\n> the rows is 7780 and loops is 1. I don't think those are big numbers\n>\n\nThe loops are ~= 400 and 6,000\n\n\n>\n> It was only after the running the 2 queries that I realize I could\n> do EXPLAIN (ANALYZE, BUFFERS), but I couldn't reproduce the slowness.\n>\n\nDid you (can you even in RDS) attempt to clear those buffers? If the first\nquery ran slowly because none of the data was in memory (which you don't\nknow for certain because you didn't run with BUFFERS option then) then\nsubsequent runs would indeed be faster (the implementation of shared\nbuffers having fulfilled one of its major purposes in life).\n\nI'll agree buffers for that query does not seem to account for nearly two\nminutes...though as RDS is a shared resource I'd probably chalk at least\nsome of it to contention on the underlying hardware (disk likely being more\nproblematic than memory).\n\nDavid J.\n\nOn Wed, Jan 19, 2022 at 7:59 AM Ludwig Isaac Lim <[email protected]> wrote:\nI noticed that different is actually in Nested Loop join. One is taking 2 minutes, other is taking 12 seconds. I find this puzzling as I assume the nested loop should be done in memory.Everything is done in memory, but the data has to get there first (hence BUFFERS as you figured out below).\n\nThe disk is gp2 SDD so I'm even more baffled by this. What could be the factors that affect the speed of nested loop. I notice for that both loops the rows is 7780 and loops is 1. I don't think those are big numbersThe loops are ~= 400 and 6,000 \n\nIt was only after the running the 2 queries that I realize I could do EXPLAIN (ANALYZE, BUFFERS), but I couldn't reproduce the slowness.Did you (can you even in RDS) attempt to clear those buffers?  If the first query ran slowly because none of the data was in memory (which you don't know for certain because you didn't run with BUFFERS option then) then subsequent runs would indeed be faster (the implementation of shared buffers having fulfilled one of its major purposes in life).I'll agree buffers for that query does not seem to account for nearly two minutes...though as RDS is a shared resource I'd probably chalk at least some of it to contention on the underlying hardware (disk likely being more problematic than memory).David J.", "msg_date": "Wed, 19 Jan 2022 08:11:00 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.8 Same Query Same Execution Plan Different Time" }, { "msg_contents": "Michel SALAIS\n\nDe : David G. Johnston <[email protected]> \nEnvoyé : mercredi 19 janvier 2022 16:11\nÀ : Ludwig Isaac Lim <[email protected]>\nCc : [email protected]\nObjet : Re: PostgreSQL 12.8 Same Query Same Execution Plan Different Time\n\n \n\nOn Wed, Jan 19, 2022 at 7:59 AM Ludwig Isaac Lim <[email protected] <mailto:[email protected]> > wrote:\n\n\nI noticed that different is actually in Nested Loop join. One is taking 2 minutes, other is taking 12 seconds. I find this puzzling as I assume the nested loop should be done in memory.\n\n \n\nEverything is done in memory, but the data has to get there first (hence BUFFERS as you figured out below).\n\n \n\n\nThe disk is gp2 SDD so I'm even more baffled by this. What could be the factors that affect the speed of nested loop. I notice for that both loops the rows is 7780 and loops is 1. 
I don't think those are big numbers\n\n \n\nThe loops are ~= 400 and 6,000\n\n \n\n\nIt was only after the running the 2 queries that I realize I could do EXPLAIN (ANALYZE, BUFFERS), but I couldn't reproduce the slowness.\n\n \n\nDid you (can you even in RDS) attempt to clear those buffers? If the first query ran slowly because none of the data was in memory (which you don't know for certain because you didn't run with BUFFERS option then) then subsequent runs would indeed be faster (the implementation of shared buffers having fulfilled one of its major purposes in life).\n\n \n\nI'll agree buffers for that query does not seem to account for nearly two minutes...though as RDS is a shared resource I'd probably chalk at least some of it to contention on the underlying hardware (disk likely being more problematic than memory).\n\n \n\nDavid J.\n\nHi,\n\n \n\nAnother point to check is eventually IOPS…\n\nIt depends on the contracted service, If the quantity of IOPS is guaranteed or not. When it is not guaranteed and a sufficiently heavy load (in I/O) was executed for a while, the value of IOPS falls down dramatically and then you are sure to have performance problems…\n\n \n\nMichel SALAIS\n\n \n\n\nMichel SALAISDe : David G. Johnston <[email protected]> Envoyé : mercredi 19 janvier 2022 16:11À : Ludwig Isaac Lim <[email protected]>Cc : [email protected] : Re: PostgreSQL 12.8 Same Query Same Execution Plan Different Time On Wed, Jan 19, 2022 at 7:59 AM Ludwig Isaac Lim <[email protected]> wrote:I noticed that different is actually in Nested Loop join. One is taking 2 minutes, other is taking 12 seconds. I find this puzzling as I assume the nested loop should be done in memory. Everything is done in memory, but the data has to get there first (hence BUFFERS as you figured out below). The disk is gp2 SDD so I'm even more baffled by this. What could be the factors that affect the speed of nested loop. I notice for that both loops the rows is 7780 and loops is 1. I don't think those are big numbers The loops are ~= 400 and 6,000 It was only after the running the 2 queries that I realize I could do EXPLAIN (ANALYZE, BUFFERS), but I couldn't reproduce the slowness. Did you (can you even in RDS) attempt to clear those buffers?  If the first query ran slowly because none of the data was in memory (which you don't know for certain because you didn't run with BUFFERS option then) then subsequent runs would indeed be faster (the implementation of shared buffers having fulfilled one of its major purposes in life). I'll agree buffers for that query does not seem to account for nearly two minutes...though as RDS is a shared resource I'd probably chalk at least some of it to contention on the underlying hardware (disk likely being more problematic than memory). David J.Hi, Another point to check is eventually IOPS…It depends on the contracted service, If the quantity of IOPS is guaranteed or not. When it is not guaranteed and a sufficiently heavy load (in I/O) was executed for a while, the value of IOPS falls down dramatically and then you are sure to have performance problems… Michel SALAIS", "msg_date": "Thu, 20 Jan 2022 14:15:02 +0100", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PostgreSQL 12.8 Same Query Same Execution Plan Different Time" } ]
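A hedged follow-up to the buffers discussion above: when track_io_timing is enabled (it is a superuser-level setting, so on RDS it typically has to be changed in the parameter group), EXPLAIN (ANALYZE, BUFFERS) also reports time spent reading blocks, which separates "pages were already cached" from "pages had to be fetched" on a slow first run. A sketch with a stand-in query:

    SET track_io_timing = on;          -- superuser-level; on RDS set it via the parameter group
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM pg_class;     -- substitute the nested-loop query from this thread
    -- look for "Buffers: shared hit=... read=..." and "I/O Timings: read=..." in the output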
[ { "msg_contents": "Hi everyone,\n\nI have a SELECT query that uses a chain of CTEs (4) that is slow to run on\na large\ndatabase. But if I change a where clause in one of the small CTEs from an\nequality to an equivalent nested IN query, then the query becomes fast.\nLooking\nat the query plan I can see that after the change Postgres avoids a large\nand\nslow index scan by using a different index and aggregation. I am reluctant\nto\naccept the accidental \"fix\" because it seems odd and counter intuitive. Can\nanyone shed some light on what's going on? Is my fix the intended solution\nor\nis there a better way to write this query?\n\nWe have a system which stores resource blobs and extracts search parameters\ninto a number of tables. The query in question tries to find all resources\nwith\na specific tag (cte0) that are related to resource X (cte2) and are dated\nbefore some (recent) date Y (cte1) and sort them by date (cte3 & cte4). The\nquery was working okay on a small database, but over time as the database\ngrew\nthe query started to timeout. Which is why I am looking at it now.\n\nI have accidentally fixed the performance by replacing `system_id = 20` with\n`system_id IN (SELECT system_id FROM fhir.system WHERE value = 'REDACTED')`.\nThe nested query here returns a single row with a value `20`.\n\nHere are the results of EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT\nJSON):\n- Slow: https://explain.depesz.com/s/joHK\n- Fast: https://explain.depesz.com/s/tgd4\n\nSome more info about the CTEs:\n- cte0: select resources with a specific tag\n (most common resource types with the most common tag)\n- cte1: filter resource by date no later than Y\n (matches ~50% of the table, and most of resource from cte0)\n- cte2: select resources that are related to a specific resource X\n (matches 1-5 resources)\n- cte3: adds the date as a sort value\n- cte4: sorts the result\n\nI have also created a gist:\nhttps://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18\n- Schema.sql: the SQL script to create tables and indexes\n- Query.sql: the query I am trying to run\n- Postgres Settings, Table sizes and Statistics are also included in the\ngist\n\nPostgreSQL Version:\nPostgreSQL 13.3 (Ubuntu 13.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\nSetup: PostgreSQL is running inside a docker container on a dedicate node\nin a\nKubernetes cluster (using Zalando Spilo image:\nhttps://github.com/zalando/spilo)\n\nThank you,\nValentinas\n\nHi everyone,I have a SELECT query that uses a chain of CTEs (4) that is slow to run on a large database. But if I change a where clause in one of the small CTEs from an equality to an equivalent nested IN query, then the query becomes fast. Looking at the query plan I can see that after the change Postgres avoids a large and slow index scan by using a different index and aggregation. I am reluctant to accept the accidental \"fix\" because it seems odd and counter intuitive. Can anyone shed some light on what's going on? Is my fix the intended solution or is there a better way to write this query?We have a system which stores resource blobs and extracts search parameters into a number of tables. The query in question tries to find all resources with a specific tag (cte0) that are related to resource X (cte2) and are dated before some (recent) date Y (cte1) and sort them by date (cte3 & cte4). The query was working okay on a small database, but over time as the database grew the query started to timeout. 
Which is why I am looking at it now.I have accidentally fixed the performance by replacing `system_id = 20` with`system_id IN (SELECT system_id FROM fhir.system WHERE value = 'REDACTED')`.The nested query here returns a single row with a value `20`.Here are the results of EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON):- Slow: https://explain.depesz.com/s/joHK- Fast: https://explain.depesz.com/s/tgd4Some more info about the CTEs:- cte0: select resources with a specific tag        (most common resource types with the most common tag)- cte1: filter resource by date no later than Y        (matches ~50% of the table, and most of resource from cte0)- cte2: select resources that are related to a specific resource X        (matches 1-5 resources)- cte3: adds the date as a sort value- cte4: sorts the resultI have also created a gist:https://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18- Schema.sql: the SQL script to create tables and indexes- Query.sql: the query I am trying to run- Postgres Settings, Table sizes and Statistics are also included in the gistPostgreSQL Version:PostgreSQL 13.3 (Ubuntu 13.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu,compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bitSetup: PostgreSQL is running inside a docker container on a dedicate node in a Kubernetes cluster (using Zalando Spilo image: https://github.com/zalando/spilo)Thank you,Valentinas", "msg_date": "Thu, 20 Jan 2022 19:04:32 +0000", "msg_from": "Valentin Janeiko <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query fixed by replacing equality with a nested query" }, { "msg_contents": "I don't see any reference to cte1. Is that expected?\n\nI'm unclear why these sets are not just inner join'd\non resource_surrogate_id. It seems like that column it is being selected as\nSid1 in each CTE, and then the next one does the below. Why?\n\nwhere resource_surrogate_id IN (SELECT Sid1 FROM cte_previous_number)\n\nI don't see any reference to cte1. Is that expected?I'm unclear why these sets are not just inner join'd on resource_surrogate_id. It seems like that column it is being selected as Sid1 in each CTE, and then the next one does the below. Why?where resource_surrogate_id IN (SELECT Sid1 FROM cte_previous_number)", "msg_date": "Thu, 20 Jan 2022 20:33:17 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query fixed by replacing equality with a nested query" }, { "msg_contents": "My mistake. I have updated the query in the gist: cte1 should have been referenced in cte2. \n\nThe query plans are correct. It was just the query in the gist that was incorrect (I was just verifying cte1 was the culprit – without it the query is fast too).\n\n \n\nThis SQL query is a result of translating a FHIR query into SQL. These queries are generated on the fly from user input. The chains will not always be linear. But I guess I could write an optimizer that rewrites linear parts as JOINS. If that would result in better query plans.\n\n \n\nI have done a few simple experiments in the past comparing CTEs like this to JOINS, but the resultant query plans were the same. CTEs seemed easier to follow when troubleshooting issues, so I left them as such. Do JOINs become better than CTEs at a certain point?\n\n \n\nI will attempt to rewrite the query with JOINs on Monday to see if it makes a difference. It might be tricky, the relationship from resource table to search parameter tables is often a 1 to many.\n\n \n\n\nMy mistake. 
I have updated the query in the gist: cte1 should have been referenced in cte2. The query plans are correct. It was just the query in the gist that was incorrect (I was just verifying cte1 was the culprit – without it the query is fast too). This SQL query is a result of translating a FHIR query into SQL. These queries are generated on the fly from user input. The chains will not always be linear. But I guess I could write an optimizer that rewrites linear parts as JOINS. If that would result in better query plans. I have done a few simple experiments in the past comparing CTEs like this to JOINS, but the resultant query plans were the same. CTEs seemed easier to follow when troubleshooting issues, so I left them as such. Do JOINs become better than CTEs at a certain point? I will attempt to rewrite the query with JOINs on Monday to see if it makes a difference. It might be tricky, the relationship from resource table to search parameter tables is often a 1 to many.", "msg_date": "Fri, 21 Jan 2022 11:37:54 -0000", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "RE: Slow query fixed by replacing equality with a nested query" }, { "msg_contents": "I have rewritten the query using JOINs. I had to make one of them a\nFULL JOIN, but otherwise JOINs seem like a good idea.\nI have added the new query to the (same) gist:\nhttps://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18#file-queryoptimized-sql\nThe query plan is much better with just a few small index scans which\ncompletes in under a millisecond: https://explain.depesz.com/s/vBdG\n\nThank you for your help. Let me know if you have any other suggestions.\n\n\n", "msg_date": "Mon, 24 Jan 2022 13:22:41 +0000", "msg_from": "Valentin Janeiko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query fixed by replacing equality with a nested query" }, { "msg_contents": "On Fri, Jan 21, 2022 at 4:37 AM <[email protected]> wrote:\n\n> I have done a few simple experiments in the past comparing CTEs like this\n> to JOINS, but the resultant query plans were the same. CTEs seemed easier\n> to follow when troubleshooting issues, so I left them as such. Do JOINs\n> become better than CTEs at a certain point?\n>\n\nRead up on from_collapse_limit. If the query can re-write subqueries to\ncollapse the join problem, then it will at first but then once it reaches\nthat threshold, then it won't try anymore to avoid excessive planning time.\nThat's when things can go awry.\n\nOn Fri, Jan 21, 2022 at 4:37 AM <[email protected]> wrote: I have done a few simple experiments in the past comparing CTEs like this to JOINS, but the resultant query plans were the same. CTEs seemed easier to follow when troubleshooting issues, so I left them as such. Do JOINs become better than CTEs at a certain point?Read up on from_collapse_limit. If the query can re-write subqueries to collapse the join problem, then it will at first but then once it reaches that threshold, then it won't try anymore to avoid excessive planning time. That's when things can go awry.", "msg_date": "Mon, 24 Jan 2022 10:50:40 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query fixed by replacing equality with a nested query" }, { "msg_contents": "On Mon, Jan 24, 2022 at 6:22 AM Valentin Janeiko <[email protected]>\nwrote:\n\n> I have rewritten the query using JOINs. 
I had to make one of them a\n> FULL JOIN, but otherwise JOINs seem like a good idea.\n> I have added the new query to the (same) gist:\n>\n> https://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18#file-queryoptimized-sql\n> The query plan is much better with just a few small index scans which\n> completes in under a millisecond: https://explain.depesz.com/s/vBdG\n\n\nGlad to hear it, but as best as I can figure, that right join is actually\nan inner join because of the where clause meaning that cte2Source must not\nbe null and therefore cte2.resource_surrogate_id must not be null.\n\n*RIGHT* JOIN fhir.reference_search_param AS cte2 ON\ncte2.is_history = false\nAND cte2.search_param_id = 561\nAND cte2.resource_type_id IN (42)\nAND cte2.reference_resource_type_id = r.resource_type_id\nAND cte2.reference_resource_id_hash = r.resource_id_hash\n\nINNER JOIN fhir.resource AS cte2Source ON\n cte2Source.is_history = false\n AND cte2Source.resource_type_id IN (42)\n* AND cte2Source.resource_surrogate_id = cte2.resource_surrogate_id*\n\nWHERE cte1.start_date_time <= '2022-01-12 12:13:21.969000Z'\nAND r.resource_type_id IN (10, 52, 95, 119, 60)\n* AND cte2Source.resource_id_hash IN\n('df26ca5a-d2e2-1576-2507-815d8e73f15e'::uuid)*\n\nOn Mon, Jan 24, 2022 at 6:22 AM Valentin Janeiko <[email protected]> wrote:I have rewritten the query using JOINs. I had to make one of them a\nFULL JOIN, but otherwise JOINs seem like a good idea.\nI have added the new query to the (same) gist:\nhttps://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18#file-queryoptimized-sql\nThe query plan is much better with just a few small index scans which\ncompletes in under a millisecond: https://explain.depesz.com/s/vBdGGlad to hear it, but as best as I can figure, that right join is actually an inner join because of the where clause meaning that cte2Source must not be null and therefore cte2.resource_surrogate_id must not be null.RIGHT JOIN fhir.reference_search_param AS cte2 ON\tcte2.is_history = false\tAND cte2.search_param_id = 561\tAND cte2.resource_type_id IN (42)\tAND cte2.reference_resource_type_id = r.resource_type_id\tAND cte2.reference_resource_id_hash = r.resource_id_hashINNER JOIN fhir.resource AS cte2Source ON   cte2Source.is_history = false   AND cte2Source.resource_type_id IN (42)   AND cte2Source.resource_surrogate_id = cte2.resource_surrogate_idWHERE cte1.start_date_time <= '2022-01-12 12:13:21.969000Z'\tAND r.resource_type_id IN (10, 52, 95, 119, 60)\tAND cte2Source.resource_id_hash IN ('df26ca5a-d2e2-1576-2507-815d8e73f15e'::uuid)", "msg_date": "Mon, 24 Jan 2022 10:52:40 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query fixed by replacing equality with a nested query" } ]
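A small illustration of the from_collapse_limit point made above; the values shown are only an example, and raising them trades extra planning time for a chance at a flatter, better-optimized join tree when many subqueries are involved:

    SHOW from_collapse_limit;        -- 8 by default
    SHOW join_collapse_limit;        -- 8 by default
    SET from_collapse_limit = 16;    -- session-level; lets the planner fold more subqueries into one join problem
    SET join_collapse_limit = 16;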
[ { "msg_contents": "Hi folks,\n\nWe are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on a VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. We have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n\nAnother point is that we populated the new database from the old (9.6), using pg_dump. Could this be causing issues? Should we load the data from scratch? We use ogr2ogr (GDAL) to help assist with loading of spatial data. Anyway, not really sure what the problem is.\n\nLastly, why am I seeing so many requests as to the PostGIS version. It appears that every map request sends the following query \"SELECT PostGIS_Version();\", which in turn takes up a connection.\n\nAny help would be greatly appreciated.\n\nThanks\n\n __:)\n _ \\<,_\n (*)/ (*)\nJames Lugosi\nClackamas County GISP\nIS Software Specialist, Senior\n121 Library Court, Oregon City OR 97045\n503-723-4829\n\n\n\n\n\n\n\n\n\n\nHi folks,\n \nWe are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on a\n VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. We\n have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n \nAnother point is that we populated the new database from the old (9.6), using pg_dump. Could this be causing issues? Should we load the data from scratch? We use ogr2ogr (GDAL) to help assist with loading of spatial data. Anyway, not really\n sure what the problem is.\n\n\nLastly, why am I seeing so many requests as to the PostGIS version. It appears that every map request sends the following query \"SELECT PostGIS_Version();\", which in turn takes up a connection. \n\n \nAny help would be greatly appreciated.\n \nThanks\n \n     __J\n   _ \\<,_\n  (*)/ (*)\n\nJames Lugosi\nClackamas County GISP\nIS Software Specialist, Senior\n121 Library Court, Oregon City OR 97045\n503-723-4829", "msg_date": "Thu, 20 Jan 2022 22:50:07 +0000", "msg_from": "\"Lugosi, Jim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance PostgreSQL 13/ PostGIS 3.x" }, { "msg_contents": "On Thu, Jan 20, 2022 at 4:50 PM Lugosi, Jim <[email protected]> wrote:\n>\n> Hi folks,\n>\n>\n>\n> We are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on a VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. 
Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. We have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n\nplease post EXPLAIN ANALYZE for a query that you think is\nunderperforming. Ideally, we can also produce from legacy 9.6\nequivalent.\n\nmerlin\n\n\n", "msg_date": "Fri, 18 Feb 2022 09:38:56 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance PostgreSQL 13/ PostGIS 3.x" } ]
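One hedged point worth checking for the pg_dump question above: a dump-and-restore does not carry planner statistics over, so a freshly loaded cluster plans against empty statistics until it is analyzed. A minimal check (the second table name is only a placeholder):

    ANALYZE VERBOSE;                   -- whole database, after the restore
    ANALYZE some_slow_spatial_table;   -- or one suspect layer at a time

If performance is still poor afterwards, the EXPLAIN ANALYZE output requested in the reply is the next step.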
[ { "msg_contents": "All;\n\n\nI am looking for information on how PostgreSQL leverages or interfaces \nwith CPU's on Linux. Does PostgreSQL let Linux do the work? Does it \nbypass the OS? Any information or docs you can send my way would be much \nappreciated.\n\n\nThanks in advance\n\n\n\n\n", "msg_date": "Thu, 20 Jan 2022 16:21:54 -0700", "msg_from": "Sbob <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL and Linux CPU's" }, { "msg_contents": "On Thu, Jan 20, 2022 at 4:22 PM Sbob <[email protected]> wrote:\n\n> I am looking for information on how PostgreSQL leverages or interfaces\n> with CPU's on Linux. Does PostgreSQL let Linux do the work? Does it\n> bypass the OS? Any information or docs you can send my way would be much\n> appreciated.\n>\n>\nPostgreSQL is a user land process in Linux. Linux doesn't allow itself to\nbe bypassed by user land processes when dealing with the CPU. That is kind\nof its main reason for existing...\n\nPostgreSQL uses a process forking model and each process runs on a single\nthread.\n\nYou can probably verify all of that by perusing the PostgreSQL\ndocumentation. Don't know what to recommend regarding Linxu, user land,\nkernel mode, and CPUs...\n\nDavid J.\n\nOn Thu, Jan 20, 2022 at 4:22 PM Sbob <[email protected]> wrote:I am looking for information on how PostgreSQL leverages or interfaces \nwith CPU's on Linux. Does PostgreSQL let Linux do the work? Does it \nbypass the OS? Any information or docs you can send my way would be much \nappreciated.PostgreSQL is a user land process in Linux.  Linux doesn't allow itself to be bypassed by user land processes when dealing with the CPU.  That is kind of its main reason for existing...PostgreSQL uses a process forking model and each process runs on a single thread.You can probably verify all of that by perusing the PostgreSQL documentation.  Don't know what to recommend regarding Linxu, user land, kernel mode, and CPUs...David J.", "msg_date": "Thu, 20 Jan 2022 16:27:23 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux CPU's" } ]
[ { "msg_contents": "Hello,\n\nI have a strange case of a query that runs substantially slower when run as a\nJava PreparedStatement with placeholders, compared to using constant values in\nthe SQL string.\n\nIn my experience, the reason for this is usually a different execution plan for the\nprepared statement.\n\nHowever in this case, the plans are identical but the prepared statements runs substantially\nslower than the \"non-prepared\" plan: 1800ms to 2000ms vs. 250ms to 350ms\n\nI can't disclose the query, but the basic structure is this:\n\n select ...\n from some_table\n where jsonb_column #>> $1 = ANY ($2)\n and some_uuid_column = ANY (.....)\n\nFor various reasons the list of values for the some_uuid_column = ANY(..) condition\nis always passed as constant values.\n\nThe plan is quite reasonable using a Bitmap Heap Scan in both cases on \"some_uuid_column\"\n\nI uploaded the (anonymized) plans to explain.depesz:\n\nFast execution: https://explain.depesz.com/s/QyFR\nSlow execution: https://explain.depesz.com/s/mcQz\n\nThe \"prepared\" plan was created using psql, not through JDBC:\n PREPARE p1(text,text) AS ...\n\n EXPLAIN (analyze, buffers, timing, verbose)\n EXECUTE p1 ('{...}', '{....}')\n\n\nBut the runtime is pretty much what I see when doing this through Java.\n\nMy question is: why is processing the query through a prepared statement so much slower?\n\nThis happens on a test system running Postgres 13.2 on CentOS, and another test system\nrunning 13.5 on Ubuntu.\n\nFor the time being, we can switch off the use of a PreparedStatement, but I'm also\ninteresting to know the underlying root cause.\n\nAny ideas?\n\n\n", "msg_date": "Wed, 26 Jan 2022 08:18:59 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": true, "msg_subject": "Query runs slower as prepared statement - identical execution plans" } ]
[ { "msg_contents": "Hello,\n\nI have a table that contains folders, and another one that contains files.\n\nHere are the table definitions. I have removed most of the columns because\nthey are not important for this question. (There are lots of columns.)\n\nCREATE TABLE media.oo_folder (\nid int8 NOT NULL,\nis_active bool NOT NULL DEFAULT true,\ntitle text NOT NULL,\nrelpath text NOT NULL,\nCONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\nCONSTRAINT oo_folder_chk_no_slash CHECK ((\"position\"(title, '/'::text) =\n0)),\nCONSTRAINT pk_oo_folder PRIMARY KEY (id),\nCONSTRAINT fk_oo_folder_parent_id FOREIGN KEY (parent_id) REFERENCES\nmedia.oo_folder(id) ON DELETE CASCADE DEFERRABLE\n);\nCREATE INDEX oo_folder_idx_parent ON media.oo_folder USING btree\n(parent_id);\nCREATE INDEX oo_folder_idx_relpath ON media.oo_folder USING btree (relpath);\nCREATE UNIQUE INDEX uidx_oo_folder_active_title ON media.oo_folder USING\nbtree (parent_id, title) WHERE is_active;\n\n\nCREATE TABLE media.oo_file (\nid int8 NOT NULL,\nis_active bool NOT NULL DEFAULT true,\ntitle text NOT NULL,\next text NULL,\nrelpath text NOT NULL,\nsha1 text NOT NULL,\nCONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\nCONSTRAINT oo_file_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\nCONSTRAINT pk_oo_file PRIMARY KEY (id),\nCONSTRAINT fk_oo_file_oo_folder_id FOREIGN KEY (oo_folder_id) REFERENCES\nmedia.oo_folder(id) ON DELETE CASCADE DEFERRABLE,\n);\nCREATE INDEX oo_file_idx_oo_folder_id ON media.oo_file USING btree\n(oo_folder_id);\nCREATE INDEX oo_file_idx_relpath ON media.oo_file USING btree (relpath);\nCREATE INDEX oo_file_idx_sha1 ON media.oo_file USING btree (sha1);\nCREATE UNIQUE INDEX uidx_oo_file_active_title ON media.oo_file USING btree\n(oo_folder_id, title) WHERE is_active;\n\nThe \"replath\" field contains the path of the file/folder. For example:\n\"/folder1/folder2/folder3/filename4.ext5\". The replath field is managed by\ntriggers. There are about 1M rows for files and 600K folder rows in the\ndatabase. The files are well distributed between folders, and there are\nonly 45 root folders ( parent_id is null)\n\nThis query runs very fast:\n\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select id, title from\nmedia.oo_folder f where f.parent_id is null\n\nQUERY PLAN\n |\n------------------------------------------------------------------------------------------------------------------------------------------+\nIndex Scan using oo_folder_idx_parent on media.oo_folder f\n (cost=0.42..73.70 rows=20 width=25) (actual time=0.030..0.159 rows=45\nloops=1)|\n Output: id, title\n |\n Index Cond: (f.parent_id IS NULL)\n |\n Buffers: shared hit=40\n |\nPlanning Time: 0.123 ms\n |\nExecution Time: 0.187 ms\n |\n\nMy task is to write a query that tells if a folder has any active file\ninside it - directly or in subfolders. 
Here is the query for that:\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n\nselect id, title,\n(exists (select f2.id from media.oo_file f2 where f2.relpath like f.relpath\n|| '%')) as has_file\nfrom media.oo_folder f where f.parent_id is null\n\nQUERY PLAN\n\n |\n--------------------------------------------------------------------------------------------------------------------------------------------------------------+\nIndex Scan using oo_folder_idx_parent on media.oo_folder f\n (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969\nrows=45 loops=1) |\n Output: f.id, f.title, (SubPlan 1)\n\n |\n Index Cond: (f.parent_id IS NULL)\n\n |\n Buffers: shared hit=7014170\n\n |\n SubPlan 1\n\n |\n -> Index Only Scan using oo_file_idx_relpath on media.oo_file f2\n (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756\nrows=0 loops=45)|\n Filter: (f2.relpath ~~ (f.relpath || '%'::text))\n\n |\n Rows Removed by Filter: 792025\n\n |\n Heap Fetches: 768960\n\n |\n Buffers: shared hit=7014130\n\n |\nPlanning Time: 0.361 ms\n\n |\nExecution Time: 25415.088 ms\n\n |\n\nIt also returns 45 rows, but in 25 seconds which is unacceptable.\n\nIt I execute the \"has_file\" subquery for one specific relpath then it\nspeeds up again, to < 1msec:\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect exists ( select id from media.oo_file of2 where relpath like\n'Felhasználók%')\nQUERY PLAN\n |\n--------------------------------------------------------------------------------------------------------------------------+\nResult (cost=1.66..1.67 rows=1 width=1) (actual time=0.049..0.050 rows=1\nloops=1) |\n Output: $0\n |\n Buffers: shared hit=2\n |\n InitPlan 1 (returns $0)\n |\n -> Seq Scan on media.oo_file of2 (cost=0.00..144714.70 rows=86960\nwidth=0) (actual time=0.044..0.044 rows=1 loops=1)|\n Filter: (of2.relpath ~~ 'Felhasználók%'::text)\n |\n Rows Removed by Filter: 15\n |\n Buffers: shared hit=2\n |\nPlanning Time: 0.290 ms\n |\nExecution Time: 0.076 ms\n |\n\nIn other words, I could write a pl/sql function with a nested loop instead\nof the problematic query, and it will be 1000 times faster.\n\nWhat am I missing?\n\nThanks,\n\n Laszlo\n\nHello,I have a table that contains folders, and another one that contains files. Here are the table definitions. I have removed most of the columns because they are not important for this question. 
(There are lots of columns.)CREATE TABLE media.oo_folder (\tid int8 NOT NULL,is_active bool NOT NULL DEFAULT true,\ttitle text NOT NULL,relpath text NOT NULL,CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT oo_folder_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT pk_oo_folder PRIMARY KEY (id),CONSTRAINT fk_oo_folder_parent_id FOREIGN KEY (parent_id) REFERENCES media.oo_folder(id) ON DELETE CASCADE DEFERRABLE);CREATE INDEX oo_folder_idx_parent ON media.oo_folder USING btree (parent_id);CREATE INDEX oo_folder_idx_relpath ON media.oo_folder USING btree (relpath);CREATE UNIQUE INDEX uidx_oo_folder_active_title ON media.oo_folder USING btree (parent_id, title) WHERE is_active;CREATE TABLE media.oo_file (\tid int8 NOT NULL,is_active bool NOT NULL DEFAULT true,title text NOT NULL,\text text NULL,relpath text NOT NULL,sha1 text NOT NULL,CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT oo_file_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT pk_oo_file PRIMARY KEY (id),CONSTRAINT fk_oo_file_oo_folder_id FOREIGN KEY (oo_folder_id) REFERENCES media.oo_folder(id) ON DELETE CASCADE DEFERRABLE,);CREATE INDEX oo_file_idx_oo_folder_id ON media.oo_file USING btree (oo_folder_id);CREATE INDEX oo_file_idx_relpath ON media.oo_file USING btree (relpath);CREATE INDEX oo_file_idx_sha1 ON media.oo_file USING btree (sha1);CREATE UNIQUE INDEX uidx_oo_file_active_title ON media.oo_file USING btree (oo_folder_id, title) WHERE is_active;The \"replath\" field contains the path of the file/folder. For example: \"/folder1/folder2/folder3/filename4.ext5\".  The replath field is managed by triggers. There are about 1M rows for files and 600K folder rows in the database. The files are well distributed between folders, and there are only 45 root folders ( parent_id is null)This query runs very fast:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select id, title from media.oo_folder f where f.parent_id is nullQUERY PLAN                                                                                                                                |------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..73.70 rows=20 width=25) (actual time=0.030..0.159 rows=45 loops=1)|  Output: id, title                                                                                                                       |  Index Cond: (f.parent_id IS NULL)                                                                                                       |  Buffers: shared hit=40                                                                                                                  |Planning Time: 0.123 ms                                                                                                                   |Execution Time: 0.187 ms                                                                                                                  |My task is to write a query that tells if a folder has any active file inside it - directly or in subfolders. 
Here is the query for that:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select id, title,\t(exists (select f2.id from media.oo_file f2 where f2.relpath like f.relpath || '%')) as has_filefrom media.oo_folder f where f.parent_id is nullQUERY PLAN                                                                                                                                                    |--------------------------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969 rows=45 loops=1)             |  Output: f.id, f.title, (SubPlan 1)                                                                                                                          |  Index Cond: (f.parent_id IS NULL)                                                                                                                           |  Buffers: shared hit=7014170                                                                                                                                 |  SubPlan 1                                                                                                                                                   |    ->  Index Only Scan using oo_file_idx_relpath on media.oo_file f2  (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)|          Filter: (f2.relpath ~~ (f.relpath || '%'::text))                                                                                                    |          Rows Removed by Filter: 792025                                                                                                                      |          Heap Fetches: 768960                                                                                                                                |          Buffers: shared hit=7014130                                                                                                                         |Planning Time: 0.361 ms                                                                                                                                       |Execution Time: 25415.088 ms                                                                                                                                  |It also returns 45 rows, but in 25 seconds which is unacceptable. 
It I execute the \"has_file\" subquery for one specific relpath then it speeds up again, to < 1msec:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select exists ( select id from media.oo_file of2  where relpath  like 'Felhasználók%')QUERY PLAN                                                                                                                |--------------------------------------------------------------------------------------------------------------------------+Result  (cost=1.66..1.67 rows=1 width=1) (actual time=0.049..0.050 rows=1 loops=1)                                        |  Output: $0                                                                                                              |  Buffers: shared hit=2                                                                                                   |  InitPlan 1 (returns $0)                                                                                                 |    ->  Seq Scan on media.oo_file of2  (cost=0.00..144714.70 rows=86960 width=0) (actual time=0.044..0.044 rows=1 loops=1)|          Filter: (of2.relpath ~~ 'Felhasználók%'::text)                                                                  |          Rows Removed by Filter: 15                                                                                      |          Buffers: shared hit=2                                                                                           |Planning Time: 0.290 ms                                                                                                   |Execution Time: 0.076 ms                                                                                                  |In other words, I could write a pl/sql function with a nested loop instead of the problematic query, and it will be 1000 times faster.What am I missing?Thanks,   Laszlo", "msg_date": "Fri, 4 Feb 2022 10:11:31 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Terribly slow query with very good plan?" 
}, { "msg_contents": "On Fri, 2022-02-04 at 10:11 +0100, Les wrote:\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> \n> select id, title,\n>  (exists (select f2.id from media.oo_file f2 where f2.relpath like f.relpath || '%')) as has_file\n> from media.oo_folder f where f.parent_id is null\n> \n> QUERY PLAN                                                                                                                                                    |\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------+\n> Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969 rows=45 loops=1)             |\n>   Output: f.id, f.title, (SubPlan 1)                                                                                                                          |\n>   Index Cond: (f.parent_id IS NULL)                                                                                                                           |\n>   Buffers: shared hit=7014170                                                                                                                                 |\n>   SubPlan 1                                                                                                                                                   |\n>     ->  Index Only Scan using oo_file_idx_relpath on media.oo_file f2  (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)|\n>           Filter: (f2.relpath ~~ (f.relpath || '%'::text))                                                                                                    |\n>           Rows Removed by Filter: 792025                                                                                                                      |\n>           Heap Fetches: 768960                                                                                                                                |\n>           Buffers: shared hit=7014130                                                                                                                         |\n> Planning Time: 0.361 ms                                                                                                                                       |\n> Execution Time: 25415.088 ms                                                                                                                                  |\n> \n> It also returns 45 rows, but in 25 seconds which is unacceptable. \n\nYou should create an index that supports LIKE; for example\n\nCREATE INDEX ON media.oo_file (relpath COLLATE \"C\");\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 04 Feb 2022 10:18:43 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "pá 4. 2. 2022 v 10:11 odesílatel Les <[email protected]> napsal:\n\n> Hello,\n>\n> I have a table that contains folders, and another one that contains files.\n>\n> Here are the table definitions. I have removed most of the columns because\n> they are not important for this question. 
(There are lots of columns.)\n>\n> CREATE TABLE media.oo_folder (\n> id int8 NOT NULL,\n> is_active bool NOT NULL DEFAULT true,\n> title text NOT NULL,\n> relpath text NOT NULL,\n> CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\n> CONSTRAINT oo_folder_chk_no_slash CHECK ((\"position\"(title, '/'::text) =\n> 0)),\n> CONSTRAINT pk_oo_folder PRIMARY KEY (id),\n> CONSTRAINT fk_oo_folder_parent_id FOREIGN KEY (parent_id) REFERENCES\n> media.oo_folder(id) ON DELETE CASCADE DEFERRABLE\n> );\n> CREATE INDEX oo_folder_idx_parent ON media.oo_folder USING btree\n> (parent_id);\n> CREATE INDEX oo_folder_idx_relpath ON media.oo_folder USING btree\n> (relpath);\n> CREATE UNIQUE INDEX uidx_oo_folder_active_title ON media.oo_folder USING\n> btree (parent_id, title) WHERE is_active;\n>\n>\n> CREATE TABLE media.oo_file (\n> id int8 NOT NULL,\n> is_active bool NOT NULL DEFAULT true,\n> title text NOT NULL,\n> ext text NULL,\n> relpath text NOT NULL,\n> sha1 text NOT NULL,\n> CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\n> CONSTRAINT oo_file_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\n> CONSTRAINT pk_oo_file PRIMARY KEY (id),\n> CONSTRAINT fk_oo_file_oo_folder_id FOREIGN KEY (oo_folder_id) REFERENCES\n> media.oo_folder(id) ON DELETE CASCADE DEFERRABLE,\n> );\n> CREATE INDEX oo_file_idx_oo_folder_id ON media.oo_file USING btree\n> (oo_folder_id);\n> CREATE INDEX oo_file_idx_relpath ON media.oo_file USING btree (relpath);\n> CREATE INDEX oo_file_idx_sha1 ON media.oo_file USING btree (sha1);\n> CREATE UNIQUE INDEX uidx_oo_file_active_title ON media.oo_file USING btree\n> (oo_folder_id, title) WHERE is_active;\n>\n> The \"replath\" field contains the path of the file/folder. For example:\n> \"/folder1/folder2/folder3/filename4.ext5\". The replath field is managed by\n> triggers. There are about 1M rows for files and 600K folder rows in the\n> database. The files are well distributed between folders, and there are\n> only 45 root folders ( parent_id is null)\n>\n> This query runs very fast:\n>\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select id, title from\n> media.oo_folder f where f.parent_id is null\n>\n> QUERY PLAN\n> |\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------+\n> Index Scan using oo_folder_idx_parent on media.oo_folder f\n> (cost=0.42..73.70 rows=20 width=25) (actual time=0.030..0.159 rows=45\n> loops=1)|\n> Output: id, title\n> |\n> Index Cond: (f.parent_id IS NULL)\n> |\n> Buffers: shared hit=40\n> |\n> Planning Time: 0.123 ms\n> |\n> Execution Time: 0.187 ms\n> |\n>\n> My task is to write a query that tells if a folder has any active file\n> inside it - directly or in subfolders. 
Here is the query for that:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n>\n> select id, title,\n> (exists (select f2.id from media.oo_file f2 where f2.relpath like\n> f.relpath || '%')) as has_file\n> from media.oo_folder f where f.parent_id is null\n>\n> QUERY PLAN\n>\n> |\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------+\n> Index Scan using oo_folder_idx_parent on media.oo_folder f\n> (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969\n> rows=45 loops=1) |\n> Output: f.id, f.title, (SubPlan 1)\n>\n> |\n> Index Cond: (f.parent_id IS NULL)\n>\n> |\n> Buffers: shared hit=7014170\n>\n> |\n> SubPlan 1\n>\n> |\n> -> Index Only Scan using oo_file_idx_relpath on media.oo_file f2\n> (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756\n> rows=0 loops=45)|\n> Filter: (f2.relpath ~~ (f.relpath || '%'::text))\n>\n> |\n> Rows Removed by Filter: 792025\n>\n> |\n> Heap Fetches: 768960\n>\n> |\n> Buffers: shared hit=7014130\n>\n> |\n> Planning Time: 0.361 ms\n>\n> |\n> Execution Time: 25415.088 ms\n>\n> |\n>\n> It also returns 45 rows, but in 25 seconds which is unacceptable.\n>\n> It I execute the \"has_file\" subquery for one specific relpath then it\n> speeds up again, to < 1msec:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> select exists ( select id from media.oo_file of2 where relpath like\n> 'Felhasználók%')\n> QUERY PLAN\n> |\n>\n> --------------------------------------------------------------------------------------------------------------------------+\n> Result (cost=1.66..1.67 rows=1 width=1) (actual time=0.049..0.050 rows=1\n> loops=1) |\n> Output: $0\n> |\n> Buffers: shared hit=2\n> |\n> InitPlan 1 (returns $0)\n> |\n> -> Seq Scan on media.oo_file of2 (cost=0.00..144714.70 rows=86960\n> width=0) (actual time=0.044..0.044 rows=1 loops=1)|\n> Filter: (of2.relpath ~~ 'Felhasználók%'::text)\n> |\n> Rows Removed by Filter: 15\n> |\n> Buffers: shared hit=2\n> |\n> Planning Time: 0.290 ms\n> |\n> Execution Time: 0.076 ms\n> |\n>\n> In other words, I could write a pl/sql function with a nested loop instead\n> of the problematic query, and it will be 1000 times faster.\n>\n> What am I missing?\n>\n\nI don't understand how it is possible in the slow case Rows Removed by\nFilter: 792025 (returns 0 row) and in the second case Rows Removed by\nFilter: 15 (returns 1 row).\n\nIt is strange.\n\n\n\n\n> Thanks,\n>\n> Laszlo\n>\n\npá 4. 2. 2022 v 10:11 odesílatel Les <[email protected]> napsal:Hello,I have a table that contains folders, and another one that contains files. Here are the table definitions. I have removed most of the columns because they are not important for this question. 
(There are lots of columns.)CREATE TABLE media.oo_folder (\tid int8 NOT NULL,is_active bool NOT NULL DEFAULT true,\ttitle text NOT NULL,relpath text NOT NULL,CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT oo_folder_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT pk_oo_folder PRIMARY KEY (id),CONSTRAINT fk_oo_folder_parent_id FOREIGN KEY (parent_id) REFERENCES media.oo_folder(id) ON DELETE CASCADE DEFERRABLE);CREATE INDEX oo_folder_idx_parent ON media.oo_folder USING btree (parent_id);CREATE INDEX oo_folder_idx_relpath ON media.oo_folder USING btree (relpath);CREATE UNIQUE INDEX uidx_oo_folder_active_title ON media.oo_folder USING btree (parent_id, title) WHERE is_active;CREATE TABLE media.oo_file (\tid int8 NOT NULL,is_active bool NOT NULL DEFAULT true,title text NOT NULL,\text text NULL,relpath text NOT NULL,sha1 text NOT NULL,CONSTRAINT chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT oo_file_chk_no_slash CHECK ((\"position\"(title, '/'::text) = 0)),\tCONSTRAINT pk_oo_file PRIMARY KEY (id),CONSTRAINT fk_oo_file_oo_folder_id FOREIGN KEY (oo_folder_id) REFERENCES media.oo_folder(id) ON DELETE CASCADE DEFERRABLE,);CREATE INDEX oo_file_idx_oo_folder_id ON media.oo_file USING btree (oo_folder_id);CREATE INDEX oo_file_idx_relpath ON media.oo_file USING btree (relpath);CREATE INDEX oo_file_idx_sha1 ON media.oo_file USING btree (sha1);CREATE UNIQUE INDEX uidx_oo_file_active_title ON media.oo_file USING btree (oo_folder_id, title) WHERE is_active;The \"replath\" field contains the path of the file/folder. For example: \"/folder1/folder2/folder3/filename4.ext5\".  The replath field is managed by triggers. There are about 1M rows for files and 600K folder rows in the database. The files are well distributed between folders, and there are only 45 root folders ( parent_id is null)This query runs very fast:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select id, title from media.oo_folder f where f.parent_id is nullQUERY PLAN                                                                                                                                |------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..73.70 rows=20 width=25) (actual time=0.030..0.159 rows=45 loops=1)|  Output: id, title                                                                                                                       |  Index Cond: (f.parent_id IS NULL)                                                                                                       |  Buffers: shared hit=40                                                                                                                  |Planning Time: 0.123 ms                                                                                                                   |Execution Time: 0.187 ms                                                                                                                  |My task is to write a query that tells if a folder has any active file inside it - directly or in subfolders. 
Here is the query for that:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select id, title,\t(exists (select f2.id from media.oo_file f2 where f2.relpath like f.relpath || '%')) as has_filefrom media.oo_folder f where f.parent_id is nullQUERY PLAN                                                                                                                                                    |--------------------------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969 rows=45 loops=1)             |  Output: f.id, f.title, (SubPlan 1)                                                                                                                          |  Index Cond: (f.parent_id IS NULL)                                                                                                                           |  Buffers: shared hit=7014170                                                                                                                                 |  SubPlan 1                                                                                                                                                   |    ->  Index Only Scan using oo_file_idx_relpath on media.oo_file f2  (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)|          Filter: (f2.relpath ~~ (f.relpath || '%'::text))                                                                                                    |          Rows Removed by Filter: 792025                                                                                                                      |          Heap Fetches: 768960                                                                                                                                |          Buffers: shared hit=7014130                                                                                                                         |Planning Time: 0.361 ms                                                                                                                                       |Execution Time: 25415.088 ms                                                                                                                                  |It also returns 45 rows, but in 25 seconds which is unacceptable. 
It I execute the \"has_file\" subquery for one specific relpath then it speeds up again, to < 1msec:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select exists ( select id from media.oo_file of2  where relpath  like 'Felhasználók%')QUERY PLAN                                                                                                                |--------------------------------------------------------------------------------------------------------------------------+Result  (cost=1.66..1.67 rows=1 width=1) (actual time=0.049..0.050 rows=1 loops=1)                                        |  Output: $0                                                                                                              |  Buffers: shared hit=2                                                                                                   |  InitPlan 1 (returns $0)                                                                                                 |    ->  Seq Scan on media.oo_file of2  (cost=0.00..144714.70 rows=86960 width=0) (actual time=0.044..0.044 rows=1 loops=1)|          Filter: (of2.relpath ~~ 'Felhasználók%'::text)                                                                  |          Rows Removed by Filter: 15                                                                                      |          Buffers: shared hit=2                                                                                           |Planning Time: 0.290 ms                                                                                                   |Execution Time: 0.076 ms                                                                                                  |In other words, I could write a pl/sql function with a nested loop instead of the problematic query, and it will be 1000 times faster.What am I missing?I don't understand how it is possible in the slow case Rows Removed by Filter: 792025 (returns 0 row) and in the second case Rows Removed by Filter: 15 (returns 1 row).It is strange.Thanks,   Laszlo", "msg_date": "Fri, 4 Feb 2022 10:23:03 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "Laurenz Albe <[email protected]> ezt írta (időpont: 2022. febr. 
4.,\nP, 10:18):\n\n> |\n> >\n> > It also returns 45 rows, but in 25 seconds which is unacceptable.\n>\n> You should create an index that supports LIKE; for example\n>\n> CREATE INDEX ON media.oo_file (relpath COLLATE \"C\");\n>\n>\nCREATE INDEX test ON media.oo_file (relpath COLLATE \"C\");\n EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n\nselect id, title,\n(exists (select f2.id from\nmedia.oo_file f2\nwhere f2.relpath like f.relpath || '%' )) as has_file\nfrom media.oo_folder f where f.parent_id is null;\nQUERY PLAN\n |\n-------------------------------------------------------------------------------------------------------------------------------------------------+\nIndex Scan using oo_folder_idx_parent on media.oo_folder f\n (cost=0.42..459.38 rows=20 width=26) (actual time=772.566..24081.820\nrows=45 loops=1)|\n Output: f.id, f.title, (SubPlan 1)\n |\n Index Cond: (f.parent_id IS NULL)\n |\n Buffers: shared hit=6672274\n |\n SubPlan 1\n |\n -> Index Only Scan using test on media.oo_file f2\n (cost=0.55..100756.64 rows=5379 width=0) (actual time=535.113..535.113\nrows=0 loops=45) |\n Filter: (f2.relpath ~~ (f.relpath || '%'::text))\n |\n Rows Removed by Filter: 777428\n |\n Heap Fetches: 736418\n |\n Buffers: shared hit=6672234\n |\nPlanning Time: 0.338 ms\n |\nExecution Time: 24082.152 ms\n |\n\nNot helping :-(\n\nLaurenz Albe <[email protected]> ezt írta (időpont: 2022. febr. 4., P, 10:18):                                                          |\r\n> \r\n> It also returns 45 rows, but in 25 seconds which is unacceptable. \n\r\nYou should create an index that supports LIKE; for example\n\r\nCREATE INDEX ON media.oo_file (relpath COLLATE \"C\");\nCREATE INDEX test ON media.oo_file (relpath COLLATE \"C\"); EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select id, title,\t(exists (select f2.id from\t\tmedia.oo_file f2\twhere f2.relpath like f.relpath || '%' )) as has_filefrom media.oo_folder f where f.parent_id is null;QUERY PLAN                                                                                                                                       |-------------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..459.38 rows=20 width=26) (actual time=772.566..24081.820 rows=45 loops=1)|  Output: f.id, f.title, (SubPlan 1)                                                                                                             |  Index Cond: (f.parent_id IS NULL)                                                                                                              |  Buffers: shared hit=6672274                                                                                                                    |  SubPlan 1                                                                                                                                      |    ->  Index Only Scan using test on media.oo_file f2  (cost=0.55..100756.64 rows=5379 width=0) (actual time=535.113..535.113 rows=0 loops=45)  |          Filter: (f2.relpath ~~ (f.relpath || '%'::text))                                                                                       |          Rows Removed by Filter: 777428                                                                                                         |          Heap Fetches: 736418                                                                                                                   
|          Buffers: shared hit=6672234                                                                                                            |Planning Time: 0.338 ms                                                                                                                          |Execution Time: 24082.152 ms                                                                                                                     |Not helping :-(", "msg_date": "Fri, 4 Feb 2022 10:32:58 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 09:11, Les <[email protected]> wrote:\n\n |\n> -> Index Only Scan using oo_file_idx_relpath on media.oo_file f2 (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)|\n> Filter: (f2.relpath ~~ (f.relpath || '%'::text)) |\n> Rows Removed by Filter: 792025 |\n> Heap Fetches: 768960 |\n> Buffers: shared hit=7014130 |\n> Planning Time: 0.361 ms\n> Execution Time: 25415.088 ms\n\n\n> -> Seq Scan on media.oo_file of2 (cost=0.00..144714.70 rows=86960 width=0) (actual time=0.044..0.044 rows=1 loops=1)|\n> Filter: (of2.relpath ~~ 'Felhasználók%'::text) |\n> Rows Removed by Filter: 15 |\n> Buffers: shared hit=2 |\n> Planning Time: 0.290 ms |\n> Execution Time: 0.076 ms |\n>\n> In other words, I could write a pl/sql function with a nested loop instead of the problematic query, and it will be 1000 times faster.\n>\n> What am I missing?\n\nIn the fast case the 'Felhasználók%' part is known at query planning\ntime, so it can be a prefix search.\n\nIn the slow case, the planner doesn't know what that value will be, it\ncould be something that starts with '%' for example.\n\nAlso your logic looks a bit unsafe, the query you have would include\nfiles under all top-level folders with names starting with\nFelhasználók, so you could accidentally merge in files in folders\ncalled Felhasználókfoo and Felhasználókbar for example.\n\n\n", "msg_date": "Fri, 4 Feb 2022 09:59:55 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "I really think now that the query plan is wrong (or \"could be improved\" so\nto say). As far as I understand, the \"index only scan\" is essentially a\nsequential scan on the index data. In this specific case, where the filter\nis a \"begins with\" condition on a field that is the starting (and only)\ncolumn of an index, there is a much much better way to find out if there is\na row or not: lookup the closest value in the index and see if it begins\nwith the value. The operation of looking up the closest value in an index\nwould be much more efficient.\n\n\n> I don't understand how it is possible in the slow case Rows Removed by\nFilter: 792025 (returns 0 row) and in the second case Rows Removed by\nFilter: 15 (returns 1 row).\n\nPavel, I think it is because the scan found a suitable row at the beginning\nof the scan and stopped the scan. If you look at that plan you will see\nthat it uses a seq scan. It was fast by accident. 
:-)\n\nThe plan of that single-row version was changed to a normal index scan,\nafter I added the collation \"C\" index:\n\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect exists (\nselect id from media.oo_file of2 where relpath like 'this does not exist%'\n);\nQUERY PLAN\n |\n-------------------------------------------------------------------------------------------------------------------------------------+\nResult (cost=0.63..0.64 rows=1 width=1) (actual time=0.022..0.023 rows=1\nloops=1) |\n Output: $0\n |\n Buffers: shared hit=4\n |\n InitPlan 1 (returns $0)\n |\n -> Index Only Scan using test on media.oo_file of2 (cost=0.55..8.57\nrows=108 width=0) (actual time=0.018..0.018 rows=0 loops=1)|\n Index Cond: ((of2.relpath >= 'this does not exist'::text) AND\n(of2.relpath < 'this does not exisu'::text)) |\n Filter: (of2.relpath ~~ 'this does not exist%'::text)\n |\n Heap Fetches: 0\n |\n Buffers: shared hit=4\n |\nPlanning Time: 0.530 ms\n |\nExecution Time: 0.055 ms\n |\n\nI would expect for the same originally slow query with the has_file column,\nbut it does not happen. :-(\n\nI really think now that the query plan is wrong (or \"could be improved\" so to say). As far as I understand, the \"index only scan\" is essentially a sequential scan on the index data. In this specific case, where the filter is a \"begins with\" condition on a field that is the starting (and only) column of an index, there is a much much better way to find out if there is a row or not: lookup the closest value in the index and see if it begins with the value. The operation of looking up the closest value in an index would be much more efficient.> I don't understand how it is possible in the slow case Rows Removed by Filter: 792025 (returns 0 row) and in the second case Rows Removed by Filter: 15 (returns 1 row).Pavel, I think it is because the scan found a suitable row at the beginning of the scan and stopped the scan. If you look at that plan you will see that it uses a seq scan. It was fast by accident. 
:-)The plan of that single-row version was changed to a normal index scan, after I added the collation \"C\" index:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select exists ( \tselect id from media.oo_file of2  where relpath like 'this does not exist%');QUERY PLAN                                                                                                                           |-------------------------------------------------------------------------------------------------------------------------------------+Result  (cost=0.63..0.64 rows=1 width=1) (actual time=0.022..0.023 rows=1 loops=1)                                                   |  Output: $0                                                                                                                         |  Buffers: shared hit=4                                                                                                              |  InitPlan 1 (returns $0)                                                                                                            |    ->  Index Only Scan using test on media.oo_file of2  (cost=0.55..8.57 rows=108 width=0) (actual time=0.018..0.018 rows=0 loops=1)|          Index Cond: ((of2.relpath >= 'this does not exist'::text) AND (of2.relpath < 'this does not exisu'::text))                 |          Filter: (of2.relpath ~~ 'this does not exist%'::text)                                                                      |          Heap Fetches: 0                                                                                                            |          Buffers: shared hit=4                                                                                                      |Planning Time: 0.530 ms                                                                                                              |Execution Time: 0.055 ms                                                                                                             |I would expect for the same originally slow query with the has_file column, but it does not happen. :-(", "msg_date": "Fri, 4 Feb 2022 11:00:47 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "Nick Cleaton <[email protected]> ezt írta (időpont: 2022. febr. 4., P,\n11:00):\n\n>\n> In the fast case the 'Felhasználók%' part is known at query planning\n> time, so it can be a prefix search.\n>\n> In the slow case, the planner doesn't know what that value will be, it\n> could be something that starts with '%' for example.\n>\n>\nFirst of all, it CANNOT start with '%'. This is a fact and this fact can be\ndetermined by analyzing the query. 
Something that the query planner should\ndo, right?\n\nSecond argument: the same query is also slow with the ^@ operator...\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n\nselect id, title,\n(exists (select f2.id from\nmedia.oo_file f2\nwhere f2.relpath ^@ f.relpath )) as has_file\nfrom media.oo_folder f where f.parent_id is null\n\nQUERY PLAN\n |\n--------------------------------------------------------------------------------------------------------------------------------------------------+\nIndex Scan using oo_folder_idx_parent on media.oo_folder f\n (cost=0.42..449.38 rows=20 width=26) (actual time=1652.624..61636.232\nrows=45 loops=1)|\n Output: f.id, f.title, (SubPlan 1)\n |\n Index Cond: (f.parent_id IS NULL)\n |\n Buffers: shared hit=6672274\n |\n SubPlan 1\n |\n -> Index Only Scan using test on media.oo_file f2\n (cost=0.55..98067.11 rows=5379 width=0) (actual time=1369.665..1369.665\nrows=0 loops=45) |\n Filter: (f2.relpath ^@ f.relpath)\n |\n Rows Removed by Filter: 777428\n |\n Heap Fetches: 736418\n |\n Buffers: shared hit=6672234\n |\nPlanning Time: 0.346 ms\n |\nExecution Time: 61636.319 ms\n |\n\n\n> Also your logic looks a bit unsafe, the query you have would include\n> files under all top-level folders with names starting with\n> Felhasználók, so you could accidentally merge in files in folders\n> called Felhasználókfoo and Felhasználókbar for example.\n>\n\nForgive me, I typed in these examples for demonstration. The actual code\nuses relpath || '/%' and it avoids those cases.\n\nNick Cleaton <[email protected]> ezt írta (időpont: 2022. febr. 4., P, 11:00):\r\nIn the fast case the 'Felhasználók%' part is known at query planning\r\ntime, so it can be a prefix search.\n\r\nIn the slow case, the planner doesn't know what that value will be, it\r\ncould be something that starts with '%' for example.\nFirst of all, it CANNOT start with '%'. This is a fact and this fact can be determined by analyzing the query. 
Something that the query planner should do, right?Second argument: the same query is also slow with the ^@ operator...EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select id, title,\t(exists (select f2.id from\t\tmedia.oo_file f2\twhere f2.relpath ^@ f.relpath )) as has_filefrom media.oo_folder f where f.parent_id is nullQUERY PLAN                                                                                                                                        |--------------------------------------------------------------------------------------------------------------------------------------------------+Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..449.38 rows=20 width=26) (actual time=1652.624..61636.232 rows=45 loops=1)|  Output: f.id, f.title, (SubPlan 1)                                                                                                              |  Index Cond: (f.parent_id IS NULL)                                                                                                               |  Buffers: shared hit=6672274                                                                                                                     |  SubPlan 1                                                                                                                                       |    ->  Index Only Scan using test on media.oo_file f2  (cost=0.55..98067.11 rows=5379 width=0) (actual time=1369.665..1369.665 rows=0 loops=45)  |          Filter: (f2.relpath ^@ f.relpath)                                                                                                       |          Rows Removed by Filter: 777428                                                                                                          |          Heap Fetches: 736418                                                                                                                    |          Buffers: shared hit=6672234                                                                                                             |Planning Time: 0.346 ms                                                                                                                           |Execution Time: 61636.319 ms                                                                                                                      | \r\nAlso your logic looks a bit unsafe, the query you have would include\r\nfiles under all top-level folders with names starting with\r\nFelhasználók, so you could accidentally merge in files in folders\r\ncalled Felhasználókfoo and Felhasználókbar for example.Forgive me, I typed in these examples for demonstration. The actual code uses relpath || '/%' and it avoids those cases.", "msg_date": "Fri, 4 Feb 2022 11:05:13 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "> In the fast case the 'Felhasználók%' part is known at query planning\n>> time, so it can be a prefix search.\n>>\n>> In the slow case, the planner doesn't know what that value will be, it\n>> could be something that starts with '%' for example.\n>>\n>>\n> First of all, it CANNOT start with '%'. This is a fact and this fact can\n> be determined by analyzing the query. 
Something that the query planner\n> should do, right?\n>\n> Second argument: the same query is also slow with the ^@ operator...\n>\n\nOh I see, the query planner does not know that there will be no %\ncharacters in file and folder names.\n\nBut what is the solution then? It just seems wrong that I can speed up a\nquery 1000 times by replacing it with a nested loop in a pl/sql function :(\n\n\nIn the fast case the 'Felhasználók%' part is known at query planning\ntime, so it can be a prefix search.\n\nIn the slow case, the planner doesn't know what that value will be, it\ncould be something that starts with '%' for example.\nFirst of all, it CANNOT start with '%'. This is a fact and this fact can be determined by analyzing the query. Something that the query planner should do, right?Second argument: the same query is also slow with the ^@ operator...Oh I see, the query planner does not know that there will be no % characters in file and folder names.But what is the solution then? It just seems wrong that I can speed up a query 1000 times by replacing it with a nested loop in a pl/sql function :(", "msg_date": "Fri, 4 Feb 2022 11:09:52 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": ">\n>\n>>\n>> First of all, it CANNOT start with '%'. This is a fact and this fact can\n>> be determined by analyzing the query. Something that the query planner\n>> should do, right?\n>>\n>> Second argument: the same query is also slow with the ^@ operator...\n>>\n>\n> Oh I see, the query planner does not know that there will be no %\n> characters in file and folder names.\n>\n> On second thought, it does not explain why it is also slow with the ^@\noperator.\n\nFirst of all, it CANNOT start with '%'. This is a fact and this fact can be determined by analyzing the query. Something that the query planner should do, right?Second argument: the same query is also slow with the ^@ operator...Oh I see, the query planner does not know that there will be no % characters in file and folder names.On second thought, it does not explain why it is also slow with the ^@ operator.", "msg_date": "Fri, 4 Feb 2022 11:38:53 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 10:09, Les <[email protected]> wrote:\n>\n> Oh I see, the query planner does not know that there will be no % characters in file and folder names.\n>\n> But what is the solution then? It just seems wrong that I can speed up a query 1000 times by replacing it with a nested loop in a pl/sql function :(\n\nYou don't need a nested loop, doing it in two stages in pl/pgsql would\nbe enough I think, first get the folder name and then construct a new\nquery using it as a constant.\n\nI'd use SELECT FOR SHARE when getting the folder name, so that no\nother process can change it underneath you before you run your second\nquery.\n\nWith the ^@ operator, my guess is that because the planner knows\nnothing about the folder name value it could be the empty string,\nwhich would be a prefix of everything.\n\n\n", "msg_date": "Fri, 4 Feb 2022 10:57:13 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "Nick Cleaton <[email protected]> ezt írta (időpont: 2022. febr. 
4., P,\n11:57):\n\n>\n> With the ^@ operator, my guess is that because the planner knows\n> nothing about the folder name value it could be the empty string,\n> which would be a prefix of everything.\n>\n\nI think I could narrow down the problem to the simplest query possible.\n\nThe \"title could be empty\" does not hold for this:\n\nCREATE index test ON media.oo_file (relpath COLLATE \"C\");\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath ^@ 'Természettudomány' limit 1\n\nLimit (cost=0.00..2.70 rows=1 width=8) (actual time=14445.559..14445.561\nrows=0 loops=1)\n Output: id\n Buffers: shared hit=22288 read=108975\n -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=53574\nwidth=8) (actual time=14445.555..14445.556 rows=0 loops=1)\n Output: id\n Filter: (fi.is_active AND (fi.relpath ^@ 'Természettudomány'::text))\n Rows Removed by Filter: 1075812\n Buffers: shared hit=22288 read=108975\nPlanning Time: 0.398 ms\nExecution Time: 14445.593 ms\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath like 'Természettudomány%' limit 1\n\nLimit (cost=0.00..2.70 rows=1 width=8) (actual time=11222.280..11222.282\nrows=0 loops=1)\n Output: id\n Buffers: shared hit=22320 read=108943\n -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=53574\nwidth=8) (actual time=11222.278..11222.279 rows=0 loops=1)\n Output: id\n Filter: (fi.is_active AND (fi.relpath ~~\n'Természettudomány%'::text))\n Rows Removed by Filter: 1075812\n Buffers: shared hit=22320 read=108943\nPlanning Time: 0.488 ms\nExecution Time: 11222.307 ms\n\nIt is using seq scan for both cases. This is definitely wrong!\n\nOne of my collaguage has noticed that the LIKE query uses index scan for\nsome of the letters:\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath like 'A%' limit 1;\nLimit (cost=0.55..60.85 rows=1 width=8) (actual time=6.508..6.509 rows=0\nloops=1)\n Output: id\n Buffers: shared hit=2776\n -> Index Scan using test on media.oo_file fi (cost=0.55..4583.29\nrows=76 width=8) (actual time=6.506..6.507 rows=0 loops=1)\n Output: id\n Index Cond: ((fi.relpath >= 'A'::text) AND (fi.relpath < 'B'::text))\n Filter: (fi.is_active AND (fi.relpath ~~ 'A%'::text))\n Rows Removed by Filter: 3784\n Buffers: shared hit=2776\nPlanning Time: 0.543 ms\nExecution Time: 6.560 ms\n\nActually, the number of files per starting letter is:\n\nselect substr(relpath, 0, 2), count(*)\nfrom media.oo_file\ngroup by substr(relpath, 0, 2)\norder by count(*) desc\nsubstr|count |\n------+------+\nO |386087|\nF |236752|\nN |167171|\nÓ |111479|\nT |109786|\nM | 34348|\nP | 19878|\nL | 5657|\nA | 3784|\nI | 869|\nC | 1|\n\nPostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan for\nA, I, C letters (with the \"like\" query).\n\nThere might be a problem with the planner here, because I think that using\nan index scan will always be faster than a seq scan. The number of rows for\nthe prefix should not matter at all, because we are trying to get the first\nmatching row only. For some reason it decides between seq/index scan based\non the number of rows stored in some stats. At least it seems that way.\n\nIf I could tell the planner to use the index, I think my problem would be\nsolved. 
Is there a way to put optimizer hints into the query?\n\nThere could be some improvement made to the @^ operator too, because it\nalways uses seq scan, no matter what.\n\nWhat do you think?\n\nNick Cleaton <[email protected]> ezt írta (időpont: 2022. febr. 4., P, 11:57):\n\nWith the ^@ operator, my guess is that because the planner knows\nnothing about the folder name value it could be the empty string,\nwhich would be a prefix of everything.I think I could narrow down the problem to the simplest query possible. The \"title could be empty\" does not hold for this:CREATE index test ON media.oo_file (relpath COLLATE \"C\");EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  and fi.relpath ^@ 'Természettudomány' limit 1Limit  (cost=0.00..2.70 rows=1 width=8) (actual time=14445.559..14445.561 rows=0 loops=1)  Output: id  Buffers: shared hit=22288 read=108975  ->  Seq Scan on media.oo_file fi  (cost=0.00..144710.65 rows=53574 width=8) (actual time=14445.555..14445.556 rows=0 loops=1)        Output: id        Filter: (fi.is_active AND (fi.relpath ^@ 'Természettudomány'::text))        Rows Removed by Filter: 1075812        Buffers: shared hit=22288 read=108975Planning Time: 0.398 msExecution Time: 14445.593 msEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  and fi.relpath like 'Természettudomány%' limit 1Limit  (cost=0.00..2.70 rows=1 width=8) (actual time=11222.280..11222.282 rows=0 loops=1)  Output: id  Buffers: shared hit=22320 read=108943  ->  Seq Scan on media.oo_file fi  (cost=0.00..144710.65 rows=53574 width=8) (actual time=11222.278..11222.279 rows=0 loops=1)        Output: id        Filter: (fi.is_active AND (fi.relpath ~~ 'Természettudomány%'::text))        Rows Removed by Filter: 1075812        Buffers: shared hit=22320 read=108943Planning Time: 0.488 msExecution Time: 11222.307 msIt is using seq scan for both cases. This is definitely wrong! One of my collaguage has noticed that the LIKE query uses index scan for some of the letters:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  and fi.relpath like 'A%' limit 1;Limit  (cost=0.55..60.85 rows=1 width=8) (actual time=6.508..6.509 rows=0 loops=1)  Output: id  Buffers: shared hit=2776  ->  Index Scan using test on media.oo_file fi  (cost=0.55..4583.29 rows=76 width=8) (actual time=6.506..6.507 rows=0 loops=1)        Output: id        Index Cond: ((fi.relpath >= 'A'::text) AND (fi.relpath < 'B'::text))        Filter: (fi.is_active AND (fi.relpath ~~ 'A%'::text))        Rows Removed by Filter: 3784        Buffers: shared hit=2776Planning Time: 0.543 msExecution Time: 6.560 msActually, the number of files per starting letter is:select substr(relpath, 0, 2), count(*) from media.oo_filegroup by substr(relpath, 0, 2)order by count(*) descsubstr|count |------+------+O     |386087|F     |236752|N     |167171|Ó     |111479|T     |109786|M     | 34348|P     | 19878|L     |  5657|A     |  3784|I     |   869|C     |     1|PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan for A, I, C letters (with the \"like\" query). There might be a problem with the planner here, because I think that using an index scan will always be faster than a seq scan. The number of rows for the prefix should not matter at all, because we are trying to get the first matching row only. For some reason it decides between seq/index scan based on the number of rows stored in some stats. 
At least it seems that way.If I could tell the planner to use the index, I think my problem would be solved. Is there a way to put optimizer hints into the query?There could be some improvement made to the @^ operator too, because it always uses seq scan, no matter what.What do you think?", "msg_date": "Fri, 4 Feb 2022 13:27:28 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 12:27, Les <[email protected]> wrote:\n\n> PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan for A, I, C letters (with the \"like\" query).\n\nThat's interesting.\n\nDoes it help if you create an additional index on relpath with the\ntext_pattern_ops modifier, e.g.\n\nCREATE INDEX ... USING btree (relpath text_pattern_ops);\n\n\n", "msg_date": "Fri, 4 Feb 2022 12:59:21 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "> > PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan\n> for A, I, C letters (with the \"like\" query).\n>\n> That's interesting.\n>\n> Does it help if you create an additional index on relpath with the\n> text_pattern_ops modifier, e.g.\n>\n> CREATE INDEX ... USING btree (relpath text_pattern_ops);\n>\n\nIt does not help. Details below. (PostgreSQL version 12.8)\n\nCREATE index test ON media.oo_file (relpath COLLATE \"C\");\nCREATE INDEX test2 ON media.oo_file USING btree (relpath text_pattern_ops);\nCREATE INDEX test3 ON media.oo_file USING btree (relpath collate \"C\"\ntext_pattern_ops);\n-- letter \"A\" ^@ operator -> slow seq scan\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath ^@ 'A' limit 1;\nQUERY PLAN\n |\n----------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.00..1904.09 rows=1 width=8) (actual\ntime=10779.585..10779.587 rows=0 loops=1) |\n Output: id\n |\n Buffers: shared hit=9960 read=121303\n |\n -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=76 width=8)\n(actual time=10779.582..10779.583 rows=0 loops=1)|\n Output: id\n |\n Filter: (fi.is_active AND (fi.relpath ^@ 'A'::text))\n |\n Rows Removed by Filter: 1075812\n |\n Buffers: shared hit=9960 read=121303\n |\nPlanning Time: 0.428 ms\n |\nExecution Time: 10779.613 ms\n |\n\n-- letter 'A' like expression index scan fast\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath like 'A%' limit 1;\nQUERY PLAN\n |\n-------------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.55..60.85 rows=1 width=8) (actual time=7.047..7.048 rows=0\nloops=1) |\n Output: id\n |\n Buffers: shared hit=2776\n |\n -> Index Scan using test on media.oo_file fi (cost=0.55..4583.29\nrows=76 width=8) (actual time=7.045..7.045 rows=0 loops=1)|\n Output: id\n |\n Index Cond: ((fi.relpath >= 'A'::text) AND (fi.relpath <\n'B'::text)) |\n Filter: (fi.is_active AND (fi.relpath ~~ 'A%'::text))\n |\n Rows Removed by Filter: 3784\n |\n Buffers: shared hit=2776\n |\nPlanning Time: 0.937 ms\n |\nExecution Time: 7.091 ms\n |\n\n\n-- letter 'T' like expression, seq scan slow\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath like 
'Természettudomány%' limit 1;\nQUERY PLAN\n |\n-----------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.00..2.70 rows=1 width=8) (actual time=9842.935..9842.938\nrows=0 loops=1) |\n Output: id\n |\n Buffers: shared hit=10024 read=121239\n |\n -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=53574\nwidth=8) (actual time=9842.933..9842.934 rows=0 loops=1)|\n Output: id\n |\n Filter: (fi.is_active AND (fi.relpath ~~\n'Természettudomány%'::text))\n |\n Rows Removed by Filter: 1075812\n |\n Buffers: shared hit=10024 read=121239\n |\nPlanning Time: 0.975 ms\n |\nExecution Time: 9842.962 ms\n |\n", "msg_date": "Fri, 4 Feb 2022 14:07:59 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 13:07, Les <[email protected]> wrote:\n>\n>>\n>> > PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan for A, I, C letters (with the \"like\" query).\n>>\n>> That's interesting.\n>>\n>> Does it help if you create an additional index on relpath with the\n>> text_pattern_ops modifier, e.g.\n>>\n>> CREATE INDEX ... 
USING btree (relpath text_pattern_ops);\n>\n>\n> It does not help.\n\nWhat if you try applying the C collation to the values from the table:\n\nwhere fi.is_active and fi.relpath collate \"C\" ^@ 'A'\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:19:04 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "> >\n> > It does not help.\n>\n> What if you try applying the C collation to the values from the table:\n>\n> where fi.is_active and fi.relpath collate \"C\" ^@ 'A'\n>\n\nSlow\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active and fi.relpath collate \"C\" ^@ 'A' limit 1;\nQUERY PLAN\n |\n--------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.00..1904.09 rows=1 width=8) (actual time=3837.338..3837.340\nrows=0 loops=1) |\n Output: id\n |\n Buffers: shared hit=9355 read=121908\n |\n -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=76 width=8)\n(actual time=3837.336..3837.336 rows=0 loops=1)|\n Output: id\n |\n Filter: (fi.is_active AND ((fi.relpath)::text ^@ 'A'::text))\n |\n Rows Removed by Filter: 1075812\n |\n Buffers: shared hit=9355 read=121908\n |\nPlanning Time: 0.391 ms\n |\nExecution Time: 3837.364 ms\n |\n\n\r\n>\r\n> It does not help.\n\r\nWhat if you try applying the C collation to the values from the table:\n\r\nwhere fi.is_active  and fi.relpath collate \"C\" ^@ 'A'SlowEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  and fi.relpath collate \"C\" ^@ 'A' limit 1;QUERY PLAN                                                                                                                |--------------------------------------------------------------------------------------------------------------------------+Limit  (cost=0.00..1904.09 rows=1 width=8) (actual time=3837.338..3837.340 rows=0 loops=1)                                |  Output: id                                                                                                              |  Buffers: shared hit=9355 read=121908                                                                                    |  ->  Seq Scan on media.oo_file fi  (cost=0.00..144710.65 rows=76 width=8) (actual time=3837.336..3837.336 rows=0 loops=1)|        Output: id                                                                                                        |        Filter: (fi.is_active AND ((fi.relpath)::text ^@ 'A'::text))                                                      |        Rows Removed by Filter: 1075812                                                                                   |        Buffers: shared hit=9355 read=121908                                                                              |Planning Time: 0.391 ms                                                                                                   |Execution Time: 3837.364 ms                                                                                               |", "msg_date": "Fri, 4 Feb 2022 14:21:51 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "Hi Les,\n\nI have reviewed the whole thread, and I do not see usage of gist or gin\nindexes. 
Have you tried using Gist or GIN indexes instead of a normal\nb-tree?\n\nB-trees are a good option when your search is simple(e.g. =, >, <). The\noperators you are using are \"like\" or \"^@\", which fall into a full-text\nsearch category; in such scenarios, b-tree may not be effective every time.\nHence, it may not deliver the result in the expected time-frame. I\nrecommend you to try creating a Gist or a GIN index here.\n\n\nRegards,\nNinad\n\n\nOn Fri, Feb 4, 2022 at 6:52 PM Les <[email protected]> wrote:\n\n>\n> >\n>> > It does not help.\n>>\n>> What if you try applying the C collation to the values from the table:\n>>\n>> where fi.is_active and fi.relpath collate \"C\" ^@ 'A'\n>>\n>\n> Slow\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> select fi.id from media.oo_file fi\n> where fi.is_active and fi.relpath collate \"C\" ^@ 'A' limit 1;\n> QUERY PLAN\n> |\n>\n> --------------------------------------------------------------------------------------------------------------------------+\n> Limit (cost=0.00..1904.09 rows=1 width=8) (actual time=3837.338..3837.340\n> rows=0 loops=1) |\n> Output: id\n> |\n> Buffers: shared hit=9355 read=121908\n> |\n> -> Seq Scan on media.oo_file fi (cost=0.00..144710.65 rows=76 width=8)\n> (actual time=3837.336..3837.336 rows=0 loops=1)|\n> Output: id\n> |\n> Filter: (fi.is_active AND ((fi.relpath)::text ^@ 'A'::text))\n> |\n> Rows Removed by Filter: 1075812\n> |\n> Buffers: shared hit=9355 read=121908\n> |\n> Planning Time: 0.391 ms\n> |\n> Execution Time: 3837.364 ms\n> |\n>\n\nHi Les,I have reviewed the whole thread, and I do not see usage of gist or gin indexes. Have you tried using Gist or GIN indexes instead of a normal b-tree?B-trees are a good option when your search is simple(e.g. =, >, <). The operators you are using are \"like\" or \"^@\", which fall into a full-text search category; in such scenarios, b-tree may not be effective every time. Hence, it may not deliver the result in the expected time-frame. 
I recommend you to try creating a Gist or a GIN index here.Regards,NinadOn Fri, Feb 4, 2022 at 6:52 PM Les <[email protected]> wrote:\r\n>\r\n> It does not help.\n\r\nWhat if you try applying the C collation to the values from the table:\n\r\nwhere fi.is_active  and fi.relpath collate \"C\" ^@ 'A'SlowEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  and fi.relpath collate \"C\" ^@ 'A' limit 1;QUERY PLAN                                                                                                                |--------------------------------------------------------------------------------------------------------------------------+Limit  (cost=0.00..1904.09 rows=1 width=8) (actual time=3837.338..3837.340 rows=0 loops=1)                                |  Output: id                                                                                                              |  Buffers: shared hit=9355 read=121908                                                                                    |  ->  Seq Scan on media.oo_file fi  (cost=0.00..144710.65 rows=76 width=8) (actual time=3837.336..3837.336 rows=0 loops=1)|        Output: id                                                                                                        |        Filter: (fi.is_active AND ((fi.relpath)::text ^@ 'A'::text))                                                      |        Rows Removed by Filter: 1075812                                                                                   |        Buffers: shared hit=9355 read=121908                                                                              |Planning Time: 0.391 ms                                                                                                   |Execution Time: 3837.364 ms                                                                                               |", "msg_date": "Fri, 4 Feb 2022 19:03:16 +0530", "msg_from": "Ninad Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 13:21, Les <[email protected]> wrote:\n>\n>> What if you try applying the C collation to the values from the table:\n>>\n>> where fi.is_active and fi.relpath collate \"C\" ^@ 'A'\n>\n>\n> Slow\n\nWhat about this:\n\nfi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:57:08 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" 
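For reference, here is a minimal sketch of the range-based rewrite suggested just above, wrapped in a helper so the prefix can be passed in as a parameter (the case where the planner knows nothing about the value). The table, column and the "C"-collated index come from this thread; the function name first_file_with_prefix and the bigint return type are assumptions, not something from the original posts. Note also that chr(255) is not a safe upper bound for UTF-8 data under COLLATE "C" (characters above U+00FF sort after it), so treat this strictly as a sketch:

-- Sketch only: relies on the index created earlier in the thread,
-- CREATE INDEX test ON media.oo_file (relpath COLLATE "C");
CREATE OR REPLACE FUNCTION first_file_with_prefix(p_prefix text)
RETURNS bigint
LANGUAGE sql STABLE AS
$$
    SELECT fi.id
    FROM media.oo_file fi
    WHERE fi.is_active
      AND fi.relpath >= (p_prefix COLLATE "C")
      AND fi.relpath <  ((p_prefix || chr(255)) COLLATE "C")  -- crude upper bound, see caveat above
    LIMIT 1;
$$;

-- Example call:
-- SELECT first_file_with_prefix('Természettudomány');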
}, { "msg_contents": "> Slow\n>\n> What about this:\n>\n> fi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")\n>\n\nIt uses index scan.\n\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active\nand fi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")\nlimit 1;\nQUERY PLAN\n |\n---------------------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.55..6.12 rows=1 width=8) (actual time=1623.069..1623.070\nrows=0 loops=1) |\n Output: id\n |\n Buffers: shared hit=2439 read=1994 dirtied=1107\n |\n -> Index Scan using test on media.oo_file fi (cost=0.55..5732.47\nrows=1029 width=8) (actual time=1623.067..1623.067 rows=0 loops=1)|\n Output: id\n |\n Index Cond: ((fi.relpath >= 'A'::text COLLATE \"C\") AND (fi.relpath\n<= 'A '::text COLLATE \"C\")) |\n Filter: fi.is_active\n |\n Rows Removed by Filter: 3784\n |\n Buffers: shared hit=2439 read=1994 dirtied=1107\n |\nPlanning Time: 18.817 ms\n |\nExecution Time: 1623.104 ms\n |\n\nAlthough the same with 'Természettudomány' uses seq scan:\n\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active\nand fi.relpath between\n('Természettudomány' collate \"C\")\nand ('Természettudomány'||chr(255) collate \"C\")\nlimit 1;\n\nQUERY PLAN\n |\n---------------------------------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.00..2.13 rows=1 width=8) (actual time=7521.531..7521.532\nrows=0 loops=1) |\n Output: id\n |\n Buffers: shared hit=17018 read=150574\n |\n -> Seq Scan on media.oo_file fi (cost=0.00..188195.39 rows=88290\nwidth=8) (actual time=7521.528..7521.529 rows=0 loops=1)\n |\n Output: id\n |\n Filter: (fi.is_active AND (fi.relpath >= 'Természettudomány'::text\nCOLLATE \"C\") AND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))|\n Rows Removed by Filter: 1075812\n |\n Buffers: shared hit=17018 read=150574\n |\nPlanning Time: 8.918 ms\n |\nExecution Time: 7521.560 ms\n |\n\n\r\n> Slow\n\r\nWhat about this:\n\r\nfi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")It uses index scan.EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  \tand fi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")\tlimit 1;QUERY PLAN                                                                                                                             |---------------------------------------------------------------------------------------------------------------------------------------+Limit  (cost=0.55..6.12 rows=1 width=8) (actual time=1623.069..1623.070 rows=0 loops=1)                                                |  Output: id                                                                                                                           |  Buffers: shared hit=2439 read=1994 dirtied=1107                                                                                      |  ->  Index Scan using test on media.oo_file fi  (cost=0.55..5732.47 rows=1029 width=8) (actual time=1623.067..1623.067 rows=0 loops=1)|        Output: id                                                                                                                     |        Index Cond: ((fi.relpath >= 'A'::text COLLATE \"C\") AND (fi.relpath <= 'A '::text COLLATE \"C\"))                                 
|        Filter: fi.is_active                                                                                                           |        Rows Removed by Filter: 3784                                                                                                   |        Buffers: shared hit=2439 read=1994 dirtied=1107                                                                                |Planning Time: 18.817 ms                                                                                                               |Execution Time: 1623.104 ms                                                                                                            |Although the same with 'Természettudomány' uses seq scan:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  \tand fi.relpath between \t('Természettudomány' collate \"C\") \tand ('Természettudomány'||chr(255) collate \"C\")\tlimit 1;QUERY PLAN                                                                                                                                         |---------------------------------------------------------------------------------------------------------------------------------------------------+Limit  (cost=0.00..2.13 rows=1 width=8) (actual time=7521.531..7521.532 rows=0 loops=1)                                                            |  Output: id                                                                                                                                       |  Buffers: shared hit=17018 read=150574                                                                                                            |  ->  Seq Scan on media.oo_file fi  (cost=0.00..188195.39 rows=88290 width=8) (actual time=7521.528..7521.529 rows=0 loops=1)                      |        Output: id                                                                                                                                 |        Filter: (fi.is_active AND (fi.relpath >= 'Természettudomány'::text COLLATE \"C\") AND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))|        Rows Removed by Filter: 1075812                                                                                                            |        Buffers: shared hit=17018 read=150574                                                                                                      |Planning Time: 8.918 ms                                                                                                                            |Execution Time: 7521.560 ms                                                                                                                        |", "msg_date": "Fri, 4 Feb 2022 15:07:10 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" 
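One way to see whether the seq scan above is a costing decision rather than a missing-index problem is to penalise sequential scans for a single session and compare the plans, and to look at the statistics the estimate is based on. A possible experiment (session-level only, not a production setting; table and column names as in this thread):

BEGIN;
SET LOCAL enable_seqscan = off;  -- adds a large cost penalty to seq scans for this transaction only
EXPLAIN (ANALYZE, BUFFERS)
SELECT fi.id
FROM media.oo_file fi
WHERE fi.is_active
  AND fi.relpath >= ('Természettudomány' COLLATE "C")
  AND fi.relpath <  (('Természettudomány' || chr(255)) COLLATE "C")
LIMIT 1;
ROLLBACK;

-- The row estimate that drives the seq-scan choice comes from the column statistics:
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'media'
  AND tablename  = 'oo_file'
  AND attname    = 'relpath';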
}, { "msg_contents": "On Fri, 4 Feb 2022 at 14:07, Les <[email protected]> wrote:\n>\n>\n>\n>> > Slow\n>>\n>> What about this:\n>>\n>> fi.relpath between ('A' collate \"C\") and ('A'||chr(255) collate \"C\")\n>\n>\n> It uses index scan.\n\n> Although the same with 'Természettudomány' uses seq scan:\n>\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> select fi.id from media.oo_file fi\n> where fi.is_active\n> and fi.relpath between\n> ('Természettudomány' collate \"C\")\n> and ('Természettudomány'||chr(255) collate \"C\")\n> limit 1;\n>\n> QUERY PLAN |\n> ---------------------------------------------------------------------------------------------------------------------------------------------------+\n> Limit (cost=0.00..2.13 rows=1 width=8) (actual time=7521.531..7521.532 rows=0 loops=1) |\n> Output: id |\n> Buffers: shared hit=17018 read=150574 |\n> -> Seq Scan on media.oo_file fi (cost=0.00..188195.39 rows=88290 width=8) (actual time=7521.528..7521.529 rows=0 loops=1) |\n> Output: id |\n> Filter: (fi.is_active AND (fi.relpath >= 'Természettudomány'::text COLLATE \"C\") AND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))|\n> Rows Removed by Filter: 1075812 |\n> Buffers: shared hit=17018 read=150574 |\n> Planning Time: 8.918 ms |\n> Execution Time: 7521.560 ms\n\nThat may be because it's expecting to get 88290 rows from the\nsequential scan, and the\"limit 1\" means it expects sequential scan to\nbe fast because on average it will only need to scan 1/88290 of the\ntable before it finds a matching row, then it can stop.\n\nTry it without the \"limit 1\"\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:34:52 +0000", "msg_from": "Nick Cleaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "Les schrieb am 04.02.2022 um 10:11:\n\n> My task is to write a query that tells if a folder has any active file inside it - directly or in subfolders. Here is the query for that:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n>\n> select id, title,\n> (exists (select f2.id <http://f2.id> from media.oo_file f2 where f2.relpath like f.relpath || '%')) as has_file\n> from media.oo_folder f where f.parent_id is null\n>\n> QUERY PLAN |\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------+\n> Index Scan using oo_folder_idx_parent on media.oo_folder f (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969 rows=45 loops=1) |\n> Output: f.id <http://f.id>, f.title, (SubPlan 1) |\n> Index Cond: (f.parent_id IS NULL) |\n> Buffers: shared hit=7014170 |\n> SubPlan 1 |\n> -> Index Only Scan using oo_file_idx_relpath on media.oo_file f2 (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)|\n> Filter: (f2.relpath ~~ (f.relpath || '%'::text)) |\n> Rows Removed by Filter: 792025 |\n> Heap Fetches: 768960 |\n> Buffers: shared hit=7014130 |\n\nIn addition to the collation tweaks, I wonder if using a lateral join might result in a more efficient plan:\n\n select id, title, c.id is not null as has_path\n from media.oo_folder f\n left join lateral (\n select f2.id\n from media.oo_file f2\n where f2.relpath like f.relpath || '%'\n limit 1\n ) c on true\n where f.parent_id is null\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:42:37 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" 
}, { "msg_contents": ">\n> That may be because it's expecting to get 88290 rows from the\n> sequential scan, and the\"limit 1\" means it expects sequential scan to\n> be fast because on average it will only need to scan 1/88290 of the\n> table before it finds a matching row, then it can stop.\n>\n\n\nWe are looking for a single row. With an index scan, it is always much\nfaster to find a single row. No seq scan can be faster \"on average\", when\nyou are looking for a single row. Am I wrong?\n\n> Try it without the \"limit 1\"\n\n\nWithout the limit it uses bitmap heap scan. Unbelievable!\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nselect fi.id from media.oo_file fi\nwhere fi.is_active\nand fi.relpath between\n('Természettudomány' collate \"C\")\nand ('Természettudomány'||chr(255) collate \"C\");\n\nQUERY PLAN\n |\n--------------------------------------------------------------------------------------------------------------------------------------+\nBitmap Heap Scan on media.oo_file fi (cost=10480.10..140065.96 rows=70010\nwidth=8) (actual time=9757.917..9757.920 rows=0 loops=1) |\n Output: id\n |\n Recheck Cond: ((fi.relpath >= 'Természettudomány'::text COLLATE \"C\") AND\n(fi.relpath <= 'Természettudomány '::text COLLATE \"C\")) |\n Filter: fi.is_active\n |\n Rows Removed by Filter: 85207\n |\n Heap Blocks: exact=24954\n |\n Buffers: shared hit=197 read=26531\n |\n -> Bitmap Index Scan on test (cost=0.00..10462.59 rows=99404 width=0)\n(actual time=425.571..425.572 rows=85207 loops=1) |\n Index Cond: ((fi.relpath >= 'Természettudomány'::text COLLATE \"C\")\nAND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))|\n Buffers: shared hit=6 read=1768\n |\nPlanning Time: 1.145 ms\n |\nJIT:\n |\n Functions: 6\n |\n Options: Inlining false, Optimization false, Expressions true, Deforming\ntrue |\n Timing: Generation 2.295 ms, Inlining 0.000 ms, Optimization 1.142 ms,\nEmission 11.632 ms, Total 15.070 ms |\nExecution Time: 9760.361 ms\n |\n\n\n\n\n\n>\n>\n\n\n\r\nThat may be because it's expecting to get 88290 rows from the\r\nsequential scan, and the\"limit 1\" means it expects sequential scan to\r\nbe fast because on average it will only need to scan 1/88290 of the\r\ntable before it finds a matching row, then it can stop.We are looking for a single row. With an index scan, it is always much faster to find a single row. No seq scan can be faster \"on average\", when you are looking for a single row. Am I wrong?> Try it without the \"limit 1\" Without the limit it uses bitmap heap scan. 
Unbelievable!EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)select fi.id from media.oo_file fi\twhere fi.is_active  \tand fi.relpath between \t('Természettudomány' collate \"C\") \tand ('Természettudomány'||chr(255) collate \"C\");QUERY PLAN                                                                                                                            |--------------------------------------------------------------------------------------------------------------------------------------+Bitmap Heap Scan on media.oo_file fi  (cost=10480.10..140065.96 rows=70010 width=8) (actual time=9757.917..9757.920 rows=0 loops=1)   |  Output: id                                                                                                                          |  Recheck Cond: ((fi.relpath >= 'Természettudomány'::text COLLATE \"C\") AND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))    |  Filter: fi.is_active                                                                                                                |  Rows Removed by Filter: 85207                                                                                                       |  Heap Blocks: exact=24954                                                                                                            |  Buffers: shared hit=197 read=26531                                                                                                  |  ->  Bitmap Index Scan on test  (cost=0.00..10462.59 rows=99404 width=0) (actual time=425.571..425.572 rows=85207 loops=1)           |        Index Cond: ((fi.relpath >= 'Természettudomány'::text COLLATE \"C\") AND (fi.relpath <= 'Természettudomány '::text COLLATE \"C\"))|        Buffers: shared hit=6 read=1768                                                                                               |Planning Time: 1.145 ms                                                                                                               |JIT:                                                                                                                                  |  Functions: 6                                                                                                                        |  Options: Inlining false, Optimization false, Expressions true, Deforming true                                                       |  Timing: Generation 2.295 ms, Inlining 0.000 ms, Optimization 1.142 ms, Emission 11.632 ms, Total 15.070 ms                          |Execution Time: 9760.361 ms                                                                                                           |", "msg_date": "Fri, 4 Feb 2022 15:43:55 +0100", "msg_from": "Les <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terribly slow query with very good plan?" }, { "msg_contents": "On Fri, 4 Feb 2022 at 14:41, Les <[email protected]> wrote:\n\n> Hello,\n>\n> The \"replath\" field contains the path of the file/folder. For example:\n> \"/folder1/folder2/folder3/filename4.ext5\". The replath field is managed by\n> triggers. There are about 1M rows for files and 600K folder rows in the\n> database. 
The files are well distributed between folders, and there are\n> only 45 root folders ( parent_id is null)\n>\n> Replying in a separate thread, just in case this does not help.\nIt seems you already store relpath but as text via triggers, will the\n'ltree' extension be of any help to get your results faster (to help\nimplement path enumeration, but has a limitation of 65K objects)\nhttps://www.postgresql.org/docs/current/ltree.html\nhttps://github.com/postgres/postgres/blob/master/contrib/ltree/sql/ltree.sql\n(test cases for elaborate examples)\nhttps://patshaughnessy.net/2017/12/14/manipulating-trees-using-sql-and-the-postgres-ltree-extension\n\nalso, another pattern i came across was via closure tables\nhttps://www.slideshare.net/billkarwin/models-for-hierarchical-data\nhttps://stackoverflow.com/questions/19834400/what-is-the-simplest-way-to-save-a-file-tree-in-a-postgres-database/19835575\n\nex. (from the doc)\npostgres=# drop table test;\nDROP TABLE\npostgres=# CREATE TABLE test (path ltree);\nINSERT INTO test VALUES ('Top');\nINSERT INTO test VALUES ('Top.Science');\nINSERT INTO test VALUES ('Top.Science.Astronomy');\nINSERT INTO test VALUES ('Top.Science.Astronomy.Astrophysics');\nINSERT INTO test VALUES ('Top.Science.Astronomy.Cosmology');\nINSERT INTO test VALUES ('Top.Hobbies');\nINSERT INTO test VALUES ('Top.Hobbies.Amateurs_Astronomy');\nINSERT INTO test VALUES ('Top.Collections');\nINSERT INTO test VALUES ('Top.Collections.Pictures');\nINSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy');\nINSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Stars');\nINSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Galaxies');\nINSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Astronauts');\nCREATE INDEX path_gist_idx ON test USING GIST (path);\nCREATE INDEX path_idx ON test USING BTREE (path); -- we can even make this\nunique index as there would only be one path\n-- we can also create partial indexes depending on the query pattern\n\n#my focus is no rows filtered (hence less wasted operations)\npostgres=# analyze test;\nANALYZE\npostgres=# explain analyze select exists (SELECT 1 FROM test WHERE path ~\n'*.Stars');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Result (cost=1.16..1.17 rows=1 width=1) (actual time=0.010..0.010 rows=1\nloops=1)\n InitPlan 1 (returns $0)\n -> Seq Scan on test (cost=0.00..1.16 rows=1 width=0) (actual\ntime=0.008..0.009 rows=1 loops=1)\n Filter: (path ~ '*.Stars'::lquery)\n Rows Removed by Filter: 10\n Planning Time: 0.248 ms\n Execution Time: 0.023 ms\n(7 rows)\n\npostgres=# set enable_seqscan TO 0; -- small table, hence\nSET\npostgres=# explain analyze select exists (SELECT 1 FROM test WHERE path ~\n'*.Stars');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Result (cost=8.15..8.16 rows=1 width=1) (actual time=0.020..0.021 rows=1\nloops=1)\n InitPlan 1 (returns $0)\n -> Index Scan using path_gist_idx on test (cost=0.13..8.15 rows=1\nwidth=0) (actual time=0.019..0.019 rows=1 loops=1)\n Index Cond: (path ~ '*.Stars'::lquery)\n Planning Time: 0.079 ms\n Execution Time: 0.037 ms\n(6 rows)\n\nPlease ignore, if not relevant to the discussion.\n\n-- \nThanks,\nVijay\nLinkedIn - Vijaykumar Jain <https://www.linkedin.com/in/vijaykumarjain/>\n\nOn Fri, 4 Feb 2022 at 14:41, Les <[email protected]> wrote:Hello,The \"replath\" field contains the path of the file/folder. 
For example: \"/folder1/folder2/folder3/filename4.ext5\".  The replath field is managed by triggers. There are about 1M rows for files and 600K folder rows in the database. The files are well distributed between folders, and there are only 45 root folders ( parent_id is null)Replying in a separate thread, just in case this does not help.It seems you already store relpath but as text via triggers, will the 'ltree' extension be of any help to get your results faster (to help implement path enumeration, but has a limitation of 65K objects)https://www.postgresql.org/docs/current/ltree.htmlhttps://github.com/postgres/postgres/blob/master/contrib/ltree/sql/ltree.sql  (test cases for elaborate examples)https://patshaughnessy.net/2017/12/14/manipulating-trees-using-sql-and-the-postgres-ltree-extensionalso, another pattern i came across was via closure tableshttps://www.slideshare.net/billkarwin/models-for-hierarchical-datahttps://stackoverflow.com/questions/19834400/what-is-the-simplest-way-to-save-a-file-tree-in-a-postgres-database/19835575ex. (from the doc)postgres=# drop table test;DROP TABLEpostgres=# CREATE TABLE test (path ltree);INSERT INTO test VALUES ('Top');INSERT INTO test VALUES ('Top.Science');INSERT INTO test VALUES ('Top.Science.Astronomy');INSERT INTO test VALUES ('Top.Science.Astronomy.Astrophysics');INSERT INTO test VALUES ('Top.Science.Astronomy.Cosmology');INSERT INTO test VALUES ('Top.Hobbies');INSERT INTO test VALUES ('Top.Hobbies.Amateurs_Astronomy');INSERT INTO test VALUES ('Top.Collections');INSERT INTO test VALUES ('Top.Collections.Pictures');INSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy');INSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Stars');INSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Galaxies');INSERT INTO test VALUES ('Top.Collections.Pictures.Astronomy.Astronauts');CREATE INDEX path_gist_idx ON test USING GIST (path);CREATE INDEX path_idx ON test USING BTREE (path); -- we can even make this unique index as there would only be one path-- we can also create partial indexes depending on the query pattern  #my focus is no rows filtered (hence less wasted operations)postgres=# analyze test;ANALYZEpostgres=# explain analyze select exists (SELECT 1 FROM test WHERE path ~ '*.Stars');                                              QUERY PLAN------------------------------------------------------------------------------------------------------ Result  (cost=1.16..1.17 rows=1 width=1) (actual time=0.010..0.010 rows=1 loops=1)   InitPlan 1 (returns $0)     ->  Seq Scan on test  (cost=0.00..1.16 rows=1 width=0) (actual time=0.008..0.009 rows=1 loops=1)           Filter: (path ~ '*.Stars'::lquery)           Rows Removed by Filter: 10 Planning Time: 0.248 ms Execution Time: 0.023 ms(7 rows)postgres=# set enable_seqscan TO 0; -- small table, henceSETpostgres=# explain analyze select exists (SELECT 1 FROM test WHERE path ~ '*.Stars');                                                         QUERY PLAN---------------------------------------------------------------------------------------------------------------------------- Result  (cost=8.15..8.16 rows=1 width=1) (actual time=0.020..0.021 rows=1 loops=1)   InitPlan 1 (returns $0)     ->  Index Scan using path_gist_idx on test  (cost=0.13..8.15 rows=1 width=0) (actual time=0.019..0.019 rows=1 loops=1)           Index Cond: (path ~ '*.Stars'::lquery) Planning Time: 0.079 ms Execution Time: 0.037 ms(6 rows)Please ignore, if not relevant to the discussion.-- Thanks,VijayLinkedIn - Vijaykumar 
Jain", "msg_date": "Sun, 6 Feb 2022 02:55:02 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terribly slow query with very good plan?" } ]
[ { "msg_contents": "*Postgres version:* 11.4\n\n*Problem:*\n Query choosing Bad Index Path. Details are provided below:\n\n\n*Table :*\n\n\n\n\n\n\n*Doubt*\n 1. Why is this Query choosing *Index Scan Backward using table1_pkey\nIndex *though it's cost is high. It can rather choose\n BITMAP OR\n (Index on RECORDID) i.e; table1_idx6\n (Index on RELATEDID) i.e; table1_idx7\n\n Below is the selectivity details from *pg_stats* table\n - Recordid has 51969 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n - Relatedid has 82128 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n\nSince, selectivity is less, this should logically choose this Index, which\nwould have improve my query performance here.\nI cross-checked the same by removing PrimaryKey to this table and query now\nchooses these indexes and response is in 100ms. Please refer the plan below\n(after removing primary key):", "msg_date": "Mon, 7 Feb 2022 10:45:12 +0530", "msg_from": "Valli Annamalai <[email protected]>", "msg_from_op": true, "msg_subject": "Query choosing Bad Index Path" }, { "msg_contents": "*Postgres version:* 11.4\n\n*Problem:*\n Query choosing Bad Index Path. Details are provided below:\n\n\n*Table :*\n\n\n\n\n\n\n*Doubt*\n 1. Why is this Query choosing *Index Scan Backward using table1_pkey\nIndex *though it's cost is high. It can rather choose\n BITMAP OR\n (Index on RECORDID) i.e; table1_idx6\n (Index on RELATEDID) i.e; table1_idx7\n\n Below is the selectivity details from *pg_stats* table\n - Recordid has 51969 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n - Relatedid has 82128 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n\nSince, selectivity is less, this should logically choose this Index, which\nwould have improve my query performance here.\nI cross-checked the same by removing PrimaryKey to this table and query now\nchooses these indexes and response is in 100ms. Please refer the plan below\n(after removing primary key):", "msg_date": "Mon, 7 Feb 2022 11:36:17 +0530", "msg_from": "Valli Annamalai <[email protected]>", "msg_from_op": true, "msg_subject": "Query choosing Bad Index Path" }, { "msg_contents": "Hi\n\npo 7. 2. 2022 v 6:15 odesílatel Valli Annamalai <[email protected]>\nnapsal:\n\n>\n> *Postgres version:* 11.4\n>\n> *Problem:*\n> Query choosing Bad Index Path. Details are provided below:\n>\n>\n> *Table :*\n>\n>\n>\n>\n>\n>\n>\nplease, don't use screenshots\n\n\n\n\n> *Doubt*\n> 1. Why is this Query choosing *Index Scan Backward using table1_pkey\n> Index *though it's cost is high. It can rather choose\n> BITMAP OR\n> (Index on RECORDID) i.e; table1_idx6\n> (Index on RELATEDID) i.e; table1_idx7\n>\n> Below is the selectivity details from *pg_stats* table\n> - Recordid has 51969 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n> - Relatedid has 82128 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n>\n> Since, selectivity is less, this should logically choose this Index, which\n> would have improve my query performance here.\n> I cross-checked the same by removing PrimaryKey to this table and query\n> now chooses these indexes and response is in 100ms. 
Please refer the plan\n> below (after removing primary key):\n>\n>\n>\n>\n You can see a very bad estimate there: 32499 estimated rows vs 0 actual rows.\n\nAnother source of problems can be the LIMIT clause. Postgres assumes the\nmatching rows are distributed uniformly through the table, so with LIMIT 10 it\nexpects to find the wanted rows quickly. That is not true in your case.\n\nYou can try to use a multicolumn index, or you can transform your query\nfrom OR based to UNION ALL based:\n\nSELECT * FROM tab WHERE p1 OR p2 => SELECT * FROM tab WHERE p1 UNION ALL\nSELECT * FROM tab WHERE p2\n\nRegards\n\nPavel", "msg_date": "Mon, 7 Feb 2022 07:18:14 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query choosing Bad Index Path" }, { "msg_contents": "Hi,\nOn Mon, Feb 07, 2022 at 11:36:17AM +0530, Valli Annamalai wrote:\n> *Postgres version:* 11.4\n> \n> *Problem:*\n> Query choosing Bad Index Path. Details are provided below:\n> \n> \n> *Table :*\n> \n> \n> \n> \n> \n> \n> *Doubt*\n> 1. Why is this Query choosing *Index Scan Backward using table1_pkey\n> Index *though it's cost is high. It can rather choose\n> BITMAP OR\n> (Index on RECORDID) i.e; table1_idx6\n> (Index on RELATEDID) i.e; table1_idx7\n> \n> Below is the selectivity details from *pg_stats* table\n> - Recordid has 51969 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n> - Relatedid has 82128 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n> \n> Since, selectivity is less, this should logically choose this Index, which\n> would have improve my query performance here.\n> I cross-checked the same by removing PrimaryKey to this table and query now\n> chooses these indexes and response is in 100ms. Please refer the plan below\n> (after removing primary key):\n\nYou already sent the same email less than an hour ago on pgsql-performance\n([1]), which is the correct mailing list, please don't post on this mailing\nlist also.\n\nNote also that sending information as images can be problematic as some people\nhere (including me) won't be able to see them. I tried on a web-based MUA and\nthat's not really better though, as the images are hardly readable and\ndefinitely not grep-able. You will likely have more answers sending\ninformation in plain text.\n\n[1] https://www.postgresql.org/message-id/CADkhgiJ+gT_FDKZWgP8oZsy6iRbYMYkmRjsPhqhcT1A2KBgcHA@mail.gmail.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 14:18:40 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query choosing Bad Index Path" } ]
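For what it's worth, Pavel's OR to UNION ALL rewrite applied to the two columns named in this thread might look like the sketch below. The real query was only posted as images, so the SELECT list, the "id" ordering column and the LIMIT are guesses based on the Index Scan Backward using table1_pkey mentioned above (which suggests an ORDER BY on the primary key), not the actual statement:

SELECT *
FROM table1
WHERE recordid = 15842006928391817
UNION ALL
SELECT *
FROM table1
WHERE relatedid = 15842006928391817
  AND recordid IS DISTINCT FROM 15842006928391817  -- avoid returning rows already matched by the first branch
ORDER BY id DESC  -- "id" is an assumed primary key column name
LIMIT 10;         -- assumed; keep whatever ORDER BY / LIMIT the original query used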
[ { "msg_contents": "Hi\n\nSometimes simple sql's like this takes a very long time \"select count(*) from information_schema.tables;\"\n\nOther sql's not including system tables may work ok but login also takes a very long time.\n\nThe CPU load on the server is around 25%. There is no iowait.\n\n\nThis happens typically when we are running many functions in parallel creating many temp tables and unlogged tables I think.\n\n\nHere is a slow one:\n\nhttps://explain.depesz.com/s/tUt5\n\n\nand here is fast one :\n\nhttps://explain.depesz.com/s/yYG4\n\n\nHere are my settings (the server has around 256 GB og memory) :\n\nmax_connections = 500\n\nwork_mem = 20MB\n\neffective_cache_size = 96GB\n\neffective_io_concurrency = 256\n\nshared_buffers = 96GB\n\ntemp_buffers = 80MB\n\n\nAny hints ?\n\n\nThanks .\n\n\nLars\n\n\n\n\n\n\n\nHi\n\n\n\nSometimes simple sql's like this takes a very long time  \"select\n count(*) from information_schema.tables;\"\n\n\nOther sql's not including system tables may work ok but login also takes a very long time. \n\n\nThe CPU load on the server is around 25%. There is no iowait.\n\n\n\nThis happens typically when we are running many functions in parallel creating many temp tables and unlogged tables\n I think.\n\n\nHere is a slow one:  \n\nhttps://explain.depesz.com/s/tUt5 \n\n\nand here is fast one :\nhttps://explain.depesz.com/s/yYG4 \n\n\nHere are my settings (the server has around 256 GB og memory) :\n\n\nmax_connections = 500\nwork_mem = 20MB\neffective_cache_size = 96GB \n\neffective_io_concurrency = 256 \n\nshared_buffers = 96GB\ntemp_buffers = 80MB\n\n\n\nAny hints ?\n\n\nThanks .\n\n\n\nLars", "msg_date": "Mon, 7 Feb 2022 16:56:35 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "slow \"select count(*) from information_schema.tables;\" in some cases" }, { "msg_contents": "On Mon, Feb 07, 2022 at 04:56:35PM +0000, Lars Aksel Opsahl wrote:\n> Sometimes simple sql's like this takes a very long time \"select count(*) from information_schema.tables;\"\n> \n> Other sql's not including system tables may work ok but login also takes a very long time.\n> \n> The CPU load on the server is around 25%. There is no iowait.\n> \n> This happens typically when we are running many functions in parallel creating many temp tables and unlogged tables I think.\n> \n> Here is a slow one:\n> https://explain.depesz.com/s/tUt5\n> \n> and here is fast one :\n> https://explain.depesz.com/s/yYG4\n\nThe only difference is that this is sometimes many times slower.\n\n Finalize Aggregate (cost=42021.15..42021.16 rows=1 width=8) (actual time=50602.755..117201.768 rows=1 loops=1)\n -> Gather (cost=42020.94..42021.15 rows=2 width=8) (actual time=130.527..117201.754 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n\n> Here are my settings (the server has around 256 GB og memory) :\n\nWhat version of postgres ? 
What OS/version ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nAre there any server logs around that time ?\nOr session logs for the slow query ?\n\nIs it because the table creation is locking (rows of) various system catalogs ?\nI'm not sure if it'd be a single, long delay that you could see easily with\nlog_lock_waits, or a large number of small delays, maybe depending on whether\nyour table creation is done within a transaction.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:09:27 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": ">>\r\n\r\n>> Here is a slow one:\r\n\r\n>> https://explain.depesz.com/s/tUt5\r\n\r\n>>\r\n\r\n>> and here is fast one :\r\n\r\n>> https://explain.depesz.com/s/yYG4\r\n\r\n>\r\n\r\n>The only difference is that this is sometimes many times slower.\r\n\r\n>\r\n\r\n> Finalize Aggregate (cost=42021.15..42021.16 rows=1 width=8) (actual time=50602.755..117201.768 rows=1 loops=1)\r\n\r\n> -> Gather (cost=42020.94..42021.15 rows=2 width=8) (actual time=130.527..117201.754 rows=3 loops=1)\r\n\r\n> Workers Planned: 2\r\n\r\n> Workers Launched: 2\r\n\r\n>\r\n\r\n>> Here are my settings (the server has around 256 GB og memory) :\r\n\r\n>\r\n\r\n\r\nHi\r\n\r\n\r\nHere is some more info.\r\n\r\n\r\n>What version of postgres ? What OS/version ?\r\n\r\n\r\npsql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\r\n\r\n\r\n>https://wiki.postgresql.org/wiki/Slow_Query_Questions\r\n\r\n>\r\n\r\n>Are there any server logs around that time ?\r\n\r\n\r\nYes but nothing in the logs that I could find.\r\n\r\n\r\n>Or session logs for the slow query ?\r\n\r\n>\r\n\r\n>Is it because the table creation is locking (rows of) various system catalogs ?\r\n\r\n>I'm not sure if it'd be a single, long delay that you could see easily with\r\n\r\n>log_lock_waits, or a large number of small delays, maybe depending on whether\r\n\r\n>your table creation is done within a transaction.\r\n\r\n\r\nAdded log_lock_waits but could not anything new in the logs\r\n\r\n\r\nSHOW deadlock_timeout ;\r\n\r\n deadlock_timeout\r\n\r\n------------------\r\n\r\n 1s\r\n\r\n SHOW log_lock_waits;\r\n\r\n log_lock_waits\r\n\r\n----------------\r\n\r\n on\r\n\r\n(1 row)\r\n\r\n\r\n\r\nIn the logs I only things like this\r\n\r\nLOG: duration: 71841.233 ms statement: CREATE UNLOGGED TABLE IF NOT EXISTS tmp_klimagass.styredata_tidligbygg_159298.....\r\n\r\n\r\n​LOG: duration: 12645.127 ms statement: GRANT SELECT ON TABLE tmp_klimagass.vaerdata_159296 TO org_mojo2_sl_read_role;\r\n\r\nLOG: duration: 15783.611 ms statement: EXPLAIN ANALYZE select count(*)\r\n\r\n from information_schema.tables;\r\n\r\nLOG: duration: 35594.903 ms statement: EXPLAIN ANALYZE select count(*)\r\n\r\n\r\nCan not find anything here either\r\n\r\n\r\nselect relation::regclass, * from pg_locks where not granted;\r\n\r\n relation | locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\r\n\r\n----------+----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------\r\n\r\n(0 rows)\r\n\r\n\r\nTime: 55.270 ms\r\n\r\n\r\n>\r\n\r\n>--\r\n\r\n>Justin\r\n\r\nThanks\r\n\r\nLars\r\n\n\n\n\n\n\n\n\n\n\n>>\n>> Here is a slow one:\n>> https://explain.depesz.com/s/tUt5\n>>\n>> and here is fast one :\n>> 
https://explain.depesz.com/s/yYG4\n>\n>The only difference is that this is sometimes many times slower.\n>\n> Finalize Aggregate \r\n(cost=42021.15..42021.16 rows=1 width=8) (actual time=50602.755..117201.768 rows=1 loops=1)\n>\r\n \r\n-> \r\nGather \r\n(cost=42020.94..42021.15 rows=2 width=8) (actual time=130.527..117201.754 rows=3 loops=1)\n>\r\n       \r\nWorkers Planned: 2\n>\r\n       \r\nWorkers Launched: 2\n>\n>> Here are my settings (the server has around 256 GB\r\nog memory) :\n>\n\n\nHi\n\n\nHere is some more info.\n\n\n>What version of\r\npostgres ? \r\nWhat OS/version ?\n\n\n\npsql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n\n\n>https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n>Are there any server logs around that time ?\n\n\nYes but nothing in the logs that I could find.\n\n\n>Or session logs for the slow query ?\n>\n>Is it because the table creation is locking (rows of) various system catalogs ?\n>I'm not sure if it'd be a single, long delay that you could see easily with\n\n>log_lock_waits, or a large number of small delays, maybe depending on whether\n>your table creation is done within a transaction.\n\n\n\nAdded log_lock_waits\r\n but could not  anything new in the logs\n\n\n\nSHOW deadlock_timeout ;\n deadlock_timeout \n------------------\n 1s\n SHOW log_lock_waits;\n log_lock_waits \n----------------\n on\n(1 row)\n\n\n\n\nIn the logs I only things like this\n\nLOG: \r\nduration: 71841.233 ms \r\nstatement: CREATE UNLOGGED TABLE IF NOT EXISTS tmp_klimagass.styredata_tidligbygg_159298.....\n\n\n\n\n​LOG: \r\nduration: 12645.127 ms \r\nstatement: GRANT SELECT ON TABLE tmp_klimagass.vaerdata_159296 TO org_mojo2_sl_read_role;\nLOG: \r\nduration: 15783.611 ms \r\nstatement: EXPLAIN ANALYZE select count(*)\n       \r\nfrom information_schema.tables;\nLOG: \r\nduration: 35594.903 ms \r\nstatement: EXPLAIN ANALYZE select count(*)\n\n\nCan not find anything here either\n\n\nselect relation::regclass, * from pg_locks where not granted; \n relation | locktype\r\n | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath \n----------+----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------\n(0 rows)\n\n\nTime: 55.270 ms\n\n\n\n\n>\n>--\n>Justin\n\n\n\nThanks\n\n\nLars", "msg_date": "Mon, 7 Feb 2022 17:39:56 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": "On Mon, Feb 7, 2022, 10:26 PM Lars Aksel Opsahl <[email protected]>\nwrote:\n\n> Hi\n>\n> Sometimes simple sql's like this takes a very long time \"select count(*) from\n> information_schema.tables;\"\n>\n> Other sql's not including system tables may work ok but login also takes a\n> very long time.\n>\n> The CPU load on the server is around 25%. 
There is no iowait.\n>\n>\n> This happens typically when we are running many functions in parallel\n> creating many temp tables and unlogged tables I think.\n>\n> Here is a slow one:\n>\n> https://explain.depesz.com/s/tUt5\n>\n>\n> and here is fast one :\n>\n> https://explain.depesz.com/s/yYG4\n>\n>\n> Here are my settings (the server has around 256 GB og memory) :\n>\n> max_connections = 500\n>\n> work_mem = 20MB\n>\n> effective_cache_size = 96GB\n>\n> effective_io_concurrency = 256\n>\n> shared_buffers = 96GB\n>\n> temp_buffers = 80MB\n>\n> Any hints ?\n>\n>\n> Thanks .\n>\n>\n> Lars\n>\n\nCan you share the output of the below query?\n\n From the past threads I have learnt that too many templates objects may add\nto bloat of system catalogs and may in start resulting in impacting\nperformance.\nMake a note especially around\n\npg_attribute\npg_depends\nand check for bloat, if required, vacuum full? these objects to speed up.\n\n\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C\nLEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname =\n'pg_catalog' ORDER BY 2 DESC LIMIT 20; can you show the output of this query\n\n>\n\nOn Mon, Feb 7, 2022, 10:26 PM Lars Aksel Opsahl <[email protected]> wrote:\n\nHi\n\n\n\nSometimes simple sql's like this takes a very long time  \"select\n count(*) from information_schema.tables;\"\n\n\nOther sql's not including system tables may work ok but login also takes a very long time. \n\n\nThe CPU load on the server is around 25%. There is no iowait.\n\n\n\nThis happens typically when we are running many functions in parallel creating many temp tables and unlogged tables\n I think.\n\n\nHere is a slow one:  \n\nhttps://explain.depesz.com/s/tUt5 \n\n\nand here is fast one :\nhttps://explain.depesz.com/s/yYG4 \n\n\nHere are my settings (the server has around 256 GB og memory) :\n\n\nmax_connections = 500\nwork_mem = 20MB\neffective_cache_size = 96GB \n\neffective_io_concurrency = 256 \n\nshared_buffers = 96GB\ntemp_buffers = 80MB\n\n\n\nAny hints ?\n\n\nThanks .\n\n\n\nLarsCan you share the output of the below query?From the past threads I have learnt that too many templates objects may add to bloat of system catalogs and may in start resulting in impacting performance.Make a note especially  aroundpg_attributepg_dependsand check for bloat, if required, vacuum full? these objects to speed up.SELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20; can you show the output of this query", "msg_date": "Mon, 7 Feb 2022 23:18:44 +0530", "msg_from": "Vijaykumar Jain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": ">Vijaykumar Jain <[email protected]>\n\n>Mon 2/7/2022 6:49 PM\n\n>\n\n>On Mon, Feb 7, 2022, 10:26 PM Lars Aksel Opsahl <[email protected]> wrote:\n\n>Hi\n\n>\n\n\nHi\n\n\n>Can you share the output of the below query?\n\n>\n\n>From the past threads I have learnt that too many templates objects may add to bloat of system catalogs and may in start resulting in impacting performance.\n\n>Make a note especially around\n\n>\n\n>pg_attribute\n\n>pg_depends\n\n>and check for bloat, if required, vacuum full? 
these objects to speed up.\n\n>\n\n>\n\n>\n\n>SELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20; can you show the output of this query\n\n\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20;\n\n relname | pg_size_pretty\n\n--------------------------------+----------------\n\n pg_attrdef_oid_index | 9744 kB\n\n pg_attrdef_adrelid_adnum_index | 9712 kB\n\n pg_type_typname_nsp_index | 87 MB\n\n pg_sequence_seqrelid_index | 8224 kB\n\n pg_foreign_table_relid_index | 8192 bytes\n\n pg_enum_typid_sortorder_index | 8192 bytes\n\n pg_largeobject_metadata | 8192 bytes\n\n pg_event_trigger_oid_index | 8192 bytes\n\n pg_extension | 8192 bytes\n\n pg_event_trigger_evtname_index | 8192 bytes\n\n pg_am | 8192 bytes\n\n pg_foreign_data_wrapper | 8192 bytes\n\n pg_foreign_server_name_index | 8192 bytes\n\n pg_enum_typid_label_index | 8192 bytes\n\n pg_default_acl | 8192 bytes\n\n pg_foreign_server_oid_index | 8192 bytes\n\n pg_db_role_setting | 8192 bytes\n\n pg_database | 8192 bytes\n\n pg_enum_oid_index | 8192 bytes\n\n pg_language | 8192 bytes\n\n(20 rows)\n\n\nTime: 22.354 ms\n\n\nVACUUM full pg_attribute;\n\n40P01: deadlock detected\n\nVACUUM full pg_depends;\n\n40P01: deadlock detected\n\n\nI have to test those later\n\n\nThis works ok\n\nVACUUM pg_attribute;\n\nVACUUM pg_depends;\n\n\nVACUUM full pg_attrdef;\n\nVACUUM full pg_type ;\n\nVACUUM full pg_sequence;\n\nVACUUM full pg_type;\n\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20;\n\n relname | pg_size_pretty\n\n--------------------------------------+----------------\n\n pg_type_oid_index | 960 kB\n\n pg_language | 8192 bytes\n\n pg_enum_typid_label_index | 8192 bytes\n\n pg_pltemplate | 8192 bytes\n\n pg_event_trigger_oid_index | 8192 bytes\n\n pg_foreign_server_oid_index | 8192 bytes\n\n pg_foreign_server_name_index | 8192 bytes\n\n pg_enum_oid_index | 8192 bytes\n\n pg_largeobject_metadata | 8192 bytes\n\n pg_foreign_table_relid_index | 8192 bytes\n\n pg_am | 8192 bytes\n\n pg_database | 8192 bytes\n\n pg_event_trigger_evtname_index | 8192 bytes\n\n pg_extension | 8192 bytes\n\n pg_partitioned_table_partrelid_index | 8192 bytes\n\n pg_enum_typid_sortorder_index | 8192 bytes\n\n pg_db_role_setting | 8192 bytes\n\n pg_default_acl | 8192 bytes\n\n pg_foreign_data_wrapper | 8192 bytes\n\n pg_publication_oid_index | 8192 bytes\n\nStill slow.\n\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n>Vijaykumar\nJain <[email protected]>\n>Mon 2/7/2022 6:49 PM\n>\n>On Mon,\nFeb 7, 2022, 10:26 PM\nLars\nAksel\nOpsahl <[email protected]>\n wrote:\n>Hi\n>\n\n\nHi\n\n\n>Can you share the output of the below query?\n>\n>From the past threads I have\nlearnt that too many templates objects\n may add to bloat of system catalogs and may in start resulting in impacting performance.\n>Make a note especially \naround\n>\n>pg_attribute\n>pg_depends\n>and check for bloat, if required, vacuum full? 
", "msg_date": "Mon, 7 Feb 2022 18:55:57 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": "Lars Aksel Opsahl <[email protected]> writes:\n>> SELECT 
relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20; can you show the output of this query\n\n\"ORDER BY 2\" is giving you a textual sort of the sizes, which is entirely\nunhelpful. Try\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 14:02:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": ">From: Tom Lane <[email protected]>\n\n>Sent: Monday, February 7, 2022 8:02 PM\n\n>To: Lars Aksel Opsahl <[email protected]>\n\n>Cc: Vijaykumar Jain <[email protected]>; Pgsql Performance <[email protected]>\n\n>Subject: Re: slow \"select count(*) from information_schema.tables;\" in some cases\n\n>\n\n>Lars Aksel Opsahl <[email protected]> writes:\n\n>>> SELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY 2 DESC LIMIT 20; can you show the output of this query\n\n>\n\n>\"ORDER BY 2\" is giving you a textual sort of the sizes, which is entirely\n\n>unhelpful. Try\n\n>\n\n>SELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\n\n>\n\n> regards, tom lane\n\n>\n\nHi\n\nThen pg_attribute show up yes. I have to vacuum full later when server is free.\n\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\n\n relname | pg_size_pretty\n\n-----------------------------------+----------------\n\n pg_largeobject | 17 GB\n\n pg_attribute | 1452 MB\n\n pg_statistic | 1103 MB\n\n pg_class | 364 MB\n\n pg_attribute_relid_attnam_index | 307 MB\n\n pg_depend | 285 MB\n\n pg_largeobject_loid_pn_index | 279 MB\n\n pg_attribute_relid_attnum_index | 230 MB\n\n pg_depend_reference_index | 207 MB\n\n pg_depend_depender_index | 198 MB\n\n pg_class_relname_nsp_index | 133 MB\n\n pg_index | 111 MB\n\n pg_statistic_relid_att_inh_index | 101 MB\n\n pg_class_oid_index | 52 MB\n\n pg_class_tblspc_relfilenode_index | 46 MB\n\n pg_shdepend | 38 MB\n\n pg_shdepend_depender_index | 25 MB\n\n pg_index_indexrelid_index | 24 MB\n\n pg_shdepend_reference_index | 21 MB\n\n pg_index_indrelid_index | 18 MB\n\n(20 rows)\n\n\n\nThanks\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n>From: Tom Lane <[email protected]>\n>Sent: Monday, February 7, 2022 8:02 PM\n>To:\nLars\nAksel\nOpsahl <[email protected]>\n>Cc:\nVijaykumar\nJain <[email protected]>;\nPgsql Performance <[email protected]>\n>Subject: Re: slow \"select count(*) from information_schema.tables;\" in some cases\n> \n>Lars\nAksel\nOpsahl <[email protected]>\n writes:\n>>> SELECT\nrelname, pg_size_pretty(pg_relation_size(C.oid))\n FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE \nnspname = 'pg_catalog' ORDER BY 2 DESC LIMIT\n 20; can you show the output of this query\n>\n>\"ORDER BY 2\" is giving you a textual sort of the sizes, which is entirely\n>unhelpful. 
\nTry\n>\n>SELECT\nrelname, pg_size_pretty(pg_relation_size(C.oid))\n FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE \nnspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid)\n DESC LIMIT 20;\n>\n>                 \n       regards, tom lane\n>\n\n\n\nHi\n\n\nThen pg_attribute show up yes.\n I have to vacuum full later when server is free.\n\n\n\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE\n nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\n             \nrelname     \n         | pg_size_pretty \n-----------------------------------+----------------\n pg_largeobject \n                   | 17 GB\n pg_attribute \n                     | 1452 MB\n pg_statistic \n                     | 1103 MB\n pg_class \n                         | 364 MB\n pg_attribute_relid_attnam_index\n \n| 307 MB\n pg_depend\n                       \n| 285 MB\n pg_largeobject_loid_pn_index \n     | 279 MB\n pg_attribute_relid_attnum_index\n \n| 230 MB\n pg_depend_reference_index\n       \n| 207 MB\n pg_depend_depender_index \n         | 198 MB\n pg_class_relname_nsp_index \n       | 133 MB\n pg_index \n                         | 111 MB\n pg_statistic_relid_att_inh_index \n| 101 MB\n pg_class_oid_index \n               | 52 MB\n pg_class_tblspc_relfilenode_index\n | 46 MB\n pg_shdepend\n                     \n| 38 MB\n pg_shdepend_depender_index \n       | 25 MB\n pg_index_indexrelid_index\n       \n| 24 MB\n pg_shdepend_reference_index\n     \n| 21 MB\n pg_index_indrelid_index\n         \n| 18 MB\n(20 rows)\n\n\n\n\n\nThanks", "msg_date": "Mon, 7 Feb 2022 19:11:58 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": "Hi Lars,\n\n\n> psql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n\nMaybe you can upgrade to 12.9 ( from 12.6 ) (\nhttps://www.postgresql.org/docs/release/12.9/ )\nAnd the next minor release = pg 12.10 is expected on February 10th, 2022\nhttps://www.postgresql.org/developer/roadmap/\nAs I see - only a minor fix exists for \"system columns\": \"Don't ignore\nsystem columns when estimating the number of groups using extended\nstatistics (Tomas Vondra)\" in 12.7\n\nI have similar experiences with the system tables - vacuuming is extreme\nimportant\nin my case - I am calling \"vacuum\" in every ETL job - cleaning my system\ntables.\n\nselect\n schemaname\n ,relname\n ,n_tup_ins\n ,n_tup_upd\n ,n_tup_del\n ,n_tup_hot_upd\n ,n_live_tup\n ,n_dead_tup\nfrom pg_stat_all_tables\nwhere n_dead_tup > 0 and schemaname='pg_catalog'\n;\n\nRegards,\n Imre\n\n\nLars Aksel Opsahl <[email protected]> ezt írta (időpont: 2022. febr. 7.,\nH, 18:40):\n\n> >>\n>\n> >> Here is a slow one:\n>\n> >> https://explain.depesz.com/s/tUt5\n>\n> >>\n>\n> >> and here is fast one :\n>\n> >> https://explain.depesz.com/s/yYG4\n>\n> >\n>\n> >The only difference is that this is sometimes many times slower.\n>\n> >\n>\n> > Finalize Aggregate (cost=42021.15..42021.16 rows=1 width=8) (actual\n> time=50602.755..117201.768 rows=1 loops=1)\n>\n> > -> Gather (cost=42020.94..42021.15 rows=2 width=8) (actual\n> time=130.527..117201.754 rows=3 loops=1)\n>\n> > Workers Planned: 2\n>\n> > Workers Launched: 2\n>\n> >\n>\n> >> Here are my settings (the server has around 256 GB og memory) :\n>\n> >\n>\n>\n> Hi\n>\n>\n> Here is some more info.\n>\n>\n> >What version of postgres ? 
What OS/version ?\n>\n>\n> psql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n>\n> >https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> >\n>\n> >Are there any server logs around that time ?\n>\n>\n> Yes but nothing in the logs that I could find.\n>\n>\n> >Or session logs for the slow query ?\n>\n> >\n>\n> >Is it because the table creation is locking (rows of) various system\n> catalogs ?\n>\n> >I'm not sure if it'd be a single, long delay that you could see easily\n> with\n>\n> >log_lock_waits, or a large number of small delays, maybe depending on\n> whether\n>\n> >your table creation is done within a transaction.\n>\n>\n> Added log_lock_waits but could not anything new in the logs\n>\n>\n> SHOW deadlock_timeout ;\n>\n> deadlock_timeout\n>\n> ------------------\n>\n> 1s\n>\n> SHOW log_lock_waits;\n>\n> log_lock_waits\n>\n> ----------------\n>\n> on\n>\n> (1 row)\n>\n>\n> In the logs I only things like this\n>\n> LOG: duration: 71841.233 ms statement: CREATE UNLOGGED TABLE IF NOT\n> EXISTS tmp_klimagass.styredata_tidligbygg_159298.....\n>\n>\n> ​LOG: duration: 12645.127 ms statement: GRANT SELECT ON TABLE\n> tmp_klimagass.vaerdata_159296 TO org_mojo2_sl_read_role;\n>\n> LOG: duration: 15783.611 ms statement: EXPLAIN ANALYZE select count(*)\n>\n> from information_schema.tables;\n>\n> LOG: duration: 35594.903 ms statement: EXPLAIN ANALYZE select count(*)\n>\n> Can not find anything here either\n>\n>\n> select relation::regclass, * from pg_locks where not granted;\n>\n> relation | locktype | database | relation | page | tuple | virtualxid |\n> transactionid | classid | objid | objsubid | virtualtransaction | pid |\n> mode | granted | fastpath\n>\n>\n> ----------+----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------\n>\n> (0 rows)\n>\n>\n> Time: 55.270 ms\n>\n>\n> >\n>\n> >--\n>\n> >Justin\n>\n> Thanks\n>\n> Lars\n>\n\nHi Lars,> psql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))Maybe you can upgrade to 12.9 ( from 12.6 )     ( https://www.postgresql.org/docs/release/12.9/ )And the next minor release = pg 12.10 is expected on February 10th, 2022 https://www.postgresql.org/developer/roadmap/ As I see - only a minor fix exists for \"system columns\":  \"Don't ignore system columns when estimating the number of groups using extended statistics (Tomas Vondra)\"  in 12.7I have similar experiences with the system tables - vacuuming is extreme importantin my case -  I am calling \"vacuum\" in every ETL job - cleaning my system tables. select  schemaname  ,relname  ,n_tup_ins  ,n_tup_upd  ,n_tup_del  ,n_tup_hot_upd  ,n_live_tup  ,n_dead_tupfrom pg_stat_all_tableswhere n_dead_tup > 0 and schemaname='pg_catalog';Regards, ImreLars Aksel Opsahl <[email protected]> ezt írta (időpont: 2022. febr. 7., H, 18:40):\n\n\n\n\n>>\n>> Here is a slow one:\n>> https://explain.depesz.com/s/tUt5\n>>\n>> and here is fast one :\n>> https://explain.depesz.com/s/yYG4\n>\n>The only difference is that this is sometimes many times slower.\n>\n> Finalize Aggregate \n(cost=42021.15..42021.16 rows=1 width=8) (actual time=50602.755..117201.768 rows=1 loops=1)\n>\n \n-> \nGather \n(cost=42020.94..42021.15 rows=2 width=8) (actual time=130.527..117201.754 rows=3 loops=1)\n>\n       \nWorkers Planned: 2\n>\n       \nWorkers Launched: 2\n>\n>> Here are my settings (the server has around 256 GB\nog memory) :\n>\n\n\nHi\n\n\nHere is some more info.\n\n\n>What version of\npostgres ? 
\nWhat OS/version ?\n\n\n\npsql (14.1, server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n\n\n>https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n>Are there any server logs around that time ?\n\n\nYes but nothing in the logs that I could find.\n\n\n>Or session logs for the slow query ?\n>\n>Is it because the table creation is locking (rows of) various system catalogs ?\n>I'm not sure if it'd be a single, long delay that you could see easily with\n\n>log_lock_waits, or a large number of small delays, maybe depending on whether\n>your table creation is done within a transaction.\n\n\n\nAdded log_lock_waits\n but could not  anything new in the logs\n\n\n\nSHOW deadlock_timeout ;\n deadlock_timeout \n------------------\n 1s\n SHOW log_lock_waits;\n log_lock_waits \n----------------\n on\n(1 row)\n\n\n\n\nIn the logs I only things like this\n\nLOG: \nduration: 71841.233 ms \nstatement: CREATE UNLOGGED TABLE IF NOT EXISTS tmp_klimagass.styredata_tidligbygg_159298.....\n\n\n\n\n​LOG: \nduration: 12645.127 ms \nstatement: GRANT SELECT ON TABLE tmp_klimagass.vaerdata_159296 TO org_mojo2_sl_read_role;\nLOG: \nduration: 15783.611 ms \nstatement: EXPLAIN ANALYZE select count(*)\n       \nfrom information_schema.tables;\nLOG: \nduration: 35594.903 ms \nstatement: EXPLAIN ANALYZE select count(*)\n\n\nCan not find anything here either\n\n\nselect relation::regclass, * from pg_locks where not granted; \n relation | locktype\n | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath \n----------+----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------\n(0 rows)\n\n\nTime: 55.270 ms\n\n\n\n\n>\n>--\n>Justin\n\n\n\nThanks\n\n\nLars", "msg_date": "Mon, 7 Feb 2022 20:51:20 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" }, { "msg_contents": "Hi\r\n\r\n\r\n________________________________\r\n>From: Imre Samu <[email protected]>\r\n>Sent: Monday, February 7, 2022 8:51 PM\r\n>Maybe you can upgrade to 12.9 ( from 12.6 ) ( https://www.postgresql.org/docs/release/12.9/ )\r\n\r\n>And the next minor release = pg 12.10 is expected on February 10th, 2022 https://www.postgresql.org/developer/roadmap/\r\n\r\n>As I see - only a minor fix exists for \"system columns\": \"Don't ignore system columns when estimating the number of groups using extended statistics (Tomas Vondra)\" in 12.7\r\n\r\n>\r\n\r\n>I have similar experiences with the system tables - vacuuming is extreme important\r\n\r\n>in my case - I am calling \"vacuum\" in every ETL job - cleaning my system tables.\r\n\r\n>\r\n\r\n\r\nThanks we may test upgrade later seems like the problem here was related to both vacuum and set parallel_workers to 0 in this case, see mail for more info.\r\n\r\n\r\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\r\n\r\n relname | pg_size_pretty\r\n\r\n-----------------------------------+----------------\r\n\r\n pg_largeobject | 17 GB\r\n\r\n pg_attribute | 1452 MB\r\n\r\n pg_statistic | 1103 MB\r\n\r\n pg_class | 364 MB\r\n\r\n pg_attribute_relid_attnam_index | 307 MB\r\n\r\n pg_depend | 285 MB\r\n\r\n pg_largeobject_loid_pn_index | 279 MB\r\n\r\n 
pg_attribute_relid_attnum_index | 230 MB\r\n\r\n pg_depend_reference_index | 207 MB\r\n\r\n pg_depend_depender_index | 198 MB\r\n\r\n pg_class_relname_nsp_index | 133 MB\r\n\r\n pg_index | 111 MB\r\n\r\n pg_statistic_relid_att_inh_index | 101 MB\r\n\r\n pg_class_oid_index | 52 MB\r\n\r\n pg_class_tblspc_relfilenode_index | 46 MB\r\n\r\n pg_shdepend | 38 MB\r\n\r\n pg_shdepend_depender_index | 25 MB\r\n\r\n pg_index_indexrelid_index | 24 MB\r\n\r\n pg_shdepend_reference_index | 21 MB\r\n\r\n pg_index_indrelid_index | 18 MB\r\n\r\n(20 rows)\r\n\r\n\r\nselect\r\n\r\n schemaname\r\n\r\n ,relname\r\n\r\n ,n_tup_ins\r\n\r\n ,n_tup_upd\r\n\r\n ,n_tup_del\r\n\r\n ,n_tup_hot_upd\r\n\r\n ,n_live_tup\r\n\r\n ,n_dead_tup\r\n\r\nfrom pg_stat_all_tables\r\n\r\nwhere n_dead_tup > 0 and schemaname='pg_catalog'\r\n\r\n;\r\n\r\n schemaname | relname | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup\r\n\r\n------------+--------------------+-----------+-----------+-----------+---------------+------------+------------\r\n\r\n pg_catalog | pg_default_acl | 6 | 2 | 1 | 2 | 5 | 3\r\n\r\n pg_catalog | pg_shdepend | 10994319 | 53 | 10975090 | 0 | 32982 | 1711\r\n\r\n pg_catalog | pg_type | 24820549 | 4558 | 24610078 | 300 | 41619 | 5492\r\n\r\n pg_catalog | pg_attribute | 183016129 | 13549029 | 181178505 | 8326103 | 418492 | 46415\r\n\r\n pg_catalog | pg_proc | 1406 | 1340 | 1187 | 1122 | 6551 | 1351\r\n\r\n pg_catalog | pg_class | 30278004 | 8510013 | 30021392 | 5917849 | 50569 | 6193\r\n\r\n pg_catalog | pg_authid | 50 | 7 | 10 | 7 | 887 | 30\r\n\r\n pg_catalog | pg_auth_members | 39 | 0 | 1 | 0 | 38 | 2\r\n\r\n pg_catalog | pg_sequence | 5101683 | 5100683 | 5087311 | 5045867 | 3250 | 507\r\n\r\n pg_catalog | pg_attrdef | 6859893 | 0 | 6683508 | 0 | 3973 | 256\r\n\r\n pg_catalog | pg_constraint | 56521 | 4 | 42635 | 0 | 9317 | 1782\r\n\r\n pg_catalog | pg_depend | 89540444 | 8 | 88833727 | 0 | 211747 | 21601\r\n\r\n pg_catalog | pg_description | 3561 | 4478 | 3528 | 3745 | 8259 | 967\r\n\r\n pg_catalog | pg_index | 12360100 | 262429 | 12220917 | 258746 | 40690 | 1003\r\n\r\n pg_catalog | pg_namespace | 210 | 122 | 14 | 118 | 841 | 145\r\n\r\n pg_catalog | pg_rewrite | 659 | 83 | 573 | 62 | 1757 | 161\r\n\r\n pg_catalog | pg_statistic | 2342496 | 25301064 | 2317015 | 2452817 | 732310 | 48825\r\n\r\n pg_catalog | pg_trigger | 2495 | 0 | 2085 | 0 | 7367 | 697\r\n\r\n pg_catalog | pg_db_role_setting | 0 | 1 | 0 | 1 | 0 | 1\r\n\r\n(19 rows)\r\n\r\n\r\nFirst I tested vacuum only on the big tables\r\n\r\n\r\n VACUUM full pg_largeobject;\r\n\r\n VACUUM full pg_class ;\r\n\r\n VACUUM full pg_attribute;\r\n\r\n VACUUM full pg_depend ;\r\n\r\n VACUUM full pg_depend_reference_index ;\r\n\r\n VACUUM full pg_index;\r\n\r\n\r\nBut then select count(*) from information_schema.tables started to slow down again.\r\n\r\n\r\n--select format('vacuum FULL verbose %I.%I;', n.nspname::varchar, t.relname::varchar) FROM pg_class t JOIN pg_namespace n ON n.oid = t.relnamespace WHERE t.relkind = 'r' and n.nspname::varchar = 'pg_catalog' order by 1\r\n\r\n\r\nThen I did vacuum all tables in pg_catalog and then \"select count(*) from information_schema.tables;\" is seems to be fast while running the background job.\r\n\r\n\r\nSELECT relname, pg_size_pretty(pg_relation_size(C.oid)) FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname = 'pg_catalog' ORDER BY pg_relation_size(C.oid) DESC LIMIT 20;\r\n\r\n relname | 
pg_size_pretty\r\n\r\n----------------------------------+----------------\r\n\r\n pg_largeobject | 4624 MB\r\n\r\n pg_statistic | 76 MB\r\n\r\n pg_attribute | 61 MB\r\n\r\n pg_largeobject_loid_pn_index | 42 MB\r\n\r\n pg_attribute_relid_attnam_index | 13 MB\r\n\r\n pg_depend | 12 MB\r\n\r\n pg_class | 9664 kB\r\n\r\n pg_attribute_relid_attnum_index | 9376 kB\r\n\r\n pg_type | 7632 kB\r\n\r\n pg_depend_reference_index | 6592 kB\r\n\r\n pg_depend_depender_index | 6576 kB\r\n\r\n pg_index | 4184 kB\r\n\r\n pg_proc | 3512 kB\r\n\r\n pg_constraint | 3336 kB\r\n\r\n pg_statistic_relid_att_inh_index | 3200 kB\r\n\r\n pg_class_relname_nsp_index | 2568 kB\r\n\r\n pg_type_typname_nsp_index | 2000 kB\r\n\r\n pg_shdepend | 1960 kB\r\n\r\n pg_attrdef | 1800 kB\r\n\r\n pg_rewrite | 1392 kB\r\n\r\n(20 rows)\r\n\r\n\r\nselect\r\n\r\n schemaname\r\n\r\n ,relname\r\n\r\n ,n_tup_ins\r\n\r\n ,n_tup_upd\r\n\r\n ,n_tup_del\r\n\r\n ,n_tup_hot_upd\r\n\r\n ,n_live_tup\r\n\r\n ,n_dead_tup\r\n\r\nfrom pg_stat_all_tables\r\n\r\nwhere n_dead_tup > 0 and schemaname='pg_catalog'\r\n\r\n;\r\n\r\n schemaname | relname | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup\r\n\r\n------------+--------------------+-----------+-----------+-----------+---------------+------------+------------\r\n\r\n pg_catalog | pg_default_acl | 6 | 2 | 1 | 2 | 17 | 10\r\n\r\n pg_catalog | pg_shdepend | 10995081 | 53 | 10975244 | 0 | 33155 | 2296\r\n\r\n pg_catalog | pg_type | 24822122 | 4558 | 24610467 | 300 | 41907 | 6773\r\n\r\n pg_catalog | pg_attribute | 183035424 | 13549029 | 181183256 | 8326103 | 424171 | 13615\r\n\r\n pg_catalog | pg_proc | 1406 | 1340 | 1187 | 1122 | 6551 | 1351\r\n\r\n pg_catalog | pg_class | 30278894 | 8510613 | 30021642 | 5918109 | 50712 | 1169\r\n\r\n pg_catalog | pg_authid | 50 | 7 | 10 | 7 | 887 | 30\r\n\r\n pg_catalog | pg_auth_members | 39 | 0 | 1 | 0 | 860 | 2\r\n\r\n pg_catalog | pg_database | 0 | 0 | 0 | 0 | 6 | 4\r\n\r\n pg_catalog | pg_sequence | 5101683 | 5100683 | 5087311 | 5045867 | 3250 | 507\r\n\r\n pg_catalog | pg_shdescription | 0 | 0 | 0 | 0 | 11 | 8\r\n\r\n pg_catalog | pg_attrdef | 6859893 | 0 | 6683508 | 0 | 3973 | 256\r\n\r\n pg_catalog | pg_constraint | 56521 | 4 | 42635 | 0 | 9317 | 1782\r\n\r\n pg_catalog | pg_depend | 89542906 | 8 | 88834333 | 0 | 212177 | 2024\r\n\r\n pg_catalog | pg_description | 3561 | 4478 | 3528 | 3745 | 8259 | 967\r\n\r\n pg_catalog | pg_index | 12360169 | 262429 | 12220954 | 258746 | 23660 | 69\r\n\r\n pg_catalog | pg_namespace | 210 | 122 | 14 | 118 | 841 | 146\r\n\r\n pg_catalog | pg_operator | 0 | 0 | 0 | 0 | 840 | 20\r\n\r\n pg_catalog | pg_rewrite | 659 | 83 | 573 | 62 | 1757 | 161\r\n\r\n pg_catalog | pg_statistic | 2346816 | 25301535 | 2317159 | 2453015 | 144622 | 475\r\n\r\n pg_catalog | pg_trigger | 2495 | 0 | 2085 | 0 | 7367 | 697\r\n\r\n pg_catalog | pg_db_role_setting | 0 | 1 | 0 | 1 | 4 | 4\r\n\r\n pg_catalog | pg_extension | 0 | 0 | 0 | 0 | 10 | 6\r\n\r\n pg_catalog | pg_init_privs | 0 | 0 | 0 | 0 | 180 | 1\r\n\r\n(24 rows)\r\n\r\n\r\nAnd that solved the simple count sql.\r\n\r\n\r\nBUT \"psql -h dbhost -p 5432 -U postgres dbname\" login is still becomes slow after a while when running code that creates a lot of unlogged tables in 16 threads.\r\n\r\n\r\nWhen I kill the test job it is instantly fast again\r\n\r\n\r\nWhat seems to take time was this call triggered by psql (I could not find anything find else related for instance related to this locks)\r\n\r\n\r\nEXPLAIN ANALYZE SELECT pg_catalog.quote_ident(c.relname) FROM 
pg_catalog.pg_class c WHERE c.relkind IN ('r', 'S', 'v', 'm', 'f', 'p') AND substring(pg_catalog.quote_ident(c.relname),1,6)='pg_sta' AND pg_catalog.pg_table_is_visible(c.oid)\r\n\r\nUNION\r\n\r\nSELECT pg_catalog.quote_ident(n.nspname) || '.' FROM pg_catalog.pg_namespace n WHERE substring(pg_catalog.quote_ident(n.nspname) || '.',1,6)='pg_sta' AND (SELECT pg_catalog.count(*) FROM pg_catalog.pg_namespace WHERE substring(pg_catalog.quote_ident(nspname) || '.',1,6) = substring('pg_sta',1,pg_catalog.length(pg_catalog.quote_ident(nspname))+1)) > 1\r\n\r\nUNION\r\n\r\nSELECT pg_catalog.quote_ident(n.nspname) || '.' || pg_catalog.quote_ident(c.relname) FROM pg_catalog.pg_class c, pg_catalog.pg_namespace n WHERE c.relnamespace = n.oid AND c.relkind IN ('r', 'S', 'v', 'm', 'f', 'p') AND substring(pg_catalog.quote_ident(n.nspname) || '.' || pg_catalog.quote_ident(c.relname),1,6)='pg_sta' AND substring(pg_catalog.quote_ident(n.nspname) || '.',1,6) = substring('pg_sta',1,pg_catalog.length(pg_catalog.quote_ident(n.nspname))+1) AND (SELECT pg_catalog.count(*) FROM pg_catalog.pg_namespace WHERE substring(pg_catalog.quote_ident(nspname) || '.',1,6) = substring('pg_sta',1,pg_catalog.length(pg_catalog.quote_ident(nspname))+1)) = 1\r\n\r\nLIMIT 1000\r\n\r\n\r\nHere is slow one https://explain.depesz.com/s/x2Vf the app is running\r\n\r\n\r\nAfter the killing the application is fast https://explain.depesz.com/s/h4fK\r\n\r\n\r\nWe also tried to change the code to do this in 2 steps.\r\n\r\n- First create table\r\n\r\n- Then insert data into table\r\n\r\n\r\nBut that does not help on login either the time vary from 30 secs to 75 sec.\r\n\r\nhttps://explain.depesz.com/s/4SXl\r\n\r\n\r\nThere is no iowait the server the CPU load is 25%, the problem seems to be related to parallel_workers\r\n\r\n\r\nmax_parallel_workers_per_gather\r\n\r\n---------------------------------\r\n\r\n 2\r\n\r\n max_parallel_workers\r\n\r\n----------------------\r\n\r\n 8\r\n\r\n max_worker_processes\r\n\r\n----------------------\r\n\r\n 8\r\n\r\n\r\nSo if we change max_parallel_workers_per_gather = 0\r\n\r\n\r\nThen https://explain.depesz.com/s/kMEm query is fast.\r\n\r\n\r\nThanks for help everybody seems like we have to dig into the parallel_workers world.\r\n\r\n\r\n(have to wait to test that until we can restart postgres)\r\n\r\n​\r\n\r\nLars\r\n\r\n\n\n\n\n\n\n\n\n\n\nHi\n\n\n\n\n>From: Imre\r\n Samu <[email protected]>\n>Sent: Monday,\r\n February 7, 2022 8:51 PM\n>Maybe you can upgrade to 12.9 ( from 12.6 )\r\n   \r\n( https://www.postgresql.org/docs/release/12.9/ )\n\n>And the next minor release =\r\npg 12.10 is expected on February 10th,\r\n 2022 https://www.postgresql.org/developer/roadmap/ \n>As I see - only a minor fix exists for \"system columns\": \r\n\"Don't ignore system columns when estimating the number of groups using extended statistics (Tomas\nVondra)\" \r\nin 12.7\n>\n>I have similar experiences with the system tables - vacuuming is extreme important\n>in my case - \r\nI am calling \"vacuum\" in every ETL job - cleaning my system tables. 
", "msg_date": "Tue, 8 Feb 2022 12:28:31 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow \"select count(*) from information_schema.tables;\" in some\n cases" } ]
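A minimal sketch of the two mitigations the thread above converged on -- routine (non-FULL) vacuuming of the bloated system catalogs, and disabling parallel gather only for the catalog-heavy query. This is illustrative and not part of the original exchange: it assumes a psql session with privileges to vacuum system catalogs, and the catalog names are the ones reported in the thread.

-- Locate the catalogs accumulating dead tuples under heavy temp/unlogged table churn.
SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_all_tables
 WHERE schemaname = 'pg_catalog'
 ORDER BY n_dead_tup DESC
 LIMIT 10;

-- Plain VACUUM takes only a SHARE UPDATE EXCLUSIVE lock, so it can run while the
-- table-creating jobs are still active; VACUUM FULL needs ACCESS EXCLUSIVE, which
-- is why it ran into the deadlock errors reported earlier in the thread.
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_depend;
VACUUM ANALYZE pg_catalog.pg_type;

-- Session-level workaround for the slow information_schema scan, instead of
-- lowering max_parallel_workers_per_gather globally in postgresql.conf.
SET max_parallel_workers_per_gather = 0;
SELECT count(*) FROM information_schema.tables;
RESET max_parallel_workers_per_gather;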
[ { "msg_contents": "Postgres version: 11.4\n\nProblem:\nQuery choosing Bad Index Path (ASC/DESC ordering). Details are provided\nbelow\n\nTable:\n\\d public.distdbentityauditlog1_46625_temp_mahi3;\n Table \"public.distdbentityauditlog1_46625_temp_mahi3\"\n Column | Type | Collation | Nullable |\nDefault\n------------------+-----------------------------+-----------+----------+---------\n zgid | bigint | | not null |\n auditlogid | bigint | | not null |\n recordid | bigint | | |\n recordname | text | | |\n module | character varying(50) | | not null |\n actioninfo | character varying(255) | | not null |\n relatedid | bigint | | |\n relatedname | character varying(255) | | |\n relatedmodule | character varying(50) | | |\n accountid | bigint | | |\n accountname | character varying(255) | | |\n doneby | character varying(255) | | not null |\n userid | bigint | | |\n auditedtime | timestamp without time zone | | not null |\n fieldhistoryinfo | text | | |\n isauditlogdata | boolean | | not null |\n otherdetails | text | | |\n audittype | integer | | not null |\n requesteruserid | bigint | | |\n actiontype | integer | | not null |\n source | integer | | not null |\n module_lower | character varying(50) | | not null |\nIndexes:\n \"distdbentityauditlog1_46625_temp_mahi3_pkey\" PRIMARY KEY, btree (zgid,\nauditedtime, auditlogid)\n \"distdbentityauditlog1_idx1_46625_temp_mahi3\" btree (recordid)\n \"distdbentityauditlog1_idx2_46625_temp_mahi3\" btree (auditlogid)\n \"distdbentityauditlog1_idx3_46625_temp_mahi3\" btree (relatedid)\n \"distdbentityauditlog1_idx4_46625_temp_mahi3\" gist (actioninfo\ngist_trgm_ops)\n \"distdbentityauditlog1_idx5_46625_temp_mahi3\" btree (actioninfo)\n \"distdbentityauditlog1_idx6_46625_temp_mahi3\" btree (auditedtime DESC,\nmodule)\n\n\nexplain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\nrecordname, module, actioninfo, relatedid, relatedname, relatedmodule,\naccountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\nisauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\nsource FROM public.distdbentityauditlog1_46625_temp_mahi3\ndistdbentityauditlog1 WHERE ((actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n'15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n((recordid = '15842006928391817'::bigint) OR (relatedid =\n'15842006928391817'::bigint)) AND (audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n2 DESC LIMIT '10'::bigint;\n\nLimit (cost=0.43..415.30 rows=10 width=400) (actual\ntime=7582.965..7583.477 rows=10 loops=1)\n Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\nrelatedid, relatedname, relatedmodule, accountid, accountname, doneby,\nuserid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\naudittype, requesteruserid, actiontype, source\n Buffers: shared hit=552685 read=1464159\n -> Index Scan Backward using\ndistdbentityauditlog1_46625_temp_mahi3_pkey on\npublic.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n (cost=0.43..436281.55 rows=10516 width=400) (actual\ntime=7582.962..7583.470 rows=10\n loops=1)\n Output: zgid, auditlogid, recordid, recordname, module,\nactioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\ndoneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\notherdetails, audittype, requesteruserid, actiontype, source\n Index Cond: ((distdbentityauditlog1.zgid = 100) 
AND\n(distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\nwithout time zone))\n Filter: (((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n'15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])))\n Rows Removed by Filter: 2943989\n Buffers: shared hit=552685 read=1464159\n Planning Time: 0.567 ms\n Execution Time: 7583.558 ms\n(11 rows)\n\nDoubt:\n In Index Scan Backward using\ndistdbentityauditlog1_46625_temp_mahi3_pkey, the startup time was more. So\nthinking about whether backward scanning takes more time, created a new\nindex and tested with the same query as follows.\n\ncreate index distdbentityauditlog1_idx7_46625_temp_mahi3 on\ndistdbentityauditlog1_46625_temp_mahi3(zgid, auditedtime desc, module desc);\nanalyse public.distdbentityauditlog1_46625_temp_mahi3;\n\nexplain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\nrecordname, module, actioninfo, relatedid, relatedname, relatedmodule,\naccountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\nisauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\nsource FROM public.distdbentityauditlog1_46625_temp_mahi3\ndistdbentityauditlog1 WHERE ((actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n'15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n((recordid = '15842006928391817'::bigint) OR (relatedid =\n'15842006928391817'::bigint)) AND (audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n2 DESC LIMIT '10'::bigint;\n\nLimit (cost=0.43..393.34 rows=10 width=399) (actual\ntime=8115.775..8116.441 rows=10 loops=1)\n Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\nrelatedid, relatedname, relatedmodule, accountid, accountname, doneby,\nuserid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\naudittype, requesteruserid, actiontype, source\n Buffers: shared hit=519970 read=1496874 written=44\n -> Index Scan Backward using\ndistdbentityauditlog1_46625_temp_mahi3_pkey on\npublic.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n (cost=0.43..436209.86 rows=11102 width=399) (actual\ntime=8115.772..8116.435 rows=10\n loops=1)\n Output: zgid, auditlogid, recordid, recordname, module,\nactioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\ndoneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\notherdetails, audittype, requesteruserid, actiontype, source\n Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n(distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\nwithout time zone))\n Filter: (((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n'15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])))\n Rows Removed by Filter: 2943989\n Buffers: shared hit=519970 read=1496874 written=44\n Planning Time: 1.152 ms\n Execution Time: 8116.518 ms\n\nStill no 
improvement in performance.\n\nIf DESC has been removed from ORDER BY clause in query, then the\nperformance is good as follows\n\nexplain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\nrecordname, module, actioninfo, relatedid, relatedname, relatedmodule,\naccountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\nisauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\nsource FROM public.distdbentityauditlog1_46625_temp_mahi3\ndistdbentityauditlog1 WHERE ((actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n'15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n((recordid = '15842006928391817'::bigint) OR (relatedid =\n'15842006928391817'::bigint)) AND (audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14, 2\nLIMIT '10'::bigint;\n\nLimit (cost=0.43..393.34 rows=10 width=399) (actual time=0.471..0.865\nrows=10 loops=1)\n Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\nrelatedid, relatedname, relatedmodule, accountid, accountname, doneby,\nuserid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\naudittype, requesteruserid, actiontype, source\n Buffers: shared hit=24 read=111\n -> Index Scan using distdbentityauditlog1_46625_temp_mahi3_pkey on\npublic.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n (cost=0.43..436209.86 rows=11102 width=399) (actual time=0.468..0.860\nrows=10 loops=1)\n Output: zgid, auditlogid, recordid, recordname, module,\nactioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\ndoneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\notherdetails, audittype, requesteruserid, actiontype, source\n Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n(distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\nwithout time zone))\n Filter: (((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n'15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n'15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])))\n Rows Removed by Filter: 174\n Buffers: shared hit=24 read=111\n Planning Time: 0.442 ms\n Execution Time: 0.923 ms\n\n Thus how to improve performance for DESC operation here?\n\nPostgres version: 11.4Problem:\tQuery choosing Bad Index Path (ASC/DESC ordering). 
", "msg_date": "Tue, 8 Feb 2022 11:25:14 +0530", "msg_from": "Valli Annamalai <[email protected]>", "msg_from_op": true, "msg_subject": "Query choosing Bad Index Path (ASC/DESC ordering)." }, { "msg_contents": "It seems an issue of data distribution. More likely when traversing without\norderby [default ascending order] matching rows were found quickly.
You can\nvalidate the same by using order by 14 asc, 2 asc limit 10.\n\nYou can try creating an index on auditlogid desc, auditedtime desc. OR any\nother combination with auditedtime which here higher chances of finding\nmatching rows quickly.\n\nOn Tue, Feb 8, 2022 at 11:25 AM Valli Annamalai <[email protected]>\nwrote:\n\n> Postgres version: 11.4\n>\n> Problem:\n> Query choosing Bad Index Path (ASC/DESC ordering). Details are provided\n> below\n>\n> Table:\n> \\d public.distdbentityauditlog1_46625_temp_mahi3;\n> Table \"public.distdbentityauditlog1_46625_temp_mahi3\"\n> Column | Type | Collation | Nullable |\n> Default\n>\n> ------------------+-----------------------------+-----------+----------+---------\n> zgid | bigint | | not null |\n> auditlogid | bigint | | not null |\n> recordid | bigint | | |\n> recordname | text | | |\n> module | character varying(50) | | not null |\n> actioninfo | character varying(255) | | not null |\n> relatedid | bigint | | |\n> relatedname | character varying(255) | | |\n> relatedmodule | character varying(50) | | |\n> accountid | bigint | | |\n> accountname | character varying(255) | | |\n> doneby | character varying(255) | | not null |\n> userid | bigint | | |\n> auditedtime | timestamp without time zone | | not null |\n> fieldhistoryinfo | text | | |\n> isauditlogdata | boolean | | not null |\n> otherdetails | text | | |\n> audittype | integer | | not null |\n> requesteruserid | bigint | | |\n> actiontype | integer | | not null |\n> source | integer | | not null |\n> module_lower | character varying(50) | | not null |\n> Indexes:\n> \"distdbentityauditlog1_46625_temp_mahi3_pkey\" PRIMARY KEY, btree\n> (zgid, auditedtime, auditlogid)\n> \"distdbentityauditlog1_idx1_46625_temp_mahi3\" btree (recordid)\n> \"distdbentityauditlog1_idx2_46625_temp_mahi3\" btree (auditlogid)\n> \"distdbentityauditlog1_idx3_46625_temp_mahi3\" btree (relatedid)\n> \"distdbentityauditlog1_idx4_46625_temp_mahi3\" gist (actioninfo\n> gist_trgm_ops)\n> \"distdbentityauditlog1_idx5_46625_temp_mahi3\" btree (actioninfo)\n> \"distdbentityauditlog1_idx6_46625_temp_mahi3\" btree (auditedtime DESC,\n> module)\n>\n>\n> explain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\n> recordname, module, actioninfo, relatedid, relatedname, relatedmodule,\n> accountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\n> isauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\n> source FROM public.distdbentityauditlog1_46625_temp_mahi3\n> distdbentityauditlog1 WHERE ((actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n> '15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n> ((recordid = '15842006928391817'::bigint) OR (relatedid =\n> '15842006928391817'::bigint)) AND (audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n> 09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n> 2 DESC LIMIT '10'::bigint;\n>\n> Limit (cost=0.43..415.30 rows=10 width=400) (actual\n> time=7582.965..7583.477 rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\n> relatedid, relatedname, relatedmodule, accountid, accountname, doneby,\n> userid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\n> audittype, requesteruserid, actiontype, source\n> Buffers: shared hit=552685 read=1464159\n> -> Index Scan Backward using\n> distdbentityauditlog1_46625_temp_mahi3_pkey on\n> public.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n> 
(cost=0.43..436281.55 rows=10516 width=400) (actual\n> time=7582.962..7583.470 rows=10\n> loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype, requesteruserid, actiontype, source\n> Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n> (distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\n> without time zone))\n> Filter: (((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n> 'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n> '15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])))\n> Rows Removed by Filter: 2943989\n> Buffers: shared hit=552685 read=1464159\n> Planning Time: 0.567 ms\n> Execution Time: 7583.558 ms\n> (11 rows)\n>\n> Doubt:\n> In Index Scan Backward using\n> distdbentityauditlog1_46625_temp_mahi3_pkey, the startup time was more. So\n> thinking about whether backward scanning takes more time, created a new\n> index and tested with the same query as follows.\n>\n> create index distdbentityauditlog1_idx7_46625_temp_mahi3 on\n> distdbentityauditlog1_46625_temp_mahi3(zgid, auditedtime desc, module desc);\n> analyse public.distdbentityauditlog1_46625_temp_mahi3;\n>\n> explain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\n> recordname, module, actioninfo, relatedid, relatedname, relatedmodule,\n> accountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\n> isauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\n> source FROM public.distdbentityauditlog1_46625_temp_mahi3\n> distdbentityauditlog1 WHERE ((actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n> '15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n> ((recordid = '15842006928391817'::bigint) OR (relatedid =\n> '15842006928391817'::bigint)) AND (audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n> 09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n> 2 DESC LIMIT '10'::bigint;\n>\n> Limit (cost=0.43..393.34 rows=10 width=399) (actual\n> time=8115.775..8116.441 rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\n> relatedid, relatedname, relatedmodule, accountid, accountname, doneby,\n> userid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\n> audittype, requesteruserid, actiontype, source\n> Buffers: shared hit=519970 read=1496874 written=44\n> -> Index Scan Backward using\n> distdbentityauditlog1_46625_temp_mahi3_pkey on\n> public.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n> (cost=0.43..436209.86 rows=11102 width=399) (actual\n> time=8115.772..8116.435 rows=10\n> loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype, requesteruserid, actiontype, source\n> Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n> (distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\n> without time zone))\n> Filter: (((distdbentityauditlog1.recordid =\n> 
'15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n> 'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n> '15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])))\n> Rows Removed by Filter: 2943989\n> Buffers: shared hit=519970 read=1496874 written=44\n> Planning Time: 1.152 ms\n> Execution Time: 8116.518 ms\n>\n> Still no improvement in performance.\n>\n> If DESC has been removed from ORDER BY clause in query, then the\n> performance is good as follows\n>\n> explain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\n> recordname, module, actioninfo, relatedid, relatedname, relatedmodule,\n> accountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\n> isauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\n> source FROM public.distdbentityauditlog1_46625_temp_mahi3\n> distdbentityauditlog1 WHERE ((actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n> '15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n> ((recordid = '15842006928391817'::bigint) OR (relatedid =\n> '15842006928391817'::bigint)) AND (audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n> 09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14, 2\n> LIMIT '10'::bigint;\n>\n> Limit (cost=0.43..393.34 rows=10 width=399) (actual time=0.471..0.865\n> rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\n> relatedid, relatedname, relatedmodule, accountid, accountname, doneby,\n> userid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\n> audittype, requesteruserid, actiontype, source\n> Buffers: shared hit=24 read=111\n> -> Index Scan using distdbentityauditlog1_46625_temp_mahi3_pkey on\n> public.distdbentityauditlog1_46625_temp_mahi3 distdbentityauditlog1\n> (cost=0.43..436209.86 rows=11102 width=399) (actual time=0.468..0.860\n> rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype, requesteruserid, actiontype, source\n> Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n> (distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\n> without time zone))\n> Filter: (((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n> 'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR (distdbentityauditlog1.relatedid =\n> '15842006928391817'::bigint)) AND (distdbentityauditlog1.audittype <> ALL\n> ('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])))\n> Rows Removed by Filter: 174\n> Buffers: shared hit=24 read=111\n> Planning Time: 0.442 ms\n> Execution Time: 0.923 ms\n>\n> Thus how to improve performance for DESC operation here?\n>\n>\n>\n>\n\n-- \n*Monika Yadav*\nPhone: 9971515242
", "msg_date": "Tue, 8 Feb 2022 17:34:41 +0530", "msg_from": "Mind Body Nature <[email protected]>",
"msg_from_op": false, "msg_subject": "Re: Query choosing Bad Index Path (ASC/DESC ordering)." } ]
[ { "msg_contents": "Postgres version: 11.4\n\nProblem:\n Query choosing Bad Index Path. Details are provided below\n\nTable :\n\\d public.distdbentityauditlog1_46625_temp_mahi1;\n Table \"public.distdbentityauditlog1_46625_temp_mahi1\"\n Column | Type | Collation | Nullable |\nDefault\n------------------+-----------------------------+-----------+----------+---------\n zgid | bigint | | not null |\n auditlogid | bigint | | not null |\n recordid | bigint | | |\n recordname | text | | |\n module | character varying(50) | | not null |\n actioninfo | character varying(255) | | not null |\n relatedid | bigint | | |\n relatedname | character varying(255) | | |\n relatedmodule | character varying(50) | | |\n accountid | bigint | | |\n accountname | character varying(255) | | |\n doneby | character varying(255) | | not null |\n userid | bigint | | |\n auditedtime | timestamp without time zone | | not null |\n fieldhistoryinfo | text | | |\n isauditlogdata | boolean | | not null |\n otherdetails | text | | |\n audittype | integer | | not null |\n requesteruserid | bigint | | |\n actiontype | integer | | not null |\n source | integer | | not null |\n module_lower | character varying(50) | | not null |\nIndexes:\n \"distdbentityauditlog1_46625_temp_mahi1_pkey\" PRIMARY KEY, btree (zgid,\nauditedtime, auditlogid)\n \"distdbentityauditlog1_46625_temp_mahi1_actioninfo_idx\" gist\n(actioninfo gist_trgm_ops)\n \"distdbentityauditlog1_46625_temp_mahi1_actioninfo_idx1\" btree\n(actioninfo)\n \"distdbentityauditlog1_46625_temp_mahi1_auditedtime_module_idx\" btree\n(auditedtime DESC, module)\n \"distdbentityauditlog1_46625_temp_mahi1_auditlogid_idx\" btree\n(auditlogid DESC)\n \"distdbentityauditlog1_46625_temp_mahi1_idx5\" btree (module)\n \"distdbentityauditlog1_46625_temp_mahi1_idx6\" btree (recordid,\nauditedtime DESC)\n \"distdbentityauditlog1_46625_temp_mahi1_idx7\" btree (relatedid,\nauditedtime DESC)\n\n\nexplain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\nrecordname, module, actioninfo, relatedid, relatedname, relatedmodule,\naccountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\nisauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\nsource FROM public.distdbentityauditlog1_46625_temp_mahi1\ndistdbentityauditlog1 WHERE ((actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n'15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n((recordid = '15842006928391817'::bigint) OR (relatedid =\n'15842006928391817'::bigint)) AND (audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n2 DESC LIMIT '10'::bigint;\n\n Limit (cost=0.43..438.62 rows=10 width=400) (actual\n> time=8045.030..8045.576 rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\n> relatedid, relatedname, relatedmodule, accountid, accountname, doneby,\n> userid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\n> audittype, requesteru\n> serid, actiontype, source\n> Buffers: shared hit=548660 read=1485553\n> -> Index Scan Backward using\n> distdbentityauditlog1_46625_temp_mahi1_pkey on\n> public.distdbentityauditlog1_46625_temp_mahi1 distdbentityauditlog1\n> (cost=0.43..445948.91 rows=10177 width=400) (actual\n> time=8045.027..8045.569 rows=10\n> loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, 
auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype, requ\n> esteruserid, actiontype, source\n> Index Cond: ((distdbentityauditlog1.zgid = 100) AND\n> (distdbentityauditlog1.auditedtime >= '2021-03-27 09:43:17'::timestamp\n> without time zone))\n> Filter: (((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR ((distdbentityauditlog1.module)::text =\n> 'Contacts'::text)) AND ((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) OR (distdbentityaudi\n> tlog1.relatedid = '15842006928391817'::bigint)) AND\n> (distdbentityauditlog1.audittype <> ALL ('{2,4,5,6}'::integer[])) AND\n> (distdbentityauditlog1.actiontype = ANY ('{2,9,14,55,56,67}'::integer[])))\n> Rows Removed by Filter: 2943989\n> Buffers: shared hit=548660 read=1485553\n> Planning Time: 0.530 ms\n> Execution Time: 8045.687 ms\n>\n\n\nDoubt\n 1. Why is this Query choosing Index Scan Backward using table1_pkey\nIndex though it's cost is high. It can rather choose\n BITMAP OR\n (Index on RECORDID) i.e; table1_idx6\n (Index on RELATEDID) i.e; table1_idx7\n\n Below is the selectivity details from pg_stats table\n - Recordid has 51969 distinct values. And selectivity\n(most_common_freqs) for recordid = 15842006928391817 is 0.00376667\n - Relatedid has 82128 distinct values. And selectivity\n(most_common_freqs) for recordid = 15842006928391817 is 0.0050666\n\nSince, selectivity is less, this should logically choose this Index, which\nwould have improve my query performance here.\nI cross-checked the same by removing PrimaryKey to this table and query now\nchooses these indexes and response is in 100ms. Please refer the plan below\n(after removing primary key):\n\n\n alter table public.distdbentityauditlog1_46625_temp_mahi1 drop constraint\ndistdbentityauditlog1_46625_temp_mahi1_pkey;\n analyse public.distdbentityauditlog1_46625_temp_mahi1;\n\n\n explain (analyse, buffers, verbose) SELECT zgid, auditlogid, recordid,\nrecordname, module, actioninfo, relatedid, relatedname, relatedmodule,\naccountid, accountname, doneby, userid, auditedtime, fieldhistoryinfo,\nisauditlogdata, otherdetails, audittype, requesteruserid, actiontype,\nsource FROM public.distdbentityauditlog1_46625_temp_mahi1\ndistdbentityauditlog1 WHERE ((actiontype = ANY\n('{2,9,14,55,56,67}'::integer[])) AND ((recordid =\n'15842006928391817'::bigint) OR ((module)::text = 'Contacts'::text)) AND\n((recordid = '15842006928391817'::bigint) OR (relatedid =\n'15842006928391817'::bigint)) AND (audittype <> ALL\n('{2,4,5,6}'::integer[])) AND (auditedtime >= '2021-03-27\n09:43:17'::timestamp without time zone) AND (zgid = 100)) ORDER BY 14 DESC,\n2 DESC LIMIT '10'::bigint;\n\n Limit (cost=140917.99..140918.01 rows=10 width=402) (actual\n> time=103.667..103.673 rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module, actioninfo,\n> relatedid, relatedname, relatedmodule, accountid, accountname, doneby,\n> userid, auditedtime, fieldhistoryinfo, isauditlogdata, otherdetails,\n> audittype, requesteru\n> serid, actiontype, source\n> Buffers: shared read=10448 written=9\n> -> Sort (cost=140917.99..140942.38 rows=9759 width=402) (actual\n> time=103.665..103.667 rows=10 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype, requ\n> esteruserid, actiontype, source\n> Sort Key: distdbentityauditlog1.auditedtime DESC,\n> 
distdbentityauditlog1.auditlogid DESC\n> Sort Method: top-N heapsort Memory: 34kB\n> Buffers: shared read=10448 written=9\n> -> Bitmap Heap Scan on\n> public.distdbentityauditlog1_46625_temp_mahi1 distdbentityauditlog1\n> (cost=686.74..140707.10 rows=9759 width=402) (actual time=12.291..79.847\n> rows=16824 loops=1)\n> Output: zgid, auditlogid, recordid, recordname, module,\n> actioninfo, relatedid, relatedname, relatedmodule, accountid, accountname,\n> doneby, userid, auditedtime, fieldhistoryinfo, isauditlogdata,\n> otherdetails, audittype\n> , requesteruserid, actiontype, source\n> Recheck Cond: (((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) AND (distdbentityauditlog1.auditedtime >=\n> '2021-03-27 09:43:17'::timestamp without time zone)) OR\n> ((distdbentityauditlog1.relatedid = '158\n> 42006928391817'::bigint) AND (distdbentityauditlog1.auditedtime >=\n> '2021-03-27 09:43:17'::timestamp without time zone)))\n> Filter: ((distdbentityauditlog1.zgid = 100) AND\n> ((distdbentityauditlog1.recordid = '15842006928391817'::bigint) OR\n> ((distdbentityauditlog1.module)::text = 'Contacts'::text)) AND\n> (distdbentityauditlog1.audittype <>\n> ALL ('{2,4,5,6}'::integer[])) AND (distdbentityauditlog1.actiontype = ANY\n> ('{2,9,14,55,56,67}'::integer[])))\n> Heap Blocks: exact=10267\n> Buffers: shared read=10448 written=9\n> -> BitmapOr (cost=686.74..686.74 rows=32499 width=0)\n> (actual time=9.464..9.464 rows=0 loops=1)\n> Buffers: shared read=181\n> -> Bitmap Index Scan on\n> distdbentityauditlog1_46625_temp_mahi1_idx6 (cost=0.00..348.93 rows=16250\n> width=0) (actual time=5.812..5.812 rows=16928 loops=1)\n> Index Cond: ((distdbentityauditlog1.recordid =\n> '15842006928391817'::bigint) AND (distdbentityauditlog1.auditedtime >=\n> '2021-03-27 09:43:17'::timestamp without time zone))\n> Buffers: shared read=95\n> -> Bitmap Index Scan on\n> distdbentityauditlog1_46625_temp_mahi1_idx7 (cost=0.00..332.93 rows=16250\n> width=0) (actual time=3.650..3.650 rows=16824 loops=1)\n> Index Cond: ((distdbentityauditlog1.relatedid =\n> '15842006928391817'::bigint) AND (distdbentityauditlog1.auditedtime >=\n> '2021-03-27 09:43:17'::timestamp without time zone))\n> Buffers: shared read=86\n> Planning Time: 1.110 ms\n> Execution Time: 103.755 ms\n>
", "msg_date": "Wed, 9 Feb 2022 11:07:55 +0530", "msg_from": "Valli Annamalai <[email protected]>", "msg_from_op": true, "msg_subject": "Query chooses Bad Index Path" }, { "msg_contents": "It's a bit annoying that you post the same query over and over again, \nstarting a new thread every time. Don't do that, please, it's just \nconfusing, people lose track of information you already provided in \nother threads etc.\n\nNow, to the question ...\n\nOn 2/9/22 06:37, Valli Annamalai wrote:\n> Postgres version: 11.4\n> \n> Problem:\n>     Query choosing Bad Index Path. Details are provided below\n >\n> ...\n> \n> \n> Doubt\n>    1. Why is this Query choosing Index Scan Backward using table1_pkey \n> Index though it's cost is high. It can rather choose\n>             BITMAP OR\n>                   (Index on RECORDID) i.e; table1_idx6\n>                   (Index on RELATEDID) i.e; table1_idx7\n> \n>       Below is the selectivity details from pg_stats table\n>         - Recordid has 51969 distinct values. And selectivity \n> (most_common_freqs) for recordid = 15842006928391817 is 0.00376667\n>         - Relatedid has 82128 distinct values. And selectivity \n> (most_common_freqs) for recordid = 15842006928391817 is 0.0050666\n> \n> Since, selectivity is less, this should logically choose this Index, \n> which would have improve my query performance here.\n\nWell, the filter condition is much more complex - it's not just \nconditions on recordid, but various conditions on other columns, with \nboth AND and OR. So it's possible the estimate is off, and the optimizer \npicks the wrong plan. Try running explain analyze without the LIMIT, \nthat'll tell you how accurate the estimates are (LIMIT terminates early, \nso the actual rowcount is incomplete).\n\nThe other option is a data distribution issue, as pointed out by Monika \nYadav in the other thread. The optimizer assumes matching rows are \ndistributed uniformly in the input relation, but chances are they're \neither close to beginning/end depending on how you sort it.\n\nImagine you have 1000000 rows, 1000 of them match the filter, and you \nhave LIMIT 10. If the matching rows are distributed uniformly, it's \nenough to scan 1% of the input, i.e. 10000 rows (because there's one \nmatching row for every 1000 rows, on average).\n\nBut let's assume the matching rows are not distributed uniformly, but at \nthe end, when you sort it. Well, you'll have to go through 100% of the \ninput. But the optimizer won't realize that.\n\nThis is a known / common issue with LIMIT, unfortunately. The estimated \ncost is much lower than it should be, and it's hard to fix.\n\n> I cross-checked the same by removing PrimaryKey to this table and query \n> now chooses these indexes and response is in 100ms.
Please refer the \n> plan below (after removing primary key):\n> \n> \n\nWell, yeah. That's mostly consistent with the data distribution theory.\n\nI'd try two things:\n\n1) define a covering index, so that the query can do Index Only Scan\n\n2) define partial index, moving some of the filter conditions to index \npredicate (not sure if that's possible, it depends on what parameters of \nthe condition are static)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Feb 2022 17:51:48 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query chooses Bad Index Path" } ]
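
The two suggestions at the end of the reply above could look roughly like the DDL below. This is only a sketch: the index names are invented, the column lists would have to be adapted to the real access pattern, and a true index-only scan would need every selected column in the INCLUDE list, which may be impractical for a row this wide. INCLUDE is usable here because the thread is on PostgreSQL 11.

    -- (1) covering-style indexes, so the recordid/relatedid paths also carry the
    --     ordering column and can avoid some heap visits:
    CREATE INDEX distdbentityauditlog1_recordid_cover_idx
        ON distdbentityauditlog1_46625_temp_mahi1 (recordid, auditedtime DESC)
        INCLUDE (auditlogid);

    -- (2) a partial index that folds the static parts of the filter into the index
    --     predicate; this only helps if these values really are fixed in the application:
    CREATE INDEX distdbentityauditlog1_relatedid_partial_idx
        ON distdbentityauditlog1_46625_temp_mahi1 (relatedid, auditedtime DESC)
        WHERE audittype <> ALL ('{2,4,5,6}'::integer[])
          AND actiontype = ANY ('{2,9,14,55,56,67}'::integer[]);
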
[ { "msg_contents": "Hi Team,\n\nGreetings,\n\nWe are facing an issue with long running queries in PostgreSQL Database. We recently migrated the database from Oracle to PostgreSQL and we found that there are approx. 4 to 5 queries which was running in oracle in 5 mins and it is taking more than 50 mins in PostgreSQL. We checked every parameter in PostgreSQL, but it is not helping.\n\nI am attaching the query and explain plan with Analyze option in the attachment, but it is not helping.\n\nRequest you to please help and assist on this as it is hampering the productivity and effecting the business.\n\nThanks and Regards,\nMukesh Kumar", "msg_date": "Tue, 22 Feb 2022 14:11:58 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Running Queries in Azure PostgreSQL " }, { "msg_contents": "On Tue, Feb 22, 2022 at 02:11:58PM +0000, Kumar, Mukesh wrote:\n\n> -> Hash Join (cost=6484.69..43117.63 rows=1 width=198) (actual time=155.508..820.705 rows=52841 loops=1)\"\n> Hash Cond: (((lms_doc_property_rights_assoc.doc_sid_c)::text = (lms_doc_propright_status_assoc.doc_sid_c)::text) AND ((lms_property_rights_base.property_sid_k)::text = (lms_doc_propright_status_assoc.property_sid_c)::text))\"\n\nYour problem seems to start here. It thinks it'll get one row but actually\ngets 53k. You can join those two tables on their own to understand the problem\nbetter. Is either or both halves of the AND estimated well ?\n\nIf both halves are individually estimated well, but estimated poorly together\nwith AND, then you have correlation.\n\nAre either of those conditions redundant with the other ? Half of the AND\nmight be unnecessary and could be removed.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 22 Feb 2022 15:27:10 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Running Queries in Azure PostgreSQL" }, { "msg_contents": "Hi Justin , \n\nThanks for your help , After committing 1 parameter , the whole query executed in less than 1 min.\n\n\n\nThanks and Regards, \nMukesh Kumar\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: Wednesday, February 23, 2022 2:57 AM\nTo: Kumar, Mukesh <[email protected]>\nCc: [email protected]\nSubject: Re: Slow Running Queries in Azure PostgreSQL\n\nOn Tue, Feb 22, 2022 at 02:11:58PM +0000, Kumar, Mukesh wrote:\n\n> -> Hash Join (cost=6484.69..43117.63 rows=1 width=198) (actual time=155.508..820.705 rows=52841 loops=1)\"\n> Hash Cond: (((lms_doc_property_rights_assoc.doc_sid_c)::text = (lms_doc_propright_status_assoc.doc_sid_c)::text) AND ((lms_property_rights_base.property_sid_k)::text = (lms_doc_propright_status_assoc.property_sid_c)::text))\"\n\nYour problem seems to start here. It thinks it'll get one row but actually gets 53k. You can join those two tables on their own to understand the problem better. Is either or both halves of the AND estimated well ?\n\nIf both halves are individually estimated well, but estimated poorly together with AND, then you have correlation.\n\nAre either of those conditions redundant with the other ? Half of the AND might be unnecessary and could be removed.\n\n--\nJustin\n\n\n", "msg_date": "Fri, 25 Feb 2022 12:41:33 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Slow Running Queries in Azure PostgreSQL" } ]
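
For a misestimate like the hash join in the thread above (1 row expected, 52841 returned), the check suggested in the second message can be run in isolation. The sketch below uses only the tables and columns visible in the quoted plan fragment; the real query almost certainly has additional join conditions that are omitted here:

    -- Compare estimated vs. actual rows for just the problematic join clause.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM   lms_doc_propright_status_assoc s
    JOIN   lms_doc_property_rights_assoc  d ON d.doc_sid_c = s.doc_sid_c
    JOIN   lms_property_rights_base       p ON p.property_sid_k = s.property_sid_c;

    -- If each equality is estimated well on its own but the pair together is not,
    -- the two columns are probably correlated; distinct counts make that visible.
    SELECT count(*)                                    AS n_rows,
           count(DISTINCT doc_sid_c)                   AS distinct_doc,
           count(DISTINCT property_sid_c)              AS distinct_prop,
           count(DISTINCT (doc_sid_c, property_sid_c)) AS distinct_pairs
    FROM   lms_doc_propright_status_assoc;
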
[ { "msg_contents": "Hi,\n\n\nI've experienced a situation where the planner seems to make a very poor \nchoice with a prepared query after the first five executions.  Looking \nat the documentation, I think this happens because it switches from a \ncustom plan to a generic one, and doesn't make a good choice for the \ngeneric one.\n\nPostgres version: running in docker, reports to be 'Debian 14.1-1.pgdg110+1'\n\nIf I force it to use a custom plan via 'set local plan_cache_mode = \nforce_custom_plan', then I don't notice any slowdown.  Without it, the \n6th and onwards calls can take 1 second to 15 seconds each, as opposed \nto about 10ms.\n\nSince I have a workaround, I don't necessarily need assistance, but \nposting this here in case it's of value as a test case. Here's a test \ncase that reliably duplicates this issue for me:\n\n----\n\ncreate table test (\n   test_id serial primary key,\n   data text\n);\n\ninsert into test (data) (select data from (select \ngenerate_series(1,10000) AS id, md5(random()::text) AS data) x);\n\nprepare foo_test(text, text, int, text, bool) as SELECT * FROM (SELECT\n   *,\n   count(*) OVER () > $3 AS has_more,\n   row_number() OVER ()\n   FROM (\n     WITH counted AS (\n       SELECT count(*) AS total\n       FROM   (select test_id::text, data\nfrom test\nwhere\n   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\nand\n   (cast($2 as text) is null or lower(test_id::text) like '%' || \nlower($2) || '%')) base\n     ), cursor_row AS (\n       SELECT base.test_id\n       FROM   (select test_id::text, data\nfrom test\nwhere\n   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\nand\n   (cast($2 as text) is null or lower(test_id::text) like '%' || \nlower($2) || '%')) base\n       WHERE  base.test_id = $4\n     )\n     SELECT counted.*, base.*\n       FROM   (select test_id::text, data\nfrom test\nwhere\n   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\nand\n   (cast($2 as text) is null or lower(test_id::text) like '%' || \nlower($2) || '%')) base\n       LEFT JOIN   cursor_row ON true\n       LEFT JOIN   counted ON true\n       WHERE ((\n             $4 IS NULL OR cast($5 as bool) IS NULL\n           ) OR (\n             (base.test_id)\n               > (cursor_row.test_id)\n           ))\n       ORDER BY base.test_id ASC\n       LIMIT $3 + 1\n) xy LIMIT $3 ) z ORDER BY row_number ASC;\n\n\\timing\n\nexecute foo_test(null, null, 5, 500, true);\nexecute foo_test(null, null, 5, 500, true);\nexecute foo_test(null, null, 5, 500, true);\nexecute foo_test(null, null, 5, 500, true);\nexecute foo_test(null, null, 5, 500, true);\n\n-- This one should be slower:\nexecute foo_test(null, null, 5, 500, true);\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 14:37:59 +1100", "msg_from": "Mark Saward <[email protected]>", "msg_from_op": true, "msg_subject": "Slow plan choice with prepared query" }, { "msg_contents": "Dag, if you ain't right!  I can duplicate this on the ones I tested \nwith: PG v11 and v14.  Gonna start diving into this myself...\n\nRegards,\nMichael Vitale\n\n\nMark Saward wrote on 2/23/2022 10:37 PM:\n> Hi,\n>\n>\n> I've experienced a situation where the planner seems to make a very \n> poor choice with a prepared query after the first five executions.  
\n> Looking at the documentation, I think this happens because it switches \n> from a custom plan to a generic one, and doesn't make a good choice \n> for the generic one.\n>\n> Postgres version: running in docker, reports to be 'Debian \n> 14.1-1.pgdg110+1'\n>\n> If I force it to use a custom plan via 'set local plan_cache_mode = \n> force_custom_plan', then I don't notice any slowdown.  Without it, the \n> 6th and onwards calls can take 1 second to 15 seconds each, as opposed \n> to about 10ms.\n>\n> Since I have a workaround, I don't necessarily need assistance, but \n> posting this here in case it's of value as a test case. Here's a test \n> case that reliably duplicates this issue for me:\n>\n> ----\n>\n> create table test (\n>   test_id serial primary key,\n>   data text\n> );\n>\n> insert into test (data) (select data from (select \n> generate_series(1,10000) AS id, md5(random()::text) AS data) x);\n>\n> prepare foo_test(text, text, int, text, bool) as SELECT * FROM (SELECT\n>   *,\n>   count(*) OVER () > $3 AS has_more,\n>   row_number() OVER ()\n>   FROM (\n>     WITH counted AS (\n>       SELECT count(*) AS total\n>       FROM   (select test_id::text, data\n> from test\n> where\n>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n> and\n>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n> lower($2) || '%')) base\n>     ), cursor_row AS (\n>       SELECT base.test_id\n>       FROM   (select test_id::text, data\n> from test\n> where\n>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n> and\n>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n> lower($2) || '%')) base\n>       WHERE  base.test_id = $4\n>     )\n>     SELECT counted.*, base.*\n>       FROM   (select test_id::text, data\n> from test\n> where\n>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n> and\n>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n> lower($2) || '%')) base\n>       LEFT JOIN   cursor_row ON true\n>       LEFT JOIN   counted ON true\n>       WHERE ((\n>             $4 IS NULL OR cast($5 as bool) IS NULL\n>           ) OR (\n>             (base.test_id)\n>               > (cursor_row.test_id)\n>           ))\n>       ORDER BY base.test_id ASC\n>       LIMIT $3 + 1\n> ) xy LIMIT $3 ) z ORDER BY row_number ASC;\n>\n> \\timing\n>\n> execute foo_test(null, null, 5, 500, true);\n> execute foo_test(null, null, 5, 500, true);\n> execute foo_test(null, null, 5, 500, true);\n> execute foo_test(null, null, 5, 500, true);\n> execute foo_test(null, null, 5, 500, true);\n>\n> -- This one should be slower:\n> execute foo_test(null, null, 5, 500, true);\n>\n>\n>\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 13:45:57 -0500", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan choice with prepared query" }, { "msg_contents": "As per PG official documentation on PREPARE, it is working as expected.  \nUse custom plan, but after 5th iteration compare cost of custom plan vs \ngeneric plan and use the one with the less cost which is the generic \nplan even though it is not as performant. Look at explain output to see \nthe diffs between 5th iteration and 6th one:  explain (analyze, summary, \nbuffers true) execute foo_test(null, null, 5, 500, true);\n\nIt appears the SORT is the problem and a mismatch between text and \ninteger for base.text_id? 
--> WHERE  base.test_id = $4\n\nRegards,\nMichael Vitale\n\n\n\nMichaelDBA wrote on 2/24/2022 1:45 PM:\n> Dag, if you ain't right!  I can duplicate this on the ones I tested \n> with: PG v11 and v14.  Gonna start diving into this myself...\n>\n> Regards,\n> Michael Vitale\n>\n>\n> Mark Saward wrote on 2/23/2022 10:37 PM:\n>> Hi,\n>>\n>>\n>> I've experienced a situation where the planner seems to make a very \n>> poor choice with a prepared query after the first five executions. \n>> Looking at the documentation, I think this happens because it \n>> switches from a custom plan to a generic one, and doesn't make a good \n>> choice for the generic one.\n>>\n>> Postgres version: running in docker, reports to be 'Debian \n>> 14.1-1.pgdg110+1'\n>>\n>> If I force it to use a custom plan via 'set local plan_cache_mode = \n>> force_custom_plan', then I don't notice any slowdown.  Without it, \n>> the 6th and onwards calls can take 1 second to 15 seconds each, as \n>> opposed to about 10ms.\n>>\n>> Since I have a workaround, I don't necessarily need assistance, but \n>> posting this here in case it's of value as a test case. Here's a test \n>> case that reliably duplicates this issue for me:\n>>\n>> ----\n>>\n>> create table test (\n>>   test_id serial primary key,\n>>   data text\n>> );\n>>\n>> insert into test (data) (select data from (select \n>> generate_series(1,10000) AS id, md5(random()::text) AS data) x);\n>>\n>> prepare foo_test(text, text, int, text, bool) as SELECT * FROM (SELECT\n>>   *,\n>>   count(*) OVER () > $3 AS has_more,\n>>   row_number() OVER ()\n>>   FROM (\n>>     WITH counted AS (\n>>       SELECT count(*) AS total\n>>       FROM   (select test_id::text, data\n>> from test\n>> where\n>>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n>> and\n>>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n>> lower($2) || '%')) base\n>>     ), cursor_row AS (\n>>       SELECT base.test_id\n>>       FROM   (select test_id::text, data\n>> from test\n>> where\n>>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n>> and\n>>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n>> lower($2) || '%')) base\n>>       WHERE  base.test_id = $4\n>>     )\n>>     SELECT counted.*, base.*\n>>       FROM   (select test_id::text, data\n>> from test\n>> where\n>>   (cast($1 as text) is null or lower(data) like '%' || lower($1) || '%')\n>> and\n>>   (cast($2 as text) is null or lower(test_id::text) like '%' || \n>> lower($2) || '%')) base\n>>       LEFT JOIN   cursor_row ON true\n>>       LEFT JOIN   counted ON true\n>>       WHERE ((\n>>             $4 IS NULL OR cast($5 as bool) IS NULL\n>>           ) OR (\n>>             (base.test_id)\n>>               > (cursor_row.test_id)\n>>           ))\n>>       ORDER BY base.test_id ASC\n>>       LIMIT $3 + 1\n>> ) xy LIMIT $3 ) z ORDER BY row_number ASC;\n>>\n>> \\timing\n>>\n>> execute foo_test(null, null, 5, 500, true);\n>> execute foo_test(null, null, 5, 500, true);\n>> execute foo_test(null, null, 5, 500, true);\n>> execute foo_test(null, null, 5, 500, true);\n>> execute foo_test(null, null, 5, 500, true);\n>>\n>> -- This one should be slower:\n>> execute foo_test(null, null, 5, 500, true);\n>>\n>>\n>>\n>\n>\n>\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 14:55:34 -0500", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan choice with prepared query" } ]
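For reference, the workaround mentioned in the first post, spelled out. plan_cache_mode exists since PostgreSQL 12; SET LOCAL confines it to the current transaction, and the role name in the ALTER ROLE variant is an assumption.

BEGIN;
SET LOCAL plan_cache_mode = force_custom_plan;
EXECUTE foo_test(null, null, 5, 500, true);
COMMIT;

-- Or scope the setting to an application role instead of per transaction:
ALTER ROLE app_user SET plan_cache_mode = force_custom_plan;

-- On PostgreSQL 14+ you can see whether a prepared statement has switched to
-- a generic plan:
SELECT name, generic_plans, custom_plans FROM pg_prepared_statements;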
[ { "msg_contents": "Dear fellow DBAs,\n\nI am seeking for some guidance with the following case that our developers\nhave thrown at me and I apologize in advance for this lengthy mail ...\n\n$> postgres --version\npostgres (PostgreSQL) 13.6\n\n\nWe are dealing with the following issue:\n\n\nselect version, content from orderstore.order\nWHERE jsonb_to_tsvector('english', content, '[\"all\"]') @@\nwebsearch_to_tsquery('english', '1.20709841') limit 10 ;\n\n\nThe Devs told me that this query normally finishes within a reasonable\namount of time (<1sec) but every day - and all of a sudden - performance\nworsens to execution times > 20sec.\n\nFurthermore I was told:\n\n\"When we change the query to 'limit 100' it runs fast again\"\n\"When we execute a 'vacuum orderstore.order' everything becomes good again\n- but that only lasts for a few hours\"\n\nSo I scheduled a little script to be executed every minute which contains 3\nexplains.\n\n1 query with limit 10\n1 query with limit 100\n1 query with the limit-clause omitted\n\nAnd here's a quick grep of the result after a few hours:\n\n ...\n\n Execution Time: 1.413 ms <= limit 10\n Execution Time: 0.389 ms <= limit 100\n Execution Time: 0.297 ms <= limit clause omitted\n Execution Time: 1.456 ms\n Execution Time: 0.396 ms\n Execution Time: 0.302 ms\n Execution Time: 1.412 ms\n Execution Time: 0.428 ms\n Execution Time: 0.255 ms\n Execution Time: 1.404 ms\n Execution Time: 0.403 ms\n Execution Time: 0.258 ms\n Execution Time: 25588.448 ms <= limit 10\n Execution Time: 0.919 ms <= limit 100\n Execution Time: 0.453 ms <= limit clause omitted\n Execution Time: 25657.524 ms\n Execution Time: 0.965 ms\n Execution Time: 0.452 ms\n Execution Time: 25843.139 ms\n Execution Time: 0.959 ms\n Execution Time: 0.446 ms\n Execution Time: 25631.389 ms\n Execution Time: 0.946 ms\n Execution Time: 0.447 ms\n Execution Time: 25452.764 ms\n Execution Time: 0.937 ms\n Execution Time: 0.444 ms\n <= here I manually vacuumed the table\n Execution Time: 0.071 ms\n Execution Time: 0.021 ms\n Execution Time: 0.015 ms\n Execution Time: 0.072 ms\n Execution Time: 0.023 ms\n Execution Time: 0.017 ms\n Execution Time: 0.064 ms\n Execution Time: 0.021 ms\n Execution Time: 0.015 ms\n Execution Time: 0.063 ms\n Execution Time: 0.020 ms\n Execution Time: 0.015 ms\n\n ...\n\n\nTurned out the devs were right with their complaints.\n\nThe execution plan changed within one minute from using an index to a\nsequential scan;\n\nHere are the details (In the \"LOG:\"-line I select the current timestamp and\nthe row count of the table):\n\n****************************\n*** the last 'good' run: ***\n****************************\n\n LOG: | 2022-02-24 13:47:01.747416+01 | 9653\n\n LIMIT 10:\n\n Limit (cost=752.37..789.30 rows=10 width=22) (actual time=1.388..1.390\nrows=1 loops=1)\n -> Bitmap Heap Scan on \"order\" (cost=752.37..929.63 rows=48 width=22)\n(actual time=1.387..1.388 rows=1 loops=1)\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..752.36\nrows=48 width=0) (actual time=1.374..1.374 rows=1 loops=1)\n Index Cond: (jsonb_to_tsvector('english'::regconfig,\ncontent, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Planning Time: 0.401 ms\n Execution Time: 1.404 ms\n\n LIMIT 100:\n\n Limit (cost=752.37..929.63 rows=48 width=22) (actual time=0.391..0.391\nrows=1 loops=1)\n -> Bitmap Heap Scan on \"order\" (cost=752.37..929.63 rows=48 
width=22)\n(actual time=0.390..0.391 rows=1 loops=1)\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..752.36\nrows=48 width=0) (actual time=0.387..0.387 rows=1 loops=1)\n Index Cond: (jsonb_to_tsvector('english'::regconfig,\ncontent, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Planning Time: 0.136 ms\n Execution Time: 0.403 ms\n\n NO LIMIT:\n\n Bitmap Heap Scan on \"order\" (cost=752.37..929.63 rows=48 width=22)\n(actual time=0.248..0.249 rows=1 loops=1)\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..752.36\nrows=48 width=0) (actual time=0.245..0.245 rows=1 loops=1)\n Index Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Planning Time: 0.107 ms\n Execution Time: 0.258 ms\n\n\n*********************************************\n*** the first 'bad' run (one minute later ***\n*********************************************\n\n LOG: | 2022-02-24 13:48:01.840362+01 | 9653\n\n LIMIT 10:\n\n Limit (cost=0.00..804.97 rows=10 width=22) (actual\ntime=23970.845..25588.432 rows=1 loops=1)\n -> Seq Scan on \"order\" (cost=0.00..3863.86 rows=48 width=22) (actual\ntime=23970.843..25588.429 rows=1 loops=1)\n Filter: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Rows Removed by Filter: 9652\n Planning Time: 0.430 ms\n Execution Time: 25588.448 ms\n\n LIMIT 100:\n\n Limit (cost=788.37..965.63 rows=48 width=22) (actual time=0.900..0.902\nrows=1 loops=1)\n -> Bitmap Heap Scan on \"order\" (cost=788.37..965.63 rows=48 width=22)\n(actual time=0.900..0.901 rows=1 loops=1)\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..788.36\nrows=48 width=0) (actual time=0.894..0.895 rows=1 loops=1)\n Index Cond: (jsonb_to_tsvector('english'::regconfig,\ncontent, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Planning Time: 0.187 ms\n Execution Time: 0.919 ms\n\n NO LIMIT:\n\n Bitmap Heap Scan on \"order\" (cost=788.37..965.63 rows=48 width=22)\n(actual time=0.442..0.442 rows=1 loops=1)\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..788.36\nrows=48 width=0) (actual time=0.438..0.438 rows=1 loops=1)\n Index Cond: (jsonb_to_tsvector('english'::regconfig, content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n Planning Time: 0.151 ms\n Execution Time: 0.453 ms\n\n\n\n\nThe table in question isn't that big:\n\n oid | table_schema | table_name | row_estimate | total | index |\ntoast | table\n--------+--------------+------------+--------------+--------+--------+-------+-------\n 155544 | orderstore | order | 9649 | 210 MB | 108 MB | 91\nMB | 10 MB\n\n\n\nTable DDL:\n\nCREATE TABLE orderstore.\"order\" (\n pk_id bigint DEFAULT nextval('orderstore.order_pk_id_seq'::regclass)\nNOT NULL,\n version integer NOT NULL,\n content jsonb NOT NULL,\n manipulation_history jsonb NOT NULL,\n CONSTRAINT chk_external_id_not_null CHECK (((content ->>\n'externalId'::text) IS NOT NULL)),\n CONSTRAINT chk_id_not_null CHECK 
(((content ->> 'id'::text) IS NOT\nNULL))\n);\n\nDDL of the index used (one amongst many others that exist):\n\n--\n-- Name: idx_fulltext_content; Type: INDEX; Schema: orderstore; Owner:\norderstore\n--\n\nCREATE INDEX idx_fulltext_content ON orderstore.\"order\" USING gin\n(jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb));\n\n\nThe record in pg_stat_all_tables before the manual vacuum:\n\nrelid | 155544\nschemaname | orderstore\nrelname | order\nseq_scan | 249\nseq_tup_read | 2209150\nidx_scan | 24696\nidx_tup_fetch | 1155483\nn_tup_ins | 87\nn_tup_upd | 1404\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 9653\nn_dead_tup | 87\nn_mod_since_analyze | 152\nn_ins_since_vacuum | 4\nlast_vacuum | 2022-02-24 10:44:34.524241+01\nlast_autovacuum |\nlast_analyze | 2022-02-24 03:20:05.79219+01\nlast_autoanalyze |\nvacuum_count | 3\nautovacuum_count | 0\nanalyze_count | 8\nautoanalyze_count | 0\n\nThe entry in pg_stat_all_tables after the manual vacuum:\n\nrelid | 155544\nschemaname | orderstore\nrelname | order\nseq_scan | 249\nseq_tup_read | 2209150\nidx_scan | 24753\nidx_tup_fetch | 1155561\nn_tup_ins | 87\nn_tup_upd | 1404\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 9476\nn_dead_tup | 0\nn_mod_since_analyze | 152\nn_ins_since_vacuum | 0\nlast_vacuum | 2022-02-24 14:32:16.083692+01\nlast_autovacuum |\nlast_analyze | 2022-02-24 03:20:05.79219+01\nlast_autoanalyze |\nvacuum_count | 4\nautovacuum_count | 0\nanalyze_count | 8\nautoanalyze_count | 0\n\n\nCan someone provide any hints on how to deal with this issue? What am I\nmissing?\n\nIn case you need additional informations pls let me know.\n\n\nkind regards,\n\npeter\n\nDear fellow DBAs, I am seeking for some guidance with the following case that our developers have thrown at me and I apologize in advance for this lengthy mail ... $> postgres --versionpostgres (PostgreSQL) 13.6We are dealing with the following issue: select version, content from orderstore.orderWHERE jsonb_to_tsvector('english', content, '[\"all\"]') @@ websearch_to_tsquery('english', '1.20709841') limit 10 ;The Devs told me that this query normally finishes within a reasonable amount of time (<1sec) but every day - and all of a sudden - performance worsens to execution times > 20sec.Furthermore I was told: \"When we change the query to 'limit 100' it runs fast again\" \"When we execute a 'vacuum orderstore.order' everything becomes good again - but that only lasts for a few hours\" So I scheduled a little script to be executed every minute which contains 3 explains. 1 query with limit 10 1 query with limit 100 1 query with the limit-clause omitted And here's a quick grep of the result after a few hours:  ...  
Execution Time: 1.413 ms       <= limit 10 Execution Time: 0.389 ms       <= limit 100 Execution Time: 0.297 ms       <= limit clause omitted  Execution Time: 1.456 ms Execution Time: 0.396 ms Execution Time: 0.302 ms Execution Time: 1.412 ms Execution Time: 0.428 ms Execution Time: 0.255 ms Execution Time: 1.404 ms Execution Time: 0.403 ms Execution Time: 0.258 ms Execution Time: 25588.448 ms    <= limit 10 Execution Time: 0.919 ms        <= limit 100 Execution Time: 0.453 ms        <= limit clause omitted  Execution Time: 25657.524 ms Execution Time: 0.965 ms Execution Time: 0.452 ms Execution Time: 25843.139 ms Execution Time: 0.959 ms Execution Time: 0.446 ms Execution Time: 25631.389 ms Execution Time: 0.946 ms Execution Time: 0.447 ms Execution Time: 25452.764 ms Execution Time: 0.937 ms Execution Time: 0.444 ms                                  <= here I manually vacuumed the table  Execution Time: 0.071 ms Execution Time: 0.021 ms Execution Time: 0.015 ms Execution Time: 0.072 ms Execution Time: 0.023 ms Execution Time: 0.017 ms Execution Time: 0.064 ms Execution Time: 0.021 ms Execution Time: 0.015 ms Execution Time: 0.063 ms Execution Time: 0.020 ms Execution Time: 0.015 ms                                  ... Turned out the devs were right with their complaints. The execution plan changed within one minute from using an index to a sequential scan; Here are the details (In the \"LOG:\"-line I select the current timestamp and the row count of the table): ******************************* the last 'good' run: *******************************  LOG:     | 2022-02-24 13:47:01.747416+01 |  9653 LIMIT 10: Limit  (cost=752.37..789.30 rows=10 width=22) (actual time=1.388..1.390 rows=1 loops=1)   ->  Bitmap Heap Scan on \"order\"  (cost=752.37..929.63 rows=48 width=22) (actual time=1.387..1.388 rows=1 loops=1)         Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)         Heap Blocks: exact=1         ->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..752.36 rows=48 width=0) (actual time=1.374..1.374 rows=1 loops=1)               Index Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery) Planning Time: 0.401 ms Execution Time: 1.404 ms LIMIT 100: Limit  (cost=752.37..929.63 rows=48 width=22) (actual time=0.391..0.391 rows=1 loops=1)   ->  Bitmap Heap Scan on \"order\"  (cost=752.37..929.63 rows=48 width=22) (actual time=0.390..0.391 rows=1 loops=1)         Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)         Heap Blocks: exact=1         ->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..752.36 rows=48 width=0) (actual time=0.387..0.387 rows=1 loops=1)               Index Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery) Planning Time: 0.136 ms Execution Time: 0.403 ms NO LIMIT: Bitmap Heap Scan on \"order\"  (cost=752.37..929.63 rows=48 width=22) (actual time=0.248..0.249 rows=1 loops=1)   Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)   Heap Blocks: exact=1   ->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..752.36 rows=48 width=0) (actual time=0.245..0.245 rows=1 loops=1)         Index Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery) Planning Time: 0.107 ms Execution Time: 0.258 
ms************************************************ the first 'bad' run (one minute later ************************************************ LOG:     | 2022-02-24 13:48:01.840362+01 |  9653 LIMIT 10: Limit  (cost=0.00..804.97 rows=10 width=22) (actual time=23970.845..25588.432 rows=1 loops=1)   ->  Seq Scan on \"order\"  (cost=0.00..3863.86 rows=48 width=22) (actual time=23970.843..25588.429 rows=1 loops=1)         Filter: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)         Rows Removed by Filter: 9652 Planning Time: 0.430 ms Execution Time: 25588.448 ms LIMIT 100: Limit  (cost=788.37..965.63 rows=48 width=22) (actual time=0.900..0.902 rows=1 loops=1)   ->  Bitmap Heap Scan on \"order\"  (cost=788.37..965.63 rows=48 width=22) (actual time=0.900..0.901 rows=1 loops=1)         Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)         Heap Blocks: exact=1         ->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..788.36 rows=48 width=0) (actual time=0.894..0.895 rows=1 loops=1)               Index Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery) Planning Time: 0.187 ms Execution Time: 0.919 ms NO LIMIT: Bitmap Heap Scan on \"order\"  (cost=788.37..965.63 rows=48 width=22) (actual time=0.442..0.442 rows=1 loops=1)   Recheck Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)   Heap Blocks: exact=1   ->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..788.36 rows=48 width=0) (actual time=0.438..0.438 rows=1 loops=1)         Index Cond: (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery) Planning Time: 0.151 ms Execution Time: 0.453 msThe table in question isn't that big:   oid   | table_schema | table_name | row_estimate | total  | index  | toast | table--------+--------------+------------+--------------+--------+--------+-------+------- 155544 | orderstore   | order      |         9649 | 210 MB | 108 MB | 91 MB | 10 MBTable DDL: CREATE TABLE orderstore.\"order\" (    pk_id bigint DEFAULT nextval('orderstore.order_pk_id_seq'::regclass) NOT NULL,    version integer NOT NULL,    content jsonb NOT NULL,    manipulation_history jsonb NOT NULL,    CONSTRAINT chk_external_id_not_null CHECK (((content ->> 'externalId'::text) IS NOT NULL)),    CONSTRAINT chk_id_not_null CHECK (((content ->> 'id'::text) IS NOT NULL)));DDL of the index used (one amongst many others that exist): ---- Name: idx_fulltext_content; Type: INDEX; Schema: orderstore; Owner: orderstore--CREATE INDEX idx_fulltext_content ON orderstore.\"order\" USING gin (jsonb_to_tsvector('english'::regconfig, content, '[\"all\"]'::jsonb));The record in pg_stat_all_tables before the manual vacuum: relid               | 155544schemaname          | orderstorerelname             | orderseq_scan            | 249seq_tup_read        | 2209150idx_scan            | 24696idx_tup_fetch       | 1155483n_tup_ins           | 87n_tup_upd           | 1404n_tup_del           | 0n_tup_hot_upd       | 0n_live_tup          | 9653n_dead_tup          | 87n_mod_since_analyze | 152n_ins_since_vacuum  | 4last_vacuum         | 2022-02-24 10:44:34.524241+01last_autovacuum     |last_analyze        | 2022-02-24 03:20:05.79219+01last_autoanalyze    |vacuum_count        | 3autovacuum_count    | 0analyze_count       | 8autoanalyze_count   | 0The entry in pg_stat_all_tables after the manual 
vacuum: relid               | 155544schemaname          | orderstorerelname             | orderseq_scan            | 249seq_tup_read        | 2209150idx_scan            | 24753idx_tup_fetch       | 1155561n_tup_ins           | 87n_tup_upd           | 1404n_tup_del           | 0n_tup_hot_upd       | 0n_live_tup          | 9476n_dead_tup          | 0n_mod_since_analyze | 152n_ins_since_vacuum  | 0last_vacuum         | 2022-02-24 14:32:16.083692+01last_autovacuum     |last_analyze        | 2022-02-24 03:20:05.79219+01last_autoanalyze    |vacuum_count        | 4autovacuum_count    | 0analyze_count       | 8autoanalyze_count   | 0Can someone provide any hints on how to deal with this issue? What am I missing?In case you need additional informations pls let me know. kind regards, peter", "msg_date": "Thu, 24 Feb 2022 14:53:12 +0100", "msg_from": "Peter Adlersburg <[email protected]>", "msg_from_op": true, "msg_subject": "Advice needed: query performance deteriorates by 2000% within 1\n minute" }, { "msg_contents": "You are getting row estimate 48 in both cases, so it seems perhaps tied to\nthe free space map that will mean more heap lookups from the index, to the\npoint where the planner thinks that doing sequential scan is less costly.\n\nWhat is random_page_cost set to? Do you have default autovacuum/analyze\nsettings?\n\nIt is probably worth running \"explain (analyze, buffers, verbose)\nselect...\" to get a bit more insight. I expect that the buffers increase\ngradually and then it switches to sequential scan at some point.\n\n\nPerhaps not directly related, but might be interesting to look at-\nWith indexes on expressions, you get custom stats. It might be worth taking\na look at those and seeing if they give anything approaching proper\nestimates.\n\neg.\nselect * from pg_class where relname =\n'idx_customer_phone_numbers_phone_number_gist';\nselect * from pg_statistic where starelid =\n'idx_customer_phone_numbers_phone_number_gist'::regclass;\nselect * from pg_stats where tablename =\n'idx_customer_phone_numbers_phone_number_gist';\n\nJSONB is a bit painful to use from a query planning perspective. Values in\na jsonb column are fine for me in a select clause, but not ON or WHERE with\nvery rare exceptions. Though, maybe that's not so applicable when you are\ndoing full text search.\n\nYou are getting row estimate 48 in both cases, so it seems perhaps tied to the free space map that will mean more heap lookups from the index, to the point where the planner thinks that doing sequential scan is less costly.What is random_page_cost set to? Do you have default autovacuum/analyze settings?It is probably worth running \"explain (analyze, buffers, verbose) select...\" to get a bit more insight. I expect that the buffers increase gradually and then it switches to sequential scan at some point.Perhaps not directly related, but might be interesting to look at-With indexes on expressions, you get custom stats. It might be worth taking a look at those and seeing if they give anything approaching proper estimates.eg.select * from pg_class where relname = 'idx_customer_phone_numbers_phone_number_gist';select * from pg_statistic where starelid = 'idx_customer_phone_numbers_phone_number_gist'::regclass;select * from pg_stats where tablename = 'idx_customer_phone_numbers_phone_number_gist';JSONB is a bit painful to use from a query planning perspective. Values in a jsonb column are fine for me in a select clause, but not ON or WHERE with very rare exceptions. 
Though, maybe that's not so applicable when you are doing full text search.", "msg_date": "Thu, 24 Feb 2022 08:05:50 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice needed: query performance deteriorates by 2000% within 1\n minute" }, { "msg_contents": "Peter Adlersburg <[email protected]> writes:\n> Limit (cost=0.00..804.97 rows=10 width=22) (actual\n> time=23970.845..25588.432 rows=1 loops=1)\n> -> Seq Scan on \"order\" (cost=0.00..3863.86 rows=48 width=22) (actual\n> time=23970.843..25588.429 rows=1 loops=1)\n> Filter: (jsonb_to_tsvector('english'::regconfig, content,\n> '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n> Rows Removed by Filter: 9652\n> Planning Time: 0.430 ms\n> Execution Time: 25588.448 ms\n\nI think the expense here comes from re-executing jsonb_to_tsvector\na lot of times. By default that's estimated as 100 times more expensive\nthan a simple function (such as addition), but these results make it\nseem like that's an understatement. You might try something like\n\nalter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;\n\nto further discourage the planner from picking this plan shape.\n\nPossibly the cost estimate for ts_match_tq (the function underlying\nthis variant of @@) needs to be bumped up as well.\n\n(Bear in mind that pg_dump will not propagate such hacks on\nsystem-defined objects, so you'll need a note to reapply\nany such changes after dump/reload or pg_upgrade.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Feb 2022 11:10:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice needed: query performance deteriorates by 2000% within 1\n minute" }, { "msg_contents": "Hello,\n\nMichael, Tom: thanks for all the insights and informations in your previous\nmails.\n\nA quick update of the explain outputs (this time using explain (analyze,\nbuffers, verbose))\n\n*The good: *\n\n*LOG Time: | 2022-02-28 09:30:01.400777+01 | order rows: | 9668*\n\n\n\n Limit (cost=616.37..653.30 rows=10 width=22) (actual time=1.062..1.063\nrows=1 loops=1)\n\n Output: version, content\n\n Buffers: shared hit=154\n\n -> Bitmap Heap Scan on orderstore.\"order\" (cost=616.37..793.63 rows=48\nwidth=22) (actual time=1.061..1.062 rows=1 loops=1)\n\n Output: version, content\n\n Recheck Cond: (jsonb_to_tsvector('english'::regconfig,\n\"order\".content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n\n Heap Blocks: exact=1\n\n Buffers: shared hit=154\n\n -> Bitmap Index Scan on idx_fulltext_content (cost=0.00..616.36\nrows=48 width=0) (actual time=1.053..1.053 rows=1 loops=1)\n\n Index Cond: (jsonb_to_tsvector('english'::regconfig,\n\"order\".content, '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n\n Buffers: shared hit=153\n\nPlanning:\n\n Buffers: shared hit=50\n\nPlanning Time: 0.408 ms\n*Execution Time: 1.079 ms*\n\n*pg_stat_all_tables: *\n\n\nn_tup_ins | 102\n\n*n_tup_upd | 1554*\n\nn_tup_del | 0\n\nn_tup_hot_upd | 0\n\nn_live_tup | 9668\n\n*n_dead_tup | 69*\n\nn_mod_since_analyze | 61\n\nn_ins_since_vacuum | 8\n\nlast_vacuum | 2022-02-25 07:54:46.196508+01\n\nlast_autovacuum |\n\nlast_analyze | 2022-02-28 03:20:38.761482+01\n\nlast_autoanalyze |\n\n\n*The bad: *\n\n\n*LOG Time: | 2022-02-28 09:45:01.662702+01 | order rows: | 9668*\n\n\n\nLIMIT 10:\n\n\n\nLimit (cost=0.00..805.63 rows=10 width=22) (actual\ntime=24175.964..25829.767 rows=1 loops=1)\n\n Output: version, content\n\n Buffers: shared hit=26284 read=12550 dirtied=4\n\n -> Seq Scan on 
orderstore.\"order\" (cost=0.00..3867.01 rows=48\nwidth=22) (actual time=24175.962..25829.763 rows=1 loops=1)\n\n Output: version, content\n\n Filter: (jsonb_to_tsvector('english'::regconfig, \"order\".content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n\n Rows Removed by Filter: 9667\n\n Buffers: shared hit=26284 read=12550 dirtied=4\n\nPlanning:\n\n Buffers: shared hit=50\n\nPlanning Time: 0.377 ms\n\n*Execution Time: 25829.778 ms*\n\n*pg_stat_all_tables:*\n\nn_tup_ins | 102\n\n*n_tup_upd | 1585*\n\nn_tup_del | 0\n\nn_tup_hot_upd | 0\n\nn_live_tup | 9668\n\n*n_dead_tup | 100*\n\nn_mod_since_analyze | 92\n\nn_ins_since_vacuum | 8\n\nlast_vacuum | 2022-02-25 07:54:46.196508+01\n\nlast_autovacuum |\n\nlast_analyze | 2022-02-28 03:20:38.761482+01\n\nlast_autoanalyze |\n\n\n*The ugly:*\n\n\nIt should be mentioned that the table in question mainly lives in toast\nland (but I have no idea if this also influences the query planner):\n\n\n oid | table_schema | table_name | row_estimate | total_bytes |\nindex_bytes | toast_bytes | table_bytes | total | index | toast | table\n--------+--------------+------------+--------------+-------------+-------------+-------------+-------------+--------+--------+-------+-------\n 155544 | orderstore | order | 9570 | 229826560 |\n120184832 | 98557952 | 11083776 | 219 MB | 115 MB | 94 MB | 11 MB\n\n\nSince tinkering with the text search functions is out of the question we\ncame up with three possibilities on how to deal with this issue:\n\n- significantly increase the limit clause or omit it at all (meh ...)\n- use 'set random_page_cost = 0.5' in the transaction in order to convince\nthe query planner to prefer the index (tested and works)\n- schedule an hourly vacuum job for the table (the most likely solution we\nwill settle on since it comes with the least implementation effort)\n\nNone of these seems very elegant or viable in the long run ... we'll see.\n\nAh, yes: our global settings for random_page_cost and autovacuum/analyze\nare set to the defaults.\n\n Will json-processing experience some improvements in pg14/15? We are about\nto update to 14 in the near future with our devs saying that this topic is\nthe main trigger to do so.\n\nAny further thoughts on the case are very much appreciated.\n\nkr p.\n\n\nAm Do., 24. Feb. 2022 um 17:10 Uhr schrieb Tom Lane <[email protected]>:\n\n> Peter Adlersburg <[email protected]> writes:\n> > Limit (cost=0.00..804.97 rows=10 width=22) (actual\n> > time=23970.845..25588.432 rows=1 loops=1)\n> > -> Seq Scan on \"order\" (cost=0.00..3863.86 rows=48 width=22) (actual\n> > time=23970.843..25588.429 rows=1 loops=1)\n> > Filter: (jsonb_to_tsvector('english'::regconfig, content,\n> > '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n> > Rows Removed by Filter: 9652\n> > Planning Time: 0.430 ms\n> > Execution Time: 25588.448 ms\n>\n> I think the expense here comes from re-executing jsonb_to_tsvector\n> a lot of times. By default that's estimated as 100 times more expensive\n> than a simple function (such as addition), but these results make it\n> seem like that's an understatement. 
You might try something like\n>\n> alter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;\n>\n> to further discourage the planner from picking this plan shape.\n>\n> Possibly the cost estimate for ts_match_tq (the function underlying\n> this variant of @@) needs to be bumped up as well.\n>\n> (Bear in mind that pg_dump will not propagate such hacks on\n> system-defined objects, so you'll need a note to reapply\n> any such changes after dump/reload or pg_upgrade.)\n>\n> regards, tom lane\n>\n\nHello, Michael, Tom: thanks for all the insights and informations in your previous mails.A quick update of the explain outputs (this time using explain (analyze, buffers, verbose)) The good: LOG Time: | 2022-02-28 09:30:01.400777+01 | order rows:\n|  9668\n \n Limit \n(cost=616.37..653.30 rows=10 width=22) (actual time=1.062..1.063 rows=1\nloops=1)\n  \nOutput: version, content\n  \nBuffers: shared hit=154\n  \n->  Bitmap Heap Scan on orderstore.\"order\" \n(cost=616.37..793.63 rows=48 width=22) (actual time=1.061..1.062 rows=1\nloops=1)\n        \nOutput: version, content\n        \nRecheck Cond: (jsonb_to_tsvector('english'::regconfig,\n\"order\".content, '[\"all\"]'::jsonb) @@\n'''1.20709841'''::tsquery)\n        \nHeap Blocks: exact=1\n        \nBuffers: shared hit=154\n        \n->  Bitmap Index Scan on idx_fulltext_content  (cost=0.00..616.36\nrows=48 width=0) (actual time=1.053..1.053 rows=1 loops=1)\n              \nIndex Cond: (jsonb_to_tsvector('english'::regconfig, \"order\".content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n              \nBuffers: shared hit=153\nPlanning:\n  \nBuffers: shared hit=50\nPlanning\nTime: 0.408 ms\nExecution Time: 1.079 mspg_stat_all_tables: n_tup_ins         \n | 102n_tup_upd          \n| 1554n_tup_del          \n| 0n_tup_hot_upd      \n| 0n_live_tup         \n| 9668n_dead_tup          |\n69n_mod_since_analyze\n| 61n_ins_since_vacuum \n| 8last_vacuum        \n| 2022-02-25 07:54:46.196508+01last_autovacuum    \n|last_analyze       \n| 2022-02-28 03:20:38.761482+01\nlast_autoanalyze    |The bad: LOG\nTime: | 2022-02-28 09:45:01.662702+01 | order rows: |  9668\n \nLIMIT\n10:\n \nLimit \n(cost=0.00..805.63 rows=10 width=22) (actual time=24175.964..25829.767 rows=1\nloops=1)\n  \nOutput: version, content\n  \nBuffers: shared hit=26284 read=12550 dirtied=4\n  \n->  Seq Scan on orderstore.\"order\"  (cost=0.00..3867.01\nrows=48 width=22) (actual time=24175.962..25829.763 rows=1 loops=1)\n        \nOutput: version, content\n        \nFilter: (jsonb_to_tsvector('english'::regconfig, \"order\".content,\n'[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n        \nRows Removed by Filter: 9667\n        \nBuffers: shared hit=26284 read=12550 dirtied=4\nPlanning:\n  \nBuffers: shared hit=50\nPlanning\nTime: 0.377 ms\nExecution Time: 25829.778 mspg_stat_all_tables:n_tup_ins          \n| 102\nn_tup_upd          \n| 1585\nn_tup_del          \n| 0\nn_tup_hot_upd      \n| 0\nn_live_tup         \n| 9668\nn_dead_tup          |\n100\nn_mod_since_analyze\n| 92\nn_ins_since_vacuum \n| 8\nlast_vacuum        \n| 2022-02-25 07:54:46.196508+01\nlast_autovacuum    \n|\nlast_analyze       \n| 2022-02-28 03:20:38.761482+01\nlast_autoanalyze   \n|The ugly: It should be mentioned that the table in question mainly lives in toast land (but I have no idea if this also influences the query planner):    oid   | table_schema | table_name | row_estimate | total_bytes | index_bytes | toast_bytes | table_bytes | total  | index  | toast | 
table--------+--------------+------------+--------------+-------------+-------------+-------------+-------------+--------+--------+-------+------- 155544 | orderstore   | order      |         9570 |   229826560 |   120184832 |    98557952 |    11083776 | 219 MB | 115 MB | 94 MB | 11 MBSince tinkering with the text search functions is out of the question we came up with three possibilities on how to deal with this issue: - significantly increase the limit clause or omit it at all (meh ...) - use 'set random_page_cost = 0.5'  in the transaction in order to convince the query planner to prefer the index (tested and works)- schedule an hourly vacuum job for  the table (the most likely solution we will settle on since it comes with the least implementation effort)None of these seems very elegant or viable in the long run ... we'll see.Ah, yes: our global settings for random_page_cost and autovacuum/analyze are set to the defaults. Will json-processing experience some improvements in pg14/15? We are about to update to 14 in the near future with our devs saying that this topic is the main trigger to do so. Any further thoughts on the case are very much appreciated. kr p. Am Do., 24. Feb. 2022 um 17:10 Uhr schrieb Tom Lane <[email protected]>:Peter Adlersburg <[email protected]> writes:\n>  Limit  (cost=0.00..804.97 rows=10 width=22) (actual\n> time=23970.845..25588.432 rows=1 loops=1)\n>    ->  Seq Scan on \"order\"  (cost=0.00..3863.86 rows=48 width=22) (actual\n> time=23970.843..25588.429 rows=1 loops=1)\n>          Filter: (jsonb_to_tsvector('english'::regconfig, content,\n> '[\"all\"]'::jsonb) @@ '''1.20709841'''::tsquery)\n>          Rows Removed by Filter: 9652\n>  Planning Time: 0.430 ms\n>  Execution Time: 25588.448 ms\n\nI think the expense here comes from re-executing jsonb_to_tsvector\na lot of times.  By default that's estimated as 100 times more expensive\nthan a simple function (such as addition), but these results make it\nseem like that's an understatement.  You might try something like\n\nalter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;\n\nto further discourage the planner from picking this plan shape.\n\nPossibly the cost estimate for ts_match_tq (the function underlying\nthis variant of @@) needs to be bumped up as well.\n\n(Bear in mind that pg_dump will not propagate such hacks on\nsystem-defined objects, so you'll need a note to reapply\nany such changes after dump/reload or pg_upgrade.)\n\n                        regards, tom lane", "msg_date": "Tue, 1 Mar 2022 08:29:04 +0100", "msg_from": "Peter Adlersburg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advice needed: query performance deteriorates by 2000% within 1\n minute" }, { "msg_contents": "If you expect to have high cache hits and/or have ssd or similar fast\nstorage, random page cost should be more like 1-2 rather than the default\n4. When using jsonb, you'd normally have estimates based solely on the\nconstants for the associated datatype (1/3 or 2/3 for a nullable boolean\nfor instance, and I think half a percent for an int column) but because you\nare using an index on a function, you should be getting custom stats\nrelated to that. They just don't seem to be helping you a ton.\n\nWith gin indexes, there is also the pending list to consider. I haven't had\nto deal with that much, but just know of it from the documentation.\n\nIf you expect to have high cache hits and/or have ssd or similar fast storage, random page cost should be more like 1-2 rather than the default 4. 
When using jsonb, you'd normally have estimates based solely on the constants for the associated datatype (1/3 or 2/3 for a nullable boolean for instance, and I think half a percent for an int column) but because you are using an index on a function, you should be getting custom stats related to that. They just don't seem to be helping you a ton.With gin indexes, there is also the pending list to consider. I haven't had to deal with that much, but just know of it from the documentation.", "msg_date": "Tue, 1 Mar 2022 18:39:40 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice needed: query performance deteriorates by 2000% within 1\n minute" }, { "msg_contents": "And I would absolutely crank up autovacuum and analyze settings. Turn up\nthe cost limits, turn down the cost delays, decrease the scale factor.\nWhatever you need to do such that autovacuum runs often. No need to\nschedule a manual vacuum at all. Just don't wait until 20% of the table is\ndead before an autovacuum is triggered like the default behavior. The cost\nto gather new stats and do garbage collection is rather minimal compared to\nthe benefit to queries that rely on the data in many cases.\n\nAnd I would absolutely crank up autovacuum and analyze settings. Turn up the cost limits, turn down the cost delays, decrease the scale factor. Whatever you need to do such that autovacuum runs often. No need to schedule a manual vacuum at all. Just don't wait until 20% of the table is dead before an autovacuum is triggered like the default behavior. The cost to gather new stats and do garbage collection is rather minimal compared to the benefit to queries that rely on the data in many cases.", "msg_date": "Tue, 1 Mar 2022 18:44:48 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice needed: query performance deteriorates by 2000% within 1\n minute" } ]
[ { "msg_contents": "Hi,\n\nCould some some verify the attached query to verify the performance and\nsuggest some steps to improve it, this query is created as a view. This\nview is used to get the aggregates of orders based on its current status\n\nThanks", "msg_date": "Fri, 25 Feb 2022 23:18:22 +0300", "msg_from": "Ayub Khan <[email protected]>", "msg_from_op": true, "msg_subject": "slow query to improve performace" }, { "msg_contents": "Please provide some more information, like your postgres version and settings.\n\nSome relevant things are included here.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 25 Feb 2022 16:52:01 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query to improve performace" }, { "msg_contents": "On Fri, Feb 25, 2022 at 3:18 PM Ayub Khan <[email protected]> wrote:\n\n> Hi,\n>\n> Could some some verify the attached query to verify the performance and\n> suggest some steps to improve it, this query is created as a view. This\n> view is used to get the aggregates of orders based on its current status\n>\n\nI don't see how it is possible for that query to yield that plan. For\nexample, what part of that query could have been transformed into this part\nof the plan \"order_status_code <> ALL ('{T,J,C,D}'::bpchar[])\"?\n\nWell, I suppose some of the tables used in that query could themselves be\nviews over the same tables? In that case, we might need to know the\ndefinitions of those views. As well as knowing the version, and seeing the\nEXPLAIN (ANALYZE, BUFFERS) for the query, run after track_io_timing is\nturned on.\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Feb 25, 2022 at 3:18 PM Ayub Khan <[email protected]> wrote:Hi,Could some some verify the attached query to verify the performance and suggest some steps to improve it, this query is created as a view. This view is used to get the aggregates of orders based on its current statusI don't see how it is possible for that query to yield that plan. For example, what part of that query could have been transformed into this part of the plan \"order_status_code <> ALL ('{T,J,C,D}'::bpchar[])\"?Well, I suppose some of the tables used in that query could themselves be views over the same tables?  In that case, we might need to know the definitions of those views.  As well as knowing the version, and seeing the EXPLAIN (ANALYZE, BUFFERS) for the query, run after track_io_timing is turned on.Cheers,Jeff", "msg_date": "Sun, 27 Feb 2022 22:18:30 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query to improve performace" } ]
[ { "msg_contents": "Hi Team,\n\nCan you please help in tunning the attached query as , i am trying to run this query and it runs for several hours and it did not give any output.\n\nI am not able to generate the explain analyze plan as well and it keeps on running for several hours and did not give output.\n\nI have attached the query and explain plan without analyze. Please help if nayone has any idea how to tune that query.\n\nRegards,\nMukesh Kumar", "msg_date": "Sun, 27 Feb 2022 04:40:16 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Never Ending query in PostgreSQL " }, { "msg_contents": "Hi,\n\nOn Sun, Feb 27, 2022 at 04:40:16AM +0000, Kumar, Mukesh wrote:\n>\n> Can you please help in tunning the attached query as , i am trying to run\n> this query and it runs for several hours and it did not give any output.\n>\n> I am not able to generate the explain analyze plan as well and it keeps on\n> running for several hours and did not give output.\n>\n> I have attached the query and explain plan without analyze. Please help if\n> nayone has any idea how to tune that query.\n\nYou attached the explain plan in both files. Also even if there was the query\nthere wouldn't be enough information to be able to help, please consult\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions to provide more details.\n\n\n", "msg_date": "Sun, 27 Feb 2022 20:22:25 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" }, { "msg_contents": "On Sun, Feb 27, 2022 at 7:09 AM Kumar, Mukesh <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> Can you please help in tunning the attached query as , i am trying to run\n> this query and it runs for several hours and it did not give any output.\n>\n\nSeveral hours is not all that long. Without an EXPLAIN ANALYZE, we could\neasily spend several hours scratching our heads and still get nowhere. So\nunless having this running cripples the rest of your system, please queue\nup another one and let it go longer. But first, do an ANALYZE (and\npreferably a VACUUM ANALYZE) on all the tables. If you have a test db\nwhich is a recent clone of production, you could do it there so as not to\nslow down production. The problem is that the row estimates must be way\noff (otherwise, it shouldn't take long) and if that is the case, we can't\nuse the plan to decide much of anything, since we don't trust it.\n\nIn parallel you could start evicting table joins from the query to simplify\nit until it gets to the point where it will run, so you can then see the\nactual row counts. To do that it does help if you know what the intent of\nthe query is (or for that matter, the text of the query--you attached the\nplan twice).\n\nCheers,\n\nJeff\n\n>\n\nOn Sun, Feb 27, 2022 at 7:09 AM Kumar, Mukesh <[email protected]> wrote:\n\n\nHi Team, \n\n\n\n\nCan you please help in tunning the attached query as , i am trying to run this query and it runs for several hours and it did not give any output.Several hours is not all that long.  Without an EXPLAIN ANALYZE, we could easily spend several hours scratching our heads and still get nowhere.  So unless having this running cripples the rest of your system, please queue up another one and let it go longer.  But first, do an ANALYZE (and preferably a VACUUM ANALYZE) on all the tables.  If you have a test db which is a recent clone of production, you could do it there so as not to slow down production.  
The problem is that the row estimates must be way off (otherwise, it shouldn't take long) and if that is the case, we can't use the plan to decide much of anything, since we don't trust it.In parallel you could start evicting table joins from the query to simplify it until it gets to the point where it will run, so you can then see the actual row counts.  To do that it does help if you know what the intent of the query is (or for that matter, the text of the query--you attached the plan twice).Cheers,Jeff", "msg_date": "Sun, 27 Feb 2022 12:20:42 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" }, { "msg_contents": "On 2/26/22 23:40, Kumar, Mukesh wrote:\n> Hi Team,\n>\n> Can you please help in tunning the attached query as , i am trying to \n> run this query and it runs for several hours and it did not give any \n> output.\n>\n> I am not able to generate the explain analyze plan as well and it \n> keeps on running for several hours and did not give output.\n>\n> I have attached the query and explain plan without analyze. Please \n> help if nayone has any idea how to tune that query.\n>\n> Regards,\n> Mukesh Kumar\n\n\nHi Team Member,\n\nYour attachments are not SQL, they are plans. Judging by the size of the \nplans, your best course of action is to completely rewrite the queries, \nprobably using CTE and temporary tables. May the Force be with you.\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 2/26/22 23:40, Kumar, Mukesh wrote:\n\n\n\n\n\n Hi Team, \n\n\n\n\n Can you please help in tunning the attached query as , i am\n trying to run this query and it runs for several hours and it\n did not give any output.\n\n\n\n\n I am not able to generate the explain analyze plan as well and\n it keeps on running for several hours and did not give output.\n\n\n\n\n I have attached the query and explain plan without analyze.\n Please help if nayone has any idea how to tune that query.\n\n\n\n\n Regards, \n\n Mukesh Kumar \n\n\n\nHi Team Member,\nYour attachments are not SQL, they are plans. Judging by the size\n of the plans, your best course of action is to completely rewrite\n the queries, probably using CTE and temporary tables. May the\n Force be with you.\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Mon, 28 Feb 2022 22:44:09 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" }, { "msg_contents": "On 2/27/22 12:20, Jeff Janes wrote:\n> Several hours is not all that long.\n\nWell, the pyramids in the Valley of the Kings last for around 4500 \nyears. Dinosaurs have ruled the Earth for approximately 120 million \nyears. Solar system is 5 billion years old. Cosmos is around 13 billion \nyears old. Compared to those numbers, indeed, several hours isn't that \nlong. Furthermore, you have to account for the time dilatation. One hour \non the planet that's evolving and revolving at 900 miles an hour is not \nthe same as one hour of standing still. To make things even more \ninteresting, it's orbiting at 19 miles a second, so it's reckoned,The \nsun that is the source of all our power. So, several hours is relative.  
\nEach object has its relative time so it's not possible to conclude \nwhether several hours is a long time or not.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 2/27/22 12:20, Jeff Janes wrote:\n\nSeveral\n hours is not all that long.\nWell, the pyramids in the Valley of the Kings last for around\n 4500 years. Dinosaurs have ruled the Earth for approximately 120\n million years. Solar system is 5 billion years old. Cosmos is\n around 13 billion years old. Compared to those numbers, indeed,\n several hours isn't that long. Furthermore, you have to account\n for the time dilatation. One hour on the planet that's evolving\n and revolving at 900 miles an hour is not the same as one hour of\n standing still. To make things even more interesting, it's\n orbiting at 19 miles a second, so it's reckoned,\n The sun that is the source of all our power. So, several hours\n is relative.  Each object has its relative time so it's not\n possible to conclude whether several hours is a long time or\n not.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Mon, 28 Feb 2022 22:54:44 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" }, { "msg_contents": "On 2/27/22 18:20, Jeff Janes wrote:\n> \n> On Sun, Feb 27, 2022 at 7:09 AM Kumar, Mukesh <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hi Team, \n> \n> Can you please help in tunning the attached query as , i am trying\n> to run this query and it runs for several hours and it did not give\n> any output.\n> \n> \n> Several hours is not all that long.  Without an EXPLAIN ANALYZE, we\n> could easily spend several hours scratching our heads and still get\n> nowhere.  So unless having this running cripples the rest of your\n> system, please queue up another one and let it go longer.  But first, do\n> an ANALYZE (and preferably a VACUUM ANALYZE) on all the tables.  If you\n> have a test db which is a recent clone of production, you could do it\n> there so as not to slow down production.  The problem is that the row\n> estimates must be way off (otherwise, it shouldn't take long) and if\n> that is the case, we can't use the plan to decide much of anything,\n> since we don't trust it.\n> \n\nI'd bet Jeff is right and poor estimates are the root cause. The pattern\nwith a cascade of \"nested loop\" in the explain is fairly typical. This\nis likely due to the complex join conditions and correlation.\n\n\n> In parallel you could start evicting table joins from the query to\n> simplify it until it gets to the point where it will run, so you can\n> then see the actual row counts.  To do that it does help if you know\n> what the intent of the query is (or for that matter, the text of the\n> query--you attached the plan twice).\n> \n\nRight, simplify the query. 
Or maybe do it the other way around - start\nwith the simplest query (the inner-most part of the explain) and add\njoins one by one (by following the explains) until it suddenly starts\nbeing much slower.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 1 Mar 2022 15:05:12 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" }, { "msg_contents": "Hi Tomas , \r\n\r\nThanks for replying , We have identified a Join condition which is creating a problem for that query.\r\n\r\nAccept my apologies for pasting the plan twice. I am attaching the query again in this mail\r\n\r\nWe have found that by evicting the View paymenttransdetails_view from the attached query runs in approx. 10 secs and the view contains multiple conditions and 1 jojn as well.\r\n\r\nI am attaching the View definition as well.\r\n\r\nPlease suggest if there is a work around for this query to run faster without evicting the above from the query.\r\n\r\n\r\n\r\nThanks and Regards, \r\nMukesh Kumar\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra <[email protected]> \r\nSent: Tuesday, March 1, 2022 7:35 PM\r\nTo: Jeff Janes <[email protected]>; Kumar, Mukesh <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: Never Ending query in PostgreSQL\r\n\r\nOn 2/27/22 18:20, Jeff Janes wrote:\r\n> \r\n> On Sun, Feb 27, 2022 at 7:09 AM Kumar, Mukesh \r\n> <[email protected] <mailto:[email protected]>> wrote:\r\n> \r\n> Hi Team,\r\n> \r\n> Can you please help in tunning the attached query as , i am trying\r\n> to run this query and it runs for several hours and it did not give\r\n> any output.\r\n> \r\n> \r\n> Several hours is not all that long.  Without an EXPLAIN ANALYZE, we \r\n> could easily spend several hours scratching our heads and still get \r\n> nowhere.  So unless having this running cripples the rest of your \r\n> system, please queue up another one and let it go longer.  But first, \r\n> do an ANALYZE (and preferably a VACUUM ANALYZE) on all the tables.  If \r\n> you have a test db which is a recent clone of production, you could do \r\n> it there so as not to slow down production.  The problem is that the \r\n> row estimates must be way off (otherwise, it shouldn't take long) and \r\n> if that is the case, we can't use the plan to decide much of anything, \r\n> since we don't trust it.\r\n> \r\n\r\nI'd bet Jeff is right and poor estimates are the root cause. The pattern with a cascade of \"nested loop\" in the explain is fairly typical. This is likely due to the complex join conditions and correlation.\r\n\r\n\r\n> In parallel you could start evicting table joins from the query to \r\n> simplify it until it gets to the point where it will run, so you can \r\n> then see the actual row counts.  To do that it does help if you know \r\n> what the intent of the query is (or for that matter, the text of the \r\n> query--you attached the plan twice).\r\n> \r\n\r\nRight, simplify the query. 
Or maybe do it the other way around - start with the simplest query (the inner-most part of the explain) and add joins one by one (by following the explains) until it suddenly starts being much slower.\r\n\r\n\r\nregards\r\n\r\n--\r\nTomas Vondra\r\nEnterpriseDB: https://urldefense.com/v3/__http://www.enterprisedb.com__;!!KupS4sW4BlfImQPd!P_2LgOrDOnTxBqFECBDdQolWyDNytft5mDbiJF_Bn827W6GdEOflXZ8a-NWSzdi6nJgewzgEJom8uFDBFgGKSETUD5VHA38$\r\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 1 Mar 2022 15:01:23 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Never Ending query in PostgreSQL" }, { "msg_contents": "On 3/1/22 16:01, Kumar, Mukesh wrote:\n> Hi Tomas ,\n> \n> Thanks for replying , We have identified a Join condition which is\n> creating a problem for that query.\n> \n> Accept my apologies for pasting the plan twice. I am attaching the\n> query again in this mail\n> \n\nQueries without explain (or even better \"explain analyze\") are useless.\nWe don't have the data, we don't know what the executed plan is, we\ndon't know what plan might be a better one.\n\nThere's a wiki page about reporting slow queries (what info to include,\netc):\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n> We have found that by evicting the View paymenttransdetails_view from\n> the attached query runs in approx. 10 secs and the view contains\n> multiple conditions and 1 jojn as well.\n> \n\nYou need to add individual tables, not a view which is itself a join of\n10+ tables. The idea is that you start with a fast query, add tables one\nby one (in the join order from the explain). You'll be able to do\nEXPLAIN ANALYZE and watch estimate accuracy, and then at some point it\ngets much slower, which is the join that causes trouble. But you might\nstill be able to do explain analyze.\n\nSo looking at the explain plan you shared before, you'd start with a\njoin of so_vendor_address_base + so_vendor_base, and then you'd add\n\n- sapecc_lfa1_assoc\n- lms_payment_item_vendor_base\n- lms_payment_line_item_base\n- lms_payment_check_request\n- lms_pay_line_item_acct_base\n- ...\n\n(in this order). I'd bet \"lms_payment_check_request\" is where things\nstart to go south.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 1 Mar 2022 16:39:12 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Never Ending query in PostgreSQL" } ]
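A minimal sketch of the incremental rebuild described in the thread above, using the table names Tomas read off the posted plan; the join columns shown are placeholders (hypothetical), since the original query text was never attached:

-- Refresh statistics first, as suggested, so the estimates are trustworthy.
VACUUM ANALYZE so_vendor_address_base;
VACUUM ANALYZE so_vendor_base;

-- Start with the inner-most join from the explain and compare estimated vs. actual rows.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM so_vendor_address_base a
JOIN so_vendor_base v ON v.vendor_id = a.vendor_id;          -- join column is hypothetical

-- Add the next table in the plan's join order and re-run.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM so_vendor_address_base a
JOIN so_vendor_base v        ON v.vendor_id = a.vendor_id
JOIN sapecc_lfa1_assoc s     ON s.vendor_id = v.vendor_id;   -- join column is hypothetical

-- Continue one table at a time (lms_payment_item_vendor_base, lms_payment_line_item_base,
-- lms_payment_check_request, lms_pay_line_item_acct_base, ...) until the estimated row
-- counts diverge sharply from the actual ones or the runtime jumps; that join is the one
-- whose statistics or join condition needs attention.

The point of the exercise is not the count itself but the estimated-versus-actual row numbers reported at each step.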
[ { "msg_contents": "Hi Postgres community,\n\nWe are experiencing some performance issues when RLS is enabled for large tables. With simplified example:\n\nWe have a table:\n\nCREATE TABLE emp.employees (\n employee_id INTEGER PRIMARY KEY,\n-- companies table are defined in a different schema, not accessible to emp service\n company_id INTEGER NOT NULL, \n employee_name TEXT NOT NULL\n);\n\nIndex for employees table:\n\nCREATE INDEX employees_company_id_idx ON emp.employees (company_id);\n\nAnd for the table we have RLS select policy:\n\nCREATE POLICY employee_select_policy ON emp.employees FOR SELECT\n USING (\n company_id = ANY(coalesce(string_to_array(current_setting('emp.authorized_read_company_ids', TRUE), ',')::INTEGER[], ARRAY []::INTEGER[]))\n );\n\n\nWhen a very simple query is executed, for instance:\n\nSET emp.authorized_read_company_ids = '1, 2, 3, ..., 200';\nSELECT count(*) FROM emp.employees WHERE TRUE; -- 68091 rows\n\nThe query plan for this query reads:\n\nAggregate (cost=1096.02..1096.03 rows=1 width=8) (actual time=8.740..8.740 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=778\n -> Index Only Scan using employees_company_id_idx on emp.employees (cost=0.35..970.78 rows=50099 width=0) (actual time=0.124..4.976 rows=49953 loops=1)\n Output: company_id\n Index Cond: (employees.company_id = ANY (COALESCE((string_to_array(current_setting('emp.authorized_read_company_ids'::text, true), ','::text))::integer[], '{}'::integer[])))\n Heap Fetches: 297\n Buffers: shared hit=778\nPlanning:\n Buffers: shared hit=12\nPlanning Time: 0.824 ms\nExecution Time: 8.768 ms\n\nThe problem rises when we make the RLS select policy condition a bit more complicated by adding admin checks inside RLS select policy:\n\nCREATE POLICY employee_select_policy ON emp.employees FOR SELECT\n USING (\n coalesce(nullif(current_setting('emp.is_admin', TRUE), ''), 'false')::BOOLEAN\n OR company_id = ANY(coalesce(string_to_array(current_setting('emp.authorized_read_company_ids', TRUE), ',')::INTEGER[], ARRAY []::INTEGER[]))\n );\n\nWhen the same simple query is executed:\n\nSET emp.is_admin = TRUE;\nSET emp.authorized_read_company_ids = '1, 2, 3, ..., 200';\nSELECT count(*) FROM emp.employees WHERE TRUE; -- 68091 rows\n\nThe query plan now reads:\n\nAggregate (cost=6238.51..6238.52 rows=1 width=8) (actual time=2156.271..2156.272 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=367\n -> Index Only Scan using employees_company_id_idx on emp.employees (cost=0.29..6099.16 rows=55740 width=0) (actual time=0.065..2151.939 rows=49953 loops=1)\n Output: company_id\n Filter: ((COALESCE(NULLIF(current_setting('emp.is_admin'::text, true), ''::text), 'false'::text))::boolean OR (employees.company_id = ANY (COALESCE((string_to_array(current_setting('emp.authorized_read_company_ids'::text, true), ','::text))::integer[], '{}'::integer[]))))\n Rows Removed by Filter: 11430\n Heap Fetches: 392\n Buffers: shared hit=367\nPlanning Time: 0.744 ms\nExecution Time: 2156.302 ms\n\nWe can see the performance has deteriorated horribly because the RLS is not using index any more for the company ids, the RLS scan happens for every single row in the result set against every single company id in the db context.\n\nWith the size of table and the number of company ids inside the db context growing, the execution time becomes longer and longer.\n\nTo summarise: We would like to have admin users run without any RLS restrictions, and normal users to have RLS enforced using an index based on company_ids. 
Unfortunately, we cannot have queries executed by admin users connect to the database as a different database user.\n\nIs there anything you could suggest?\n\nThanks,\nCharles\nHi Postgres community,We are experiencing some performance issues when RLS is enabled for large tables. With simplified example:We have a table:CREATE TABLE emp.employees (  employee_id INTEGER PRIMARY KEY,-- companies table are defined in a different schema, not accessible to emp service  company_id INTEGER NOT NULL,     employee_name TEXT NOT NULL);Index for employees table:CREATE INDEX employees_company_id_idx ON emp.employees (company_id);And for the table we have RLS select policy:CREATE POLICY employee_select_policy ON emp.employees FOR SELECT  USING (    company_id = ANY(coalesce(string_to_array(current_setting('emp.authorized_read_company_ids', TRUE), ',')::INTEGER[], ARRAY []::INTEGER[]))  );When a very simple query is executed, for instance:SET emp.authorized_read_company_ids = '1, 2, 3, ..., 200';SELECT count(*) FROM emp.employees WHERE TRUE;   -- 68091 rowsThe query plan for this query reads:Aggregate  (cost=1096.02..1096.03 rows=1 width=8) (actual time=8.740..8.740 rows=1 loops=1)  Output: count(*)  Buffers: shared hit=778  ->  Index Only Scan using employees_company_id_idx on emp.employees  (cost=0.35..970.78 rows=50099 width=0) (actual time=0.124..4.976 rows=49953 loops=1)        Output: company_id        Index Cond: (employees.company_id = ANY (COALESCE((string_to_array(current_setting('emp.authorized_read_company_ids'::text, true), ','::text))::integer[], '{}'::integer[])))        Heap Fetches: 297        Buffers: shared hit=778Planning:  Buffers: shared hit=12Planning Time: 0.824 msExecution Time: 8.768 msThe problem rises when we make the RLS select policy condition a bit more complicated by adding admin checks inside RLS select policy:CREATE POLICY employee_select_policy ON emp.employees FOR SELECT  USING (       coalesce(nullif(current_setting('emp.is_admin', TRUE), ''), 'false')::BOOLEAN    OR company_id = ANY(coalesce(string_to_array(current_setting('emp.authorized_read_company_ids', TRUE), ',')::INTEGER[], ARRAY []::INTEGER[]))  );When the same simple query is executed:SET emp.is_admin = TRUE;SET emp.authorized_read_company_ids = '1, 2, 3, ..., 200';SELECT count(*) FROM emp.employees WHERE TRUE;   -- 68091 rowsThe query plan now reads:Aggregate  (cost=6238.51..6238.52 rows=1 width=8) (actual time=2156.271..2156.272 rows=1 loops=1)  Output: count(*)  Buffers: shared hit=367  ->  Index Only Scan using employees_company_id_idx on emp.employees  (cost=0.29..6099.16 rows=55740 width=0) (actual time=0.065..2151.939 rows=49953 loops=1)        Output: company_id        Filter: ((COALESCE(NULLIF(current_setting('emp.is_admin'::text, true), ''::text), 'false'::text))::boolean OR (employees.company_id = ANY (COALESCE((string_to_array(current_setting('emp.authorized_read_company_ids'::text, true), ','::text))::integer[], '{}'::integer[]))))        Rows Removed by Filter: 11430        Heap Fetches: 392        Buffers: shared hit=367Planning Time: 0.744 msExecution Time: 2156.302 msWe can see the performance has deteriorated horribly because the RLS is not using index any more for the company ids, the RLS scan happens for every single row in the result set against every single company id in the db context.With the size of table and the number of company ids inside the db context growing, the execution time becomes longer and longer.To summarise: We would like to have admin users run without any RLS 
restrictions, and normal users to have RLS enforced using an index based on company_ids. Unfortunately, we cannot have queries executed by admin users connect to the database as a different database user.Is there anything you could suggest?Thanks,Charles", "msg_date": "Tue, 1 Mar 2022 10:33:54 +1100", "msg_from": "Charles Huang <[email protected]>", "msg_from_op": true, "msg_subject": "RLS not using index scan but seq scan when condition gets a bit\n complicated" } ]
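One possible workaround for the policy above — a sketch only, with an illustrative helper name and the assumption that letting admin sessions match every company_id is acceptable — is to keep the policy in the company_id = ANY(...) shape that the first plan used as an Index Cond, and move the admin branch into the function that builds the array. If that function reads emp.employees itself it must run with row security bypassed (for example SECURITY DEFINER and owned by the table owner), otherwise the policy would recurse:

-- Illustrative helper; the name, ownership and SECURITY DEFINER choice are assumptions.
CREATE OR REPLACE FUNCTION emp.effective_company_ids()
RETURNS integer[]
LANGUAGE sql
STABLE
SECURITY DEFINER
AS $$
  SELECT CASE
           WHEN coalesce(nullif(current_setting('emp.is_admin', TRUE), ''), 'false')::boolean
             THEN (SELECT array_agg(DISTINCT company_id) FROM emp.employees)   -- admins match every company
           ELSE coalesce(string_to_array(current_setting('emp.authorized_read_company_ids', TRUE), ',')::integer[],
                         ARRAY[]::integer[])
         END
$$;

DROP POLICY IF EXISTS employee_select_policy ON emp.employees;
CREATE POLICY employee_select_policy ON emp.employees FOR SELECT
  USING (company_id = ANY (emp.effective_company_ids()));

Whether the planner still turns this into an Index Cond should be confirmed with EXPLAIN (ANALYZE, BUFFERS); for admin sessions an index scan covering every company id costs roughly what the seq scan of an unrestricted query would, while normal users keep the index-friendly form.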
[ { "msg_contents": "Hello all -\n\nI have a task which is simple at the first look. I have a table which\ncontains hierarchy of address objects starting with macro region end ends\nwith particular buildings. You can imagine how big is it. \n\nHere is short sample of table declaration:\n\n \n\ncreate table region_hierarchy(\n\n gid uuid not null default uuid_generate_v1mc(),\n\n parent_gid uuid null,\n\n region_code int2,\n\n .\n\n constraint pk_region_hierarchy primary key (gid),\n\n constraint fk_region_hierarchy_region_hierarchy_parent foreign key\n(parent_gid) references region_hierarchy(gid)\n\n);\n\n \n\nBeing an Oracle specialist, I planned to using same declarative partitioning\nby list on the region_code field as I did in Oracle database. I've carefully\nlooked thru docs/faqs/google/communities and found out that I must include\n\"gid\" field into partition key because a primary key field. Thus partition\nmethod \"by list\" is not appropriate method in this case and \"by range\"\neither. What I have left from partition methods? Hash? How can I create\npartitions by gid & region_code by hash? Feasible? Will it be working\nproperly (with partition pruning) when search criteria is by region_code\nonly? Same problem appears when there is simple serial \"id\" used as primary\nidentifier. Removing all constraints is not considered. I understand that\nsuch specific PostgreSQL partitioning implementation has done by tons of\nreasons but how I can implement partitioning for my EASY case? I see the\nonly legacy inheritance is left, right? Very sad if it's true.\n\nYour advices are very important.\n\nThanks in advance.\n\nAndrew.\n\n \n\n\nHello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? 
Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew.", "msg_date": "Tue, 1 Mar 2022 18:37:28 +0300", "msg_from": "\"Andrew Zakharov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Simple task with partitioning which I can't realize" }, { "msg_contents": "On Tue, Mar 1, 2022 at 8:37 AM Andrew Zakharov <[email protected]> wrote:\n\n> create table region_hierarchy(\n>\n> gid uuid not null default uuid_generate_v1mc(),\n>\n> parent_gid uuid null,\n>\n> region_code int2,\n>\n>\n>\n\n\n> I’ve carefully looked thru docs/faqs/google/communities and found out that\n> I must include “gid” field into partition key because a primary key field.\n>\n\nYes, you are coming up against the following limitation:\n\n\"Unique constraints (and hence primary keys) on partitioned tables must\ninclude all the partition key columns. This limitation exists because the\nindividual indexes making up the constraint can only directly enforce\nuniqueness within their own partitions; therefore, the partition structure\nitself must guarantee that there are not duplicates in different\npartitions.\"\n\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n\nThat limitation is independent of partitioning; i.e., the legacy\ninheritance option doesn't bypass it.\n\nThus, your true \"key\" is composite: (region, identifier). Thus you need to\nadd a \"parent_region_code\" column as well, redefine the PK as (region_code,\ngid), and the REFERENCES clause to link the two paired fields.\n\nYou can decide whether that is sufficient or if you want some added comfort\nin ensuring that a gid cannot appear in multiple regions by creating a\nsingle non-partitioned table containing all gid values and add a unique\nconstraint there.\n\nOr maybe allow for duplicates across region codes and save space by using a\nsmaller data type (int or bigint - while renaming the column to \"rid\" or\nsome such) - combined with having the non-partitioned reference table being\ndefined as (region_code, rid, gid).\n\nDavid J.\n\nOn Tue, Mar 1, 2022 at 8:37 AM Andrew Zakharov <[email protected]> wrote:create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field.Yes, you are coming up against the following limitation:\"Unique constraints (and hence primary keys) on partitioned tables must include all the partition key columns. This limitation exists because the individual indexes making up the constraint can only directly enforce uniqueness within their own partitions; therefore, the partition structure itself must guarantee that there are not duplicates in different partitions.\"https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVEThat limitation is independent of partitioning; i.e., the legacy inheritance option doesn't bypass it.Thus, your true \"key\" is composite: (region, identifier).  
Thus you need to add a \"parent_region_code\" column as well, redefine the PK as (region_code, gid), and the REFERENCES clause to link the two paired fields.You can decide whether that is sufficient or if you want some added comfort in ensuring that a gid cannot appear in multiple regions by creating a single non-partitioned table containing all gid values and add a unique constraint there.Or maybe allow for duplicates across region codes and save space by using a smaller data type (int or bigint - while renaming the column to \"rid\" or some such) - combined with having the non-partitioned reference table being defined as (region_code, rid, gid).David J.", "msg_date": "Tue, 1 Mar 2022 08:54:04 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" }, { "msg_contents": "Hi,\n\nis there any chance (risk ?) that a given gid be present in more than one\nregion ?\nif not (or if you implement it via a dedicated, non partition table),\n\nyou may create a simple table partitioned by region, and create unique\nindexes for each partition.\nthis is NOT equivalent to a unique constraint at global table level, of\ncourse.\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:\n\n> Hello all –\n>\n> I have a task which is simple at the first look. I have a table which\n> contains hierarchy of address objects starting with macro region end ends\n> with particular buildings. You can imagine how big is it.\n>\n> Here is short sample of table declaration:\n>\n>\n>\n> create table region_hierarchy(\n>\n> gid uuid not null default uuid_generate_v1mc(),\n>\n> parent_gid uuid null,\n>\n> region_code int2,\n>\n> …\n>\n> constraint pk_region_hierarchy primary key (gid),\n>\n> constraint fk_region_hierarchy_region_hierarchy_parent foreign key\n> (parent_gid) references region_hierarchy(gid)\n>\n> );\n>\n>\n>\n> Being an Oracle specialist, I planned to using same declarative\n> partitioning by list on the region_code field as I did in Oracle database.\n> I’ve carefully looked thru docs/faqs/google/communities and found out that\n> I must include “gid” field into partition key because a primary key field.\n> Thus partition method “by list” is not appropriate method in this case and\n> “by range” either. What I have left from partition methods? Hash? How can I\n> create partitions by gid & region_code by hash? Feasible? Will it be\n> working properly (with partition pruning) when search criteria is by\n> region_code only? Same problem appears when there is simple serial “id”\n> used as primary identifier. Removing all constraints is not considered. I\n> understand that such specific PostgreSQL partitioning implementation has\n> done by tons of reasons but how I can implement partitioning for my EASY\n> case? I see the only legacy inheritance is left, right? Very sad if it’s\n> true.\n>\n> Your advices are very important.\n>\n> Thanks in advance.\n>\n> Andrew.\n>\n>\n>\n\nHi,is there any chance (risk ?) 
that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table), you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.comOn Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew.", "msg_date": "Tue, 1 Mar 2022 17:28:45 +0100", "msg_from": "Marc Millas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" }, { "msg_contents": "David, - yes, creation composite foreign/primary key is not a problem. But the main question is what method should I use for partitioning by composite key gid, region_code? The partition method itself created not only for faster data access but for better administration. The administration like a truncate/insert is a main reason why I split the data for my DWH case. If the only hash method is left I cannot administer the partitions separately this way. But anyway, could you please provide your vision the brief declaration for main table and partition?\n\nThanks.\n\nAndrew.\n\n \n\nFrom: David G. 
Johnston <[email protected]> \nSent: Tuesday, March 01, 2022 6:54 PM\nTo: Andrew Zakharov <[email protected]>\nCc: Pgsql Performance <[email protected]>\nSubject: Re: Simple task with partitioning which I can't realize\n\n \n\nOn Tue, Mar 1, 2022 at 8:37 AM Andrew Zakharov <[email protected] <mailto:[email protected]> > wrote:\n\ncreate table region_hierarchy(\n\n gid uuid not null default uuid_generate_v1mc(),\n\n parent_gid uuid null,\n\n region_code int2,\n\n \n\n \n\nI’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field.\n\n \n\nYes, you are coming up against the following limitation:\n\n \n\n\"Unique constraints (and hence primary keys) on partitioned tables must include all the partition key columns. This limitation exists because the individual indexes making up the constraint can only directly enforce uniqueness within their own partitions; therefore, the partition structure itself must guarantee that there are not duplicates in different partitions.\"\n\n \n\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n\n \n\nThat limitation is independent of partitioning; i.e., the legacy inheritance option doesn't bypass it.\n\n \n\nThus, your true \"key\" is composite: (region, identifier). Thus you need to add a \"parent_region_code\" column as well, redefine the PK as (region_code, gid), and the REFERENCES clause to link the two paired fields.\n\n \n\nYou can decide whether that is sufficient or if you want some added comfort in ensuring that a gid cannot appear in multiple regions by creating a single non-partitioned table containing all gid values and add a unique constraint there.\n\n \n\nOr maybe allow for duplicates across region codes and save space by using a smaller data type (int or bigint - while renaming the column to \"rid\" or some such) - combined with having the non-partitioned reference table being defined as (region_code, rid, gid).\n\n \n\nDavid J.\n\n \n\n\nDavid, - yes, creation composite foreign/primary key is not a problem. But the main question is what method should I use for partitioning by composite key gid, region_code? The partition method itself created not only for faster data access but for better administration. The administration like a truncate/insert is a main reason why I split the data for my DWH case. If the only hash method is left I cannot administer the partitions separately this way. But anyway, could you please provide your vision the brief declaration for main table and partition?Thanks.Andrew. From: David G. Johnston <[email protected]> Sent: Tuesday, March 01, 2022 6:54 PMTo: Andrew Zakharov <[email protected]>Cc: Pgsql Performance <[email protected]>Subject: Re: Simple task with partitioning which I can't realize On Tue, Mar 1, 2022 at 8:37 AM Andrew Zakharov <[email protected]> wrote:create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Yes, you are coming up against the following limitation: \"Unique constraints (and hence primary keys) on partitioned tables must include all the partition key columns. 
This limitation exists because the individual indexes making up the constraint can only directly enforce uniqueness within their own partitions; therefore, the partition structure itself must guarantee that there are not duplicates in different partitions.\" https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE That limitation is independent of partitioning; i.e., the legacy inheritance option doesn't bypass it. Thus, your true \"key\" is composite: (region, identifier).  Thus you need to add a \"parent_region_code\" column as well, redefine the PK as (region_code, gid), and the REFERENCES clause to link the two paired fields. You can decide whether that is sufficient or if you want some added comfort in ensuring that a gid cannot appear in multiple regions by creating a single non-partitioned table containing all gid values and add a unique constraint there. Or maybe allow for duplicates across region codes and save space by using a smaller data type (int or bigint - while renaming the column to \"rid\" or some such) - combined with having the non-partitioned reference table being defined as (region_code, rid, gid). David J.", "msg_date": "Tue, 1 Mar 2022 19:37:09 +0300", "msg_from": "\"Andrew Zakharov\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Simple task with partitioning which I can't realize" }, { "msg_contents": "On Tue, Mar 1, 2022 at 9:37 AM Andrew Zakharov <[email protected]> wrote:\n\n> David, - yes, creation composite foreign/primary key is not a problem. But\n> the main question is what method should I use for partitioning by composite\n> key gid, region_code?\n>\n\nThe convention here is to inline or bottom-post responses.\n\nYour original plan - list partitions by region_code. You couldn't do that\nbefore because you weren't seeing the region_code as being part of your PK\nand all partition columns must be part of the PK. My suggestion is that\ninstead of figuring out how to work around that limitation (not that I\nthink there is a good one to be had) you accept it and just add region_code\nto the PK.\n\nDavid J.\n\nOn Tue, Mar 1, 2022 at 9:37 AM Andrew Zakharov <[email protected]> wrote:David, - yes, creation composite foreign/primary key is not a problem. But the main question is what method should I use for partitioning by composite key gid, region_code?The convention here is to inline or bottom-post responses.Your original plan - list partitions by region_code.  You couldn't do that before because you weren't seeing the region_code as being part of your PK and all partition columns must be part of the PK.  My suggestion is that instead of figuring out how to work around that limitation (not that I think there is a good one to be had) you accept it and just add region_code to the PK.David J.", "msg_date": "Tue, 1 Mar 2022 09:43:37 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" }, { "msg_contents": "Hi Marc –\n\nSince there is a DWH fed by ETL there no risks to have same gids in different region partitions. 
I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.\n\nThanks.\n\nAndrew.\n\n \n\nFrom: Marc Millas <[email protected]> \nSent: Tuesday, March 01, 2022 7:29 PM\nTo: Andrew Zakharov <[email protected]>\nCc: [email protected]\nSubject: Re: Simple task with partitioning which I can't realize\n\n \n\nHi,\n\n \n\nis there any chance (risk ?) that a given gid be present in more than one region ?\n\nif not (or if you implement it via a dedicated, non partition table), \n\n \n\nyou may create a simple table partitioned by region, and create unique indexes for each partition.\n\nthis is NOT equivalent to a unique constraint at global table level, of course.\n\n\n\n\nMarc MILLAS\n\nSenior Architect\n\n+33607850334\n\nwww.mokadb.com <http://www.mokadb.com> \n\n \n\n \n\n \n\nOn Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected] <mailto:[email protected]> > wrote:\n\nHello all –\n\nI have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. \n\nHere is short sample of table declaration:\n\n \n\ncreate table region_hierarchy(\n\n gid uuid not null default uuid_generate_v1mc(),\n\n parent_gid uuid null,\n\n region_code int2,\n\n …\n\n constraint pk_region_hierarchy primary key (gid),\n\n constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)\n\n);\n\n \n\nBeing an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.\n\nYour advices are very important.\n\nThanks in advance.\n\nAndrew.\n\n \n\n\nHi Marc –Since there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.Thanks.Andrew. From: Marc Millas <[email protected]> Sent: Tuesday, March 01, 2022 7:29 PMTo: Andrew Zakharov <[email protected]>Cc: [email protected]: Re: Simple task with partitioning which I can't realize Hi, is there any chance (risk ?) that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table),  you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. 
I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew.", "msg_date": "Tue, 1 Mar 2022 19:45:40 +0300", "msg_from": "\"Andrew Zakharov\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Simple task with partitioning which I can't realize" }, { "msg_contents": "Andrew,\n\ncontrary to Oracle, in postgres you can add the indexes and/or the\nconstraints which are meaningful to you at partition level.\nI was not saying NOT to create keys, but I was saying to create them at\npartition level.\n\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:\n\n> Hi Marc –\n>\n> Since there is a DWH fed by ETL there no risks to have same gids in\n> different region partitions. I considered simple partitioned table w/o any\n> keys but I’d believed there is a solutions with keys that’s why I’m seeking\n> the clue.\n>\n> Thanks.\n>\n> Andrew.\n>\n>\n>\n> *From:* Marc Millas <[email protected]>\n> *Sent:* Tuesday, March 01, 2022 7:29 PM\n> *To:* Andrew Zakharov <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: Simple task with partitioning which I can't realize\n>\n>\n>\n> Hi,\n>\n>\n>\n> is there any chance (risk ?) that a given gid be present in more than one\n> region ?\n>\n> if not (or if you implement it via a dedicated, non partition table),\n>\n>\n>\n> you may create a simple table partitioned by region, and create unique\n> indexes for each partition.\n>\n> this is NOT equivalent to a unique constraint at global table level, of\n> course.\n>\n>\n> Marc MILLAS\n>\n> Senior Architect\n>\n> +33607850334\n>\n> www.mokadb.com\n>\n>\n>\n>\n>\n>\n>\n> On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:\n>\n> Hello all –\n>\n> I have a task which is simple at the first look. I have a table which\n> contains hierarchy of address objects starting with macro region end ends\n> with particular buildings. 
You can imagine how big is it.\n>\n> Here is short sample of table declaration:\n>\n>\n>\n> create table region_hierarchy(\n>\n> gid uuid not null default uuid_generate_v1mc(),\n>\n> parent_gid uuid null,\n>\n> region_code int2,\n>\n> …\n>\n> constraint pk_region_hierarchy primary key (gid),\n>\n> constraint fk_region_hierarchy_region_hierarchy_parent foreign key\n> (parent_gid) references region_hierarchy(gid)\n>\n> );\n>\n>\n>\n> Being an Oracle specialist, I planned to using same declarative\n> partitioning by list on the region_code field as I did in Oracle database.\n> I’ve carefully looked thru docs/faqs/google/communities and found out that\n> I must include “gid” field into partition key because a primary key field.\n> Thus partition method “by list” is not appropriate method in this case and\n> “by range” either. What I have left from partition methods? Hash? How can I\n> create partitions by gid & region_code by hash? Feasible? Will it be\n> working properly (with partition pruning) when search criteria is by\n> region_code only? Same problem appears when there is simple serial “id”\n> used as primary identifier. Removing all constraints is not considered. I\n> understand that such specific PostgreSQL partitioning implementation has\n> done by tons of reasons but how I can implement partitioning for my EASY\n> case? I see the only legacy inheritance is left, right? Very sad if it’s\n> true.\n>\n> Your advices are very important.\n>\n> Thanks in advance.\n>\n> Andrew.\n>\n>\n>\n>\n\nAndrew,contrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.I was not saying NOT to create keys, but I was saying to create them at partition level.Marc MILLASSenior Architect+33607850334www.mokadb.comOn Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:Hi Marc –Since there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.Thanks.Andrew. From: Marc Millas <[email protected]> Sent: Tuesday, March 01, 2022 7:29 PMTo: Andrew Zakharov <[email protected]>Cc: [email protected]: Re: Simple task with partitioning which I can't realize Hi, is there any chance (risk ?) that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table),  you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. 
I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew.", "msg_date": "Tue, 1 Mar 2022 18:59:37 +0100", "msg_from": "Marc Millas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" }, { "msg_contents": "Yes, Marc –\n\nI understood you properly and totally. I was just saying about the hope that there is a trick to keep constraints on the base table level for my case.\n\nThanks a bunch.\n\nAndrew.\n\n \n\n \n\nOn Tue, Mar 01, 2022 at 9:00 PM Marc Millas <[email protected]> wrote:\n\n\n\nAndrew,\n\n \n\ncontrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.\n\nI was not saying NOT to create keys, but I was saying to create them at partition level.\n\n \n\n\n\n\nMarc MILLAS\n\nSenior Architect\n\n+33607850334\n\nwww.mokadb.com <http://www.mokadb.com> \n\n \n\n \n\n \n\nOn Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected] <mailto:[email protected]> > wrote:\n\nHi Marc –\n\nSince there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.\n\nThanks.\n\nAndrew.\n\n \n\nFrom: Marc Millas <[email protected] <mailto:[email protected]> > \nSent: Tuesday, March 01, 2022 7:29 PM\nTo: Andrew Zakharov <[email protected] <mailto:[email protected]> >\nCc: [email protected] <mailto:[email protected]> \nSubject: Re: Simple task with partitioning which I can't realize\n\n \n\nHi,\n\n \n\nis there any chance (risk ?) that a given gid be present in more than one region ?\n\nif not (or if you implement it via a dedicated, non partition table), \n\n \n\nyou may create a simple table partitioned by region, and create unique indexes for each partition.\n\nthis is NOT equivalent to a unique constraint at global table level, of course.\n\n\n\n\nMarc MILLAS\n\nSenior Architect\n\n+33607850334\n\nwww.mokadb.com <http://www.mokadb.com> \n\n \n\n \n\n \n\nOn Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected] <mailto:[email protected]> > wrote:\n\nHello all –\n\nI have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. 
\n\nHere is short sample of table declaration:\n\n \n\ncreate table region_hierarchy(\n\n gid uuid not null default uuid_generate_v1mc(),\n\n parent_gid uuid null,\n\n region_code int2,\n\n …\n\n constraint pk_region_hierarchy primary key (gid),\n\n constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)\n\n);\n\n \n\nBeing an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.\n\nYour advices are very important.\n\nThanks in advance.\n\nAndrew.\n\n \n\n\nYes, Marc –I understood you properly and totally. I was just saying about the hope that there is a trick to keep constraints on the base table level for my case.Thanks a bunch.Andrew.  On Tue, Mar 01, 2022 at 9:00 PM  Marc Millas <[email protected]> wrote:Andrew, contrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.I was not saying NOT to create keys, but I was saying to create them at partition level. Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:Hi Marc –Since there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.Thanks.Andrew. From: Marc Millas <[email protected]> Sent: Tuesday, March 01, 2022 7:29 PMTo: Andrew Zakharov <[email protected]>Cc: [email protected]: Re: Simple task with partitioning which I can't realize Hi, is there any chance (risk ?) that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table),  you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. 
Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew.", "msg_date": "Tue, 1 Mar 2022 21:43:43 +0300", "msg_from": "\"Andrew Zakharov\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Simple task with partitioning which I can't realize" }, { "msg_contents": "De : Marc Millas <[email protected]> \nEnvoyé : mardi 1 mars 2022 19:00\nÀ : Andrew Zakharov <[email protected]>\nCc : [email protected]\nObjet : Re: Simple task with partitioning which I can't realize\n\n \n\nAndrew,\n\n \n\ncontrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.\n\nI was not saying NOT to create keys, but I was saying to create them at partition level.\n\n \n\n\n\n\nMarc MILLAS\n\nSenior Architect\n\n+33607850334\n\n <http://www.mokadb.com> www.mokadb.com\n\n \n\n \n\n \n\nOn Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov < <mailto:[email protected]> [email protected]> wrote:\n\nHi Marc –\n\nSince there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.\n\nThanks.\n\nAndrew.\n\n \n\nFrom: Marc Millas <[email protected] <mailto:[email protected]> > \nSent: Tuesday, March 01, 2022 7:29 PM\nTo: Andrew Zakharov <[email protected] <mailto:[email protected]> >\nCc: [email protected] <mailto:[email protected]> \nSubject: Re: Simple task with partitioning which I can't realize\n\n \n\nHi,\n\n \n\nis there any chance (risk ?) that a given gid be present in more than one region ?\n\nif not (or if you implement it via a dedicated, non partition table), \n\n \n\nyou may create a simple table partitioned by region, and create unique indexes for each partition.\n\nthis is NOT equivalent to a unique constraint at global table level, of course.\n\n\n\n\nMarc MILLAS\n\nSenior Architect\n\n+33607850334\n\nwww.mokadb.com <http://www.mokadb.com> \n\n \n\n \n\n \n\nOn Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected] <mailto:[email protected]> > wrote:\n\nHello all –\n\nI have a task which is simple at the first look. 
I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. \n\nHere is short sample of table declaration:\n\n \n\ncreate table region_hierarchy(\n\n gid uuid not null default uuid_generate_v1mc(),\n\n parent_gid uuid null,\n\n region_code int2,\n\n …\n\n constraint pk_region_hierarchy primary key (gid),\n\n constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)\n\n);\n\n \n\nBeing an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.\n\nYour advices are very important.\n\nThanks in advance.\n\nAndrew.\n\n _________________________________________________________________________________________\n\nHi\n\nTo say it using Oracle vocabulary, PostgreSQL doesn’t offer GLOBAL INDEXES. Even when we create an index on the partitioned table which is now possible, PostgreSQL create LOCAL indexes on each partition separately.\n\nThere is no global indexes on partitioned tables in PostgreSQL. So it is not simple to offer uniqueness at global level using indexes. That is why, it is required that partition key columns be part of the primary key AND any other UNIQE constraint.\n\n \n\nMichel SALAIS\n\n\nDe : Marc Millas <[email protected]> Envoyé : mardi 1 mars 2022 19:00À : Andrew Zakharov <[email protected]>Cc : [email protected] : Re: Simple task with partitioning which I can't realize Andrew, contrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.I was not saying NOT to create keys, but I was saying to create them at partition level. Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:Hi Marc –Since there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.Thanks.Andrew. From: Marc Millas <[email protected]> Sent: Tuesday, March 01, 2022 7:29 PMTo: Andrew Zakharov <[email protected]>Cc: [email protected]: Re: Simple task with partitioning which I can't realize Hi, is there any chance (risk ?) 
that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table),  you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew. _________________________________________________________________________________________HiTo say it using Oracle vocabulary, PostgreSQL doesn’t offer GLOBAL INDEXES. Even when we create an index on the partitioned table which is now possible, PostgreSQL create LOCAL indexes on each partition separately.There is no global indexes on partitioned tables in PostgreSQL. So it is not simple to offer uniqueness at global level using indexes. That is why, it is required that partition key columns be part of the primary key AND any other UNIQE constraint. 
Michel SALAIS", "msg_date": "Wed, 2 Mar 2022 08:28:34 +0100", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Simple task with partitioning which I can't realize" }, { "msg_contents": "If you are wanting to ensure uniqueness for the original oracle pk across\nthe partitions, you could look into adding an advisory trigger to the table.\n\nOn Wed, Mar 2, 2022, 2:28 AM Michel SALAIS <[email protected]> wrote:\n\n> *De :* Marc Millas <[email protected]>\n> *Envoyé :* mardi 1 mars 2022 19:00\n> *À :* Andrew Zakharov <[email protected]>\n> *Cc :* [email protected]\n> *Objet :* Re: Simple task with partitioning which I can't realize\n>\n>\n>\n> Andrew,\n>\n>\n>\n> contrary to Oracle, in postgres you can add the indexes and/or the\n> constraints which are meaningful to you at partition level.\n>\n> I was not saying NOT to create keys, but I was saying to create them at\n> partition level.\n>\n>\n>\n>\n> Marc MILLAS\n>\n> Senior Architect\n>\n> +33607850334\n>\n> www.mokadb.com\n>\n>\n>\n>\n>\n>\n>\n> On Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:\n>\n> Hi Marc –\n>\n> Since there is a DWH fed by ETL there no risks to have same gids in\n> different region partitions. I considered simple partitioned table w/o any\n> keys but I’d believed there is a solutions with keys that’s why I’m seeking\n> the clue.\n>\n> Thanks.\n>\n> Andrew.\n>\n>\n>\n> *From:* Marc Millas <[email protected]>\n> *Sent:* Tuesday, March 01, 2022 7:29 PM\n> *To:* Andrew Zakharov <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: Simple task with partitioning which I can't realize\n>\n>\n>\n> Hi,\n>\n>\n>\n> is there any chance (risk ?) that a given gid be present in more than one\n> region ?\n>\n> if not (or if you implement it via a dedicated, non partition table),\n>\n>\n>\n> you may create a simple table partitioned by region, and create unique\n> indexes for each partition.\n>\n> this is NOT equivalent to a unique constraint at global table level, of\n> course.\n>\n>\n> Marc MILLAS\n>\n> Senior Architect\n>\n> +33607850334\n>\n> www.mokadb.com\n>\n>\n>\n>\n>\n>\n>\n> On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:\n>\n> Hello all –\n>\n> I have a task which is simple at the first look. I have a table which\n> contains hierarchy of address objects starting with macro region end ends\n> with particular buildings. You can imagine how big is it.\n>\n> Here is short sample of table declaration:\n>\n>\n>\n> create table region_hierarchy(\n>\n> gid uuid not null default uuid_generate_v1mc(),\n>\n> parent_gid uuid null,\n>\n> region_code int2,\n>\n> …\n>\n> constraint pk_region_hierarchy primary key (gid),\n>\n> constraint fk_region_hierarchy_region_hierarchy_parent foreign key\n> (parent_gid) references region_hierarchy(gid)\n>\n> );\n>\n>\n>\n> Being an Oracle specialist, I planned to using same declarative\n> partitioning by list on the region_code field as I did in Oracle database.\n> I’ve carefully looked thru docs/faqs/google/communities and found out that\n> I must include “gid” field into partition key because a primary key field.\n> Thus partition method “by list” is not appropriate method in this case and\n> “by range” either. What I have left from partition methods? Hash? How can I\n> create partitions by gid & region_code by hash? Feasible? Will it be\n> working properly (with partition pruning) when search criteria is by\n> region_code only? 
Same problem appears when there is simple serial “id”\n> used as primary identifier. Removing all constraints is not considered. I\n> understand that such specific PostgreSQL partitioning implementation has\n> done by tons of reasons but how I can implement partitioning for my EASY\n> case? I see the only legacy inheritance is left, right? Very sad if it’s\n> true.\n>\n> Your advices are very important.\n>\n> Thanks in advance.\n>\n> Andrew.\n>\n>\n> _________________________________________________________________________________________\n>\n> Hi\n>\n> To say it using Oracle vocabulary, PostgreSQL doesn’t offer GLOBAL\n> INDEXES. Even when we create an index on the partitioned table which is now\n> possible, PostgreSQL create LOCAL indexes on each partition separately.\n>\n> There is no global indexes on partitioned tables in PostgreSQL. So it is\n> not simple to offer uniqueness at global level using indexes. That is why,\n> it is required that partition key columns be part of the primary key AND\n> any other UNIQE constraint.\n>\n>\n>\n> *Michel SALAIS*\n>\n\nIf you are wanting to ensure uniqueness for the original oracle pk across the partitions, you could look into adding an advisory trigger to the table.On Wed, Mar 2, 2022, 2:28 AM Michel SALAIS <[email protected]> wrote:De : Marc Millas <[email protected]> Envoyé : mardi 1 mars 2022 19:00À : Andrew Zakharov <[email protected]>Cc : [email protected] : Re: Simple task with partitioning which I can't realize Andrew, contrary to Oracle, in postgres you can add the indexes and/or the constraints which are meaningful to you at partition level.I was not saying NOT to create keys, but I was saying to create them at partition level. Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 5:45 PM Andrew Zakharov <[email protected]> wrote:Hi Marc –Since there is a DWH fed by ETL there no risks to have same gids in different region partitions. I considered simple partitioned table w/o any keys but I’d believed there is a solutions with keys that’s why I’m seeking the clue.Thanks.Andrew. From: Marc Millas <[email protected]> Sent: Tuesday, March 01, 2022 7:29 PMTo: Andrew Zakharov <[email protected]>Cc: [email protected]: Re: Simple task with partitioning which I can't realize Hi, is there any chance (risk ?) that a given gid be present in more than one region ?if not (or if you implement it via a dedicated, non partition table),  you may create a simple table partitioned by region, and create unique indexes for each partition.this is NOT equivalent to a unique constraint at global table level, of course.Marc MILLASSenior Architect+33607850334www.mokadb.com   On Tue, Mar 1, 2022 at 4:37 PM Andrew Zakharov <[email protected]> wrote:Hello all –I have a task which is simple at the first look. I have a table which contains hierarchy of address objects starting with macro region end ends with particular buildings. You can imagine how big is it. Here is short sample of table declaration: create table region_hierarchy(  gid uuid not null default uuid_generate_v1mc(),  parent_gid uuid null,  region_code int2,  …    constraint pk_region_hierarchy primary key (gid),    constraint fk_region_hierarchy_region_hierarchy_parent foreign key (parent_gid) references region_hierarchy(gid)); Being an Oracle specialist, I planned to using same declarative partitioning by list on the region_code field as I did in Oracle database. 
I’ve carefully looked thru docs/faqs/google/communities and found out that I must include “gid” field into partition key because a primary key field. Thus partition method “by list” is not appropriate method in this case and “by range” either. What I have left from partition methods? Hash? How can I create partitions by gid & region_code by hash? Feasible? Will it be working properly (with partition pruning) when search criteria is by region_code only? Same problem appears when there is simple serial “id” used as primary identifier. Removing all constraints is not considered. I understand that such specific PostgreSQL partitioning implementation has done by tons of reasons but how I can implement partitioning for my EASY case? I see the only legacy inheritance is left, right? Very sad if it’s true.Your advices are very important.Thanks in advance.Andrew. _________________________________________________________________________________________HiTo say it using Oracle vocabulary, PostgreSQL doesn’t offer GLOBAL INDEXES. Even when we create an index on the partitioned table which is now possible, PostgreSQL create LOCAL indexes on each partition separately.There is no global indexes on partitioned tables in PostgreSQL. So it is not simple to offer uniqueness at global level using indexes. That is why, it is required that partition key columns be part of the primary key AND any other UNIQE constraint. Michel SALAIS", "msg_date": "Wed, 2 Mar 2022 07:14:39 -0500", "msg_from": "Geri Wright <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" }, { "msg_contents": "On 3/1/22 10:54, David G. Johnston wrote:\n> On Tue, Mar 1, 2022 at 8:37 AM Andrew Zakharov <[email protected]> wrote:\n>\n> create table region_hierarchy(\n>\n>   gid uuid not null default uuid_generate_v1mc(),\n>\n>   parent_gid uuid null,\n>\n>   region_code int2,\n>\n> I’ve carefully looked thru docs/faqs/google/communities and found\n> out that I must include “gid” field into partition key because a\n> primary key field.\n>\n>\n> Yes, you are coming up against the following limitation:\n>\n> \"Unique constraints (and hence primary keys) on partitioned tables \n> must include all the partition key columns. This limitation exists \n> because the individual indexes making up the constraint can only \n> directly enforce uniqueness within their own partitions; therefore, \n> the partition structure itself must guarantee that there are not \n> duplicates in different partitions.\"\n>\n> https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n>\n> That limitation is independent of partitioning; i.e., the legacy \n> inheritance option doesn't bypass it.\n>\n> Thus, your true \"key\" is composite: (region, identifier).  
Thus you \n> need to add a \"parent_region_code\" column as well, redefine the PK as \n> (region_code, gid), and the REFERENCES clause to link the two paired \n> fields.\n>\n> You can decide whether that is sufficient or if you want some added \n> comfort in ensuring that a gid cannot appear in multiple regions by \n> creating a single non-partitioned table containing all gid values and \n> add a unique constraint there.\n>\n> Or maybe allow for duplicates across region codes and save space by \n> using a smaller data type (int or bigint - while renaming the column \n> to \"rid\" or some such) - combined with having the non-partitioned \n> reference table being defined as (region_code, rid, gid).\n>\n> David J.\n>\nHi David,\n\nAre there any concrete plans to address that particular limitation? That \nlimitation can be re-stated as \"PostgreSQL doesn't support global \nindexes on the partitioned tables\" and I've have also run into it. My \nway around it was not to use partitioning but to use much larger machine \nwith the NVME disks, which can handle the necesary I/O. Are there any \nplans to allow global indexes? I am aware that this is not a small \nchange but is the only real advantage that Oracle holds over PostgreSQL.\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 3/1/22 10:54, David G. Johnston\n wrote:\n\n\n\n\n\nOn Tue, Mar\n 1, 2022 at 8:37 AM Andrew Zakharov <[email protected]>\n wrote:\n\n\n\n\n\n\ncreate\n table region_hierarchy(\n\n  gid uuid not null default\n uuid_generate_v1mc(),\n  parent_gid uuid null,\n  region_code int2,\n \n\n\n\n \n\n\n\nI’ve carefully\n looked thru docs/faqs/google/communities and found\n out that I must include “gid” field into partition\n key because a primary key field.\n\n\n\n\n\n\nYes, you\n are coming up against the following limitation:\n\n\n\"Unique\n constraints (and hence primary keys) on partitioned tables\n must include all the partition key columns. This\n limitation exists because the individual indexes making up\n the constraint can only directly enforce uniqueness within\n their own partitions; therefore, the partition structure\n itself must guarantee that there are not duplicates in\n different partitions.\"\n\n\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n\n\n\n\nThat\n limitation is independent of partitioning; i.e., the legacy\n inheritance option doesn't bypass it.\n\n\nThus, your\n true \"key\" is composite: (region, identifier).  Thus you\n need to add a \"parent_region_code\" column as well, redefine\n the PK as (region_code, gid), and the REFERENCES clause to\n link the two paired fields.\n\n\nYou can\n decide whether that is sufficient or if you want some added\n comfort in ensuring that a gid cannot appear in multiple\n regions by creating a single non-partitioned table\n containing all gid values and add a unique constraint there.\n\n\nOr maybe\n allow for duplicates across region codes and save space by\n using a smaller data type (int or bigint - while renaming\n the column to \"rid\" or some such) - combined with having the\n non-partitioned reference table being defined as\n (region_code, rid, gid).\n\n\nDavid J.\n\n\n\n\n\nHi David,\nAre there any concrete plans to address that particular\n limitation? That limitation can be re-stated as \"PostgreSQL\n doesn't support global indexes on the partitioned tables\" and I've\n have also run into it. 
My way around it was not to use\n partitioning but to use much larger machine with the NVME disks,\n which can handle the necesary I/O. Are there any plans to allow\n global indexes? I am aware that this is not a small change but is\n the only real advantage that Oracle holds over PostgreSQL.\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Wed, 2 Mar 2022 09:04:20 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple task with partitioning which I can't realize" } ]
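A minimal sketch of the composite-key approach described in the thread above, for PostgreSQL 12 or later (where a partitioned table can also be the target of a foreign key). The partition values and the abbreviated column list are hypothetical illustrations, not the original poster's full schema: the partition key (region_code) is folded into the primary key, and the self-referencing foreign key is widened to carry the parent's region code as well.

create table region_hierarchy(
    gid uuid not null default uuid_generate_v1mc(),
    parent_gid uuid null,
    parent_region_code int2 null,
    region_code int2 not null,
    constraint pk_region_hierarchy primary key (region_code, gid),
    constraint fk_region_hierarchy_region_hierarchy_parent
        foreign key (parent_region_code, parent_gid)
        references region_hierarchy (region_code, gid)
) partition by list (region_code);

-- hypothetical partitions, one per region code
create table region_hierarchy_r77 partition of region_hierarchy for values in (77);
create table region_hierarchy_r78 partition of region_hierarchy for values in (78);

With this layout a predicate on region_code alone still allows partition pruning, because region_code is the partition key. What it does not give is global uniqueness of gid on its own; if that is required, the separate non-partitioned table of gid values with a unique constraint, as suggested above, remains the usual workaround.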
[ { "msg_contents": "Hello,\n\nWe have a pg_restore which fails due to RAM over-consumption of the \ncorresponding PG backend, which ends-up with OOM killer.\n\nThe table has one PK, one index, and 3 FK constraints, active while \nrestoring.\nThe dump contains over 200M rows for that table and is in custom format, \nwhich corresponds to 37 GB of total relation size in the original DB.\n\nWhile importing, one can see the RSS + swap increasing linearly for the \nbackend (executing the COPY)\n\nOn my machine (quite old PC), it failed after 16 hours, while the disk \nusage was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n\nIf we do the same test, suppressing firstly the 5 constraints on the \ntable, the restore takes less than 15 minutes !\n\nThis was tested on both PG 14.2 and PG 13.6 (linux 64-bit machines).\n\nIt there a memory leak or that is normal that a bacend process may \nexhaust the RAM to such an extent ?\n\nThanks\n\nRegards\n\n\n\n\n\n\nHello,\n\n We have a pg_restore which fails due to RAM over-consumption\n of the corresponding PG backend, which ends-up with OOM\n killer.\n\n The table has one PK, one index, and 3 FK constraints, active\n while restoring.\n The dump contains over 200M rows for that table and is in\n custom format, which corresponds to 37 GB of total relation\n size in the original DB.\n\n While importing, one can see the RSS + swap increasing\n linearly for the backend (executing the COPY)\n\n On my machine (quite old PC), it failed after 16 hours, while\n the disk usage was reaching 26 GB and memory usage was 9.1g\n (RSS+swap)\n\n If we do the same test, suppressing firstly the 5 constraints\n on the table, the restore takes less than 15 minutes !\n\n This was tested on both PG 14.2 and PG 13.6 (linux 64-bit\n machines).\n\n It there a memory leak or that is normal that a bacend process\n may exhaust the RAM to such an extent ?\n\n Thanks\n\n Regards", "msg_date": "Thu, 3 Mar 2022 09:59:03 +0100", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "OOM killer while pg_restore" }, { "msg_contents": "Em qui., 3 de mar. de 2022 às 05:59, Marc Rechté <[email protected]> escreveu:\n\n> Hello,\n>\n> We have a pg_restore which fails due to RAM over-consumption of the\n> corresponding PG backend, which ends-up with OOM killer.\n>\n> The table has one PK, one index, and 3 FK constraints, active while\n> restoring.\n> The dump contains over 200M rows for that table and is in custom format,\n> which corresponds to 37 GB of total relation size in the original DB.\n>\n> While importing, one can see the RSS + swap increasing linearly for the\n> backend (executing the COPY)\n>\n> On my machine (quite old PC), it failed after 16 hours, while the disk\n> usage was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n>\n> If we do the same test, suppressing firstly the 5 constraints on the\n> table, the restore takes less than 15 minutes !\n>\n> This was tested on both PG 14.2 and PG 13.6 (linux 64-bit machines).\n>\n> It there a memory leak or that is normal that a bacend process may exhaust\n> the RAM to such an extent ?\n>\nHi Marc,\nCan you post the server logs?\n\nregards,\nRanier Vilela\n\nEm qui., 3 de mar. 
de 2022 às 05:59, Marc Rechté <[email protected]> escreveu:\n\nHello,\n\n We have a pg_restore which fails due to RAM over-consumption\n of the corresponding PG backend, which ends-up with OOM\n killer.\n\n The table has one PK, one index, and 3 FK constraints, active\n while restoring.\n The dump contains over 200M rows for that table and is in\n custom format, which corresponds to 37 GB of total relation\n size in the original DB.\n\n While importing, one can see the RSS + swap increasing\n linearly for the backend (executing the COPY)\n\n On my machine (quite old PC), it failed after 16 hours, while\n the disk usage was reaching 26 GB and memory usage was 9.1g\n (RSS+swap)\n\n If we do the same test, suppressing firstly the 5 constraints\n on the table, the restore takes less than 15 minutes !\n\n This was tested on both PG 14.2 and PG 13.6 (linux 64-bit\n machines).\n\n It there a memory leak or that is normal that a bacend process\n may exhaust the RAM to such an extent ?Hi Marc,Can you post the server logs?regards,Ranier Vilela", "msg_date": "Thu, 3 Mar 2022 08:00:38 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "Em qui., 3 de mar. de 2022 às 05:59, Marc Rechté <[email protected]> escreveu:\n>\n> Hello,\n>\n> We have a pg_restore which fails due to RAM over-consumption of\n> the corresponding PG backend, which ends-up with OOM killer.\n>\n> The table has one PK, one index, and 3 FK constraints, active\n> while restoring.\n> The dump contains over 200M rows for that table and is in custom\n> format, which corresponds to 37 GB of total relation size in the\n> original DB.\n>\n> While importing, one can see the RSS + swap increasing linearly\n> for the backend (executing the COPY)\n>\n> On my machine (quite old PC), it failed after 16 hours, while the\n> disk usage was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n>\n> If we do the same test, suppressing firstly the 5 constraints on\n> the table, the restore takes less than 15 minutes !\n>\n> This was tested on both PG 14.2 and PG 13.6 (linux 64-bit machines).\n>\n> It there a memory leak or that is normal that a bacend process may\n> exhaust the RAM to such an extent ?\n>\n> Hi Marc,\n> Can you post the server logs?\n>\n> regards,\n> Ranier Vilela\n\nWill it help ?\n\n2022-02-25 12:01:29.306 GMT [1468:24] user=,db=,app=,client= LOG:  \nserver process (PID 358995) was terminated by signal 9: Killed\n2022-02-25 12:01:29.306 GMT [1468:25] user=,db=,app=,client= DETAIL:  \nFailed process was running: COPY simulations_ecarts_relatifs_saison \n(idpoint, annee, saison, idreferentiel, ecartreltav, ecartreltnav, \necartreltxav, ecartreltrav, ecartreltxq90, ecartreltxq10, ecartreltnq10, \necartreltnq90, ecartreltxnd, ecartreltnnd, ecartreltnht, ecartreltxhwd, \necartreltncwd, ecartreltnfd, ecartreltxfd, ecartrelsd, ecartreltr, \necartrelhdd, ecartrelcdd, ecartrelpav, ecartrelpint, ecartrelrr, \necartrelpfl90, ecartrelrr1mm, ecartrelpxcwd, ecartrelpn20mm, \necartrelpxcdd, ecartrelhusav, ecartreltx35, ecartrelpq90, ecartrelpq99, \necartrelrr99, ecartrelffav, ecartrelff3, ecartrelffq98, ecartrelff98) \nFROM stdin;\n\n2022-02-25 12:01:29.306 GMT [1468:26] user=,db=,app=,client= LOG: \nterminating any other active server processes\n2022-02-25 12:01:29.311 GMT [1468:27] user=,db=,app=,client= LOG: all \nserver processes terminated; reinitializing\n2022-02-25 12:01:29.311 GMT [1468:27] user=,db=,app=,client= LOG: all \nserver processes 
terminated; reinitializing\n2022-02-25 12:01:29.326 GMT [360309:1] user=,db=,app=,client= LOG:  \ndatabase system was interrupted; last known up at 2022-02-25 12:01:12 GMT\n2022-02-25 12:01:29.362 GMT [360310:1] \nuser=[unknown],db=[unknown],app=[unknown],client=[local] LOG: connection \nreceived: host=[local]\n2022-02-25 12:01:29.363 GMT [360310:2] \nuser=postgres,db=drias,app=[unknown],client=[local] FATAL:  the database \nsystem is in recovery mode\n2022-02-25 12:01:29.365 GMT [360309:2] user=,db=,app=,client= LOG:  \ndatabase system was not properly shut down; automatic recovery in progress\n2022-02-25 12:01:29.367 GMT [360309:3] user=,db=,app=,client= LOG:  redo \nstarts at C3/1E0D31F0\n2022-02-25 12:01:40.845 GMT [360309:4] user=,db=,app=,client= LOG:  redo \ndone at C3/6174BC00 system usage: CPU: user: 4.15 s, system: 1.40 s, \nelapsed: 11.47 s\n2022-02-25 12:01:40.847 GMT [360309:5] user=,db=,app=,client= LOG:  \ncheckpoint starting: end-of-recovery immediate\n2022-02-25 12:01:41.806 GMT [360309:6] user=,db=,app=,client= LOG:  \ncheckpoint complete: wrote 125566 buffers (100.0%); 0 WAL file(s) added, \n54 removed, 13 recycled; write=0.915 s, sync=0.001 s, total=0.960 s; \nsync files=10, longest=0.001 s, average=0.001 s; distance=1104355 kB, \nestimate=1104355 kB\n2022-02-25 12:01:41.810 GMT [1468:28] user=,db=,app=,client= LOG: \ndatabase system is ready to accept connections\n\n\n\n", "msg_date": "Thu, 3 Mar 2022 13:18:59 +0100", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "Em qui., 3 de mar. de 2022 às 09:19, Marc Rechté <[email protected]> escreveu:\n\n> Em qui., 3 de mar. de 2022 às 05:59, Marc Rechté <[email protected]>\n> escreveu:\n> >\n> > Hello,\n> >\n> > We have a pg_restore which fails due to RAM over-consumption of\n> > the corresponding PG backend, which ends-up with OOM killer.\n> >\n> > The table has one PK, one index, and 3 FK constraints, active\n> > while restoring.\n> > The dump contains over 200M rows for that table and is in custom\n> > format, which corresponds to 37 GB of total relation size in the\n> > original DB.\n> >\n> > While importing, one can see the RSS + swap increasing linearly\n> > for the backend (executing the COPY)\n> >\n> > On my machine (quite old PC), it failed after 16 hours, while the\n> > disk usage was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n> >\n> > If we do the same test, suppressing firstly the 5 constraints on\n> > the table, the restore takes less than 15 minutes !\n> >\n> > This was tested on both PG 14.2 and PG 13.6 (linux 64-bit machines).\n> >\n> > It there a memory leak or that is normal that a bacend process may\n> > exhaust the RAM to such an extent ?\n> >\n> > Hi Marc,\n> > Can you post the server logs?\n> >\n> > regards,\n> > Ranier Vilela\n>\n> Will it help ?\n>\nShow some direction.\n\n\n> 2022-02-25 12:01:29.306 GMT [1468:24] user=,db=,app=,client= LOG:\n> server process (PID 358995) was terminated by signal 9: Killed\n> 2022-02-25 12:01:29.306 GMT [1468:25] user=,db=,app=,client= DETAIL:\n> Failed process was running: COPY simulations_ecarts_relatifs_saison\n> (idpoint, annee, saison, idreferentiel, ecartreltav, ecartreltnav,\n> ecartreltxav, ecartreltrav, ecartreltxq90, ecartreltxq10, ecartreltnq10,\n> ecartreltnq90, ecartreltxnd, ecartreltnnd, ecartreltnht, ecartreltxhwd,\n> ecartreltncwd, ecartreltnfd, ecartreltxfd, ecartrelsd, ecartreltr,\n> ecartrelhdd, ecartrelcdd, ecartrelpav, ecartrelpint, 
ecartrelrr,\n> ecartrelpfl90, ecartrelrr1mm, ecartrelpxcwd, ecartrelpn20mm,\n> ecartrelpxcdd, ecartrelhusav, ecartreltx35, ecartrelpq90, ecartrelpq99,\n> ecartrelrr99, ecartrelffav, ecartrelff3, ecartrelffq98, ecartrelff98)\n> FROM stdin;\n>\nCOPY leak?\n\nregards,\nRanier Vilela\n\nEm qui., 3 de mar. de 2022 às 09:19, Marc Rechté <[email protected]> escreveu:Em qui., 3 de mar. de 2022 às 05:59, Marc Rechté <[email protected]> escreveu:\n>\n>     Hello,\n>\n>     We have a pg_restore which fails due to RAM over-consumption of\n>     the corresponding PG backend, which ends-up with OOM killer.\n>\n>     The table has one PK, one index, and 3 FK constraints, active\n>     while restoring.\n>     The dump contains over 200M rows for that table and is in custom\n>     format, which corresponds to 37 GB of total relation size in the\n>     original DB.\n>\n>     While importing, one can see the RSS + swap increasing linearly\n>     for the backend (executing the COPY)\n>\n>     On my machine (quite old PC), it failed after 16 hours, while the\n>     disk usage was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n>\n>     If we do the same test, suppressing firstly the 5 constraints on\n>     the table, the restore takes less than 15 minutes !\n>\n>     This was tested on both PG 14.2 and PG 13.6 (linux 64-bit machines).\n>\n>     It there a memory leak or that is normal that a bacend process may\n>     exhaust the RAM to such an extent ?\n>\n> Hi Marc,\n> Can you post the server logs?\n>\n> regards,\n> Ranier Vilela\n\nWill it help ?Show some direction. \n\n2022-02-25 12:01:29.306 GMT [1468:24] user=,db=,app=,client= LOG:  \nserver process (PID 358995) was terminated by signal 9: Killed\n2022-02-25 12:01:29.306 GMT [1468:25] user=,db=,app=,client= DETAIL:  \nFailed process was running: COPY simulations_ecarts_relatifs_saison \n(idpoint, annee, saison, idreferentiel, ecartreltav, ecartreltnav, \necartreltxav, ecartreltrav, ecartreltxq90, ecartreltxq10, ecartreltnq10, \necartreltnq90, ecartreltxnd, ecartreltnnd, ecartreltnht, ecartreltxhwd, \necartreltncwd, ecartreltnfd, ecartreltxfd, ecartrelsd, ecartreltr, \necartrelhdd, ecartrelcdd, ecartrelpav, ecartrelpint, ecartrelrr, \necartrelpfl90, ecartrelrr1mm, ecartrelpxcwd, ecartrelpn20mm, \necartrelpxcdd, ecartrelhusav, ecartreltx35, ecartrelpq90, ecartrelpq99, \necartrelrr99, ecartrelffav, ecartrelff3, ecartrelffq98, ecartrelff98) \nFROM stdin;COPY leak?regards,Ranier Vilela", "msg_date": "Thu, 3 Mar 2022 09:22:09 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "On Thu, Mar 03, 2022 at 09:59:03AM +0100, Marc Recht� wrote:\n> Hello,\n> \n> We have a pg_restore which fails due to RAM over-consumption of the\n> corresponding PG backend, which ends-up with OOM killer.\n> \n> The table has one PK, one index, and 3 FK constraints, active while restoring.\n\nSend the schema for the table, index, and constraints (\\d in psql).\n\nWhat are the server settings ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nWhat OS/version ?\n\n> The dump contains over 200M rows for that table and is in custom format,\n> which corresponds to 37 GB of total relation size in the original DB.\n> \n> While importing, one can see the RSS + swap increasing linearly for the\n> backend (executing the COPY)\n> \n> On my machine (quite old PC), it failed after 16 hours, while the disk usage\n> was reaching 26 GB and memory usage was 9.1g (RSS+swap)\n\n\n", "msg_date": 
"Thu, 3 Mar 2022 08:46:13 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n> We have a pg_restore which fails due to RAM over-consumption of the \n> corresponding PG backend, which ends-up with OOM killer.\n> The table has one PK, one index, and 3 FK constraints, active while \n> restoring.\n> The dump contains over 200M rows for that table and is in custom format, \n> which corresponds to 37 GB of total relation size in the original DB.\n\nThe FKs would result in queueing row trigger events, which would occupy\nsome memory. But those should only need ~12 bytes per FK per row,\nwhich works out to less than 10GB for this number of rows, so it may\nbe that you've hit something else that we would consider a leak.\n\nDoes memory consumption hold steady if you drop the FK constraints?\n\nIf not, as others have noted, we'd need more info to investigate\nthis. The leak is probably independent of the specific data in\nthe table, so maybe you could make a small self-contained example\nusing a script to generate dummy data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Mar 2022 10:31:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "Le 03/03/2022 à 16:31, Tom Lane a écrit :\n> =?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n>> We have a pg_restore which fails due to RAM over-consumption of the\n>> corresponding PG backend, which ends-up with OOM killer.\n>> The table has one PK, one index, and 3 FK constraints, active while\n>> restoring.\n>> The dump contains over 200M rows for that table and is in custom format,\n>> which corresponds to 37 GB of total relation size in the original DB.\n> The FKs would result in queueing row trigger events, which would occupy\n> some memory. But those should only need ~12 bytes per FK per row,\n> which works out to less than 10GB for this number of rows, so it may\n> be that you've hit something else that we would consider a leak.\n>\n> Does memory consumption hold steady if you drop the FK constraints?\n>\n> If not, as others have noted, we'd need more info to investigate\n> this. The leak is probably independent of the specific data in\n> the table, so maybe you could make a small self-contained example\n> using a script to generate dummy data.\n>\n> \t\t\tregards, tom lane\n>\n>\nActually the number of rows is 232735712.\n\nAccordingly the RAM consumption would be x12 x3 = 7.8 GiB.\n\nThis is close to the 8,1g I reported earlier (actually it was closer to \n7.8 GB, due to GiB vs. GB confusion).\n\nSo there is no memory leak.\n\nIt took 16 hours on my box to reach that RAM consumption, and then the \nCOPY failed when checking the first FK (as the referenced table was empty).\n\nI dropped the FK, index, and 3 FK constraints and started over the \npg_restore:\n\n11 minutes to load the table (I did not have time to note RAM consumption)\n\nI then created the PK and index:\n\n24 minutes\n\nFor FK, I don't know because the referenced table are empty (but I'll be \nable to test next week, if deemed necessary).\n\n16 hours vs. 
35 minutes to reach the same state.\n\nThis is the data structure:\n\n=================\n\n--\n-- Name: simulations_ecarts_relatifs_saison; Type: TABLE; Schema: \ndonnees2019; Owner: drias; Tablespace:\n--\n\nCREATE TABLE simulations_ecarts_relatifs_saison (\n     idpoint integer NOT NULL,\n     annee integer NOT NULL,\n     saison integer NOT NULL,\n     idreferentiel integer NOT NULL,\n     ecartreltav real,\n     ecartreltnav real,\n     ecartreltxav real,\n     ecartreltrav real,\n     ecartreltxq90 real,\n     ecartreltxq10 real,\n     ecartreltnq10 real,\n     ecartreltnq90 real,\n     ecartreltxnd smallint,\n     ecartreltnnd smallint,\n     ecartreltnht smallint,\n     ecartreltxhwd smallint,\n     ecartreltncwd smallint,\n     ecartreltnfd smallint,\n     ecartreltxfd smallint,\n     ecartrelsd smallint,\n     ecartreltr smallint,\n     ecartrelhdd real,\n     ecartrelcdd real,\n     ecartrelpav real,\n     ecartrelpint real,\n     ecartrelrr real,\n     ecartrelpfl90 real,\n     ecartrelrr1mm real,\n     ecartrelpxcwd smallint,\n     ecartrelpn20mm smallint,\n     ecartrelpxcdd smallint,\n     ecartrelhusav real,\n     ecartreltx35 real,\n     ecartrelpq90 real,\n     ecartrelpq99 real,\n     ecartrelrr99 real,\n     ecartrelffav real,\n     ecartrelff3 real,\n     ecartrelffq98 real,\n     ecartrelff98 real\n);\n\n--\n-- Name: pk_simulations_ecarts_relatifs_saison_2019; Type: CONSTRAINT; \nSchema: donnees2019; Owner: drias; Tablespace:\n--\n\nALTER TABLE ONLY simulations_ecarts_relatifs_saison\n     ADD CONSTRAINT pk_simulations_ecarts_relatifs_saison_2019 PRIMARY \nKEY (idpoint, annee, saison, idreferentiel);\n\n--\n-- Name: i_expe_annee_saison_simulations_ecarts_relatifs_saison_2019; \nType: INDEX; Schema: donnees2019; Owner: drias; Tablespace:\n--\n\nCREATE INDEX i_expe_annee_saison_simulations_ecarts_relatifs_saison_2019 \nON simulations_ecarts_relatifs_saison USING btree (idreferentiel, annee, \nsaison);\n\n--\n-- Name: fk_id_point_ecarts_relatifs_saison_2019; Type: FK CONSTRAINT; \nSchema: donnees2019; Owner: drias\n--\n\nALTER TABLE ONLY simulations_ecarts_relatifs_saison\n     ADD CONSTRAINT fk_id_point_ecarts_relatifs_saison_2019 FOREIGN KEY \n(idpoint) REFERENCES grilles.points_grille(id);\n\n\n--\n-- Name: fk_id_referentiel_ecarts_relatifs_saison_2019; Type: FK \nCONSTRAINT; Schema: donnees2019; Owner: drias\n--\n\nALTER TABLE ONLY simulations_ecarts_relatifs_saison\n     ADD CONSTRAINT fk_id_referentiel_ecarts_relatifs_saison_2019 \nFOREIGN KEY (idreferentiel) REFERENCES \nreferentiel.referentiel_simulations(id);\n\n--\n-- Name: fk_saison_ecarts_relatifs_saison_2019; Type: FK CONSTRAINT; \nSchema: donnees2019; Owner: drias\n--\n\nALTER TABLE ONLY simulations_ecarts_relatifs_saison\n     ADD CONSTRAINT fk_saison_ecarts_relatifs_saison_2019 FOREIGN KEY \n(saison) REFERENCES donnees.liste_saison(code_saison);\n\nThis is how is init / started the test instance:\n\n=============================\n\n$ initdb -D $MYDIR\n$ pg_ctl -D $MYDIR -o \"-p 5432 -c unix_socket_directories=. 
-c \nshared_buffers=981MB -c work_mem=20MB -c maintenance_work_mem=98MB\" start\n\n\n\n", "msg_date": "Thu, 3 Mar 2022 19:32:31 +0100", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n> Le 03/03/2022 à 16:31, Tom Lane a écrit :\n>> Does memory consumption hold steady if you drop the FK constraints?\n\n> Actually the number of rows is 232735712.\n> Accordingly the RAM consumption would be x12 x3 = 7.8 GiB.\n> This is close to the 8,1g I reported earlier (actually it was closer to \n> 7.8 GB, due to GiB vs. GB confusion).\n\n> So there is no memory leak.\n\n> It took 16 hours on my box to reach that RAM consumption, and then the \n> COPY failed when checking the first FK (as the referenced table was empty).\n\nI'm guessing it was swapping like mad :-(\n\nWe've long recommended dropping FK constraints during bulk data loads,\nand then re-establishing them later. That's a lot cheaper than retail\nvalidity checks, even without the memory-consumption angle. Ideally\nthat sort of behavior would be automated, but nobody's gotten that\ndone yet. (pg_restore does do it like that during a full restore,\nbut not for a data-only restore, so I guess you were doing the latter.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Mar 2022 13:43:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "Le 03/03/2022 à 19:43, Tom Lane a écrit :\n> =?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n>> Le 03/03/2022 à 16:31, Tom Lane a écrit :\n>>> Does memory consumption hold steady if you drop the FK constraints?\n>> Actually the number of rows is 232735712.\n>> Accordingly the RAM consumption would be x12 x3 = 7.8 GiB.\n>> This is close to the 8,1g I reported earlier (actually it was closer to\n>> 7.8 GB, due to GiB vs. GB confusion).\n>> So there is no memory leak.\n>> It took 16 hours on my box to reach that RAM consumption, and then the\n>> COPY failed when checking the first FK (as the referenced table was empty).\n> I'm guessing it was swapping like mad :-(\n>\n> We've long recommended dropping FK constraints during bulk data loads,\n> and then re-establishing them later. That's a lot cheaper than retail\n> validity checks, even without the memory-consumption angle. Ideally\n> that sort of behavior would be automated, but nobody's gotten that\n> done yet. (pg_restore does do it like that during a full restore,\n> but not for a data-only restore, so I guess you were doing the latter.)\n>\n> \t\t\tregards, tom lane\n>\n>\nDid the test without the 3 FK, but with PK and index:\n\nI took 9.5 hours and consumed 1GB of RAM (vs. 16 hours and 8 GB).\n\nThanks you for the explanations.\n\nI  assume there is currently no GUC to limit RAM consumption of a backend ?\n\nMarc\n\n\n\n\n", "msg_date": "Sat, 5 Mar 2022 09:56:59 +0100", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OOM killer while pg_restore" }, { "msg_contents": "Em qui., 3 de mar. 
de 2022 às 15:32, Marc Rechté <[email protected]> escreveu:\n\n> Le 03/03/2022 à 16:31, Tom Lane a écrit :\n> > =?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n> >> We have a pg_restore which fails due to RAM over-consumption of the\n> >> corresponding PG backend, which ends-up with OOM killer.\n> >> The table has one PK, one index, and 3 FK constraints, active while\n> >> restoring.\n> >> The dump contains over 200M rows for that table and is in custom format,\n> >> which corresponds to 37 GB of total relation size in the original DB.\n> > The FKs would result in queueing row trigger events, which would occupy\n> > some memory. But those should only need ~12 bytes per FK per row,\n> > which works out to less than 10GB for this number of rows, so it may\n> > be that you've hit something else that we would consider a leak.\n> >\n> > Does memory consumption hold steady if you drop the FK constraints?\n> >\n> > If not, as others have noted, we'd need more info to investigate\n> > this. The leak is probably independent of the specific data in\n> > the table, so maybe you could make a small self-contained example\n> > using a script to generate dummy data.\n> >\n> > regards, tom lane\n> >\n> >\n> Actually the number of rows is 232735712.\n>\n> Accordingly the RAM consumption would be x12 x3 = 7.8 GiB.\n>\n> This is close to the 8,1g I reported earlier (actually it was closer to\n> 7.8 GB, due to GiB vs. GB confusion).\n>\n> So there is no memory leak.\n>\n> It took 16 hours on my box to reach that RAM consumption, and then the\n> COPY failed when checking the first FK (as the referenced table was empty).\n>\n> I dropped the FK, index, and 3 FK constraints and started over the\n> pg_restore:\n>\n> 11 minutes to load the table (I did not have time to note RAM consumption)\n>\n> I then created the PK and index:\n>\n> 24 minutes\n>\n> For FK, I don't know because the referenced table are empty (but I'll be\n> able to test next week, if deemed necessary).\n>\n> 16 hours vs. 35 minutes to reach the same state.\n>\nMaybe it's out of reach, but one way to help Postgres developers fix this\nis to provide Flame Graphs [1] based on these slow operations.\nFor confidentiality and privacy reasons, the data is out of reach.\n\nMy 2c here.\n\nregards,\nRanier Vilela\n[1] https://www.brendangregg.com/flamegraphs.html\n\nEm qui., 3 de mar. de 2022 às 15:32, Marc Rechté <[email protected]> escreveu:Le 03/03/2022 à 16:31, Tom Lane a écrit :\n> =?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]> writes:\n>> We have a pg_restore which fails due to RAM over-consumption of the\n>> corresponding PG backend, which ends-up with OOM killer.\n>> The table has one PK, one index, and 3 FK constraints, active while\n>> restoring.\n>> The dump contains over 200M rows for that table and is in custom format,\n>> which corresponds to 37 GB of total relation size in the original DB.\n> The FKs would result in queueing row trigger events, which would occupy\n> some memory.  But those should only need ~12 bytes per FK per row,\n> which works out to less than 10GB for this number of rows, so it may\n> be that you've hit something else that we would consider a leak.\n>\n> Does memory consumption hold steady if you drop the FK constraints?\n>\n> If not, as others have noted, we'd need more info to investigate\n> this.  
The leak is probably independent of the specific data in\n> the table, so maybe you could make a small self-contained example\n> using a script to generate dummy data.\n>\n>                       regards, tom lane\n>\n>\nActually the number of rows is 232735712.\n\nAccordingly the RAM consumption would be x12 x3 = 7.8 GiB.\n\nThis is close to the 8,1g I reported earlier (actually it was closer to \n7.8 GB, due to GiB vs. GB confusion).\n\nSo there is no memory leak.\n\nIt took 16 hours on my box to reach that RAM consumption, and then the \nCOPY failed when checking the first FK (as the referenced table was empty).\n\nI dropped the FK, index, and 3 FK constraints and started over the \npg_restore:\n\n11 minutes to load the table (I did not have time to note RAM consumption)\n\nI then created the PK and index:\n\n24 minutes\n\nFor FK, I don't know because the referenced table are empty (but I'll be \nable to test next week, if deemed necessary).\n\n16 hours vs. 35 minutes to reach the same state.Maybe it's out of reach, but one way to help Postgres developers fix thisis to provide Flame Graphs [1] based on these slow operations.For confidentiality and privacy reasons, the data is out of reach.My 2c here.regards,Ranier Vilela[1] https://www.brendangregg.com/flamegraphs.html", "msg_date": "Sat, 5 Mar 2022 09:08:45 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM killer while pg_restore" } ]
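A sketch of the drop-and-recreate pattern recommended in the thread above for a data-only restore. The constraint definitions are the ones posted earlier in the thread; the pg_restore invocation and dump file name are hypothetical placeholders.

-- before the data-only restore: drop the foreign keys so COPY does not
-- queue per-row after-trigger events for every inserted row
alter table simulations_ecarts_relatifs_saison
    drop constraint fk_id_point_ecarts_relatifs_saison_2019,
    drop constraint fk_id_referentiel_ecarts_relatifs_saison_2019,
    drop constraint fk_saison_ecarts_relatifs_saison_2019;

-- load the data, e.g.:
--   pg_restore --data-only --table=simulations_ecarts_relatifs_saison -d drias dump.custom

-- afterwards, re-create each foreign key; validation then runs as one
-- set-oriented check over the table instead of millions of queued events
alter table simulations_ecarts_relatifs_saison
    add constraint fk_id_point_ecarts_relatifs_saison_2019
    foreign key (idpoint) references grilles.points_grille(id);
alter table simulations_ecarts_relatifs_saison
    add constraint fk_id_referentiel_ecarts_relatifs_saison_2019
    foreign key (idreferentiel) references referentiel.referentiel_simulations(id);
alter table simulations_ecarts_relatifs_saison
    add constraint fk_saison_ecarts_relatifs_saison_2019
    foreign key (saison) references donnees.liste_saison(code_saison);

If the blocking validation step is a concern on a busy system, each constraint can instead be added with NOT VALID and checked afterwards with ALTER TABLE ... VALIDATE CONSTRAINT, which uses a weaker lock during the check.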
[ { "msg_contents": "Hi,\nOne of the service layer app is inserting Millions of records in a table\nbut one row at a time. Although COPY is the fastest way to import a file in\na table. Application has a requirement of processing a row and inserting it\ninto a table. Is there any way this INSERT can be tuned by increasing\nparameters? It is taking almost 10 hours for just 2.2 million rows in a\ntable. Table does not have any indexes or triggers.\n\nRegards,\nAditya.\n\nHi,One of the service layer app is inserting Millions of records in a table but one row at a time. Although COPY is the fastest way to import a file in a table. Application has a requirement of processing a row and inserting it into a table. Is there any way this INSERT can be tuned by increasing parameters? It is taking almost 10 hours for just 2.2 million rows in a table. Table does not have any indexes or triggers.Regards,Aditya.", "msg_date": "Sat, 5 Mar 2022 00:01:52 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Any way to speed up INSERT INTO" }, { "msg_contents": "On Sat, Mar 5, 2022 at 12:01:52AM +0530, aditya desai wrote:\n> Hi,\n> One of the service layer app is inserting Millions of records in a table but\n> one row at a time. Although COPY is the fastest way to import a file in a\n> table. Application has a requirement of processing a row and inserting it into\n> a table. Is there any way this INSERT can be tuned by increasing parameters? It\n> is taking almost 10 hours for just 2.2 million rows in a table. Table does not\n> have any indexes or triggers.\n\nWell, sections 14.4 and 14.5 might help:\n\n\thttps://www.postgresql.org/docs/14/performance-tips.html\n\nYour time seems very slow --- are the rows very wide?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Mar 2022 13:38:51 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "aditya desai <[email protected]> writes:\n> One of the service layer app is inserting Millions of records in a table\n> but one row at a time. Although COPY is the fastest way to import a file in\n> a table. Application has a requirement of processing a row and inserting it\n> into a table. Is there any way this INSERT can be tuned by increasing\n> parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> table. Table does not have any indexes or triggers.\n\nUsing a prepared statement for the INSERT would help a little bit.\nWhat would help more, if you don't expect any insertion failures,\nis to group multiple inserts per transaction (ie put BEGIN ... COMMIT\naround each batch of 100 or 1000 or so insertions). There's not\ngoing to be any magic bullet that lets you get away without changing\nthe app, though.\n\nIt's quite possible that network round trip costs are a big chunk of your\nproblem, in which case physically grouping multiple rows into each INSERT\ncommand (... or COPY ...) is the only way to fix it. But I'd start with\ntrying to reduce the transaction commit overhead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Mar 2022 13:42:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "Hi Bruce,\nCorrect rows are wider. 
One of the columns is text and one is bytea.\n\nRegards,\nAditya.\n\nOn Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian <[email protected]> wrote:\n\n> On Sat, Mar 5, 2022 at 12:01:52AM +0530, aditya desai wrote:\n> > Hi,\n> > One of the service layer app is inserting Millions of records in a table\n> but\n> > one row at a time. Although COPY is the fastest way to import a file in a\n> > table. Application has a requirement of processing a row and inserting\n> it into\n> > a table. Is there any way this INSERT can be tuned by increasing\n> parameters? It\n> > is taking almost 10 hours for just 2.2 million rows in a table. Table\n> does not\n> > have any indexes or triggers.\n>\n> Well, sections 14.4 and 14.5 might help:\n>\n> https://www.postgresql.org/docs/14/performance-tips.html\n>\n> Your time seems very slow --- are the rows very wide?\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nHi Bruce,Correct rows are wider. One of the columns is text and one is bytea.Regards,Aditya.On Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian <[email protected]> wrote:On Sat, Mar  5, 2022 at 12:01:52AM +0530, aditya desai wrote:\n> Hi,\n> One of the service layer app is inserting Millions of records in a table but\n> one row at a time. Although COPY is the fastest way to import a file in a\n> table. Application has a requirement of processing a row and inserting it into\n> a table. Is there any way this INSERT can be tuned by increasing parameters? It\n> is taking almost 10 hours for just 2.2 million rows in a table. Table does not\n> have any indexes or triggers.\n\nWell, sections 14.4 and 14.5 might help:\n\n        https://www.postgresql.org/docs/14/performance-tips.html\n\nYour time seems very slow --- are the rows very wide?\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.", "msg_date": "Sat, 5 Mar 2022 00:12:40 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "On Fri, Mar 4, 2022 at 01:42:39PM -0500, Tom Lane wrote:\n> aditya desai <[email protected]> writes:\n> > One of the service layer app is inserting Millions of records in a table\n> > but one row at a time. Although COPY is the fastest way to import a file in\n> > a table. Application has a requirement of processing a row and inserting it\n> > into a table. Is there any way this INSERT can be tuned by increasing\n> > parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> > table. Table does not have any indexes or triggers.\n> \n> Using a prepared statement for the INSERT would help a little bit.\n\nYeah, I thought about that but it seems it would only minimally help.\n\n> What would help more, if you don't expect any insertion failures,\n> is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n> around each batch of 100 or 1000 or so insertions). 
There's not\n> going to be any magic bullet that lets you get away without changing\n> the app, though.\n\nYeah, he/she could insert via multiple rows too:\n\n\tCREATE TABLE test (x int);\n\tINSERT INTO test VALUES (1), (2), (3);\n\t\n> It's quite possible that network round trip costs are a big chunk of your\n> problem, in which case physically grouping multiple rows into each INSERT\n> command (... or COPY ...) is the only way to fix it. But I'd start with\n> trying to reduce the transaction commit overhead.\n\nAgreed, turning off synchronous_commit for that those queries would be\nmy first approach.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Mar 2022 13:47:35 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "Hi, \n\nOn March 4, 2022 10:42:39 AM PST, Tom Lane <[email protected]> wrote:\n>aditya desai <[email protected]> writes:\n>> One of the service layer app is inserting Millions of records in a table\n>> but one row at a time. Although COPY is the fastest way to import a file in\n>> a table. Application has a requirement of processing a row and inserting it\n>> into a table. Is there any way this INSERT can be tuned by increasing\n>> parameters? It is taking almost 10 hours for just 2.2 million rows in a\n>> table. Table does not have any indexes or triggers.\n>\n>Using a prepared statement for the INSERT would help a little bit.\n>What would help more, if you don't expect any insertion failures,\n>is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n>around each batch of 100 or 1000 or so insertions). There's not\n>going to be any magic bullet that lets you get away without changing\n>the app, though.\n>\n>It's quite possible that network round trip costs are a big chunk of your\n>problem, in which case physically grouping multiple rows into each INSERT\n>command (... or COPY ...) is the only way to fix it. But I'd start with\n>trying to reduce the transaction commit overhead.\n\nPipelining could also help.\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 04 Mar 2022 10:52:28 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "De: Andres Freund<mailto:[email protected]>\nEnviado:sexta-feira, 4 de março de 2022 15:52\nPara: [email protected]<mailto:[email protected]>; Tom Lane<mailto:[email protected]>; aditya desai<mailto:[email protected]>\nCc:Pgsql Performance<mailto:[email protected]>\nAssunto: Re: Any way to speed up INSERT INTO\n\nHi,\n\nOn March 4, 2022 10:42:39 AM PST, Tom Lane <[email protected]> wrote:\n>aditya desai <[email protected]> writes:\n>> One of the service layer app is inserting Millions of records in a table\n>> but one row at a time. Although COPY is the fastest way to import a file in\n>> a table. Application has a requirement of processing a row and inserting it\n>> into a table. Is there any way this INSERT can be tuned by increasing\n>> parameters? It is taking almost 10 hours for just 2.2 million rows in a\n>> table. 
Table does not have any indexes or triggers.\n>\n>Using a prepared statement for the INSERT would help a little bit.\n>What would help more, if you don't expect any insertion failures,\n>is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n>around each batch of 100 or 1000 or so insertions). There's not\n>going to be any magic bullet that lets you get away without changing\n>the app, though.\n>\n>It's quite possible that network round trip costs are a big chunk of your\n>problem, in which case physically grouping multiple rows into each INSERT\n>command (... or COPY ...) is the only way to fix it. But I'd start with\n>trying to reduce the transaction commit overhead.\n\nPipelining could also help.\n--\nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\nSorry for disturbing – I had similar problem with storing logs for e-commerce service mesh producing millions of records per day; to not loose anything, I do record every log records in Apache ActiveMQ Artemis, and then another microservice collects data from MQ and store in PostgreSQL. Since we have logs in waves, ActiveMQ Artemis reduces the “impedance” between systems.\nJust my 2c.\n\nRegards,\n\nER.\n\n\n\n\n\n\n\n\n\n \n\nDe: Andres Freund\nEnviado:sexta-feira, 4 de março de 2022 15:52\nPara: [email protected];\nTom Lane; \naditya desai\nCc:Pgsql Performance\nAssunto: Re: Any way to speed up INSERT INTO\n\n \nHi, \n\nOn March 4, 2022 10:42:39 AM PST, Tom Lane <[email protected]> wrote:\n>aditya desai <[email protected]> writes:\n>> One of the service layer app is inserting Millions of records in a table\n>> but one row at a time. Although COPY is the fastest way to import a file in\n>> a table. Application has a requirement of processing a row and inserting it\n>> into a table. Is there any way this INSERT can be tuned by increasing\n>> parameters? It is taking almost 10 hours for just 2.2 million rows in a\n>> table. Table does not have any indexes or triggers.\n>\n>Using a prepared statement for the INSERT would help a little bit.\n>What would help more, if you don't expect any insertion failures,\n>is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n>around each batch of 100 or 1000 or so insertions).  There's not\n>going to be any magic bullet that lets you get away without changing\n>the app, though.\n>\n>It's quite possible that network round trip costs are a big chunk of your\n>problem, in which case physically grouping multiple rows into each INSERT\n>command (... or COPY ...) is the only way to fix it.  But I'd start with\n>trying to reduce the transaction commit overhead.\n\nPipelining could also help.\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n \nSorry for disturbing – I had similar problem with storing logs for e-commerce service mesh producing millions of records per day; to not loose anything, I do record every log records in Apache ActiveMQ Artemis, and then another microservice\n collects data from MQ and store in PostgreSQL. Since we have logs in waves, ActiveMQ Artemis reduces the “impedance” between systems.\nJust my 2c.\n \nRegards,\n \nER.", "msg_date": "Fri, 4 Mar 2022 20:04:42 +0000", "msg_from": "Edson Richter <[email protected]>", "msg_from_op": false, "msg_subject": "RES: Any way to speed up INSERT INTO" }, { "msg_contents": "> Correct rows are wider. 
One of the columns is text and one is bytea.\n\nwith the PG14 the LZ4 compression is worth checking.\n\nvia\nhttps://www.postgresql.fastware.com/blog/what-is-the-new-lz4-toast-compression-in-postgresql-14\n\n\n\n\n*\"\"\"INSERT statements with 16 clientsAnother common scenario that I tested\nwas accessing the database from multiple clients - 16 in this case.What I\nfound out, as can be seen below, is that compression performance of single\nlarge files (HTML, English text, source code, executable binary, pictures)\nusing LZ4 was 60% to 70% faster compared to PGLZ, and that there was also a\nsmall improvement while inserting multiple small files (PostgreSQL\ndocument).*\n*\"\"\"*\n\nkind regards,\n Imre\n\naditya desai <[email protected]> ezt írta (időpont: 2022. márc. 4., P,\n19:42):\n\n> Hi Bruce,\n> Correct rows are wider. One of the columns is text and one is bytea.\n>\n> Regards,\n> Aditya.\n>\n> On Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian <[email protected]> wrote:\n>\n>> On Sat, Mar 5, 2022 at 12:01:52AM +0530, aditya desai wrote:\n>> > Hi,\n>> > One of the service layer app is inserting Millions of records in a\n>> table but\n>> > one row at a time. Although COPY is the fastest way to import a file in\n>> a\n>> > table. Application has a requirement of processing a row and inserting\n>> it into\n>> > a table. Is there any way this INSERT can be tuned by increasing\n>> parameters? It\n>> > is taking almost 10 hours for just 2.2 million rows in a table. Table\n>> does not\n>> > have any indexes or triggers.\n>>\n>> Well, sections 14.4 and 14.5 might help:\n>>\n>> https://www.postgresql.org/docs/14/performance-tips.html\n>>\n>> Your time seems very slow --- are the rows very wide?\n>>\n>> --\n>> Bruce Momjian <[email protected]> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>>\n\n> Correct rows are wider. One of the columns is text and one is bytea.with the PG14 the LZ4 compression is worth checking.via https://www.postgresql.fastware.com/blog/what-is-the-new-lz4-toast-compression-in-postgresql-14\"\"\"INSERT statements with 16 clientsAnother common scenario that I tested was accessing the database from multiple clients - 16 in this case.What I found out, as can be seen below, is that compression performance of single large files (HTML, English text, source code, executable binary, pictures) using LZ4 was 60% to 70% faster compared to PGLZ, and that there was also a small improvement while inserting multiple small files (PostgreSQL document).\"\"\"kind regards,  Imreaditya desai <[email protected]> ezt írta (időpont: 2022. márc. 4., P, 19:42):Hi Bruce,Correct rows are wider. One of the columns is text and one is bytea.Regards,Aditya.On Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian <[email protected]> wrote:On Sat, Mar  5, 2022 at 12:01:52AM +0530, aditya desai wrote:\n> Hi,\n> One of the service layer app is inserting Millions of records in a table but\n> one row at a time. Although COPY is the fastest way to import a file in a\n> table. Application has a requirement of processing a row and inserting it into\n> a table. Is there any way this INSERT can be tuned by increasing parameters? It\n> is taking almost 10 hours for just 2.2 million rows in a table. 
Table does not\n> have any indexes or triggers.\n\nWell, sections 14.4 and 14.5 might help:\n\n        https://www.postgresql.org/docs/14/performance-tips.html\n\nYour time seems very slow --- are the rows very wide?\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.", "msg_date": "Sat, 5 Mar 2022 02:22:13 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "Thanks all for your inputs. We will try to implement inserts in single\ntransaction. I feel that is the best approach.\n\nThanks,\nAD.\n\nOn Saturday, March 5, 2022, Bruce Momjian <[email protected]> wrote:\n\n> On Fri, Mar 4, 2022 at 01:42:39PM -0500, Tom Lane wrote:\n> > aditya desai <[email protected]> writes:\n> > > One of the service layer app is inserting Millions of records in a\n> table\n> > > but one row at a time. Although COPY is the fastest way to import a\n> file in\n> > > a table. Application has a requirement of processing a row and\n> inserting it\n> > > into a table. Is there any way this INSERT can be tuned by increasing\n> > > parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> > > table. Table does not have any indexes or triggers.\n> >\n> > Using a prepared statement for the INSERT would help a little bit.\n>\n> Yeah, I thought about that but it seems it would only minimally help.\n>\n> > What would help more, if you don't expect any insertion failures,\n> > is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n> > around each batch of 100 or 1000 or so insertions). There's not\n> > going to be any magic bullet that lets you get away without changing\n> > the app, though.\n>\n> Yeah, he/she could insert via multiple rows too:\n>\n> CREATE TABLE test (x int);\n> INSERT INTO test VALUES (1), (2), (3);\n>\n> > It's quite possible that network round trip costs are a big chunk of your\n> > problem, in which case physically grouping multiple rows into each INSERT\n> > command (... or COPY ...) is the only way to fix it. But I'd start with\n> > trying to reduce the transaction commit overhead.\n>\n> Agreed, turning off synchronous_commit for that those queries would be\n> my first approach.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nThanks all for your inputs. We will try to implement inserts in single transaction. I feel that is the best approach.Thanks,AD.On Saturday, March 5, 2022, Bruce Momjian <[email protected]> wrote:On Fri, Mar  4, 2022 at 01:42:39PM -0500, Tom Lane wrote:\n> aditya desai <[email protected]> writes:\n> > One of the service layer app is inserting Millions of records in a table\n> > but one row at a time. Although COPY is the fastest way to import a file in\n> > a table. Application has a requirement of processing a row and inserting it\n> > into a table. Is there any way this INSERT can be tuned by increasing\n> > parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> > table. 
Table does not have any indexes or triggers.\n> \n> Using a prepared statement for the INSERT would help a little bit.\n\nYeah, I thought about that but it seems it would only minimally help.\n\n> What would help more, if you don't expect any insertion failures,\n> is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n> around each batch of 100 or 1000 or so insertions).  There's not\n> going to be any magic bullet that lets you get away without changing\n> the app, though.\n\nYeah, he/she could insert via multiple rows too:\n\n        CREATE TABLE test (x int);\n        INSERT INTO test VALUES (1), (2), (3);\n        \n> It's quite possible that network round trip costs are a big chunk of your\n> problem, in which case physically grouping multiple rows into each INSERT\n> command (... or COPY ...) is the only way to fix it.  But I'd start with\n> trying to reduce the transaction commit overhead.\n\nAgreed, turning off synchronous_commit for that those queries would be\nmy first approach.\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.", "msg_date": "Sat, 5 Mar 2022 12:32:59 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "Hi Tom,\nI added BEGIN and COMMIT as shown below around insert and executed it from\npgadmin for 100,000 rows. It ran in just 1 min.\n\nBEGIN;\nINSERT INTO TABLE VALUES(....);\nINSERT INTO TABLE VALUES(....);\n.\n,\nCOMMIT;\n\nHowever when I run above from psql by passing it to psql(As shown below) as\na file. It still takes a lot of time. Am I doing anything wrong? How can I\nrun this from pgadmin within a minute?\n\npsql -h host -U user -p Port -d database < INSERT_FILE.sql\n\nPSQL is still printing as below.\nINSERT 0 1\nINSERT 0 1\n\n\nRegards,\nAditya.\n\n\nOn Sat, Mar 5, 2022 at 12:12 AM Tom Lane <[email protected]> wrote:\n\n> aditya desai <[email protected]> writes:\n> > One of the service layer app is inserting Millions of records in a table\n> > but one row at a time. Although COPY is the fastest way to import a file\n> in\n> > a table. Application has a requirement of processing a row and inserting\n> it\n> > into a table. Is there any way this INSERT can be tuned by increasing\n> > parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> > table. Table does not have any indexes or triggers.\n>\n> Using a prepared statement for the INSERT would help a little bit.\n> What would help more, if you don't expect any insertion failures,\n> is to group multiple inserts per transaction (ie put BEGIN ... COMMIT\n> around each batch of 100 or 1000 or so insertions). There's not\n> going to be any magic bullet that lets you get away without changing\n> the app, though.\n>\n> It's quite possible that network round trip costs are a big chunk of your\n> problem, in which case physically grouping multiple rows into each INSERT\n> command (... or COPY ...) is the only way to fix it. But I'd start with\n> trying to reduce the transaction commit overhead.\n>\n> regards, tom lane\n>\n\nHi Tom,I added BEGIN and COMMIT as shown below around insert and executed it from pgadmin for 100,000 rows. It ran in just 1 min.BEGIN;INSERT INTO TABLE VALUES(....);INSERT INTO TABLE VALUES(....);.,COMMIT;However when I run above from psql by passing it to psql(As shown below) as a file. 
It still takes a lot of time. Am I doing anything wrong? How can I run this from pgadmin within a minute?psql -h host -U user -p Port -d database < INSERT_FILE.sqlPSQL is still printing as below.INSERT 0 1INSERT 0 1Regards,Aditya.On Sat, Mar 5, 2022 at 12:12 AM Tom Lane <[email protected]> wrote:aditya desai <[email protected]> writes:\n> One of the service layer app is inserting Millions of records in a table\n> but one row at a time. Although COPY is the fastest way to import a file in\n> a table. Application has a requirement of processing a row and inserting it\n> into a table. Is there any way this INSERT can be tuned by increasing\n> parameters? It is taking almost 10 hours for just 2.2 million rows in a\n> table. Table does not have any indexes or triggers.\n\nUsing a prepared statement for the INSERT would help a little bit.\nWhat would help more, if you don't expect any insertion failures,\nis to group multiple inserts per transaction (ie put BEGIN ... COMMIT\naround each batch of 100 or 1000 or so insertions).  There's not\ngoing to be any magic bullet that lets you get away without changing\nthe app, though.\n\nIt's quite possible that network round trip costs are a big chunk of your\nproblem, in which case physically grouping multiple rows into each INSERT\ncommand (... or COPY ...) is the only way to fix it.  But I'd start with\ntrying to reduce the transaction commit overhead.\n\n                        regards, tom lane", "msg_date": "Tue, 8 Mar 2022 18:36:17 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "On Tue, Mar 8, 2022 at 06:36:17PM +0530, aditya desai wrote:\n> Hi Tom,\n> I added BEGIN and COMMIT as shown below around insert and executed it from\n> pgadmin for 100,000 rows. It ran in just 1 min.\n> \n> BEGIN;\n> INSERT INTO TABLE VALUES(....);\n> INSERT INTO TABLE VALUES(....);\n> .\n> ,\n> COMMIT;\n> \n> However when I run above from psql by passing it to psql(As shown below) as a\n> file. It still takes a lot of time. Am I doing anything wrong? How can I run\n> this from pgadmin within a minute?\n> \n> psql -h host -U user -p Port -d database < INSERT_FILE.sql\n> \n> PSQL is still printing as below.\n> INSERT 0 1\n> INSERT 0 1\n\nUh, they should be the same. You can turn on log_statement=all on the\nserver and look at what queries are being issued in each case.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 8 Mar 2022 09:53:49 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to speed up INSERT INTO" }, { "msg_contents": "Ok Will check. But from pgadmin it takes 1min and by psql it is taking 20\nmins for 100,000 rows with BEGIN; COMMIT;\n\nThanks,\nAditya.\n\nOn Tue, Mar 8, 2022 at 8:23 PM Bruce Momjian <[email protected]> wrote:\n\n> On Tue, Mar 8, 2022 at 06:36:17PM +0530, aditya desai wrote:\n> > Hi Tom,\n> > I added BEGIN and COMMIT as shown below around insert and executed it\n> from\n> > pgadmin for 100,000 rows. It ran in just 1 min.\n> >\n> > BEGIN;\n> > INSERT INTO TABLE VALUES(....);\n> > INSERT INTO TABLE VALUES(....);\n> > .\n> > ,\n> > COMMIT;\n> >\n> > However when I run above from psql by passing it to psql(As shown below)\n> as a\n> > file. It still takes a lot of time. Am I doing anything wrong? 
How can I\n> run\n> > this from pgadmin within a minute?\n> >\n> > psql -h host -U user -p Port -d database < INSERT_FILE.sql\n> >\n> > PSQL is still printing as below.\n> > INSERT 0 1\n> > INSERT 0 1\n>\n> Uh, they should be the same. You can turn on log_statement=all on the\n> server and look at what queries are being issued in each case.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nOk Will check. But from pgadmin it takes 1min and by psql it is taking 20 mins for 100,000 rows with BEGIN; COMMIT;Thanks,Aditya.On Tue, Mar 8, 2022 at 8:23 PM Bruce Momjian <[email protected]> wrote:On Tue, Mar  8, 2022 at 06:36:17PM +0530, aditya desai wrote:\n> Hi Tom,\n> I added BEGIN and COMMIT as shown below around insert and executed it from\n> pgadmin for 100,000 rows. It ran in just 1 min.\n> \n> BEGIN;\n> INSERT INTO TABLE VALUES(....);\n> INSERT INTO TABLE VALUES(....);\n> .\n> ,\n> COMMIT;\n> \n> However when I run above from psql by passing it to psql(As shown below) as a\n> file. It still takes a lot of time. Am I doing anything wrong? How can I run\n> this from pgadmin within a minute?\n> \n> psql -h host -U user -p Port -d database < INSERT_FILE.sql\n> \n> PSQL is still printing as below.\n> INSERT 0 1\n> INSERT 0 1\n\nUh, they should be the same.  You can turn on log_statement=all on the\nserver and look at what queries are being issued in each case.\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.", "msg_date": "Wed, 9 Mar 2022 11:39:55 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any way to speed up INSERT INTO" } ]
[ { "msg_contents": "My boss asked me to upgrade one of the development  databases from 13.5 \n--> 14.2. One thing that we've noticed right away is that XA \ntransactions (2-phase commit) are much slower on 14.2 than on 13.5. Were \nthere any significant changes to the XA protocol in the version 14? Did \nanybody else encountered this problem?\n\nWhen I say \"XA transactions are much slower\", I mean that commit and/or \nrollback take much longer. The SQL execution takes the same and the \nplans are identical to the 13.5 version. The application code is the \nsame, using IBM WebSphere 9.0.4.\n\nRegards\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nMy boss asked me to upgrade one of the development  databases\n from 13.5 --> 14.2. One thing that we've noticed right away is\n that XA transactions (2-phase commit) are much slower on 14.2 than\n on 13.5. Were there any significant changes to the XA protocol in\n the version 14? Did anybody else encountered this problem?\nWhen I say \"XA transactions are much slower\", I mean that commit\n and/or rollback take much longer. The SQL execution takes the same\n and the plans are identical to the 13.5 version. The application\n code is the same, using IBM WebSphere 9.0.4.\n\n Regards\n -- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Fri, 4 Mar 2022 21:33:01 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "XA transactions much slower on 14.2 than on 13.5" }, { "msg_contents": "Mladen Gogala <[email protected]> writes:\n> My boss asked me to upgrade one of the development  databases from 13.5 \n> --> 14.2. One thing that we've noticed right away is that XA \n> transactions (2-phase commit) are much slower on 14.2 than on 13.5. Were \n> there any significant changes to the XA protocol in the version 14? Did \n> anybody else encountered this problem?\n\nThere were a bunch of changes around the 2PC code to support logical\nreplication of 2PC transactions, but I don't think they should have\nmade for any particular performance difference in non-replicated\nservers. Can you put together a self-contained test case that\ndemonstrates what you're seeing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Mar 2022 21:44:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XA transactions much slower on 14.2 than on 13.5" } ]
[ { "msg_contents": "Hi everybody!\n\nI have a big application running on premise. One of my main database\nservers has the following configuration:\n\n72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n1TB of ram or 786GB (5 servers at all)\nA huge storage( I don't know for sure what kind is, but is very powerful)\n\nA consulting company recommended the following configuration for theses\nmain servers(let me know if something important was left behind):\n\nmaxx_connections = 2000\nshared_buffers = 32GB\ntemp_buffers = 1024\nmax_prepared_transactions = 3000\nwork_men = 32MB\neffective_io_concurrency = 200\nmax_worker_processes = 24\ncheckpoint_timeout = 15min\nmax_wal_size = 64GB\nmin_wall_size = 2GB\neffective_cache_size = 96GB\n(...)\n\nI Think this is too low memory setting for de size of server... The number\nof connections, I'm still measuring to reduce this value( I think it's too\nhigh for the needs of application, but untill hit a value too high to\njustfy any memory issue, I think is not a problem)\n\nMy current problem:\n\nunder heavyload, i'm getting \"connection closed\" on the application\nlevel(java-jdbc, jboss ds)\n\nThe server never spikes more the 200GB of used ram(that's why I thing the\nconfiguration is too low)\n\nThis is the output of free command:\n\n[image: image.png]\n\nThanks in advance!\n\n\nFelipph", "msg_date": "Mon, 7 Mar 2022 08:51:24 -0300", "msg_from": "Luiz Felipph <[email protected]>", "msg_from_op": true, "msg_subject": "Optimal configuration for server" }, { "msg_contents": "Em seg., 7 de mar. de 2022 às 08:54, Luiz Felipph <[email protected]>\nescreveu:\n\n> Hi everybody!\n>\n> I have a big application running on premise. One of my main database\n> servers has the following configuration:\n>\n> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n> 1TB of ram or 786GB (5 servers at all)\n> A huge storage( I don't know for sure what kind is, but is very powerful)\n>\n> A consulting company recommended the following configuration for theses\n> main servers(let me know if something important was left behind):\n>\n> maxx_connections = 2000\n> shared_buffers = 32GB\n> temp_buffers = 1024\n> max_prepared_transactions = 3000\n> work_men = 32MB\n> effective_io_concurrency = 200\n> max_worker_processes = 24\n> checkpoint_timeout = 15min\n> max_wal_size = 64GB\n> min_wall_size = 2GB\n> effective_cache_size = 96GB\n> (...)\n>\n> I Think this is too low memory setting for de size of server... The number\n> of connections, I'm still measuring to reduce this value( I think it's too\n> high for the needs of application, but untill hit a value too high to\n> justfy any memory issue, I think is not a problem)\n>\n> My current problem:\n>\n> under heavyload, i'm getting \"connection closed\" on the application\n> level(java-jdbc, jboss ds)\n>\nServer logs?\nWhat OS (version)\nWhat Postgres version.\nKeep-alive may not be configured at the client side?\n\nregards,\nRanier Vilela\n\n>\n\nEm seg., 7 de mar. de 2022 às 08:54, Luiz Felipph <[email protected]> escreveu:Hi everybody!I have a big application running on premise. 
One of my main database servers has the following configuration:72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 62401TB of ram or 786GB (5 servers at all)A huge storage( I don't know for sure what kind is, but is very powerful)A consulting company recommended the following configuration for theses main servers(let me know if something important was left behind):maxx_connections = 2000shared_buffers = 32GBtemp_buffers = 1024max_prepared_transactions = 3000work_men = 32MBeffective_io_concurrency = 200max_worker_processes = 24checkpoint_timeout = 15minmax_wal_size = 64GBmin_wall_size = 2GBeffective_cache_size = 96GB(...)I Think this is too low memory setting for de size of server... The number of connections, I'm still measuring to reduce this value( I think it's too high for the needs of application, but untill hit a value too high to justfy any memory issue, I think is not a problem)My current problem:under heavyload, i'm getting \"connection closed\" on the application level(java-jdbc, jboss ds)Server logs?What OS (version)What Postgres version.Keep-alive may not be configured at the client side?regards,Ranier Vilela", "msg_date": "Mon, 7 Mar 2022 13:51:35 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Greatings Ranieri,\n\nServer logs I need ask to someone to get it\n\nRedhat EL 7\n\nPostgres 12\n\nHumm.. I will find out were I should put keep Alive setting\n\n\n\n\n\nEm seg., 7 de mar. de 2022 13:51, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em seg., 7 de mar. de 2022 às 08:54, Luiz Felipph <[email protected]>\n> escreveu:\n>\n>> Hi everybody!\n>>\n>> I have a big application running on premise. One of my main database\n>> servers has the following configuration:\n>>\n>> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n>> 1TB of ram or 786GB (5 servers at all)\n>> A huge storage( I don't know for sure what kind is, but is very powerful)\n>>\n>> A consulting company recommended the following configuration for theses\n>> main servers(let me know if something important was left behind):\n>>\n>> maxx_connections = 2000\n>> shared_buffers = 32GB\n>> temp_buffers = 1024\n>> max_prepared_transactions = 3000\n>> work_men = 32MB\n>> effective_io_concurrency = 200\n>> max_worker_processes = 24\n>> checkpoint_timeout = 15min\n>> max_wal_size = 64GB\n>> min_wall_size = 2GB\n>> effective_cache_size = 96GB\n>> (...)\n>>\n>> I Think this is too low memory setting for de size of server... The\n>> number of connections, I'm still measuring to reduce this value( I think\n>> it's too high for the needs of application, but untill hit a value too high\n>> to justfy any memory issue, I think is not a problem)\n>>\n>> My current problem:\n>>\n>> under heavyload, i'm getting \"connection closed\" on the application\n>> level(java-jdbc, jboss ds)\n>>\n> Server logs?\n> What OS (version)\n> What Postgres version.\n> Keep-alive may not be configured at the client side?\n>\n> regards,\n> Ranier Vilela\n>\n>>\n\nGreatings Ranieri,Server logs I need ask to someone to get itRedhat EL 7Postgres 12Humm.. I will find out were I should put keep Alive settingEm seg., 7 de mar. de 2022 13:51, Ranier Vilela <[email protected]> escreveu:Em seg., 7 de mar. de 2022 às 08:54, Luiz Felipph <[email protected]> escreveu:Hi everybody!I have a big application running on premise. 
One of my main database servers has the following configuration:72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 62401TB of ram or 786GB (5 servers at all)A huge storage( I don't know for sure what kind is, but is very powerful)A consulting company recommended the following configuration for theses main servers(let me know if something important was left behind):maxx_connections = 2000shared_buffers = 32GBtemp_buffers = 1024max_prepared_transactions = 3000work_men = 32MBeffective_io_concurrency = 200max_worker_processes = 24checkpoint_timeout = 15minmax_wal_size = 64GBmin_wall_size = 2GBeffective_cache_size = 96GB(...)I Think this is too low memory setting for de size of server... The number of connections, I'm still measuring to reduce this value( I think it's too high for the needs of application, but untill hit a value too high to justfy any memory issue, I think is not a problem)My current problem:under heavyload, i'm getting \"connection closed\" on the application level(java-jdbc, jboss ds)Server logs?What OS (version)What Postgres version.Keep-alive may not be configured at the client side?regards,Ranier Vilela", "msg_date": "Mon, 7 Mar 2022 14:17:53 -0300", "msg_from": "Luiz Felipph <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Em seg., 7 de mar. de 2022 às 14:18, Luiz Felipph <[email protected]>\nescreveu:\n\n> Greatings Ranieri,\n>\n> Server logs I need ask to someone to get it\n>\n> Redhat EL 7\n>\n> Postgres 12\n>\n> Humm.. I will find out were I should put keep Alive setting\n>\nAre you using nested connections?\n\nregards,\nRanier Vilela\n\n>\n\nEm seg., 7 de mar. de 2022 às 14:18, Luiz Felipph <[email protected]> escreveu:Greatings Ranieri,Server logs I need ask to someone to get itRedhat EL 7Postgres 12Humm.. I will find out were I should put keep Alive settingAre you using nested connections?regards,Ranier Vilela", "msg_date": "Mon, 7 Mar 2022 14:39:49 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "\n\nOn 3/7/22 12:51, Luiz Felipph wrote:\n> Hi everybody!\n> \n> I have a big application running on premise. One of my main database\n> servers has the following configuration:\n> \n> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n> 1TB of ram or 786GB (5 servers at all)\n> A huge storage( I don't know for sure what kind is, but is very powerful)\n> \n> A consulting company recommended the following configuration for theses\n> main servers(let me know if something important was left behind):\n> \n> maxx_connections = 2000\n> shared_buffers = 32GB\n> temp_buffers = 1024\n> max_prepared_transactions = 3000\n> work_men = 32MB\n> effective_io_concurrency = 200\n> max_worker_processes = 24\n> checkpoint_timeout = 15min\n> max_wal_size = 64GB\n> min_wall_size = 2GB\n> effective_cache_size = 96GB\n> (...)\n> \n> I Think this is too low memory setting for de size of server... The\n> number of connections, I'm still measuring to reduce this value( I think\n> it's too high for the needs of application, but untill hit a value too\n> high to justfy any memory issue, I think is not a problem)\n> \n\nHard to judge, not knowing your workload. 
We don't know what information\nwas provided to the consulting company, you'll have to ask them for\njustification of the values they recommended.\n\nI'd say it looks OK, but max_connections/max_prepared_transactions are\nrather high, considering you only have 72 threads. But it depends ...\n\n> My current problem:\n> \n> under heavyload, i'm getting \"connection closed\" on the application\n> level(java-jdbc, jboss ds)\n> \n\nMost likely a java/jboss connection pool config. The database won't just\narbitrarily close connections (unless there are timeouts set, but you\nhaven't included any such info).\n\n> The server never spikes more the 200GB of used ram(that's why I thing\n> the configuration is too low)\n> \n\nUnlikely. If needed, the system would use memory for page cache, to\ncache filesystem data. So most likely this is due to the database not\nbeing large enough to need more memory.\n\nYou're optimizing the wrong thing - the goal is not to use as much\nmemory as possible. The goal is to give good performance given the\navailable amount of memory.\n\nYou need to monitor shared buffers cache hit rate (from pg_stat_database\nview) - if that's low, increase shared buffers. Then monitor and tune\nslow queries - if a slow query benefits from higher work_mem values, do\nincrease that value. It's nonsense to just increase the parameters to\nconsume more memory.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Mar 2022 19:07:46 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Hi Tomas,\n\nThank you for your reply!\n\nThomas,\n\n> You need to monitor shared buffers cache hit rate (from pg_stat_database\n> view) - if that's low, increase shared buffers. Then monitor and tune\n> slow queries - if a slow query benefits from higher work_mem values, do\n> increase that value. It's nonsense to just increase the parameters to\n> consume more memory.\n\n\nMakes perfect sense! The system is a OLTP and unfortunately has some issues\nabout how big the single lines are(too many colunms). In some cases I have\nto bring to app 150k lines(in some not so rare cases, 200k ~300k) to\nprocess in a single transaction, then update and insert new rows. It's\nworks fine, except when eventually start to outOfMemory or Connection has\nbeen closed forcing us to restart the application cluster. Finally I'll\nhave access to a performance environment to see how is configured(they\npromised me a production mirror) and then get back to you to provide more\ndetailed information.\n\nThanks for you time!\n\nRanier,\n\n> Are you using nested connections?\n\n\nWhat do you mean with \"nested connections\"? If you are talking about nested\ntransactions, then yes, and I'm aware of subtransaction problem but I think\nthis is not the case right now (we had, removed multiple points, some other\npoints we delivered to God's hands(joking), but know I don't see this issue)\n\n\nFelipph\n\n\nEm seg., 7 de mar. de 2022 às 15:07, Tomas Vondra <\[email protected]> escreveu:\n\n>\n>\n> On 3/7/22 12:51, Luiz Felipph wrote:\n> > Hi everybody!\n> >\n> > I have a big application running on premise. 
One of my main database\n> > servers has the following configuration:\n> >\n> > 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n> > 1TB of ram or 786GB (5 servers at all)\n> > A huge storage( I don't know for sure what kind is, but is very powerful)\n> >\n> > A consulting company recommended the following configuration for theses\n> > main servers(let me know if something important was left behind):\n> >\n> > maxx_connections = 2000\n> > shared_buffers = 32GB\n> > temp_buffers = 1024\n> > max_prepared_transactions = 3000\n> > work_men = 32MB\n> > effective_io_concurrency = 200\n> > max_worker_processes = 24\n> > checkpoint_timeout = 15min\n> > max_wal_size = 64GB\n> > min_wall_size = 2GB\n> > effective_cache_size = 96GB\n> > (...)\n> >\n> > I Think this is too low memory setting for de size of server... The\n> > number of connections, I'm still measuring to reduce this value( I think\n> > it's too high for the needs of application, but untill hit a value too\n> > high to justfy any memory issue, I think is not a problem)\n> >\n>\n> Hard to judge, not knowing your workload. We don't know what information\n> was provided to the consulting company, you'll have to ask them for\n> justification of the values they recommended.\n>\n> I'd say it looks OK, but max_connections/max_prepared_transactions are\n> rather high, considering you only have 72 threads. But it depends ...\n>\n> > My current problem:\n> >\n> > under heavyload, i'm getting \"connection closed\" on the application\n> > level(java-jdbc, jboss ds)\n> >\n>\n> Most likely a java/jboss connection pool config. The database won't just\n> arbitrarily close connections (unless there are timeouts set, but you\n> haven't included any such info).\n>\n> > The server never spikes more the 200GB of used ram(that's why I thing\n> > the configuration is too low)\n> >\n>\n> Unlikely. If needed, the system would use memory for page cache, to\n> cache filesystem data. So most likely this is due to the database not\n> being large enough to need more memory.\n>\n> You're optimizing the wrong thing - the goal is not to use as much\n> memory as possible. The goal is to give good performance given the\n> available amount of memory.\n>\n> You need to monitor shared buffers cache hit rate (from pg_stat_database\n> view) - if that's low, increase shared buffers. Then monitor and tune\n> slow queries - if a slow query benefits from higher work_mem values, do\n> increase that value. It's nonsense to just increase the parameters to\n> consume more memory.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi Tomas,Thank you for your reply!Thomas,  You need to monitor shared buffers cache hit rate (from pg_stat_databaseview) - if that's low, increase shared buffers. Then monitor and tuneslow queries - if a slow query benefits from higher work_mem values, doincrease that value. It's nonsense to just increase the parameters toconsume more memory.Makes perfect sense! The system is a OLTP and unfortunately has some issues about how big the single lines are(too many colunms). In some cases I have to bring to app 150k lines(in some not so rare cases, 200k ~300k) to process in a single transaction, then update and insert new rows. It's works fine, except when eventually start to outOfMemory or Connection has been closed forcing us to restart the application cluster. 
Finally I'll have access to a performance environment to see how is configured(they promised me a production mirror) and then get back to you to provide more detailed information.Thanks for you time!Ranier,Are you using nested connections?What do you mean with \"nested connections\"? If you are talking about nested transactions, then yes, and I'm aware of subtransaction problem but I think this is not the case right now (we had, removed multiple points, some other points we delivered to God's hands(joking), but know I don't see this issue)FelipphEm seg., 7 de mar. de 2022 às 15:07, Tomas Vondra <[email protected]> escreveu:\n\nOn 3/7/22 12:51, Luiz Felipph wrote:\n> Hi everybody!\n> \n> I have a big application running on premise. One of my main database\n> servers has the following configuration:\n> \n> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n> 1TB of ram or 786GB (5 servers at all)\n> A huge storage( I don't know for sure what kind is, but is very powerful)\n> \n> A consulting company recommended the following configuration for theses\n> main servers(let me know if something important was left behind):\n> \n> maxx_connections = 2000\n> shared_buffers = 32GB\n> temp_buffers = 1024\n> max_prepared_transactions = 3000\n> work_men = 32MB\n> effective_io_concurrency = 200\n> max_worker_processes = 24\n> checkpoint_timeout = 15min\n> max_wal_size = 64GB\n> min_wall_size = 2GB\n> effective_cache_size = 96GB\n> (...)\n> \n> I Think this is too low memory setting for de size of server... The\n> number of connections, I'm still measuring to reduce this value( I think\n> it's too high for the needs of application, but untill hit a value too\n> high to justfy any memory issue, I think is not a problem)\n> \n\nHard to judge, not knowing your workload. We don't know what information\nwas provided to the consulting company, you'll have to ask them for\njustification of the values they recommended.\n\nI'd say it looks OK, but max_connections/max_prepared_transactions are\nrather high, considering you only have 72 threads. But it depends ...\n\n> My current problem:\n> \n> under heavyload, i'm getting \"connection closed\" on the application\n> level(java-jdbc, jboss ds)\n> \n\nMost likely a java/jboss connection pool config. The database won't just\narbitrarily close connections (unless there are timeouts set, but you\nhaven't included any such info).\n\n> The server never spikes more the 200GB of used ram(that's why I thing\n> the configuration is too low)\n> \n\nUnlikely. If needed, the system would use memory for page cache, to\ncache filesystem data. So most likely this is due to the database not\nbeing large enough to need more memory.\n\nYou're optimizing the wrong thing - the goal is not to use as much\nmemory as possible. The goal is to give good performance given the\navailable amount of memory.\n\nYou need to monitor shared buffers cache hit rate (from pg_stat_database\nview) - if that's low, increase shared buffers. Then monitor and tune\nslow queries - if a slow query benefits from higher work_mem values, do\nincrease that value. It's nonsense to just increase the parameters to\nconsume more memory.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 7 Mar 2022 18:07:14 -0300", "msg_from": "Luiz Felipph <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Em seg., 7 de mar. 
de 2022 às 18:10, Luiz Felipph <[email protected]>\nescreveu:\n\n> Hi Tomas,\n>\n> Thank you for your reply!\n>\n> Thomas,\n>\n>> You need to monitor shared buffers cache hit rate (from pg_stat_database\n>> view) - if that's low, increase shared buffers. Then monitor and tune\n>> slow queries - if a slow query benefits from higher work_mem values, do\n>> increase that value. It's nonsense to just increase the parameters to\n>> consume more memory.\n>\n>\n> Makes perfect sense! The system is a OLTP and unfortunately has some\n> issues about how big the single lines are(too many colunms). In some cases\n> I have to bring to app 150k lines(in some not so rare cases, 200k ~300k) to\n> process in a single transaction, then update and insert new rows. It's\n> works fine, except when eventually start to outOfMemory or Connection has\n> been closed forcing us to restart the application cluster. Finally I'll\n> have access to a performance environment to see how is configured(they\n> promised me a production mirror) and then get back to you to provide more\n> detailed information.\n>\n> Thanks for you time!\n>\n> Ranier,\n>\n>> Are you using nested connections?\n>\n>\n> What do you mean with \"nested connections\"? If you are talking about\n> nested transactions, then yes, and I'm aware of subtransaction problem but\n> I think this is not the case right now (we had, removed multiple points,\n> some other points we delivered to God's hands(joking), but know I don't see\n> this issue)\n>\nI mean \"nested\", even.\nTwo or more connections opened by app.\nIf this is case, is need processing the second connection first,\nbefore the first connection.\n\nJust a guess.\n\nregards,\nRanier Vilela\n\nEm seg., 7 de mar. de 2022 às 18:10, Luiz Felipph <[email protected]> escreveu:Hi Tomas,Thank you for your reply!Thomas,  You need to monitor shared buffers cache hit rate (from pg_stat_databaseview) - if that's low, increase shared buffers. Then monitor and tuneslow queries - if a slow query benefits from higher work_mem values, doincrease that value. It's nonsense to just increase the parameters toconsume more memory.Makes perfect sense! The system is a OLTP and unfortunately has some issues about how big the single lines are(too many colunms). In some cases I have to bring to app 150k lines(in some not so rare cases, 200k ~300k) to process in a single transaction, then update and insert new rows. It's works fine, except when eventually start to outOfMemory or Connection has been closed forcing us to restart the application cluster. Finally I'll have access to a performance environment to see how is configured(they promised me a production mirror) and then get back to you to provide more detailed information.Thanks for you time!Ranier,Are you using nested connections?What do you mean with \"nested connections\"? 
If you are talking about nested transactions, then yes, and I'm aware of subtransaction problem but I think this is not the case right now (we had, removed multiple points, some other points we delivered to God's hands(joking), but know I don't see this issue)I mean \"nested\", even.Two or more connections opened by app.If this is case, is need processing the second connection first, before the first connection.Just a guess.regards,Ranier Vilela", "msg_date": "Mon, 7 Mar 2022 19:28:13 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "On Mon, Mar 07, 2022 at 08:51:24AM -0300, Luiz Felipph wrote:\n> My current problem:\n> \n> under heavyload, i'm getting \"connection closed\" on the application\n> level(java-jdbc, jboss ds)\n\nCould you check whether the server is crashing ?\n\nIf you run \"ps -fu postgres\", you can compare the start time (\"STIME\") of the\npostmaster parent process with that of the persistent, auxilliary, child\nprocesses like the checkpointer. If there was a crash, the checkpointer will\nhave restarted more recently than the parent process.\n\nThe SQL version of that is like:\nSELECT date_trunc('second', pg_postmaster_start_time() - backend_start) FROM pg_stat_activity ORDER BY 1 DESC LIMIT 1;\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Mar 2022 00:44:55 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Could you enable the connections logs and share the results when it is\nreproduced, please?\n\nIt generally shows the error code and message\n\nSo, you can double-confirm if it is because of KeepAlive configuration or\nsomething else\n\n-- \nMoisés López Calderón\nMobile: (+521) 477-752-22-30\nTwitter: @moylop260\nhangout: [email protected]\nhttp://www.vauxoo.com - Odoo Gold Partner\nTwitter: @vauxoo\n\nCould you enable the connections logs and share the results when it is reproduced, please?It generally shows the error code and messageSo, you can double-confirm if it is because of KeepAlive configuration or something else-- Moisés López CalderónMobile: (+521) 477-752-22-30Twitter: @moylop260hangout: [email protected]://www.vauxoo.com - Odoo Gold PartnerTwitter: @vauxoo", "msg_date": "Fri, 11 Mar 2022 20:00:20 -0500", "msg_from": "Moises Lopez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal configuration for server" }, { "msg_contents": "Hi,\n\n \n\nAnother point to verify is idle_in_transaction_session_timeout\n\nWhat is the value of this parameter?\n\n \n\nRegards\n\n \n\nMichel SALAIS\n\nDe : Luiz Felipph <[email protected]> \nEnvoyé : lundi 7 mars 2022 22:07\nÀ : Tomas Vondra <[email protected]>\nCc : Pgsql Performance <[email protected]>\nObjet : Re: Optimal configuration for server\n\n \n\nHi Tomas,\n\n \n\nThank you for your reply!\n\n \n\nThomas, \n\nYou need to monitor shared buffers cache hit rate (from pg_stat_database\nview) - if that's low, increase shared buffers. Then monitor and tune\nslow queries - if a slow query benefits from higher work_mem values, do\nincrease that value. It's nonsense to just increase the parameters to\nconsume more memory.\n\n \n\nMakes perfect sense! The system is a OLTP and unfortunately has some issues about how big the single lines are(too many colunms). 
In some cases I have to bring to app 150k lines(in some not so rare cases, 200k ~300k) to process in a single transaction, then update and insert new rows. It's works fine, except when eventually start to outOfMemory or Connection has been closed forcing us to restart the application cluster. Finally I'll have access to a performance environment to see how is configured(they promised me a production mirror) and then get back to you to provide more detailed information.\n\n \n\nThanks for you time!\n\n \n\nRanier,\n\nAre you using nested connections?\n\n \n\nWhat do you mean with \"nested connections\"? If you are talking about nested transactions, then yes, and I'm aware of subtransaction problem but I think this is not the case right now (we had, removed multiple points, some other points we delivered to God's hands(joking), but know I don't see this issue)\n\n \n\n \n\nFelipph\n\n \n\n \n\nEm seg., 7 de mar. de 2022 às 15:07, Tomas Vondra <[email protected] <mailto:[email protected]> > escreveu:\n\n\n\nOn 3/7/22 12:51, Luiz Felipph wrote:\n> Hi everybody!\n> \n> I have a big application running on premise. One of my main database\n> servers has the following configuration:\n> \n> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240\n> 1TB of ram or 786GB (5 servers at all)\n> A huge storage( I don't know for sure what kind is, but is very powerful)\n> \n> A consulting company recommended the following configuration for theses\n> main servers(let me know if something important was left behind):\n> \n> maxx_connections = 2000\n> shared_buffers = 32GB\n> temp_buffers = 1024\n> max_prepared_transactions = 3000\n> work_men = 32MB\n> effective_io_concurrency = 200\n> max_worker_processes = 24\n> checkpoint_timeout = 15min\n> max_wal_size = 64GB\n> min_wall_size = 2GB\n> effective_cache_size = 96GB\n> (...)\n> \n> I Think this is too low memory setting for de size of server... The\n> number of connections, I'm still measuring to reduce this value( I think\n> it's too high for the needs of application, but untill hit a value too\n> high to justfy any memory issue, I think is not a problem)\n> \n\nHard to judge, not knowing your workload. We don't know what information\nwas provided to the consulting company, you'll have to ask them for\njustification of the values they recommended.\n\nI'd say it looks OK, but max_connections/max_prepared_transactions are\nrather high, considering you only have 72 threads. But it depends ...\n\n> My current problem:\n> \n> under heavyload, i'm getting \"connection closed\" on the application\n> level(java-jdbc, jboss ds)\n> \n\nMost likely a java/jboss connection pool config. The database won't just\narbitrarily close connections (unless there are timeouts set, but you\nhaven't included any such info).\n\n> The server never spikes more the 200GB of used ram(that's why I thing\n> the configuration is too low)\n> \n\nUnlikely. If needed, the system would use memory for page cache, to\ncache filesystem data. So most likely this is due to the database not\nbeing large enough to need more memory.\n\nYou're optimizing the wrong thing - the goal is not to use as much\nmemory as possible. The goal is to give good performance given the\navailable amount of memory.\n\nYou need to monitor shared buffers cache hit rate (from pg_stat_database\nview) - if that's low, increase shared buffers. Then monitor and tune\nslow queries - if a slow query benefits from higher work_mem values, do\nincrease that value. 
It's nonsense to just increase the parameters to\nconsume more memory.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\nHi, Another point to verify is idle_in_transaction_session_timeoutWhat is the value of this parameter? Regards Michel SALAISDe : Luiz Felipph <[email protected]> Envoyé : lundi 7 mars 2022 22:07À : Tomas Vondra <[email protected]>Cc : Pgsql Performance <[email protected]>Objet : Re: Optimal configuration for server Hi Tomas, Thank you for your reply! Thomas,  You need to monitor shared buffers cache hit rate (from pg_stat_databaseview) - if that's low, increase shared buffers. Then monitor and tuneslow queries - if a slow query benefits from higher work_mem values, doincrease that value. It's nonsense to just increase the parameters toconsume more memory. Makes perfect sense! The system is a OLTP and unfortunately has some issues about how big the single lines are(too many colunms). In some cases I have to bring to app 150k lines(in some not so rare cases, 200k ~300k) to process in a single transaction, then update and insert new rows. It's works fine, except when eventually start to outOfMemory or Connection has been closed forcing us to restart the application cluster. Finally I'll have access to a performance environment to see how is configured(they promised me a production mirror) and then get back to you to provide more detailed information. Thanks for you time! Ranier,Are you using nested connections? What do you mean with \"nested connections\"? If you are talking about nested transactions, then yes, and I'm aware of subtransaction problem but I think this is not the case right now (we had, removed multiple points, some other points we delivered to God's hands(joking), but know I don't see this issue)  Felipph  Em seg., 7 de mar. de 2022 às 15:07, Tomas Vondra <[email protected]> escreveu:On 3/7/22 12:51, Luiz Felipph wrote:> Hi everybody!> > I have a big application running on premise. One of my main database> servers has the following configuration:> > 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240> 1TB of ram or 786GB (5 servers at all)> A huge storage( I don't know for sure what kind is, but is very powerful)> > A consulting company recommended the following configuration for theses> main servers(let me know if something important was left behind):> > maxx_connections = 2000> shared_buffers = 32GB> temp_buffers = 1024> max_prepared_transactions = 3000> work_men = 32MB> effective_io_concurrency = 200> max_worker_processes = 24> checkpoint_timeout = 15min> max_wal_size = 64GB> min_wall_size = 2GB> effective_cache_size = 96GB> (...)> > I Think this is too low memory setting for de size of server... The> number of connections, I'm still measuring to reduce this value( I think> it's too high for the needs of application, but untill hit a value too> high to justfy any memory issue, I think is not a problem)> Hard to judge, not knowing your workload. We don't know what informationwas provided to the consulting company, you'll have to ask them forjustification of the values they recommended.I'd say it looks OK, but max_connections/max_prepared_transactions arerather high, considering you only have 72 threads. But it depends ...> My current problem:> > under heavyload, i'm getting \"connection closed\" on the application> level(java-jdbc, jboss ds)> Most likely a java/jboss connection pool config. 
The database won't justarbitrarily close connections (unless there are timeouts set, but youhaven't included any such info).> The server never spikes more the 200GB of used ram(that's why I thing> the configuration is too low)> Unlikely. If needed, the system would use memory for page cache, tocache filesystem data. So most likely this is due to the database notbeing large enough to need more memory.You're optimizing the wrong thing - the goal is not to use as muchmemory as possible. The goal is to give good performance given theavailable amount of memory.You need to monitor shared buffers cache hit rate (from pg_stat_databaseview) - if that's low, increase shared buffers. Then monitor and tuneslow queries - if a slow query benefits from higher work_mem values, doincrease that value. It's nonsense to just increase the parameters toconsume more memory.regards-- Tomas VondraEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Wed, 16 Mar 2022 15:01:10 +0100", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Optimal configuration for server" } ]
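The monitoring suggested above does not need extra tooling; the shared-buffers hit rate can be read straight from pg_stat_database, and the server-side timeouts mentioned can be checked with SHOW. The queries below are a generic sketch, and any "healthy" threshold for the hit rate is workload-dependent rather than a rule.

    -- Cache hit ratio per database; persistently low values suggest
    -- shared_buffers (or the working set) deserves a closer look.
    SELECT datname,
           blks_hit,
           blks_read,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database
    WHERE blks_hit + blks_read > 0
    ORDER BY blks_read DESC;

    -- Server-side settings that can close sessions and look like dropped
    -- connections to the application:
    SHOW idle_in_transaction_session_timeout;
    SHOW statement_timeout;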
[ { "msg_contents": "Hello team,\n\nWhat is the unit of I/O Timings in explain (analyze, buffers) ? There is a\nplan with quite a few nodes. In each case, the value of I/O Timings is\nmuch more than the time for the outer node. A few lines from the plan -\n\n Hash Left Join (cost=14320945.22..7099974624.27 rows=194335062701\nwidth=5511) (actual time=107913.021..*108109*.313 rows=759 loops=1)\n Buffers: shared hit=738871 read=1549646, temp read=92710 written=92973\n I/O Timings: read=*228324*.357\n -> Hash Left Join (cost=14049069.69..246411189.41 rows=18342148438\nwidth=5467) (actual time=96579.630..*96774*.534 rows=759 loops=1)\n Buffers: shared hit=684314 read=1377851, temp read=92710\nwritten=92973\n I/O Timings: read=*217899*.233\nAt the end, there is\nExecution Time: 108117.006 ms\n\nSo it takes about 108 seconds. But the I/O Timings are higher.\n\nBest Regards,\nJay\n\nHello team,What is the unit of I/O Timings in explain (analyze, buffers) ? There is a plan with quite a few nodes.  In each case, the value of I/O Timings is much more than the time for the outer node. A few lines from the plan -  Hash Left Join  (cost=14320945.22..7099974624.27 rows=194335062701 width=5511) (actual time=107913.021..108109.313 rows=759 loops=1)   Buffers: shared hit=738871 read=1549646, temp read=92710 written=92973   I/O Timings: read=228324.357   ->  Hash Left Join  (cost=14049069.69..246411189.41 rows=18342148438 width=5467) (actual time=96579.630..96774.534 rows=759 loops=1)         Buffers: shared hit=684314 read=1377851, temp read=92710 written=92973         I/O Timings: read=217899.233At the end, there is Execution Time: 108117.006 msSo it takes about 108 seconds. But the I/O Timings are higher.Best Regards,Jay", "msg_date": "Thu, 10 Mar 2022 10:40:17 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "Explain analyse with track_io_timing" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 10, 2022 at 10:40:17AM +0530, Jayadevan M wrote:\n>\n> What is the unit of I/O Timings in explain (analyze, buffers) ?\n\nmilliseconds\n\n> There is a plan with quite a few nodes. In each case, the value of I/O\n> Timings is much more than the time for the outer node. A few lines from the\n> plan -\n>\n> Hash Left Join (cost=14320945.22..7099974624.27 rows=194335062701\n> width=5511) (actual time=107913.021..*108109*.313 rows=759 loops=1)\n> Buffers: shared hit=738871 read=1549646, temp read=92710 written=92973\n> I/O Timings: read=*228324*.357\n> -> Hash Left Join (cost=14049069.69..246411189.41 rows=18342148438\n> width=5467) (actual time=96579.630..*96774*.534 rows=759 loops=1)\n> Buffers: shared hit=684314 read=1377851, temp read=92710\n> written=92973\n> I/O Timings: read=*217899*.233\n> At the end, there is\n> Execution Time: 108117.006 ms\n>\n> So it takes about 108 seconds. But the I/O Timings are higher.\n\nIs it a parallel query? If yes the total time is only the time spent in the\nmain process, and the IO time is sum of all IO time spent in main process and\nthe parallel workers, which can obviously be a lot more than the total\nexecution time.\n\n\n", "msg_date": "Thu, 10 Mar 2022 14:05:28 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyse with track_io_timing" }, { "msg_contents": "Is it a parallel query? 
If yes the total time is only the time spent in the\n> main process, and the IO time is sum of all IO time spent in main process\n> and\n> the parallel workers, which can obviously be a lot more than the total\n> execution time.\n>\nYes, there are parallel workers, that explains it. Thank you.\nRegards,\nJay", "msg_date": "Thu, 10 Mar 2022 11:40:24 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain analyse with track_io_timing" } ]
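One way to see the same plan without the summed-over-workers effect is to disable parallelism for a single session and re-run the EXPLAIN. The snippet below is only illustrative: the final statement stands in for the original query, and changing track_io_timing usually requires a superuser session.

    -- track_io_timing must be enabled for the "I/O Timings" lines to appear.
    SET track_io_timing = on;                 -- usually needs superuser rights

    -- With no parallel workers, I/O Timings is no longer a sum across processes:
    SET max_parallel_workers_per_gather = 0;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;  -- placeholder query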
[ { "msg_contents": "Hello Team,\n\nThere is change in query plan in 12.4 version and Version 13 resulting in performance slowness post upgrade.\n\n\n * In 12.4 version, Sort Operation Group Aggregate is selected which results to Merge Join. Query takes ~5 seconds.\n * In 13.5 version, optimizer wrongly estimates and due to new Disk Based Hash Aggregate feature, it prefers Hash Aggregate instead of Sort Operation which finally blocks merge-join and chooses Nested Loop Left Join. Query takes ~5 minutes.\n\nWhen we increase work_mem to 23 MB, Disk Usage gets cleared from Query Plan but still Optimizer estimates Hash Aggregate-Nested Loop Left Join (compared to Sort-Merge Join) causing slowness. Query takes ~22 seconds.\n\nVersion 13 query plan has lower estimated cost than that of 12.4 which implies 13.5 planner thought it found a better plan, but it is running slower.\n\n12.4 Version:\n\"Merge Right Join (cost=202198.78..295729.10 rows=1 width=8) (actual time=1399.727..5224.574 rows=296 loops=1)\"\n\n13.5 version:-\n\"Nested Loop Left Join (cost=196360.90..287890.45 rows=1 width=8) (actual time=3209.577..371300.693 rows=296 loops=1)\"\n\n\n\nThanks & Regards,\n\nPrajna Shetty\nTechnical Specialist,\nData Platform Support & Delivery\n[cid:[email protected]]\n\n________________________________\n\nhttp://www.mindtree.com/email/disclaimer.html", "msg_date": "Mon, 21 Mar 2022 11:45:05 +0000", "msg_from": "Prajna Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue post upgrade on Version 13 - Incorrect Estimation\n Cost choosing Hash Aggregate-Nested Left Loop Join" }, { "msg_contents": "Prajna Shetty <[email protected]> writes:\n> There is change in query plan in 12.4 version and Version 13 resulting in performance slowness post upgrade.\n\nStandard upgrade methods don't transfer statistics from the old version,\nso the first question to ask is have you ANALYZE'd the relevant tables\nsince upgrading?\n\nIf you have, then to offer useful help with this we'll need to see all\nthe details described in\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIn any case, this is unlikely to be a bug. The pgsql-performance\nlist would be a more suitable place to discuss it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Mar 2022 09:59:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue post upgrade on Version 13 - Incorrect\n Estimation Cost choosing Hash Aggregate-Nested Left Loop Join" }, { "msg_contents": "++ [email protected]<https://www.postgresql.org/list/pgsql-performance/>\n\nHello Team,\n\nThere is change in query plan in 12.4 version and Version 13 resulting in performance slowness post upgrade.\n\n* In 12.4 version, Sort Operation Group Aggregate is selected which results to Merge Join. Query takes ~5 seconds.\n* In 13.5 version, optimizer wrongly estimates and due to new Disk Based Hash Aggregate feature, it prefers Hash Aggregate instead of Sort Operation which finally blocks merge-join and chooses Nested Loop Left Join. Query takes ~5 minutes.\n\n NOTE: Disabling Hash Aggregate on instance level forces optimizer to choose merge operation but such instance level modification is not possible in terms of Application Functionality.\n\nThis performance issue is on all over most of queries. 
Attached one of the query and its plan in both version for reference in case that helps for recreating the issue.\n\nVersion 13 query plan has lower estimated cost than that of 12.4 which implies 13.5 planner thought it found a better plan, but it is running slower and actual cost show more.\n\n12.4 Version:\n\"Merge Right Join (cost=202198.78..295729.10 rows=1 width=8) (actual time=1399.727..5224.574 rows=296 loops=1)\"\n\n13.5 version:-\n\"Nested Loop Left Join (cost=196360.90..287890.45 rows=1 width=8) (actual time=3209.577..371300.693 rows=296 loops=1)\"\n\n\nDetails:-\n1. It is AWS Aurora-Postgresql RDS instance. We have raised case with AWS and since this issue is a regression coming from the community PostgreSQL code, we would like to raise bug here.\n2. We were upgrading from 12.4 version to (13.4 and later)\n3. vCPU: 2 , RAM: 8 GB\n4. Attached Stats for all tables in this schema for your reference.\n\n5. Attached is metadata for one of the table person for your reference.\n\nWe have performed many such below steps, but it did not help:-\n\n1. We have performed Vacuum/Analyze/Reindex post Upgrade.\n2. Tweaked work_mem so it does not spill to Disk. We can Disk Usage But it is still using Hash Aggregate and came down from 5 minutes to 20 seconds. (Expected ~5 seconds). Attached plan after modifying work_mem\n\n3. Disabled Seqcan/ nestedloop\n4. Tweaked random_page_cost/seq_page_cost\n5. Set default_statistics_target=1000 and then run vacuum(analyze,verbose) on selected tables.\n6. We have also tested performance by increasing resources up to 4 vCPU and 32 GB RAM.\n\nCould you please check and confirm if this incorrect Cost Estimation is known concern in Version 13 where in some cases optimizer calculates and prefers Hash Aggregate==>Nested Left Loop Join instead of Merge Join?\n\n\n\nThanks & Regards,\n\nPrajna Shetty\nTechnical Specialist,\nData Platform Support & Delivery\n\n\n\n\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nSent: Monday, March 21, 2022 7:29 PM\nTo: Prajna Shetty <[email protected]>\nCc: [email protected]; Beenu Sharma <[email protected]>\nSubject: Re: Performance issue post upgrade on Version 13 - Incorrect Estimation Cost choosing Hash Aggregate-Nested Left Loop Join\n\n* This e-mail originated outside of Mindtree. Exercise caution before clicking links or opening attachments *\n\nPrajna Shetty <[email protected]<mailto:[email protected]>> writes:\n> There is change in query plan in 12.4 version and Version 13 resulting in performance slowness post upgrade.\n\nStandard upgrade methods don't transfer statistics from the old version, so the first question to ask is have you ANALYZE'd the relevant tables since upgrading?\n\nIf you have, then to offer useful help with this we'll need to see all the details described in\n\nhttps://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwiki.postgresql.org%2Fwiki%2FSlow_Query_Questions&amp;data=04%7C01%7CPrajna.Shetty%40mindtree.com%7C5ca04f6fdd7b452f51f508da0b42fc8e%7C85c997b9f49446b3a11d772983cf6f11%7C0%7C0%7C637834679772208865%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&amp;sdata=sx8OsD%2FpfdcSHV%2FUsm4Vtm7tadbZIugLFZaXfD7X%2BZc%3D&amp;reserved=0\n\nIn any case, this is unlikely to be a bug. 
The pgsql-performance list would be a more suitable place to discuss it.\n\n regards, tom lane\n\n\n\n ________________________________\n\nhttp://www.mindtree.com/email/disclaimer.html", "msg_date": "Tue, 22 Mar 2022 12:57:10 +0000", "msg_from": "Prajna Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue post upgrade on Version 13 - Incorrect Estimation\n Cost choosing Hash Aggregate-Nested Left Loop Join" }, { "msg_contents": "\n\nOn 3/22/22 13:57, Prajna Shetty wrote:\n> ++ [email protected]_\n> <https://www.postgresql.org/list/pgsql-performance/>\n>  \n> Hello Team,\n>  \n> There is change in query plan in 12.4 version and Version 13 resulting\n> in performance slowness post upgrade.\n>  \n> \n> * In 12.4 version, Sort Operation Group Aggregate is selected which\n> results to Merge Join. Query takes ~5 seconds.\n> * In 13.5 version, optimizer wrongly estimates and due to new Disk\n> Based Hash Aggregate feature, it prefers Hash Aggregate instead of\n> Sort Operation which finally blocks merge-join and chooses Nested\n> Loop Left Join. Query takes ~5 minutes.\n> \n>  \n> *_NOTE: _*Disabling Hash Aggregate on instance level forces optimizer to\n> choose merge operation but such instance level modification is not\n> possible in terms of Application Functionality.\n>  \n> This performance issue is on all over most of queries. Attached one of\n> the query and its plan in both version for reference in case that helps\n> for recreating the issue.\n>  \n\nIt's impossible to comment those other queries, but chances are the root\ncause is the same.\n\n> Version 13 query plan has lower estimated cost than that of 12.4 which\n> implies 13.5 planner thought it found a better plan, but it is running\n> slower and actual cost show more.\n>  \n> 12.4 Version:\n> \"Merge Right Join  (cost=*202198.78..295729.10* rows=1 width=8) (actual\n> time=1399.727..*5224.574* rows=296 loops=1)\"\n>  \n> 13.5 version:-\n> \"Nested Loop Left Join  (cost=*196360.90..287890.45* rows=1 width=8)\n> (actual time=3209.577..*371300.693* rows=296 loops=1)\"\n>  \n\nThis is not a costing issue, the problem is that we expect 1 row and\ncalculate the cost for that, but then get 296. And unfortunately a\nnested loop degrades much faster than a merge join.\n\nI'm not sure why exactly 12.4 picked a merge join, chances are the\ncosting formular changed a bit somewhere. But as I said, the problem is\nin bogus row cardinality estimates - 12.4 is simply lucky.\n\nThe problem most likely stems from this part:\n\n -> GroupAggregate (cost=0.43..85743.24 rows=1830 width=72) (actual\ntime=1.621..3452.034 rows=282179 loops=3)\n Group Key: student_class_detail.aamc_id\n Filter: (max((student_class_detail.class_level_cd)::text) = '4'::text)\n Rows Removed by Filter: 76060\n -> Index Scan using uk_student_class_detail_aamcid_classlevelcd on\nstudent_class_detail (cost=0.43..74747.61 rows=1284079 width=6) (actual\ntime=1.570..2723.014 rows=1272390 loops=3)\n Filter: (class_level_start_dt IS NOT NULL)\n Rows Removed by Filter: 160402\n\nThe filter is bound to be misestimated, and the error then snowballs.\nTry replacing this part with a temporary table (with pre-aggregated\nresults) - you can run analyze on it, etc. 
I'd bet that'll make the\nissue go away.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 24 Mar 2022 12:55:41 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue post upgrade on Version 13 - Incorrect\n Estimation Cost choosing Hash Aggregate-Nested Left Loop Join" }, { "msg_contents": "On Tue, Mar 22, 2022 at 12:57:10PM +0000, Prajna Shetty wrote:\n> 1. We have performed Vacuum/Analyze/Reindex post Upgrade.\n> 2. Tweaked work_mem so it does not spill to Disk. We can Disk Usage But it is still using Hash Aggregate and came down from 5 minutes to 20 seconds. (Expected ~5 seconds). Attached plan after modifying work_mem\n> 3. Disabled Seqcan/ nestedloop\n> 4. Tweaked random_page_cost/seq_page_cost\n> 5. Set default_statistics_target=1000 and then run vacuum(analyze,verbose) on selected tables.\n> 6. We have also tested performance by increasing resources up to 4 vCPU and 32 GB RAM.\n\nWould you provide your current settings ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\n\n", "msg_date": "Thu, 24 Mar 2022 07:24:14 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue post upgrade on Version 13 - Incorrect\n Estimation Cost choosing Hash Aggregate-Nested Left Loop Join" } ]
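As a concrete illustration of the temporary-table rewrite suggested two messages above, the following is a minimal, hedged sketch: the table and column names are taken from the quoted plan, while the temp table name and the final join step are assumptions.

-- Materialize the misestimated aggregate once, then ANALYZE it so the
-- planner works with a real row count (~282k rows, not the estimated 1830).
CREATE TEMP TABLE student_class_max AS
SELECT aamc_id
FROM   student_class_detail
WHERE  class_level_start_dt IS NOT NULL
GROUP  BY aamc_id
HAVING max(class_level_cd::text) = '4';

ANALYZE student_class_max;

-- Then join student_class_max in the main query in place of the original
-- GroupAggregate subquery, so the outer joins are costed with sane cardinalities.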
[ { "msg_contents": "Hi\n\nWe are running\npostgres server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\nPOSTGIS=\"3.1.1 aaf4c79\" [EXTENSION] PGSQL=\"120\" GEOS=\"3.9.0-CAPI-1.16.2\" SFCGAL=\"1.3.7\" PROJ=\"7.2.1\" GDAL=\"GDAL 3.2.1, released 2020/12/29\" LIBXML=\"2.9.10\" LIBJSON=\"0.13.1\" LIBPROTOBUF=\"1.3.3\" WAGYU=\"0.5.0 (Internal)\" TOPOLOGY RASTER\n\nThe problem is that it takes more than 10 hours (duration: 36885527.039) to browse tables geometry from qgis https://explain.depesz.com/s/MxAN#bquery with high load on the server.\nWe have at least 45 jobs running and around 70% CPU load on the server.\n\nThen I started to check views/tables involved and found that the view geometry_columns seems to be using a very long time\n'explain analyze select * from geometry_columns' have been waiting for more than 2 hours now, will paste the result to https://explain.depesz.com when done.\n\nWhile waiting I created temp table for the system tables involved in view geometry_columns like this\n\ncreate temp table pg_attribute_temp as select attcollation,attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,atthasmissing,attidentity,attgenerated,attisdropped,attislocal,attinhcount,attacl,attoptions,attfdwoptions from pg_attribute;\ncreate temp table pg_namespace_temp as select * from pg_namespace;\ncreate temp table pg_type_temp as select * from pg_type;\ncreate temp table pg_constraint_temp as select * from pg_constraint;\n\nSELECT 1702623\nTime: 42552.899 ms (00:42.553)\nSELECT 841\nTime: 132.595 ms\nSELECT 245239\nTime: 3378.395 ms (00:03.378)\nSELECT 9575\nTime: 205.036 ms\n\nThat did not take very long time.\n\nThen created geometry_columns_temp_no_rules using those new temp tables.\n\nexplain analyze select * from geometry_columns_temp_no_rules\n\nAnd that takes less than 6 seconds with no indexes. 
Here is the explain from https://explain.depesz.com/s/yBSd\n\nWhy is temp tables with no indexes much faster system tables with indexes ?\n\n(I do not think it's related to not having rules I tested to crated a view using system tables with but with no rules and that hanged for more that 15 minuttes an dthen I gave up)\n\nHere is the view def that I used.\n\nCREATE VIEW geometry_columns_temp_no_rules AS\nSELECT current_database()::character varying(256) AS f_table_catalog,\n n.nspname AS f_table_schema,\n c.relname AS f_table_name,\n a.attname AS f_geometry_column,\n COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n FROM pg_class c\n JOIN pg_attribute_temp a ON a.attrelid = c.oid AND NOT a.attisdropped\n JOIN pg_namespace_temp n ON c.relnamespace = n.oid\n JOIN pg_type_temp t ON a.atttypid = t.oid\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n FROM ( SELECT pg_constraint_temp.connamespace,\n pg_constraint_temp.conrelid,\n pg_constraint_temp.conkey,\n pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n FROM pg_constraint_temp) s\n WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON st.connamespace = n.oid AND st.conrelid = c.oid AND (a.attnum = ANY (st.conkey))\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n FROM ( SELECT pg_constraint_temp.connamespace,\n pg_constraint_temp.conrelid,\n pg_constraint_temp.conkey,\n pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n FROM pg_constraint_temp) s\n WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON sn.connamespace = n.oid AND sn.conrelid = c.oid AND (a.attnum = ANY (sn.conkey))\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n FROM ( SELECT pg_constraint_temp.connamespace,\n pg_constraint_temp.conrelid,\n pg_constraint_temp.conkey,\n pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n FROM pg_constraint_temp) s\n WHERE s.consrc ~~* '%srid(% = %'::text) sr ON sr.connamespace = n.oid AND sr.conrelid = c.oid AND (a.attnum = ANY (sr.conkey))\n WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"])) AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name AND NOT pg_is_other_temp_schema(c.relnamespace) AND has_table_privilege(c.oid, 'SELECT'::text);\n;\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\nHi\n\n\n\n\nWe are running \n\n\npostgres server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\nPOSTGIS=\"3.1.1 aaf4c79\" [EXTENSION] PGSQL=\"120\" GEOS=\"3.9.0-CAPI-1.16.2\" SFCGAL=\"1.3.7\" PROJ=\"7.2.1\" GDAL=\"GDAL 3.2.1, released 2020/12/29\" LIBXML=\"2.9.10\" LIBJSON=\"0.13.1\" LIBPROTOBUF=\"1.3.3\" WAGYU=\"0.5.0 (Internal)\" TOPOLOGY RASTER\n\n\nThe problem is that it takes more than 10 hours (duration: 36885527.039) to browse tables geometry from qgis\nhttps://explain.depesz.com/s/MxAN#bquery with high load on the server.\n\n\nWe have at least 45 jobs running and around 70% CPU load on the server.\n\n\n\nThen I started to check views/tables involved and found that 
the view geometry_columns seems to be using a very long time\n\n'explain analyze select * from geometry_columns' have been waiting for more than 2 hours now, will paste the result to https://explain.depesz.com when done.\n\n\nWhile waiting I created temp table for the system tables involved in view geometry_columns like this\n\n\n\ncreate temp table pg_attribute_temp as select attcollation,attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,atthasmissing,attidentity,attgenerated,attisdropped,attislocal,attinhcount,attacl,attoptions,attfdwoptions\n from pg_attribute;\ncreate temp table pg_namespace_temp as select * from pg_namespace;\ncreate temp table pg_type_temp as select * from pg_type;\ncreate temp table pg_constraint_temp as select * from pg_constraint;\n\n\nSELECT 1702623\nTime: 42552.899 ms (00:42.553)\nSELECT 841\nTime: 132.595 ms\nSELECT 245239\nTime: 3378.395 ms (00:03.378)\nSELECT 9575\nTime: 205.036 ms\n\n\nThat did not take very long time.\n\n\n\nThen created geometry_columns_temp_no_rules using those new temp tables.\n\n\nexplain analyze select * from geometry_columns_temp_no_rules \n\n\n\nAnd that takes less than 6 seconds with no indexes. Here is the explain from https://explain.depesz.com/s/yBSd\n\n\nWhy is temp tables with no indexes much faster system tables with indexes ?\n\n\n(I do not think it's related to not having rules I tested to crated a view using system tables with but with no rules and that hanged for more that 15 minuttes an dthen I gave up)\n\n\n\nHere is the view def that I used.\n\n\n\nCREATE VIEW geometry_columns_temp_no_rules AS\nSELECT current_database()::character varying(256) AS f_table_catalog,\n    n.nspname AS f_table_schema,\n    c.relname AS f_table_name,\n    a.attname AS f_geometry_column,\n    COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n    COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n    replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n   FROM pg_class c\n     JOIN pg_attribute_temp a ON a.attrelid = c.oid AND NOT a.attisdropped\n     JOIN pg_namespace_temp n ON c.relnamespace = n.oid\n     JOIN pg_type_temp t ON a.atttypid = t.oid\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n           FROM ( SELECT pg_constraint_temp.connamespace,\n                    pg_constraint_temp.conrelid,\n                    pg_constraint_temp.conkey,\n                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n                   FROM pg_constraint_temp) s\n          WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON st.connamespace = n.oid AND st.conrelid = c.oid AND (a.attnum = ANY (st.conkey))\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n           FROM ( SELECT pg_constraint_temp.connamespace,\n                    pg_constraint_temp.conrelid,\n                    pg_constraint_temp.conkey,\n                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n                   FROM pg_constraint_temp) s\n          WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON sn.connamespace 
= n.oid AND sn.conrelid = c.oid AND (a.attnum = ANY (sn.conkey))\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n           FROM ( SELECT pg_constraint_temp.connamespace,\n                    pg_constraint_temp.conrelid,\n                    pg_constraint_temp.conkey,\n                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n                   FROM pg_constraint_temp) s\n          WHERE s.consrc ~~* '%srid(% = %'::text) sr ON sr.connamespace = n.oid AND sr.conrelid = c.oid AND (a.attnum = ANY (sr.conkey))\n  WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"])) AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name AND NOT pg_is_other_temp_schema(c.relnamespace) AND has_table_privilege(c.oid,\n 'SELECT'::text);\n;    \n\n\n\n\nThanks.\n\n\n\n\nLars", "msg_date": "Wed, 23 Mar 2022 09:44:09 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Using system tables directly takes many hours, using temp tables with\n no indexes takes a few seconds for geometry_columns view." }, { "msg_contents": ">Hi\n>\n>We are running\n>postgres server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n>POSTGIS=\"3.1.1 aaf4c79\" [EXTENSION] PGSQL=\"120\" GEOS=\"3.9.0-CAPI-1.16.2\" SFCGAL=\"1.3.7\" PROJ=\"7.2.1\" GDAL=\"GDAL 3.2.1, released 2020/12/29\" LIBXML=\"2.9.10\" LIBJSON=\"0.13.1\" LIBPROTOBUF=\"1.3.3\" WAGYU=\"0.5.0 (Internal)\" TOPOLOGY RASTER\n>\n>The problem is that it takes more than 10 hours (duration: 36885527.039) to browse tables geometry from qgis https://explain.depesz.com/s/MxAN#bquery with high load on the server.\n>We have at least 45 jobs running and around 70% CPU load on the server.\n>\n>Then I started to check views/tables involved and found that the view geometry_columns seems to be using a very long time\n>'explain analyze select * from geometry_columns' have been waiting for more than 2 hours now, will paste the result to https://explain.depesz.com when done.\n>\n>While waiting I created temp table for the system tables involved in view geometry_columns like this\n>\n>create temp table pg_attribute_temp as select attcollation,attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,atthasmissing,attidentity,attgenerated,attisdropped,attislocal,attinhcount,attacl,attoptions,attfdwoptions from pg_attribute;\n>create temp table pg_namespace_temp as select * from pg_namespace;\n>create temp table pg_type_temp as select * from pg_type;\n>create temp table pg_constraint_temp as select * from pg_constraint;\n>\n>SELECT 1702623\n>Time: 42552.899 ms (00:42.553)\n>SELECT 841\n>Time: 132.595 ms\n>SELECT 245239\n>Time: 3378.395 ms (00:03.378)\n>SELECT 9575\n>Time: 205.036 ms\n>\n>That did not take very long time.\n>\n>Then created geometry_columns_temp_no_rules using those new temp tables.\n>\n>explain analyze select * from geometry_columns_temp_no_rules\n>\n>And that takes less than 6 seconds with no indexes. 
Here is the explain from https://explain.depesz.com/s/yBSd\n>\n>Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>(I do not think it's related to not having rules I tested to crated a view using system tables with but with no rules and that hanged for more that 15 minuttes an dthen I gave up)\n>\n>Here is the view def that I used.\n>\n>CREATE VIEW geometry_columns_temp_no_rules AS\n>SELECT current_database()::character varying(256) AS f_table_catalog,\n> n.nspname AS f_table_schema,\n> c.relname AS f_table_name,\n> a.attname AS f_geometry_column,\n> COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n> COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n> replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n> FROM pg_class c\n> JOIN pg_attribute_temp a ON a.attrelid = c.oid AND NOT a.attisdropped\n> JOIN pg_namespace_temp n ON c.relnamespace = n.oid\n> JOIN pg_type_temp t ON a.atttypid = t.oid\n> LEFT JOIN ( SELECT s.connamespace,\n> s.conrelid,\n> s.conkey,\n> replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n> FROM ( SELECT pg_constraint_temp.connamespace,\n> pg_constraint_temp.conrelid,\n> pg_constraint_temp.conkey,\n> pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n> FROM pg_constraint_temp) s\n> WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON st.connamespace = n.oid AND st.conrelid = c.oid AND (a.attnum = ANY (st.conkey))\n> LEFT JOIN ( SELECT s.connamespace,\n> s.conrelid,\n> s.conkey,\n> replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n> FROM ( SELECT pg_constraint_temp.connamespace,\n> pg_constraint_temp.conrelid,\n> pg_constraint_temp.conkey,\n> pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n> FROM pg_constraint_temp) s\n> WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON sn.connamespace = n.oid AND sn.conrelid = c.oid AND (a.attnum = ANY (sn.conkey))\n> LEFT JOIN ( SELECT s.connamespace,\n> s.conrelid,\n> s.conkey,\n> replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n> FROM ( SELECT pg_constraint_temp.connamespace,\n> pg_constraint_temp.conrelid,\n> pg_constraint_temp.conkey,\n> pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n> FROM pg_constraint_temp) s\n> WHERE s.consrc ~~* '%srid(% = %'::text) sr ON sr.connamespace = n.oid AND sr.conrelid = c.oid AND (a.attnum = ANY (sr.conkey))\n> WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"])) AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name AND NOT pg_is_other_temp_schema(c.relnamespace) AND has_table_privilege(c.oid, 'SELECT'::text);\n>;\n>\n>Thanks.\n>\n>Lars\nHi\n\nI did another test.\n\nHere ( https://explain.depesz.com/s/H7f9 ) I use pg_attribute_temp and we use around 6 seconds\n\nBut in this query (https://explain.depesz.com/s/Op7i) I use pg_attribute system table directly and execution time is around 50 seconds\n\nThe explain analyze is still running on select * from geometry_columns.\n\nThanks\n\nLars\n\n\n\n\n\n\n\n>Hi\n>\n>We are running\n>postgres server 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1))\n>POSTGIS=\"3.1.1 aaf4c79\" [EXTENSION] PGSQL=\"120\" GEOS=\"3.9.0-CAPI-1.16.2\" SFCGAL=\"1.3.7\" PROJ=\"7.2.1\" GDAL=\"GDAL 3.2.1, released 2020/12/29\" LIBXML=\"2.9.10\" 
LIBJSON=\"0.13.1\" LIBPROTOBUF=\"1.3.3\" WAGYU=\"0.5.0 (Internal)\" TOPOLOGY RASTER\n>\n>The problem is that it takes more than 10 hours (duration: 36885527.039) to browse tables geometry from qgis https://explain.depesz.com/s/MxAN#bquery with high load on the server.\n>We have at least 45 jobs running and around 70% CPU load on the server.\n>\n>Then I started to check views/tables involved and found that the view geometry_columns seems to be using a very long time\n>'explain analyze select * from geometry_columns' have been waiting for more than 2 hours now, will paste the result to https://explain.depesz.com when done.\n>\n>While waiting I created temp table for the system tables involved in view geometry_columns like this\n>\n>create temp table pg_attribute_temp as select attcollation,attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,atthasmissing,attidentity,attgenerated,attisdropped,attislocal,attinhcount,attacl,attoptions,attfdwoptions\n from pg_attribute;\n>create temp table pg_namespace_temp as select * from pg_namespace;\n>create temp table pg_type_temp as select * from pg_type;\n>create temp table pg_constraint_temp as select * from pg_constraint;\n>\n>SELECT 1702623\n>Time: 42552.899 ms (00:42.553)\n>SELECT 841\n>Time: 132.595 ms\n>SELECT 245239\n>Time: 3378.395 ms (00:03.378)\n>SELECT 9575\n>Time: 205.036 ms\n>\n>That did not take very long time.\n>\n>Then created geometry_columns_temp_no_rules using those new temp tables.\n>\n>explain analyze select * from geometry_columns_temp_no_rules\n>\n>And that takes less than 6 seconds with no indexes. Here is the explain from https://explain.depesz.com/s/yBSd\n>\n>Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>(I do not think it's related to not having rules I tested to crated a view using system tables with but with no rules and that hanged for more that 15 minuttes an dthen I gave up)\n>\n>Here is the view def that I used.\n>\n>CREATE VIEW geometry_columns_temp_no_rules AS\n>SELECT current_database()::character varying(256) AS f_table_catalog,\n>    n.nspname AS f_table_schema,\n>    c.relname AS f_table_name,\n>    a.attname AS f_geometry_column,\n>    COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n>    COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n>    replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n>   FROM pg_class c\n>     JOIN pg_attribute_temp a ON a.attrelid = c.oid AND NOT a.attisdropped\n>     JOIN pg_namespace_temp n ON c.relnamespace = n.oid\n>     JOIN pg_type_temp t ON a.atttypid = t.oid\n>     LEFT JOIN ( SELECT s.connamespace,\n>            s.conrelid,\n>            s.conkey,\n>            replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n>           FROM ( SELECT pg_constraint_temp.connamespace,\n>                    pg_constraint_temp.conrelid,\n>                    pg_constraint_temp.conkey,\n>                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n>                   FROM pg_constraint_temp) s\n>          WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON st.connamespace = n.oid AND st.conrelid = c.oid AND (a.attnum = ANY (st.conkey))\n>     LEFT JOIN ( SELECT s.connamespace,\n>            s.conrelid,\n>            s.conkey,\n>            
replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n>           FROM ( SELECT pg_constraint_temp.connamespace,\n>                    pg_constraint_temp.conrelid,\n>                    pg_constraint_temp.conkey,\n>                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n>                   FROM pg_constraint_temp) s\n>          WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON sn.connamespace = n.oid AND sn.conrelid = c.oid AND (a.attnum = ANY (sn.conkey))\n>     LEFT JOIN ( SELECT s.connamespace,\n>            s.conrelid,\n>            s.conkey,\n>            replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n>           FROM ( SELECT pg_constraint_temp.connamespace,\n>                    pg_constraint_temp.conrelid,\n>                    pg_constraint_temp.conkey,\n>                    pg_get_constraintdef(pg_constraint_temp.oid) AS consrc\n>                   FROM pg_constraint_temp) s\n>          WHERE s.consrc ~~* '%srid(% = %'::text) sr ON sr.connamespace = n.oid AND sr.conrelid = c.oid AND (a.attnum = ANY (sr.conkey))\n>  WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"])) AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name AND NOT pg_is_other_temp_schema(c.relnamespace) AND has_table_privilege(c.oid,\n 'SELECT'::text);\n>;    \n>\n>Thanks.\n>\n>Lars\nHi\n\n\nI did another test. \n\n\nHere ( https://explain.depesz.com/s/H7f9 ) I use pg_attribute_temp and we use around 6 seconds\n\n\nBut in this query (https://explain.depesz.com/s/Op7i) I use pg_attribute system table directly and execution time is around 50 seconds\n\n\nThe explain analyze is still running on select * from geometry_columns.\n\n\n\n\nThanks\n\n\n\nLars", "msg_date": "Wed, 23 Mar 2022 12:56:39 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using system tables directly takes many hours, using temp tables\n with no indexes takes a few seconds for geometry_columns view." }, { "msg_contents": "On Wed, Mar 23, 2022 at 09:44:09AM +0000, Lars Aksel Opsahl wrote:\n> Why is temp tables with no indexes much faster system tables with indexes ?\n\nI think the \"temp table\" way is accidentally faster due to having no\nstatistics, not because it has no indexes. If you run ANALYZE, you may hit the\nsame issue (or, maybe you just need to VACUUM ANALYZE your system catalogs).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 23 Mar 2022 08:19:21 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using system tables directly takes many hours, using temp tables\n with no indexes takes a few seconds for geometry_columns view." }, { "msg_contents": "________________________________\n>From: Justin Pryzby <[email protected]>Sent: Wednesday, March 23, 2022 2:19 PMTo: Lars Aksel Opsahl <[email protected]>Cc: [email protected] <[email protected]>Subject: Re: Using system tables directly takes many hours, using temp tables with no indexes takes a few seconds for geometry_columns view.\n>\n>On Wed, Mar 23, 2022 at 09:44:09AM +0000, Lars Aksel Opsahl wrote:\n>> Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>I think the \"temp table\" way is accidentally faster due to having no\n>statistics, not because it has no indexes. 
If you run ANALYZE, you may hit the\n>same issue (or, maybe you just need to VACUUM ANALYZE your system catalogs).\n\nHi\n\nI had tested this in the morning and it did not work (VACUUM ANALYZE pg_class; VACUUM ANALYZE pg_attribute; VACUUM ANALYZE pg_namespace; VACUUM ANALYZE raster_columns; VACUUM ANALYZE pg_type; )\n\nBut now it seemed to work maybe one time, the 50 secs query (https://explain.depesz.com/s/Op7i<https://explain.depesz.com/s/Op7i#stats>) was down to 6 secs, but just to be sure I rerun the query one more time and we where where back to execution time of 50 seconds.\n\nIt seems like stats may be valid for just some few seconds before having to run analyze again and that takes a long time.\n\nThe 45 jobs running on the server are creating a lot temp tables and maybe some unlogged tables\n\nWe can not run run analyze in every job because this may be many hundred thounsed jobs that we need to run.\n\nDoes this mean that we can not use temp tables in this extent and in stead use https://www.postgresql.org/docs/12/queries-with.html ?\nBut the problem with \"with\" is that we can not create indexes.\n\nOr is a option to exclude temp tables geometry_columns in effective way , but that will probably cause problems if we create temp table in jobs where we use postgis.so that not a good solution either,\n\nThanks\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n>From: Justin Pryzby <[email protected]>Sent: Wednesday, March 23, 2022 2:19 PMTo: Lars Aksel Opsahl <[email protected]>Cc: [email protected] <[email protected]>Subject: Re: Using system tables directly takes\n many hours, using temp tables with no indexes takes a few seconds for geometry_columns view.\n> \n>On Wed, Mar 23, 2022 at 09:44:09AM +0000, Lars Aksel Opsahl wrote:\n>> Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>I think the \"temp table\" way is accidentally faster due to having no\n>statistics, not because it has no indexes.  If you run ANALYZE, you may hit the\n>same issue (or, maybe you just need to VACUUM ANALYZE your system catalogs).\n\n\n\nHi\n\n\nI had tested this in the morning and it did not work  (VACUUM ANALYZE pg_class; VACUUM ANALYZE pg_attribute; VACUUM ANALYZE pg_namespace; VACUUM ANALYZE raster_columns; VACUUM ANALYZE pg_type; )\n\n\n\n\nBut now it seemed to work maybe one time, the 50 secs query\n(https://explain.depesz.com/s/Op7i) \nwas down to 6 secs, but just to be sure I rerun the query one more time and we where where back to execution time of 50 seconds.\n\n\nIt seems like stats may be valid for just some few seconds  before having to run analyze again and that takes a long time.\n\n\nThe 45 jobs running on the server are creating a lot temp tables and maybe some unlogged tables\n\n\n\n\nWe can not run run analyze in every job because this may be many hundred thounsed jobs that we need to run.\n\n\n\nDoes this mean that we can not use temp tables in this extent and in stead use\n\nhttps://www.postgresql.org/docs/12/queries-with.html ? 
\n\n\nBut the problem with \"with\" is that we can not create indexes.\n\n\n\nOr is a option to exclude temp tables geometry_columns in effective way , but that will probably cause problems if we create temp table in jobs where we use postgis.so that not a good solution\n either,\n\n\nThanks \n\n\n\nLars", "msg_date": "Wed, 23 Mar 2022 14:49:39 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using system tables directly takes many hours, using temp tables\n with no indexes takes a few seconds for geometry_columns view." }, { "msg_contents": "________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, March 23, 2022 2:19 PM\n\n>On Wed, Mar 23, 2022 at 09:44:09AM +0000, Lars Aksel Opsahl wrote:\n>> Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>I think the \"temp table\" way is accidentally faster due to having no\n>statistics, not because it has no indexes. If you run ANALYZE, you may hit the\n>same issue (or, maybe you just need to VACUUM ANALYZE your system catalogs).\n\nHi\n\nSorry I misread your mail you are totally right.\n\nBefore I do vacuum we have these execution Time: 9422.964 ms (00:09.423)\n\nThe vacuum as you suggested\nVACUUM ANALYZE pg_attribute_temp;\nVACUUM ANALYZE pg_namespace_temp;\nVACUUM ANALYZE pg_type_temp;\nVACUUM ANALYZE pg_constraint_temp;\n\nI can wait for 10 minutes and it just hangs, yes so we have the same problem as suggested.\n\nThe original query \"select * from geometry_columns\" finally finished after almost 9 hours .\n\nThe plan is here https://explain.depesz.com/s/jGXf\n\nI did some more testing and if remove LEFT JOIN to pg_constraint in runs in less than a minute and return 75219 rows.\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n n.nspname AS f_table_schema,\n n.oid AS n_oid,\n c.relname AS f_table_name,\n c.oid AS c_oid,\n a.attname AS f_geometry_column,\n a.attnum AS a_attnum\n --COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n FROM pg_class c\n JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped\n JOIN pg_namespace n ON c.relnamespace = n.oid\n JOIN pg_type t ON a.atttypid = t.oid\n WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n AND NOT pg_is_other_temp_schema(c.relnamespace)\n AND has_table_privilege(c.oid, 'SELECT'::text)\n)\nSELECT * FROM geo_column_list;\n\nBut if I try this with LEFT JOIN it hangs for hours it seems like.\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n n.nspname AS f_table_schema,\n n.oid AS n_oid,\n c.relname AS f_table_name,\n c.oid AS c_oid,\n a.attname AS f_geometry_column,\n a.attnum AS a_attnum,\n a.atttypmod\n --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n FROM pg_class c\n JOIN pg_attribute a ON a.attrelid = c.oid AND 
NOT a.attisdropped\n JOIN pg_namespace n ON c.relnamespace = n.oid\n JOIN pg_type t ON a.atttypid = t.oid\n WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n AND NOT pg_is_other_temp_schema(c.relnamespace)\n AND has_table_privilege(c.oid, 'SELECT'::text)\n),\npg_constraint_list AS (SELECT pg_constraint.connamespace,\n pg_constraint.conrelid,\n pg_constraint.conkey,\n pg_get_constraintdef(pg_constraint.oid) AS consrc\n FROM pg_constraint, geo_column_list\nWHERE connamespace = n_oid AND conrelid = c_oid AND (a_attnum = ANY (conkey))\n)\n,\ngeo_column_list_full AS (SELECT * FROM geo_column_list\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n FROM pg_constraint_list s\n WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON TRUE\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n FROM pg_constraint_list s\n WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON TRUE\n LEFT JOIN ( SELECT s.connamespace,\n s.conrelid,\n s.conkey,\n replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n FROM pg_constraint_list s\n WHERE s.consrc ~~* '%srid(% = %'::text) sr ON TRUE\n)\nSELECT *,\n COALESCE(postgis_typmod_dims(atttypmod), ndims, 2) AS coord_dimension\nFROM geo_column_list_full;\n\nbut if I try this it return 648 rows in less than second\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n n.nspname AS f_table_schema,\n n.oid AS n_oid,\n c.relname AS f_table_name,\n c.oid AS c_oid,\n a.attname AS f_geometry_column,\n a.attnum AS a_attnum,\n a.atttypmod\n --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n FROM pg_class c\n JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped\n JOIN pg_namespace n ON c.relnamespace = n.oid\n JOIN pg_type t ON a.atttypid = t.oid\n WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n AND NOT pg_is_other_temp_schema(c.relnamespace)\n AND has_table_privilege(c.oid, 'SELECT'::text)\n),\npg_constraint_list AS (SELECT pg_constraint.connamespace,\n pg_constraint.conrelid,\n pg_constraint.conkey,\n pg_get_constraintdef(pg_constraint.oid) AS consrc\n FROM pg_constraint, geo_column_list\nWHERE connamespace = n_oid AND conrelid = c_oid AND (a_attnum = ANY (conkey))\n)\nSELECT *\nFROM pg_constraint_list;\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, March 23, 2022 2:19 PM\n\n\n\n\n\n>On Wed, Mar 23, 2022 at 09:44:09AM +0000, Lars Aksel Opsahl wrote:\n>> Why is temp tables with no indexes much faster system tables with indexes ?\n>\n>I think the \"temp table\" way is accidentally faster due to having no\n>statistics, not because it has no indexes.  If you run ANALYZE, you may hit the\n>same issue (or, maybe you just need to VACUUM ANALYZE your system catalogs).\n\n\nHi \n\n\nSorry I misread your mail you are totally right. 
\n\n\nBefore I do vacuum we have these execution Time: 9422.964 ms (00:09.423)\n\n\nThe vacuum as you suggested \nVACUUM ANALYZE pg_attribute_temp;\nVACUUM ANALYZE pg_namespace_temp;\nVACUUM ANALYZE pg_type_temp;\nVACUUM ANALYZE pg_constraint_temp;\n\n\nI can wait for 10 minutes and it just hangs, yes so we have the same problem as suggested.\n\n\nThe original query \"select * from geometry_columns\" finally finished after almost 9 hours .\n\n\nThe plan is here https://explain.depesz.com/s/jGXf \n\n\nI did some more testing and if remove LEFT JOIN to pg_constraint in runs in less than a minute and return  75219 rows.\n\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n    n.nspname AS f_table_schema,\n    n.oid AS n_oid,\n    c.relname AS f_table_name,\n    c.oid AS c_oid,\n    a.attname AS f_geometry_column,\n    a.attnum AS a_attnum\n    --COALESCE(postgis_typmod_dims(a.atttypmod), sn.ndims, 2) AS coord_dimension,\n    --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n    --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n   FROM pg_class c\n     JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped\n     JOIN pg_namespace n ON c.relnamespace = n.oid\n     JOIN pg_type t ON a.atttypid = t.oid\n  WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n\n  AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n\n  AND NOT pg_is_other_temp_schema(c.relnamespace) \n  AND has_table_privilege(c.oid, 'SELECT'::text)\n)\nSELECT * FROM geo_column_list;\n\n\nBut if I try this with LEFT JOIN it hangs for hours it seems like.\n\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n    n.nspname AS f_table_schema,\n    n.oid AS n_oid,\n    c.relname AS f_table_name,\n    c.oid AS c_oid,\n    a.attname AS f_geometry_column,\n    a.attnum AS a_attnum,\n    a.atttypmod\n    --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n    --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n   FROM pg_class c\n     JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped\n     JOIN pg_namespace n ON c.relnamespace = n.oid\n     JOIN pg_type t ON a.atttypid = t.oid\n  WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n\n  AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n\n  AND NOT pg_is_other_temp_schema(c.relnamespace) \n  AND has_table_privilege(c.oid, 'SELECT'::text)\n),\npg_constraint_list AS (SELECT pg_constraint.connamespace,\n                    pg_constraint.conrelid,\n                    pg_constraint.conkey,\n                    pg_get_constraintdef(pg_constraint.oid) AS consrc\n                   FROM pg_constraint, geo_column_list \nWHERE connamespace = n_oid AND conrelid = c_oid AND (a_attnum = ANY (conkey))\n)\n,\ngeo_column_list_full AS (SELECT * FROM geo_column_list\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(split_part(s.consrc, ''''::text, 2), ')'::text, ''::text) AS type\n           FROM pg_constraint_list s\n          
WHERE s.consrc ~~* '%geometrytype(% = %'::text) st ON TRUE\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text)::integer AS ndims\n           FROM pg_constraint_list s\n          WHERE s.consrc ~~* '%ndims(% = %'::text) sn ON TRUE\n     LEFT JOIN ( SELECT s.connamespace,\n            s.conrelid,\n            s.conkey,\n            replace(replace(split_part(s.consrc, ' = '::text, 2), ')'::text, ''::text), '('::text, ''::text)::integer AS srid\n           FROM pg_constraint_list s\n          WHERE s.consrc ~~* '%srid(% = %'::text) sr ON TRUE\n)\nSELECT *,\n    COALESCE(postgis_typmod_dims(atttypmod), ndims, 2) AS coord_dimension\nFROM geo_column_list_full;\n\n\nbut if I try this it return 648 rows in less than second\n\n\nWITH geo_column_list AS (SELECT\ncurrent_database()::character varying(256) AS f_table_catalog,\n    n.nspname AS f_table_schema,\n    n.oid AS n_oid,\n    c.relname AS f_table_name,\n    c.oid AS c_oid,\n    a.attname AS f_geometry_column,\n    a.attnum AS a_attnum,\n    a.atttypmod\n    --COALESCE(NULLIF(postgis_typmod_srid(a.atttypmod), 0), sr.srid, 0) AS srid,\n    --replace(replace(COALESCE(NULLIF(upper(postgis_typmod_type(a.atttypmod)), 'GEOMETRY'::text), st.type, 'GEOMETRY'::text), 'ZM'::text, ''::text), 'Z'::text, ''::text)::character varying(30) AS type\n   FROM pg_class c\n     JOIN pg_attribute a ON a.attrelid = c.oid AND NOT a.attisdropped\n     JOIN pg_namespace n ON c.relnamespace = n.oid\n     JOIN pg_type t ON a.atttypid = t.oid\n  WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'm'::\"char\", 'f'::\"char\", 'p'::\"char\"]))\n\n  AND NOT c.relname = 'raster_columns'::name AND t.typname = 'geometry'::name\n\n  AND NOT pg_is_other_temp_schema(c.relnamespace) \n  AND has_table_privilege(c.oid, 'SELECT'::text)\n),\npg_constraint_list AS (SELECT pg_constraint.connamespace,\n                    pg_constraint.conrelid,\n                    pg_constraint.conkey,\n                    pg_get_constraintdef(pg_constraint.oid) AS consrc\n                   FROM pg_constraint, geo_column_list \nWHERE connamespace = n_oid AND conrelid = c_oid AND (a_attnum = ANY (conkey))\n)\nSELECT *\nFROM pg_constraint_list;\n\n\nThanks.\n\n\nLars", "msg_date": "Thu, 24 Mar 2022 09:39:59 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using system tables directly takes many hours, using temp tables\n with no indexes takes a few seconds for geometry_columns view." } ]
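A quick way to verify the statistics explanation given earlier in this thread (that the temporary copies are fast only while they have no statistics) is to capture the plan before and after analyzing them. This is only a sketch; the object names are the ones created earlier in the thread.

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM geometry_columns_temp_no_rules;  -- plan built with default statistics

ANALYZE pg_attribute_temp;
ANALYZE pg_namespace_temp;
ANALYZE pg_type_temp;
ANALYZE pg_constraint_temp;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM geometry_columns_temp_no_rules;  -- if this now degrades to the nested-loop
                                               -- shape seen on the real catalogs, the
                                               -- speed-up came from missing statistics,
                                               -- not from the absence of indexes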
[ { "msg_contents": "Hi,\n\nWhen running our application, we noticed that some processes are taking a lot of memory ( 10, 15, 20GB or so, of RSS ).\nIt is also reproduced when running in psql.\n\nPG version is 12.6\n\n2 examples:\n\n * Common table, PG_BUFFERCACHE, when doing group by, session takes 140MB, which is not a lot, but still more than the 20MB that set for work_mem.\n * Application table, ~15M rows, grouping by a smallint columns - takes ~1000 MB.\n\nAs I wrote before, application processes reached tens of GB.\n\nIn the first case, PG also used temp files, at the second case, when more memory was used and also in the application case, temp files were not created.\n\nI will try to add as much details as possible, please let me know if there is something additional that is required.\n\nThanks,\nShai\n\n\nMore details:\n\nFirst what I see, and then versions, parameters, etc.\nNote: this DB is set with Patroni, replication, etc. but the scenario was reproduce ( up to few hundreds MB, not tens of GB ) on other environment, without it.\n\nQueries:\n\n 1. PG_BUFFERCACHE :\n\n * INSERT INTO PGAWR_BUFFERCACHE_SUMMARY( SELECT 1, NOW(), RELFILENODE, RELDATABASE, COUNT(*) AS BUFFERS_COUNT FROM PG_BUFFERCACHE GROUP BY RELFILENODE, RELDATABASE) ;\n * Insert 12309 rows.\n * Table has 2097152 rows.\n\n\n 1. Application table:\n\n\n * Query: select cycle_code, count(*) from ape1_subscr_offers group by cycle_code ;\n\n\n * table has 15318725 rows.\n\n\n * The cycle_code column is the first column of an index.\n\n\n\n * Table is partitioned, 176 partitions.\n\n\n\n * The result\ncycle_code | count\n------------+---------\n 1 | 3824276\n 2 | 3824745\n 3 | 3834609\n 9 | 3835095\n(4 rows)\n\npaaspg=> show work_mem;\nwork_mem\n----------\n20MB\n(1 row)\n\n\nTable structure:\n\npaaspg=> \\d ape1_subscr_offers\n Partitioned table \"vm1app.ape1_subscr_offers\"\n Column | Type | Collation | Nullable | Default\n-----------------------+-----------------------------+-----------+----------+---------\ncycle_code | smallint | | not null |\n customer_segment | smallint | | not null |\n subscriber_id | bigint | | not null |\n offer_id | integer | | not null |\n offer_instance | bigint | | not null |\n offer_eff_date | timestamp without time zone | | not null |\n sys_creation_date | timestamp without time zone | | not null |\n sys_update_date | timestamp without time zone | | |\n operator_id | integer | | |\n application_id | character(6) | | |\n dl_service_code | character(5) | | |\n dl_update_stamp | smallint | | | 0\nupdate_id | bigint | | |\n offer_exp_date | timestamp without time zone | | |\n source_offer_agr_id | bigint | | |\n source_offer_instance | bigint | | |\n eff_act_code_pror | character varying(25) | | |\n exp_act_code_pror | character varying(25) | | |\n load_ind | character(1) | | |\nPartition key: RANGE (cycle_code, customer_segment)\nIndexes:\n \"ape1_subscr_offers_pkey\" PRIMARY KEY, btree (cycle_code, customer_segment, subscriber_id, offer_id, offer_instance, offer_eff_date)\n \"ape1_subscr_offers_1ix\" btree (update_id)\nNumber of partitions: 176 (Use \\d+ to list them.)\n\nExplain:\npaaspg=> explain select cycle_code, count(*) from ape1_subscr_offers group by cycle_code ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\nFinalize GroupAggregate (cost=385331.98..385382.65 rows=200 width=10)\n Group Key: ape1_subscr_offers_p40.cycle_code\n -> Gather Merge (cost=385331.98..385378.65 rows=400 width=10)\n Workers 
Planned: 2\n -> Sort (cost=384331.96..384332.46 rows=200 width=10)\n Sort Key: ape1_subscr_offers_p40.cycle_code\n -> Partial HashAggregate (cost=384322.31..384324.31 rows=200 width=10)\n Group Key: ape1_subscr_offers_p40.cycle_code\n -> Parallel Append (cost=0.00..352347.81 rows=6394900 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p40 (cost=0.00..5052.94 rows=101094 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p46 (cost=0.00..5042.73 rows=100972 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p37 (cost=0.00..5040.12 rows=100912 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p149 (cost=0.00..5037.25 rows=100825 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p145 (cost=0.00..5029.36 rows=100536 width=2)\n\n..\n -> Parallel Seq Scan on ape1_subscr_offers_p183 (cost=0.00..11.53 rows=153 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p184 (cost=0.00..11.53 rows=153 width=2)\n -> Parallel Seq Scan on ape1_subscr_offers_p185 (cost=0.00..11.53 rows=153 width=2)\n(185 rows)\n\nMemory consumption: ( of case 2, application table, using system_stats )\n\nselect act.pid, application_name,\nbackend_type,\npretty_timestamp(xact_start) as xact_start, pretty_timestamp(query_start) as query_start,\npretty_timestamp(backend_start) as backend_start,\ncpu_usage,\npg_size_pretty(memory_bytes) as memory_bytes,\npretty_query(query,50 ) as query\nfrom pg_sys_cpu_memory_by_process() stat, pg_stat_activity act\nwhere stat.pid = act.pid\nand act.application_name like 'psql%'\norder by 1\n;\n pid | application_name | backend_type | xact_start | query_start | backend_start | cpu_usage | memory_bytes | query\n-------+------------------+----------------+---------------------+---------------------+---------------------+-----------+--------------+----------------------------------------------------\n10142 | psql | client backend | 2022-03-23 16:32:20 | 2022-03-23 16:32:20 | 2022-03-23 16:32:20 | 8.79 | 8568 kB | select act.pid, application_name, backend_type, pr\n15298 | psql | client backend | | 2022-03-23 16:32:11 | 2022-03-23 16:05:44 | 0 | 1134 MB | select cycle_code, count(*) from ape1_subscr_offer\n\n\nUsing top:\n\ntop - 16:30:46 up 17 days, 3:10, 3 users, load average: 0.41, 0.35, 0.37\nTasks: 507 total, 1 running, 506 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 5.4 us, 0.6 sy, 0.0 ni, 94.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 65804144 total, 5241032 free, 1811912 used, 58751200 buff/cache\nKiB Swap: 15728636 total, 13837292 free, 1891344 used. 
46488956 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n15298 postgres 20 0 16.8g 1.1g 1.1g S 0.0 1.7 0:02.63 postgres\n13524 postgres 20 0 17.1g 777016 510644 S 0.0 1.2 7:35.34 postgres\n19971 postgres 20 0 17.1g 776540 517872 S 0.0 1.2 7:22.66 postgres\n 8514 postgres 20 0 16.8g 639680 638964 S 0.0 1.0 0:53.79 postgres\n26120 postgres 20 0 16.8g 574916 557856 S 0.0 0.9 0:20.33 postgres\n22529 postgres 20 0 16.9g 572728 556956 S 0.0 0.9 0:04.80 postgres\n\nPG version:\n\npaaspg=> SELECT version()\npaaspg-> ;\n version\n---------------------------------------------------------------------------------------------------------\nPostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n\n\nOS version:\n\npostgres@illin7504:pgsql/Users/Shai> uname -a\nLinux illin7504 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 11 19:12:04 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux\n\n\nParameters: ( which are not default )\n\nwith params as\n(\nSELECT name, source, context, substring(setting,1,50) val, unit,\n substring(boot_val,1,20) default_val\n FROM pg_settings\n)\nselect * from params\n WHERE source != 'default'\nORDER BY 1;\n name | source | context | val | unit | default_val\n-------------------------------------+----------------------+-------------------+----------------------------------------------------+------+----------------------\napplication_name | client | user | psql | |\n archive_command | configuration file | sighup | (disabled) | |\n archive_mode | configuration file | postmaster | off | | off\nautovacuum_analyze_scale_factor | configuration file | sighup | 0.15 | | 0.1\nautovacuum_max_workers | configuration file | postmaster | 4 | | 3\nautovacuum_naptime | configuration file | sighup | 15 | s | 60\nautovacuum_vacuum_cost_limit | configuration file | sighup | 1200 | | -1\nautovacuum_vacuum_scale_factor | configuration file | sighup | 0.05 | | 0.2\ncheckpoint_completion_target | configuration file | sighup | 0.9 | | 0.5\ncluster_name | command line | postmaster | postgres-cluster | |\n config_file | override | postmaster | /pgcluster/pgdata/12.6/data/postgresql.conf | |\n data_checksums | override | internal | off | | off\ndata_directory | override | postmaster | /pgcluster/pgdata/12.6/data | |\n DateStyle | configuration file | user | ISO, MDY | | ISO, MDY\ndefault_text_search_config | configuration file | user | pg_catalog.english | | pg_catalog.simple\ndynamic_shared_memory_type | configuration file | postmaster | posix | | posix\neffective_cache_size | configuration file | user | 6291456 | 8kB | 524288\neffective_io_concurrency | configuration file | user | 200 | | 1\nhba_file | override | postmaster | /pgcluster/pgdata/12.6/data/pg_hba.conf | |\n hot_standby | command line | postmaster | on | | on\nident_file | override | postmaster | /pgcluster/pgdata/12.6/data/pg_ident.conf | |\n idle_in_transaction_session_timeout | configuration file | user | 3600000 | ms | 0\nlc_collate | override | internal | en_US.UTF-8 | | C\nlc_ctype | override | internal | en_US.UTF-8 | | C\nlc_messages | configuration file | superuser | en_US.UTF-8 | |\n lc_monetary | configuration file | user | en_US.UTF-8 | | C\nlc_numeric | configuration file | user | en_US.UTF-8 | | C\nlc_time | configuration file | user | en_US.UTF-8 | | C\nlisten_addresses | command line | postmaster | 10.234.167.191,10.234.166.148,127.0.0.1 | | localhost\nlog_autovacuum_min_duration | configuration file | sighup | 0 | ms | -1\nlog_checkpoints | configuration file | sighup 
| on | | off\nlog_connections | configuration file | superuser-backend | on | | off\nlog_destination | configuration file | sighup | stderr | | stderr\nlog_directory | configuration file | sighup | pg_log | | log\nlog_disconnections | configuration file | superuser-backend | on | | off\nlog_filename | configuration file | sighup | postgresql-%a-%H.log | | postgresql-%Y-%m-%d_\nlogging_collector | configuration file | postmaster | on | | off\nlog_hostname | configuration file | sighup | on | | off\nlog_line_prefix | configuration file | sighup | %t:%r:%u@%d:[%p]: | | %m [%p]\n log_lock_waits | configuration file | superuser | on | | off\nlog_min_duration_statement | configuration file | superuser | 100 | ms | -1\nlog_rotation_age | configuration file | sighup | 60 | min | 1440\nlog_rotation_size | configuration file | sighup | 0 | kB | 10240\nlog_statement | configuration file | superuser | all | | none\nlog_temp_files | configuration file | superuser | 4096 | kB | -1\nlog_timezone | configuration file | sighup | Asia/Jerusalem | | GMT\nlog_transaction_sample_rate | configuration file | superuser | 0 | | 0\nlog_truncate_on_rotation | configuration file | sighup | on | | off\nmaintenance_work_mem | configuration file | user | 2097152 | kB | 65536\nmax_connections | command line | postmaster | 3000 | | 100\nmax_locks_per_transaction | command line | postmaster | 100 | | 64\nmax_parallel_maintenance_workers | configuration file | user | 2 | | 2\nmax_parallel_workers | configuration file | user | 8 | | 8\nmax_parallel_workers_per_gather | configuration file | user | 2 | | 2\nmax_prepared_transactions | command line | postmaster | 0 | | 0\nmax_replication_slots | command line | postmaster | 18 | | 10\nmax_stack_depth | environment variable | superuser | 2048 | kB | 100\nmax_wal_senders | command line | postmaster | 10 | | 10\nmax_wal_size | configuration file | sighup | 8192 | MB | 1024\nmax_worker_processes | command line | postmaster | 8 | | 8\nmin_wal_size | configuration file | sighup | 2048 | MB | 80\npg_stat_statements.track | configuration file | superuser | all | | top\nport | command line | postmaster | 5432 | | 5432\nprimary_conninfo | configuration file | postmaster | user=replicator passfile=/tmp/pgpass host=10.234.1 | |\n primary_slot_name | configuration file | postmaster | illin7504 | |\n random_page_cost | configuration file | user | 1.1 | | 4\nrecovery_target_lsn | configuration file | postmaster | | |\n recovery_target_name | configuration file | postmaster | | |\n recovery_target_time | configuration file | postmaster | | |\n recovery_target_timeline | configuration file | postmaster | latest | | latest\nrecovery_target_xid | configuration file | postmaster | | |\n server_encoding | override | internal | UTF8 | | SQL_ASCII\nshared_buffers | configuration file | postmaster | 2097152 | 8kB | 1024\nshared_preload_libraries | configuration file | postmaster | pg_stat_statements,auto_explain | |\n synchronous_standby_names | configuration file | sighup | illin7505 | |\n temp_buffers | configuration file | user | 8192 | 8kB | 1024\nTimeZone | configuration file | user | Asia/Jerusalem | | GMT\ntrack_commit_timestamp | command line | postmaster | on | | off\ntrack_io_timing | configuration file | superuser | on | | off\ntransaction_deferrable | override | user | off | | off\ntransaction_isolation | override | user | read committed | | read committed\ntransaction_read_only | override | user | off | | off\nunix_socket_directories | configuration file | postmaster | /var/run/postgresql 
| | /var/run/postgresql,\nwal_buffers | override | postmaster | 2048 | 8kB | -1\nwal_keep_segments | configuration file | sighup | 8 | | 0\nwal_level | command line | postmaster | logical | | replica\nwal_log_hints | command line | postmaster | on | | off\nwal_segment_size | override | internal | 16777216 | B | 16777216\nwal_sync_method | configuration file | sighup | fdatasync | | fdatasync\nwork_mem | configuration file | user | 20480 | kB | 4096\n(90 rows)\n\nShai Shapira\n* [email protected]<mailto:[email protected]>\n* +972 9 776 4171\n\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service <https://www.amdocs.com/about/email-terms-of-service>\n\n\n\n\n\n\n\n\n\nHi,\n \nWhen running our application, we noticed that some processes are taking a lot of memory (\n10, 15, 20GB or so, of RSS ).\nIt is also reproduced when running in psql.\n \nPG version is 12.6\n \n2 examples:\n\nCommon table, PG_BUFFERCACHE, when doing group by, session takes 140MB, which is not a lot, but still more than the 20MB that set for work_mem.Application table, ~15M rows, grouping by a smallint columns – takes ~1000 MB.\n \nAs I wrote before, application processes reached tens of GB.\n \nIn the first case, PG also used temp files, at the second case, when more memory was used and also in the application case, temp files were not created.\n \nI will try to add as much details as possible, please let me know if there is something additional that is required.\n \nThanks,\nShai\n \n \nMore details:\n \nFirst what I see, and then versions, parameters, etc.\nNote: this DB is set with Patroni, replication, etc. but the scenario was reproduce ( up to few hundreds MB, not tens of GB ) on other environment, without it.\n \nQueries:\n\nPG_BUFFERCACHE  :\n\nINSERT INTO PGAWR_BUFFERCACHE_SUMMARY( SELECT 1, NOW(), RELFILENODE, RELDATABASE, COUNT(*) AS BUFFERS_COUNT FROM PG_BUFFERCACHE GROUP BY RELFILENODE, RELDATABASE) ;Insert 12309 rows.Table has 2097152 rows.\n \n\nApplication table:\n \n\nQuery:\nselect cycle_code, count(*) from ape1_subscr_offers group by cycle_code ;  \n\n \n\ntable has 15318725 rows.\n \n\nThe cycle_code column is the first column of an index.\n \n\nTable is partitioned, 176 partitions.\n \n\nThe result\ncycle_code |  count  \n------------+---------\n          1 | 3824276\n          2 | 3824745\n          3 | 3834609\n          9 | 3835095\n(4 rows)\n \npaaspg=> show work_mem;\nwork_mem \n----------\n20MB\n(1 row)\n \n \nTable structure:\n \npaaspg=> \\d ape1_subscr_offers\n                    Partitioned table \"vm1app.ape1_subscr_offers\"\n        Column         |            Type             | Collation | Nullable | Default\n\n-----------------------+-----------------------------+-----------+----------+---------\ncycle_code            | smallint                    |           | not null |\n\n customer_segment      | smallint                    |           | not null |\n\n subscriber_id         | bigint                      |           | not null |\n\n offer_id              | integer                     |           | not null |\n\n offer_instance        | bigint                      |           | not null |\n\n offer_eff_date        | timestamp without time zone |           | not null |\n\n sys_creation_date     | timestamp without time zone |           | not null |\n\n sys_update_date       | timestamp without time zone |           |          |\n\n 
operator_id           | integer                     |           |          |\n\n application_id        | character(6)                |           |          |\n\n dl_service_code       | character(5)                |           |          |\n\n dl_update_stamp       | smallint                    |           |          | 0\nupdate_id             | bigint                      |           |          |\n\n offer_exp_date        | timestamp without time zone |           |          |\n\n source_offer_agr_id   | bigint                      |           |          |\n\n source_offer_instance | bigint                      |           |          |\n\n eff_act_code_pror     | character varying(25)       |           |          |\n\n exp_act_code_pror     | character varying(25)       |           |          |\n\n load_ind              | character(1)                |           |          |\n\nPartition key: RANGE (cycle_code, customer_segment)\nIndexes:\n    \"ape1_subscr_offers_pkey\" PRIMARY KEY, btree (cycle_code, customer_segment, subscriber_id, offer_id, offer_instance, offer_eff_date)\n    \"ape1_subscr_offers_1ix\" btree (update_id)\nNumber of partitions: 176 (Use \\d+ to list them.)\n \nExplain:\npaaspg=> explain select cycle_code, count(*) from ape1_subscr_offers group by cycle_code ;\n                                                      QUERY PLAN                                                     \n\n----------------------------------------------------------------------------------------------------------------------\nFinalize GroupAggregate  (cost=385331.98..385382.65 rows=200 width=10)\n   Group Key: ape1_subscr_offers_p40.cycle_code\n   ->  Gather Merge  (cost=385331.98..385378.65 rows=400 width=10)\n         Workers Planned: 2\n         ->  Sort  (cost=384331.96..384332.46 rows=200 width=10)\n               Sort Key: ape1_subscr_offers_p40.cycle_code\n               ->  Partial HashAggregate  (cost=384322.31..384324.31 rows=200 width=10)\n                     Group Key: ape1_subscr_offers_p40.cycle_code\n                     ->  Parallel Append  (cost=0.00..352347.81 rows=6394900 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p40  (cost=0.00..5052.94 rows=101094 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p46  (cost=0.00..5042.73 rows=100972 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p37  (cost=0.00..5040.12 rows=100912 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p149  (cost=0.00..5037.25 rows=100825 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p145  (cost=0.00..5029.36 rows=100536 width=2)\n \n..\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p183  (cost=0.00..11.53 rows=153 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p184  (cost=0.00..11.53 rows=153 width=2)\n                           ->  Parallel Seq Scan on ape1_subscr_offers_p185  (cost=0.00..11.53 rows=153 width=2)\n(185 rows)\n \nMemory consumption: ( of case 2, application table, using system_stats )\n \nselect act.pid, application_name,\nbackend_type,\npretty_timestamp(xact_start) as xact_start, pretty_timestamp(query_start) as query_start,\n\npretty_timestamp(backend_start) as backend_start,\ncpu_usage,\npg_size_pretty(memory_bytes) as memory_bytes,\npretty_query(query,50 ) as query\nfrom pg_sys_cpu_memory_by_process() stat, pg_stat_activity act\nwhere stat.pid = act.pid\nand 
act.application_name like 'psql%'\norder by 1\n;\n  pid  | application_name |  backend_type  |     xact_start      |     query_start     |    backend_start    | cpu_usage | memory_bytes |                       query                       \n\n-------+------------------+----------------+---------------------+---------------------+---------------------+-----------+--------------+----------------------------------------------------\n10142 | psql             | client backend | 2022-03-23 16:32:20 | 2022-03-23 16:32:20 | 2022-03-23 16:32:20 |      8.79 | 8568 kB      | select act.pid, application_name, backend_type,\n pr\n15298 | psql             | client backend |                     | 2022-03-23 16:32:11 | 2022-03-23 16:05:44 |         0 |\n1134 MB      | select cycle_code, count(*) from ape1_subscr_offer\n \n \nUsing top:\n \ntop - 16:30:46 up 17 days,  3:10,  3 users,  load average: 0.41, 0.35, 0.37\nTasks: 507 total,   1 running, 506 sleeping,   0 stopped,   0 zombie\n%Cpu(s):  5.4 us,  0.6 sy,  0.0 ni, 94.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st\nKiB Mem : 65804144 total,  5241032 free,  1811912 used, 58751200 buff/cache\nKiB Swap: 15728636 total, 13837292 free,  1891344 used. 46488956 avail Mem\n\n \n  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                               \n\n15298 postgres  20   0   16.8g  \n1.1g  \n1.1g S   0.0  1.7   0:02.63 postgres                                                                                                                               \n13524 postgres  20   0   17.1g 777016 510644 S   0.0  1.2   7:35.34 postgres                                                                                                                               \n19971 postgres  20   0   17.1g 776540 517872 S   0.0  1.2   7:22.66 postgres                                                                                                              \n                 \n 8514 postgres  20   0   16.8g 639680 638964 S   0.0  1.0   0:53.79 postgres                                                                                                                              \n\n26120 postgres  20   0   16.8g 574916 557856 S   0.0  0.9   0:20.33 postgres                                                                                                                              \n\n22529 postgres  20   0   16.9g 572728 556956 S   0.0  0.9   0:04.80 postgres     \n\n \nPG version:\n \npaaspg=> SELECT version()\npaaspg-> ;\n                                                 version                                                \n\n---------------------------------------------------------------------------------------------------------\nPostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n \n \nOS version:\n \npostgres@illin7504:pgsql/Users/Shai> uname -a\nLinux illin7504 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 11 19:12:04 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux\n \n \nParameters: ( which are not default ) \n \nwith params as\n(\nSELECT name, source, context, substring(setting,1,50) val, unit,\n         substring(boot_val,1,20) default_val\n    FROM pg_settings\n)\nselect * from params\n   WHERE source != 'default'\nORDER BY 1;\n                name                 |        source        |      context      |                        val                         | unit |     default_val     
\n\n-------------------------------------+----------------------+-------------------+----------------------------------------------------+------+----------------------\napplication_name                    | client               | user              | psql                                               |      |\n\n archive_command                     | configuration file   | sighup            | (disabled)                                         |      |\n\n archive_mode                        | configuration file   | postmaster        | off                                                |      | off\nautovacuum_analyze_scale_factor     | configuration file   | sighup            | 0.15                                               |      | 0.1\nautovacuum_max_workers              | configuration file   | postmaster        | 4                                                  |      | 3\nautovacuum_naptime                  | configuration file   | sighup            | 15                                                 | s    | 60\nautovacuum_vacuum_cost_limit        | configuration file   | sighup            | 1200                                               |      | -1\nautovacuum_vacuum_scale_factor      | configuration file   | sighup            | 0.05                                               |      | 0.2\ncheckpoint_completion_target        | configuration file   | sighup            | 0.9                                                |      | 0.5\ncluster_name                        | command line         | postmaster        | postgres-cluster                                   |      |\n\n config_file                         | override             | postmaster        | /pgcluster/pgdata/12.6/data/postgresql.conf        |      |\n\n data_checksums                      | override             | internal          | off                                                |      | off\ndata_directory                      | override             | postmaster        | /pgcluster/pgdata/12.6/data                        |      |\n\n DateStyle                           | configuration file   | user              | ISO, MDY                                           |      | ISO, MDY\ndefault_text_search_config          | configuration file   | user              | pg_catalog.english                                 |      | pg_catalog.simple\ndynamic_shared_memory_type          | configuration file   | postmaster        | posix                                              |      | posix\neffective_cache_size                | configuration file   | user              | 6291456                                            | 8kB  | 524288\neffective_io_concurrency            | configuration file   | user              | 200                                                |      | 1\nhba_file                            | override             | postmaster        | /pgcluster/pgdata/12.6/data/pg_hba.conf            |      |\n\n hot_standby                         | command line         | postmaster        | on                                                 |      | on\nident_file                          | override             | postmaster        | /pgcluster/pgdata/12.6/data/pg_ident.conf          |      |\n\n idle_in_transaction_session_timeout | configuration file   | user              | 3600000                                            | ms   | 0\nlc_collate                          | override             | internal          | en_US.UTF-8                                        |      | C\nlc_ctype              
              | override             | internal          | en_US.UTF-8                                        |      | C\nlc_messages                         | configuration file   | superuser         | en_US.UTF-8                                        |      |\n\n lc_monetary                         | configuration file   | user              | en_US.UTF-8                                        |      | C\nlc_numeric                          | configuration file   | user              | en_US.UTF-8                                        |      | C\nlc_time                             | configuration file   | user              | en_US.UTF-8                                        |      | C\nlisten_addresses                    | command line         | postmaster        | 10.234.167.191,10.234.166.148,127.0.0.1            |      | localhost\nlog_autovacuum_min_duration         | configuration file   | sighup            | 0                                                  | ms   | -1\nlog_checkpoints                     | configuration file   | sighup            | on                                                 |      | off\nlog_connections                     | configuration file   | superuser-backend | on                                                 |      | off\nlog_destination                     | configuration file   | sighup            | stderr                                             |      | stderr\nlog_directory                       | configuration file   | sighup            | pg_log                                             |      | log\nlog_disconnections                  | configuration file   | superuser-backend | on                                                 |      | off\nlog_filename                        | configuration file   | sighup            | postgresql-%a-%H.log                               |      | postgresql-%Y-%m-%d_\nlogging_collector                   | configuration file   | postmaster        | on                                                 |      | off\nlog_hostname                        | configuration file   | sighup            | on                                                 |      | off\nlog_line_prefix                     | configuration file   | sighup            | %t:%r:%u@%d:[%p]:                                  |      | %m [%p]\n\n log_lock_waits                      | configuration file   | superuser         | on                                                 |      | off\nlog_min_duration_statement          | configuration file   | superuser         | 100                                                | ms   | -1\nlog_rotation_age                    | configuration file   | sighup            | 60                                                 | min  | 1440\nlog_rotation_size                   | configuration file   | sighup            | 0                                                  | kB   | 10240\nlog_statement                       | configuration file   | superuser         | all                                                |      | none\nlog_temp_files                      | configuration file   | superuser         | 4096                                               | kB   | -1\nlog_timezone                        | configuration file   | sighup            | Asia/Jerusalem                                     |      | GMT\nlog_transaction_sample_rate         | configuration file   | superuser         | 0                                                  |      | 0\nlog_truncate_on_rotation            | 
configuration file   | sighup            | on                                                 |      | off\nmaintenance_work_mem                | configuration file   | user              | 2097152                                            | kB   | 65536\nmax_connections                     | command line         | postmaster        | 3000                                               |      | 100\nmax_locks_per_transaction           | command line         | postmaster        | 100                                                |      | 64\nmax_parallel_maintenance_workers    | configuration file   | user              | 2                                                  |      | 2\nmax_parallel_workers                | configuration file   | user              | 8                                                  |      | 8\nmax_parallel_workers_per_gather     | configuration file   | user              | 2                                                  |      | 2\nmax_prepared_transactions           | command line         | postmaster        | 0                                                  |      | 0\nmax_replication_slots               | command line         | postmaster        | 18                                                 |      | 10\nmax_stack_depth                     | environment variable | superuser         | 2048                                               | kB   | 100\nmax_wal_senders                     | command line         | postmaster        | 10                                                 |      | 10\nmax_wal_size                        | configuration file   | sighup            | 8192                                               | MB   | 1024\nmax_worker_processes                | command line         | postmaster        | 8                                                  |      | 8\nmin_wal_size                        | configuration file   | sighup            | 2048                                               | MB   | 80\npg_stat_statements.track            | configuration file   | superuser         | all                                                |      | top\nport                                | command line         | postmaster        | 5432                                               |      | 5432\nprimary_conninfo                    | configuration file   | postmaster        | user=replicator passfile=/tmp/pgpass host=10.234.1 |      |\n\n primary_slot_name                   | configuration file   | postmaster        | illin7504                                          |      |\n\n random_page_cost                    | configuration file   | user              | 1.1                                                |      | 4\nrecovery_target_lsn                 | configuration file   | postmaster        |                                                    |      |\n\n recovery_target_name                | configuration file   | postmaster        |                                                    |      |\n\n recovery_target_time                | configuration file   | postmaster        |                                                    |      |\n\n recovery_target_timeline            | configuration file   | postmaster        | latest                                             |      | latest\nrecovery_target_xid                 | configuration file   | postmaster        |                                                    |      |\n\n server_encoding                     | override             | internal          | UTF8                   
                            |      | SQL_ASCII\nshared_buffers                      | configuration file   | postmaster        | 2097152                                            | 8kB  | 1024\nshared_preload_libraries            | configuration file   | postmaster        | pg_stat_statements,auto_explain                    |      |\n\n synchronous_standby_names           | configuration file   | sighup            | illin7505                                          |      |\n\n temp_buffers                        | configuration file   | user              | 8192                                               | 8kB  | 1024\nTimeZone                            | configuration file   | user              | Asia/Jerusalem                                     |      | GMT\ntrack_commit_timestamp              | command line         | postmaster        | on                                                 |      | off\ntrack_io_timing                     | configuration file   | superuser         | on                                                 |      | off\ntransaction_deferrable              | override             | user              | off                                                |      | off\ntransaction_isolation               | override             | user              | read committed                                     |      | read committed\ntransaction_read_only               | override             | user              | off                                                |      | off\nunix_socket_directories             | configuration file   | postmaster        | /var/run/postgresql                                |      | /var/run/postgresql,\nwal_buffers                         | override             | postmaster        | 2048                                               | 8kB  | -1\nwal_keep_segments                   | configuration file   | sighup            | 8                                                  |      | 0\nwal_level                           | command line         | postmaster        | logical                                            |      | replica\nwal_log_hints                       | command line         | postmaster        | on                                                 |      | off\nwal_segment_size                    | override             | internal          | 16777216                                           | B    | 16777216\nwal_sync_method                     | configuration file   | sighup            | fdatasync                                          |      | fdatasync\nwork_mem                            | configuration file   | user              | 20480                                              | kB   | 4096\n(90 rows)\n \nShai Shapira\n+ \[email protected]\n(\n+972 9\n776 4171\n \n\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service", "msg_date": "Wed, 23 Mar 2022 14:42:06 +0000", "msg_from": "Shai Shapira <[email protected]>", "msg_from_op": true, "msg_subject": "High process memory consumption when running sort" }, { "msg_contents": "On Wed, Mar 23, 2022 at 02:42:06PM +0000, Shai Shapira wrote:\n> Hi,\n> \n> When running our application, we noticed that some processes are taking a lot of memory ( 10, 15, 20GB or so, of RSS ).\n> It is also reproduced when running in psql.\n\nNote that RSS can include shared_buffers read by that backend.\nThat's a linux behavior, not specific 
to postgres. It's what Andres was\ndescribing here:\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nYou have effective_cache_size = 48GB, so this seems to be working as intended.\n(ecc is expected to include data cached not only by postgres but by the OS page\ncache, too).\n\n> Memory consumption: ( of case 2, application table, using system_stats )\n\nI'm not sure, but I guess this is just a postgres view of whatever the OS\nshows.\n\n> Using top:\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 15298 postgres 20 0 16.8g 1.1g 1.1g S 0.0 1.7 0:02.63 postgres\n\n> PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n> Linux illin7504 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 11 19:12:04 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux\n\n> shared_buffers | configuration file | postmaster | 2097152 | 8kB | 1024\n> effective_cache_size | configuration file | user | 6291456 | 8kB | 524288\n> work_mem | configuration file | user | 20480 | kB | 4096\n\n\n", "msg_date": "Wed, 23 Mar 2022 10:20:22 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High process memory consumption when running sort" }, { "msg_contents": "Thanks a lot Justin,\n\n\n\nI used the link that you shared and I noticed that in most cases, when I was simulating the issue with simple SQL's, most of the RSS was actually shared.\n\n\n\ne.g.\n\n\n\nfrom /proc/pid/status\n\nRssAnon: 8672 kB\n\nRssFile: 4576 kB\n\nRssShmem: 4596656 kB\n\n\n\nAnd when I looked at /proc/pid/smaps\n\nIs so it as \"Referenced\" ( in /dev/zero (deleted) section )\n\nReferenced: 4596624 kB\n\nAnd the change in server's free/available memory was not significant.\n\n\n\nBut when running our application, the picture was different, most of it was Anon, and server's available memory was decreasing.\n\nfrom /proc/pid/status\n\nRssAnon: 14115188 kB\n\nRssFile: 4648 kB\n\nRssShmem: 282816 kB\n\n\n\nEventually, we found out that the reason for this phenome is combination of tables with many partitions ( 2112 ) and specific SQL.\n\nWe reduced the number of partitions from 2112 to 132 and the issue was resolved.\n\nIt seems the PG is still struggling with tables with so many partitions.\n\nThe application was written originally for Oracle, and these huge number of partition there was also abuse, but Oracle can handle it.\n\n\n\n\n\nThanks,\n\nShai\n\n\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, March 23, 2022 5:20 PM\nTo: Shai Shapira <[email protected]>\nCc: [email protected]\nSubject: Re: High process memory consumption when running sort\n\n\n\nCAUTION: This message was sent from outside of Amdocs. Please do not click links or open attachments unless you recognize the source of this email and know the content is safe.\n\n\n\nOn Wed, Mar 23, 2022 at 02:42:06PM +0000, Shai Shapira wrote:\n\n> Hi,\n\n>\n\n> When running our application, we noticed that some processes are taking a lot of memory ( 10, 15, 20GB or so, of RSS ).\n\n> It is also reproduced when running in psql.\n\n\n\nNote that RSS can include shared_buffers read by that backend.\n\nThat's a linux behavior, not specific to postgres. 
It's what Andres was describing here:\n\nhttps://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.postgresql.org%2Fmessage-id%2Fflat%2F20201003230149.mtd7fjsjwgii3jv7%40alap3.anarazel.de&amp;data=04%7C01%7CShai.Shapira%40Amdocs.com%7C50cc35ffdc134d58920708da0ce0ba20%7Cc8eca3ca127646d59d9da0f2a028920f%7C0%7C0%7C637836456579176065%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=dEcakp0g6rUjw0aLhJJNnut4RhC7EQ0edRK%2FbzBzF%2F8%3D&amp;reserved=0\n\n\n\nYou have effective_cache_size = 48GB, so this seems to be working as intended.\n\n(ecc is expected to include data cached not only by postgres but by the OS page cache, too).\n\n\n\n> Memory consumption: ( of case 2, application table, using system_stats\n\n> )\n\n\n\nI'm not sure, but I guess this is just a postgres view of whatever the OS shows.\n\n\n\n> Using top:\n\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n> 15298 postgres 20 0 16.8g 1.1g 1.1g S 0.0 1.7 0:02.63 postgres\n\n\n\n> PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n\n> 20150623 (Red Hat 4.8.5-44), 64-bit Linux illin7504\n\n> 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 11 19:12:04 EDT 2020 x86_64\n\n> x86_64 x86_64 GNU/Linux\n\n\n\n> shared_buffers | configuration file | postmaster | 2097152 | 8kB | 1024\n\n> effective_cache_size | configuration file | user | 6291456 | 8kB | 524288\n\n> work_mem | configuration file | user | 20480 | kB | 4096\n\n\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service <https://www.amdocs.com/about/email-terms-of-service>\n\n\n\n\n\n\n\n\n\nThanks a lot Justin,\n \nI used the link that you shared and I noticed that in most cases, when I was simulating the issue with simple SQL's, most of the RSS was actually shared.\n \ne.g.\n \nfrom /proc/pid/status\n\nRssAnon:            8672 kB\nRssFile:            4576 kB\nRssShmem:        4596656 kB\n \nAnd when I looked at  /proc/pid/smaps\nIs so it as “Referenced”  ( in /dev/zero (deleted) section )\n\nReferenced:      4596624 kB\nAnd the change in server’s free/available memory was not significant.\n \nBut when running our application, the picture was different, most of it was Anon, and server’s available memory was decreasing.\nfrom /proc/pid/status\n\nRssAnon:        14115188 kB\nRssFile:            4648 kB\nRssShmem:         282816 kB\n \nEventually, we found out that the reason for this phenome is combination of tables with many partitions ( 2112 ) and specific SQL.\nWe reduced the number of partitions from 2112 to 132 and the issue was resolved.\nIt seems the PG is still struggling with tables with so many partitions.\n\nThe application was written originally for Oracle, and these huge number of partition there was also abuse, but Oracle can handle it.\n \n \nThanks,\nShai\n \n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: Wednesday, March 23, 2022 5:20 PM\nTo: Shai Shapira <[email protected]>\nCc: [email protected]\nSubject: Re: High process memory consumption when running sort\n \nCAUTION: This message was sent from outside of Amdocs. 
Please do not click links or open attachments unless you recognize the source of this email and know the content is safe.\n \nOn Wed, Mar 23, 2022 at 02:42:06PM +0000, Shai Shapira wrote:\n> Hi,\n> \n> When running our application, we noticed that some processes are taking a lot of memory ( 10, 15, 20GB or so, of RSS ).\n> It is also reproduced when running in psql.\n \nNote that RSS can include shared_buffers read by that backend.\nThat's a linux behavior, not specific to postgres.  It's what Andres was describing here:\nhttps://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.postgresql.org%2Fmessage-id%2Fflat%2F20201003230149.mtd7fjsjwgii3jv7%40alap3.anarazel.de&amp;data=04%7C01%7CShai.Shapira%40Amdocs.com%7C50cc35ffdc134d58920708da0ce0ba20%7Cc8eca3ca127646d59d9da0f2a028920f%7C0%7C0%7C637836456579176065%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=dEcakp0g6rUjw0aLhJJNnut4RhC7EQ0edRK%2FbzBzF%2F8%3D&amp;reserved=0\n \nYou have effective_cache_size = 48GB, so this seems to be working as intended.\n(ecc is expected to include data cached not only by postgres but by the OS page cache, too).\n \n> Memory consumption: ( of case 2, application table, using system_stats\n\n> )\n \nI'm not sure, but I guess this is just a postgres view of whatever the OS shows.\n \n> Using top:\n>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND\n> 15298 postgres  20   0   16.8g   1.1g   1.1g S   0.0  1.7   0:02.63 postgres\n \n> PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n\n> 20150623 (Red Hat 4.8.5-44), 64-bit Linux illin7504 \n\n> 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 11 19:12:04 EDT 2020 x86_64\n\n> x86_64 x86_64 GNU/Linux\n \n> shared_buffers                      | configuration file   | postmaster        | 2097152                                            | 8kB  | 1024\n> effective_cache_size                | configuration file   | user              | 6291456                                            | 8kB  | 524288\n> work_mem                            | configuration file   | user              | 20480                                              | kB   | 4096\n \n\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service", "msg_date": "Wed, 30 Mar 2022 10:42:16 +0000", "msg_from": "Shai Shapira <[email protected]>", "msg_from_op": true, "msg_subject": "RE: High process memory consumption when running sort" } ]
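A catalog query can make the partition-count problem described above easy to spot before it bites. The sketch below is not from the original thread; it only uses the standard pg_inherits and pg_class catalogs, and it lists declaratively partitioned parents by how many partitions they have, so parents with counts in the thousands (like the 2112-partition case above) stand out.

-- List partitioned tables by number of partitions (assumption: declarative
-- partitioning, so parent tables have relkind = 'p').
SELECT parent.relname   AS parent_table,
       count(child.oid) AS partition_count
FROM pg_inherits i
JOIN pg_class parent ON parent.oid = i.inhparent
JOIN pg_class child  ON child.oid  = i.inhrelid
WHERE parent.relkind = 'p'
GROUP BY parent.relname
ORDER BY partition_count DESC
LIMIT 20;

Each backend holds relation cache entries (and, while executing, planner/executor state) for every partition it touches, so a very high count here would be consistent with the RssAnon growth reported above; the thread's fix of cutting 2112 partitions down to 132 follows the same logic.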
[ { "msg_contents": "Hi Team and All ,\n\nGreeting for the day.\n\nWe have recently migrated from Oracle to PostgreSQL on version 11.4 on azure postgres PaaS instance.\n\nThere is 1 query which is taking approx. 10 secs in Oracle and when we ran the same query it is taking approx. 1 min\n\nCan anyone suggest to improve the query as from application end 1 min time is not accepted by client.\n\nPlease find the query and explain analyze report from below link\n\nhttps://explain.depesz.com/s/RLJn#stats\n\n\nThanks and Regards,\nMukesh Kumar\n\n\n\n\n\n\n\n\n\n\nHi Team and All ,\n \nGreeting for the day.\n \nWe have recently migrated from Oracle to PostgreSQL on version 11.4 on azure postgres PaaS instance.\n \nThere is 1 query which is taking approx. 10 secs in Oracle and when we ran the same query it is taking approx. 1 min\n \nCan anyone suggest to improve the query as from application end 1 min time is not accepted by client.\n \nPlease find the query and explain analyze report from below link\n \nhttps://explain.depesz.com/s/RLJn#stats\n \n \nThanks and Regards, \nMukesh Kumar", "msg_date": "Thu, 24 Mar 2022 15:59:54 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "View taking time to show records" }, { "msg_contents": "Hi,\n1. Have you tried creating indexes on columns for which it is showing\nsequential scans?\n2. In my experience if the view is referring some other view inside it, it\nis advisable to directly query on tables instead on child view.\n3. This table 'so_vendor_address_base' definitely needs indexing to remove\nsequentials scans.\n\nRegards,\nAD.\n\n\nOn Fri, Mar 25, 2022 at 3:35 PM Kumar, Mukesh <[email protected]>\nwrote:\n\n> Hi Team and All ,\n>\n>\n>\n> Greeting for the day.\n>\n>\n>\n> We have recently migrated from Oracle to PostgreSQL on version 11.4 on\n> azure postgres PaaS instance.\n>\n>\n>\n> There is 1 query which is taking approx. 10 secs in Oracle and when we ran\n> the same query it is taking approx. 1 min\n>\n>\n>\n> Can anyone suggest to improve the query as from application end 1 min time\n> is not accepted by client.\n>\n>\n>\n> Please find the query and explain analyze report from below link\n>\n>\n>\n> https://explain.depesz.com/s/RLJn#stats\n>\n>\n>\n>\n>\n> Thanks and Regards,\n>\n> Mukesh Kumar\n>\n>\n>\n\nHi,1. Have you tried creating indexes on columns for which it is showing sequential scans?2. In my experience if the view is referring some other view inside it, it is advisable to directly query on tables instead on child view.3. This table 'so_vendor_address_base' definitely needs indexing to remove sequentials scans.Regards,AD.On Fri, Mar 25, 2022 at 3:35 PM Kumar, Mukesh <[email protected]> wrote:\n\n\nHi Team and All ,\n \nGreeting for the day.\n \nWe have recently migrated from Oracle to PostgreSQL on version 11.4 on azure postgres PaaS instance.\n \nThere is 1 query which is taking approx. 10 secs in Oracle and when we ran the same query it is taking approx. 
1 min\n \nCan anyone suggest to improve the query as from application end 1 min time is not accepted by client.\n \nPlease find the query and explain analyze report from below link\n \nhttps://explain.depesz.com/s/RLJn#stats\n \n \nThanks and Regards, \nMukesh Kumar", "msg_date": "Fri, 25 Mar 2022 15:50:26 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View taking time to show records" }, { "msg_contents": "On Thu, Mar 24, 2022 at 03:59:54PM +0000, Kumar, Mukesh wrote:\n> Can anyone suggest to improve the query as from application end 1 min time is not accepted by client.\n> Please find the query and explain analyze report from below link\n\nIt's hard to say for sure without seeing real query (query on view is\nnice, but we can't tell what is going on there, really) - we'd need to\nknow definitions of all views and tables that are involved there.\n\nfor starters, I'd suggest adding indexes:\n1. on so_vendor_address_base (ap_vendor_id_lf || ap_vendor_suffix_lf)\n2. on so_vendor_address_base (vendor_type_f)\n\nwhether this will fix the problem, can't really tell.\n\nAlso - you might want to join slack/irc to have a conversation about it\n- there are people in there who can help, and I think that\nconversation-style help will be better suited for this particular\nproblem.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 11:39:15 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View taking time to show records" }, { "msg_contents": "On Thu, 2022-03-24 at 15:59 +0000, Kumar, Mukesh wrote:\n> We have recently migrated from Oracle to PostgreSQL on version 11.4 on azure postgres PaaS instance.\n>  \n> There is 1 query which is taking approx. 10 secs in Oracle and when we ran the same query it is taking approx. 1 min\n>  \n> Can anyone suggest to improve the query as from application end 1 min time is not accepted by client.\n>  \n> Please find the query and explain analyze report from below link\n>  \n> https://explain.depesz.com/s/RLJn#stats\n\nI would split the query in two parts: the one from line 3 to line 49 of your execution plan,\nand the rest. The problem is the bad estimate of that first part, so execute only that, write\nthe result to a temporary table and ANALYZE that. Then execute the rest of the query using that\ntemporary table.\n\nPerhaps it is also enough to blindly disable nested loop joins for the whole query, rather than\ndoing the right thing and fixing the estimates:\n\nBEGIN;\nSET LOCAL enable_nestloop = off;\nSELECT ...;\nCOMMIT;\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 11:43:20 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View taking time to show records" }, { "msg_contents": "Hi Albe , \r\n\r\nThanks for the below suggestion , When I ran the query with the parameter , it is taking only 1 sec.\r\n\r\nSo could you please let me know if I can put this parameter to OFF . 
at database and it will not create any issues to queries running in database.\r\n \r\nCould you please share some light on it.\r\n\r\nThanks and Regards, \r\nMukesh Kumar\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Friday, March 25, 2022 4:13 PM\r\nTo: Kumar, Mukesh <[email protected]>; [email protected]\r\nSubject: Re: View taking time to show records\r\n\r\nOn Thu, 2022-03-24 at 15:59 +0000, Kumar, Mukesh wrote:\r\n> We have recently migrated from Oracle to PostgreSQL on version 11.4 on azure postgres PaaS instance.\r\n>  \r\n> There is 1 query which is taking approx. 10 secs in Oracle and when we \r\n> ran the same query it is taking approx. 1 min\r\n>  \r\n> Can anyone suggest to improve the query as from application end 1 min time is not accepted by client.\r\n>  \r\n> Please find the query and explain analyze report from below link\r\n>  \r\n> https://urldefense.com/v3/__https://explain.depesz.com/s/RLJn*stats__;\r\n> Iw!!KupS4sW4BlfImQPd!Ln-8-n9OcKKifiwKjYcs_JOUo80VTTp5hA9V_-gYjOfDr3DDm\r\n> psmbIY_MQxw5RwQ2ZQtMlobbmvex2CIaJtISv0ZkaSn5w$\r\n\r\nI would split the query in two parts: the one from line 3 to line 49 of your execution plan, and the rest. The problem is the bad estimate of that first part, so execute only that, write the result to a temporary table and ANALYZE that. Then execute the rest of the query using that temporary table.\r\n\r\nPerhaps it is also enough to blindly disable nested loop joins for the whole query, rather than doing the right thing and fixing the estimates:\r\n\r\nBEGIN;\r\nSET LOCAL enable_nestloop = off;\r\nSELECT ...;\r\nCOMMIT;\r\n\r\nYours,\r\nLaurenz Albe\r\n--\r\nCybertec | https://urldefense.com/v3/__https://www.cybertec-postgresql.com__;!!KupS4sW4BlfImQPd!Ln-8-n9OcKKifiwKjYcs_JOUo80VTTp5hA9V_-gYjOfDr3DDmpsmbIY_MQxw5RwQ2ZQtMlobbmvex2CIaJtISv1qNNoktA$ \r\n\r\n", "msg_date": "Fri, 25 Mar 2022 14:07:28 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: View taking time to show records" }, { "msg_contents": "On Fri, 2022-03-25 at 14:07 +0000, Kumar, Mukesh wrote:\n\n> > [recommendation to fix the estimate]\n> >\n> > Perhaps it is also enough to blindly disable nested loop joins for the whole query,\n> > rather than doing the right thing and fixing the estimates:\n> >\n> > BEGIN;\n> > SET LOCAL enable_nestloop = off;\n> > SELECT ...;\n> > COMMIT;\n> \n> Thanks for the below suggestion , When I ran the query with the parameter , it is taking only 1 sec.\n> \n> So could you please let me know if I can put this parameter to OFF . at database and it will not\n> create any issues to queries running in database.\n\nThat will very likely cause problems in your database, because sometimes a nested loop join\nis by far the most efficient way to run a query.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Sat, 26 Mar 2022 05:37:38 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View taking time to show records" } ]
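Following up on the last exchange above: if the planner override does help, it can be scoped far more narrowly than a database-wide change, which is what Laurenz warns against. The sketch below is only an illustration; reporting_user is a placeholder role name, not an object from this thread, and the transaction-scoped SET LOCAL form is already shown verbatim in Laurenz's earlier reply.

-- Attach the override to a role used only for the slow report, so every
-- other session keeps nested-loop joins available:
ALTER ROLE reporting_user SET enable_nestloop = off;

-- Verify where the setting now comes from for that role:
SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'reporting_user';

-- Remove it again once the row-count estimates are fixed:
ALTER ROLE reporting_user RESET enable_nestloop;

The more durable fix is still the one described first in the thread: materialize the badly estimated part of the query into a temporary table, ANALYZE it, and run the rest of the query against that table.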
[ { "msg_contents": "Hi All,\n\nWe have an issue with high load and IO Wait's but less cpu on postgres\nDatabase, The emp Table size is around 500GB, and the connections are very\nless.\n\nPlease suggest to us do we need to change and config parameters at system\nlevel or Postgres configuration.\n\npostgres=# select version();\n\n version\n\n\n----------------------------------------------------------------------------------------------------------\n\n PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-44), 64-bit\n\n(1 row)\n\n\npostgres=# \\q\n\n\n*Postgres Parameters Setting :*\n\n\nshared_buffers=12GB\nwork_mem=128MB\neffective_cache_size=48GB\nmaintenance_work_mem=2GB\nmax_connections=500\n\n\n14428 | 04:45:59.712892 | active | INSERT INTO target (empno, name)\n SELECT\nempno, '' AS name FROM (select distinct empno from emp where sname='test'\nand tp='EMP NAME 1' LIMIT 10) AS query ;\n\n\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n\n\n14428 postgres 20 0 12.6g 12.2g 12.2g D 5.3 13.3 4:43.57\npostgres: postgres postgres (59436) INSERT\n\n\n29136 postgres 20 0 12.6g 401812 398652 D 4.7 0.4 0:01.20\npostgres: postgres postgres (48220) SELECT\n\n\n29119 postgres 20 0 12.6g 677704 674064 S 3.3 0.7 0:02.05\npostgres: postgres postgres (37684) idle\n\n\n29121 postgres 20 0 12.6g 758428 755252 S 3.0 0.8 0:02.33\npostgres: postgres postgres (57392) idle\n\n\n29166 postgres 20 0 12.6g 260436 257408 S 3.0 0.3 0:00.63\npostgres: postgres postgres (59424) idle\n\n\n29181 postgres 20 0 12.6g 179136 175860 D 2.3 0.2 0:00.18\npostgres: postgres postgres (57092) SELECT\n\n\n29129 postgres 20 0 12.6g 442444 439212 S 1.7 0.5 0:01.33\npostgres: postgres postgres (36560) idle\n\n\n\n-bash-4.2$ cat /etc/redhat-release\n\nRed Hat Enterprise Linux Server release 7.9 (Maipo)\n\n-bash-4.2$ uname\n\nLinux\n\n-bash-4.2$ uname -a\n\nLinux ip.ec2.internal 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 16\n12:17:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux\n\n-bash-4.2$ top\n\n\ntop - 17:02:52 up 1 day, 1:44, 2 users, load average: 11.60, 22.27, 22.22\n\nTasks:* 316 *total,* 1 *running,* 315 *sleeping,* 0 *stopped,* 0 *\nzombie\n\n%Cpu(s):* 0.5 *us,* 0.5 *sy,* 0.0 *ni,* 92.0 *id,* 7.0 *wa,* 0.0 *hi,\n* 0.0 *si,* 0.0 *st\n\nKiB Mem :* 96639952 *total,* 483896 *free,* 1693960 *used,* 94462096 *\nbuff/cache\n\nKiB Swap:* 0 *total,* 0 *free,* 0 *used.* 81408928 *avail\nMem\n\n\n\n\n\n-bash-4.2$ iostat -x\n\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip.ec2.internal) 03/29/2022 _x86_64_ (24\nCPU)\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 0.33 0.00 0.24 7.54 0.00 91.88\n\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\navgqu-sz await r_await w_await svctm %util\n\nnvme1n1 0.00 3.45 1042.22 29.88 41998.88 1476.75\n 81.10 7.61 7.10 6.62 23.70 0.40 43.19\n\nnvme2n1 0.00 0.02 0.02 1.06\n 0.15 268.80 497.00 0.09 80.87 0.85 82.56 1.40 0.15\n\nnvme0n1 0.00 0.01 0.21 0.08 4.94 7.07\n 81.37 0.00 6.88 0.61 22.83 0.64 0.02\n\n\n-bash-4.2$ vmstat -a\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n------cpu-----\n\n r b swpd free inact active si so bi bo in cs us sy id\nwa st\n\n 1 8 0 476180 40092640 53043492 0 0 1753 73 2 14 0 0\n92 8 0\n\n-bash-4.2$ vmstat -d\n\ndisk- ------------reads------------ ------------writes-----------\n-----IO------\n\n total merged sectors ms total merged sectors ms cur\n sec\n\nnvme1n1 99492480 0 8016369922 658540488 2849690 329519 281661496\n67518819 0 41210\n\nnvme2n1 2126 0 27946 1811 101078 2312 51264208 
8344670\n 0 144\n\nnvme0n1 20254 6 942763 12340 7953 641 1348866 181438\n 0 18\n\n\n-bash-4.2$ sar\n\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip.ec2.internal) 03/29/2022 _x86_64_ (24\nCPU)\n\n\n04:20:01 PM CPU %user %nice %system %iowait %steal\n%idle\n\n04:30:01 PM all 0.70 0.00 0.69 27.92 0.00\n70.68\n\n04:40:01 PM all 0.71 0.00 0.70 27.76 0.00\n70.84\n\n04:50:01 PM all 0.70 0.00 0.69 26.34 0.00\n72.27\n\n05:00:01 PM all 0.70 0.00 0.68 27.32 0.00\n71.31\n\n05:10:01 PM all 0.70 0.00 0.69 27.83 0.00\n70.77\n\n05:20:01 PM all 0.70 0.00 0.69 28.16 0.00\n70.45\n\n05:30:01 PM all 0.71 0.00 0.69 26.62 0.00\n71.98\n\n05:40:01 PM all 0.69 0.00 0.68 25.77 0.00\n72.85\n\nAverage: all 0.70 0.00 0.69 27.21 0.00\n71.40\n\n-bash-4.2$\n\n-bash-4.2$ free -g\n\n total used free shared buff/cache\navailable\n\nMem: 92 1 0 12 90\n 77\n\nSwap: 0 0 0\n\n-bash-4.2$ free -m\n\n total used free shared buff/cache\navailable\n\nMem: 94374 1721 474 12581 92178\n79430\n\nSwap: 0 0 0\n\n-bash-4.2$ lscpu\n\nArchitecture: x86_64\n\nCPU op-mode(s): 32-bit, 64-bit\n\nByte Order: Little Endian\n\nCPU(s): 24\n\nOn-line CPU(s) list: 0-23\n\nThread(s) per core: 2\n\nCore(s) per socket: 12\n\nSocket(s): 1\n\nNUMA node(s): 1\n\nVendor ID: GenuineIntel\n\nCPU family: 6\n\nModel: 85\n\nModel name: Intel(R) Xeon(R) Platinum 8252C CPU @ 3.80GHz\n\nStepping: 7\n\nCPU MHz: 3799.998\n\nBogoMIPS: 7599.99\n\nHypervisor vendor: KVM\n\nVirtualization type: full\n\nL1d cache: 32K\n\nL1i cache: 32K\n\nL2 cache: 1024K\n\nL3 cache: 25344K\n\nNUMA node0 CPU(s): 0-23\n\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb\nrdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc\naperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2\nx2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor\nlahf_lm abm 3dnowprefetch invpcid_single fsgsbase tsc_adjust bmi1 avx2 smep\nbmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb\navx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 ida arat pku ospke\navx512_vnni\n\n\n\n*Thanks & Regards,*\n\n*Ramababu.*\n\nHi All,We have an issue with high load and IO Wait's but less cpu on postgres Database, The emp Table size is around 500GB, and the connections are very less.Please suggest to us do we need to change and config parameters at system level or Postgres configuration.postgres=# select version();                                                 version                                                  ---------------------------------------------------------------------------------------------------------- PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit(1 row)postgres=# \\qPostgres Parameters Setting :shared_buffers=12GBwork_mem=128MBeffective_cache_size=48GBmaintenance_work_mem=2GBmax_connections=50014428 | 04:45:59.712892 | active  | INSERT INTO target (empno, name)                                                                            SELECT empno, '' AS name FROM (select distinct \nempno  from emp where sname='test' and tp='EMP NAME 1' LIMIT 10) AS query   ;             PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                14428 postgres  20   0   12.6g  12.2g  12.2g D   5.3 13.3   4:43.57 postgres:  postgres   postgres (59436) INSERT                         
                                                              29136 postgres  20   0   12.6g 401812 398652 D   4.7  0.4   0:01.20 postgres:  postgres   postgres (48220) SELECT                                                                                       29119 postgres  20   0   12.6g 677704 674064 S   3.3  0.7   0:02.05 postgres:  postgres   postgres (37684) idle                                                                                        29121 postgres  20   0   12.6g 758428 755252 S   3.0  0.8   0:02.33 postgres:  postgres   postgres (57392) idle                                                                                        29166 postgres  20   0   12.6g 260436 257408 S   3.0  0.3   0:00.63 postgres:  postgres   postgres (59424) idle                                                                                       29181 postgres  20   0   12.6g 179136 175860 D   2.3  0.2   0:00.18 postgres:  postgres   postgres (57092) SELECT                                                                                       29129 postgres  20   0   12.6g 442444 439212 S   1.7  0.5   0:01.33 postgres:  postgres   postgres (36560) idle -bash-4.2$ cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.9 (Maipo)-bash-4.2$ unameLinux-bash-4.2$ uname -aLinux ip.ec2.internal 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 16 12:17:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux-bash-4.2$ toptop - 17:02:52 up 1 day,  1:44,  2 users,  load average: 11.60, 22.27, 22.22Tasks: 316 total,   1 running, 315 sleeping,   0 stopped,   0 zombie%Cpu(s):  0.5 us,  0.5 sy,  0.0 ni, 92.0 id,  7.0 wa,  0.0 hi,  0.0 si,  0.0 stKiB Mem : 96639952 total,   483896 free,  1693960 used, 94462096 buff/cacheKiB Swap:        0 total,        0 free,        0 used. 
81408928 avail Mem -bash-4.2$ iostat -xLinux 3.10.0-1160.59.1.el7.x86_64 (ip.ec2.internal)  03/29/2022  _x86_64_ (24 CPU)avg-cpu:  %user   %nice %system %iowait  %steal   %idle           0.33    0.00    0.24    7.54    0.00   91.88Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %utilnvme1n1           0.00     3.45 1042.22   29.88 41998.88  1476.75    81.10     7.61    7.10    6.62   23.70   0.40  43.19nvme2n1           0.00     0.02    0.02    1.06     0.15   268.80   497.00     0.09   80.87    0.85   82.56   1.40   0.15nvme0n1           0.00     0.01    0.21    0.08     4.94     7.07    81.37     0.00    6.88    0.61   22.83   0.64   0.02-bash-4.2$ vmstat -aprocs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st 1  8      0 476180 40092640 53043492    0    0  1753    73    2   14  0  0 92  8  0-bash-4.2$ vmstat -ddisk- ------------reads------------ ------------writes----------- -----IO------       total merged sectors      ms  total merged sectors      ms    cur    secnvme1n1 99492480      0 8016369922 658540488 2849690 329519 281661496 67518819      0  41210nvme2n1   2126      0   27946    1811 101078   2312 51264208 8344670      0    144nvme0n1  20254      6  942763   12340   7953    641 1348866  181438      0     18-bash-4.2$ sar\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip.ec2.internal) 03/29/2022 _x86_64_ (24 CPU)\n\n04:20:01 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle\n04:30:01 PM     all      0.70      0.00      0.69     27.92      0.00     70.68\n04:40:01 PM     all      0.71      0.00      0.70     27.76      0.00     70.84\n04:50:01 PM     all      0.70      0.00      0.69     26.34      0.00     72.27\n05:00:01 PM     all      0.70      0.00      0.68     27.32      0.00     71.31\n05:10:01 PM     all      0.70      0.00      0.69     27.83      0.00     70.77\n05:20:01 PM     all      0.70      0.00      0.69     28.16      0.00     70.45\n05:30:01 PM     all      0.71      0.00      0.69     26.62      0.00     71.98\n05:40:01 PM     all      0.69      0.00      0.68     25.77      0.00     72.85\nAverage:        all      0.70      0.00      0.69     27.21      0.00     71.40\n-bash-4.2$ \n-bash-4.2$ free -g\n              total        used        free      shared  buff/cache   available\nMem:             92           1           0          12          90          77\nSwap:             0           0           0\n-bash-4.2$ free -m\n              total        used        free      shared  buff/cache   available\nMem:          94374        1721         474       12581       92178       79430\nSwap:             0           0           0\n-bash-4.2$ lscpu\nArchitecture:          x86_64\nCPU op-mode(s):        32-bit, 64-bit\nByte Order:            Little Endian\nCPU(s):                24\nOn-line CPU(s) list:   0-23\nThread(s) per core:    2\nCore(s) per socket:    12\nSocket(s):             1\nNUMA node(s):          1\nVendor ID:             GenuineIntel\nCPU family:            6\nModel:                 85\nModel name:            Intel(R) Xeon(R) Platinum 8252C CPU @ 3.80GHz\nStepping:              7\nCPU MHz:               3799.998\nBogoMIPS:              7599.99\nHypervisor vendor:     KVM\nVirtualization type:   full\nL1d cache:             32K\nL1i cache:             32K\nL2 cache:              1024K\nL3 cache:              25344K\nNUMA node0 CPU(s):     0-23\nFlags:                 
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 ida arat pku ospke avx512_vnniThanks & Regards,Ramababu.", "msg_date": "Tue, 29 Mar 2022 23:34:18 +0530", "msg_from": "Rambabu g <[email protected]>", "msg_from_op": true, "msg_subject": "HIGH IO and Less CPU utilization" }, { "msg_contents": "Hi,\n\nThanks for providing all this info.\n\nOn Tue, Mar 29, 2022 at 11:34:18PM +0530, Rambabu g wrote:\n> Hi All,\n> \n> We have an issue with high load and IO Wait's but less cpu on postgres\n> Database, The emp Table size is around 500GB, and the connections are very\n> less.\n\nWhat indexes are defined on this table ?\nHow large are they ?\n\n> Red Hat Enterprise Linux Server release 7.9 (Maipo)\n> PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n> \n> shared_buffers=12GB\n> work_mem=128MB\n\n> 14428 | 04:45:59.712892 | active | INSERT INTO target (empno, name)\n> SELECT empno, '' AS name FROM (select distinct empno from emp where sname='test'\n> and tp='EMP NAME 1' LIMIT 10) AS query ;\n\nIs the only only problem query, or just one example or ??\nAre your issues with loading data, querying data or both ?\n\n> -bash-4.2$ iostat -x\n\nIt shows that you only have a few filesystems in use.\nIt's common to have WAL and temp_tablespaces on a separate FS.\nThat probably wouldn't help your performance at all, but it would help to tell\nwhat's doing I/O. 
Is there anything else running on the VM besides postgres ?\n\nYou can also check:\nSELECT COUNT(1), wait_event FROM pg_stat_activity GROUP BY 2 ORDER BY 1 DESC;\n\nAnd the pg_buffercache extension:\nSELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) all, COALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY 1 DESC,2 DESC LIMIT 9;\n\n> Hypervisor vendor: KVM\n\nAre KSM or THP enabled on the hypervisor ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag \n\n-- \nJustin\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:24:53 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "Hi Justin,\n\nThanks for the quick response and your help, Please go through the inputs\nand let me know if need to change anything at OS level parameters tune and\nDB parameters.\n\n\nOn Tue, 29 Mar 2022 at 23:54, Justin Pryzby <[email protected]> wrote:\n\n> Hi,\n>\n> Thanks for providing all this info.\n>\n> On Tue, Mar 29, 2022 at 11:34:18PM +0530, Rambabu g wrote:\n> > Hi All,\n> >\n> > We have an issue with high load and IO Wait's but less cpu on postgres\n> > Database, The emp Table size is around 500GB, and the connections are\n> very\n> > less.\n>\n> What indexes are defined on this table ?\n> How large are they ?\n>\n>\nThere are three indexes defined on the table, each one is around 20 to 25GB\nand the indexes is create on\n\npostgres=# explain select distinct empno from emp where sname='test' and\ntp='EMP NAME 1'\n\n QUERY PLAN\n\n\n------------------------------------------------------------------------------------------------------\n\n HashAggregate (cost=71899575.17..71900816.97 rows=124179 width=9)\n\n Group Key: empno\n\n -> Gather (cost=1000.00..71820473.80 rows=31640550 width=9)\n\n Workers Planned: 2\n\n -> Parallel Seq Scan on emp (cost=0.00..68655418.80\nrows=13183562 width=9)\n\n Filter: (((sname)::text = 'test'::text) AND ((tp)::text =\n'EMP NAME 1'::text)\n\n\n> > Red Hat Enterprise Linux Server release 7.9 (Maipo)\n> > PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n> 20150623 (Red Hat 4.8.5-44), 64-bit\n> >\n> > shared_buffers=12GB\n> > work_mem=128MB\n>\n> > 14428 | 04:45:59.712892 | active | INSERT INTO target (empno, name)\n> > SELECT empno, '' AS name FROM (select distinct empno from emp where\n> sname='test'\n> > and tp='EMP NAME 1' LIMIT 10) AS query ;\n>\n> Is the only only problem query, or just one example or ??\n> Are your issues with loading data, querying data or both ?\n>\n> > -bash-4.2$ iostat -x\n>\n> It shows that you only have a few filesystems in use.\n> It's common to have WAL and temp_tablespaces on a separate FS.\n> That probably wouldn't help your performance at all, but it would help to\n> tell\n> what's doing I/O. 
Is there anything else running on the VM besides\n> postgres ?\n>\n>\nNo, the Ec2 VM is delicate to postgres DB instances only.\n\n\n> You can also check:\n> SELECT COUNT(1), wait_event FROM pg_stat_activity GROUP BY 2 ORDER BY 1\n> DESC;\n>\n\npostgres=# SELECT COUNT(1), wait_event FROM pg_stat_activity GROUP BY 2\nORDER BY 1 DESC;\n\n count | wait_event\n\n-------+---------------------\n\n 70 | ClientRead\n\n 34 | DataFileRead\n\n 3 |\n\n 1 | LogicalLauncherMain\n\n 1 | WalWriterMain\n\n 1 | BgWriterMain\n\n 1 | AutoVacuumMain\n(7 rows)\n\n\n> And the pg_buffercache extension:\n> SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all,\n> COALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN\n> pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY\n> 1 DESC,2 DESC LIMIT 9;\n>\n>\npostgres=# SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all,\nCOALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN\npg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY\n1 DESC,2 DESC LIMIT 9;\n\n dirty | all | coalesce\n\n-------+---------+----------------------------------------------------\n\n 189 | 237348 | emp_status\n\n 97 | 1214949 | emp\n\n 77 | 259 | public_group\n\n 75 | 432 | public_gid\n\n 74 | 233 | public_utpu\n\n 26 | 115 | code_evd\n\n 15 | 55 | group\n\n 15 | 49 | output\n\n 14 | 77 | output_status\n\n(9 rows\n\n\n> > Hypervisor vendor: KVM\n>\n> Are KSM or THP enabled on the hypervisor ?\n>\n> tail /sys/kernel/mm/ksm/run\n> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n> /sys/kernel/mm/transparent_hugepage/enabled\n> /sys/kernel/mm/transparent_hugepage/defrag\n>\n>\n>\n-bash-4.2$ tail /sys/kernel/mm/ksm/run\n/sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n/sys/kernel/mm/transparent_hugepage/enabled\n/sys/kernel/mm/transparent_hugepage/defrag\n\n==> /sys/kernel/mm/ksm/run <==\n\n0\n\n\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n\n1\n\n\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\n\n[always] madvise never\n\n\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\n\n[always] madvise never\n\n\n> --\n> Justin\n>\n\n\nRegards,\nRambabu.\n\nHi Justin,Thanks for the quick response and your help,  Please go through the inputs and let me know if need to change anything at OS level parameters tune and DB parameters.On Tue, 29 Mar 2022 at 23:54, Justin Pryzby <[email protected]> wrote:Hi,\n\nThanks for providing all this info.\n\nOn Tue, Mar 29, 2022 at 11:34:18PM +0530, Rambabu g wrote:\n> Hi All,\n> \n> We have an issue with high load and IO Wait's but less cpu on postgres\n> Database, The emp Table size is around 500GB, and the connections are very\n> less.\n\nWhat indexes are defined on this table ?\nHow large are they ?\nThere are three indexes defined on the table, each one is around 20 to 25GB and the indexes is create on \npostgres=# explain select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'\n                                              QUERY PLAN                                              \n------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=71899575.17..71900816.97 rows=124179 width=9)\n   Group Key: empno\n   ->  Gather  (cost=1000.00..71820473.80 rows=31640550 width=9)\n         Workers Planned: 2\n         ->  Parallel Seq Scan on emp  (cost=0.00..68655418.80 rows=13183562 width=9)\n               Filter: (((sname)::text = 'test'::text) AND ((tp)::text = 'EMP 
NAME 1'::text) \n> Red Hat Enterprise Linux Server release 7.9 (Maipo)\n>  PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit\n> \n> shared_buffers=12GB\n> work_mem=128MB\n\n> 14428 | 04:45:59.712892 | active  | INSERT INTO target (empno, name)\n> SELECT empno, '' AS name FROM (select distinct  empno  from emp where sname='test'\n> and tp='EMP NAME 1' LIMIT 10) AS query   ;\n\nIs the only only problem query, or just one example or ??\nAre your issues with loading data, querying data or both ?\n\n> -bash-4.2$ iostat -x\n\nIt shows that you only have a few filesystems in use.\nIt's common to have WAL and temp_tablespaces on a separate FS.\nThat probably wouldn't help your performance at all, but it would help to tell\nwhat's doing I/O.  Is there anything else running on the VM besides postgres ?\n No, the Ec2 VM is delicate to postgres DB instances only. \nYou can also check:\nSELECT COUNT(1), wait_event FROM pg_stat_activity GROUP BY 2 ORDER BY 1 DESC;\npostgres=# SELECT COUNT(1), wait_event FROM pg_stat_activity GROUP BY 2 ORDER BY 1 DESC;\n count |     wait_event      \n-------+---------------------\n    70 | ClientRead\n    34 | DataFileRead\n     3 | \n     1 | LogicalLauncherMain\n     1 | WalWriterMain\n     1 | BgWriterMain\n     1 | AutoVacuumMain\n(7 rows) \nAnd the pg_buffercache extension:\nSELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all, COALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY 1 DESC,2 DESC LIMIT 9;\n\npostgres=# SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all, COALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY 1 DESC,2 DESC LIMIT 9;\n dirty |   all   |                      coalesce                      \n-------+---------+----------------------------------------------------\n   189 |  237348 | emp_status\n    97 | 1214949 | emp\n    77 |     259 | public_group\n    75 |     432 | public_gid\n    74 |     233 | public_utpu\n    26 |     115 | code_evd\n    15 |      55 | group\n    15 |      49 | output\n    14 |      77 | output_status\n(9 rows \n> Hypervisor vendor:     KVM\n\nAre KSM or THP enabled on the hypervisor ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag                                                           \n\n-bash-4.2$ tail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag \n==> /sys/kernel/mm/ksm/run <==\n0\n\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n1\n\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\n[always] madvise never\n\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\n[always] madvise never \n-- \nJustinRegards,Rambabu.", "msg_date": "Wed, 30 Mar 2022 00:52:05 +0530", "msg_from": "Rambabu g <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "On Wed, Mar 30, 2022 at 12:52:05AM +0530, Rambabu g wrote:\n> > What indexes are defined on this table ?\n> > How large are they ?\n>\n> There are three indexes defined on the table, each one is around 20 to 25GB\n> and the indexes is create on\n\nDid you mean to say something else after \"on\" ?\n\nShow the 
definition of the indexes from psql \\d\n\n> postgres=# explain select distinct empno from emp where sname='test' and tp='EMP NAME 1'\n\nIs this the only query that's performing poorly ?\nYou should send explain (analyze,buffers) for the prolematic queries.\n\n> > > Hypervisor vendor: KVM\n> >\n> > Are KSM or THP enabled on the hypervisor ?\n\n> No, the Ec2 VM is delicate to postgres DB instances only.\n\nOh, so this is an EC2 and you cannot change the hypervisor itself.\n\n> -bash-4.2$ tail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag\n...\n> ==> /sys/kernel/mm/transparent_hugepage/defrag <==\n> [always] madvise never\n\nI doubt it will help, but you could try disabling these.\nIt's a quick experiment anyway.\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:39:41 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "Hi Justin,\n\nOnly one query is causing the issue, sharing the def of indexes. Please\nhave a look.\n\n\n\nOn Wed, 30 Mar 2022 at 01:09, Justin Pryzby <[email protected]> wrote:\n\n> On Wed, Mar 30, 2022 at 12:52:05AM +0530, Rambabu g wrote:\n> > > What indexes are defined on this table ?\n> > > How large are they ?\n> >\n> > There are three indexes defined on the table, each one is around 20 to\n> 25GB\n> > and the indexes is create on\n>\n> Did you mean to say something else after \"on\" ?\n>\n> Show the definition of the indexes from psql \\d\n>\n\nIndex Definition :\n\npostgres=# \\d+ idx_empno\n\n Index \"l2.pd_activity_empi\"\n\n Column | Type | Key? | Definition | Storage | Stats\ntarget\n\n--------+-------------------------+------+------------+----------+--------------\n\n empno | character varying(2000) | yes | empno | extended |\n\nbtree, for table \"emp\"\n\n\npostgres=# \\d+ id_dt\n\n Index \"dt\"\n\n Column | Type | Key? | Definition | Storage | Stats\ntarget\n\n--------+-----------------------------+------+------------+---------+--------------\n\n dt | timestamp without time zone | yes | dt | plain |\n\nbtree, for table \"emp\"\n\n\npostgres=# \\d+ idx_tp\n\n Index \"idx_tp\"\n\n Column | Type | Key? 
| Definition | Storage | Stats\ntarget\n\n--------+-------------------------+------+------------+----------+--------------\n\n tp | character varying(2000) | yes | tp | extended |\n\nbtree, for table \"emp\"\n\n\n\n\nQuery is been running for 30min.\n\n> postgres=# explain select distinct empno from emp where sname='test'\n> and tp='EMP NAME 1'\n>\n> Is this the only query that's performing poorly ?\n> You should send explain (analyze,buffers) for the prolematic queries.\n>\n\n\npostgres=# select pid,(now()-query_start) as\nage,wait_event_type,wait_event,query from pg_stat_activity where\nstate!='idle';\n\n pid | age | wait_event_type | wait_event |\n query\n\n\n-------+-----------------+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------\n\n 32154 | 00:09:56.131136 | IPC | ExecuteGather | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 847 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 848 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 849 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 850 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 851 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 852 | 00:09:56.131136 | IO | DataFileRead | explain\nanalyze select distinct empno from emp where sname='test' and tp='EMP\nNAME 1'\n\n 645 | 00:00:00 | | | select\npid,(now()-query_start) as age,wait_event_type,wait_event,query from\npg_stat_activity where state!='idle'\n\n\n\n\npostgres=# SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all,\nCOALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN\npg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY\n1 DESC,2 DESC LIMIT 9;\n\n dirty | all | coalesce\n\n-------+---------+---------------------------------\n\n 32 | 136 | fn_deployment\n\n 18 | 176 | fn_deployment_key\n\n 8 | 12 | event_logs_pkey\n\n 6 | 157 | event_logs\n\n 1 | 355 | pg_class\n\n 0 | 2890261 |\n\n 0 | 252734 | utput_status\n\n 0 | 378 | emp\n\n 0 | 299 | 1249\n\n(9 rows)\n\n\n\n-bash-4.2$ sar\n\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip-10-54-145-108.ec2.internal)\n03/30/2022 _x86_64_ (24 CPU)\n\n\n12:00:01 AM CPU %user %nice %system %iowait %steal\n%idle\n\n12:10:01 AM all 1.19 0.00 0.82 36.17 0.00\n61.81\n\n12:20:01 AM all 0.72 0.00 0.75 35.59 0.00\n62.94\n\n12:30:01 AM all 0.74 0.00 0.77 35.04 0.00\n63.46\n\n12:40:02 AM all 0.74 0.00 0.76 34.65 0.00\n63.85\n\n12:50:01 AM all 0.77 0.00 0.78 33.36 0.00\n65.09\n\n01:00:01 AM all 0.83 0.00 0.78 27.46 0.00\n70.93\n\n01:10:01 AM all 0.85 0.00 0.78 30.11 0.00\n68.26\n\n01:20:01 AM all 0.70 0.00 0.61 20.46 0.00\n78.24\n\n01:30:01 AM all 0.15 0.00 0.06 0.02 0.00\n99.77\n\n01:40:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.80\n\n01:50:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.80\n\n02:00:01 AM all 0.15 0.00 0.06 0.00 0.00\n99.78\n\n02:10:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.80\n\n02:20:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.81\n\n02:30:01 AM all 0.15 0.00 0.06 0.00 0.00\n99.80\n\n02:40:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.80\n\n02:50:01 AM all 0.14 0.00 
0.05 0.00 0.00\n99.80\n\n03:00:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.80\n\n03:10:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.81\n\n03:20:01 AM all 0.14 0.00 0.05 0.00 0.00\n99.81\n\n03:30:01 AM all 0.23 0.00 0.15 2.18 0.00\n97.44\n\n03:40:01 AM all 1.16 0.00 0.87 22.76 0.00\n75.21\n\n03:50:01 AM all 0.75 0.00 0.60 13.89 0.00\n84.76\n\n04:00:01 AM all 1.13 0.00 0.87 22.75 0.00\n75.26\n\n04:10:01 AM all 0.87 0.00 0.79 22.91 0.00\n75.43\n\n04:20:01 AM all 0.71 0.00 0.71 22.07 0.00\n76.50\n\nAverage: all 0.50 0.00 0.41 13.81 0.00\n85.28\n\n-bash-4.2$ iostat\n\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip-.ec2.internal) 03/30/2022 _x86_64_ (24\nCPU)\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 0.44 0.00 0.34 13.35 0.00 85.86\n\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n\nnvme1n1 1370.20 54514.54 4964.18 7297971937 664565000\n\nnvme2n1 0.92 0.12 223.19 16085 29878260\n\nnvme0n1 0.30 5.12 5.23 685029 699968\n\n\n-bash-4.2$ iostat -d\n\nLinux 3.10.0-1160.59.1.el7.x86_64 (ip-ec2.internal) 03/30/2022 _x86_64_ (24\nCPU)\n\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n\nnvme1n1 1370.25 54518.06 4963.95 7298793425 664565248\n\nnvme2n1 0.92 0.12 223.17 16085 29878260\n\nnvme0n1 0.30 5.12 5.23 685029 699968\n\n\n-bash-4.2$ free -g\n\n total used free shared buff/cache\navailable\n\nMem: 92 1 0 2 90\n 87\n\nSwap: 0 0 0\n\n\n\n\n>\n> > > > Hypervisor vendor: KVM\n> > >\n> > > Are KSM or THP enabled on the hypervisor ?\n>\n> > No, the Ec2 VM is delicate to postgres DB instances only.\n>\n> Oh, so this is an EC2 and you cannot change the hypervisor itself.\n>\n> > -bash-4.2$ tail /sys/kernel/mm/ksm/run\n> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n> /sys/kernel/mm/transparent_hugepage/enabled\n> /sys/kernel/mm/transparent_hugepage/defrag\n> ...\n> > ==> /sys/kernel/mm/transparent_hugepage/defrag <==\n> > [always] madvise never\n>\nI doubt it will help, but you could try disabling these.\n> It's a quick experiment anyway.\n>\n\nDisable THP\n\n-bash-4.2$ tail /sys/kernel/mm/ksm/run\n/sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n/sys/kernel/mm/transparent_hugepage/enabled\n/sys/kernel/mm/transparent_hugepage/defrag\n\n==> /sys/kernel/mm/ksm/run <==\n\n0\n\n\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n\n1\n\n\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\n\nalways madvise [never]\n\n\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\n\nalways madvise [never]\n\nRegards,\nRambabu.\n\nHi Justin, Only one query is causing the issue, sharing the def of indexes. Please have a look.On Wed, 30 Mar 2022 at 01:09, Justin Pryzby <[email protected]> wrote:On Wed, Mar 30, 2022 at 12:52:05AM +0530, Rambabu g wrote:\n> > What indexes are defined on this table ?\n> > How large are they ?\n>\n> There are three indexes defined on the table, each one is around 20 to 25GB\n> and the indexes is create on\n\nDid you mean to say something else after \"on\" ?\n\nShow the definition of the indexes from psql \\dIndex Definition : \npostgres=#                    \\d+ idx_empno\n                          Index \"l2.pd_activity_empi\"\n Column |          Type           | Key? | Definition | Storage  | Stats target \n--------+-------------------------+------+------------+----------+--------------\n empno   | character varying(2000) | yes  | empno       | extended | \nbtree, for table \"emp\"\n\npostgres=#                    \\d+ id_dt\n                           Index \"dt\"\n Column |            Type             | Key? 
| Definition | Storage | Stats target \n--------+-----------------------------+------+------------+---------+--------------\n   dt  | timestamp without time zone  | yes  | dt      | plain   | \nbtree, for table \"emp\"\n\npostgres=#                    \\d+ idx_tp\n                          Index \"idx_tp\"\n Column |          Type           | Key? | Definition | Storage  | Stats target \n--------+-------------------------+------+------------+----------+--------------\n tp    | character varying(2000)   | yes    | tp       | extended | \nbtree, for table \"emp\" \n Query is  been running  for 30min.\n> postgres=# explain select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'\n\nIs this the only query that's performing poorly ?\nYou should send explain (analyze,buffers) for the prolematic queries.postgres=# select pid,(now()-query_start) as age,wait_event_type,wait_event,query from pg_stat_activity where state!='idle';  pid  |       age       | wait_event_type |  wait_event   |                                                       query                                                       -------+-----------------+-----------------+---------------+------------------------------------------------------------------------------------------------------------------- 32154 | 00:09:56.131136 | IPC             | ExecuteGather | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   847 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   848 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   849 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   850 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   851 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   852 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct  empno  from emp where sname='test' and tp='EMP NAME 1'   645 | 00:00:00        |                 |               | select pid,(now()-query_start) as age,wait_event_type,wait_event,query from pg_stat_activity where state!='idle'postgres=# SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all, COALESCE(c.relname, b.relfilenode::text) FROM pg_buffercache b LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) GROUP BY 3 ORDER BY 1 DESC,2 DESC LIMIT 9; dirty |   all   |            coalesce             -------+---------+---------------------------------    32 |     136 | fn_deployment    18 |     176 | fn_deployment_key     8 |      12 | event_logs_pkey     6 |     157 | event_logs     1 |     355 | pg_class     0 | 2890261 |      0 |  252734 | utput_status     0 |     378 | emp     0 |     299 | 1249(9 rows)-bash-4.2$ sarLinux 3.10.0-1160.59.1.el7.x86_64 (ip-10-54-145-108.ec2.internal)  03/30/2022  _x86_64_ (24 CPU)12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle12:10:01 AM     all      1.19      0.00      0.82     36.17      0.00     61.8112:20:01 AM     all      0.72      0.00      0.75     35.59      0.00     62.9412:30:01 AM     all      0.74      0.00      0.77     35.04      0.00     63.4612:40:02 AM     all 
     0.74      0.00      0.76     34.65      0.00     63.8512:50:01 AM     all      0.77      0.00      0.78     33.36      0.00     65.0901:00:01 AM     all      0.83      0.00      0.78     27.46      0.00     70.9301:10:01 AM     all      0.85      0.00      0.78     30.11      0.00     68.2601:20:01 AM     all      0.70      0.00      0.61     20.46      0.00     78.2401:30:01 AM     all      0.15      0.00      0.06      0.02      0.00     99.7701:40:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8001:50:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8002:00:01 AM     all      0.15      0.00      0.06      0.00      0.00     99.7802:10:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8002:20:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8102:30:01 AM     all      0.15      0.00      0.06      0.00      0.00     99.8002:40:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8002:50:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8003:00:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8003:10:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8103:20:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.8103:30:01 AM     all      0.23      0.00      0.15      2.18      0.00     97.4403:40:01 AM     all      1.16      0.00      0.87     22.76      0.00     75.2103:50:01 AM     all      0.75      0.00      0.60     13.89      0.00     84.7604:00:01 AM     all      1.13      0.00      0.87     22.75      0.00     75.2604:10:01 AM     all      0.87      0.00      0.79     22.91      0.00     75.4304:20:01 AM     all      0.71      0.00      0.71     22.07      0.00     76.50Average:        all      0.50      0.00      0.41     13.81      0.00     85.28-bash-4.2$ iostatLinux 3.10.0-1160.59.1.el7.x86_64 (ip-.ec2.internal)  03/30/2022  _x86_64_ (24 CPU)avg-cpu:  %user   %nice %system %iowait  %steal   %idle           0.44    0.00    0.34   13.35    0.00   85.86Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtnnvme1n1        1370.20     54514.54      4964.18 7297971937  664565000nvme2n1           0.92         0.12       223.19      16085   29878260nvme0n1           0.30         5.12         5.23     685029     699968-bash-4.2$ iostat -dLinux 3.10.0-1160.59.1.el7.x86_64 (ip-ec2.internal)  03/30/2022  _x86_64_ (24 CPU)Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtnnvme1n1        1370.25     54518.06      4963.95 7298793425  664565248nvme2n1           0.92         0.12       223.17      16085   29878260nvme0n1           0.30         5.12         5.23     685029     699968-bash-4.2$ free -g              total        used        free      shared  buff/cache   availableMem:             92           1           0           2          90          87Swap:             0           0           0 \n\n> > > Hypervisor vendor:     KVM\n> >\n> > Are KSM or THP enabled on the hypervisor ?\n\n> No, the Ec2 VM is delicate to postgres DB instances only.\n\nOh, so this is an EC2 and you cannot change the hypervisor itself.\n\n> -bash-4.2$ tail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag\n...\n> ==> /sys/kernel/mm/transparent_hugepage/defrag <==\n> [always] madvise neverI doubt it will help, but you could try disabling these.It's a quick experiment anyway.Disable 
THP -bash-4.2$  tail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag==> /sys/kernel/mm/ksm/run <==0==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==1==> /sys/kernel/mm/transparent_hugepage/enabled <==always madvise [never]==> /sys/kernel/mm/transparent_hugepage/defrag <==always madvise [never] Regards,Rambabu.", "msg_date": "Wed, 30 Mar 2022 10:17:38 +0530", "msg_from": "Rambabu g <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "On Wed, Mar 30, 2022 at 10:17:38AM +0530, Rambabu g wrote:\n> Hi Justin,\n> \n> Only one query is causing the issue, sharing the def of indexes. Please\n> have a look.\n> \n> > > There are three indexes defined on the table, each one is around 20 to 25GB\n> \n> tp | character varying(2000) | yes | tp | extended |\n> \n> 852 | 00:09:56.131136 | IO | DataFileRead | explain\n> analyze select distinct empno from emp where sname='test' and tp='EMP\n> NAME 1'\n\nThe server is doing a scan of the large table.\nThe tp index matches a lot of rows (13e6) which probably aren't clustered, so\nit elects to scan the 500GB table each time.\n\nLooking at this in isolation, maybe it'd be enough to create an index on\ntp,empno (and maybe drop the tp index). CREATE INDEX CONCURRENTLY if you don't\nwant to disrupt other queries.\n\nBut This seems like something that should be solved in a better way though ;\nlike keeping a table with all the necessary \"empno\" maintained with \"INSERT ON\nCONFLICT DO NOTHING\". Or a trigger.\n\n\n", "msg_date": "Thu, 31 Mar 2022 01:49:41 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "On 3/29/22 14:04, Rambabu g wrote:\n> Hi All,\n>\n> We have an issue with high load and IO Wait's but less cpu on postgres \n> Database, The emp Table size is around 500GB, and the connections are \n> very less.\n>\n> Please suggest to us do we need to change and config parameters at \n> system level or Postgres configuration.\n\n\nThe \"emp\" table is 500 GB? You're doing something wrong, The \"emp\" table \nshould have 14 rows and the \"dept\" table should have 4 rows The \"bonus\" \nand \"salgrade\" tables should also be very small. The guy named Bruce \nScott could probably help you with that schema. Other than that, do you \nhave a SQL causing all this ruckus and a detailed explain plan (\"explain \n(analyze,costs,buffers)\") for the SQL using most of the time? You can \nanalyze the log file with PgBadger to get the queries consuming the most \ntime.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 3/29/22 14:04, Rambabu g wrote:\n\n\n\nHi All,\n\n\nWe have an issue with high load and\n IO Wait's but less cpu on postgres Database, The emp Table size is\n around 500GB, and the connections are very less.\n\n\n\nPlease suggest to us do we\n need to change and config parameters at system level or\n Postgres configuration.\n\n\n\n\n\nThe \"emp\" table is 500 GB? You're doing something wrong, The\n \"emp\" table should have 14 rows and the \"dept\" table should have 4\n rows The \"bonus\" and \"salgrade\" tables should also be very small.\n The guy named Bruce Scott could probably help you with that\n schema. 
Other than that, do you have a SQL causing all this ruckus\n and a detailed explain plan (\"explain (analyze,costs,buffers)\")\n for the SQL using most of the time? You can analyze the log file\n with PgBadger to get the queries consuming the most time.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Thu, 31 Mar 2022 10:44:57 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HIGH IO and Less CPU utilization" }, { "msg_contents": "Hi Justin,\n\nI executed the same query first time it's takes 6+ sec, but if I run again\nsame query that is taking 34 mill seconds, it's seems shared buffer reads\nare taking, but the second time shared buffer reads are not showing us, so\nPlease suggest me ig I need to change any parameters to tune here.\n\n\npostgres=# explain (analyze,buffers) SELECT * FROM emp WHERE (empno='\nC3916271986');\n\n QUERY PLAN\n\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using pd_activity_empi on pd_activity (cost=0.57..23391.90\nrows=7956 width=9202) (actual time=4.346..6442.761 rows=12771 loops=1)\n\n Index Cond: ((empno)::text = 'C3916271986'::text)\n\n Buffers: shared hit=598 read=12224\n\n Planning Time: 0.130 ms\n\n Execution Time: 6446.664 ms\n\n(5 rows)\n\n\npostgres=# explain (analyze,buffers) SELECT * FROM emp WHERE (empno='\nC3916271986');\n\n QUERY PLAN\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using pd_activity_empi on pd_activity (cost=0.57..23391.90\nrows=7956 width=9202) (actual time=0.027..33.921 rows=12771 loops=1)\n\n Index Cond: ((empi)::text = 'C3916271986'::text)\n\n Buffers: shared hit=12822\n\n Planning Time: 0.138 ms\n\n Execution Time: 34.344 ms\n\n(5 rows)\n\n\nempno Changed :\n\npostgres=# explain (analyze,buffers) SELECT * FROM emp WHERE (empno='C\n6853372011');\n\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using pd_activity_empi on pd_activity (cost=0.57..23391.90\nrows=7956 width=9202) (actual time=2.764..430.357 rows=758 loops=1)\n\n Index Cond: ((empi)::text = 'C6853372011'::text)\n\n Buffers: shared hit=46 read=718\n\n Planning Time: 0.136 ms\n\n Execution Time: 430.617 ms\n\n(5 rows)\n\n\n\nRegards,\nRambabu.\n\nOn Thu, 31 Mar 2022 at 12:19, Justin Pryzby <[email protected]> wrote:\n\n> On Wed, Mar 30, 2022 at 10:17:38AM +0530, Rambabu g wrote:\n> > Hi Justin,\n> >\n> > Only one query is causing the issue, sharing the def of indexes. Please\n> > have a look.\n> >\n> > > > There are three indexes defined on the table, each one is around 20\n> to 25GB\n> >\n> > tp | character varying(2000) | yes | tp | extended |\n> >\n> > 852 | 00:09:56.131136 | IO | DataFileRead | explain\n> > analyze select distinct empno from emp where sname='test' and tp='EMP\n> > NAME 1'\n>\n> The server is doing a scan of the large table.\n> The tp index matches a lot of rows (13e6) which probably aren't clustered,\n> so\n> it elects to scan the 500GB table each time.\n>\n> Looking at this in isolation, maybe it'd be enough to create an index on\n> tp,empno (and maybe drop the tp index). 
CREATE INDEX CONCURRENTLY if you\n> don't\n> want to disrupt other queries.\n>\n> But This seems like something that should be solved in a better way though\n> ;\n> like keeping a table with all the necessary \"empno\" maintained with\n> \"INSERT ON\n> CONFLICT DO NOTHING\". Or a trigger.\n>\n\nHi Justin,I executed the same query first time it's takes 6+ sec, but if I run again same query that is taking 34 mill seconds, it's seems shared buffer reads are taking, but the second time shared buffer reads are not showing us,  so Please suggest me ig I need to change any  parameters to tune here.\npostgres=# explain (analyze,buffers)   SELECT * FROM emp WHERE (empno='C3916271986');\n                                                                  QUERY PLAN                                                                   \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pd_activity_empi on pd_activity  (cost=0.57..23391.90 rows=7956 width=9202) (actual time=4.346..6442.761 rows=12771 loops=1)\n   Index Cond: ((empno)::text = 'C3916271986'::text)\n   Buffers: shared hit=598 read=12224\n Planning Time: 0.130 ms\n Execution Time: 6446.664 ms\n(5 rows)\n\npostgres=# explain (analyze,buffers)    SELECT * FROM emp WHERE (empno='C3916271986');\n                                                                 QUERY PLAN                                                                  \n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pd_activity_empi on pd_activity  (cost=0.57..23391.90 rows=7956 width=9202) (actual time=0.027..33.921 rows=12771 loops=1)\n   Index Cond: ((empi)::text = 'C3916271986'::text)\n   Buffers: shared hit=12822\n Planning Time: 0.138 ms\n Execution Time: 34.344 ms\n(5 rows)empno Changed :\npostgres=# explain (analyze,buffers)  SELECT * FROM emp WHERE (empno='C6853372011');\n                                                                 QUERY PLAN                                                                 \n--------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pd_activity_empi on pd_activity  (cost=0.57..23391.90 rows=7956 width=9202) (actual time=2.764..430.357 rows=758 loops=1)\n   Index Cond: ((empi)::text = 'C6853372011'::text)\n   Buffers: shared hit=46 read=718\n Planning Time: 0.136 ms\n Execution Time: 430.617 ms\n(5 rows)\nRegards,Rambabu.On Thu, 31 Mar 2022 at 12:19, Justin Pryzby <[email protected]> wrote:On Wed, Mar 30, 2022 at 10:17:38AM +0530, Rambabu g wrote:\n> Hi Justin,\n> \n> Only one query is causing the issue, sharing the def of indexes. Please\n> have a look.\n> \n> > > There are three indexes defined on the table, each one is around 20 to 25GB\n> \n>  tp    | character varying(2000)   | yes    | tp       | extended |\n> \n>    852 | 00:09:56.131136 | IO              | DataFileRead  | explain\n> analyze select distinct  empno  from emp where sname='test' and tp='EMP\n> NAME 1'\n\nThe server is doing a scan of the large table.\nThe tp index matches a lot of rows (13e6) which probably aren't clustered, so\nit elects to scan the 500GB table each time.\n\nLooking at this in isolation, maybe it'd be enough to create an index on\ntp,empno (and maybe drop the tp index).  
CREATE INDEX CONCURRENTLY if you don't\nwant to disrupt other queries.\n\nBut This seems like something that should be solved in a better way though ;\nlike keeping a table with all the necessary \"empno\" maintained with \"INSERT ON\nCONFLICT DO NOTHING\".  Or a trigger.", "msg_date": "Mon, 4 Apr 2022 13:17:59 +0530", "msg_from": "Rambabu g <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HIGH IO and Less CPU utilization" } ]
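A rough sketch of the covering index plus a separately maintained empno list
suggested above -- the object names, the varchar(2000) type and the predicate
are only illustrative, taken from the explain output earlier in the thread,
not from the poster's real schema:

  CREATE INDEX CONCURRENTLY emp_tp_empno_idx ON emp (tp, empno);

  CREATE TABLE emp_empno_list (empno varchar(2000) PRIMARY KEY);

  INSERT INTO emp_empno_list (empno)
  SELECT DISTINCT empno FROM emp
  WHERE sname = 'test' AND tp = 'EMP NAME 1'
  ON CONFLICT (empno) DO NOTHING;

The same INSERT ... ON CONFLICT DO NOTHING pattern could also be run from a
trigger on emp, so the list stays current without rescanning the 500GB table.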
[ { "msg_contents": "Hi everyone,\n\nI am a bachelor's student and writing my thesis about the scaling and\nperformance of an application. The application is using postgresql as a\ndatabase but we can't scale any further currently as it seems postgres\nis hitting the limit.\n\nWith the application, as well as with pgbench, we don't get more than\n(max) 70k TPS on postgres. But the servers' resources are not utilized\ncompletely (more below).\n\nI've tried many different configurations but none of them had any major\nperformance impact (unless fsync and synchronous_commit = off).\n\nThis is the (custom) configuration I am using:\n\nshared_buffers=65551953kB\neffective_cache_size=147491895kB\nhuge_pages=on\nmin_wal_size=20GB\nmax_wal_size=200GB\nwal_buffers=1GB\nmax_wal_senders=0\narchive_mode=off\nwal_level=minimal\nwork_mem=2GB\nmaintenance_work_mem=4GB\ncheckpoint_completion_target=0.9\ncheckpoint_timeout = 30min\nrandom_page_cost=1.1\nbgwriter_flush_after = 2MB\neffective_io_concurrency = 200\n# Disabled just for performance experiments\nfsync = off\nsynchronous_commit = off\nfull_page_writes = on\nmax_worker_processes=64\nmax_parallel_workers=64\nmax_parallel_workers_per_gather=10\nmax_parallel_maintenance_workers=12\n\nThe system is as follows:\n\n* 64 Cores (Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16\ncores/CPU))\n* 192 GiB RAM (12 * 16GiB DIMM DDR4 Synchronous Registered (Buffered)\n2666 MHz (0.4 ns))\n* 2 * SSD SATA Samsung MZ7KM240HMHQ0D3 (one is used for the WAL and the\nother for the data)\n* 10 Gbps network link\n* OS: Debian 11\n* Postgres 13 from apt\n\n(I've also written a stackoverflow post about it -\nhttps://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o )\n[https://cdn.sstatic.net/Sites/stackoverflow/Img/[email protected]?v=73d79a89bded]<https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\nperformance - Postgresql bottleneck neither CPU, network nor I/O - Stack Overflow<https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\nWe are testing our application for performance, which is using Postgresql 13 as a database. It is very insert and update heavy and we cannot get more than 65k TPS on the database. But none of the m...\nstackoverflow.com\n\n\nBelow is just an example of the pgbench I ran:\n\npgbench -i -s 50 -U postgres -h <DB_HOST> -d <DB_NAME>\npgbench -c 64 -j 32 -t 100000 -h <DB_HOST> -U postgres <DB_NAME>\n\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 50\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 32\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 6400000/6400000\nlatency average = 0.976 ms\ntps = 65584.664360 (including connections establishing)\ntps = 65594.330678 (excluding connections establishing)\n\nAs comparison (average of three runs with pgbench as above):\n\nnum clients default config custom config above\n\n10 11336 16848\n20 19528 30187\n30 25769 39430\n40 29792 50150\n50 31096 60133\n60 33900 64916\n70 34986 64308\n80 34170 63075\n90 35108 59910\n100 34864 58320\n120 35124 55414\n140 33403 53610\n\n(with fsync=off alone I almost get the TPS from the right already)\n\nFor `-S -M prepared` the TPS is ~700k and for `-S` ~500k but as the\napplication is very write heavy this is not really useful for me.\n\nWith the app the CPU is only at 25% load and the disks are also no\nproblem. 
For pgbench its about 75% CPU but still no disk bottleneck\n(about 5%).\n\nThere are also Grafana snapshots I created for the system (node-\nexporter) and postgres (prometheus-postgres-exporter) while running\nwith our application (same configuration as above). Both do not show\nany kind of bottleneck (except high amounts context switches and pages\nin/out)\n\nnode: https://147.87.255.221:3000/dashboard/snapshot/3eXe1sS3QDL6cbvI7HkPjYnjrVLCNOOF\npostgres: https://147.87.255.221:3000/dashboard/snapshot/wHkRphdr3D4k5kRckhn57Pc6ZD3st1x7\n\nI have also looked at postgresql's lock tables while running the above\nexperiment, but there is nothing which seemed strange to me. There are\nabout 300 locks but all are granted (select * from pg_locks).\n\nAlso, the following query:\n\nselect wait_event, count(*) from pg_stat_activity where state='idle in\ntransaction' group by wait_event;\n\ndid not show some contention there the output looks always similar to\nthis (80 clients):\n\n wait_event | count\n--------------------------+-------\n ClientRead | 2\n SerializableFinishedList | 1\n\nThanks to the slack channel I got a link to edb which used a more\npowerful server and they achieved also about 70k TPS but did not set\nfsync=off. So maybe they were limited by disk IO (just guessing, as\nunfortunately, it is not pointed out in the post).\n\nhttps://www.enterprisedb.com/blog/pgbench-performance-benchmark-postgresql-12-and-edb-advanced-server-12\n\nSo, my question is if anyone knows what could be the bottleneck, or if\nit is even possible to get more TPS in this write-heavy load.\n\n(dmesg does also not contain error messages which would point to a\nkernel misconfiguration)\n\nOptimally I would like to fully use the CPU and get about 3-4 times\nmore TPS (if even possible).\n\nThanks already for everyone's time and help.\n\n\n\n\n\n\n\nHi everyone,\n\n\n\nI am a bachelor's student and writing my thesis about the scaling and\n\nperformance of an application. The application is using postgresql as a\n\ndatabase but we can't scale any further currently as it seems postgres\n\nis hitting the limit.\n\n\n\nWith the application, as well as with pgbench, we don't get more than\n\n(max) 70k TPS on postgres. 
But the servers' resources are not utilized\n\ncompletely (more below).\n\n\n\nI've tried many different configurations but none of them had any major\n\nperformance impact (unless fsync and synchronous_commit = off).\n\n\n\nThis is the (custom) configuration I am using:\n\n\n\nshared_buffers=65551953kB\n\neffective_cache_size=147491895kB\n\nhuge_pages=on\n\nmin_wal_size=20GB\n\nmax_wal_size=200GB\n\nwal_buffers=1GB\n\nmax_wal_senders=0\n\narchive_mode=off\n\nwal_level=minimal\n\nwork_mem=2GB\n\nmaintenance_work_mem=4GB\n\ncheckpoint_completion_target=0.9\n\ncheckpoint_timeout = 30min\n\nrandom_page_cost=1.1\n\nbgwriter_flush_after = 2MB\n\neffective_io_concurrency = 200\n\n# Disabled just for performance experiments\n\nfsync = off\n\nsynchronous_commit = off\n\nfull_page_writes = on\n\nmax_worker_processes=64\n\nmax_parallel_workers=64\n\nmax_parallel_workers_per_gather=10\n\nmax_parallel_maintenance_workers=12\n\n\n\nThe system is as follows:\n\n\n\n* 64 Cores (Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16\n\ncores/CPU))\n\n* 192 GiB RAM (12 * 16GiB DIMM DDR4 Synchronous Registered (Buffered)\n\n2666 MHz (0.4 ns))\n\n* 2 * SSD SATA Samsung MZ7KM240HMHQ0D3 (one is used for the WAL and the\n\nother for the data)\n\n* 10 Gbps network link\n\n* OS: Debian 11\n\n* Postgres 13 from apt\n\n\n\n(I've also written a stackoverflow post about it -\n\nhttps://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o )\n\n\n\n\n\n\n\n\n\n\n\n\nperformance - Postgresql bottleneck neither CPU, network\n nor I/O - Stack Overflow\n\nWe are testing our application for performance, which is using Postgresql 13 as a database. It is very insert and update heavy and we cannot get more than 65k TPS on the database. But none of the m...\n\nstackoverflow.com\n\n\n\n\n\n\n\n\n\nBelow is just an example of the pgbench I ran:\n\n\n\npgbench -i -s 50 -U postgres -h <DB_HOST> -d <DB_NAME>\n\npgbench -c 64 -j 32 -t 100000 -h <DB_HOST> -U postgres <DB_NAME>\n\n\n\nstarting vacuum...end.\n\ntransaction type: <builtin: TPC-B (sort of)>\n\nscaling factor: 50\n\nquery mode: simple\n\nnumber of clients: 64\n\nnumber of threads: 32\n\nnumber of transactions per client: 100000\n\nnumber of transactions actually processed: 6400000/6400000\n\nlatency average = 0.976 ms\n\ntps = 65584.664360 (including connections establishing)\n\ntps = 65594.330678 (excluding connections establishing)\n\n\n\nAs comparison (average of three runs with pgbench as above):\n\n\n\nnum clients     default config      custom config above\n\n\n\n10              11336               16848\n\n20              19528               30187\n\n30              25769               39430\n\n40              29792               50150\n\n50              31096               60133\n\n60              33900               64916\n\n70              34986               64308\n\n80              34170               63075\n\n90              35108               59910\n\n100             34864               58320\n\n120             35124               55414\n\n140             33403               53610\n\n\n\n(with fsync=off alone I almost get the TPS from the right already)\n\n\n\nFor `-S -M prepared` the TPS is ~700k and for `-S` ~500k but as the\n\napplication is very write heavy this is not really useful for me.\n\n\n\nWith the app the CPU is only at 25% load and the disks are also no\n\nproblem. 
For pgbench its about 75% CPU but still no disk bottleneck\n\n(about 5%).\n\n\n\nThere are also Grafana snapshots I created for the system (node-\n\nexporter) and postgres (prometheus-postgres-exporter) while running\n\nwith our application (same configuration as above). Both do not show\n\nany kind of bottleneck (except high amounts context switches and pages\n\nin/out)\n\n\n\nnode: \nhttps://147.87.255.221:3000/dashboard/snapshot/3eXe1sS3QDL6cbvI7HkPjYnjrVLCNOOF\n\n\npostgres: \nhttps://147.87.255.221:3000/dashboard/snapshot/wHkRphdr3D4k5kRckhn57Pc6ZD3st1x7\n\n\n\n\nI have also looked at postgresql's lock tables while running the above\n\nexperiment, but there is nothing which seemed strange to me. There are\n\nabout 300 locks but all are granted (select * from pg_locks). \n\n\n\nAlso, the following query:\n\n\n\nselect wait_event, count(*) from pg_stat_activity where state='idle in\n\ntransaction' group by wait_event;\n\n\n\ndid not show some contention there the output looks always similar to\n\nthis (80 clients):\n\n\n\n    wait_event                  | count \n\n--------------------------+-------\n\n ClientRead                     |     2\n\n SerializableFinishedList  |     1\n\n\n\nThanks to the slack channel I got a link to edb which used a more\n\npowerful server and they achieved also about 70k TPS but did not set\n\nfsync=off. So maybe they were limited by disk IO (just guessing, as\n\nunfortunately, it is not pointed out in the post).\n\n\n\nhttps://www.enterprisedb.com/blog/pgbench-performance-benchmark-postgresql-12-and-edb-advanced-server-12\n\n\n\n\nSo, my question is if anyone knows what could be the bottleneck, or if\n\nit is even possible to get more TPS in this write-heavy load.\n\n\n\n(dmesg does also not contain error messages which would point to a\n\nkernel misconfiguration)\n\n\n\nOptimally I would like to fully use the CPU and get about 3-4 times\n\nmore TPS (if even possible).\n\n\n\nThanks already for everyone's time and help.", "msg_date": "Thu, 31 Mar 2022 11:50:34 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql TPS Bottleneck" }, { "msg_contents": "<wakandavision 'at' outlook.com> writes:\n\n> Optimally I would like to fully use the CPU and get about 3-4 times\n> more TPS (if even possible).\n\nDisclaimer: I'm really not a pg performance expert.\nI don't understand your hope to fully use the CPU; if your\nscenario is disk-limited, which may very well be the case, then\nof course you cannot fully use the CPU. With synchronous commits\nand fsync, the system is probably spending time just waiting for\nthe disks to report the writes completion. 
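A quick way to see that while the benchmark runs (standard procps/sysstat
tools, pick whichever is installed):

  vmstat 1      # the wa column is the share of time spent in IO-wait
  iostat -x 1   # per-device %util and await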
Are iostat/vmstat\nshowing a lot of IO-wait?\nAlso, if you can live with a few lost transactions in case of\nserver crash, synchronous_commit=off is very ok and provides a\nlot of performance gain.\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:18:37 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "While setting these 2 parameters to off will make things go faster \n(especially for fsync), it is unrealistic to have these settings in a \nproduction environment, especiall fsync=off.� You might get by with \nsynchronous_commit=off, but with fsync=off you could end up with \ncorruption in your database.� synchronous_commit may not make anything \ngo faster just change where the time is being spent.\n\nRegards,\nMichael Vitale\n\n\[email protected] wrote on 3/31/2022 7:50 AM:\n> fsync = off\n> synchronous_commit = off\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 14:55:00 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "\n\nOn 3/31/22 13:50, [email protected] wrote:\n> Hi everyone,\n> \n> I am a bachelor's student and writing my thesis about the scaling and\n> performance of an application. The application is using postgresql as a\n> database but we can't scale any further currently as it seems postgres\n> is hitting the limit.\n> \n> With the application, as well as with pgbench, we don't get more than\n> (max) 70k TPS on postgres. But the servers' resources are not utilized\n> completely (more below).\n> \n> I've tried many different configurations but none of them had any major\n> performance impact (unless fsync and synchronous_commit = off).\n> \n> This is the (custom) configuration I am using:\n> \n> shared_buffers=65551953kB\n> effective_cache_size=147491895kB\n> huge_pages=on\n> min_wal_size=20GB\n> max_wal_size=200GB\n> wal_buffers=1GB\n> max_wal_senders=0\n> archive_mode=off\n> wal_level=minimal\n> work_mem=2GB\n> maintenance_work_mem=4GB\n> checkpoint_completion_target=0.9\n> checkpoint_timeout = 30min\n> random_page_cost=1.1\n> bgwriter_flush_after = 2MB\n> effective_io_concurrency = 200\n> # Disabled just for performance experiments\n> fsync = off\n> synchronous_commit = off\n> full_page_writes = on\n> max_worker_processes=64\n> max_parallel_workers=64\n> max_parallel_workers_per_gather=10\n> max_parallel_maintenance_workers=12\n> \n> The system is as follows:\n> \n> * 64 Cores (Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16\n> cores/CPU))\n> * 192 GiB RAM (12 * 16GiB DIMM DDR4 Synchronous Registered (Buffered)\n> 2666 MHz (0.4 ns))\n> * 2 * SSD SATA Samsung MZ7KM240HMHQ0D3 (one is used for the WAL and the\n> other for the data)\n> * 10 Gbps network link\n> * OS: Debian 11\n> * Postgres 13 from apt\n> \n> (I've also written a stackoverflow post about it -\n> https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> )\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> \t\n> performance - Postgresql bottleneck neither CPU, network nor I/O - Stack\n> Overflow\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> We are testing our application for performance, which is using\n> Postgresql 13 as a database. 
It is very insert and update heavy and we\n> cannot get more than 65k TPS on the database. But none of the m...\n> stackoverflow.com\n> \n> \n> \n> Below is just an example of the pgbench I ran:\n> \n> pgbench -i -s 50 -U postgres -h <DB_HOST> -d <DB_NAME>\n> pgbench -c 64 -j 32 -t 100000 -h <DB_HOST> -U postgres <DB_NAME>\n> \n\nI'd bet you need to use \"pgbench -N\" because the regular transaction\nupdates the \"branch\" table, and you only have 50 branches. Which\nprobably means a lot of conflicts and locking. The other thing you might\ntry is \"-M prepared\" which saves time on query planning.\n\nFWIW I really doubt \"fsync=off\" will give you any meaningful results.\n\nMaybe try assessing the hardware capability first, using tools like fio\nto measure IOPS with different workloads.\n\nThen try pgbench with a single client, and finally increase the number\nof clients and see how it behaves and compare it to what you expect.\n\nIn any case, every system has a bottleneck. You're clearly hitting one,\notherwise the numbers would go faster. Usually, it's either CPU bound,\nin which case \"perf top\" might tell us more, or it's IO bound, in which\ncase try e.g. \"iostat -x -k 1\" or something.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 31 Mar 2022 21:16:50 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "On 3/31/22 07:50, [email protected] wrote:\n> Hi everyone,\n>\n> I am a bachelor's student and writing my thesis about the scaling and\n> performance of an application. The application is using postgresql as a\n> database but we can't scale any further currently as it seems postgres\n> is hitting the limit.\n>\n> With the application, as well as with pgbench, we don't get more than\n> (max) 70k TPS on postgres. But the servers' resources are not utilized\n> completely (more below).\n\nI would try monitoring using \"perf top\" and \"atop -d\" to see what is \ngoing on on the system. Also, try using sar to figure out what's going \non. Are you paging, waiting for I/O or having some other kind of \nbottleneck. Once you figure where is your system spending time, you can \naddress the problem. In addition to that, analyze the log files with \npgbadger to find out which queries are time consuming and try optimizing \nthem.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n\n\n\n\n\n\nOn 3/31/22 07:50,\n [email protected] wrote:\n\n\nHi everyone,\n\n\n\nI am a bachelor's student and writing my thesis about the\n scaling and\n\nperformance of an application. The application is using\n postgresql as a\n\ndatabase but we can't scale any further currently as it seems\n postgres\n\nis hitting the limit.\n\n\n\nWith the application, as well as with pgbench, we don't get\n more than\n\n(max) 70k TPS on postgres. But the servers' resources are not\n utilized\n\ncompletely (more below).\n\nI would try monitoring using \"perf top\" and \"atop -d\" to see what\n is going on on the system. Also, try using sar to figure out\n what's going on. Are you paging, waiting for I/O or having some\n other kind of bottleneck. Once you figure where is your system\n spending time, you can address the problem. 
In addition to that,\n analyze the log files with pgbadger to find out which queries are\n time consuming and try optimizing them.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com", "msg_date": "Thu, 31 Mar 2022 22:40:25 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "Hi, thanks for your answer.\n\nWe have a Grafana instance monitoring all those metrics, no one I asked so far could identify an obvious bottleneck.\nHowever, I have done further experiments to see if we are missing something.\n\nWhile running the benchmark with our application I've run tools on the\nDB node to smoke up the resources. These were cpuburn, iperf and fio.\nWhile cpuburn did result in a small drop of Postgres TPS it was nothing\nwhich was not expected. However, iperf and fio did not have any impact\nat all (except iperf when more than our 10Gbps are sent - clearly). The\ndisks were utilized 100% but Postgres stayed at about 65k TPS.\n\nThe next thing I did was starting two independent Postgres instances on\nthe same server and run independent client applications against each of\nthem. This resulted in our application getting almost double of the TPS\ncompared to running a single instance (from 13k to 23k) - Each Postgres\ninstance had about 45k TPS which did not increase (?).\n\nI think what's also interesting is that our DB server has the TPS peak\nwhen using about 80 clients (more results in the TPS going down again),\nwhile when I search the internet most benchmarks peak at about 400-600\nclients.\n\nDoes anyone have an idea what might be the problem?\nMaybe I am missing a kernel/Postgres configuration parameter?\n________________________________\nFrom: Tomas Vondra <[email protected]>\nSent: Thursday, March 31, 2022 9:16 PM\nTo: [email protected] <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Postgresql TPS Bottleneck\n\n\n\nOn 3/31/22 13:50, [email protected] wrote:\n> Hi everyone,\n>\n> I am a bachelor's student and writing my thesis about the scaling and\n> performance of an application. The application is using postgresql as a\n> database but we can't scale any further currently as it seems postgres\n> is hitting the limit.\n>\n> With the application, as well as with pgbench, we don't get more than\n> (max) 70k TPS on postgres. 
But the servers' resources are not utilized\n> completely (more below).\n>\n> I've tried many different configurations but none of them had any major\n> performance impact (unless fsync and synchronous_commit = off).\n>\n> This is the (custom) configuration I am using:\n>\n> shared_buffers=65551953kB\n> effective_cache_size=147491895kB\n> huge_pages=on\n> min_wal_size=20GB\n> max_wal_size=200GB\n> wal_buffers=1GB\n> max_wal_senders=0\n> archive_mode=off\n> wal_level=minimal\n> work_mem=2GB\n> maintenance_work_mem=4GB\n> checkpoint_completion_target=0.9\n> checkpoint_timeout = 30min\n> random_page_cost=1.1\n> bgwriter_flush_after = 2MB\n> effective_io_concurrency = 200\n> # Disabled just for performance experiments\n> fsync = off\n> synchronous_commit = off\n> full_page_writes = on\n> max_worker_processes=64\n> max_parallel_workers=64\n> max_parallel_workers_per_gather=10\n> max_parallel_maintenance_workers=12\n>\n> The system is as follows:\n>\n> * 64 Cores (Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16\n> cores/CPU))\n> * 192 GiB RAM (12 * 16GiB DIMM DDR4 Synchronous Registered (Buffered)\n> 2666 MHz (0.4 ns))\n> * 2 * SSD SATA Samsung MZ7KM240HMHQ0D3 (one is used for the WAL and the\n> other for the data)\n> * 10 Gbps network link\n> * OS: Debian 11\n> * Postgres 13 from apt\n>\n> (I've also written a stackoverflow post about it -\n> https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> )\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n>\n> performance - Postgresql bottleneck neither CPU, network nor I/O - Stack\n> Overflow\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> We are testing our application for performance, which is using\n> Postgresql 13 as a database. It is very insert and update heavy and we\n> cannot get more than 65k TPS on the database. But none of the m...\n> stackoverflow.com\n>\n>\n>\n> Below is just an example of the pgbench I ran:\n>\n> pgbench -i -s 50 -U postgres -h <DB_HOST> -d <DB_NAME>\n> pgbench -c 64 -j 32 -t 100000 -h <DB_HOST> -U postgres <DB_NAME>\n>\n\nI'd bet you need to use \"pgbench -N\" because the regular transaction\nupdates the \"branch\" table, and you only have 50 branches. Which\nprobably means a lot of conflicts and locking. The other thing you might\ntry is \"-M prepared\" which saves time on query planning.\n\nFWIW I really doubt \"fsync=off\" will give you any meaningful results.\n\nMaybe try assessing the hardware capability first, using tools like fio\nto measure IOPS with different workloads.\n\nThen try pgbench with a single client, and finally increase the number\nof clients and see how it behaves and compare it to what you expect.\n\nIn any case, every system has a bottleneck. You're clearly hitting one,\notherwise the numbers would go faster. Usually, it's either CPU bound,\nin which case \"perf top\" might tell us more, or it's IO bound, in which\ncase try e.g. 
\"iostat -x -k 1\" or something.\n\nregards\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\n\nHi, thanks for your answer.\n\n\n\n\nWe have a Grafana instance monitoring all those metrics, no one I asked so far could identify an obvious bottleneck.\n\n\n\nHowever, I have done further experiments to see if we are missing something.\n\n\n\n\nWhile running the benchmark with our application I've run tools on the\n\nDB node to smoke up the resources. These were cpuburn, iperf and fio.\n\nWhile cpuburn did result in a small drop of Postgres TPS it was nothing\n\nwhich was not expected. However, iperf and fio did not have any impact\n\nat all (except iperf when more than our 10Gbps are sent - clearly). The\n\ndisks were utilized 100% but Postgres stayed at about 65k TPS.\n\n\n\n\nThe next thing I did was starting two independent Postgres instances on\n\nthe same server and run independent client applications against each of\n\nthem. This resulted in our application getting almost double of the TPS\n\ncompared to running a single instance (from 13k to 23k) - Each Postgres\n\ninstance had about 45k TPS which did not increase (?).\n\n\n\n\nI think what's also interesting is that our DB server has the TPS peak\n\nwhen using about 80 clients (more results in the TPS going down again),\n\nwhile when I search the internet most benchmarks peak at about 400-600\n\nclients.\n\n\n\n\nDoes anyone have an idea what might be the problem? \nMaybe I am missing a kernel/Postgres configuration parameter?\n\n\n\nFrom: Tomas Vondra <[email protected]>\nSent: Thursday, March 31, 2022 9:16 PM\nTo: [email protected] <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Postgresql TPS Bottleneck\n \n\n\n\n\nOn 3/31/22 13:50, [email protected] wrote:\n> Hi everyone,\n> \n> I am a bachelor's student and writing my thesis about the scaling and\n> performance of an application. The application is using postgresql as a\n> database but we can't scale any further currently as it seems postgres\n> is hitting the limit.\n> \n> With the application, as well as with pgbench, we don't get more than\n> (max) 70k TPS on postgres. 
But the servers' resources are not utilized\n> completely (more below).\n> \n> I've tried many different configurations but none of them had any major\n> performance impact (unless fsync and synchronous_commit = off).\n> \n> This is the (custom) configuration I am using:\n> \n> shared_buffers=65551953kB\n> effective_cache_size=147491895kB\n> huge_pages=on\n> min_wal_size=20GB\n> max_wal_size=200GB\n> wal_buffers=1GB\n> max_wal_senders=0\n> archive_mode=off\n> wal_level=minimal\n> work_mem=2GB\n> maintenance_work_mem=4GB\n> checkpoint_completion_target=0.9\n> checkpoint_timeout = 30min\n> random_page_cost=1.1\n> bgwriter_flush_after = 2MB\n> effective_io_concurrency = 200\n> # Disabled just for performance experiments\n> fsync = off\n> synchronous_commit = off\n> full_page_writes = on\n> max_worker_processes=64\n> max_parallel_workers=64\n> max_parallel_workers_per_gather=10\n> max_parallel_maintenance_workers=12\n> \n> The system is as follows:\n> \n> * 64 Cores (Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16\n> cores/CPU))\n> * 192 GiB RAM (12 * 16GiB DIMM DDR4 Synchronous Registered (Buffered)\n> 2666 MHz (0.4 ns))\n> * 2 * SSD SATA Samsung MZ7KM240HMHQ0D3 (one is used for the WAL and the\n> other for the data)\n> * 10 Gbps network link\n> * OS: Debian 11\n> * Postgres 13 from apt\n> \n> (I've also written a stackoverflow post about it -\n> \nhttps://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> )\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n>        \n> performance - Postgresql bottleneck neither CPU, network nor I/O - Stack\n> Overflow\n> <https://stackoverflow.com/questions/71631348/postgresql-bottleneck-neither-cpu-network-nor-i-o>\n> We are testing our application for performance, which is using\n> Postgresql 13 as a database. It is very insert and update heavy and we\n> cannot get more than 65k TPS on the database. But none of the m...\n> stackoverflow.com\n> \n> \n> \n> Below is just an example of the pgbench I ran:\n> \n> pgbench -i -s 50 -U postgres -h <DB_HOST> -d <DB_NAME>\n> pgbench -c 64 -j 32 -t 100000 -h <DB_HOST> -U postgres <DB_NAME>\n> \n\nI'd bet you need to use \"pgbench -N\" because the regular transaction\nupdates the \"branch\" table, and you only have 50 branches. Which\nprobably means a lot of conflicts and locking. The other thing you might\ntry is \"-M prepared\" which saves time on query planning.\n\nFWIW I really doubt \"fsync=off\" will give you any meaningful results.\n\nMaybe try assessing the hardware capability first, using tools like fio\nto measure IOPS with different workloads.\n\nThen try pgbench with a single client, and finally increase the number\nof clients and see how it behaves and compare it to what you expect.\n\nIn any case, every system has a bottleneck. You're clearly hitting one,\notherwise the numbers would go faster. Usually, it's either CPU bound,\nin which case \"perf top\" might tell us more, or it's IO bound, in which\ncase try e.g. 
\"iostat -x -k 1\" or something.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Apr 2022 06:49:16 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "On Wed, Apr 20, 2022 at 5:13 AM <[email protected]> wrote:\n\n>\n> The next thing I did was starting two independent Postgres instances on\n> the same server and run independent client applications against each of\n> them. This resulted in our application getting almost double of the TPS\n> compared to running a single instance (from 13k to 23k) - Each Postgres\n> instance had about 45k TPS which did not increase (?).\n>\n\nHow could that be? Isn't there a one to one correspondence between app\nprogress and PostgreSQL transactions? How could one almost double while\nthe other did not increase? Anyway, 2x45 does seem like an increase\n(smallish) over 65.\n\nYour bottleneck for pgbench may be IPC/context switches. I noticed that -S\ndid about 7 times more than the default, and it only makes one round trip\nto the database while the default makes 7.\n\nYou could package up the different queries made by the default transaction\ninto one function call, in order to do the same thing but with fewer round\ntrips to the database. This would be an easy way to see if my theory is\ntrue. If it is, I don't know what that would mean for your app though, as\nwe know nothing about its structure.\n\nI have a patch handy (attached) which implements this feature as the\nbuiltin transaction \"-b tpcb-func\". If you don't want to recompile\npgbench, you could dissect the patch to reimplement the same thing as a -f\nstyle transaction instead.\n\nNote that packaging it up this way does violate the spirit of the\nbenchmark, as clearly someone is supposed to look at the results of the\nfirst select before deciding to proceed with the rest of the transaction.\nBut you don't seem very interested in the spirit of the tpc-b benchmark,\njust in using it as a tool to track down a bottleneck.\n\nCheers,\n\nJeff", "msg_date": "Wed, 20 Apr 2022 11:49:30 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "Clearly, I have only supplied half of the information there. I'm really sorry about that. The TPS measurement of the application does in no way correspond to the TPS of Postgres.\nThey are measured completely different but it's the measure we actually are interested in - as we want to assess the scalability of the application.\n\nWhat I wanted to show is that the server we are hosting Postgres on is not bottlenecked (in an obvious way), as running two instances in parallel on the same server gives us almost double\nthe performance in our application and double the resource usage on the DB server. But what actually is strange(?), is that the TPS of Postgres does not change much, i.e. 
it's just 'distributed' to the two instances.\n\nIt would seem like our application could not handle more throughput, but I did the same with three instances, where we stayed again with 'only' double the performance and the TPS of Postgres distributed to three instances\n(each client application running on an independent node).\n\nI'm really getting frustrated here as I (and no one I asked yet) has an explanation for this behavior.\n________________________________\nFrom: Jeff Janes <[email protected]>\nSent: Wednesday, April 20, 2022 5:49 PM\nTo: [email protected] <[email protected]>\nCc: Tomas Vondra <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Postgresql TPS Bottleneck\n\nOn Wed, Apr 20, 2022 at 5:13 AM <[email protected]<mailto:[email protected]>> wrote:\n\nThe next thing I did was starting two independent Postgres instances on\nthe same server and run independent client applications against each of\nthem. This resulted in our application getting almost double of the TPS\ncompared to running a single instance (from 13k to 23k) - Each Postgres\ninstance had about 45k TPS which did not increase (?).\n\nHow could that be? Isn't there a one to one correspondence between app progress and PostgreSQL transactions? How could one almost double while the other did not increase? Anyway, 2x45 does seem like an increase (smallish) over 65.\n\nYour bottleneck for pgbench may be IPC/context switches. I noticed that -S did about 7 times more than the default, and it only makes one round trip to the database while the default makes 7.\n\nYou could package up the different queries made by the default transaction into one function call, in order to do the same thing but with fewer round trips to the database. This would be an easy way to see if my theory is true. If it is, I don't know what that would mean for your app though, as we know nothing about its structure.\n\nI have a patch handy (attached) which implements this feature as the builtin transaction \"-b tpcb-func\". If you don't want to recompile pgbench, you could dissect the patch to reimplement the same thing as a -f style transaction instead.\n\nNote that packaging it up this way does violate the spirit of the benchmark, as clearly someone is supposed to look at the results of the first select before deciding to proceed with the rest of the transaction. But you don't seem very interested in the spirit of the tpc-b benchmark, just in using it as a tool to track down a bottleneck.\n\nCheers,\n\nJeff\n\n\n\n\n\n\n\n\nClearly, I have only supplied half of the information there. I'm really sorry about that. The TPS measurement of the application does in no way correspond to the TPS of Postgres.\n\nThey are measured completely different but it's the measure we actually are interested in - as we want to assess the scalability of the application.\n\n\n\n\n\n\nWhat I wanted to show is that the server we are hosting Postgres on is not bottlenecked (in an obvious way), as running two instances in parallel on the same server gives us almost double\n\nthe performance in our application and double the resource usage on the DB server. But what actually is strange(?), is that the TPS of Postgres does not change much, i.e. 
it's just 'distributed' to the two instances.\n\n\n\n\n\nIt would seem like our application could not handle more throughput, but I did the same with three instances, where we stayed again with 'only' double the performance and the TPS of Postgres distributed to three instances\n\n(each client application running on an independent node).\n\n\n\n\nI'm really getting frustrated here as I (and no one I asked yet) has an explanation for this behavior.\n\n\nFrom: Jeff Janes <[email protected]>\nSent: Wednesday, April 20, 2022 5:49 PM\nTo: [email protected] <[email protected]>\nCc: Tomas Vondra <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Postgresql TPS Bottleneck\n \n\n\n\nOn Wed, Apr 20, 2022 at 5:13 AM <[email protected]> wrote:\n\n\n\n\n\nThe next thing I did was starting two independent Postgres instances on\nthe same server and run independent client applications against each of\nthem. This resulted in our application getting almost double of the TPS\ncompared to running a single instance (from 13k to 23k) - Each Postgres\ninstance had about 45k TPS which did not increase (?).\n\n\n\n\n\nHow could that be?  Isn't there a one to one correspondence between app progress and PostgreSQL transactions?  How could one almost double while the other did not increase?  Anyway, 2x45 does seem like an increase (smallish) over 65.\n\n\nYour bottleneck for pgbench may be IPC/context switches.  I noticed that -S did about 7 times more than the default, and it only makes one round trip to the database while the default makes 7.\n\n\nYou could package up the different queries made by the default transaction into one function call, in order to do the same thing but with fewer round trips to the database. This would be an easy way to see if my theory is true.  If it is, I don't know\n what that would mean for your app though, as we know nothing about its structure.\n\n\nI have a patch handy (attached) which implements this feature as the builtin transaction \"-b tpcb-func\".  If you don't want to recompile pgbench, you could dissect the patch to reimplement the same thing as a -f style transaction instead.\n\n\nNote that packaging it up this way does violate the spirit of the benchmark, as clearly someone is supposed to look at the results of the first select before deciding to proceed with the rest of the transaction.  But you don't seem very interested in the\n spirit of the tpc-b benchmark, just in using it as a tool to track down a bottleneck.\n\n\nCheers,\n\n\nJeff", "msg_date": "Wed, 20 Apr 2022 16:35:07 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql TPS Bottleneck" }, { "msg_contents": "Ypu wouldn't get an increasing running two instances on the same server.\nDistributed database severs is a complex application and tuning it will\ndepend on storage and CPU capacity. It could be as simple as a bus. Are you\nrunning this locally or on the cloud? Are you running this on a distributed\nfile system or across a network? There are a dozen different reasons why a\ndatabase would not be using 100% of capacity from indexing to disk or bus\nbound or network bound.\n\nThanks,\nBen\n\nOn Wed, Apr 20, 2022, 1:27 PM <[email protected]> wrote:\n\n> Clearly, I have only supplied half of the information there. I'm really\n> sorry about that. 
The TPS measurement of the application does in no way\n> correspond to the TPS of Postgres.\n> They are measured completely different but it's the measure we actually\n> are interested in - as we want to assess the scalability of the\n> application.\n>\n> What I wanted to show is that the server we are hosting Postgres on is not\n> bottlenecked (in an obvious way), as running two instances in parallel on\n> the same server gives us almost double\n> the performance in our application and double the resource usage on the DB\n> server. But what actually is strange(?), is that the TPS of Postgres does\n> not change much, i.e. it's just 'distributed' to the two instances.\n>\n> It would seem like our application could not handle more throughput, but I\n> did the same with three instances, where we stayed again with 'only' double\n> the performance and the TPS of Postgres distributed to three instances\n> (each client application running on an independent node).\n>\n> I'm really getting frustrated here as I (and no one I asked yet) has an\n> explanation for this behavior.\n> ------------------------------\n> *From:* Jeff Janes <[email protected]>\n> *Sent:* Wednesday, April 20, 2022 5:49 PM\n> *To:* [email protected] <[email protected]>\n> *Cc:* Tomas Vondra <[email protected]>;\n> [email protected] <\n> [email protected]>\n> *Subject:* Re: Postgresql TPS Bottleneck\n>\n> On Wed, Apr 20, 2022 at 5:13 AM <[email protected]> wrote:\n>\n>\n> The next thing I did was starting two independent Postgres instances on\n> the same server and run independent client applications against each of\n> them. This resulted in our application getting almost double of the TPS\n> compared to running a single instance (from 13k to 23k) - Each Postgres\n> instance had about 45k TPS which did not increase (?).\n>\n>\n> How could that be? Isn't there a one to one correspondence between app\n> progress and PostgreSQL transactions? How could one almost double while\n> the other did not increase? Anyway, 2x45 does seem like an increase\n> (smallish) over 65.\n>\n> Your bottleneck for pgbench may be IPC/context switches. I noticed that\n> -S did about 7 times more than the default, and it only makes one round\n> trip to the database while the default makes 7.\n>\n> You could package up the different queries made by the default transaction\n> into one function call, in order to do the same thing but with fewer round\n> trips to the database. This would be an easy way to see if my theory is\n> true. If it is, I don't know what that would mean for your app though, as\n> we know nothing about its structure.\n>\n> I have a patch handy (attached) which implements this feature as the\n> builtin transaction \"-b tpcb-func\". If you don't want to recompile\n> pgbench, you could dissect the patch to reimplement the same thing as a -f\n> style transaction instead.\n>\n> Note that packaging it up this way does violate the spirit of the\n> benchmark, as clearly someone is supposed to look at the results of the\n> first select before deciding to proceed with the rest of the transaction.\n> But you don't seem very interested in the spirit of the tpc-b benchmark,\n> just in using it as a tool to track down a bottleneck.\n>\n> Cheers,\n>\n> Jeff\n>\n\nYpu wouldn't get an increasing running two instances on the same server. Distributed database severs is a complex application and tuning it will depend on storage and CPU capacity. It could be as simple as a bus. Are you running this locally or on the cloud? 
Are you running this on a distributed file system or across a network? There are a dozen different reasons why a database would not be using 100% of capacity from indexing to disk or bus bound or network bound.Thanks,BenOn Wed, Apr 20, 2022, 1:27 PM <[email protected]> wrote:\n\n\nClearly, I have only supplied half of the information there. I'm really sorry about that. The TPS measurement of the application does in no way correspond to the TPS of Postgres.\n\nThey are measured completely different but it's the measure we actually are interested in - as we want to assess the scalability of the application.\n\n\n\n\n\n\nWhat I wanted to show is that the server we are hosting Postgres on is not bottlenecked (in an obvious way), as running two instances in parallel on the same server gives us almost double\n\nthe performance in our application and double the resource usage on the DB server. But what actually is strange(?), is that the TPS of Postgres does not change much, i.e. it's just 'distributed' to the two instances.\n\n\n\n\n\nIt would seem like our application could not handle more throughput, but I did the same with three instances, where we stayed again with 'only' double the performance and the TPS of Postgres distributed to three instances\n\n(each client application running on an independent node).\n\n\n\n\nI'm really getting frustrated here as I (and no one I asked yet) has an explanation for this behavior.\n\n\nFrom: Jeff Janes <[email protected]>\nSent: Wednesday, April 20, 2022 5:49 PM\nTo: [email protected] <[email protected]>\nCc: Tomas Vondra <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Postgresql TPS Bottleneck\n \n\n\n\nOn Wed, Apr 20, 2022 at 5:13 AM <[email protected]> wrote:\n\n\n\n\n\nThe next thing I did was starting two independent Postgres instances on\nthe same server and run independent client applications against each of\nthem. This resulted in our application getting almost double of the TPS\ncompared to running a single instance (from 13k to 23k) - Each Postgres\ninstance had about 45k TPS which did not increase (?).\n\n\n\n\n\nHow could that be?  Isn't there a one to one correspondence between app progress and PostgreSQL transactions?  How could one almost double while the other did not increase?  Anyway, 2x45 does seem like an increase (smallish) over 65.\n\n\nYour bottleneck for pgbench may be IPC/context switches.  I noticed that -S did about 7 times more than the default, and it only makes one round trip to the database while the default makes 7.\n\n\nYou could package up the different queries made by the default transaction into one function call, in order to do the same thing but with fewer round trips to the database. This would be an easy way to see if my theory is true.  If it is, I don't know\n what that would mean for your app though, as we know nothing about its structure.\n\n\nI have a patch handy (attached) which implements this feature as the builtin transaction \"-b tpcb-func\".  If you don't want to recompile pgbench, you could dissect the patch to reimplement the same thing as a -f style transaction instead.\n\n\nNote that packaging it up this way does violate the spirit of the benchmark, as clearly someone is supposed to look at the results of the first select before deciding to proceed with the rest of the transaction.  
But you don't seem very interested in the\n spirit of the tpc-b benchmark, just in using it as a tool to track down a bottleneck.\n\n\nCheers,\n\n\nJeff", "msg_date": "Wed, 20 Apr 2022 14:18:22 -0400", "msg_from": "Benedict Holland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql TPS Bottleneck" } ]
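A practical way to test Jeff's round-trip theory, without recompiling pgbench, is to fold the whole tpcb-like transaction into a single server-side function and drive it from a custom script, so each transaction costs one client/server round trip instead of seven. The sketch below is only an illustration of that idea, not the patch attached to Jeff's mail; the function name pgbench_tpcb_func and the file name tpcb_func.sql are invented for the example, while the tables and columns are the standard ones created by pgbench -i. Note that scale detection from pgbench_branches happens only for the built-in scripts, so pass -s explicitly to make :scale match the initialization step.

-- One round trip per transaction instead of seven (hypothetical helper, not Jeff's patch).
CREATE OR REPLACE FUNCTION pgbench_tpcb_func(p_aid int, p_bid int, p_tid int, p_delta int)
RETURNS int LANGUAGE plpgsql AS $$
DECLARE
    v_balance int;
BEGIN
    -- account update and the balance lookup folded into one statement
    UPDATE pgbench_accounts SET abalance = abalance + p_delta
        WHERE aid = p_aid
        RETURNING abalance INTO v_balance;
    UPDATE pgbench_tellers  SET tbalance = tbalance + p_delta WHERE tid = p_tid;
    UPDATE pgbench_branches SET bbalance = bbalance + p_delta WHERE bid = p_bid;
    INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
        VALUES (p_tid, p_bid, p_aid, p_delta, CURRENT_TIMESTAMP);
    RETURN v_balance;
END;
$$;

-- tpcb_func.sql, run as:
--   pgbench -n -s 50 -M prepared -c 64 -j 32 -h <DB_HOST> -U postgres -f tpcb_func.sql <DB_NAME>
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
SELECT pgbench_tpcb_func(:aid, :bid, :tid, :delta);

If this script pushes well past the 65-70k TPS ceiling while the stock tpcb-like run does not, the limiting factor is per-statement round-trip and context-switch overhead rather than disk or CPU, which would also fit the earlier observation that saturating the disks with fio did not move the Postgres TPS.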
[ { "msg_contents": "I have the following query:\n\n *explain* (*analyze*, costs, timing) *SELECT* rr.* *FROM* rpc rpc\n\n *INNER* *JOIN* rr rr\n\n *ON* rr.uuid = rpc.rr_id\n\n *INNER* *JOIN* rs rs\n\n *ON* rs.r_id = rpc.r_id\n\n *INNER* *JOIN* *role* r\n\n *ON* r.uuid = rs.r_id\n\n *LEFT* *JOIN* spc spc\n\n *ON* spc.rr_id = rpc.rr_id\n\n *WHERE* rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601'\n\n *and* spc.s_id =\n'caa767b8-8371-43a3-aa11-d1dba1893601'\n\n *and* spc.rd_id =\n'9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n\n *AND* rpc.rd_id =\n'9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n\n *AND* rpc.c_id =\n'9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n\n *and* spc.c_id =\n'9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n\n *AND* rr.b_id = 'xyz'\n\n *AND* (('GLOBAL' = ' NO_PROJECT_ID + \"' ) *OR* (rr.\np_id = 'GLOBAL'))\n\n *AND* spc.permission_type *IS* *null* *and* spc.\nis_active = *true*\n\n *AND* rpc.is_active = *true* *AND* rr.is_active =\n*true* *AND* rs.is_active = *true* *AND* r.is_active = *true*\n\n\nI don't think it is super complex. But when I run explain analyze on this I\nget the following:\n\nPlanning Time: 578.068 ms\nExecution Time: 0.113 ms\n\nThis is a huge deviation in planning vs. execution times. The explain plan\nlooks good since the execution time is < 1ms. It doesn't matter though\nsince the planning time is high. I don't see anything in the explain\nanalyze output that tells me why the planning time is high. On average, the\ntables being joined have 3 indexes/table. How can I debug this?\n\nBeen stuck on this for weeks. Any help is appreciated. Thank you!\n\nSaurabh\n\nI have the following query:\n explain (analyze, costs, timing) SELECT  rr.* FROM rpc rpc\n                       INNER JOIN rr rr\n                           ON rr.uuid = rpc.rr_id\n                       INNER JOIN rs rs\n                           ON rs.r_id = rpc.r_id\n                       INNER JOIN role r\n                           ON r.uuid = rs.r_id\n                       LEFT JOIN spc spc\n                           ON spc.rr_id = rpc.rr_id\n                   WHERE rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.s_id  = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.rd_id  = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.rd_id = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.c_id = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       and spc.c_id  = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       AND rr.b_id = 'xyz'\n                       AND (('GLOBAL' = ' NO_PROJECT_ID + \"' ) OR (rr.p_id = 'GLOBAL'))\n                       AND spc.permission_type IS null and spc.is_active  = true\n                       AND rpc.is_active = true AND rr.is_active = true AND rs.is_active = true AND r.is_active = true I don't think it is super complex. But when I run explain analyze on this I get the following:Planning Time: 578.068 msExecution Time: 0.113 msThis is a huge deviation in planning vs. execution times. The explain plan looks good since the execution time is < 1ms. It doesn't matter though since the planning time is high. I don't see anything in the explain analyze output that tells me why the planning time is high. On average, the tables being joined have 3 indexes/table. How can I debug this?Been stuck on this for weeks. Any help is appreciated. 
Thank you!Saurabh", "msg_date": "Wed, 6 Apr 2022 17:26:59 -0700", "msg_from": "Saurabh Sehgal <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Planning Times" }, { "msg_contents": "To clarify - I have run \"vaccum full\" and \"vacuum analyze\" on every single\ntable involved in the query and the planning times are still around the\nsame and were not impacted.\n\nOn Wed, Apr 6, 2022 at 5:26 PM Saurabh Sehgal <[email protected]> wrote:\n\n>\n> I have the following query:\n>\n> *explain* (*analyze*, costs, timing) *SELECT* rr.* *FROM* rpc rpc\n>\n> *INNER* *JOIN* rr rr\n>\n> *ON* rr.uuid = rpc.rr_id\n>\n> *INNER* *JOIN* rs rs\n>\n> *ON* rs.r_id = rpc.r_id\n>\n> *INNER* *JOIN* *role* r\n>\n> *ON* r.uuid = rs.r_id\n>\n> *LEFT* *JOIN* spc spc\n>\n> *ON* spc.rr_id = rpc.rr_id\n>\n> *WHERE* rs.s_id =\n> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>\n> *and* spc.s_id =\n> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>\n> *and* spc.rd_id =\n> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>\n> *AND* rpc.rd_id =\n> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>\n> *AND* rpc.c_id =\n> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>\n> *and* spc.c_id =\n> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>\n> *AND* rr.b_id = 'xyz'\n>\n> *AND* (('GLOBAL' = ' NO_PROJECT_ID + \"' ) *OR* (rr.\n> p_id = 'GLOBAL'))\n>\n> *AND* spc.permission_type *IS* *null* *and* spc.\n> is_active = *true*\n>\n> *AND* rpc.is_active = *true* *AND* rr.is_active =\n> *true* *AND* rs.is_active = *true* *AND* r.is_active = *true*\n>\n>\n> I don't think it is super complex. But when I run explain analyze on this\n> I get the following:\n>\n> Planning Time: 578.068 ms\n> Execution Time: 0.113 ms\n>\n> This is a huge deviation in planning vs. execution times. The explain plan\n> looks good since the execution time is < 1ms. It doesn't matter though\n> since the planning time is high. I don't see anything in the explain\n> analyze output that tells me why the planning time is high. On average, the\n> tables being joined have 3 indexes/table. How can I debug this?\n>\n> Been stuck on this for weeks. Any help is appreciated. Thank you!\n>\n> Saurabh\n>\n\n\n-- \nSaurabh Sehgal\nE-mail: [email protected]\nPhone: 425-269-1324\nLinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/\n\nTo clarify -  I have run \"vaccum full\" and \"vacuum analyze\" on every single table involved in the query and the planning times are still around the same and were not impacted. 
On Wed, Apr 6, 2022 at 5:26 PM Saurabh Sehgal <[email protected]> wrote:I have the following query:\n explain (analyze, costs, timing) SELECT  rr.* FROM rpc rpc\n                       INNER JOIN rr rr\n                           ON rr.uuid = rpc.rr_id\n                       INNER JOIN rs rs\n                           ON rs.r_id = rpc.r_id\n                       INNER JOIN role r\n                           ON r.uuid = rs.r_id\n                       LEFT JOIN spc spc\n                           ON spc.rr_id = rpc.rr_id\n                   WHERE rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.s_id  = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.rd_id  = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.rd_id = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.c_id = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       and spc.c_id  = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       AND rr.b_id = 'xyz'\n                       AND (('GLOBAL' = ' NO_PROJECT_ID + \"' ) OR (rr.p_id = 'GLOBAL'))\n                       AND spc.permission_type IS null and spc.is_active  = true\n                       AND rpc.is_active = true AND rr.is_active = true AND rs.is_active = true AND r.is_active = true I don't think it is super complex. But when I run explain analyze on this I get the following:Planning Time: 578.068 msExecution Time: 0.113 msThis is a huge deviation in planning vs. execution times. The explain plan looks good since the execution time is < 1ms. It doesn't matter though since the planning time is high. I don't see anything in the explain analyze output that tells me why the planning time is high. On average, the tables being joined have 3 indexes/table. How can I debug this?Been stuck on this for weeks. Any help is appreciated. Thank you!Saurabh\n-- Saurabh SehgalE-mail:     [email protected]:     425-269-1324LinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/", "msg_date": "Wed, 6 Apr 2022 17:40:30 -0700", "msg_from": "Saurabh Sehgal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Planning Times" }, { "msg_contents": "On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]> wrote:\n\n>\n> I have the following query:\n>\n> *explain* (*analyze*, costs, timing) *SELECT* rr.* *FROM* rpc rpc\n>\n> *INNER* *JOIN* rr rr\n>\n> *ON* rr.uuid = rpc.rr_id\n>\n> *INNER* *JOIN* rs rs\n>\n> *ON* rs.r_id = rpc.r_id\n>\n> *INNER* *JOIN* *role* r\n>\n> *ON* r.uuid = rs.r_id\n>\n> *LEFT* *JOIN* spc spc\n>\n> *ON* spc.rr_id = rpc.rr_id\n>\n> *WHERE* rs.s_id =\n> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>\n> *and* spc.s_id =\n> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>\n> *and* spc.rd_id =\n> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>\n> *AND* rpc.rd_id =\n> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>\n> *AND* rpc.c_id =\n> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>\n> *and* spc.c_id =\n> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>\n> *AND* rr.b_id = 'xyz'\n>\n> *AND* (('GLOBAL' = ' NO_PROJECT_ID + \"' ) *OR* (rr.\n> p_id = 'GLOBAL'))\n>\n> *AND* spc.permission_type *IS* *null* *and* spc.\n> is_active = *true*\n>\n> *AND* rpc.is_active = *true* *AND* rr.is_active =\n> *true* *AND* rs.is_active = *true* *AND* r.is_active = *true*\n>\n>\n> I don't think it is super complex. But when I run explain analyze on this\n> I get the following:\n>\n> Planning Time: 578.068 ms\n> Execution Time: 0.113 ms\n>\n> This is a huge deviation in planning vs. 
execution times. The explain plan\n> looks good since the execution time is < 1ms. It doesn't matter though\n> since the planning time is high. I don't see anything in the explain\n> analyze output that tells me why the planning time is high. On average, the\n> tables being joined have 3 indexes/table. How can I debug this?\n>\n> Been stuck on this for weeks. Any help is appreciated. Thank you!\n>\n>\nThe fundamental issue here is that you have basically 12 conditions across\n5 tables that need to be evaluated to determine which one of the 1,680\npossible join orders is the most efficient. The fact that you have 5\nis_active checks and 3 pairs of matching UUID checks seems odd and if you\ncould reduce those 11 to 4 I suspect you'd get a better planning time.\nThough it also may produce an inferior plan...thus consider the following\noption:\n\nAssuming the ideal plan shape for your data doesn't change you can read the\nfollowing and basically tell the planner to stop trying so hard and just\ntrust the join order that exists in the query.\n\nhttps://www.postgresql.org/docs/current/explicit-joins.html\n\nLastly, if you can leverage prepared statements you can at least amortize\nthe cost (depending on whether a generic plan performs sufficiently\nquickly).\n\nI'll admit I'm no expert at this. I'd probably just follow the\njoin_collapse_limit advice and move on if it works. Maybe adding a\nperiodic check to see if anything has changed.\nDavid J.\n\nOn Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]> wrote:I have the following query:\n explain (analyze, costs, timing) SELECT  rr.* FROM rpc rpc\n                       INNER JOIN rr rr\n                           ON rr.uuid = rpc.rr_id\n                       INNER JOIN rs rs\n                           ON rs.r_id = rpc.r_id\n                       INNER JOIN role r\n                           ON r.uuid = rs.r_id\n                       LEFT JOIN spc spc\n                           ON spc.rr_id = rpc.rr_id\n                   WHERE rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.s_id  = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.rd_id  = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.rd_id = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.c_id = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       and spc.c_id  = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       AND rr.b_id = 'xyz'\n                       AND (('GLOBAL' = ' NO_PROJECT_ID + \"' ) OR (rr.p_id = 'GLOBAL'))\n                       AND spc.permission_type IS null and spc.is_active  = true\n                       AND rpc.is_active = true AND rr.is_active = true AND rs.is_active = true AND r.is_active = true I don't think it is super complex. But when I run explain analyze on this I get the following:Planning Time: 578.068 msExecution Time: 0.113 msThis is a huge deviation in planning vs. execution times. The explain plan looks good since the execution time is < 1ms. It doesn't matter though since the planning time is high. I don't see anything in the explain analyze output that tells me why the planning time is high. On average, the tables being joined have 3 indexes/table. How can I debug this?Been stuck on this for weeks. Any help is appreciated. 
Thank you!The fundamental issue here is that you have basically 12 conditions across 5 tables that need to be evaluated to determine which one of the 1,680 possible join orders is the most efficient.  The fact that you have 5 is_active checks and 3 pairs of matching UUID checks seems odd and if you could reduce those 11 to 4 I suspect you'd get a better planning time.  Though it also may produce an inferior plan...thus consider the following option:Assuming the ideal plan shape for your data doesn't change you can read the following and basically tell the planner to stop trying so hard and just trust the join order that exists in the query.https://www.postgresql.org/docs/current/explicit-joins.htmlLastly, if you can leverage prepared statements you can at least amortize the cost (depending on whether a generic plan performs sufficiently quickly).I'll admit I'm no expert at this.  I'd probably just follow the join_collapse_limit advice and move on if it works.  Maybe adding a periodic check to see if anything has changed.David J.", "msg_date": "Wed, 6 Apr 2022 17:54:04 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Planning Times" }, { "msg_contents": "I added the additional where clauses to remove needing to join multiple\ncolumns which I guess didn't really help. This is the original query:\n\n *SELECT* rr.* *FROM* rpc rpc\n\n *INNER* *JOIN rr* rr\n\n *ON* rr.uuid = rpc.rr_id\n\n *INNER* *JOIN* rs rs\n\n *ON* rs.r_d = rpc.r_id\n\n *INNER* *JOIN* *role* r\n\n *ON* r.uuid = rs.r_id\n\n *inner* *JOIN* subject_permission_control spc\n\n *ON* spc.rr_id = rpc.rr_id\n\n *AND* spc.s_id = rs.s_id\n\n *AND* spc.c_id = rpc.c_id\n\n *AND* spc.is_active = *true*\n\n *WHERE* rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601'\n\n *AND* rpc.rr_id =\n'9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n\n *AND* rpc.c_id =\n'9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n\n *AND* rr.b_id = 'testb1'\n\n *AND* (('GLOBAL' = ' NO_PROJECT_ID + \"' ) *OR* (rr.\np_id = 'GLOBAL'))\n\n *AND* spc.type *IS* *NULL*\n\n *AND* rpc.is_active = *true* *AND* rr.is_active =\n*true* *AND* rs.is_active = *true* *AND* r.is_active = *true*\n\n\n\n\nI tied prepared statements and I am stuck. Using prepared statement almost\nalways chooses a crappy generic plan that runs slow. If I don't user\nprepared statement, the plan is efficient but the planning time is slow.\nI'll try the join_collapse_limit advice and see if that helps. Thank you!\n\nOn Wed, Apr 6, 2022 at 5:54 PM David G. 
Johnston <[email protected]>\nwrote:\n\n> On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]>\n> wrote:\n>\n>>\n>> I have the following query:\n>>\n>> *explain* (*analyze*, costs, timing) *SELECT* rr.* *FROM* rpc rpc\n>>\n>> *INNER* *JOIN* rr rr\n>>\n>> *ON* rr.uuid = rpc.rr_id\n>>\n>> *INNER* *JOIN* rs rs\n>>\n>> *ON* rs.r_id = rpc.r_id\n>>\n>> *INNER* *JOIN* *role* r\n>>\n>> *ON* r.uuid = rs.r_id\n>>\n>> *LEFT* *JOIN* spc spc\n>>\n>> *ON* spc.rr_id = rpc.rr_id\n>>\n>> *WHERE* rs.s_id =\n>> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>>\n>> *and* spc.s_id =\n>> 'caa767b8-8371-43a3-aa11-d1dba1893601'\n>>\n>> *and* spc.rd_id =\n>> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>>\n>> *AND* rpc.rd_id =\n>> '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n>>\n>> *AND* rpc.c_id =\n>> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>>\n>> *and* spc.c_id =\n>> '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n>>\n>> *AND* rr.b_id = 'xyz'\n>>\n>> *AND* (('GLOBAL' = ' NO_PROJECT_ID + \"' ) *OR* (rr\n>> .p_id = 'GLOBAL'))\n>>\n>> *AND* spc.permission_type *IS* *null* *and* spc.\n>> is_active = *true*\n>>\n>> *AND* rpc.is_active = *true* *AND* rr.is_active =\n>> *true* *AND* rs.is_active = *true* *AND* r.is_active = *true*\n>>\n>>\n>> I don't think it is super complex. But when I run explain analyze on this\n>> I get the following:\n>>\n>> Planning Time: 578.068 ms\n>> Execution Time: 0.113 ms\n>>\n>> This is a huge deviation in planning vs. execution times. The explain\n>> plan looks good since the execution time is < 1ms. It doesn't matter though\n>> since the planning time is high. I don't see anything in the explain\n>> analyze output that tells me why the planning time is high. On average, the\n>> tables being joined have 3 indexes/table. How can I debug this?\n>>\n>> Been stuck on this for weeks. Any help is appreciated. Thank you!\n>>\n>>\n> The fundamental issue here is that you have basically 12 conditions across\n> 5 tables that need to be evaluated to determine which one of the 1,680\n> possible join orders is the most efficient. The fact that you have 5\n> is_active checks and 3 pairs of matching UUID checks seems odd and if you\n> could reduce those 11 to 4 I suspect you'd get a better planning time.\n> Though it also may produce an inferior plan...thus consider the following\n> option:\n>\n> Assuming the ideal plan shape for your data doesn't change you can read\n> the following and basically tell the planner to stop trying so hard and\n> just trust the join order that exists in the query.\n>\n> https://www.postgresql.org/docs/current/explicit-joins.html\n>\n> Lastly, if you can leverage prepared statements you can at least amortize\n> the cost (depending on whether a generic plan performs sufficiently\n> quickly).\n>\n> I'll admit I'm no expert at this. I'd probably just follow the\n> join_collapse_limit advice and move on if it works. Maybe adding a\n> periodic check to see if anything has changed.\n> David J.\n>\n>\n\n-- \nSaurabh Sehgal\nE-mail: [email protected]\nPhone: 425-269-1324\nLinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/\n\nI added the additional where clauses to remove needing to join multiple columns which I guess didn't really help. 
This is the original query:\n          SELECT  rr.* FROM rpc rpc\n                       INNER JOIN rr rr\n                           ON rr.uuid = rpc.rr_id\n                       INNER JOIN rs rs\n                           ON rs.r_d = rpc.r_id\n                       INNER JOIN role r\n                           ON r.uuid = rs.r_id\n                       inner JOIN subject_permission_control spc\n                           ON spc.rr_id = rpc.rr_id\n                           AND spc.s_id = rs.s_id\n                           AND spc.c_id = rpc.c_id\n                           AND spc.is_active = true\n                   WHERE rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601'\n                       AND rpc.rr_id = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.c_id = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       AND rr.b_id = 'testb1'\n                       AND (('GLOBAL' = ' NO_PROJECT_ID + \"' ) OR (rr.p_id = 'GLOBAL'))\n                       AND spc.type IS NULL\n                       AND rpc.is_active = true AND rr.is_active = true AND rs.is_active = true AND r.is_active = true I tied prepared statements and I am stuck. Using prepared statement almost always chooses a crappy generic plan that runs slow. If I don't user prepared statement, the plan is efficient but the planning time is slow. I'll try the join_collapse_limit advice and see if that helps. Thank you!On Wed, Apr 6, 2022 at 5:54 PM David G. Johnston <[email protected]> wrote:On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]> wrote:I have the following query:\n explain (analyze, costs, timing) SELECT  rr.* FROM rpc rpc\n                       INNER JOIN rr rr\n                           ON rr.uuid = rpc.rr_id\n                       INNER JOIN rs rs\n                           ON rs.r_id = rpc.r_id\n                       INNER JOIN role r\n                           ON r.uuid = rs.r_id\n                       LEFT JOIN spc spc\n                           ON spc.rr_id = rpc.rr_id\n                   WHERE rs.s_id = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.s_id  = 'caa767b8-8371-43a3-aa11-d1dba1893601' \n                       and spc.rd_id  = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.rd_id = '9f33c45a-90c2-4e05-a42e-048ec1f2b2fa'\n                       AND rpc.c_id = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       and spc.c_id  = '9fd29fdc-15fd-40bb-b85d-8cfe99734987'\n                       AND rr.b_id = 'xyz'\n                       AND (('GLOBAL' = ' NO_PROJECT_ID + \"' ) OR (rr.p_id = 'GLOBAL'))\n                       AND spc.permission_type IS null and spc.is_active  = true\n                       AND rpc.is_active = true AND rr.is_active = true AND rs.is_active = true AND r.is_active = true I don't think it is super complex. But when I run explain analyze on this I get the following:Planning Time: 578.068 msExecution Time: 0.113 msThis is a huge deviation in planning vs. execution times. The explain plan looks good since the execution time is < 1ms. It doesn't matter though since the planning time is high. I don't see anything in the explain analyze output that tells me why the planning time is high. On average, the tables being joined have 3 indexes/table. How can I debug this?Been stuck on this for weeks. Any help is appreciated. 
Thank you!The fundamental issue here is that you have basically 12 conditions across 5 tables that need to be evaluated to determine which one of the 1,680 possible join orders is the most efficient.  The fact that you have 5 is_active checks and 3 pairs of matching UUID checks seems odd and if you could reduce those 11 to 4 I suspect you'd get a better planning time.  Though it also may produce an inferior plan...thus consider the following option:Assuming the ideal plan shape for your data doesn't change you can read the following and basically tell the planner to stop trying so hard and just trust the join order that exists in the query.https://www.postgresql.org/docs/current/explicit-joins.htmlLastly, if you can leverage prepared statements you can at least amortize the cost (depending on whether a generic plan performs sufficiently quickly).I'll admit I'm no expert at this.  I'd probably just follow the join_collapse_limit advice and move on if it works.  Maybe adding a periodic check to see if anything has changed.David J.\n-- Saurabh SehgalE-mail:     [email protected]:     425-269-1324LinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/", "msg_date": "Wed, 6 Apr 2022 18:47:50 -0700", "msg_from": "Saurabh Sehgal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Planning Times" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]> wrote:\n>> I have the following query:\n>> I don't think it is super complex. But when I run explain analyze on this\n>> I get the following:\n>> Planning Time: 578.068 ms\n>> Execution Time: 0.113 ms\n\n> The fundamental issue here is that you have basically 12 conditions across\n> 5 tables that need to be evaluated to determine which one of the 1,680\n> possible join orders is the most efficient.\n\nA 5-way join doesn't seem particularly outrageous. But I'm wondering\nif these are all plain tables or if some of them are actually complex\nviews. Another possibility is that the statistics target has been\ncranked to the moon and the planner is spending all its time sifting\nthrough huge statistics arrays.\n\nIt'd be interesting to see the actual schemas for the tables,\nas well as EXPLAIN's output for this query. I'm wondering\nexactly which PG version this is, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 22:57:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Planning Times" }, { "msg_contents": "I just figured this out. Someone had set the default_statistics_target to\n5000 .... instead of 500 I think. I changed it to 500, ran analyze and\nplanning time is much better. In case someone runs into this problem,\nsending this out here. Thank you all.\n\nOn Wed, Apr 6, 2022 at 7:57 PM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]>\n> wrote:\n> >> I have the following query:\n> >> I don't think it is super complex. But when I run explain analyze on\n> this\n> >> I get the following:\n> >> Planning Time: 578.068 ms\n> >> Execution Time: 0.113 ms\n>\n> > The fundamental issue here is that you have basically 12 conditions\n> across\n> > 5 tables that need to be evaluated to determine which one of the 1,680\n> > possible join orders is the most efficient.\n>\n> A 5-way join doesn't seem particularly outrageous. 
But I'm wondering\n> if these are all plain tables or if some of them are actually complex\n> views. Another possibility is that the statistics target has been\n> cranked to the moon and the planner is spending all its time sifting\n> through huge statistics arrays.\n>\n> It'd be interesting to see the actual schemas for the tables,\n> as well as EXPLAIN's output for this query. I'm wondering\n> exactly which PG version this is, too.\n>\n> regards, tom lane\n>\n\n\n-- \nSaurabh Sehgal\nE-mail: [email protected]\nPhone: 425-269-1324\nLinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/\n\nI just figured this out. Someone had set the default_statistics_target to 5000 .... instead of 500 I think. I changed it to 500, ran analyze and planning time is much better. In case someone runs into this problem, sending this out here. Thank you all. On Wed, Apr 6, 2022 at 7:57 PM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Wed, Apr 6, 2022 at 5:27 PM Saurabh Sehgal <[email protected]> wrote:\n>> I have the following query:\n>> I don't think it is super complex. But when I run explain analyze on this\n>> I get the following:\n>> Planning Time: 578.068 ms\n>> Execution Time: 0.113 ms\n\n> The fundamental issue here is that you have basically 12 conditions across\n> 5 tables that need to be evaluated to determine which one of the 1,680\n> possible join orders is the most efficient.\n\nA 5-way join doesn't seem particularly outrageous.  But I'm wondering\nif these are all plain tables or if some of them are actually complex\nviews.  Another possibility is that the statistics target has been\ncranked to the moon and the planner is spending all its time sifting\nthrough huge statistics arrays.\n\nIt'd be interesting to see the actual schemas for the tables,\nas well as EXPLAIN's output for this query.  I'm wondering\nexactly which PG version this is, too.\n\n                        regards, tom lane\n-- Saurabh SehgalE-mail:     [email protected]:     425-269-1324LinkedIn: https://www.linkedin.com/in/saurabh-s-4367a31/", "msg_date": "Wed, 6 Apr 2022 21:09:40 -0700", "msg_from": "Saurabh Sehgal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Planning Times" } ]
[ { "msg_contents": "Hi Team,\n\nGreetings !!\n\nWe have recently done the migration from Oracle Database Version 12C to Azure PostgreSQL PaaS instance version 11.4 and most of the application functionality testing has been over and tested successfully\n\nHowever, there is 1 process at application level which is taking approx. 10 mins in PostgreSQL and in oracle it is taking only 3 mins.\n\nAfter investigating further we identified that process which is executed from application end contains 500 to 600 no of short SQL queries into the database. We tried to run the few queries individually on database and they are taking less than sec in Postgres Database to execute, and we noticed that in Oracle taking half of the time as is taking in PostgreSQL. for ex . in oracle same select statement is taking 300 millisecond and in PostgreSQL it is taking approx. 600 millisecond which over increases the execution of the process.\n\nOracle Database are hosted on ON- Prem DC with dedicated application server on OnPrem and same for PostgreSQL.\nWe are using below specifications for PostgreSQL\nPostgreSQL Azure PaaS instance -Single Server (8cvore with 1 TB storage on general purpose tier ) = 8 Core and 40 Gb of Memory\nPostgreSQL version - 11.4\n\nWe have tried running maintenance Jobs like vaccum, analyze, creating indexes, increasing compute but no sucess\n\n\nI am happy to share my server parameter for PostgreSQL for more information.\n\nPlease let us know if this is expected behavior in PostgreSQL or is there any way i can decrease the time for the SQL queries and make it a comparison with Oracle\n\nRegards,\nMukesh Kumar\n\n\n\n\n\n\n\n\n\n\n\nHi Team,\n\n \nGreetings !!\n \nWe have recently done the migration from Oracle Database Version 12C to Azure PostgreSQL PaaS instance version 11.4 and most of the application functionality testing has\n been over and tested successfully \n \nHowever, there is 1 process at application level which is taking approx. 10 mins in PostgreSQL and in oracle it is taking only 3 mins.\n \nAfter investigating further we identified that process which is executed from application end contains 500 to 600 no of short SQL queries into the database. We tried to\n run the few queries individually on database and they are taking less than sec in Postgres Database to execute, and we noticed that in Oracle taking half of the time as is taking in PostgreSQL. for ex . in oracle same select statement is taking 300 millisecond\n and in PostgreSQL it is taking approx. 
600 millisecond which over increases the execution of the process.\n \nOracle Database are hosted on ON- Prem DC with dedicated application server on OnPrem and same for PostgreSQL.\nWe are using below specifications for PostgreSQL\n\nPostgreSQL Azure PaaS instance -Single Server (8cvore with 1 TB storage on general purpose tier ) = 8 Core and 40 Gb of Memory\nPostgreSQL version - 11.4\n \nWe have tried running maintenance Jobs like vaccum, analyze, creating indexes, increasing compute but no sucess\n\n \n \nI am happy to share my server parameter for PostgreSQL for more information.\n \nPlease let us know if this is expected behavior in PostgreSQL or is there any way i can decrease the time for the SQL queries and make it a comparison with Oracle\n\n \nRegards,\n\nMukesh Kumar", "msg_date": "Tue, 12 Apr 2022 09:10:23 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance for SQL queries on Azure PostgreSQL PaaS instance " }, { "msg_contents": "You might be comparing apples and pears..\n\nYour Oracle is running on prem while Postgres is running on Azure. Azure\ndoes not really have disks; it seems to have just a bunch of old people\nwriting the data on paper - I/O on Azure is ridiculously slow. What\ndisks/hardware does the on-prem Oracle have?\n\nOn Tue, Apr 12, 2022 at 4:16 PM Kumar, Mukesh <[email protected]>\nwrote:\n\n> Hi Team,\n>\n>\n>\n> Greetings !!\n>\n>\n>\n> We have recently done the migration from Oracle Database Version 12C to\n> Azure PostgreSQL PaaS instance version 11.4 and most of the application\n> functionality testing has been over and tested successfully\n>\n>\n>\n> However, there is 1 process at application level which is taking approx.\n> 10 mins in PostgreSQL and in oracle it is taking only 3 mins.\n>\n>\n>\n> After investigating further we identified that process which is executed\n> from application end contains 500 to 600 no of short SQL queries into the\n> database. We tried to run the few queries individually on database and they\n> are taking less than sec in Postgres Database to execute, and we noticed\n> that in Oracle taking half of the time as is taking in PostgreSQL. for ex .\n> in oracle same select statement is taking 300 millisecond and in PostgreSQL\n> it is taking approx. 600 millisecond which over increases the execution of\n> the process.\n>\n>\n>\n> Oracle Database are hosted on ON- Prem DC with dedicated application\n> server on OnPrem and same for PostgreSQL.\n>\n> We are using below specifications for PostgreSQL\n>\n> PostgreSQL Azure PaaS instance -Single Server (8cvore with 1 TB storage on\n> general purpose tier ) = 8 Core and 40 Gb of Memory\n>\n> PostgreSQL version - 11.4\n>\n>\n>\n> We have tried running maintenance Jobs like vaccum, analyze, creating\n> indexes, increasing compute but no sucess\n>\n>\n>\n>\n>\n> I am happy to share my server parameter for PostgreSQL for more\n> information.\n>\n>\n>\n> Please let us know if this is expected behavior in PostgreSQL or is there\n> any way i can decrease the time for the SQL queries and make it a\n> comparison with Oracle\n>\n>\n>\n> Regards,\n>\n> Mukesh Kumar\n>\n>\n>\n>\n>\n\nYou might be comparing apples and pears..Your Oracle is running on prem while Postgres is running on Azure. Azure does not really have disks; it seems to have just a bunch of old people writing the data on paper - I/O on Azure is ridiculously slow. 
What disks/hardware does the on-prem Oracle have?On Tue, Apr 12, 2022 at 4:16 PM Kumar, Mukesh <[email protected]> wrote:\n\n\nHi Team,\n\n \nGreetings !!\n \nWe have recently done the migration from Oracle Database Version 12C to Azure PostgreSQL PaaS instance version 11.4 and most of the application functionality testing has\n been over and tested successfully \n \nHowever, there is 1 process at application level which is taking approx. 10 mins in PostgreSQL and in oracle it is taking only 3 mins.\n \nAfter investigating further we identified that process which is executed from application end contains 500 to 600 no of short SQL queries into the database. We tried to\n run the few queries individually on database and they are taking less than sec in Postgres Database to execute, and we noticed that in Oracle taking half of the time as is taking in PostgreSQL. for ex . in oracle same select statement is taking 300 millisecond\n and in PostgreSQL it is taking approx. 600 millisecond which over increases the execution of the process.\n \nOracle Database are hosted on ON- Prem DC with dedicated application server on OnPrem and same for PostgreSQL.\nWe are using below specifications for PostgreSQL\n\nPostgreSQL Azure PaaS instance -Single Server (8cvore with 1 TB storage on general purpose tier ) = 8 Core and 40 Gb of Memory\nPostgreSQL version - 11.4\n \nWe have tried running maintenance Jobs like vaccum, analyze, creating indexes, increasing compute but no sucess\n\n \n \nI am happy to share my server parameter for PostgreSQL for more information.\n \nPlease let us know if this is expected behavior in PostgreSQL or is there any way i can decrease the time for the SQL queries and make it a comparison with Oracle\n\n \nRegards,\n\nMukesh Kumar", "msg_date": "Tue, 12 Apr 2022 16:23:02 +0200", "msg_from": "Frits Jalvingh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for SQL queries on Azure PostgreSQL PaaS instance" }, { "msg_contents": "On 4/12/22 16:23, Frits Jalvingh wrote:\n> You might be comparing apples and pears..\n> \n> Your Oracle is running on prem while Postgres is running on Azure. Azure\n> does not really have disks; it seems to have just a bunch of old people\n> writing the data on paper - I/O on Azure is ridiculously slow. What\n> disks/hardware does the on-prem Oracle have?\n> \n\nRight. It'd be good to do some basic system benchmarks first, e.g. using\n\"fio\" or similar tools, before comparing query timings. It's quite\npossible this is due to Azure storage being slower than physical drives\nin the on-premise system.\n\nIf that does not explain this, I suggest picking a single query and\nfocus on it, instead of investigating all queries at once. There's a\nnice wiki page explaining what info to provide:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 Apr 2022 17:06:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for SQL queries on Azure PostgreSQL PaaS instance" }, { "msg_contents": "On Tue, 2022-04-12 at 09:10 +0000, Kumar, Mukesh wrote:\n> We have recently done the migration from Oracle Database Version 12C to Azure\n> PostgreSQL PaaS instance version 11.4 and most of the application functionality\n> testing has been over and tested successfully \n>  \n> However, there is 1 process at application level which is taking approx. 
10 mins\n> in PostgreSQL and in oracle it is taking only 3 mins.\n>  \n> After investigating further we identified that process which is executed from\n> application end contains 500 to 600 no of short SQL queries into the database.\n> We tried to run the few queries individually on database and they are taking\n> less than sec in Postgres Database to execute, and we noticed that in Oracle\n> taking half of the time as is taking in PostgreSQL. for ex . in oracle same\n> select statement is taking 300 millisecond and in PostgreSQL it is taking\n> approx. 600 millisecond which over increases the execution of the process.\n>  \n> Oracle Database are hosted on ON- Prem DC with dedicated application server on\n> OnPrem and same for PostgreSQL.\n\nHow can a database hosted with Microsoft be on your permises?\n\nApart from all other things, compare the network latency. If a single request\nresults in 500 database queries, you will be paying 1000 times the network\nlatency per request.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 13 Apr 2022 10:34:24 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for SQL queries on Azure PostgreSQL PaaS instance" }, { "msg_contents": "Hi Albe , \r\n\r\nI mean to say that , we have everything hosted on Oracle is on On - Prem DC and everything hosted on Azure PostgreSQL on Microsoft Azure Cloud like Application Server and PaaS Instance,\r\n\r\nPlease revert in case of any query\r\n\r\nThanks and Regards, \r\nMukesh Kumar\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Wednesday, April 13, 2022 2:04 PM\r\nTo: Kumar, Mukesh <[email protected]>; [email protected]; MUKESH KUMAR <[email protected]>\r\nSubject: Re: Performance for SQL queries on Azure PostgreSQL PaaS instance\r\n\r\nOn Tue, 2022-04-12 at 09:10 +0000, Kumar, Mukesh wrote:\r\n> We have recently done the migration from Oracle Database Version 12C \r\n> to Azure PostgreSQL PaaS instance version 11.4 and most of the \r\n> application functionality testing has been over and tested \r\n> successfully\r\n>  \r\n> However, there is 1 process at application level which is taking \r\n> approx. 10 mins in PostgreSQL and in oracle it is taking only 3 mins.\r\n>  \r\n> After investigating further we identified that process which is \r\n> executed from application end contains 500 to 600 no of short SQL queries into the database.\r\n> We tried to run the few queries individually on database and they are \r\n> taking less than sec in Postgres Database to execute, and we noticed \r\n> that in Oracle taking half of the time as is taking in PostgreSQL. for \r\n> ex . in oracle same select statement is taking 300 millisecond and in \r\n> PostgreSQL it is taking approx. 600 millisecond which over increases the execution of the process.\r\n>  \r\n> Oracle Database are hosted on ON- Prem DC with dedicated application \r\n> server on OnPrem and same for PostgreSQL.\r\n\r\nHow can a database hosted with Microsoft be on your permises?\r\n\r\nApart from all other things, compare the network latency. 
If a single request results in 500 database queries, you will be paying 1000 times the network latency per request.\r\n\r\nYours,\r\nLaurenz Albe\r\n--\r\nCybertec | https://urldefense.com/v3/__https://www.cybertec-postgresql.com__;!!KupS4sW4BlfImQPd!Na6zYPRuqYDPkzxkeKGFLkUk5TtVvDNeBotFXA-DpoSA8sO0hMkFnUll1op05OICvy74bGAGSzuTfzBWN-4PfzlYkK0vvQ$ \r\n\r\n", "msg_date": "Wed, 13 Apr 2022 08:42:59 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Performance for SQL queries on Azure PostgreSQL PaaS instance" }, { "msg_contents": "On Wed, Apr 13, 2022 at 10:34:24AM +0200, Laurenz Albe wrote:\n> On Tue, 2022-04-12 at 09:10 +0000, Kumar, Mukesh wrote:\n> > We have recently done the migration from Oracle Database Version 12C to Azure\n> > PostgreSQL PaaS instance version 11.4 and most of the application functionality\n> > testing has been over and tested successfully \n> > �\n> > However, there is 1 process at application level which is taking approx. 10 mins\n> > in PostgreSQL and in oracle it is taking only 3 mins.\n> > �\n> > After investigating further we identified that process which is executed from\n> > application end contains 500 to 600 no of short SQL queries into the database.\n> > We tried to run the few queries individually on database and they are taking\n> > less than sec in Postgres Database to execute, and we noticed that in Oracle\n> > taking half of the time as is taking in PostgreSQL. for ex . in oracle same\n> > select statement is taking 300 millisecond and in PostgreSQL it is taking\n> > approx. 600 millisecond which over increases the execution of the process.\n> > �\n> > Oracle Database are hosted on ON- Prem DC with dedicated application server on\n> > OnPrem and same for PostgreSQL.\n> \n> How can a database hosted with Microsoft be on your permises?\n\nNot OP, but it couldn't it be\nhttps://azure.microsoft.com/en-us/overview/azure-stack/ ?\n\n> Apart from all other things, compare the network latency. If a single request\n> results in 500 database queries, you will be paying 1000 times the network\n> latency per request.\n> \n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n> \n> \n> \n\n\n", "msg_date": "Wed, 13 Apr 2022 07:17:56 -0400", "msg_from": "andrew cooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for SQL queries on Azure PostgreSQL PaaS instance" }, { "msg_contents": "Azure VM's are incredibly slow. I couldn't host a OpenStreetMap\ndatabase because the disk IO would die off from reasonable performance\nto about 5KB/s and the data import wouldn't finish. Reboot and it would\nbe fine for a while then repeat. $400 a month for that. \n\nYou are better off on bare metal outside of Azure, otherwise it is\ngoing to be cloudy misery. I'm saving hundreds renting a bare metal\nmachine in a data center and I get the expected performance on top of\nthe cost savings. \n\n\n-----Original Message-----\nFrom: \"Kumar, Mukesh\" <[email protected]>\nTo: [email protected]\n<[email protected]>, MUKESH KUMAR\n<[email protected]>\nSubject: Performance for SQL queries on Azure PostgreSQL PaaS instance\nDate: Tue, 12 Apr 2022 09:10:23 +0000\n\nHi Team,\n \nGreetings !!\n \nWe have recently done the migration from Oracle Database Version 12C to\nAzure PostgreSQL PaaS instance version 11.4 and most of the application\nfunctionality testing has been over and tested successfully \n \nHowever, there is 1 process at application level which is taking\napprox. 
10 mins in PostgreSQL and in oracle it is taking only 3 mins.\n \nAfter investigating further we identified that process which is\nexecuted from application end contains 500 to 600 no of short SQL\nqueries into the database. We tried to run the few queries individually\non database and they are taking less than sec in Postgres Database to\nexecute, and we noticed that in Oracle taking half of the time as is\ntaking in PostgreSQL. for ex . in oracle same select statement is\ntaking 300 millisecond and in PostgreSQL it is taking approx. 600\nmillisecond which over increases the execution of the process.\n \nOracle Database are hosted on ON- Prem DC with dedicated application\nserver on OnPrem and same for PostgreSQL.\nWe are using below specifications for PostgreSQL\nPostgreSQL Azure PaaS instance -Single Server (8cvore with 1 TB storage\non general purpose tier ) = 8 Core and 40 Gb of Memory\nPostgreSQL version - 11.4\n \nWe have tried running maintenance Jobs like vaccum, analyze, creating\nindexes, increasing compute but no sucess\n \n \nI am happy to share my server parameter for PostgreSQL for more\ninformation.\n \nPlease let us know if this is expected behavior in PostgreSQL or is\nthere any way i can decrease the time for the SQL queries and make it a\ncomparison with Oracle\n \nRegards,\nMukesh Kumar\n \n \n\n\n\n\n", "msg_date": "Thu, 14 Apr 2022 13:20:21 -0700", "msg_from": "overland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance for SQL queries on Azure PostgreSQL PaaS instance" } ]
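A rough way to put numbers on the latency point made in this thread, without assuming anything about the application: time one of the short statements from psql (client side, includes the network hop) and compare it with what EXPLAIN ANALYZE reports (server side only); the difference is roughly round-trip latency plus per-call parse/plan overhead. If pg_stat_statements can be loaded (it has to be in shared_preload_libraries), its view shows where the 500 to 600 statements spend their time in aggregate; the column names below are the PostgreSQL 11 ones.

\timing on                              -- psql: client-observed time, includes the round trip
SELECT 1;
EXPLAIN (ANALYZE, BUFFERS) SELECT 1;    -- server-side execution time only

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 3)  AS mean_ms,
       left(query, 60)               AS query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

With 500 to 600 statements per process run, the reported per-statement gap (roughly 600 ms versus 300 ms in Oracle) already adds about three minutes on its own; the useful question is how much of each call is network and connection overhead rather than execution, and where the rest of the difference comes from.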
[ { "msg_contents": "Hi Team,\n\nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n\nQuery - 1\n\nSelect * from\n (\n Select payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\n from\n lms_app.lms_payment_check_request\n group by payment_sid_c) a\n where paymentstatus in ('PAID', 'MANUALLYPAID')\n\n\nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\n\n\nhttps://explain.depesz.com/s/Jsiw#stats\n\n\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n\nCould please anyone help or suggest how to improve the query performance.\n\nThanks and Regards,\nMukesh Kumar", "msg_date": "Thu, 14 Apr 2022 06:03:33 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Tunning related to function " }, { "msg_contents": "Em qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <\[email protected]> escreveu:\n\n> Hi Team,\n>\n>\n>\n> We are running the below query in PostgreSQL and its taking approx. 8 to 9\n> sec to run the query.\n>\n>\n>\n> Query – 1\n>\n>\n>\n> Select * from\n>\n> (\n>\n> Select payment_sid_c,\n>\n> lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\n>\n> from\n>\n> lms_app.lms_payment_check_request\n>\n> group by payment_sid_c) a\n>\n> where paymentstatus in ('PAID', 'MANUALLYPAID')\n>\n>\n>\n>\n>\n> The explain plan and other details are placed at below link for more\n> information. We have checked the indexes on column but in the explain plan\n> it is showing as Seq Scan which we have to find out.\n>\n>\n>\n>\n>\n> *https://explain.depesz.com/s/Jsiw#stats\n> <https://explain.depesz.com/s/Jsiw#stats>*\n>\n>\n>\n>\n>\n> This query is using a function translate_payment_status on column\n> payment_sid_c whose script is attached in this mail\n>\n>\n>\n> Could please anyone help or suggest how to improve the query performance.\n>\nYou can try create a partial index that help this filter:\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY\n('{PAID,MANUALLYPAID}'::text[]))\n\nSee at:\nhttps://www.postgresql.org/docs/current/indexes-partial.html\n\nregards,\nRanier Vilela\n\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:\n\n\nHi Team, \n \nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n \nQuery – 1 \n \nSelect * from \n  (\n  Select payment_sid_c,\n  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\n\n  from \n  lms_app.lms_payment_check_request\n  group by payment_sid_c) a  \n  where  paymentstatus in ('PAID', 'MANUALLYPAID')\n \n \nThe explain plan and other details are placed at below link for more information. 
We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\n \n \nhttps://explain.depesz.com/s/Jsiw#stats\n \n \nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n \nCould please anyone help or suggest how to improve the query performance.You can try create a partial index that help this filter:\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[])) See at:https://www.postgresql.org/docs/current/indexes-partial.html regards,Ranier Vilela", "msg_date": "Thu, 14 Apr 2022 11:26:05 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Tunning related to function" }, { "msg_contents": "Hi Rainer ,\r\n\r\nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\r\n\r\nAlso we tried to force the query to use the index by enabling the parameter at session level\r\n\r\nset enable_seqscan=false;\r\n\r\nand it is still taking the time below is the explain plan for the same\r\n\r\nhttps://explain.depesz.com/s/YRWIW#stats\r\n\r\nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\r\n\r\n\r\nhttps://explain.depesz.com/s/wktl#stats\r\n\r\nPlease assist\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kuma\r\n\r\nFrom: Ranier Vilela <[email protected]>\r\nSent: Thursday, April 14, 2022 7:56 PM\r\nTo: Kumar, Mukesh <[email protected]>\r\nCc: [email protected]; MUKESH KUMAR <[email protected]>\r\nSubject: Re: Query Tunning related to function\r\n\r\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]<mailto:[email protected]>> escreveu:\r\nHi Team,\r\n\r\nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\r\n\r\nQuery – 1\r\n\r\nSelect * from\r\n (\r\n Select payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n from\r\n lms_app.lms_payment_check_request\r\n group by payment_sid_c) a\r\n where paymentstatus in ('PAID', 'MANUALLYPAID')\r\n\r\n\r\nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\r\n\r\n\r\nhttps://explain.depesz.com/s/Jsiw#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$>\r\n\r\n\r\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\r\n\r\nCould please anyone help or suggest how to improve the query performance.\r\nYou can try create a partial index that help this filter:\r\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\r\nSee at:\r\nhttps://www.postgresql.org/docs/current/indexes-partial.html<https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$>\r\n\r\nregards,\r\nRanier Vilela\r\n\n\n\n\n\n\n\n\n\nHi Rainer ,\r\n\n \nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 
7 sec now.\n \nAlso we tried to force the query to use the index by enabling the parameter at session level\n \nset enable_seqscan=false;\n \nand it is still taking the time below is the explain plan for the same\r\n\n \nhttps://explain.depesz.com/s/YRWIW#stats\n \nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\n \n \nhttps://explain.depesz.com/s/wktl#stats\n \nPlease assist\r\n\n \n \nThanks and Regards, \nMukesh Kuma \n \n\nFrom: Ranier Vilela <[email protected]> \nSent: Thursday, April 14, 2022 7:56 PM\nTo: Kumar, Mukesh <[email protected]>\nCc: [email protected]; MUKESH KUMAR <[email protected]>\nSubject: Re: Query Tunning related to function\n\n \n\n\n\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:\n\n\n\n\nHi Team,\r\n\n \nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n \nQuery – 1\r\n\n \nSelect * from\r\n\n  (\n  Select payment_sid_c,\n  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n\n  from\r\n\n  lms_app.lms_payment_check_request\n  group by payment_sid_c) a \r\n\n  where  paymentstatus in ('PAID', 'MANUALLYPAID')\n \n \nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we\r\n have to find out.\n \n \nhttps://explain.depesz.com/s/Jsiw#stats\n \n \nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n \nCould please anyone help or suggest how to improve the query performance.\n\n\n\n\nYou can try create a partial index that help this filter:\n\n\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\n\n\n \n\n\nSee at:\n\n\nhttps://www.postgresql.org/docs/current/indexes-partial.html\n\n\n \n\n\nregards,\n\n\nRanier Vilela", "msg_date": "Thu, 14 Apr 2022 14:44:43 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Query Tunning related to function" }, { "msg_contents": "Hi,\n\n \n\nThis part of the function is odd and must be dropped:\n\n IF (ret_status = payment_rec)\n\n THEN\n\n ret_status := payment_rec;\n\n \n\nI didn’t look really the function code and stopped on the view referenced by the cursor.\n\nThe view (we know it just by its name) used in the function is a black box for us. 
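For what it is worth, here is the earlier partial-index idea written out as a sketch, with the caveats spelled out: PostgreSQL only accepts a function in an index expression or predicate if that function is declared IMMUTABLE, and a function that reads other tables or views must not be marked that way; also, if 'PAID' and 'MANUALLYPAID' cover a large share of the rows, even a valid index will not beat a sequential scan. The argument type in the ALTER FUNCTION line is an assumption and has to match the real signature.

-- how is the function currently declared?
SELECT proname, provolatile, proparallel, procost
FROM pg_proc
WHERE proname = 'translate_payment_status';

-- the partial index Ranier pointed at, written out (requires the function to be IMMUTABLE)
CREATE INDEX lms_payment_check_request_paid_idx
    ON lms_app.lms_payment_check_request (payment_sid_c)
    WHERE lms_app.translate_payment_status(payment_sid_c)
          IN ('PAID', 'MANUALLYPAID');

-- if IMMUTABLE would not be truthful, marking the function STABLE and PARALLEL SAFE
-- (only valid if it merely reads data) at least lets a parallel seq scan evaluate it
-- in worker processes; the argument type here is a guess
ALTER FUNCTION lms_app.translate_payment_status(numeric)
    STABLE PARALLEL SAFE;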
Perhaps it is important to begin optimization there!\n\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\n\n \n\nIf rows with values 'PAID' and 'MANUALLYPAID' constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\n\nSelect\n\n payment_sid_c,\n\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\n\nfrom\n\n lms_app.lms_payment_check_request\n\nwhere\n\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n\ngroup by\n\n payment_sid_c\n\n \n\nIf not, you can gain some performance if you rewrite your query to be like this:\n\n \n\nSelect\n\n payment_sid_c,\n\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\n\nfrom\n\n lms_app.lms_payment_check_request\n\ngroup by\n\n payment_sid_c\n\nhaving\n\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n\n \n\nAnd you can also try to write the query like this:\n\n \n\nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\n\nFrom\n\n(\n\n Select\n\n payment_sid_c\n\n from\n\n lms_app.lms_payment_check_request\n\n group by\n\n payment_sid_c\n\n having\n\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n\n) t\n\n \n\nRegards\n\n \n\nMichel SALAIS\n\nDe : Kumar, Mukesh <[email protected]> \nEnvoyé : jeudi 14 avril 2022 16:45\nÀ : Ranier Vilela <[email protected]>\nCc : [email protected]; MUKESH KUMAR <[email protected]>\nObjet : RE: Query Tunning related to function\n\n \n\nHi Rainer , \n\n \n\nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\n\n \n\nAlso we tried to force the query to use the index by enabling the parameter at session level\n\n \n\nset enable_seqscan=false;\n\n \n\nand it is still taking the time below is the explain plan for the same \n\n \n\nhttps://explain.depesz.com/s/YRWIW#stats\n\n \n\nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\n\n \n\n \n\nhttps://explain.depesz.com/s/wktl#stats\n\n \n\nPlease assist \n\n \n\n \n\nThanks and Regards, \n\nMukesh Kuma \n\n \n\nFrom: Ranier Vilela <[email protected] <mailto:[email protected]> > \nSent: Thursday, April 14, 2022 7:56 PM\nTo: Kumar, Mukesh <[email protected] <mailto:[email protected]> >\nCc: [email protected] <mailto:[email protected]> ; MUKESH KUMAR <[email protected] <mailto:[email protected]> >\nSubject: Re: Query Tunning related to function\n\n \n\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected] <mailto:[email protected]> > escreveu:\n\nHi Team, \n\n \n\nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n\n \n\nQuery – 1 \n\n \n\nSelect * from \n\n (\n\n Select payment_sid_c,\n\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus \n\n from \n\n lms_app.lms_payment_check_request\n\n group by payment_sid_c) a \n\n where paymentstatus in ('PAID', 'MANUALLYPAID')\n\n \n\n \n\nThe explain plan and other details are placed at below link for more information. 
We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\n\n \n\n \n\nhttps://explain.depesz.com/s/Jsiw#stats <https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$> \n\n \n\n \n\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n\n \n\nCould please anyone help or suggest how to improve the query performance.\n\nYou can try create a partial index that help this filter:\n\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[])) \n\n \n\nSee at:\n\nhttps://www.postgresql.org/docs/current/indexes-partial.html <https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$> \n\n \n\nregards,\n\nRanier Vilela\n\n\nHi, This part of the function is odd and must be dropped:         IF (ret_status = payment_rec)         THEN              ret_status := payment_rec; I didn’t look really the function code and stopped on the view referenced by the cursor.The view (we know it just by its name) used in the function is a black box for us. Perhaps it is important to begin optimization there!If values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function. If rows with values 'PAID' and 'MANUALLYPAID'  constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficientSelect  payment_sid_c,  lms_app.translate_payment_status(payment_sid_c) as paymentstatusfrom  lms_app.lms_payment_check_requestwhere  lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')group by  payment_sid_c If not, you can gain some performance if you rewrite your query to be like this: Select  payment_sid_c,  lms_app.translate_payment_status(payment_sid_c) as paymentstatusfrom  lms_app.lms_payment_check_requestgroup by  payment_sid_chaving  lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID') And you can also try to write the query like this: Select t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)From(  Select    payment_sid_c  from    lms_app.lms_payment_check_request  group by    payment_sid_c  having    lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')) t Regards Michel SALAISDe : Kumar, Mukesh <[email protected]> Envoyé : jeudi 14 avril 2022 16:45À : Ranier Vilela <[email protected]>Cc : [email protected]; MUKESH KUMAR <[email protected]>Objet : RE: Query Tunning related to function Hi Rainer ,  We tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now. Also we tried to force the query to use the index by enabling the parameter at session level set enable_seqscan=false; and it is still taking the time below is the explain plan for the same  https://explain.depesz.com/s/YRWIW#stats Also we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.  
https://explain.depesz.com/s/wktl#stats Please assist   Thanks and Regards, Mukesh Kuma  From: Ranier Vilela <[email protected]> Sent: Thursday, April 14, 2022 7:56 PMTo: Kumar, Mukesh <[email protected]>Cc: [email protected]; MUKESH KUMAR <[email protected]>Subject: Re: Query Tunning related to function Em qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:Hi Team,  We are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query. Query – 1  Select * from   (  Select payment_sid_c,  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus   from   lms_app.lms_payment_check_request  group by payment_sid_c) a    where  paymentstatus in ('PAID', 'MANUALLYPAID')  The explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.  https://explain.depesz.com/s/Jsiw#stats  This query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail Could please anyone help or suggest how to improve the query performance.You can try create a partial index that help this filter:Filter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))  See at:https://www.postgresql.org/docs/current/indexes-partial.html regards,Ranier Vilela", "msg_date": "Thu, 14 Apr 2022 20:14:58 +0200", "msg_from": "\"Michel SALAIS\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Query Tunning related to function" }, { "msg_contents": "Hi Michael ,\r\n\r\n\r\nWe tried dropping the below values from the function, but it did not help.\r\n\r\nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table , and infact we tried creating the partial index and it did not help.\r\n\r\nThe Strange thing is that we are trying to run this in oracle as we have done the migration recently and it is running in less than second with same indexes and other database objects . I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\r\n\r\nBelow is the query we are running on oracle and comparing in postgres\r\n\r\nBelow is the query and plan for same\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nAny help would be appreciated.\r\n\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Michel SALAIS <[email protected]>\r\nSent: Thursday, April 14, 2022 11:45 PM\r\nTo: Kumar, Mukesh <[email protected]>; 'Ranier Vilela' <[email protected]>\r\nCc: [email protected]; 'MUKESH KUMAR' <[email protected]>\r\nSubject: RE: Query Tunning related to function\r\n\r\nHi,\r\n\r\nThis part of the function is odd and must be dropped:\r\n IF (ret_status = payment_rec)\r\n THEN\r\n ret_status := payment_rec;\r\n\r\nI didn’t look really the function code and stopped on the view referenced by the cursor.\r\nThe view (we know it just by its name) used in the function is a black box for us. 
Perhaps it is important to begin optimization there!\r\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\r\n\r\nIf rows with values 'PAID' and 'MANUALLYPAID' constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\nwhere\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\ngroup by\r\n payment_sid_c\r\n\r\nIf not, you can gain some performance if you rewrite your query to be like this:\r\n\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\ngroup by\r\n payment_sid_c\r\nhaving\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n\r\nAnd you can also try to write the query like this:\r\n\r\nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\r\nFrom\r\n(\r\n Select\r\n payment_sid_c\r\n from\r\n lms_app.lms_payment_check_request\r\n group by\r\n payment_sid_c\r\n having\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n) t\r\n\r\nRegards\r\n\r\nMichel SALAIS\r\nDe : Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nEnvoyé : jeudi 14 avril 2022 16:45\r\nÀ : Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nCc : [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nObjet : RE: Query Tunning related to function\r\n\r\nHi Rainer ,\r\n\r\nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\r\n\r\nAlso we tried to force the query to use the index by enabling the parameter at session level\r\n\r\nset enable_seqscan=false;\r\n\r\nand it is still taking the time below is the explain plan for the same\r\n\r\nhttps://explain.depesz.com/s/YRWIW#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/YRWIW*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJVb2g-4s$>\r\n\r\nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\r\n\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nPlease assist\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kuma\r\n\r\nFrom: Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nSent: Thursday, April 14, 2022 7:56 PM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nSubject: Re: Query Tunning related to function\r\n\r\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]<mailto:[email protected]>> escreveu:\r\nHi Team,\r\n\r\nWe are running the below query in PostgreSQL and its taking approx. 
8 to 9 sec to run the query.\r\n\r\nQuery – 1\r\n\r\nSelect * from\r\n (\r\n Select payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n from\r\n lms_app.lms_payment_check_request\r\n group by payment_sid_c) a\r\n where paymentstatus in ('PAID', 'MANUALLYPAID')\r\n\r\n\r\nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\r\n\r\n\r\nhttps://explain.depesz.com/s/Jsiw#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$>\r\n\r\n\r\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\r\n\r\nCould please anyone help or suggest how to improve the query performance.\r\nYou can try create a partial index that help this filter:\r\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\r\nSee at:\r\nhttps://www.postgresql.org/docs/current/indexes-partial.html<https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$>\r\n\r\nregards,\r\nRanier Vilela\r\n\n\n\n\n\n\n\n\n\nHi Michael ,\r\n\n \n \nWe tried dropping the below values from the function, but it did not help.\n \nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table ,  and infact we tried creating the partial index and it did not help.\n \nThe Strange thing is that we are trying to run this in oracle as we have done the migration recently and it is running in less than second with same indexes and other database\r\n objects . I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\n \nBelow is the query we are running on oracle and comparing in postgres\n \nBelow is the query and plan for same\n \nhttps://explain.depesz.com/s/wktl#stats\n \nAny help would be appreciated.\n \n \n \n\nThanks and Regards, \nMukesh Kumar\n\n \n\n\nFrom: Michel SALAIS <[email protected]> \nSent: Thursday, April 14, 2022 11:45 PM\nTo: Kumar, Mukesh <[email protected]>; 'Ranier Vilela' <[email protected]>\nCc: [email protected]; 'MUKESH KUMAR' <[email protected]>\nSubject: RE: Query Tunning related to function\n\n\n \nHi,\n \nThis part of the function is odd and must be dropped:\n         IF (ret_status = payment_rec)\n         THEN\n              ret_status := payment_rec;\n \nI didn’t look really the function code and stopped on the view referenced by the cursor.\nThe view (we know it just by its name) used in the function is a black box for us. 
Perhaps it is important to begin optimization there!\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\n \nIf rows with values 'PAID' and 'MANUALLYPAID'  constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\nwhere\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\ngroup by\n  payment_sid_c\n \nIf not, you can gain some performance if you rewrite your query to be like this:\n \nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\ngroup by\n  payment_sid_c\nhaving\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n \nAnd you can also try to write the query like this:\n \nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\nFrom\n(\n  Select\n    payment_sid_c\n  from\n   lms_app.lms_payment_check_request\n  group by\n    payment_sid_c\n  having\n    lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n) t\n \nRegards\n \n\nMichel SALAIS\n\n\n\nDe : Kumar, Mukesh <[email protected]>\r\n\nEnvoyé : jeudi 14 avril 2022 16:45\nÀ : Ranier Vilela <[email protected]>\nCc : [email protected]; MUKESH KUMAR <[email protected]>\nObjet : RE: Query Tunning related to function\n\n\n \nHi Rainer ,\r\n\n \nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\n \nAlso we tried to force the query to use the index by enabling the parameter at session level\n \nset enable_seqscan=false;\n \nand it is still taking the time below is the explain plan for the same\r\n\n \nhttps://explain.depesz.com/s/YRWIW#stats\n \nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\n \n \nhttps://explain.depesz.com/s/wktl#stats\n \nPlease assist\r\n\n \n \nThanks and Regards, \nMukesh Kuma \n \n\nFrom: Ranier Vilela <[email protected]>\r\n\nSent: Thursday, April 14, 2022 7:56 PM\nTo: Kumar, Mukesh <[email protected]>\nCc: [email protected]; MUKESH KUMAR <[email protected]>\nSubject: Re: Query Tunning related to function\n\n \n\n\n\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:\n\n\n\n\nHi Team,\r\n\n \nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n \nQuery – 1\r\n\n \nSelect * from\r\n\n  (\n  Select payment_sid_c,\n  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n\n  from\r\n\n  lms_app.lms_payment_check_request\n  group by payment_sid_c) a \r\n\n  where  paymentstatus in ('PAID', 'MANUALLYPAID')\n \n \nThe explain plan and other details are placed at below link for more information. 
We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we\r\n have to find out.\n \n \nhttps://explain.depesz.com/s/Jsiw#stats\n \n \nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n \nCould please anyone help or suggest how to improve the query performance.\n\n\n\n\nYou can try create a partial index that help this filter:\n\n\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\n\n\n \n\n\nSee at:\n\n\nhttps://www.postgresql.org/docs/current/indexes-partial.html\n\n\n \n\n\nregards,\n\n\nRanier Vilela", "msg_date": "Thu, 14 Apr 2022 18:45:59 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Query Tunning related to function" }, { "msg_contents": "Can you paste from oracle for\n\nSet lines 10000\nSelect text from dba_source\nWhere name =\nUPPER('translate_payment_status')\nAnd owner = 'IMS_APP'\n\nThanks.\n\n\nOn Thu, Apr 14, 2022, 12:07 PM Kumar, Mukesh <[email protected]>\nwrote:\n\n> Hi Michael ,\n>\n>\n>\n>\n>\n> We tried dropping the below values from the function, but it did not help.\n>\n>\n>\n> Also, the values PAID and MANUALLY PAID constitutes about 60 % of the\n> values in table , and infact we tried creating the partial index and it\n> did not help.\n>\n>\n>\n> The Strange thing is that we are trying to run this in oracle as we have\n> done the migration recently and it is running in less than second with same\n> indexes and other database objects . I can understand that comparing to\n> oracle is stupidity, but this is only thing where we can compare.\n>\n>\n>\n> Below is the query we are running on oracle and comparing in postgres\n>\n>\n>\n> Below is the query and plan for same\n>\n>\n>\n> https://explain.depesz.com/s/wktl#stats\n> <https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\n>\n>\n>\n> Any help would be appreciated.\n>\n>\n>\n>\n>\n>\n>\n> Thanks and Regards,\n>\n> Mukesh Kumar\n>\n>\n>\n> *From:* Michel SALAIS <[email protected]>\n> *Sent:* Thursday, April 14, 2022 11:45 PM\n> *To:* Kumar, Mukesh <[email protected]>; 'Ranier Vilela' <\n> [email protected]>\n> *Cc:* [email protected]; 'MUKESH KUMAR' <\n> [email protected]>\n> *Subject:* RE: Query Tunning related to function\n>\n>\n>\n> Hi,\n>\n>\n>\n> This part of the function is odd and must be dropped:\n>\n> IF (ret_status = payment_rec)\n>\n> THEN\n>\n> ret_status := payment_rec;\n>\n>\n>\n> I didn’t look really the function code and stopped on the view referenced\n> by the cursor.\n>\n> The view (we know it just by its name) used in the function is a black box\n> for us. 
Perhaps it is important to begin optimization there!\n>\n> If values 'PAID' and 'MANUALLYPAID' are an important percentage of table\n> rows forcing index use is not a good thing especially when it is done with\n> a non-optimized function.\n>\n>\n>\n> If rows with values 'PAID' and 'MANUALLYPAID' constitute a little\n> percentage of the table, then the partial index plus rewriting the query\n> would be much more efficient\n>\n> Select\n>\n> payment_sid_c,\n>\n> lms_app.translate_payment_status(payment_sid_c) as paymentstatus\n>\n> from\n>\n> lms_app.lms_payment_check_request\n>\n> where\n>\n> lms_app.translate_payment_status(payment_sid_c) IN ('PAID',\n> 'MANUALLYPAID')\n>\n> group by\n>\n> payment_sid_c\n>\n>\n>\n> If not, you can gain some performance if you rewrite your query to be like\n> this:\n>\n>\n>\n> Select\n>\n> payment_sid_c,\n>\n> lms_app.translate_payment_status(payment_sid_c) as paymentstatus\n>\n> from\n>\n> lms_app.lms_payment_check_request\n>\n> group by\n>\n> payment_sid_c\n>\n> having\n>\n> lms_app.translate_payment_status(payment_sid_c) IN ('PAID',\n> 'MANUALLYPAID')\n>\n>\n>\n> And you can also try to write the query like this:\n>\n>\n>\n> Select t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\n>\n> From\n>\n> (\n>\n> Select\n>\n> payment_sid_c\n>\n> from\n>\n> lms_app.lms_payment_check_request\n>\n> group by\n>\n> payment_sid_c\n>\n> having\n>\n> lms_app.translate_payment_status(payment_sid_c) IN ('PAID',\n> 'MANUALLYPAID')\n>\n> ) t\n>\n>\n>\n> Regards\n>\n>\n>\n> *Michel SALAIS*\n>\n> *De :* Kumar, Mukesh <[email protected]>\n> *Envoyé :* jeudi 14 avril 2022 16:45\n> *À :* Ranier Vilela <[email protected]>\n> *Cc :* [email protected]; MUKESH KUMAR <\n> [email protected]>\n> *Objet :* RE: Query Tunning related to function\n>\n>\n>\n> Hi Rainer ,\n>\n>\n>\n> We tried to create the partial ‘index on table but it did not help, and it\n> is taking approx. 7 sec now.\n>\n>\n>\n> Also we tried to force the query to use the index by enabling the\n> parameter at session level\n>\n>\n>\n> set enable_seqscan=false;\n>\n>\n>\n> and it is still taking the time below is the explain plan for the same\n>\n>\n>\n> https://explain.depesz.com/s/YRWIW#stats\n> <https://urldefense.com/v3/__https:/explain.depesz.com/s/YRWIW*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJVb2g-4s$>\n>\n>\n>\n> Also we running the query which is actually used in application and above\n> query is used in below query. Below is the explain plan for same.\n>\n>\n>\n>\n>\n> https://explain.depesz.com/s/wktl#stats\n> <https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\n>\n>\n>\n> Please assist\n>\n>\n>\n>\n>\n> Thanks and Regards,\n>\n> Mukesh Kuma\n>\n>\n>\n> *From:* Ranier Vilela <[email protected]>\n> *Sent:* Thursday, April 14, 2022 7:56 PM\n> *To:* Kumar, Mukesh <[email protected]>\n> *Cc:* [email protected]; MUKESH KUMAR <\n> [email protected]>\n> *Subject:* Re: Query Tunning related to function\n>\n>\n>\n> Em qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <\n> [email protected]> escreveu:\n>\n> Hi Team,\n>\n>\n>\n> We are running the below query in PostgreSQL and its taking approx. 
8 to 9\n> sec to run the query.\n>\n>\n>\n> Query – 1\n>\n>\n>\n> Select * from\n>\n> (\n>\n> Select payment_sid_c,\n>\n> lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\n>\n> from\n>\n> lms_app.lms_payment_check_request\n>\n> group by payment_sid_c) a\n>\n> where paymentstatus in ('PAID', 'MANUALLYPAID')\n>\n>\n>\n>\n>\n> The explain plan and other details are placed at below link for more\n> information. We have checked the indexes on column but in the explain plan\n> it is showing as Seq Scan which we have to find out.\n>\n>\n>\n>\n>\n> *https://explain.depesz.com/s/Jsiw#stats\n> <https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$>*\n>\n>\n>\n>\n>\n> This query is using a function translate_payment_status on column\n> payment_sid_c whose script is attached in this mail\n>\n>\n>\n> Could please anyone help or suggest how to improve the query performance.\n>\n> You can try create a partial index that help this filter:\n>\n> Filter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY\n> ('{PAID,MANUALLYPAID}'::text[]))\n>\n>\n>\n> See at:\n>\n> https://www.postgresql.org/docs/current/indexes-partial.html\n> <https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$>\n>\n>\n>\n> regards,\n>\n> Ranier Vilela\n>\n\nCan you paste from oracle forSet lines 10000Select text from dba_sourceWhere name =UPPER('translate_payment_status')And owner = 'IMS_APP'Thanks.On Thu, Apr 14, 2022, 12:07 PM Kumar, Mukesh <[email protected]> wrote:\n\n\nHi Michael ,\n\n \n \nWe tried dropping the below values from the function, but it did not help.\n \nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table ,  and infact we tried creating the partial index and it did not help.\n \nThe Strange thing is that we are trying to run this in oracle as we have done the migration recently and it is running in less than second with same indexes and other database\n objects . I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\n \nBelow is the query we are running on oracle and comparing in postgres\n \nBelow is the query and plan for same\n \nhttps://explain.depesz.com/s/wktl#stats\n \nAny help would be appreciated.\n \n \n \n\nThanks and Regards, \nMukesh Kumar\n\n \n\n\nFrom: Michel SALAIS <[email protected]> \nSent: Thursday, April 14, 2022 11:45 PM\nTo: Kumar, Mukesh <[email protected]>; 'Ranier Vilela' <[email protected]>\nCc: [email protected]; 'MUKESH KUMAR' <[email protected]>\nSubject: RE: Query Tunning related to function\n\n\n \nHi,\n \nThis part of the function is odd and must be dropped:\n         IF (ret_status = payment_rec)\n         THEN\n              ret_status := payment_rec;\n \nI didn’t look really the function code and stopped on the view referenced by the cursor.\nThe view (we know it just by its name) used in the function is a black box for us. 
Perhaps it is important to begin optimization there!\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\n \nIf rows with values 'PAID' and 'MANUALLYPAID'  constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\nwhere\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\ngroup by\n  payment_sid_c\n \nIf not, you can gain some performance if you rewrite your query to be like this:\n \nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\ngroup by\n  payment_sid_c\nhaving\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n \nAnd you can also try to write the query like this:\n \nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\nFrom\n(\n  Select\n    payment_sid_c\n  from\n   lms_app.lms_payment_check_request\n  group by\n    payment_sid_c\n  having\n    lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n) t\n \nRegards\n \n\nMichel SALAIS\n\n\n\nDe : Kumar, Mukesh <[email protected]>\n\nEnvoyé : jeudi 14 avril 2022 16:45\nÀ : Ranier Vilela <[email protected]>\nCc : [email protected]; MUKESH KUMAR <[email protected]>\nObjet : RE: Query Tunning related to function\n\n\n \nHi Rainer ,\n\n \nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\n \nAlso we tried to force the query to use the index by enabling the parameter at session level\n \nset enable_seqscan=false;\n \nand it is still taking the time below is the explain plan for the same\n\n \nhttps://explain.depesz.com/s/YRWIW#stats\n \nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\n \n \nhttps://explain.depesz.com/s/wktl#stats\n \nPlease assist\n\n \n \nThanks and Regards, \nMukesh Kuma \n \n\nFrom: Ranier Vilela <[email protected]>\n\nSent: Thursday, April 14, 2022 7:56 PM\nTo: Kumar, Mukesh <[email protected]>\nCc: [email protected]; MUKESH KUMAR <[email protected]>\nSubject: Re: Query Tunning related to function\n\n \n\n\n\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:\n\n\n\n\nHi Team,\n\n \nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n \nQuery – 1\n\n \nSelect * from\n\n  (\n  Select payment_sid_c,\n  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\n\n  from\n\n  lms_app.lms_payment_check_request\n  group by payment_sid_c) a \n\n  where  paymentstatus in ('PAID', 'MANUALLYPAID')\n \n \nThe explain plan and other details are placed at below link for more information. 
We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we\n have to find out.\n \n \nhttps://explain.depesz.com/s/Jsiw#stats\n \n \nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n \nCould please anyone help or suggest how to improve the query performance.\n\n\n\n\nYou can try create a partial index that help this filter:\n\n\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\n\n\n\n \n\n\nSee at:\n\n\nhttps://www.postgresql.org/docs/current/indexes-partial.html\n\n\n \n\n\nregards,\n\n\nRanier Vilela", "msg_date": "Thu, 14 Apr 2022 15:13:59 -0700", "msg_from": "Bhupendra Babu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Tunning related to function" }, { "msg_contents": "Hi Babu ,\r\n\r\nPlease find below the script for the function from Oracle\r\n\r\nHi babu ,\r\n\r\nPlease find attached the script for function from Oracle .\r\n\r\nPlease revert in case of any query.\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Bhupendra Babu <[email protected]>\r\nSent: Friday, April 15, 2022 3:44 AM\r\nTo: Kumar, Mukesh <[email protected]>\r\nCc: Michel SALAIS <[email protected]>; Ranier Vilela <[email protected]>; postgres performance list <[email protected]>; MUKESH KUMAR <[email protected]>; [email protected]\r\nSubject: Re: Query Tunning related to function\r\n\r\nCan you paste from oracle for\r\n\r\nSet lines 10000\r\nSelect text from dba_source\r\nWhere name =\r\nUPPER('translate_payment_status')\r\nAnd owner = 'IMS_APP'\r\n\r\nThanks.\r\n\r\n\r\nOn Thu, Apr 14, 2022, 12:07 PM Kumar, Mukesh <[email protected]<mailto:[email protected]>> wrote:\r\nHi Michael ,\r\n\r\n\r\nWe tried dropping the below values from the function, but it did not help.\r\n\r\nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table , and infact we tried creating the partial index and it did not help.\r\n\r\nThe Strange thing is that we are trying to run this in oracle as we have done the migration recently and it is running in less than second with same indexes and other database objects . I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\r\n\r\nBelow is the query we are running on oracle and comparing in postgres\r\n\r\nBelow is the query and plan for same\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nAny help would be appreciated.\r\n\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Michel SALAIS <[email protected]<mailto:[email protected]>>\r\nSent: Thursday, April 14, 2022 11:45 PM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>; 'Ranier Vilela' <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; 'MUKESH KUMAR' <[email protected]<mailto:[email protected]>>\r\nSubject: RE: Query Tunning related to function\r\n\r\nHi,\r\n\r\nThis part of the function is odd and must be dropped:\r\n IF (ret_status = payment_rec)\r\n THEN\r\n ret_status := payment_rec;\r\n\r\nI didn’t look really the function code and stopped on the view referenced by the cursor.\r\nThe view (we know it just by its name) used in the function is a black box for us. 
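On the PostgreSQL side, the migrated function can be dumped in much the same way as with the dba_source query above, assuming it kept the lms_app schema and the same name (psql's \sf command prints the same thing interactively):

SELECT pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'lms_app'
  AND p.proname = 'translate_payment_status';

Comparing the two definitions side by side is a common way to spot whether the conversion turned a single set-based lookup into a row-by-row loop, which would explain a per-call slowdown like this one.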
Perhaps it is important to begin optimization there!\r\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\r\n\r\nIf rows with values 'PAID' and 'MANUALLYPAID' constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\nwhere\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\ngroup by\r\n payment_sid_c\r\n\r\nIf not, you can gain some performance if you rewrite your query to be like this:\r\n\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\ngroup by\r\n payment_sid_c\r\nhaving\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n\r\nAnd you can also try to write the query like this:\r\n\r\nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\r\nFrom\r\n(\r\n Select\r\n payment_sid_c\r\n from\r\n lms_app.lms_payment_check_request\r\n group by\r\n payment_sid_c\r\n having\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n) t\r\n\r\nRegards\r\n\r\nMichel SALAIS\r\nDe : Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nEnvoyé : jeudi 14 avril 2022 16:45\r\nÀ : Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nCc : [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nObjet : RE: Query Tunning related to function\r\n\r\nHi Rainer ,\r\n\r\nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\r\n\r\nAlso we tried to force the query to use the index by enabling the parameter at session level\r\n\r\nset enable_seqscan=false;\r\n\r\nand it is still taking the time below is the explain plan for the same\r\n\r\nhttps://explain.depesz.com/s/YRWIW#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/YRWIW*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJVb2g-4s$>\r\n\r\nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\r\n\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nPlease assist\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kuma\r\n\r\nFrom: Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nSent: Thursday, April 14, 2022 7:56 PM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nSubject: Re: Query Tunning related to function\r\n\r\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]<mailto:[email protected]>> escreveu:\r\nHi Team,\r\n\r\nWe are running the below query in PostgreSQL and its taking approx. 
8 to 9 sec to run the query.\r\n\r\nQuery – 1\r\n\r\nSelect * from\r\n (\r\n Select payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n from\r\n lms_app.lms_payment_check_request\r\n group by payment_sid_c) a\r\n where paymentstatus in ('PAID', 'MANUALLYPAID')\r\n\r\n\r\nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\r\n\r\n\r\nhttps://explain.depesz.com/s/Jsiw#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$>\r\n\r\n\r\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\r\n\r\nCould please anyone help or suggest how to improve the query performance.\r\nYou can try create a partial index that help this filter:\r\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\r\nSee at:\r\nhttps://www.postgresql.org/docs/current/indexes-partial.html<https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$>\r\n\r\nregards,\r\nRanier Vilela", "msg_date": "Fri, 15 Apr 2022 06:13:00 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Query Tunning related to function" }, { "msg_contents": "Hi All ,\r\n\r\nWe request you to please provide some assistance on below issue and it is impacting the migration project.\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Kumar, Mukesh\r\nSent: Friday, April 15, 2022 11:43 AM\r\nTo: Bhupendra Babu <[email protected]>\r\nCc: Michel SALAIS <[email protected]>; Ranier Vilela <[email protected]>; postgres performance list <[email protected]>; MUKESH KUMAR <[email protected]>; [email protected]\r\nSubject: RE: Query Tunning related to function\r\n\r\nHi Babu ,\r\n\r\nPlease find below the script for the function from Oracle\r\n\r\nHi babu ,\r\n\r\nPlease find attached the script for function from Oracle .\r\n\r\nPlease revert in case of any query.\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Bhupendra Babu <[email protected]<mailto:[email protected]>>\r\nSent: Friday, April 15, 2022 3:44 AM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nCc: Michel SALAIS <[email protected]<mailto:[email protected]>>; Ranier Vilela <[email protected]<mailto:[email protected]>>; postgres performance list <[email protected]<mailto:[email protected]>>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>\r\nSubject: Re: Query Tunning related to function\r\n\r\nCan you paste from oracle for\r\n\r\nSet lines 10000\r\nSelect text from dba_source\r\nWhere name =\r\nUPPER('translate_payment_status')\r\nAnd owner = 'IMS_APP'\r\n\r\nThanks.\r\n\r\n\r\nOn Thu, Apr 14, 2022, 12:07 PM Kumar, Mukesh <[email protected]<mailto:[email protected]>> wrote:\r\nHi Michael ,\r\n\r\n\r\nWe tried dropping the below values from the function, but it did not help.\r\n\r\nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table , and infact we tried creating the partial index and it did not help.\r\n\r\nThe Strange thing is that we are trying to run this in oracle as we have 
done the migration recently and it is running in less than second with same indexes and other database objects . I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\r\n\r\nBelow is the query we are running on oracle and comparing in postgres\r\n\r\nBelow is the query and plan for same\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nAny help would be appreciated.\r\n\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kumar\r\n\r\nFrom: Michel SALAIS <[email protected]<mailto:[email protected]>>\r\nSent: Thursday, April 14, 2022 11:45 PM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>; 'Ranier Vilela' <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; 'MUKESH KUMAR' <[email protected]<mailto:[email protected]>>\r\nSubject: RE: Query Tunning related to function\r\n\r\nHi,\r\n\r\nThis part of the function is odd and must be dropped:\r\n IF (ret_status = payment_rec)\r\n THEN\r\n ret_status := payment_rec;\r\n\r\nI didn’t look really the function code and stopped on the view referenced by the cursor.\r\nThe view (we know it just by its name) used in the function is a black box for us. Perhaps it is important to begin optimization there!\r\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\r\n\r\nIf rows with values 'PAID' and 'MANUALLYPAID' constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\nwhere\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\ngroup by\r\n payment_sid_c\r\n\r\nIf not, you can gain some performance if you rewrite your query to be like this:\r\n\r\nSelect\r\n payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\r\nfrom\r\n lms_app.lms_payment_check_request\r\ngroup by\r\n payment_sid_c\r\nhaving\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n\r\nAnd you can also try to write the query like this:\r\n\r\nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\r\nFrom\r\n(\r\n Select\r\n payment_sid_c\r\n from\r\n lms_app.lms_payment_check_request\r\n group by\r\n payment_sid_c\r\n having\r\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\r\n) t\r\n\r\nRegards\r\n\r\nMichel SALAIS\r\nDe : Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nEnvoyé : jeudi 14 avril 2022 16:45\r\nÀ : Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nCc : [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nObjet : RE: Query Tunning related to function\r\n\r\nHi Rainer ,\r\n\r\nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 
7 sec now.\r\n\r\nAlso we tried to force the query to use the index by enabling the parameter at session level\r\n\r\nset enable_seqscan=false;\r\n\r\nand it is still taking the time below is the explain plan for the same\r\n\r\nhttps://explain.depesz.com/s/YRWIW#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/YRWIW*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJVb2g-4s$>\r\n\r\nAlso we running the query which is actually used in application and above query is used in below query. Below is the explain plan for same.\r\n\r\n\r\nhttps://explain.depesz.com/s/wktl#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/wktl*stats__;Iw!!KupS4sW4BlfImQPd!OE7VRYuxv81xKZski81jR9U-OFWiC5_KPW02j0u9iHLcaEbtUo5u_sIfi8VFrToyBiI2A_69MqYrJe97dsUq$>\r\n\r\nPlease assist\r\n\r\n\r\nThanks and Regards,\r\nMukesh Kuma\r\n\r\nFrom: Ranier Vilela <[email protected]<mailto:[email protected]>>\r\nSent: Thursday, April 14, 2022 7:56 PM\r\nTo: Kumar, Mukesh <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; MUKESH KUMAR <[email protected]<mailto:[email protected]>>\r\nSubject: Re: Query Tunning related to function\r\n\r\nEm qui., 14 de abr. de 2022 às 08:01, Kumar, Mukesh <[email protected]<mailto:[email protected]>> escreveu:\r\nHi Team,\r\n\r\nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\r\n\r\nQuery – 1\r\n\r\nSelect * from\r\n (\r\n Select payment_sid_c,\r\n lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n from\r\n lms_app.lms_payment_check_request\r\n group by payment_sid_c) a\r\n where paymentstatus in ('PAID', 'MANUALLYPAID')\r\n\r\n\r\nThe explain plan and other details are placed at below link for more information. 
We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\r\n\r\n\r\nhttps://explain.depesz.com/s/Jsiw#stats<https://urldefense.com/v3/__https:/explain.depesz.com/s/Jsiw*stats__;Iw!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1rBCDg9rA$>\r\n\r\n\r\nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\r\n\r\nCould please anyone help or suggest how to improve the query performance.\r\nYou can try create a partial index that help this filter:\r\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\r\nSee at:\r\nhttps://www.postgresql.org/docs/current/indexes-partial.html<https://urldefense.com/v3/__https:/www.postgresql.org/docs/current/indexes-partial.html__;!!KupS4sW4BlfImQPd!M8K66GpB-7DvYJA0HYFVpY9mtO6TaqIGRjTLI2G1WNjwK8KA9I8JaEr9OWwGy5F6fC4Ed5dwEjCf_1quLi3m8Q$>\r\n\r\nregards,\r\nRanier Vilela\r\n\n\n\n\n\n\n\n\n\nHi All ,\r\n\n \nWe request you to please provide some assistance on below issue and it is impacting the migration project.\n \n\nThanks and Regards, \nMukesh Kumar\n\n \n\n\nFrom: Kumar, Mukesh \nSent: Friday, April 15, 2022 11:43 AM\nTo: Bhupendra Babu <[email protected]>\nCc: Michel SALAIS <[email protected]>; Ranier Vilela <[email protected]>; postgres performance list <[email protected]>; MUKESH KUMAR <[email protected]>; [email protected]\nSubject: RE: Query Tunning related to function\n\n\n \nHi Babu ,\r\n\n \nPlease find below the script for the function from Oracle\r\n\n \nHi babu ,\r\n\n \nPlease find attached the script for function from Oracle .\n \nPlease revert in case of any query.\n \nThanks and Regards, \nMukesh Kumar\n \n\nFrom: Bhupendra Babu <[email protected]>\r\n\nSent: Friday, April 15, 2022 3:44 AM\nTo: Kumar, Mukesh <[email protected]>\nCc: Michel SALAIS <[email protected]>; Ranier Vilela <[email protected]>; postgres performance list <[email protected]>;\r\n MUKESH KUMAR <[email protected]>;\r\[email protected]\nSubject: Re: Query Tunning related to function\n\n \n\n\nCan you paste from oracle for\n\n \n\n\nSet lines 10000\n\n\nSelect text from dba_source\n\n\nWhere name =\n\nUPPER('translate_payment_status')\r\nAnd owner = 'IMS_APP'\n\n\n \n\n\nThanks.\n\n\n \n\n\n \n\n\nOn Thu, Apr 14, 2022, 12:07 PM Kumar, Mukesh <[email protected]> wrote:\n\n\n\n\nHi Michael ,\r\n\n \n \nWe tried dropping the below values from the function, but it did not help.\n \nAlso, the values PAID and MANUALLY PAID constitutes about 60 % of the values in table ,  and infact we tried creating\r\n the partial index and it did not help.\n \nThe Strange thing is that we are trying to run this in oracle as we have done the migration recently and it is\r\n running in less than second with same indexes and other database objects . 
I can understand that comparing to oracle is stupidity, but this is only thing where we can compare.\n \nBelow is the query we are running on oracle and comparing in postgres\n \nBelow is the query and plan for same\n \nhttps://explain.depesz.com/s/wktl#stats\n \nAny help would be appreciated.\n \n \n \n\nThanks and Regards,\r\n\nMukesh Kumar\n\n \n\n\nFrom: Michel SALAIS <[email protected]>\r\n\nSent: Thursday, April 14, 2022 11:45 PM\nTo: Kumar, Mukesh <[email protected]>; 'Ranier Vilela' <[email protected]>\nCc: [email protected]; 'MUKESH KUMAR' <[email protected]>\nSubject: RE: Query Tunning related to function\n\n\n \nHi,\n \nThis part of the function is odd and must be dropped:\n         IF (ret_status = payment_rec)\n         THEN\n              ret_status := payment_rec;\n \nI didn’t look really the function code and stopped on the view referenced by the cursor.\nThe view (we know it just by its name) used in the function is a black box for us. Perhaps it is important to begin optimization there!\nIf values 'PAID' and 'MANUALLYPAID' are an important percentage of table rows forcing index use is not a good thing especially when it is done with a non-optimized function.\n \nIf rows with values 'PAID' and 'MANUALLYPAID'  constitute a little percentage of the table, then the partial index plus rewriting the query would be much more efficient\nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\nwhere\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\ngroup by\n  payment_sid_c\n \nIf not, you can gain some performance if you rewrite your query to be like this:\n \nSelect\n  payment_sid_c,\n lms_app.translate_payment_status(payment_sid_c) as paymentstatus\nfrom\n  lms_app.lms_payment_check_request\ngroup by\n  payment_sid_c\nhaving\n lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n \nAnd you can also try to write the query like this:\n \nSelect t.payment_sid_c, lms_app.translate_payment_status(t.payment_sid_c)\nFrom\n(\n  Select\n    payment_sid_c\n  from\n   lms_app.lms_payment_check_request\n  group by\n    payment_sid_c\n  having\n    lms_app.translate_payment_status(payment_sid_c) IN ('PAID', 'MANUALLYPAID')\n) t\n \nRegards\n \n\nMichel SALAIS\n\n\n\nDe : Kumar, Mukesh <[email protected]>\r\n\nEnvoyé : jeudi 14 avril 2022 16:45\nÀ : Ranier Vilela <[email protected]>\nCc : [email protected]; MUKESH KUMAR <[email protected]>\nObjet : RE: Query Tunning related to function\n\n\n \nHi Rainer ,\r\n\n \nWe tried to create the partial ‘index on table but it did not help, and it is taking approx. 7 sec now.\n \nAlso we tried to force the query to use the index by enabling the parameter at session level\n \nset enable_seqscan=false;\n \nand it is still taking the time below is the explain plan for the same\r\n\n \nhttps://explain.depesz.com/s/YRWIW#stats\n \nAlso we running the query which is actually used in application and above query is used in below query. Below\r\n is the explain plan for same.\n \n \nhttps://explain.depesz.com/s/wktl#stats\n \nPlease assist\r\n\n \n \nThanks and Regards,\r\n\nMukesh Kuma\r\n\n \n\nFrom: Ranier Vilela <[email protected]>\r\n\nSent: Thursday, April 14, 2022 7:56 PM\nTo: Kumar, Mukesh <[email protected]>\nCc: [email protected]; MUKESH KUMAR <[email protected]>\nSubject: Re: Query Tunning related to function\n\n \n\n\n\nEm qui., 14 de abr. 
de 2022 às 08:01, Kumar, Mukesh <[email protected]> escreveu:\n\n\n\n\nHi Team,\r\n\n \nWe are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n \nQuery – 1\r\n\n \nSelect * from\r\n\n  (\n  Select payment_sid_c,\n  lms_app.translate_payment_status(payment_sid_c) AS paymentstatus\r\n\n  from\r\n\n  lms_app.lms_payment_check_request\n  group by payment_sid_c) a \r\n\n  where  paymentstatus in ('PAID', 'MANUALLYPAID')\n \n \nThe explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we\r\n have to find out.\n \n \nhttps://explain.depesz.com/s/Jsiw#stats\n \n \nThis query is using a function translate_payment_status on column payment_sid_c whose script is attached in this mail\n \nCould please anyone help or suggest how to improve the query performance.\n\n\n\n\nYou can try create a partial index that help this filter:\n\n\nFilter: ((lms_app.translate_payment_status(payment_sid_c))::text = ANY ('{PAID,MANUALLYPAID}'::text[]))\r\n\n\n\n \n\n\nSee at:\n\n\nhttps://www.postgresql.org/docs/current/indexes-partial.html\n\n\n \n\n\nregards,\n\n\nRanier Vilela", "msg_date": "Fri, 15 Apr 2022 16:46:16 +0000", "msg_from": "\"Kumar, Mukesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Query Tunning related to function" }, { "msg_contents": "On Thu, Apr 14, 2022 at 06:03:33AM +0000, Kumar, Mukesh wrote:\n> We are running the below query in PostgreSQL and its taking approx. 8 to 9 sec to run the query.\n> \n> Query - 1 ...\n> \n> The explain plan and other details are placed at below link for more information. We have checked the indexes on column but in the explain plan it is showing as Seq Scan which we have to find out.\n> \n> https://explain.depesz.com/s/Jsiw#stats\n\nThere's a list of information to provide in the postgres wiki, and people here\nsent you a link to that wiki page on Feb 27, Mar 1, and Apr 12. Your problem\nreport is still missing a lot of that information. Asking for it piece by\npiece would be tedious.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 16 Apr 2022 22:13:15 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Tunning related to function" }, { "msg_contents": "On Sun, Apr 17, 2022 at 8:53 AM Kumar, Mukesh <[email protected]>\nwrote:\n\n> We request you to please provide some assistance on below issue and it is\n> impacting the migration project.\n>\n\nI suggest you try and re-write the loop-based function into a set-oriented\nview.\n\nSpecifically, I think doing: \"array_agg(DISTINCT paymenttype)\" and then\nchecking for various array results will be considerably more efficient.\n\nOr do a combination: write the set-oriented query in an SQL function. You\nshould not need pl/pgsql for this and avoiding it should improve\nperformance.\n\nDavid J.\n\np.s., The convention on these lists is to inline post and remove unneeded\ncontext. Or at least bottom post.\n\nOn Sun, Apr 17, 2022 at 8:53 AM Kumar, Mukesh <[email protected]> wrote:\n\n\nWe request you to please provide some assistance on below issue and it is impacting the migration project.I suggest you try and re-write the loop-based function into a set-oriented view.Specifically, I think doing: \"array_agg(DISTINCT paymenttype)\" and then checking for various array results will be considerably more efficient.Or do a combination: write the set-oriented query in an SQL function.  
You should not need pl/pgsql for this and avoiding it should improve performance.David J.p.s., The convention on these lists is to inline post and remove unneeded context.  Or at least bottom post.", "msg_date": "Sun, 17 Apr 2022 09:47:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Tunning related to function" } ]
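A minimal sketch of the set-oriented rewrite suggested above, assuming a hypothetical payments table behind the function; the real view and the business rules inside translate_payment_status are not shown in this thread, so every identifier, the parameter type, and each CASE branch below is a placeholder that only illustrates the shape of the rewrite, not the actual logic:

CREATE OR REPLACE FUNCTION lms_app.translate_payment_status(p_payment_sid bigint)
RETURNS text
LANGUAGE sql
STABLE
AS $$
    -- One aggregate pass over the payment rows for this sid, replacing the
    -- row-by-row cursor loop; "payments" and "paymenttype" are assumed names.
    SELECT CASE
               WHEN 'MANUAL' = ANY (array_agg(DISTINCT p.paymenttype)) THEN 'MANUALLYPAID'
               WHEN count(*) > 0                                       THEN 'PAID'
               ELSE 'UNPAID'
           END
    FROM lms_app.payments p
    WHERE p.payment_sid_c = p_payment_sid;
$$;

The point of the sketch is that the function does a single aggregate per call instead of looping over a cursor, which is where a PL/pgSQL implementation typically loses time when it is invoked once per group in the outer query.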
[ { "msg_contents": "Hi All,\n\nWe migrated from Oracle 12.1 to Aurora postgres 12.8.1. The query in Oracle\ntakes less than a millisecond however the same query in aurora is taking\nmore than a second. We have a larger number of executions for the SQL which\nis causing an overall latency for the application. I am new to postgres and\ntrying to get some ideas around how better we can optimize. I have the plan\ndetails for the SQL as below. Can someone shed some light on possible ways\nthat can make this query to meet its original execution time?\n\nhttps://explain.depesz.com/s/jlVc#html\n\nThanks,\n\nGoti\n\nHi All,We migrated from Oracle 12.1 to Aurora postgres 12.8.1. The query in Oracle takes less than a millisecond however the same query in aurora is taking more than a second. We have a larger number of executions for the SQL which is causing an overall latency for the application. I am new to postgres and trying to get some ideas around how better we can optimize. I have the plan details for the SQL as below. Can someone shed some light on possible ways that can make this query to meet its original execution time?https://explain.depesz.com/s/jlVc#htmlThanks,Goti", "msg_date": "Thu, 14 Apr 2022 15:05:05 +0530", "msg_from": "Goti <[email protected]>", "msg_from_op": true, "msg_subject": "SQL performance issue after migration from Oracle to Aurora postgres" }, { "msg_contents": "\nOn 2022-04-14 Th 05:35, Goti wrote:\n> Hi All,\n>\n> We migrated from Oracle 12.1 to Aurora postgres 12.8.1. The query in\n> Oracle takes less than a millisecond however the same query in aurora\n> is taking more than a second. We have a larger number of executions\n> for the SQL which is causing an overall latency for the application. I\n> am new to postgres and trying to get some ideas around how better we\n> can optimize. I have the plan details for the SQL as below. Can\n> someone shed some light on possible ways that can make this query to\n> meet its original execution time?\n>\n> https://explain.depesz.com/s/jlVc#html\n>\n\nWithout knowing much about your data I would suggest trying to rewrite\nthe query to get rid of the correlated subselect, using a join instead.\nI note the use of both implicit and explicit joins in your FROM clause,\nwhich is something I always advise against, as it hurts clarity, but\nthat's a matter of style rather than performance.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 14 Apr 2022 10:40:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL performance issue after migration from Oracle to Aurora\n postgres" } ]
[ { "msg_contents": "Greetings Postgres Developers,\n\nI've recently started taking advantage of the PARTITION BY HASH feature for\nmy database system. It's a really great fit since my tables can get quite\nlarge (900M+ rows for some) and splitting them up into manageable chunks\nshould let me upload to them without having to update an enormous index\nevery time. What's more, since each partition has a write lock independent\nof the parent table, it should theoretically be possible to perform a\nparallelized insert operation, provided the data to be added is partitioned\nbeforehand.\n\nWhat has been disappointing is that the query planner doesn't seem to\nrecognize this potential. For example, if I have a large list of input\ndata, and I want to perform a select operation across the target table:\n\n -- target table is hashed on 'textfield' & has a unique index on\n'textfield'\n select * from temp_data td left join target tg on td.textfield =\ntg.textfield;\n\nI would expect to get a query plan like this:\n\n partition temp_data\n parallel scan on\n target_p0 using target_p0_textfield_uniq_idx against temp_data_p0\n target_p1 using target_p1_textfield_uniq_idx against temp_data_p1\n target_p2 using target_p2_textfield_uniq_idx against temp_data_p2\n ...\n\nInstead, I get a seemingly terrible plan like this:\n\n hash temp_data\n sequential scan on\n target_p0 against temp_data\n target_p1 against temp_data\n target_p2 against temp_data\n ...\n\nIt doesn't even make use of the index on the textfield! Instead, it opts to\nhash all of temp_data and perform a sequential scan against it.\n\nIt doesn't help if I partition temp_data by textfield beforehand either\n(using the same scheme as the target table). It still opts to concatenate\nall of temp_data, hash it, then perform a sequential scan against the\ntarget partitions.\n\nOn insert the behaviour is better but it still opts for a sequential insert\ninstead of a parallel one.\n\nDoes the query planner know something I don't? It's my intuition that it\nshould be faster to do a rough counting sort (partition by hash) first, and\nthen do N smaller more accurate sorts in parallel afterwards.\n\nCurrently I am creating a custom script(s) to emulate my desired behaviour,\nbut it would be nice if there was a way to get the query planner to do this\nautomatically. Any tricks to do this would be much appreciated!\n\n-Ben\n\nGreetings Postgres Developers,I've recently started taking advantage of the PARTITION BY HASH feature for my database system. It's a really great fit since my tables can get quite large (900M+ rows for some) and splitting them up into manageable chunks should let me upload to them without having to update an enormous index every time. What's more, since each partition has a write lock independent of the parent table, it should theoretically be possible to perform a parallelized insert operation, provided the data to be added is partitioned beforehand.What has been disappointing is that the query planner doesn't seem to recognize this potential. 
For example, if I have a large list of input data, and I want to perform a select operation across the target table:  -- target table is hashed on 'textfield' & has a unique index on 'textfield'  select * from temp_data td left join target tg on td.textfield = tg.textfield;I would expect to get a query plan like this:  partition temp_data  parallel scan on    target_p0 using target_p0_textfield_uniq_idx against temp_data_p0    target_p1 using target_p1_textfield_uniq_idx against temp_data_p1    target_p2 using target_p2_textfield_uniq_idx against temp_data_p2    ...Instead, I get a seemingly terrible plan like this:  hash temp_data  sequential scan on    target_p0 against temp_data    target_p1 against temp_data    target_p2 against temp_data    ...It doesn't even make use of the index on the textfield! Instead, it opts to hash all of temp_data and perform a sequential scan against it.It doesn't help if I partition temp_data by textfield beforehand either (using the same scheme as the target table). It still opts to concatenate all of temp_data, hash it, then perform a sequential scan against the target partitions.On insert the behaviour is better but it still opts for a sequential insert instead of a parallel one. Does the query planner know something I don't? It's my intuition that it should be faster to do a rough counting sort (partition by hash) first, and then do N smaller more accurate sorts in parallel afterwards.Currently I am creating a custom script(s) to emulate my desired behaviour, but it would be nice if there was a way to get the query planner to do this automatically. Any tricks to do this would be much appreciated!-Ben", "msg_date": "Thu, 14 Apr 2022 16:36:42 -0700", "msg_from": "Benjamin Tingle <[email protected]>", "msg_from_op": true, "msg_subject": "Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "Benjamin Tingle <[email protected]> writes:\n> I've recently started taking advantage of the PARTITION BY HASH feature for\n> my database system. It's a really great fit since my tables can get quite\n> large (900M+ rows for some) and splitting them up into manageable chunks\n> should let me upload to them without having to update an enormous index\n> every time. What's more, since each partition has a write lock independent\n> of the parent table, it should theoretically be possible to perform a\n> parallelized insert operation, provided the data to be added is partitioned\n> beforehand.\n\n> What has been disappointing is that the query planner doesn't seem to\n> recognize this potential.\n\nThat's because there isn't any. The hash partitioning rule has\nbasically nothing to do with any plausible WHERE condition. If you're\nhoping to see partition pruning happen, you need to be using list or\nrange partitions, with operators compatible with your likely WHERE\nconditions.\n\n(I'm of the opinion that the hash partitioning option is more in the\ncategory of a dangerous nuisance than a useful feature. There are some\naround here who will argue otherwise, but they're wrong for exactly the\nreason that it's impossible to prune hash partitions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Apr 2022 12:09:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "Interesting. Why is it impossible to prune hash partitions? Maybe prune\nisn’t the best word, more so use to advantage. 
At the very least, it should\nbe possible to utilize a parallel insert against a table partitioned by\nhash. (Partition query rows, then distribute these rows to parallel workers)\n\nOn Sun, Apr 17, 2022 at 9:09 AM Tom Lane <[email protected]> wrote:\n\n> Benjamin Tingle <[email protected]> writes:\n> > I've recently started taking advantage of the PARTITION BY HASH feature\n> for\n> > my database system. It's a really great fit since my tables can get quite\n> > large (900M+ rows for some) and splitting them up into manageable chunks\n> > should let me upload to them without having to update an enormous index\n> > every time. What's more, since each partition has a write lock\n> independent\n> > of the parent table, it should theoretically be possible to perform a\n> > parallelized insert operation, provided the data to be added is\n> partitioned\n> > beforehand.\n>\n> > What has been disappointing is that the query planner doesn't seem to\n> > recognize this potential.\n>\n> That's because there isn't any. The hash partitioning rule has\n> basically nothing to do with any plausible WHERE condition. If you're\n> hoping to see partition pruning happen, you need to be using list or\n> range partitions, with operators compatible with your likely WHERE\n> conditions.\n>\n> (I'm of the opinion that the hash partitioning option is more in the\n> category of a dangerous nuisance than a useful feature. There are some\n> around here who will argue otherwise, but they're wrong for exactly the\n> reason that it's impossible to prune hash partitions.)\n>\n> regards, tom lane\n>\n-- \n\nBen(t).\n\nInteresting. Why is it impossible to prune hash partitions? Maybe prune isn’t the best word, more so use to advantage. At the very least, it should be possible to utilize a parallel insert against a table partitioned by hash. (Partition query rows, then distribute these rows to parallel workers)On Sun, Apr 17, 2022 at 9:09 AM Tom Lane <[email protected]> wrote:Benjamin Tingle <[email protected]> writes:\n> I've recently started taking advantage of the PARTITION BY HASH feature for\n> my database system. It's a really great fit since my tables can get quite\n> large (900M+ rows for some) and splitting them up into manageable chunks\n> should let me upload to them without having to update an enormous index\n> every time. What's more, since each partition has a write lock independent\n> of the parent table, it should theoretically be possible to perform a\n> parallelized insert operation, provided the data to be added is partitioned\n> beforehand.\n\n> What has been disappointing is that the query planner doesn't seem to\n> recognize this potential.\n\nThat's because there isn't any.  The hash partitioning rule has\nbasically nothing to do with any plausible WHERE condition.  If you're\nhoping to see partition pruning happen, you need to be using list or\nrange partitions, with operators compatible with your likely WHERE\nconditions.\n\n(I'm of the opinion that the hash partitioning option is more in the\ncategory of a dangerous nuisance than a useful feature.  
There are some\naround here who will argue otherwise, but they're wrong for exactly the\nreason that it's impossible to prune hash partitions.)\n\n                        regards, tom lane\n-- Ben(t).", "msg_date": "Sun, 17 Apr 2022 10:07:50 -0700", "msg_from": "Benjamin Tingle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "Benjamin Tingle <[email protected]> writes:\n> Interesting. Why is it impossible to prune hash partitions? Maybe prune\n> isn’t the best word, more so use to advantage. At the very least, it should\n> be possible to utilize a parallel insert against a table partitioned by\n> hash. (Partition query rows, then distribute these rows to parallel workers)\n\nYour plan-shape complaint had nothing to do with insertions; it had\nto do with joining the partitioned table to another table. That\njoin can't be optimized.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Apr 2022 14:20:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "On 2022-Apr-14, Benjamin Tingle wrote:\n\n> It doesn't help if I partition temp_data by textfield beforehand either\n> (using the same scheme as the target table). It still opts to concatenate\n> all of temp_data, hash it, then perform a sequential scan against the\n> target partitions.\n\nDoes it still do that if you set\n SET enable_partitionwise_join TO on;\n? If the partition strategies are identical, that might get you a\nbetter plan. (Actually, in pg13 and upwards the strategies don't need\nto be exactly identical, just \"compatible\".)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n", "msg_date": "Sun, 17 Apr 2022 20:50:14 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "Going to forward my response from an individual thread with Jeff here in\ncase anyone else can help me out.\n\nI wasn't sure if forwarding the message to the mailing list would be\nconsidered as a continuation of the same subject, so I'm just going to\npaste it here. I'm a bit of an email noob :P\n\n-------------------------------------------------------------------------------------------------------------------------\n\nJeff,\n\nFirst off, thanks for the thoughtful response.\n\n@ the first point about write locks\nI think I had/have a misconception about how inserts work in postgres. It's\nmy understanding that postgres will never draft a parallel insert plan for\nany query (except maybe CREATE TABLE AS?) because the process needs to\nacquire an exclusive access write lock to the table it is inserting on. My\nthinking was that since the partitions are treated as separate tables that\nthey can be theoretically inserted to in parallel.\n\n@ the second point about indexes on textfield\nI realized my error on this after I sent the email, indexes do not speed up\nlarge joins, just small ones.\n\n@ the third point about hash joins\nSo this is interesting to me. Your description of how hash joins work\nsounds like the behaviour I would want, yet performing huge joins is where\nmy larger databases have been getting stuck. 
Upon looking back at my code,\nI think I realize perhaps why they were getting stuck. So my database\ndoesn't have just one table, it has three principal tables which relate to\none another: Full disclosure, these databases/tables are distributed\nbetween multiple machines and can get quite enormous (some tables\nindividually are 200+GB)\n\ntab1 (textfield1 text, tf1_id bigint) unique on textfield1\ntab2 (textfield2 text, tf2_id bigint) unique on textfield2\ntab3 (tf1_id_fk bigint, tf2_id_fk bigint) unique on tf1_id_fk, tf2_id_fk\n\nSo as I'm uploading new data (in the form of (textfield1, textfield2)\nentries) I need to record the ID of each matching record on the join(s) or\nnull if there was no match (thus a new record). The way I have been\naccomplishing this so far has been like so:\n\n1. create temporary table upload(textfield1 text, textfield2 text, tf1_id\nbigint, tf2_id bigint);\n2. copy :'source' to upload(textfield1, textfield2);\n3. update upload set tf1_id = tab1.tf1_id from tab1 where upload.textfield1\n= tab1.textfield1;\n4. create temporary table new_textfield1 (textfield1 text, tf1_id bigint);\n5. insert into new_textfield1 (select distinct on (textfield1) textfield1,\nnextval('tf1_id_sequence') as tf1_id from upload where tf1_id is null)\n6. update upload u set tf1_id = ntf1.tf1_id from new_textfield1 ntf1 where\nu.tf1_id is null and u.textfield1 = ntf1.textfield1;\n-- etc. continue process for tab2, tab3\n\nNow, since I wrote that code I've learned about aggregation & window\nfunctions so I can generate the new ids during the big table join rather\nthan after the fact, but the big join \"update\" statement has been where the\nprocess gets stuck for huge tables. I notice that the query planner\ngenerates a different strategy when just selecting data vs \"insert select\"\nor \"update\".\n\n For example, when I write a query to join entries from one huge table to\nanother using a select statement, I get a nice parallel hash join plan like\nyou mention.\n\n> explain select * from hugetable1 join hugetable2 on some_field;\n\n\nResults in:\n\n> Gather\n> Workers Planned: 2\n> -> Parallel Hash Left Join\n> Hash Cond: ((ht1.some_field)::text = (ht2.some_field)::text)\n> -> Parallel Append\n> -> Parallel Seq Scan on hugetable2 ht2\n>\n\nHowever, when I change this to an update or insert select, I get a very\ndifferent plan.\n\n> explain insert into dummy1 (select * from hugetable1 join hugetable2 on\n> some_field)\n> OR\n> explain update hugetable1 ht1 set id = ht2.id from hugetable2 ht2 where\n> ht1.some_field = ht2.some_field\n\n\nResults in:\n\n> Update on hugetable1 ||OR|| Insert on dummy1\n> -> Merge Join\n> Merge Cond: ((ht1.some_field)::text = (ht2.some_field)::text)\n> -> Sort\n> Sort Key: ht1.some_field\n> -> Seq Scan on hugetable1 ht1\n> -> Materialize\n> -> Sort\n> Sort Key: ht2.some_field\n> -> Append\n> -> Seq Scan on hugetable2 ht2\n\n\nMaybe this query should perform similarly to the hash join, but the fact\nremains that I've had databases stuck for weeks on plans like this. The\npartitioning strategy I've adopted has been an attempt to force data\nlocality during the join operation, and so far has been working reasonably\nwell. 
If you have some insight into why these large update/insert\noperations go so slowly, it would be much appreciated.\n\n-Ben\n\nOn Sun, Apr 17, 2022 at 11:50 AM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2022-Apr-14, Benjamin Tingle wrote:\n>\n> > It doesn't help if I partition temp_data by textfield beforehand either\n> > (using the same scheme as the target table). It still opts to concatenate\n> > all of temp_data, hash it, then perform a sequential scan against the\n> > target partitions.\n>\n> Does it still do that if you set\n> SET enable_partitionwise_join TO on;\n> ? If the partition strategies are identical, that might get you a\n> better plan. (Actually, in pg13 and upwards the strategies don't need\n> to be exactly identical, just \"compatible\".)\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"Thou shalt not follow the NULL pointer, for chaos and madness await\n> thee at its end.\" (2nd Commandment for C programmers)\n>\n\n\n-- \n\nBen(t).\n\nGoing to forward my response from an individual thread with Jeff here in case anyone else can help me out.I wasn't sure if forwarding the message to the mailing list would be considered as a continuation of the same subject, so I'm just going to paste it here. I'm a bit of an email noob :P-------------------------------------------------------------------------------------------------------------------------Jeff,First off, thanks for the thoughtful response.@ the first point about write locksI think I had/have a misconception about how inserts work in postgres. It's my understanding that postgres will never draft a parallel insert plan for any query (except maybe CREATE TABLE AS?) because the process needs to acquire an exclusive access write lock to the table it is inserting on. My thinking was that since the partitions are treated as separate tables that they can be theoretically inserted to in parallel. @ the second point about indexes on textfieldI realized my error on this after I sent the email, indexes do not speed up large joins, just small ones.@ the third point about hash joinsSo this is interesting to me. Your description of how hash joins work sounds like the behaviour I would want, yet performing huge joins is where my larger databases have been getting stuck. Upon looking back at my code, I think I realize perhaps why they were getting stuck. So my database doesn't have just one table, it has three principal tables which relate to one another: Full disclosure, these databases/tables are distributed between multiple machines and can get quite enormous (some tables individually are 200+GB)tab1 (textfield1 text, tf1_id bigint) unique on textfield1tab2 (textfield2 text, tf2_id bigint) unique on textfield2tab3 (tf1_id_fk bigint, tf2_id_fk bigint) unique on tf1_id_fk, tf2_id_fkSo as I'm uploading new data (in the form of (textfield1, textfield2) entries) I need to record the ID of each matching record on the join(s) or null if there was no match (thus a new record). The way I have been accomplishing this so far has been like so:1. create temporary table upload(textfield1 text, textfield2 text, tf1_id bigint, tf2_id bigint);2. copy :'source' to upload(textfield1, textfield2);3. update upload set tf1_id = tab1.tf1_id from tab1 where upload.textfield1 = tab1.textfield1;4. create temporary table new_textfield1 (textfield1 text, tf1_id bigint);5. insert into new_textfield1 (select distinct on (textfield1) textfield1, nextval('tf1_id_sequence') as tf1_id from upload where tf1_id is null)6. 
update upload u set tf1_id = ntf1.tf1_id from new_textfield1 ntf1 where u.tf1_id is null and u.textfield1 = ntf1.textfield1;-- etc. continue process for tab2, tab3Now, since I wrote that code I've learned about aggregation & window functions so I can generate the new ids during the big table join rather than after the fact, but the big join \"update\" statement has been where the process gets stuck for huge tables. I notice that the query planner generates a different strategy when just selecting data vs \"insert select\" or \"update\". For example, when I write a query to join entries from one huge table to another using a select statement, I get a nice parallel hash join plan like you mention.explain select * from hugetable1 join hugetable2 on some_field;Results in: Gather    Workers Planned: 2   ->  Parallel Hash Left Join         Hash Cond: ((ht1.some_field)::text = (ht2.some_field)::text)         ->  Parallel Append               ->  Parallel Seq Scan on hugetable2 ht2However, when I change this to an update or insert select, I get a very different plan.explain insert into dummy1 (select *  from hugetable1 join hugetable2 on some_field)ORexplain update hugetable1 ht1 set id = ht2.id from hugetable2 ht2 where ht1.some_field = ht2.some_field Results in:Update on hugetable1 ||OR|| Insert on dummy1   ->  Merge Join         Merge Cond: ((ht1.some_field)::text = (ht2.some_field)::text)         ->  Sort                Sort Key: ht1.some_field               ->  Seq Scan on hugetable1 ht1         ->  Materialize                ->  Sort                     Sort Key: ht2.some_field                     ->  Append                           ->  Seq Scan on hugetable2 ht2Maybe this query should perform similarly to the hash join, but the fact remains that I've had databases stuck for weeks on plans like this. The partitioning strategy I've adopted has been an attempt to force data locality during the join operation, and so far has been working reasonably well. If you have some insight into why these large update/insert operations go so slowly, it would be much appreciated.-BenOn Sun, Apr 17, 2022 at 11:50 AM Alvaro Herrera <[email protected]> wrote:On 2022-Apr-14, Benjamin Tingle wrote:\n\n> It doesn't help if I partition temp_data by textfield beforehand either\n> (using the same scheme as the target table). It still opts to concatenate\n> all of temp_data, hash it, then perform a sequential scan against the\n> target partitions.\n\nDoes it still do that if you set\n  SET enable_partitionwise_join TO on;\n?  If the partition strategies are identical, that might get you a\nbetter plan.  (Actually, in pg13 and upwards the strategies don't need\nto be exactly identical, just \"compatible\".)\n\n-- \nÁlvaro Herrera         PostgreSQL Developer  —  https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n-- Ben(t).", "msg_date": "Wed, 20 Apr 2022 19:11:37 -0700", "msg_from": "Benjamin Tingle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" }, { "msg_contents": "On Wed, Apr 20, 2022 at 07:11:37PM -0700, Benjamin Tingle wrote:\n> @ the first point about write locks\n> I think I had/have a misconception about how inserts work in postgres. 
It's\n> my understanding that postgres will never draft a parallel insert plan for\n> any query (except maybe CREATE TABLE AS?)\n\nIt's correct that DML (INSERT/UPDATE/DELETE) currently is not run in parallel.\nhttps://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n\n> because the process needs to acquire an exclusive access write lock to the\n> table it is inserting on.\n\nBut this is incorrect - DML acquires a relation lock, but not a strong one.\nMultiple processes can insert into a table at once (because the row-excl lock\nlevel is not self-conflicting, to be technical).\nhttps://www.postgresql.org/docs/current/explicit-locking.html\n\nIn fact, that's a design requirement. It's understood that many people would\nbe unhappy if only one client were able to run UPDATEs at once, and that only a\ntoy system would acquire a strong lock for DML.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 21 Apr 2022 06:52:36 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Planner not taking advantage of HASH PARTITION" } ]
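A small, runnable sketch of the partitionwise-join setting Alvaro mentions, using toy two-partition tables; the table and column names are invented and the real tables would of course be far larger:

CREATE TABLE target    (textfield text) PARTITION BY HASH (textfield);
CREATE TABLE target_p0    PARTITION OF target    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE target_p1    PARTITION OF target    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

CREATE TABLE temp_data (textfield text) PARTITION BY HASH (textfield);
CREATE TABLE temp_data_p0 PARTITION OF temp_data FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE temp_data_p1 PARTITION OF temp_data FOR VALUES WITH (MODULUS 2, REMAINDER 1);

SET enable_partitionwise_join TO on;   -- off by default because it adds planning cost
EXPLAIN
SELECT * FROM temp_data td LEFT JOIN target tg ON td.textfield = tg.textfield;
-- When both sides are hash partitioned the same way on the join key, the plan
-- can contain one join per partition pair instead of an Append feeding a
-- single large join over all partitions.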
[ { "msg_contents": "Hi Geeks,\n\nEnv: postgres12\n\nI am new to postgres and coming from an Oracle background. Please excuse me\nif I am not asking valid questions.\n\n I would like to know if postgres performs any transformations when it does\nthe parsing? If yes, is there a way we can get the final transformed query?\n\nThanks,\n\nGoti\n\nHi Geeks,Env: postgres12I am new to postgres and coming from an Oracle background. Please excuse me if I am not asking valid questions. I would like to know if postgres performs any transformations when it does the parsing? If yes, is there a way we can get the final transformed query?Thanks,Goti", "msg_date": "Mon, 18 Apr 2022 19:32:49 +0530", "msg_from": "Goti <[email protected]>", "msg_from_op": true, "msg_subject": "How to find the final transformed query in postgresql" }, { "msg_contents": "Goti <[email protected]> writes:\n> I would like to know if postgres performs any transformations when it does\n> the parsing?\n\nThis might be helpful reading:\n\nhttps://www.postgresql.org/docs/current/overview.html\n\n> If yes, is there a way we can get the final transformed query?\n\nSee debug_print_parse and friends [1]. Depending on what you mean by\n\"final transformed query\", you might instead want debug_print_rewritten,\nor maybe you want the plan, in which case EXPLAIN is a much friendlier\nway to look at it than debug_print_plan.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT\n\n\n", "msg_date": "Mon, 18 Apr 2022 10:13:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to find the final transformed query in postgresql" }, { "msg_contents": "Thanks a lot Tom.\n\nThanks,\n\nGoti\n\n\nOn Mon, Apr 18, 2022 at 7:43 PM Tom Lane <[email protected]> wrote:\n\n> Goti <[email protected]> writes:\n> > I would like to know if postgres performs any transformations when it\n> does\n> > the parsing?\n>\n> This might be helpful reading:\n>\n> https://www.postgresql.org/docs/current/overview.html\n>\n> > If yes, is there a way we can get the final transformed query?\n>\n> See debug_print_parse and friends [1]. Depending on what you mean by\n> \"final transformed query\", you might instead want debug_print_rewritten,\n> or maybe you want the plan, in which case EXPLAIN is a much friendlier\n> way to look at it than debug_print_plan.\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT\n>\n\nThanks a lot Tom.Thanks,GotiOn Mon, Apr 18, 2022 at 7:43 PM Tom Lane <[email protected]> wrote:Goti <[email protected]> writes:\n>  I would like to know if postgres performs any transformations when it does\n> the parsing?\n\nThis might be helpful reading:\n\nhttps://www.postgresql.org/docs/current/overview.html\n\n> If yes, is there a way we can get the final transformed query?\n\nSee debug_print_parse and friends [1].  Depending on what you mean by\n\"final transformed query\", you might instead want debug_print_rewritten,\nor maybe you want the plan, in which case EXPLAIN is a much friendlier\nway to look at it than debug_print_plan.\n\n                        regards, tom lane\n\n[1] https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT", "msg_date": "Mon, 18 Apr 2022 20:01:11 +0530", "msg_from": "Goti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to find the final transformed query in postgresql" } ]
[ { "msg_contents": "Hello All,\n\nI am working on workload testing on a PostgreSQL database.\nUse case: Run workload of 5000 to 11000 transactions and a transaction should have Inserts, Selects, Updates and Selects\nI am using HammerDB, an open source tool to generate work load, and my question here is how to generate workload metrics by transactions per second.\n\nQuestion: Is there a way to get a metrics of queries executed by transactions and the execution times of each SQL with a transaction?\n\nThanks,\nRavi\n\n\n\n\n\n\n\n\n\n\n\nHello All,\n \nI am working on workload testing on a PostgreSQL database. \n\nUse case: Run workload of 5000 to 11000 transactions and a transaction should have Inserts, Selects, Updates and Selects\nI am using HammerDB, an open source tool to generate work load, and my question here is how to generate workload metrics by transactions per second.\n \nQuestion: Is there a way to get a metrics of queries executed by transactions and the execution times of each SQL with a transaction?\n \nThanks,\nRavi", "msg_date": "Tue, 19 Apr 2022 12:35:21 +0000", "msg_from": "\"Patil, Ravi\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to find all SQLs executed by a transaction id?" } ]
[ { "msg_contents": "Hi all;\n\n\nWe are debugging a sql performance issue. We have a sql file with 50,000 \nsimple select statements in it. If I run the file locally it completes \nin less than 15sec.  If I force the local connection to be a tcp/ip \nconnection via psql -h and I get approximately the same results, 15 - 16sec.\n\n\nHowever if we move the file to another server in the same network and \nrun with a psql -h then it runs for more than 10min. Are there any \npostgres specific issues / settings / connection overhead  we should \nlook at? Or is this simply a network issue and fully outside the scope \nof the postgres database?\n\n\nFYI:\n\npostgresql 13\n\n1.5TB of RAM\n\n512GB of buffer_pool\n\n10GB of work_mem\n\n\n\nThanks in advance\n\n\n\n\n", "msg_date": "Tue, 19 Apr 2022 15:00:09 -0600", "msg_from": "Sbob <[email protected]>", "msg_from_op": true, "msg_subject": "significant jump in sql statement timing for on server vs a remote\n connection" }, { "msg_contents": "On Tue, Apr 19, 2022 at 03:00:09PM -0600, Sbob wrote:\n> We are debugging a sql performance issue. We have a sql file with 50,000\n> simple select statements in it. If I run the file locally it completes in\n> less than 15sec.� If I force the local connection to be a tcp/ip connection\n> via psql -h and I get approximately the same results, 15 - 16sec.\n> \n> \n> However if we move the file to another server in the same network and run\n> with a psql -h then it runs for more than 10min. Are there any postgres\n> specific issues / settings / connection overhead� we should look at? Or is\n> this simply a network issue and fully outside the scope of the postgres\n> database?\n\nWhat OS ? What kind of authentication are you using ?\nIs there a connection pooler involved ? Did you try like that ?\n\nDid you test how long it takes to run 10k empty statements locally vs remotely ?\ntime yes 'SELECT;' |head -9999 |psql ... >/dev/null\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 Apr 2022 16:04:21 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant jump in sql statement timing for on server vs a\n remote connection" }, { "msg_contents": "On Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]> wrote:\n\n>\n> However if we move the file to another server in the same network and\n> run with a psql -h then it runs for more than 10min.\n\n\nWhat is the ping time? Packet loss? You can't take for granted that the\nnetwork is good and fast just because they are on the same LAN.\n\nCheers,\n\nJeff\n\nOn Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]> wrote:\nHowever if we move the file to another server in the same network and \nrun with a psql -h then it runs for more than 10min.What is the ping time?  Packet loss? You can't take for granted that the network is good and fast just because they are on the same LAN.Cheers,Jeff", "msg_date": "Wed, 20 Apr 2022 00:17:14 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant jump in sql statement timing for on server vs a\n remote connection" }, { "msg_contents": "On 4/19/22 22:17, Jeff Janes wrote:\n> On Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]> wrote:\n>\n>\n> However if we move the file to another server in the same network and\n> run with a psql -h then it runs for more than 10min.\n>\n>\n> What is the ping time?  Packet loss? 
You can't take for granted that \n> the network is good and fast just because they are on the same LAN.\n>\n> Cheers,\n>\n> Jeff\n\n\nHere is the ping stats:\n\n--- db-primary ping statistics ---\n4 packets transmitted, 4 received, 0% packet loss, time 3000ms\nrtt min/avg/max/mdev = 0.304/0.348/0.400/0.039 ms\n\n\nThis seems pretty good yes?Anything else I could look at?\n\n\n\n\n\n\n\n\n\nOn 4/19/22 22:17, Jeff Janes wrote:\n\n\n\n\nOn Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]>\n wrote:\n\n\n\n However if we move the file to another server in the same\n network and \n run with a psql -h then it runs for more than 10min.\n\n\nWhat is the ping time?  Packet loss? You can't take for\n granted that the network is good and fast just because they\n are on the same LAN.\n\n\nCheers,\n\n\nJeff\n\n\n\n\n\nHere is the ping stats:\n--- db-primary\n ping statistics ---\n \n 4 packets transmitted, 4 received, 0% packet loss, time 3000ms\n \n rtt min/avg/max/mdev = 0.304/0.348/0.400/0.039 ms\n\n\n\nThis seems pretty good yes?\n Anything else I could look at?", "msg_date": "Wed, 20 Apr 2022 09:16:26 -0600", "msg_from": "Sbob <[email protected]>", "msg_from_op": true, "msg_subject": "Re: significant jump in sql statement timing for on server vs a\n remote connection" }, { "msg_contents": "Em qua., 20 de abr. de 2022 às 12:16, Sbob <[email protected]>\nescreveu:\n\n>\n> On 4/19/22 22:17, Jeff Janes wrote:\n>\n> On Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]> wrote:\n>\n>>\n>> However if we move the file to another server in the same network and\n>> run with a psql -h then it runs for more than 10min.\n>\n>\n> What is the ping time? Packet loss? You can't take for granted that the\n> network is good and fast just because they are on the same LAN.\n>\n> Cheers,\n>\n> Jeff\n>\n>\n> Here is the ping stats:\n>\n> --- db-primary ping statistics ---\n> 4 packets transmitted, 4 received, 0% packet loss, time 3000ms\n>\n3000 ms?\nAre sure that haven't packet loss?\n\nregards,\nRanier Vilela\n\nEm qua., 20 de abr. de 2022 às 12:16, Sbob <[email protected]> escreveu:\n\n\n\nOn 4/19/22 22:17, Jeff Janes wrote:\n\n\n\nOn Tue, Apr 19, 2022 at 5:00 PM Sbob <[email protected]>\n wrote:\n\n\n\n However if we move the file to another server in the same\n network and \n run with a psql -h then it runs for more than 10min.\n\n\nWhat is the ping time?  Packet loss? You can't take for\n granted that the network is good and fast just because they\n are on the same LAN.\n\n\nCheers,\n\n\nJeff\n\n\n\n\n\nHere is the ping stats:\n--- db-primary\n ping statistics ---\n \n 4 packets transmitted, 4 received, 0% packet loss, time 3000ms\n 3000 ms?Are sure that haven't packet loss? regards,Ranier Vilela", "msg_date": "Wed, 20 Apr 2022 12:51:55 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant jump in sql statement timing for on server vs a\n remote connection" } ]
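Back-of-the-envelope arithmetic from the numbers reported in this thread (50,000 statements, about 0.35 ms average round trip, more than 10 minutes remotely), plus the psql timing switch that makes the per-statement cost visible from the remote host:

-- 50,000 statements x 0.35 ms RTT       = about 17.5 s of pure network latency
-- 600 s (10 min) / 50,000 statements    = about 12 ms per statement observed
-- so the bulk of the remote time is not explained by the raw round trip;
-- comparing per-statement elapsed time locally and remotely narrows it down.
\timing on        -- psql meta-command: print elapsed time after each statement
SELECT 1;         -- run the same trivial statement from both hosts and compare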
[ { "msg_contents": "Dear mailing list\n\nWe are investigating a strange performance issue with our database.\n\nOur use case is a sensor reading database where we have sensor location \n(called channels), parameter settings (called valueseries) and reading \n(called datavalues). Datavalues is partitioned per month.\nLike this: channel <-(FK)- valueseries <-(FK)- datavalues\n\nWe have a query that returns the latest sensor reading. When we have no \nreadings in datavalues the query is SLOW, when we have 1 or more \nreadings in datavalues the query is FAST. (slow being ~1second, fast \n~5ms) It isn't the slowness that is the main problem, but rather the odd \nbehaviour.\n\nThe query that is giving us issues is the following, channel 752433 has \nNO values, 752431 has values.\n(Channel 752433 only has valueseries 752434)\n\nselect * from datavalue\nwhere dataview in ( select id from valueseries where channel = \n%channel_idx%)\nORDER BY VALUETIMESTAMP DESC\nFETCH FIRST ROW only;\n\t\t\nRunning explain analyze shows strange numbers, 52'000 rows are being \nreturned but there are no rows there.\n\nFor channel 752433\n-> Index Scan Backward using \ndatavalue_2022_03_valuetimestamp_dataview_idx on datavalue_2022_03 \ndatavalue_6 (cost=0.42..7166.19 rows=119673 width=226) (actual \ntime=0.008..32.831 rows=119601 loops=1)\n-> Index Scan Backward using \ndatavalue_2022_04_valuetimestamp_dataview_idx on datavalue_2022_04 \ndatavalue_7 (cost=0.29..4002.79 rows=52499 width=227) (actual \ntime=0.011..15.005 rows=52499 loops=1)\n\t\nFor channel 752431\n-> Index Scan Backward using \ndatavalue_2022_03_valuetimestamp_dataview_idx on datavalue_2022_03 \ndatavalue_6 (cost=0.42..7166.19 rows=119673 width=226) (actual \ntime=0.008..0.008 rows=1 loops=1)\n-> Index Scan Backward using \ndatavalue_2022_04_valuetimestamp_dataview_idx on datavalue_2022_04 \ndatavalue_7 (cost=0.29..4002.79 rows=52499 width=227) (actual \ntime=0.011..0.011 rows=1 loops=1)\t\t\n\t\nInserting even a single row changes the offending rows to the expected \nvalues:\n\ninsert into maclient.datavalue (dataview, valuetimestamp, datavalue) \nvalues (752434, '2022-03-01 00:00:00', 234);\n-> Index Scan Backward using \ndatavalue_2022_03_valuetimestamp_dataview_idx on datavalue_2022_03 \ndatavalue_6 (cost=0.42..7166.19 rows=119673 width=226) (actual \ntime=0.006..0.006 rows=1 loops=1)\n\t\t\t\t\nFull explain analyze on https://paste.depesz.com/s/ZwJ\nwith buffers and track_io_timing: https://paste.depesz.com/s/Ss\n\nDisabling indexscan (set enable_indexscan=false;) hides the problem, it \ndoes not show up with a bitmap index scan.\nRunning autovacuum analyze doesn't seem to help, the results are the same.\n\n\nSELECT version();\n\"PostgreSQL 14.2, compiled by Visual C++ build 1914, 64-bit\"\n\n\nCan anyone explain what is going on here.\n* Why is the database returning 52'000+ rows when it should be returning 0?\n* Is my query badly formulated?\n* Is there something wrong with the indexes and I need to rebuild them?\n\t\nWe are stumped, and would greatly appreciate any input.\n\nRegards\nEmil\n\n\n", "msg_date": "Fri, 22 Apr 2022 16:53:48 +0200", "msg_from": "Emil Iggland <[email protected]>", "msg_from_op": true, "msg_subject": "Performance differential when 0 values present vs when 1 values\n present. Planner return 52k rows when 0 expected." 
}, { "msg_contents": "Emil Iggland <[email protected]> writes:\n> The query that is giving us issues is the following, channel 752433 has \n> NO values, 752431 has values.\n> (Channel 752433 only has valueseries 752434)\n\n> select * from datavalue\n> where dataview in ( select id from valueseries where channel = \n> %channel_idx%)\n> ORDER BY VALUETIMESTAMP DESC\n> FETCH FIRST ROW only;\n\t\t\n> Running explain analyze shows strange numbers, 52'000 rows are being \n> returned but there are no rows there.\n\n> For channel 752433\n> -> Index Scan Backward using \n> datavalue_2022_03_valuetimestamp_dataview_idx on datavalue_2022_03 \n> datavalue_6 (cost=0.42..7166.19 rows=119673 width=226) (actual \n> time=0.008..32.831 rows=119601 loops=1)\n\nYou've got the wrong column order (for this query anyway) in that\nindex. It'd work a lot better if dataview were the first column;\nor at least, it wouldn't tempt the planner to try this unstably-\nperforming plan. It's trying to use the index ordering to satisfy\nthe ORDER BY, which works great as long as it finds a dataview\nmatch in some reasonably recent index entry. Otherwise, it's\ngoing to crawl the whole index to discover that there's no match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Apr 2022 12:00:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance differential when 0 values present vs when 1 values\n present. Planner return 52k rows when 0 expected." }, { "msg_contents": " > You've got the wrong column order (for this query anyway) in that\n > index. It'd work a lot better if dataview were the first column;\nI might be misunderstanding you, but I assume that you are suggesting an \nindex on (dataview, valuetimestamp).\nWe have that index, it is the primary key. For some reason it isn't \nbeing selected.\n\nI can understand that it has to go through the whole index, potentially \neven the whole table, but I do not why it takes so long.\n\nEven a query that should take equally long (probably longer) is \nsubstantially faster:\n\nexplain (analyze, buffers)\nselect valuetimestamp from datavalue\nwhere valuetimestamp <> '1965-01-07 05:50:59';\n\nCompletes in less than 500ms using a sequential scan,\n\n...\n-> Seq Scan on datavalue_2022_04 datavalue_7 (cost=0.00..1450.39 \nrows=56339 width=8) (actual time=0.013..5.988 rows=56109 loops=1)\"\n\tFilter: (valuetimestamp <> '1965-01-07 05:50:59'::timestamp without \ntime zone)\n\tBuffers: shared hit=742 read=4\n...\nPlanning Time: 0.781 ms\nExecution Time: 394.408 ms\n\n\nwhile the original query takes over 1 second.\n...\n-> Index Scan Backward using \ndatavalue_2022_04_valuetimestamp_dataview_idx on datavalue_2022_04 \ndatavalue_7 (cost=0.29..4292.48 rows=56351 width=227) (actual \ntime=0.166..17.340 rows=56109 loops=1)\n\tBuffers: shared hit=42013 read=278\n...\nPlanning Time: 0.964 ms\nExecution Time: 1291.509 ms\n\nI do not understand how looking at every value in the index and \nreturning none be slower than looking at every table in the table and \nreturning none. 
If it takes 500ms to return every value in the table via \na sequential scan, then it should take less via an index scan.\n\n\nIn case we never solve it, and someone else runs into similiar problems, \nwe (hopefully temporarily) worked around it by reformulating the query \nto use a lateral join:\n\nEXPLAIN (analyze, buffers)\nSELECT dv.* FROM valueseries vs\nLEFT JOIN LATERAL (\n\tSELECT * FROM datavalue dv WHERE dv.dataview = vs.id\n\tORDER BY VALUETIMESTAMP\n\tFETCH FIRST 1 ROWS ONLY\n) dv ON TRUE\nwhere vs.channel = 752433\n\nThis causes it to use the correct index:\n-> Index Scan using datavalue_2022_01_pkey on datavalue_2022_01 dv_4 \n(cost=0.42..2951.17 rows=1032 width=228) (actual time=0.034..0.034 \nrows=0 loops=1)\n\tIndex Cond: (dataview = vs.id)\n\tBuffers: shared read=3\n...\nPlanning Time: 1.169 ms\nExecution Time: 0.524 ms\n\n\nRegards\nEmil\n\n\nOn 2022-04-25 18:00, Tom Lane wrote:\n> Emil Iggland <[email protected]> writes:\n>> The query that is giving us issues is the following, channel 752433 has\n>> NO values, 752431 has values.\n>> (Channel 752433 only has valueseries 752434)\n> \n>> select * from datavalue\n>> where dataview in ( select id from valueseries where channel =\n>> %channel_idx%)\n>> ORDER BY VALUETIMESTAMP DESC\n>> FETCH FIRST ROW only;\n> \t\t\n>> Running explain analyze shows strange numbers, 52'000 rows are being\n>> returned but there are no rows there.\n> \n>> For channel 752433\n>> -> Index Scan Backward using\n>> datavalue_2022_03_valuetimestamp_dataview_idx on datavalue_2022_03\n>> datavalue_6 (cost=0.42..7166.19 rows=119673 width=226) (actual\n>> time=0.008..32.831 rows=119601 loops=1)\n> \n> You've got the wrong column order (for this query anyway) in that\n> index. It'd work a lot better if dataview were the first column;\n> or at least, it wouldn't tempt the planner to try this unstably-\n> performing plan. It's trying to use the index ordering to satisfy\n> the ORDER BY, which works great as long as it finds a dataview\n> match in some reasonably recent index entry. Otherwise, it's\n> going to crawl the whole index to discover that there's no match.\n> \n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Apr 2022 09:41:52 +0200", "msg_from": "Emil Iggland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance differential when 0 values present vs when 1 values\n present. Planner return 52k rows when 0 expected." }, { "msg_contents": "On Wed, 27 Apr 2022 at 19:54, Emil Iggland <[email protected]> wrote:\n>\n> > You've got the wrong column order (for this query anyway) in that\n> > index. It'd work a lot better if dataview were the first column;\n\n> I might be misunderstanding you, but I assume that you are suggesting an\n> index on (dataview, valuetimestamp).\n> We have that index, it is the primary key. For some reason it isn't\n> being selected.\n\nI don't think that index can be used for your original query. It could\nonly be used if \"channel\" is unique in \"valueseries\" and you'd written\nthe query as:\n\nselect * from datavalue\nwhere dataview = (select id from valueseries where channel = 752433)\nORDER BY VALUETIMESTAMP DESC\nFETCH FIRST ROW only;\n\nthat would allow a backwards index scan using the (dataview,\nvaluetimestamp) index. Because you're using the IN clause to possibly\nlook for multiple \"dataview\" values matching the given \"channel\", the\nindex range scan does not have a single point to start at. 
What\nyou've done with the LATERAL query allows the index to be scanned once\nfor each \"valueseries\" row with a \"channel\" value matching your WHERE\nclause.\n\nI guess \"channel\" must not be the primary key to \"valueseries\" and\nthat's why you use an IN().\n\nThe above query would return an error if multiple rows were returned\nby the subquery.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Apr 2022 20:22:12 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance differential when 0 values present vs when 1 values\n present. Planner return 52k rows when 0 expected." }, { "msg_contents": " > I don't think that index can be used for your original query. It could\n > only be used if \"channel\" is unique in \"valueseries\" and you'd written\n > the query as:\n\nThanks! That explanation I can understand, now I know how to avoid this \nin future.\n\n > I guess \"channel\" must not be the primary key to \"valueseries\" and\n > that's why you use an IN().\nCorrect. We create a new valueseries in some circumstances, so multiple \nvalueseries can point to the same channel.\n\n\n\n\nOn 2022-04-27 10:22, David Rowley wrote:\n> On Wed, 27 Apr 2022 at 19:54, Emil Iggland <[email protected]> wrote:\n>>\n>> > You've got the wrong column order (for this query anyway) in that\n>> > index. It'd work a lot better if dataview were the first column;\n> \n>> I might be misunderstanding you, but I assume that you are suggesting an\n>> index on (dataview, valuetimestamp).\n>> We have that index, it is the primary key. For some reason it isn't\n>> being selected.\n> \n> I don't think that index can be used for your original query. It could\n> only be used if \"channel\" is unique in \"valueseries\" and you'd written\n> the query as:\n> \n> select * from datavalue\n> where dataview = (select id from valueseries where channel = 752433)\n> ORDER BY VALUETIMESTAMP DESC\n> FETCH FIRST ROW only;\n> \n> that would allow a backwards index scan using the (dataview,\n> valuetimestamp) index. Because you're using the IN clause to possibly\n> look for multiple \"dataview\" values matching the given \"channel\", the\n> index range scan does not have a single point to start at. What\n> you've done with the LATERAL query allows the index to be scanned once\n> for each \"valueseries\" row with a \"channel\" value matching your WHERE\n> clause.\n> \n> I guess \"channel\" must not be the primary key to \"valueseries\" and\n> that's why you use an IN().\n> \n> The above query would return an error if multiple rows were returned\n> by the subquery.\n> \n> David\n\n\n", "msg_date": "Thu, 28 Apr 2022 10:09:55 +0200", "msg_from": "Emil Iggland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance differential when 0 values present vs when 1 values\n present. Planner return 52k rows when 0 expected." } ]
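Following that explanation, here is a sketch of how the original "latest reading for a channel" question can be written so each matching valueseries is probed through the (dataview, valuetimestamp) primary key, with an outer sort keeping only the newest row overall. Table and column names follow the thread; the outer ORDER BY / FETCH FIRST is an assumption about what the result should be when one channel has several valueseries:

    SELECT dv.*
    FROM valueseries vs
    CROSS JOIN LATERAL (
        SELECT *
        FROM datavalue d
        WHERE d.dataview = vs.id          -- probes the primary key once per valueseries
        ORDER BY d.valuetimestamp DESC
        FETCH FIRST 1 ROWS ONLY
    ) dv
    WHERE vs.channel = 752433
    ORDER BY dv.valuetimestamp DESC       -- keep only the newest row across all valueseries
    FETCH FIRST 1 ROWS ONLY;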
[ { "msg_contents": "Hello everyone,\n\n*1) Context*\n\nI'm working with large tables containing arrays of integers, indexed with \"\n*gin__int_ops*\" GIN indexes offered by the \"*intarray*\" extension.\nThe goal I'm trying to achieve is to do a \"nested loop semi join\" using the\narray inclusion operation (@>) as join condition but in an indexed way.\n(Basically an INNER JOIN without the duplicate rows and without needing to\nuse columns from the joined table.)\n\n*2) Configuration*\n\nThe queries are run on a PostgreSQL v14 server with 32GB RAM and 8 vCPUs on\na 64 bit ARM Neoverse architecture (m6g.2xlarge AWS RDS instance).\nPostgreSQL's configuration uses the following key values:\n\n\n - work_mem = 8GB (only set for this query)\n - shared_buffers = 8GB\n - effective_cache_size = 22GB\n - max_worker_processes = 8\n - max_parallel_workers_per_gather = 4\n - jit = on\n\n*3) Tables*\n\nThe \"light_pages_attributes\" contains about 2 million rows, each with an\n\"attributes\" column containing on average 20 integers.\n\nCREATE TABLE\n> light_pages_attributes\n> (\n> id INTEGER NOT NULL,\n> \"attributes\" INTEGER[] NOT NULL\n> )\n> ;\n> CREATE INDEX\n> light_pages_attributes_attributes\n> ON\n> light_pages_attributes\n> USING\n> gin\n> (\n> attributes gin__int_ops\n> )\n> ;\n\n\nThe \"light_pages_views\" contains about 25 million rows, each with a\n\"page_ids\" column containing on average 20 integers as well.\n\nCREATE TABLE\n> light_pages_views\n> (\n> vector_id BIGINT NOT NULL,\n> page_ids INTEGER[] NOT NULL\n> )\n> ;\n> CREATE INDEX\n> light_pages_views_page_ids\n> ON\n> light_pages_views\n> USING\n> gin\n> (\n> page_ids gin__int_ops\n> )\n> ;\n\n\n*4) Query*\n\nThe query I'm trying to optimise is the following:\n\nBEGIN;\n\n\n\nSET LOCAL work_mem = '8GB';\n\n\n\nCREATE TEMPORARY VIEW\n> urls\n> AS\n> (\n> SELECT ARRAY[lpa.id]\n> AS page_id\n> FROM\n> light_pages_attributes\n> AS lpa\n> WHERE\n> lpa.\"attributes\" @> ARRAY[189376]\n> );\n> EXPLAIN (\n> ANALYZE,\n> VERBOSE,\n> COSTS,\n> BUFFERS,\n> TIMING\n> )\n> SELECT\n> COUNT(*)\n> FROM\n> light_pages_views\n> AS lpv\n> WHERE\n> EXISTS (\n> SELECT\n> 1\n> FROM\n> urls\n> AS u\n> WHERE\n> lpv.page_ids @> u.page_id\n> )\n> ;\n\n\n\nCOMMIT;\n\n\nThe last query does not finish after waiting for more than 15 minutes.\n(The temporary view creation is very fast and required due to the same\nquery in a CTE greatly reducing performance (by more than 5 min.) due to\nthe optimisation barrier I'm guessing.)\nThis alternative query, which should be far slower due to the fact that it\ngenerates duplicate lines through the INNER JOIN, is in fact much faster, 1\nmin. 
and 39 s.:\n\nEXPLAIN (\n> ANALYZE,\n> VERBOSE,\n> COSTS,\n> BUFFERS,\n> TIMING\n> )\n> SELECT\n> COUNT(*)\n> FROM\n> (\n> SELECT\n> 1\n> FROM\n> light_pages_views\n> AS lpv\n> INNER JOIN\n> urls\n> AS u\n> ON lpv.page_ids @> u.page_id\n> GROUP BY\n> lpv.vector_id\n> )\n> AS t\n> ;\n\n\nVisual query plan: https://explain.dalibo.com/plan/bc3#plan\nRaw query plan: https://explain.dalibo.com/plan/bc3#raw\n\nOther strategies I've tried as well:\n\n - lpv.page_ids @> ANY(SELECT u.page_id FROM urls AS u)\n - FULL OUTER JOIN, not possible due to the condition not being\n merge-joinable\n\nThe end-goal would be to update all matching \"light_pages_views\" rows by\nappending an integer to their array of integer.\nSo possibly millions of tows to be updated.\n\nThank you a lot in advance for your help!\n\nMickael\n\nHello everyone,1) ContextI'm working with large tables containing arrays of integers, indexed with \"gin__int_ops\" GIN indexes offered by the \"intarray\" extension.The goal I'm trying to achieve is to do a \"nested loop semi join\" using the array inclusion operation (@>) as join condition but in an indexed way.(Basically an INNER JOIN without the duplicate rows and without needing to use columns from the joined table.)2) ConfigurationThe queries are run on a PostgreSQL v14 server with 32GB RAM and 8 vCPUs on a 64 bit ARM Neoverse architecture (m6g.2xlarge AWS RDS instance).PostgreSQL's configuration uses the following key values:work_mem = 8GB (only set for this query)shared_buffers = 8GBeffective_cache_size = 22GBmax_worker_processes = 8max_parallel_workers_per_gather = 4jit = on3) TablesThe \"light_pages_attributes\" contains about 2 million rows, each with an \"attributes\" column containing on average 20 integers.CREATE TABLE  light_pages_attributes  (    id            INTEGER   NOT NULL,    \"attributes\"  INTEGER[] NOT NULL  );CREATE INDEX  light_pages_attributes_attributesON  light_pages_attributesUSING  gin  (    attributes gin__int_ops  );The \"light_pages_views\" contains about 25 million rows, each with a \"page_ids\" column containing on average 20 integers as well.CREATE TABLE  light_pages_views  (    vector_id     BIGINT    NOT NULL,    page_ids      INTEGER[] NOT NULL  );CREATE INDEX  light_pages_views_page_idsON  light_pages_viewsUSING  gin  (    page_ids gin__int_ops  );4) QueryThe query I'm trying to optimise is the following:BEGIN;  SET LOCAL work_mem = '8GB';  CREATE TEMPORARY VIEW  urls  AS  (    SELECT ARRAY[lpa.id]        AS page_id      FROM        light_pages_attributes          AS lpa      WHERE        lpa.\"attributes\" @> ARRAY[189376]  );EXPLAIN (  ANALYZE,  VERBOSE,  COSTS,  BUFFERS,  TIMING)SELECT  COUNT(*)FROM  light_pages_views    AS lpvWHERE  EXISTS (    SELECT      1    FROM      urls        AS u    WHERE      lpv.page_ids @> u.page_id  );  COMMIT;The last query does not finish after waiting for more than 15 minutes.(The temporary view creation is very fast and required due to the same query in a CTE greatly reducing performance (by more than 5 min.) due to the optimisation barrier I'm guessing.)This alternative query, which should be far slower due to the fact that it generates duplicate lines through the INNER JOIN, is in fact much faster, 1 min. 
and 39 s.:EXPLAIN (  ANALYZE,  VERBOSE,  COSTS,  BUFFERS,  TIMING)SELECT  COUNT(*)FROM  (    SELECT      1    FROM      light_pages_views        AS lpv    INNER JOIN      urls        AS u        ON lpv.page_ids @> u.page_id    GROUP BY      lpv.vector_id  )    AS t;Visual query plan: https://explain.dalibo.com/plan/bc3#planRaw query plan: https://explain.dalibo.com/plan/bc3#rawOther strategies I've tried as well:lpv.page_ids @> ANY(SELECT u.page_id FROM urls AS u)FULL OUTER JOIN, not possible due to the condition not being merge-joinableThe end-goal would be to update all matching \"light_pages_views\" rows by appending an integer to their array of integer.So possibly millions of tows to be updated. Thank you a lot in advance for your help!Mickael", "msg_date": "Wed, 27 Apr 2022 14:18:38 +0200", "msg_from": "Mickael van der Beek <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Array of integer indexed nested-loop semi join" }, { "msg_contents": "On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <\[email protected]> wrote:\n\n>\n> The last query does not finish after waiting for more than 15 minutes.\n> (The temporary view creation is very fast and required due to the same\n> query in a CTE greatly reducing performance (by more than 5 min.) due to\n> the optimisation barrier I'm guessing.)\n>\n\nHow much over 15 minutes? 20 minutes doesn't seem that long to wait to get\na likely definitive answer. But at the least show us the EXPLAIN without\nANALYZE of it, that should take no milliseconds.\n\nAnd what does it mean for something to take 5 minutes longer than \"never\nfinishes\"?\n\n(Also, putting every or every other token on a separate line does not make\nit easier to read)\n\nCheer,\n\nJeff\n\n>\n\nOn Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <[email protected]> wrote:The last query does not finish after waiting for more than 15 minutes.(The temporary view creation is very fast and required due to the same query in a CTE greatly reducing performance (by more than 5 min.) due to the optimisation barrier I'm guessing.)How much over 15 minutes?  20 minutes doesn't seem that long to wait to get a likely definitive answer.  But at the least show us the EXPLAIN without ANALYZE of it, that should take no milliseconds.And what does it mean for something to take 5 minutes longer than \"never finishes\"?(Also, putting every or every other token on a separate line does not make it easier to read)Cheer,Jeff", "msg_date": "Wed, 27 Apr 2022 10:28:16 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "Hello Jeff,\n\nI have waited a few hours without the query ever finishing which is the\nreason I said \"never finishes\".\nEspecially because the INNER JOIN version finishes within a few minutes\nwhile being combinatorial and less efficient.\nThe query probably only does sequential scans.\n\nYou will find the query plan using EXPLAIN here:\n- Visual query plan: https://explain.dalibo.com/plan#plan\n- Raw query plan: https://explain.dalibo.com/plan#raw\n\nThanks for your help,\n\nMickael\n\nOn Wed, Apr 27, 2022 at 4:28 PM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <\n> [email protected]> wrote:\n>\n>>\n>> The last query does not finish after waiting for more than 15 minutes.\n>> (The temporary view creation is very fast and required due to the same\n>> query in a CTE greatly reducing performance (by more than 5 min.) 
due to\n>> the optimisation barrier I'm guessing.)\n>>\n>\n> How much over 15 minutes? 20 minutes doesn't seem that long to wait to\n> get a likely definitive answer. But at the least show us the EXPLAIN\n> without ANALYZE of it, that should take no milliseconds.\n>\n> And what does it mean for something to take 5 minutes longer than \"never\n> finishes\"?\n>\n> (Also, putting every or every other token on a separate line does not make\n> it easier to read)\n>\n> Cheer,\n>\n> Jeff\n>\n>>\n\n-- \nMickael van der BeekWeb developer & Security analyst\n\[email protected]\n\nHello Jeff,I have waited a few hours without the query ever finishing which is the reason I said \"never finishes\".Especially because the INNER JOIN version finishes within a few minutes while being combinatorial and less efficient.The query probably only does sequential scans.You will find the query plan using EXPLAIN here:- Visual query plan: https://explain.dalibo.com/plan#plan- Raw query plan: https://explain.dalibo.com/plan#rawThanks for your help,MickaelOn Wed, Apr 27, 2022 at 4:28 PM Jeff Janes <[email protected]> wrote:On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <[email protected]> wrote:The last query does not finish after waiting for more than 15 minutes.(The temporary view creation is very fast and required due to the same query in a CTE greatly reducing performance (by more than 5 min.) due to the optimisation barrier I'm guessing.)How much over 15 minutes?  20 minutes doesn't seem that long to wait to get a likely definitive answer.  But at the least show us the EXPLAIN without ANALYZE of it, that should take no milliseconds.And what does it mean for something to take 5 minutes longer than \"never finishes\"?(Also, putting every or every other token on a separate line does not make it easier to read)Cheer,Jeff\n\n-- Mickael van der BeekWeb developer & Security [email protected]", "msg_date": "Wed, 27 Apr 2022 16:54:35 +0200", "msg_from": "Mickael van der Beek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "Hello Jeff,\n\nSorry for the delay, here are the EXPLAIN ANALYSE results for one single\nrow in the inner-query:\n\nNested Loop Semi Join (cost=10000000993.81..10004731160.70 rows=536206\n> width=28) (actual time=93765.182..93765.183 rows=0 loops=1)\n> Output: fu.w2_page_idxs\n> Join Filter: (fu.w2_page_idxs && (ARRAY[fact_pages.idx]))\n> Rows Removed by Join Filter: 53762825\n> Buffers: shared hit=569194 read=2821768\n> I/O Timings: read=56586.955\n> -> Seq Scan on public.fact_users fu\n> (cost=10000000000.00..10003925857.68 rows=53620568 width=28) (actual\n> time=79.139..67423.779 rows=53762825 loops=1)\n> Output: fu.w2_page_idxs\n> Buffers: shared hit=567884 read=2821768\n> I/O Timings: read=56586.955\n> -> Materialize (cost=993.81..994.50 rows=1 width=32) (actual\n> time=0.000..0.000 rows=1 loops=53762825)\n> Output: (ARRAY[fact_pages.idx])\n> Buffers: shared hit=148\n> -> Limit (cost=993.81..994.48 rows=1 width=32) (actual\n> time=26.382..26.383 rows=1 loops=1)\n> Output: (ARRAY[fact_pages.idx])\n> Buffers: shared hit=148\n> -> Bitmap Heap Scan on public.fact_pages\n> (cost=993.81..70645.00 rows=103556 width=32) (actual time=26.378..26.379\n> rows=1 loops=1)\n> Output: ARRAY[fact_pages.idx]\n> Recheck Cond: (fact_pages.attribute_idxs &&\n> '{300000160}'::integer[])\n> Heap Blocks: exact=1\n> Buffers: shared hit=148\n> -> Bitmap Index Scan on fact_pages_attribute_idxs_int\n> (cost=0.00..967.92 rows=103556 
width=0) (actual time=14.865..14.865\n> rows=101462 loops=1)\n> Index Cond: (fact_pages.attribute_idxs &&\n> '{300000160}'::integer[])\n> Buffers: shared hit=147\n> Query Identifier: 6779965332684941204\n> Planning:\n> Buffers: shared hit=2\n> Planning Time: 0.162 ms\n> JIT:\n> Functions: 10\n> Options: Inlining true, Optimization true, Expressions true, Deforming\n> true\n> Timing: Generation 1.507 ms, Inlining 9.797 ms, Optimization 54.902 ms,\n> Emission 14.314 ms, Total 80.521 ms\n> Execution Time: 93766.772 ms\n\n\nQuery:\n\nEXPLAIN (\n> ANALYZE,\n> VERBOSE,\n> COSTS,\n> BUFFERS,\n> TIMING\n> )\n> SELECT\n> fu.w2_page_idxs\n> FROM\n> fact_users\n> AS fu\n> WHERE\n> EXISTS (\n> SELECT\n> FROM\n> (\n> SELECT\n> ARRAY[idx] AS page_idx\n> FROM\n> fact_pages\n> WHERE\n> attribute_idxs && ARRAY[300000160]\n> FETCH FIRST 1 ROWS ONLY\n> )\n> AS fp\n> WHERE\n> fu.w2_page_idxs && fp.page_idx\n> )\n> ;\n\n\nWithout any surprises, the planner is using a sequential scan on the\n\"fact_users\" table which is very large instead of using the GIN index set\non the \"w2_page_idxs\" column.\n\nLink to the query plan visualiser: https://explain.dalibo.com/plan/1vC\n\nThank you very much in advance,\n\nMickael\n\nOn Wed, Apr 27, 2022 at 4:54 PM Mickael van der Beek <\[email protected]> wrote:\n\n> Hello Jeff,\n>\n> I have waited a few hours without the query ever finishing which is the\n> reason I said \"never finishes\".\n> Especially because the INNER JOIN version finishes within a few minutes\n> while being combinatorial and less efficient.\n> The query probably only does sequential scans.\n>\n> You will find the query plan using EXPLAIN here:\n> - Visual query plan: https://explain.dalibo.com/plan#plan\n> - Raw query plan: https://explain.dalibo.com/plan#raw\n>\n> Thanks for your help,\n>\n> Mickael\n>\n> On Wed, Apr 27, 2022 at 4:28 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <\n>> [email protected]> wrote:\n>>\n>>>\n>>> The last query does not finish after waiting for more than 15 minutes.\n>>> (The temporary view creation is very fast and required due to the same\n>>> query in a CTE greatly reducing performance (by more than 5 min.) due to\n>>> the optimisation barrier I'm guessing.)\n>>>\n>>\n>> How much over 15 minutes? 20 minutes doesn't seem that long to wait to\n>> get a likely definitive answer. 
But at the least show us the EXPLAIN\n>> without ANALYZE of it, that should take no milliseconds.\n>>\n>> And what does it mean for something to take 5 minutes longer than \"never\n>> finishes\"?\n>>\n>> (Also, putting every or every other token on a separate line does not\n>> make it easier to read)\n>>\n>> Cheer,\n>>\n>> Jeff\n>>\n>>>\n>\n> --\n> Mickael van der BeekWeb developer & Security analyst\n>\n> [email protected]\n>\n\n\n-- \nMickael van der BeekWeb developer & Security analyst\n\[email protected]\n\nHello Jeff,Sorry for the delay, here are the EXPLAIN ANALYSE results for one single row in the inner-query:Nested Loop Semi Join  (cost=10000000993.81..10004731160.70 rows=536206 width=28) (actual time=93765.182..93765.183 rows=0 loops=1)  Output: fu.w2_page_idxs  Join Filter: (fu.w2_page_idxs && (ARRAY[fact_pages.idx]))  Rows Removed by Join Filter: 53762825  Buffers: shared hit=569194 read=2821768  I/O Timings: read=56586.955  ->  Seq Scan on public.fact_users fu  (cost=10000000000.00..10003925857.68 rows=53620568 width=28) (actual time=79.139..67423.779 rows=53762825 loops=1)        Output: fu.w2_page_idxs        Buffers: shared hit=567884 read=2821768        I/O Timings: read=56586.955  ->  Materialize  (cost=993.81..994.50 rows=1 width=32) (actual time=0.000..0.000 rows=1 loops=53762825)        Output: (ARRAY[fact_pages.idx])        Buffers: shared hit=148        ->  Limit  (cost=993.81..994.48 rows=1 width=32) (actual time=26.382..26.383 rows=1 loops=1)              Output: (ARRAY[fact_pages.idx])              Buffers: shared hit=148              ->  Bitmap Heap Scan on public.fact_pages  (cost=993.81..70645.00 rows=103556 width=32) (actual time=26.378..26.379 rows=1 loops=1)                    Output: ARRAY[fact_pages.idx]                    Recheck Cond: (fact_pages.attribute_idxs && '{300000160}'::integer[])                    Heap Blocks: exact=1                    Buffers: shared hit=148                    ->  Bitmap Index Scan on fact_pages_attribute_idxs_int  (cost=0.00..967.92 rows=103556 width=0) (actual time=14.865..14.865 rows=101462 loops=1)                          Index Cond: (fact_pages.attribute_idxs && '{300000160}'::integer[])                          Buffers: shared hit=147Query Identifier: 6779965332684941204Planning:  Buffers: shared hit=2Planning Time: 0.162 msJIT:  Functions: 10  Options: Inlining true, Optimization true, Expressions true, Deforming true  Timing: Generation 1.507 ms, Inlining 9.797 ms, Optimization 54.902 ms, Emission 14.314 ms, Total 80.521 msExecution Time: 93766.772 msQuery:EXPLAIN (  ANALYZE,  VERBOSE,  COSTS,  BUFFERS,  TIMING)SELECT  fu.w2_page_idxsFROM  fact_users    AS fuWHERE  EXISTS (    SELECT    FROM      (        SELECT          ARRAY[idx] AS page_idx        FROM          fact_pages        WHERE          attribute_idxs && ARRAY[300000160]        FETCH FIRST 1 ROWS ONLY      )        AS fp    WHERE      fu.w2_page_idxs && fp.page_idx  );Without any surprises, the planner is using a sequential scan on the \"fact_users\" table which is very large instead of using the GIN index set on the \"w2_page_idxs\" column.Link to the query plan visualiser: https://explain.dalibo.com/plan/1vCThank you very much in advance,MickaelOn Wed, Apr 27, 2022 at 4:54 PM Mickael van der Beek <[email protected]> wrote:Hello Jeff,I have waited a few hours without the query ever finishing which is the reason I said \"never finishes\".Especially because the INNER JOIN version finishes within a few minutes while being combinatorial and less efficient.The 
query probably only does sequential scans.You will find the query plan using EXPLAIN here:- Visual query plan: https://explain.dalibo.com/plan#plan- Raw query plan: https://explain.dalibo.com/plan#rawThanks for your help,MickaelOn Wed, Apr 27, 2022 at 4:28 PM Jeff Janes <[email protected]> wrote:On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek <[email protected]> wrote:The last query does not finish after waiting for more than 15 minutes.(The temporary view creation is very fast and required due to the same query in a CTE greatly reducing performance (by more than 5 min.) due to the optimisation barrier I'm guessing.)How much over 15 minutes?  20 minutes doesn't seem that long to wait to get a likely definitive answer.  But at the least show us the EXPLAIN without ANALYZE of it, that should take no milliseconds.And what does it mean for something to take 5 minutes longer than \"never finishes\"?(Also, putting every or every other token on a separate line does not make it easier to read)Cheer,Jeff\n\n-- Mickael van der BeekWeb developer & Security [email protected]\n-- Mickael van der BeekWeb developer & Security [email protected]", "msg_date": "Fri, 20 May 2022 12:42:43 +0200", "msg_from": "Mickael van der Beek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "On Fri, May 20, 2022 at 6:42 AM Mickael van der Beek <\[email protected]> wrote:\n\n>\n> Query:\n>\n> EXPLAIN (\n>> ANALYZE,\n>> VERBOSE,\n>> COSTS,\n>> BUFFERS,\n>> TIMING\n>> )\n>> SELECT\n>> fu.w2_page_idxs\n>> FROM\n>> fact_users\n>> AS fu\n>> WHERE\n>> EXISTS (\n>> SELECT\n>> FROM\n>> (\n>> SELECT\n>> ARRAY[idx] AS page_idx\n>> FROM\n>> fact_pages\n>> WHERE\n>> attribute_idxs && ARRAY[300000160]\n>> FETCH FIRST 1 ROWS ONLY\n>> )\n>> AS fp\n>> WHERE\n>> fu.w2_page_idxs && fp.page_idx\n>> )\n>> ;\n>\n>\n> Without any surprises, the planner is using a sequential scan on the\n> \"fact_users\" table which is very large instead of using the GIN index set\n> on the \"w2_page_idxs\" column.\n>\n\nFor me, using the subquery in and expression, instead of the EXISTS, does\nget it to use the gin index. And I think it must give the same results.\n\nSELECT\n fu.w2_page_idxs\nFROM fact_users AS fu\nWHERE\n fu.w2_page_idxs && ARRAY[(select idx from fact_pages where\nattribute_idxs && ARRAY[3003] FETCH FIRST 1 ROWS ONLY)];\n\nBut why are you using intarray? That is unnecessary here, and by creating\nambiguity about the array operators it might be harmful.\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, May 20, 2022 at 6:42 AM Mickael van der Beek <[email protected]> wrote:Query:EXPLAIN (  ANALYZE,  VERBOSE,  COSTS,  BUFFERS,  TIMING)SELECT  fu.w2_page_idxsFROM  fact_users    AS fuWHERE  EXISTS (    SELECT    FROM      (        SELECT          ARRAY[idx] AS page_idx        FROM          fact_pages        WHERE          attribute_idxs && ARRAY[300000160]        FETCH FIRST 1 ROWS ONLY      )        AS fp    WHERE      fu.w2_page_idxs && fp.page_idx  );Without any surprises, the planner is using a sequential scan on the \"fact_users\" table which is very large instead of using the GIN index set on the \"w2_page_idxs\" column.For me, using the subquery in and expression, instead of the EXISTS, does get it to use the gin index.  And I think it must give the same results.SELECT  fu.w2_page_idxsFROM  fact_users AS fuWHERE      fu.w2_page_idxs && ARRAY[(select idx from fact_pages where attribute_idxs && ARRAY[3003] FETCH FIRST 1 ROWS ONLY)];But why are you using intarray?  
That is unnecessary here, and by creating ambiguity about the array operators it might be harmful. Cheers,Jeff", "msg_date": "Sun, 22 May 2022 22:45:14 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "Hello Jeff,\n\nSadly, the query you suggested won't work because you are only returning\nthe first row of the matching inner query rows.\nExample:\n\nSELECT\n> u.idx,\n> u.page_idxs\n> FROM\n> (\n> VALUES\n> (1, ARRAY[11, 21, 31]),\n> (2, ARRAY[12, 21, 32]),\n> (3, ARRAY[13, 23, 31])\n> )\n> AS u(idx, page_idxs)\n> WHERE\n> u.page_idxs && ARRAY[(\n> SELECT\n> p.idx\n> FROM\n> (\n> VALUES\n> (11, ARRAY[101, 201, 301]),\n> (21, ARRAY[102, 201, 302]),\n> (13, ARRAY[103, 203, 301])\n> )\n> AS p(idx, attribute_idxs)\n> WHERE\n> p.attribute_idxs && ARRAY[201]\n> FETCH FIRST 1 ROWS ONLY\n> )]\n> ;\n\n\nThis query only returns one row while it should actually return two:\n\n1 {11,21,31}\n\n\nThe INNER JOIN version of the query will return all matching rows but also\ninclude duplicates:\n\nSELECT\n> u.idx,\n> u.page_idxs\n> FROM\n> (\n> VALUES\n> (1, ARRAY[11, 21, 31]),\n> (2, ARRAY[12, 21, 32]),\n> (3, ARRAY[13, 23, 31])\n> )\n> AS u(idx, page_idxs)\n> INNER JOIN\n> (\n> SELECT\n> p.idx\n> FROM\n> (\n> VALUES\n> (11, ARRAY[101, 201, 301]),\n> (21, ARRAY[102, 201, 302]),\n> (13, ARRAY[103, 203, 301])\n> )\n> AS p(idx, attribute_idxs)\n> WHERE\n> p.attribute_idxs && ARRAY[201]\n> )\n> AS p2\n> ON u.page_idxs && ARRAY[p2.idx]\n> ;\n\n\nResults:\n\n1 {11,21,31}\n> 1 {11,21,31}\n> 2 {12,21,32}\n\n\nAs far as I know, the the IN + sub-expression query can't work since the\nleft side of the operation is an array of integers and the right side a set\nof rows with a single integer column.\nThe reason I'm using integer arrays is because it is the only way I have\nfound in PostgreSQL to get fast inclusion / exclusion checks on large\ndatasets (hundreds of millions of values).\nDid I misunderstand your response?\nThank you for the ongoing help,\n\nMickael\n\nOn Mon, May 23, 2022 at 4:45 AM Jeff Janes <[email protected]> wrote:\n\n>\n>\n> On Fri, May 20, 2022 at 6:42 AM Mickael van der Beek <\n> [email protected]> wrote:\n>\n>>\n>> Query:\n>>\n>> EXPLAIN (\n>>> ANALYZE,\n>>> VERBOSE,\n>>> COSTS,\n>>> BUFFERS,\n>>> TIMING\n>>> )\n>>> SELECT\n>>> fu.w2_page_idxs\n>>> FROM\n>>> fact_users\n>>> AS fu\n>>> WHERE\n>>> EXISTS (\n>>> SELECT\n>>> FROM\n>>> (\n>>> SELECT\n>>> ARRAY[idx] AS page_idx\n>>> FROM\n>>> fact_pages\n>>> WHERE\n>>> attribute_idxs && ARRAY[300000160]\n>>> FETCH FIRST 1 ROWS ONLY\n>>> )\n>>> AS fp\n>>> WHERE\n>>> fu.w2_page_idxs && fp.page_idx\n>>> )\n>>> ;\n>>\n>>\n>> Without any surprises, the planner is using a sequential scan on the\n>> \"fact_users\" table which is very large instead of using the GIN index set\n>> on the \"w2_page_idxs\" column.\n>>\n>\n> For me, using the subquery in and expression, instead of the EXISTS, does\n> get it to use the gin index. And I think it must give the same results.\n>\n> SELECT\n> fu.w2_page_idxs\n> FROM fact_users AS fu\n> WHERE\n> fu.w2_page_idxs && ARRAY[(select idx from fact_pages where\n> attribute_idxs && ARRAY[3003] FETCH FIRST 1 ROWS ONLY)];\n>\n> But why are you using intarray? 
That is unnecessary here, and by creating\n> ambiguity about the array operators it might be harmful.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\n-- \nMickael van der BeekWeb developer & Security analyst\n\[email protected]\n\nHello Jeff,Sadly, the query you suggested won't work because you are only returning the first row of the matching inner query rows.Example:SELECT  u.idx,  u.page_idxsFROM  (    VALUES      (1, ARRAY[11, 21, 31]),      (2, ARRAY[12, 21, 32]),      (3, ARRAY[13, 23, 31])  )    AS u(idx, page_idxs)WHERE  u.page_idxs && ARRAY[(    SELECT      p.idx    FROM      (        VALUES          (11, ARRAY[101, 201, 301]),          (21, ARRAY[102, 201, 302]),          (13, ARRAY[103, 203, 301])      )        AS p(idx, attribute_idxs)    WHERE      p.attribute_idxs && ARRAY[201]    FETCH FIRST 1 ROWS ONLY  )];This query only returns one row while it should actually return two:1\t{11,21,31}The INNER JOIN version of the query will return all matching rows but also include duplicates:SELECT  u.idx,  u.page_idxsFROM  (    VALUES      (1, ARRAY[11, 21, 31]),      (2, ARRAY[12, 21, 32]),      (3, ARRAY[13, 23, 31])  )    AS u(idx, page_idxs)INNER JOIN  (    SELECT      p.idx    FROM      (        VALUES          (11, ARRAY[101, 201, 301]),          (21, ARRAY[102, 201, 302]),          (13, ARRAY[103, 203, 301])      )        AS p(idx, attribute_idxs)    WHERE      p.attribute_idxs && ARRAY[201]  )  AS p2  ON u.page_idxs && ARRAY[p2.idx];Results:1\t{11,21,31}1\t{11,21,31}2\t{12,21,32}As far as I know, the the IN + sub-expression query can't work since the left side of the operation is an array of integers and the right side a set of rows with a single integer column.The reason I'm using integer arrays is because it is the only way I have found in PostgreSQL to get fast inclusion / exclusion checks on large datasets (hundreds of millions of values). Did I misunderstand your response?Thank you for the ongoing help,MickaelOn Mon, May 23, 2022 at 4:45 AM Jeff Janes <[email protected]> wrote:On Fri, May 20, 2022 at 6:42 AM Mickael van der Beek <[email protected]> wrote:Query:EXPLAIN (  ANALYZE,  VERBOSE,  COSTS,  BUFFERS,  TIMING)SELECT  fu.w2_page_idxsFROM  fact_users    AS fuWHERE  EXISTS (    SELECT    FROM      (        SELECT          ARRAY[idx] AS page_idx        FROM          fact_pages        WHERE          attribute_idxs && ARRAY[300000160]        FETCH FIRST 1 ROWS ONLY      )        AS fp    WHERE      fu.w2_page_idxs && fp.page_idx  );Without any surprises, the planner is using a sequential scan on the \"fact_users\" table which is very large instead of using the GIN index set on the \"w2_page_idxs\" column.For me, using the subquery in and expression, instead of the EXISTS, does get it to use the gin index.  And I think it must give the same results.SELECT  fu.w2_page_idxsFROM  fact_users AS fuWHERE      fu.w2_page_idxs && ARRAY[(select idx from fact_pages where attribute_idxs && ARRAY[3003] FETCH FIRST 1 ROWS ONLY)];But why are you using intarray?  That is unnecessary here, and by creating ambiguity about the array operators it might be harmful. 
Cheers,Jeff\n\n-- Mickael van der BeekWeb developer & Security [email protected]", "msg_date": "Mon, 23 May 2022 09:57:25 +0200", "msg_from": "Mickael van der Beek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "On Mon, May 23, 2022 at 3:57 AM Mickael van der Beek <\[email protected]> wrote:\n\n> Hello Jeff,\n>\n> Sadly, the query you suggested won't work because you are only returning\n> the first row of the matching inner query rows.\n>\n\nSure, but the query I replaced did the same thing. (I thought that was\nwhat you wanted, but I guess that was just to get it to run fast enough to\never finish--in that case it is probably better to use EXPLAIN without the\nANALYZE so that we can see the plan of the correct query). To get around\nthat one-row limit you have to write it somewhat differently, getting rid\nof the ARRAY and adding an array_agg():\n\nSELECT fu.*\nFROM\n fact_users AS fu\nWHERE\n fu.w2_page_idxs && (select array_agg(idx) from fact_pages where\nattribute_idxs && ARRAY[201]);\n\nThis way of writing it is better, as it still works with the LIMIT 1 but\nalso works without it. This still uses the indexes for me, at least when\nenable_seqscan is off.\n\n\n> The INNER JOIN version of the query will return all matching rows but also\n> include duplicates:\n>\n\nYou could just add a DISTINCT to get rid of the duplicates. Of course that\nwill also take some time on a large returned data set, but probably less\ntime than scanning a giant table. I think this is probably cleaner than\nthe alternatives.\n\n\n>\n> The reason I'm using integer arrays is because it is the only way I have\n> found in PostgreSQL to get fast inclusion / exclusion checks on large\n> datasets (hundreds of millions of values).\n> Did I misunderstand your response?\n>\n\nI don't know if you misunderstood. I meant specifically the intarray\nextension. You can use integer arrays with built-in GIN indexes without\nhelp from the intarray extension. Maybe you know that already and are just\nsaying that the extension is even faster than the built-in indexed\noperators are and you need that extra speed.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, May 23, 2022 at 3:57 AM Mickael van der Beek <[email protected]> wrote:Hello Jeff,Sadly, the query you suggested won't work because you are only returning the first row of the matching inner query rows.Sure, but the query I replaced did the same thing.  (I thought that was what you wanted, but I guess that was just to get it to run fast enough to ever finish--in that case it is probably better to use EXPLAIN without the ANALYZE so that we can see the plan of the correct query).  To get around that one-row limit you have to write it somewhat differently, getting rid of the ARRAY and adding an array_agg():SELECT fu.*FROM  fact_users AS fuWHERE      fu.w2_page_idxs && (select array_agg(idx) from fact_pages where attribute_idxs && ARRAY[201]);This way of writing it is better, as it still works with the LIMIT 1 but also works without it.  This still uses the indexes for me, at least when enable_seqscan is off.The INNER JOIN version of the query will return all matching rows but also include duplicates:You could just add a DISTINCT to get rid of the duplicates.  Of course that will also take some time on a large returned data set, but probably less time than scanning a giant table.  I think this is probably cleaner than the alternatives. 
The reason I'm using integer arrays is because it is the only way I have found in PostgreSQL to get fast inclusion / exclusion checks on large datasets (hundreds of millions of values). Did I misunderstand your response?I don't know if you misunderstood.  I meant specifically the intarray extension.  You can use integer arrays with built-in GIN indexes without help from the intarray extension.  Maybe you know that already and are just saying that the extension is even faster than the built-in indexed operators are and you need that extra speed.Cheers,Jeff", "msg_date": "Mon, 23 May 2022 10:10:48 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array of integer indexed nested-loop semi join" }, { "msg_contents": "Hello Jeff,\n\nThank you again for your advice.\n\nI did indeed think of the ARRAY_AGG() version of the query.\nAlthough this method is very fast (and does use indexes) for smallish array\nsizes, this is sadly not practical in my case because the arrays of\nmatching rows can reach multiple hundreds of thousands of rows.\nI thought of maybe \"batching\" the ARRAY_AGG() in batches of max n rows in a\nsubquery and then calculating intersection on that but it doesn't seem\npractical or faster in the end.\n\n> You could just add a DISTINCT to get rid of the duplicates. Of course\nthat will also take some time on a large returned data set, but probably\nless time than scanning a giant table. I think this is probably cleaner\nthan the alternatives.\n\nYes, and a GROUP BY will do the trick as well.\nThe fact that the current solution is a \"nested loop\" instead of a \"nested\nloop semi join\" means that the query is much slower due to needing to GROUP\nBY the rows.\nThis is why I tried various version using EXISTS, ANY, ARRAY_AGG(), etc\nwith no avail.\nWould you have an idea on why PostgreSQL doesn't use the existing indexes\nfor this type of subqueries ?\n\n> I don't know if you misunderstood. I meant specifically the intarray\nextension. You can use integer arrays with built-in GIN indexes without\nhelp from the intarray extension. Maybe you know that already and are just\nsaying that the extension is even faster than the built-in indexed\noperators are and you need that extra speed.\n\nAre there specific advantages to not using the intarray extension and it's\nindexes in this case?\nI was under the impression that it supported more operation types and was\ngenerally faster for this niche use case.\n\nThank you again for your help,\n\nMickael\n\n\n\nOn Mon, May 23, 2022 at 4:11 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, May 23, 2022 at 3:57 AM Mickael van der Beek <\n> [email protected]> wrote:\n>\n>> Hello Jeff,\n>>\n>> Sadly, the query you suggested won't work because you are only returning\n>> the first row of the matching inner query rows.\n>>\n>\n> Sure, but the query I replaced did the same thing. (I thought that was\n> what you wanted, but I guess that was just to get it to run fast enough to\n> ever finish--in that case it is probably better to use EXPLAIN without the\n> ANALYZE so that we can see the plan of the correct query). To get around\n> that one-row limit you have to write it somewhat differently, getting rid\n> of the ARRAY and adding an array_agg():\n>\n> SELECT fu.*\n> FROM\n> fact_users AS fu\n> WHERE\n> fu.w2_page_idxs && (select array_agg(idx) from fact_pages where\n> attribute_idxs && ARRAY[201]);\n>\n> This way of writing it is better, as it still works with the LIMIT 1 but\n> also works without it. 
This still uses the indexes for me, at least when\n> enable_seqscan is off.\n>\n>\n>> The INNER JOIN version of the query will return all matching rows but\n>> also include duplicates:\n>>\n>\n> You could just add a DISTINCT to get rid of the duplicates. Of course\n> that will also take some time on a large returned data set, but probably\n> less time than scanning a giant table. I think this is probably cleaner\n> than the alternatives.\n>\n>\n>>\n>> The reason I'm using integer arrays is because it is the only way I have\n>> found in PostgreSQL to get fast inclusion / exclusion checks on large\n>> datasets (hundreds of millions of values).\n>> Did I misunderstand your response?\n>>\n>\n> I don't know if you misunderstood. I meant specifically the intarray\n> extension. You can use integer arrays with built-in GIN indexes without\n> help from the intarray extension. Maybe you know that already and are just\n> saying that the extension is even faster than the built-in indexed\n> operators are and you need that extra speed.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\n-- \nMickael van der BeekWeb developer & Security analyst\n\[email protected]\n\nHello Jeff,Thank you again for your advice.I did indeed think of the ARRAY_AGG() version of the query.Although this method is very fast (and does use indexes) for smallish array sizes, this is sadly not practical in my case because the arrays of matching rows can reach multiple hundreds of thousands of rows.I thought of maybe \"batching\" the ARRAY_AGG() in batches of max n rows in a subquery and then calculating intersection on that but it doesn't seem practical or faster in the end.> You could just add a DISTINCT to get rid of the duplicates.  Of course that will also take some time on a large returned data set, but probably less time than scanning a giant table.  I think this is probably cleaner than the alternatives.Yes, and a GROUP BY will do the trick as well.The fact that the current solution is a \"nested loop\" instead of a \"nested loop semi join\" means that the query is much slower due to needing to GROUP BY the rows.This is why I tried various version using EXISTS, ANY, ARRAY_AGG(), etc with no avail.Would you have an idea on why PostgreSQL doesn't use the existing indexes for this type of subqueries ?> I don't know if you misunderstood.  I meant specifically the intarray extension.  You can use integer arrays with built-in GIN indexes without help from the intarray extension.  Maybe you know that already and are just saying that the extension is even faster than the built-in indexed operators are and you need that extra speed.Are there specific advantages to not using the intarray extension and it's indexes in this case?I was under the impression that it supported more operation types and was generally faster for this niche use case.Thank you again for your help,Mickael On Mon, May 23, 2022 at 4:11 PM Jeff Janes <[email protected]> wrote:On Mon, May 23, 2022 at 3:57 AM Mickael van der Beek <[email protected]> wrote:Hello Jeff,Sadly, the query you suggested won't work because you are only returning the first row of the matching inner query rows.Sure, but the query I replaced did the same thing.  (I thought that was what you wanted, but I guess that was just to get it to run fast enough to ever finish--in that case it is probably better to use EXPLAIN without the ANALYZE so that we can see the plan of the correct query).  
To get around that one-row limit you have to write it somewhat differently, getting rid of the ARRAY and adding an array_agg():SELECT fu.*FROM  fact_users AS fuWHERE      fu.w2_page_idxs && (select array_agg(idx) from fact_pages where attribute_idxs && ARRAY[201]);This way of writing it is better, as it still works with the LIMIT 1 but also works without it.  This still uses the indexes for me, at least when enable_seqscan is off.The INNER JOIN version of the query will return all matching rows but also include duplicates:You could just add a DISTINCT to get rid of the duplicates.  Of course that will also take some time on a large returned data set, but probably less time than scanning a giant table.  I think this is probably cleaner than the alternatives. The reason I'm using integer arrays is because it is the only way I have found in PostgreSQL to get fast inclusion / exclusion checks on large datasets (hundreds of millions of values). Did I misunderstand your response?I don't know if you misunderstood.  I meant specifically the intarray extension.  You can use integer arrays with built-in GIN indexes without help from the intarray extension.  Maybe you know that already and are just saying that the extension is even faster than the built-in indexed operators are and you need that extra speed.Cheers,Jeff\n\n-- Mickael van der BeekWeb developer & Security [email protected]", "msg_date": "Mon, 23 May 2022 16:46:41 +0200", "msg_from": "Mickael van der Beek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Array of integer indexed nested-loop semi join" } ]
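For reference, a sketch of the de-duplicated join written directly against the tables from the first message of this thread, without the temporary view; vector_id is the row key there, and whether the planner actually chooses the GIN index still depends on how selective the attribute filter is:

    SELECT lpv.vector_id
    FROM light_pages_views AS lpv
    JOIN light_pages_attributes AS lpa
      ON lpv.page_ids && ARRAY[lpa.id]          -- array-overlap join, GIN-indexable
    WHERE lpa."attributes" && ARRAY[189376]     -- attribute filter from the first message
    GROUP BY lpv.vector_id;                     -- removes the duplicates the join introduces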
[ { "msg_contents": "I noticed an issue in a simple query with WHERE NOT IN (SELECT ...). I am\naware that anti-joins with NOT IN are currently not optimized and should be\nrewritten as WHERE NOT EXISTS (SELECT ...), so if this is irrelevant please\njust ignore it.\n\nHere is a setup that works:\n\nCREATE TABLE a\n(\n\ta_id serial NOT NULL,\n\tPRIMARY KEY (a_id)\n);\nCREATE TABLE b\n(\n\tb_id serial NOT NULL,\n\ta_id int NOT NULL,\n\tPRIMARY KEY (b_id)\n);\n\nINSERT INTO a(a_id) SELECT generate_series(1, 20000);\nINSERT INTO b(b_id, a_id) SELECT generate_series(1, 500000), floor(random()\n* 22000 + 1)::int;\n\nANALYZE a;\nANALYZE b;\n\nEXPLAIN SELECT count(*) FROM b WHERE a_id NOT IN (SELECT a_id FROM a);\n\nFinalize Aggregate (cost=7596.23..7596.24 rows=1 width=8)\n -> Gather (cost=7596.12..7596.23 rows=1 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=6596.12..6596.13 rows=1 width=8)\n -> Parallel Seq Scan on b (cost=339.00..6228.47 rows=147059\nwidth=0)\n Filter: (NOT (hashed SubPlan 1))\n SubPlan 1\n -> Seq Scan on a (cost=0.00..289.00 rows=20000\nwidth=4)\n\nFiddle:\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=497ab1d5eec6e02d4d1c0f6630b6f1\nf1\n\nNow if you change\nINSERT INTO a(a_id) SELECT generate_series(1, 20000);\nto\nINSERT INTO a(a_id) SELECT generate_series(1, 200000);\ni.e. add a zero, the plan becomes this:\n\nFinalize Aggregate (cost=759860198.41..759860198.42 rows=1 width=8)\n -> Gather (cost=759860198.29..759860198.40 rows=1 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=759859198.29..759859198.30 rows=1\nwidth=8)\n -> Parallel Seq Scan on b (cost=0.00..759858830.65\nrows=147059 width=0)\n Filter: (NOT (SubPlan 1))\n SubPlan 1\n -> Materialize (cost=0.00..4667.00 rows=200000\nwidth=4)\n -> Seq Scan on a (cost=0.00..2885.00\nrows=200000 width=4)\n\nFiddle:\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=bec018196195635cb6ec05ccae3213\n7c\n\n\n\n\n", "msg_date": "Thu, 28 Apr 2022 02:52:57 +0200", "msg_from": "=?iso-8859-1?Q?Andr=E9_H=E4nsel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Unworkable plan above certain row count" }, { "msg_contents": "=?iso-8859-1?Q?Andr=E9_H=E4nsel?= <[email protected]> writes:\n> Now if you change\n> INSERT INTO a(a_id) SELECT generate_series(1, 20000);\n> to\n> INSERT INTO a(a_id) SELECT generate_series(1, 200000);\n> i.e. add a zero, the plan becomes [ not a hashed subplan ]\n\nYeah, it won't hash the subplan if the estimated size of the hash\ntable exceeds work_mem. In this case, boosting work_mem would be\na mighty good idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Apr 2022 22:08:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unworkable plan above certain row count" } ]
[ { "msg_contents": "Hi all,\n\nI have a table with time series data and on this table a trigger for \nnotifies:\n\ncontainers_notify AFTER INSERT ON containers FOR EACH ROW EXECUTE \nPROCEDURE containers_notify('containers_notify_collector')\n\nand the function does:\n\nPERFORM pg_notify(CAST(TG_ARGV[0] AS text), row_to_json(NEW)::text);\n\nso that another application (java) fetches every inserted row as a JSON \nfor further processing every half second:\n\n...listenStatement.execute(\"LISTEN 'containers_notify_collector'\");\n...PGNotification notifications[] = \n((org.postgresql.PGConnection)notifyPGCon.getUnderlyingConnection()).getNotifications(); \n\n\nThis works as a charm but occasionally (I think with more load on the \nsystem) the notifications are received much time (up to hours!) after \nthe INSERTs.\nNevertheless no notifications become lost, they are only very late! The \ndelay grows, seems as a queue grows, but the java process tries to fetch \nthe notifications fairly fast,\nso there should be no queue growing..\n\nVersions:\nPostgreSQL 10.12 on x86_64-pc-linux-gnu, compiled by \nx86_64-pc-linux-gnu-gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit\nJDBC 42.2.23\n\nThe commit of the application inserting the data is ok/fast. So the \ninsert of the data is not slowed down.\nAre the notifications delivered asynchronously to the commit/trigger?\n\nThanks for any help,\n\nPeter\n\n\n\n\n\n\n", "msg_date": "Thu, 28 Apr 2022 16:28:15 +0200", "msg_from": "\"Peter Eser HEUFT [Germany]\" <[email protected]>", "msg_from_op": true, "msg_subject": "LISTEN NOTIFY sometimes huge delay" }, { "msg_contents": "\"Peter Eser HEUFT [Germany]\" <[email protected]> writes:\n> I have a table with time series data and on this table a trigger for \n> notifies:\n> containers_notify AFTER INSERT ON containers FOR EACH ROW EXECUTE \n> PROCEDURE containers_notify('containers_notify_collector')\n> and the function does:\n> PERFORM pg_notify(CAST(TG_ARGV[0] AS text), row_to_json(NEW)::text);\n\n> This works as a charm but occasionally (I think with more load on the \n> system) the notifications are received much time (up to hours!) after \n> the INSERTs.\n> Nevertheless no notifications become lost, they are only very late! The \n> delay grows, seems as a queue grows, but the java process tries to fetch \n> the notifications fairly fast,\n\nHm. We've not previously had reports of late notifications. One idea\nthat comes to mind is that the server won't deliver notifications as\nlong as the client has an open transaction, so is it possible your\nlistening process sometimes forgets to close its transaction?\n\n> Versions:\n> PostgreSQL 10.12 on x86_64-pc-linux-gnu, compiled by \n> x86_64-pc-linux-gnu-gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit\n> JDBC 42.2.23\n\nThat's pretty old. We've made a number of changes to the LISTEN/NOTIFY\ncode since then; although in reading the commit log entries about them,\nnothing is said about long-delayed notifications.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Apr 2022 12:38:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN NOTIFY sometimes huge delay" } ]
[ { "msg_contents": "Hi,\nWe are trying to COPY a few tables from Oracle to Postgres and getting the\nfollowing error. Data gets partially copied. Table does not have any huge\ndata; there are 4 numeric columns and 1 vahchar column. Could you please\nhelp?\n\nFATAL:canceling authentication due to timeout\n\n\nRegards,\nAditya.\n\nHi,We are trying to COPY a few tables from Oracle to Postgres and getting the following error. Data gets partially copied.  Table does not have any huge data; there are 4 numeric columns and 1 vahchar column. Could you please help?FATAL:canceling authentication due to timeout Regards,Aditya.", "msg_date": "Fri, 29 Apr 2022 16:29:22 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "FATAL: canceling authentication due to timeout" }, { "msg_contents": "Hi,\nWe are trying to COPY a few tables from Oracle to Postgres and getting the\nfollowing error. Data gets partially copied. Table does not have any huge\ndata; there are 4 numeric columns and 1 vahchar column. Could you please\nhelp?\n\nFATAL:canceling authentication due to timeout\n\n\nRegards,\nAditya.\n\nHi,We are trying to COPY a few tables from Oracle to Postgres and getting the following error. Data gets partially copied.  Table does not have any huge data; there are 4 numeric columns and 1 vahchar column. Could you please help?FATAL:canceling authentication due to timeout Regards,Aditya.", "msg_date": "Fri, 29 Apr 2022 16:53:30 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: FATAL: canceling authentication due to timeout" } ]
[ { "msg_contents": "Hello,\n\nI have come across a plan that should never get generated IMHO:\n\nSELECT 1\nFROM extdataregular e1\nINNER JOIN extdataempty e2 ON e1.field = e2.field AND e1.index = e2.index\n\ngenerates the following plan:\n\nNested Loop (cost=1.13..528540.89 rows=607604 width=4) (actual time=9298.504..9298.506 rows=0 loops=1)\n -> Index Only Scan using pk_extdataempty on extdataempty e2 (cost=0.56..157969.52 rows=4078988 width=16) (actual time=0.026..641.248 rows=4067215 loops=1)\n Heap Fetches: 268828\n -> Memoize (cost=0.58..0.67 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=4067215)\n Cache Key: e2.field, e2.index\n Cache Mode: logical\n Hits: 0 Misses: 4067215 Evictions: 3228355 Overflows: 0 Memory Usage: 65537kB\n Buffers: shared hit=16268863\n -> Index Only Scan using pk_extdataregular on extdataregular e1 (cost=0.57..0.66 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=4067215)\n Index Cond: ((field = e2.field) AND (index = e2.index))\n Heap Fetches: 2\n\nPlease note that the memoize node has no cache hits, which is not surprising given that we are joining on two primary keys that are unique by definition (\"field\" and \"index\" make up the primary key of both tables).\nWhy would it ever make sense to generate a memoize plan for a unique join?\n\nI think this issue might tie in with the current discussion over on the hackers mailing list [1]\n\nCheers, Ben\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvpFsSJAThNLtqaWvA7axQd-VOFct%3DFYQN5muJV-sYtXjw%40mail.gmail.com\n\n-- \n\nBejamin Coutu\[email protected]\n\nZeyOS GmbH & Co. KG\nhttp://www.zeyos.com\n\n\n", "msg_date": "Tue, 03 May 2022 13:05:30 +0200", "msg_from": "Benjamin Coutu <[email protected]>", "msg_from_op": true, "msg_subject": "Useless memoize path generated for unique join on primary keys" }, { "msg_contents": "On Tue, 3 May 2022 at 23:05, Benjamin Coutu <[email protected]> wrote:\n> -> Memoize (cost=0.58..0.67 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=4067215)\n> Cache Key: e2.field, e2.index\n> Cache Mode: logical\n> Hits: 0 Misses: 4067215 Evictions: 3228355 Overflows: 0 Memory Usage: 65537kB\n> Buffers: shared hit=16268863\n> -> Index Only Scan using pk_extdataregular on extdataregular e1 (cost=0.57..0.66 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=4067215)\n> Index Cond: ((field = e2.field) AND (index = e2.index))\n\n> Why would it ever make sense to generate a memoize plan for a unique join?\n\nIt wouldn't ever make sense.\n\nThe problem is that estimate_num_groups() is used to estimate the\nnumber of distinct values and that function does not know about\nprimary keys. There's no way the costing of Memoize would allow a\nMemoize plan to be used if it thought all values were unique, so the\nonly possibility here is that ndistinct is being underestimated by\nsome amount that makes Memoize look like the most favourable plan.\n\nYou could see what the planner thinks about the ndistinct estimate on\nfield, index by doing:\n\nEXPLAIN SELECT field,index FROM extdataregular GROUP BY 1,2;\n\nWhatever you see in the final row estimate for that plan is what's\nbeing fed into the Memoize costing code.\n\n> I think this issue might tie in with the current discussion over on the hackers mailing list [1]\n\nI'd say it's a pretty different problem. The cache hit ratio\ndiscussion on that thread talks about underestimating the hit ratio.\nThat particular problem could only lead to Memoize plans *not* being\nchosen when they maybe should be. 
Not the other way around, which is\nyour case.\n\ncreate statistics extdataregular_field_index_stats (ndistinct) on\nfield, index from extdataregular;\nanalyze extdataregular;\n\nwould likely put that right.\n\nDavid\n\n\n", "msg_date": "Tue, 3 May 2022 23:43:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Useless memoize path generated for unique join on primary keys" }, { "msg_contents": "> I'd say it's a pretty different problem. The cache hit ratio\n> discussion on that thread talks about underestimating the hit ratio.\n> That particular problem could only lead to Memoize plans *not* being\n> chosen when they maybe should be. Not the other way around, which is\n> your case.\n> \n> create statistics extdataregular_field_index_stats (ndistinct) on\n> field, index from extdataregular;\n> analyze extdataregular;\n> \n> would likely put that right.\n\nThanks David, using extended statistics for both (and only for both) tables solved this problem.\n\nBTW, thank you for all your work on performance in recent releases.\n\n\n", "msg_date": "Tue, 03 May 2022 14:21:46 +0200", "msg_from": "Benjamin Coutu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Useless memoize path generated for unique join on primary keys" }, { "msg_contents": "On Wed, 4 May 2022 at 00:21, Benjamin Coutu <[email protected]> wrote:\n> Thanks David, using extended statistics for both (and only for both) tables solved this problem.\n\nOh, whoops. I did get that backwards. The estimate used by the\nMemoize costing code is from the outer side of the join, which is the\nextdataempty in this case. I don't think the\nextdataregular_field_index_stats will do anything. It'll be the ones\nyou added on extdataempty that are making it work.\n\n> BTW, thank you for all your work on performance in recent releases.\n\nThanks for the feedback :)\n\nDavid\n\n\n", "msg_date": "Wed, 4 May 2022 00:31:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Useless memoize path generated for unique join on primary keys" } ]
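Pulling David Rowley's two replies together: the ndistinct estimate that feeds the Memoize costing comes from the outer side of the join, which is extdataempty in the posted plan, so that is the table needing the extended statistics. A short sketch using the table and column names from the thread (the statistics object name is arbitrary):

-- teach the planner the true number of distinct (field, index) combinations
CREATE STATISTICS extdataempty_field_index_stats (ndistinct)
    ON field, index FROM extdataempty;
ANALYZE extdataempty;

-- sanity check: the row estimate of this GROUP BY is what the Memoize costing sees
EXPLAIN SELECT field, index FROM extdataempty GROUP BY 1, 2;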
[ { "msg_contents": "I have a “temporal table” — a table where there are multiple “versions” of\nentities, with each version having a distinct timestamp:\n\nCREATE TABLE contract_balance_updates (\n block_id bigint NOT NULL,\n block_signed_at timestamp(0) without time zone NOT NULL,\n contract_address bytea NOT NULL,\n holder_address bytea NOT NULL,\n start_block_height bigint NOT NULL,\n balance numeric NOT NULL\n) PARTITION BY RANGE (block_signed_at);\n\n-- one for each partition (applied by pg_partman from a template)\nCREATE UNIQUE INDEX contract_balance_updates_pkey\nON contract_balance_updates(\n holder_address bytea_ops,\n contract_address bytea_ops,\n start_block_height int8_ops DESC\n);\n\n\nThis table has ~1 billion rows; millions of entities (i.e. (holder_address,\ncontract_address) pairs); and for a few entities (power-law distribution),\nthere are millions of versions (i.e. rows with distinct start_block_height\nvalues.)\n\nThe main query this table needs to support, is to efficiently get the\nnewest version-rows of each contract_address for a given holder_address, as\nof a given application-domain time. (Think of it as: the what the set of\nentities owned by a user looked like at a given time.) The “as of” part is\nimportant here: it’s why we can’t just use the usual system-temporal setup\nwith separate “historical” and “current version” tables. (Also, due to our\nsource being effectively an event store, and due to our throughput\nrequirements [~100k new records per second], we must discover+insert new\nentity-version rows concurrently + out-of-order; so it’d be pretty\nnon-trivial to keep anything like a validity-upper-bound column updated\nusing triggers.)\n\nIt is our expectation that this query “should” be able to be\ncheap-to-compute and effectively instantaneous. (It’s clear to us how we\nwould make it so, given a simple LMDB-like sorted key-value store:\nprefix-match on holder_address; take the first row you find for the\ncontract-address you’re on; build a comparator key of (holder_address,\ncontract_address, highest-possible-version) and traverse to find the lowest\nrow that sorts greater than it; repeat.)\n\nWhich might, in SQL, be expressed as something like this:\n\nWITH ranked_holder_balances AS (\n SELECT\n *,\n row_number() OVER w AS balance_rank\n FROM contract_balance_updates\n WHERE holder_address = '\\x0000000000000000000000000000000000000000'::bytea\n WINDOW w AS (\n PARTITION BY holder_address, contract_address\n ORDER BY start_block_height DESC\n )\n ORDER BY holder_address ASC, contract_address ASC, start_block_height DESC\n)\nSELECT *\nFROM ranked_holder_balances\nWHERE balance_rank = 1\n\n\nThe trouble is that this query seems to traverse the tuples (or maybe just\nthe index nodes?) of every row in the matched partitions. We know that the\nquery only “needs\" to touch the first row of each partition (row_number() =\n1) to resolve the query; but Postgres seemingly isn’t aware of this\npotential optimization. 
So the query is fast when all matched entities have\nfew versions; but when any matched entities have millions of versions, the\ncold performance of the query becomes extremely bad.\n\nSubquery Scan on ranked_holder_balances (cost=5.02..621761.05\nrows=2554 width=55) (actual time=270.031..82148.370 rows=856 loops=1)\n Filter: (ranked_holder_balances.balance_rank = 1)\n Rows Removed by Filter: 554167\n Buffers: shared hit=166647 read=391704 dirtied=65\n -> WindowAgg (cost=5.02..605150.30 rows=510707 width=81) (actual\ntime=270.028..82098.501 rows=555023 loops=1)\n Buffers: shared hit=166647 read=391704 dirtied=65\n -> Merge Append (cost=5.02..584722.02 rows=510707 width=65)\n(actual time=270.017..81562.693 rows=555023 loops=1)\n Sort Key: contract_balance_updates.contract_address,\ncontract_balance_updates.start_block_height DESC\n Buffers: shared hit=166647 read=391704 dirtied=65\n -> Index Scan using contract_balance_updates_pkey_p2015\non contract_balance_updates_p2015 contract_balance_updates_1\n(cost=0.28..2.51 rows=1 width=65) (actual time=0.013..0.014 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\n -> Index Scan using contract_balance_updates_pkey_p2016\non contract_balance_updates_p2016 contract_balance_updates_2\n(cost=0.42..8.34 rows=6 width=65) (actual time=0.010..0.011 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=3\n -> Index Scan using contract_balance_updates_pkey_p2017\non contract_balance_updates_p2017 contract_balance_updates_3\n(cost=0.56..44599.76 rows=40460 width=65) (actual\ntime=269.891..6690.808 rows=41677 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=11596 read=30025\n -> Index Scan using contract_balance_updates_pkey_p2018\non contract_balance_updates_p2018 contract_balance_updates_4\n(cost=0.70..234755.48 rows=213110 width=65) (actual\ntime=0.032..32498.344 rows=236101 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=80201 read=156828\n -> Index Scan using contract_balance_updates_pkey_p2019\non contract_balance_updates_p2019 contract_balance_updates_5\n(cost=0.70..191361.74 rows=171228 width=65) (actual\ntime=0.017..29401.994 rows=172785 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=32830 read=141392\n -> Index Scan using contract_balance_updates_pkey_p2020\non contract_balance_updates_p2020 contract_balance_updates_6\n(cost=0.70..95518.47 rows=83880 width=65) (actual time=0.016..9375.502\nrows=83042 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=38369 read=45420\n -> Index Scan using contract_balance_updates_pkey_p2021\non contract_balance_updates_p2021 contract_balance_updates_7\n(cost=0.70..2264.47 rows=1966 width=65) (actual time=0.015..3378.816\nrows=21093 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=3621 read=17728 dirtied=65\n -> Index Scan using contract_balance_updates_pkey_p2022\non contract_balance_updates_p2022 contract_balance_updates_8\n(cost=0.56..63.08 rows=54 width=65) (actual time=0.011..60.048\nrows=325 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=21 read=311\n -> 
Index Scan using contract_balance_updates_pkey_p2023\non contract_balance_updates_p2023 contract_balance_updates_9\n(cost=0.12..2.36 rows=1 width=65) (actual time=0.003..0.003 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\n -> Index Scan using\ncontract_balance_updates_pkey_default on\ncontract_balance_updates_default contract_balance_updates_10\n(cost=0.12..2.36 rows=1 width=65) (actual time=0.004..0.004 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\nPlanning:\n Buffers: shared hit=6\nPlanning Time: 0.793 ms\nJIT:\n Functions: 46\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 3.429 ms, Inlining 9.848 ms, Optimization 173.359\nms, Emission 86.269 ms, Total 272.906 ms\nExecution Time: 82152.562 ms\n\n\nMind you, the query is fine if run successively, due to all the tuples it\nwould traverse already being hot in the disk cache. (But, as many\nconcurrent users are doing these queries for millions of different\nentities; and there are many other tables competing for disk cache in this\nDB; this will be true approximately never.)\n\nOther variations on the same theme fare no better. For example, a DISTINCT\nON query:\n\nSELECT\n\tDISTINCT ON (holder_address, contract_address)\n\tcontract_address,\n\tbalance,\n\tblock_signed_at,\n\tstart_block_height\nFROM contract_balance_updates\nWHERE holder_address = '\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea\nORDER BY holder_address ASC, contract_address ASC, start_block_height DESC\n\n\nUnique (cost=5.02..588552.32 rows=40000 width=97) (actual\ntime=235.805..930.314 rows=856 loops=1)\n Buffers: shared hit=558351\n -> Merge Append (cost=5.02..587275.55 rows=510707 width=97)\n(actual time=235.803..900.431 rows=555023 loops=1)\n Sort Key: contract_balance_updates.contract_address,\ncontract_balance_updates.start_height DESC\n Buffers: shared hit=558351\n -> Index Scan using contract_balance_updates_pkey_p2015 on\ncontract_balance_updates_p2015 contract_balance_updates_1\n(cost=0.28..2.52 rows=1 width=97) (actual time=0.015..0.015 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\n -> Index Scan using contract_balance_updates_pkey_p2016 on\ncontract_balance_updates_p2016 contract_balance_updates_2\n(cost=0.42..8.37 rows=6 width=97) (actual time=0.007..0.007 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=3\n -> Index Scan using contract_balance_updates_pkey_p2017 on\ncontract_balance_updates_p2017 contract_balance_updates_3\n(cost=0.56..44802.06 rows=40460 width=97) (actual\ntime=235.680..280.609 rows=41677 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=41621\n -> Index Scan using contract_balance_updates_pkey_p2018 on\ncontract_balance_updates_p2018 contract_balance_updates_4\n(cost=0.70..235821.03 rows=213110 width=97) (actual\ntime=0.030..264.259 rows=236101 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=237029\n -> Index Scan using contract_balance_updates_pkey_p2019 on\ncontract_balance_updates_p2019 contract_balance_updates_5\n(cost=0.70..192217.88 rows=171228 width=97) (actual\ntime=0.015..186.356 rows=172785 loops=1)\n Index Cond: 
(holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=174222\n -> Index Scan using contract_balance_updates_pkey_p2020 on\ncontract_balance_updates_p2020 contract_balance_updates_6\n(cost=0.70..95937.87 rows=83880 width=97) (actual time=0.018..91.405\nrows=83042 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=83789\n -> Index Scan using contract_balance_updates_pkey_p2021 on\ncontract_balance_updates_p2021 contract_balance_updates_7\n(cost=0.70..2274.30 rows=1966 width=97) (actual time=0.014..23.228\nrows=21093 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=21349\n -> Index Scan using contract_balance_updates_pkey_p2022 on\ncontract_balance_updates_p2022 contract_balance_updates_8\n(cost=0.56..63.35 rows=54 width=97) (actual time=0.012..0.395 rows=325\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=332\n -> Index Scan using contract_balance_updates_pkey_p2023 on\ncontract_balance_updates_p2023 contract_balance_updates_9\n(cost=0.12..2.37 rows=1 width=97) (actual time=0.003..0.003 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\n -> Index Scan using contract_balance_updates_pkey_default on\ncontract_balance_updates_default contract_balance_updates_10\n(cost=0.12..2.37 rows=1 width=97) (actual time=0.003..0.003 rows=0\nloops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Buffers: shared hit=2\nPlanning:\n Buffers: shared hit=6\nPlanning Time: 0.591 ms\nJIT:\n Functions: 41\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 4.133 ms, Inlining 8.799 ms, Optimization 147.922\nms, Emission 78.529 ms, Total 239.382 ms\nExecution Time: 934.680 ms\n\n\nHowever, I have found that I can trick PG into giving me the efficiency I\nwant, by using a correlated subquery:\n\nWITH bup1 AS (\n SELECT DISTINCT bup.holder_address, bup.contract_address\n FROM contract_balance_updates bup\n WHERE bup.holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea\n ORDER BY bup.contract_address ASC\n)\nSELECT\n bup1.holder_address,\n bup1.contract_address,\n (\n SELECT balance\n FROM contract_balance_updates bup2\n WHERE bup2.holder_address = bup1.holder_address\n AND bup2.contract_address = bup1.contract_address\n ORDER BY bup2.holder_address ASC, bup2.contract_address ASC,\nbup2.start_block_height DESC\n LIMIT 1\n ) AS balance\nFROM bup1\n\n\nSubquery Scan on bup1 (cost=110951.62..404059.86 rows=40000 width=74)\n(actual time=1555.929..1590.783 rows=856 loops=1)\n -> Sort (cost=110951.62..111051.62 rows=40000 width=42) (actual\ntime=1555.779..1555.855 rows=856 loops=1)\n Sort Key: bup.contract_address\n Sort Method: quicksort Memory: 91kB\n -> HashAggregate (cost=106694.08..107894.08 rows=40000\nwidth=42) (actual time=1554.358..1554.604 rows=856 loops=1)\n Group Key: bup.contract_address, bup.holder_address\n Batches: 1 Memory Usage: 1681kB\n -> Append (cost=0.28..104140.54 rows=510707 width=42)\n(actual time=39.823..1463.019 rows=555023 loops=1)\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2015 on contract_balance_updates_p2015\nbup_1 (cost=0.28..2.51 rows=1 width=42) (actual time=0.233..0.234\nrows=0 loops=1)\n Index Cond: (holder_address 
=\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 0\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2016 on contract_balance_updates_p2016\nbup_2 (cost=0.42..3.95 rows=6 width=42) (actual time=0.019..0.019\nrows=0 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 0\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2017 on contract_balance_updates_p2017\nbup_3 (cost=0.56..3532.54 rows=40460 width=42) (actual\ntime=39.569..762.639 rows=41677 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 4566\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2018 on contract_balance_updates_p2018\nbup_4 (cost=0.70..47759.03 rows=213110 width=42) (actual\ntime=0.236..512.911 rows=236101 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 58812\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2019 on contract_balance_updates_p2019\nbup_5 (cost=0.70..28615.74 rows=171228 width=42) (actual\ntime=0.071..101.332 rows=172785 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 12663\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2020 on contract_balance_updates_p2020\nbup_6 (cost=0.70..16436.37 rows=83880 width=42) (actual\ntime=0.080..39.611 rows=83042 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 4030\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2021 on contract_balance_updates_p2021\nbup_7 (cost=0.70..119.52 rows=1966 width=42) (actual\ntime=0.095..9.474 rows=21093 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 102\n -> Index Only Scan using\ncontract_balance_updates_pkey_p2022 on contract_balance_updates_p2022\nbup_8 (cost=0.56..10.28 rows=54 width=42) (actual time=0.047..0.595\nrows=325 loops=1)\n Index Cond: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n Heap Fetches: 34\n -> Seq Scan on contract_balance_updates_p2023\nbup_9 (cost=0.00..0.00 rows=1 width=64) (actual time=0.004..0.004\nrows=0 loops=1)\n Filter: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n -> Seq Scan on contract_balance_updates_default\nbup_10 (cost=0.00..0.00 rows=1 width=64) (actual time=0.002..0.002\nrows=0 loops=1)\n Filter: (holder_address =\n'\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea)\n SubPlan 1\n -> Limit (cost=5.02..7.30 rows=1 width=66) (actual\ntime=0.040..0.040 rows=1 loops=856)\n -> Merge Append (cost=5.02..27.74 rows=10 width=66)\n(actual time=0.040..0.040 rows=1 loops=856)\n Sort Key: bup2.start_height DESC\n -> Index Scan using\ncontract_balance_updates_pkey_p2015 on contract_balance_updates_p2015\nbup2_1 (cost=0.28..2.51 rows=1 width=54) (actual time=0.001..0.001\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2016 on contract_balance_updates_p2016\nbup2_2 (cost=0.42..2.66 rows=1 width=58) (actual time=0.001..0.001\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2017 on contract_balance_updates_p2017\nbup2_3 (cost=0.56..2.80 rows=1 width=57) (actual 
time=0.003..0.003\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2018 on contract_balance_updates_p2018\nbup2_4 (cost=0.70..2.93 rows=1 width=56) (actual time=0.008..0.008\nrows=1 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2019 on contract_balance_updates_p2019\nbup2_5 (cost=0.70..2.93 rows=1 width=58) (actual time=0.006..0.006\nrows=1 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2020 on contract_balance_updates_p2020\nbup2_6 (cost=0.70..2.94 rows=1 width=58) (actual time=0.007..0.007\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2021 on contract_balance_updates_p2021\nbup2_7 (cost=0.70..2.94 rows=1 width=59) (actual time=0.006..0.006\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2022 on contract_balance_updates_p2022\nbup2_8 (cost=0.56..2.80 rows=1 width=59) (actual time=0.003..0.003\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_p2023 on contract_balance_updates_p2023\nbup2_9 (cost=0.12..2.36 rows=1 width=104) (actual time=0.001..0.001\nrows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\n -> Index Scan using\ncontract_balance_updates_pkey_default on\ncontract_balance_updates_default bup2_10 (cost=0.12..2.36 rows=1\nwidth=104) (actual time=0.001..0.001 rows=0 loops=856)\n Index Cond: ((holder_address =\nbup1.holder_address) AND (contract_address = bup1.contract_address))\nPlanning Time: 7.122 ms\nJIT:\n Functions: 96\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 15.259 ms, Inlining 0.000 ms, Optimization 2.664\nms, Emission 34.750 ms, Total 52.673 ms\nExecution Time: 1607.491 ms\n\n\nI really don’t like this last approach; it scans twice, it’s surprising /\nconfusing for people maintaining the query, etc. I believe that, due to the\ncorrelated subquery, the planning overhead is also O(N) with the number of\nmatched entities increases (though I don’t have a good test-case for this.)\n\nIs there any way to get PG to do what this last query is doing, purely\nusing window-functions / distinct on / etc.? Because, if there is, I can’t\nfind it.\n\nIt seems that PG can in fact do index-range-seeking (since that’s what it’s\ndoing when gathering the distinct contract_addresses in the last query.) It\nseems intuitive to me that it should be using such an approach to filter\nfor rows in window/group-partitions, when a criteria+index that can be\ncombined to limit the size of the window/group are available to the\nplanner. 
And that, even when not able to be automatically inferred, it\nwould make sense for there to be control over such behaviour in SQL, using\nhypothetical syntax like:\n\n-- for windows\nrow_number() OVER (PARTITION BY x ORDER BY x LIMIT 10 OFFSET 3)\n\n-- for groups\nGROUP BY x, y, z (APPLYING LIMIT 20 OFFSET 5 PER GROUP)\n\n\nDoes this make sense? Or is this something PG is already doing, and I just\nhaven’t found the right magic words / built my index correctly to unlock\nit? (I notice that the last example is an index-only scan; would I get this\nbehaviour from the previous two queries if I made the index a covering\nindex such that those could be index-only scans as well?)\n\n---\n\nHardware and OS config details:\n\n\n - GCP n2d-standard-128 VM (64 cores / 128 hyperthreads, 512GB memory)\n - PGDATA on 9TiB ext4 filesystem\n (stripe-width=256,nobarrier,noatime,data=writeback) on direct-attached SSD\n MDRAID RAID0 (24 375GiB devices)\n - 64GB swapfile, on same filesystem\n\n\n# /etc/sysctl.d/30-postgresql.conf\n\nkernel.shmmax = 67594764288\nkernel.shmall = 16502628\nvm.swappiness = 1\nvm.overcommit_memory = 2\nvm.overcommit_ratio = 95\nvm.dirty_background_ratio = 3\nvm.dirty_ratio = 10\nvm.nr_hugepages = 74690\nvm.min_free_kbytes = 986608\n\n\n\nPG config details:\n\n-- SELECT version();\nPostgreSQL 14.2 (Ubuntu 14.2-1.pgdg20.04+1+b1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n\n-- non-default server config\narchive_command = \"pgbackrest --stanza=main-v4-p1 archive-push %p\"\narchive_mode = \"on\"\narchive_timeout = \"1min\"\nautovacuum_max_workers = \"16\"\nautovacuum_vacuum_cost_delay = \"4ms\"\nautovacuum_vacuum_cost_limit = \"10000\"\nbgwriter_lru_maxpages = \"1000\"\nbgwriter_lru_multiplier = \"4\"\ncheckpoint_completion_target = \"0.9\"\ncheckpoint_timeout = \"5min\"\ncpu_tuple_cost = \"0.03\"\ndefault_statistics_target = \"500\"\ndynamic_shared_memory_type = \"posix\"\neffective_cache_size = \"384GB\"\neffective_io_concurrency = \"1000\"\nenable_partitionwise_aggregate = \"on\"\nenable_partitionwise_join = \"on\"\nhash_mem_multiplier = \"2\"\nhuge_pages = \"try\"\nlogical_decoding_work_mem = \"1GB\"\nmaintenance_work_mem = \"8GB\"\nmax_connections = \"2000\"\nmax_locks_per_transaction = \"6400\"\nmax_parallel_maintenance_workers = \"16\"\nmax_parallel_workers = \"128\"\nmax_parallel_workers_per_gather = \"8\"\nmax_stack_depth = \"2MB\"\nmax_wal_senders = \"10\"\nmax_wal_size = \"10GB\"\nmax_worker_processes = \"128\"\nmin_wal_size = \"80MB\"\nrandom_page_cost = \"1.1\"\nseq_page_cost = \"1\"\nshared_buffers = \"128GB\"\nsynchronous_commit = \"off\"\nwal_level = \"replica\"\nwal_recycle = \"on\"\nwork_mem = \"25804kB\"\n
\"10\"max_wal_size = \"10GB\"max_worker_processes = \"128\"min_wal_size = \"80MB\"random_page_cost = \"1.1\"seq_page_cost = \"1\"shared_buffers = \"128GB\"synchronous_commit = \"off\"wal_level = \"replica\"wal_recycle = \"on\"work_mem = \"25804kB\"", "msg_date": "Tue, 3 May 2022 11:11:31 -0700", "msg_from": "Levi Aul <[email protected]>", "msg_from_op": true, "msg_subject": "Window partial fetch optimization" }, { "msg_contents": "On Wed, 4 May 2022 at 06:11, Levi Aul <[email protected]> wrote:\n> It is our expectation that this query “should” be able to be cheap-to-compute and effectively instantaneous. (It’s clear to us how we would make it so, given a simple LMDB-like sorted key-value store: prefix-match on holder_address; take the first row you find for the contract-address you’re on; build a comparator key of (holder_address, contract_address, highest-possible-version) and traverse to find the lowest row that sorts greater than it; repeat.)\n>\n> Which might, in SQL, be expressed as something like this:\n>\n> WITH ranked_holder_balances AS (\n> SELECT\n> *,\n> row_number() OVER w AS balance_rank\n> FROM contract_balance_updates\n> WHERE holder_address = '\\x0000000000000000000000000000000000000000'::bytea\n> WINDOW w AS (\n> PARTITION BY holder_address, contract_address\n> ORDER BY start_block_height DESC\n> )\n> ORDER BY holder_address ASC, contract_address ASC, start_block_height DESC\n> )\n> SELECT *\n> FROM ranked_holder_balances\n> WHERE balance_rank = 1\n\nYes, PostgreSQL 14 is not very smart about realising that WHERE\nbalance_rank = 1 is only going to match the first row of each window\npartition. PostgreSQL 15 (coming later this year) should be better in\nthis regard as some work was done to teach the query planner about\nmonotonic window functions [1]. However, that change likely does not\ndo all you'd like here as the WindowAgg node still must consume and\nthrow away all tuples until it finds the first tuple belonging to the\nnext window partition. It sounds like you really want \"Skip Scans\" or\n\"Loose Index Scans\" which are implemented by some other RDBMS'. I\nimagine that even with the change to PostgreSQL 15 that it still\nwouldn't be as fast as your DISTINCT ON example.\n\n> WITH bup1 AS (\n> SELECT DISTINCT bup.holder_address, bup.contract_address\n> FROM contract_balance_updates bup\n> WHERE bup.holder_address = '\\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea\n> ORDER BY bup.contract_address ASC\n> )\n> SELECT\n> bup1.holder_address,\n> bup1.contract_address,\n> (\n> SELECT balance\n> FROM contract_balance_updates bup2\n> WHERE bup2.holder_address = bup1.holder_address\n> AND bup2.contract_address = bup1.contract_address\n> ORDER BY bup2.holder_address ASC, bup2.contract_address ASC, bup2.start_block_height DESC\n> LIMIT 1\n> ) AS balance\n> FROM bup1\n\n> I really don’t like this last approach; it scans twice, it’s surprising / confusing for people maintaining the query, etc. I believe that, due to the correlated subquery, the planning overhead is also O(N) with the number of matched entities increases (though I don’t have a good test-case for this.)\n\nNo, the subquery is not replanned each time it is rescanned. It's\nplanned once and that same plan will be executed each time. So no O(n)\nplanning overhead.\n\n> Is there any way to get PG to do what this last query is doing, purely using window-functions / distinct on / etc.? 
Because, if there is, I can’t find it.\n>\n> It seems that PG can in fact do index-range-seeking (since that’s what it’s doing when gathering the distinct contract_addresses in the last query.) It seems intuitive to me that it should be using such an approach to filter for rows in window/group-partitions, when a criteria+index that can be combined to limit the size of the window/group are available to the planner. And that, even when not able to be automatically inferred, it would make sense for there to be control over such behaviour in SQL, using hypothetical syntax like:\n\nUnfortunately, DISTINCT can only be implemented with Hash Aggregate or\nSort / Index Scan + Unique. We don't have anything currently which\nwill jump to the next highest key in an index. There has been some\nwork on what we're starting to call \"Skip Scans\", but it's all still\nwork in progress.\n\nYou might find something useful in [2] which might help speed up your\nDISTINCT query.\n\n> -- for windows\n> row_number() OVER (PARTITION BY x ORDER BY x LIMIT 10 OFFSET 3)\n>\n> -- for groups\n> GROUP BY x, y, z (APPLYING LIMIT 20 OFFSET 5 PER GROUP)\n>\n>\n> Does this make sense? Or is this something PG is already doing, and I just haven’t found the right magic words / built my index correctly to unlock it? (I notice that the last example is an index-only scan; would I get this behaviour from the previous two queries if I made the index a covering index such that those could be index-only scans as well?)\n\nUnfortunately, there is no magic words here. PostgreSQL 14 simply has\nno ability to know that row_number() is monotonically increasing,\ntherefore has no ability to skip any processing for rows that are\nnever needed.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9d9c02ccd1aea8e9131d8f4edb21bf1687e40782\n[2] https://wiki.postgresql.org/wiki/Loose_indexscan\n\n\n", "msg_date": "Wed, 4 May 2022 10:12:51 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Window partial fetch optimization" }, { "msg_contents": "On Tue, May 3, 2022 at 2:11 PM Levi Aul <[email protected]> wrote:\n\n> I have a “temporal table” — a table where there are multiple “versions” of\n> entities, with each version having a distinct timestamp:\n> CREATE TABLE contract_balance_updates (\n> block_id bigint NOT NULL,\n> block_signed_at timestamp(0) without time zone NOT NULL,\n> contract_address bytea NOT NULL,\n> holder_address bytea NOT NULL,\n> start_block_height bigint NOT NULL,\n> balance numeric NOT NULL\n> ) PARTITION BY RANGE (block_signed_at);\n>\n> -- one for each partition (applied by pg_partman from a template)\n> CREATE UNIQUE INDEX contract_balance_updates_pkey\n> ON contract_balance_updates(\n> holder_address bytea_ops,\n> contract_address bytea_ops,\n> start_block_height int8_ops DESC\n> );\n>\n\nHow does pg_partman deal with the fact that a unique index on a partitioned\ntable must contain the partitioning key?\n\nIt should be noted that your 3 queries don't return the same thing. The\nlast one returns columns holder_address, contract_address, and balance,\nwhile the first returns all columns in the table. If you were to make the\nfirst query return just the three columns holder_address, contract_address,\nand balance and build a suitable index, then you could get it to use an\nindex-only scan. This should be similar to (but probably faster than) your\n3rd query, without all the kerfuffle of extra scans and dummy syntax. 
The\nindex needed would be:\n\n(holder_address bytea_ops, contract_address bytea_ops, start_block_height,\nbalance);\n\nNote that in theory it could do a better job of using the index you already\nhave. It could compute the row_number using only the data available in the\nindex, then go fetch the table tuple for just the rows which pass the\nrow_number filter. But it just isn't smart enough to do that. (By\nseparating the WHERE clause from the select list into different queries,\nthat is essentially what your third query is tricking it into doing)\n\nCheers,\n\nJeff\n\nOn Tue, May 3, 2022 at 2:11 PM Levi Aul <[email protected]> wrote:I have a “temporal table” — a table where there are multiple “versions” of entities, with each version having a distinct timestamp:CREATE TABLE contract_balance_updates (    block_id bigint NOT NULL,    block_signed_at timestamp(0) without time zone NOT NULL,    contract_address bytea NOT NULL,    holder_address bytea NOT NULL,    start_block_height bigint NOT NULL,    balance numeric NOT NULL) PARTITION BY RANGE (block_signed_at);-- one for each partition (applied by pg_partman from a template)CREATE UNIQUE INDEX contract_balance_updates_pkeyON contract_balance_updates(    holder_address bytea_ops,    contract_address bytea_ops,    start_block_height int8_ops DESC);How does pg_partman deal with the fact that a unique index on a partitioned table must contain the partitioning key?It should be noted that your 3 queries don't return the same thing.  The last one returns columns holder_address, contract_address, and balance, while the first returns all columns in the table.  If you were to make the first query return just the three columns holder_address, contract_address, and balance and build a suitable index, then you could get it to use an index-only scan.  This should be similar to (but probably faster than) your 3rd query, without all the kerfuffle of extra scans and dummy syntax.  The index needed would be: (holder_address bytea_ops, contract_address bytea_ops, start_block_height, balance);Note that in theory it could do a better job of using the index you already have.  It could compute the row_number using only the data available in the index, then go fetch the table tuple for just the rows which pass the row_number filter.  But it just isn't smart enough to do that. (By separating the WHERE clause from the select list into different queries, that is essentially what your third query is tricking it into doing)Cheers,Jeff", "msg_date": "Wed, 4 May 2022 17:56:27 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Window partial fetch optimization" } ]
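A sketch of the covering index Jeff Janes describes, using the column names from the thread; the DESC on start_block_height simply mirrors the existing per-partition index and is optional, and since this index is not unique it sidesteps the partition-key requirement he raises. Created on the partitioned parent, it cascades to every partition:

CREATE INDEX contract_balance_updates_holder_covering
    ON contract_balance_updates (holder_address, contract_address, start_block_height DESC, balance);

-- the reduced DISTINCT ON query, returning only indexed columns, should then be
-- able to run as an index-only scan on each partition
SELECT DISTINCT ON (holder_address, contract_address)
       holder_address, contract_address, balance
FROM contract_balance_updates
WHERE holder_address = '\xe03c23519e18d64f144d2800e30e81b0065c48b5'::bytea
ORDER BY holder_address ASC, contract_address ASC, start_block_height DESC;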