[
{
"msg_contents": "Hello. I just sent this note to the pgsql-general list, but I thought it\nprobably deserved to land on the hackers list as well. My apologies if I'm\nwrong.\n\nCharlie\n-----------------------------------------\n\nX-Sender: [email protected] (Unverified)\nDate: Fri, 29 Jan 1999 18:00:11 -0800\nTo: [email protected]\nFrom: Charles Hornberger <[email protected]>\nSubject: Re: [GENERAL] nested loops in joins, ambiguous rewrite rules\nSender: [email protected]\n\nHello again. I've got a few more notes on this \"strange\" optimizer\nbehavior, which might be useful to anyone interested in this problem:\n\nAs I mentioned in my last posting, I have one DB called apx13 that is\nproducing expensive \"nested loop\" query plans when I try to join two\ntables, and one called apx14 that's using \"merge join\" query plans. I've\nbeen trying to figure out why this is. (This is all happening under 6.4.\nWe haven't upgraded to 6.4.2 yet but I don't see anything in the 6.4.2\nHISTORY file that suggests that major changes have been made to the\noptimizer, so I guess that probably wouldn't change anything?)\n\nIn any case, I dumped the entire contents the apx14 database using `pg_dump\n-D apx14 > apx14.out`. Then I created a new database called apx15 and ran\n`psql -f apx14.out apx15`.\n\nThen I ran a join query against apx15:\n\n apx15=> \\i ../query\n explain select a.article_id, b.article_id \n from article a, article_text b \n where a.article_id = b.article_id;\n NOTICE: QUERY PLAN:\n\n Nested Loop (cost=3.20 size=2 width=8)\n -> Seq Scan on article a (cost=1.07 size=2 width=4)\n -> Seq Scan on article_text b (cost=1.07 size=2 width=4)\n\n EXPLAIN\n\nSo I ran vacuum analyze and did it again. And got the same query plan.\n\nAt that point, I decided to create yet another DB, apx16. I just used the\nschema from apx14, and the contents of the two tables being used in my join\nquery.\n\nIn other words:\n`pg_dump -s apx14 > apx14.schema.out`\n`pg_dump -aDt article apx14 > apx14.article.out`\n`pg_dump -aDt article_text apx14 > apx14.article_text.out`\n`psql -f apx14.schema.out apx16`\n`psql -f apx14.article.out apx16`\n`psql -f apx14.article_text.out apx16`\n\n\nThen I ran the same join query against apx16, and this time the optimizer\ndecided to use a \"merge join\":\n\n apx16=> \\i ../query\n explain select a.article_id, b.article_id \n from article a, article_text b \n where a.article_id = b.article_id;\n NOTICE: QUERY PLAN:\n\n Merge Join (cost=0.00 size=1 width=8)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on article a (cost=0.00 size=0 width=4)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on article_text b (cost=0.00 size=0 width=4)\n\n EXPLAIN\n\nMaybe I'm missing something, but I can't understand why two identical join\nqueries run against two identical tables should produce different output.\nMaybe there's something in the way my tables and indices are set up, but I\ncan't imagine what that is.\n\nOne final note: I just did another test, hoping to exlude other factors\nfrom the equation. I created two more databases using only the two\nrelevant tables (and sequences & indices) from apx16. What I discovered is\nthat if I build a database from \"full\" dump files (i.e., dump files that\ninclude both table schema & data), the optimizer will always elect to use\n\"nested loops\" for its query plans. 
On the other hand, if I build the\ndatabase by first creating the tables and indices, and then doing the\ninserts in a second operation, it uses the less costly \"merge join\". As a\nmatter of fact, it appears that if you create an index on a table *after*\ninserting data into that table, joins performed on that table will *always*\nuse these expensive nested loops.\n\nSo my theory is:\n\nSince pg_dump creates dump files that perform INSERTs on tables before\nperforming any CREATE INDEX statements on tables, using raw dump files to\ncreate databases will cause this \"nested loop\" behavior.\n\nAnd there's another catch: If you create your tables and indices first,\nthen insert your data, the optimizer will continue to build \"merge join\"\nquery plans until you make the mistake of running a \"vacuum analyze\". If\nyou run \"vacuum analyze\", the optimizer will then decide to begin using the\nmore costly nested loops to handle any join queries. The only way to get\naround this is to delete all records from the tables being joined,\nre-insert them and remember NOT to run vacuum analyze.\n\nIf anyone wants to replicate this, I've documented my tests below.\n\n(And, of course, if I've totally missed the point on anything in here,\nplease correct me.)\n\n===============\nTable structure\n===============\n\napx16=> \\d article\nTable = article\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| article_id | int4 not null default nextval ( |\n4 |\n| section_id | int4 not null |\n4 |\n| locale_id | int4 not null |\n4 |\n| article_source_id | int4 |\n4 |\n| volume_id | int4 |\n4 |\n| issue_id | int4 |\n4 |\n| print_page_no | int4 |\n4 |\n| print_publ_date | date |\n4 |\n| site_publ_date | date not null |\n4 |\n| site_publ_time | datetime not null |\n8 |\n| inputter_id | int4 not null |\n4 |\n| input_date | datetime default text 'now' |\n8 |\n| published | int4 default 0 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\nIndices: article_article_id_key\n article_issue_ix\n article_locale_ix\n article_section_ix\n article_source_ix\n article_vol_ix\n\napx16=> \\d article_article_id_key\n\nTable = article_article_id_key\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| article_id | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\napx16=> \\d article_text\n\nTable = article_text\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| article_id | int4 not null |\n4 |\n| headline | varchar() |\n0 |\n| subhead | varchar() |\n0 |\n+----------------------------------+----------------------------------+-----\n--+\nIndex: article_text_ix\n\napx16=> \\d article_text_ix\n\nTable = article_text_ix\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| article_id | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\n\n\n=================\nCREATE STATEMENTS\n=================\n\nCREATE SEQUENCE \"article_article_id_seq\" 
start 10 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nSELECT nextval ('article_article_id_seq');\nCREATE TABLE \"article\" (\"article_id\" \"int4\" DEFAULT nextval (\n'article_article_id_seq' ) NOT NULL, \"section_id\" \"int4\" NOT NULL,\n\"locale_id\" \"int4\" NOT NULL, \"article_source_id\" \"int4\", \"volume_id\"\n\"int4\", \"issue_id\" \"int4\", \"print_page_no\" \"int4\", \"print_publ_date\"\n\"date\", \"site_publ_date\" \"date\" NOT NULL, \"site_publ_time\" \"datetime\" NOT\nNULL, \"inputter_id\" \"int4\" NOT NULL, \"input_date\" \"datetime\" DEFAULT text\n'now', \"published\" \"int4\" DEFAULT 0);\nCREATE UNIQUE INDEX \"article_article_id_key\" on \"article\" using btree (\n\"article_id\" \"int4_ops\" );\nCREATE INDEX \"article_vol_ix\" on \"article\" using btree ( \"volume_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_source_ix\" on \"article\" using btree (\n\"article_source_id\" \"int4_ops\" );\nCREATE INDEX \"article_issue_ix\" on \"article\" using btree ( \"issue_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_locale_ix\" on \"article\" using btree ( \"locale_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_section_ix\" on \"article\" using btree ( \"section_id\"\n\"int4_ops\" );\nCREATE TABLE \"article_text\" (\"article_id\" \"int4\" NOT NULL, \"headline\"\nvarchar, \"subhead\" varchar);\nCREATE INDEX \"article_text_ix\" on \"article_text\" using btree (\n\"article_id\" \"int4_ops\" );\n\n\n=================\nINSERT STATEMENTS\n=================\n\nINSERT INTO \"article\"\n(\"article_id\",\"section_id\",\"locale_id\",\"article_source_id\",\"volume_id\",\"issu\ne_id\",\"print_page_no\",\"print_publ_date\",\"site_publ_date\",\"site_publ_time\",\"i\nnputter_id\",\"input_date\",\"published\") values\n(10,3,1,4,2,3,4,'04-05-2006','01-28-1999','Thu Jan 28 19:28:40 1999\nPST',100,'Thu Jan 28 19:28:40 1999 PST',0);\nINSERT INTO \"article\"\n(\"article_id\",\"section_id\",\"locale_id\",\"article_source_id\",\"volume_id\",\"issu\ne_id\",\"print_page_no\",\"print_publ_date\",\"site_publ_date\",\"site_publ_time\",\"i\nnputter_id\",\"input_date\",\"published\") values\n(11,3,1,4,2,3,4,'04-05-2006','01-28-1999','Thu Jan 28 19:28:40 1999\nPST',100,'Thu Jan 28 19:28:40 1999 PST',0);\nINSERT INTO \"article_text\" (\"article_id\",\"headline\",\"subhead\") values\n(10,'Mayor Signs Contract With Company','Legally binding document said to\nbe four pages long');\nINSERT INTO \"article_text\" (\"article_id\",\"headline\",\"subhead\") values\n(11,'Mayor Cancels Contract','Company Promises to Sue Over Scuttled Deal');\n\n==================\nSteps to replicate\n==================\n\nVacuum analyze 'problem'\n------------------------\n1. Create a fresh database and execute the CREATE statements above\n2. Execute the INSERT statements above\n3. Execute the following query:\n EXPLAIN SELECT a.article_id, b.article_id \n FROM article a, article_text b \n WHERE a.article_id = b.article_id;\n4. You should see:\n\n NOTICE: QUERY PLAN:\n\n Merge Join (cost=0.00 size=1 width=8)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on article a (cost=0.00 size=0 width=4)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on article_text b (cost=0.00 size=0 width=4)\n\n EXPLAIN\n\n5. Do \"vacuum analyze\".\n6. Execute the query again.\n7. 
Now you'll see:\n\n NOTICE: QUERY PLAN:\n\n Nested Loop (cost=3.20 size=2 width=8)\n -> Seq Scan on article a (cost=1.07 size=2 width=4)\n -> Seq Scan on article_text b (cost=1.07 size=2 width=4)\n\n EXPLAIN\n\n\npg_dump 'problem'\n-----------------\n\n1. Dump the DB you created above to a file with\n\t`pg_dump -D dbname > foo`\n\n2. Create a new DB\n\n3. Do `psql -f foo newdb`\n\n4. Execute the join query:\n EXPLAIN SELECT a.article_id, b.article_id \n FROM article a, article_text b \n WHERE a.article_id = b.article_id;\n\n5. You should see:\n\n NOTICE: QUERY PLAN:\n\n Nested Loop (cost=3.20 size=2 width=8)\n -> Seq Scan on article a (cost=1.07 size=2 width=4)\n -> Seq Scan on article_text b (cost=1.07 size=2 width=4)\n\n EXPLAIN\n\n\n",
"msg_date": "Fri, 29 Jan 1999 20:20:17 -0800",
"msg_from": "Charles Hornberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] nested loops in joins, ambiguous rewrite rules"
},
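The ordering effect described in the message above can be reproduced in a much smaller session. The sketch below assumes a 6.4-era server; the single-column table "t" and its index are invented for illustration and are not part of the thread's schema.

    -- Index created while the table is still empty, data inserted afterwards:
    CREATE TABLE t (id int4);
    CREATE INDEX t_id_ix ON t USING btree (id "int4_ops");
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);

    -- In Charles's tests this state produces a merge-join plan, because the
    -- planner is still working from its default size assumptions:
    EXPLAIN SELECT a.id, b.id FROM t a, t b WHERE a.id = b.id;

    -- After VACUUM ANALYZE the planner learns how small the table really is
    -- and switches to the nested-loop plan shown above:
    VACUUM ANALYZE t;
    EXPLAIN SELECT a.id, b.id FROM t a, t b WHERE a.id = b.id;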
{
"msg_contents": "Charles Hornberger <[email protected]> writes:\n> Hello again. I've got a few more notes on this \"strange\" optimizer\n> behavior, which might be useful to anyone interested in this problem:\n\nWell, I understand at least part of the \"problem\" here.\n\nFirst, you're assuming that a merge-join plan is necessarily better than\na nested-loop plan. That should be true for large queries, but it is\n*not* necessarily true for small tables --- when there are only a few\ntuples in the tables being scanned, a simple nested loop wins because it\nhas much less startup overhead. (Or at least that's what our optimizer\nthinks; I have not tried to measure this for myself.)\n\nWhat's really going on here is that when the optimizer *knows* how small\nyour test tables are, it deliberately chooses nested-loop as being the\nfastest thing. If it doesn't know, it makes some default assumptions\nabout the sizes of the tables, and with those default sizes it decides\nthat merge-join will be cheaper.\n\nSo the next question is why apparently similar database situations yield\ndifferent states of optimizer knowledge. The answer is that creating\nan index before inserting tuples and creating it afterwards have\ndifferent side effects on the optimizer's statistics.\n\nI've only spent about ten minutes looking into this, so my understanding\nis no doubt incomplete, but what I've found out is:\n 1. There are a couple of statistical fields in the system pg_class\n table, namely relpages and reltuples (number of disk pages and\n tuples in each relation).\n 2. There are per-attribute statistics kept in the pg_statistic table.\n 3. The pg_class statistics fields are updated by a VACUUM (with or\n without ANALYZE) *and also by CREATE INDEX*. Possibly by other\n things too ... but plain INSERTs and so forth don't update 'em.\n 4. The pg_statistics info only seems to be updated by VACUUM ANALYZE.\n\nSo if you do\n\tCREATE TABLE\n\tCREATE INDEX\n\tinsert tuples\nthen the state of play is that the optimizer thinks the table is empty.\n(Or, perhaps, it makes its default assumption --- pg_class doesn't seem\nto have any special representation for \"size of table unknown\" as\nopposed to \"size of table is zero\", so maybe the optimizer treats\nreltuples = 0 as meaning it should use a default table size. I haven't\nlooked at that part of the code to find out.)\n\nBut if you do\n\tCREATE TABLE\n\tinsert tuples\n\tCREATE INDEX\nthe state of play is that the optimizer thinks there are as many tuples\nin the table as there were when you created the index. This explains\nthe varying behavior in your detailed test case.\n\nI'll bet that if you investigate the contents of pg_class and\npg_statistic in the two databases you were originally working with,\nyou'll find that they are different. But a VACUUM ANALYZE should\nbring everything into sync, assuming that the actual data in the\ndatabases is the same.\n\nAt any rate, if you still think there's something flaky going on,\nplease have a look at what's happening in these statistical fields;\nthat'll give us more info about whether there's really a bug or not.\n\nAlso, if the optimizer still wants to use nested loop when it knows\nthere are a *lot* of tuples to be processed, there might be a bug in\nits equations for the cost of these operations --- how large are your\ntables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Jan 1999 16:07:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules "
},
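Tom's explanation can be checked directly from psql. The queries below only read the two catalogs he names; the column names match the pg_class and pg_statistic dumps Charles posts later in this thread.

    -- Table-size knowledge (per Tom's point 3, refreshed by VACUUM and by
    -- CREATE INDEX, but not by plain INSERTs):
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('article', 'article_text');

    -- Per-attribute statistics (per Tom's point 4, only refreshed by
    -- VACUUM ANALYZE):
    SELECT s.staattnum, s.stalokey, s.stahikey
      FROM pg_statistic s, pg_class c
     WHERE s.starelid = c.oid
       AND c.relname = 'article';

Running these before and after CREATE INDEX and VACUUM ANALYZE shows which command refreshes which catalog.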
{
"msg_contents": "At 04:07 PM 1/30/99 -0500, Tom Lane <[email protected]> wrote:\n>First, you're assuming that a merge-join plan is necessarily better than\n>a nested-loop plan. That should be true for large queries, but it is\n>*not* necessarily true for small tables --- when there are only a few\n>tuples in the tables being scanned, a simple nested loop wins because it\n>has much less startup overhead. (Or at least that's what our optimizer\n>thinks; I have not tried to measure this for myself.)\n\nOK, I understand that I don't understand whether merge-join plans are\nnecessarily better than nested-loop plans, and that it could make sense to\npick one or the other depending on the size of the tables and the number of\nrows in them. Also, your explanation of how 'vacuum analyze' updates the\nstatistics in pg_class and pg_statistic makes it very clear why I'm seeing\none query plan in one DB, and different plan in the other. Thanks for the\nquick lesson, and my apologies for making it happen on the hackers list.\n\nBut let's leave that aside for the moment and address the underlying\nquestion: Why does it take ~ 11.5 minutes to process the following query?\nIt performs a simple ANDed join on seven tables. All of the columns named\nin the WHERE clauses are indexed, and all of the tables except article and\narticle_text contain just one single row; article and article_text contain\ntwo rows.\n\nEXPLAIN SELECT a.article_id, a.print_publ_date, b.section_name, \n c.source_name, d.headline,\n e.issue_name, f.locale_name, g.volume_name\n FROM article a, section b, article_source c, article_text d,\n issue e, locale f, volume g\n WHERE a.article_id = d.article_id\n AND a.section_id = b.section_id\n AND a.article_source_id = c.source_id\n AND a.issue_id = e.issue_id\n AND a.locale_id = f.locale_id\n AND a.volume_id = g.volume_id ;\n\nLet me explain each of the WHERE clauses:\n 1) a.article_id has a unique index, d.article_id has a non-unique index\n 2) a.section_id has a non-unique index, b.section_id has a unique index\n 3) a.article_source_id has a non-unique index, c.source_id has a unique\nindex\n 4) a.issue_id has a non-unique index, e.issue_id has a unique index\n 5) a.locale_id has a non-unique index, f.locale_id has a unique index\n 6) a.volume_id has a non-unique index, g.volume_id has a unique index\n\n\nThis is being done under Postgres 6.4.2 (we upgraded from 6.4 last night)\non a Pentium 150 with 192MB of RAM running Linux 2.0.35 \n\nHere's the query plan it generated:\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=12.49 size=2 width=124)\n -> Nested Loop (cost=10.43 size=2 width=108)\n -> Nested Loop (cost=8.36 size=2 width=92)\n -> Nested Loop (cost=6.30 size=2 width=76)\n -> Nested Loop (cost=4.16 size=2 width=60)\n -> Nested Loop (cost=2.10 size=2 width=44)\n -> Seq Scan on locale f (cost=1.03 size=1\nwidth=16)\n -> Seq Scan on article a (cost=1.07\nsize=2 width=28)\n -> Seq Scan on issue e (cost=1.03 size=1 width=16)\n -> Seq Scan on article_text d (cost=1.07 size=2\nwidth=16)\n -> Seq Scan on article_source c (cost=1.03 size=1 width=16)\n -> Seq Scan on section b (cost=1.03 size=1 width=16)\n -> Seq Scan on volume g (cost=1.03 size=1 width=16)\n\n\n>At any rate, if you still think there's something flaky going on,\n>please have a look at what's happening in these statistical fields;\n>that'll give us more info about whether there's really a bug or not.\n>\n>Also, if the optimizer still wants to use nested loop when it knows\n>there are a *lot* of tuples to be processed, there might 
be a bug in\n>its equations for the cost of these operations --- how large are your\n>tables?\n>\n\nThe tables are tiny.\n\nHere are some statistics from the DB.\n\napx00=> select * from pg_statistic;\nstarelid|staattnum|staop|stalokey\n|stahikey \n--------+---------+-----+------------------------------------------+--------\n-------------------------------------------\n 407828| 4| 0|4 |4\n \n 407828| 5| 0|2 |2\n \n 407828| 6| 0|3 |3\n \n 407828| 7| 0|4 |4\n \n 407828| 8| 0|04-05-2006\n|04-05-2006 \n 407828| 9| 0|01-28-1999\n|01-28-1999 \n 407828| 10| 0|Thu Jan 28 19:28:40 1999 PST |Thu Jan\n28 19:28:40 1999 PST \n 407828| 11| 0|100 |100\n \n 407828| 12| 0|Thu Jan 28 19:28:40 1999 PST |Thu Jan\n28 19:28:40 1999 PST \n 407828| 13| 0|0 |0\n \n 407828| 1| 0|10 |11\n \n 407828| 2| 0|3 |3\n \n 407828| 3| 0|1 |1\n \n 407852| 1| 0|10 |11\n \n 407852| 2| 0|Mayor Cancels Contract |Mayor\nSigns Contract With Company \n 407852| 3| 0|Company Promises to Sue Over Scuttled Deal|Legally\nbinding document said to be four pages long\n 407863| 1| 0|3 |3\n \n 407863| 2| 0|News |News\n \n 407874| 1| 0|1 |1\n \n 407874| 2| 0|Downtown\n|Downtown \n 407885| 1| 0|4 |4\n \n 407885| 2| 0|The Times |The\nTimes \n 407896| 1| 0|2 |2\n \n 407896| 2| 0|2 |2\n \n 407907| 1| 0|3 |3\n \n 407907| 2| 0|25 |25\n \n 407907| 3| 0|04-05-2006\n|04-05-2006 \n(27 rows)\n\napx00=> select oid,* from pg_class where relowner <> 505;\n oid|relname\n|reltype|relowner|relam|relpages|reltuples|relhasindex|relisshared|relkind|r\nelnatts|relchecks|reltriggers|relukeys|relfkeys|relrefs|relhaspkey|relhasrul\nes|relacl\n------+----------------------------+-------+--------+-----+--------+--------\n-+-----------+-----------+-------+--------+---------+-----------+--------+--\n------+-------+----------+-----------+------\n407923|article_vol_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407926|article_source_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407929|article_issue_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407932|article_locale_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407726|article_article_id_seq | 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407743|section_section_id_seq | 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407760|locale_locale_id_seq | 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407777|article_source_source_id_seq| 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407794|volume_volume_id_seq | 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407811|issue_issue_id_seq | 0| 508| 0| 0|\n0|f |f |S | 8| 0| 0| 0|\n 0| 0|f |f | \n407828|article | 0| 508| 0| 1|\n2|t |f |r | 13| 0| 0| 0|\n 0| 0|f |f | \n407935|article_section_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407852|article_text | 0| 508| 0| 1|\n2|t |f |r | 3| 0| 0| 0|\n 0| 0|f |f | \n407938|article_text_ix | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407863|section | 0| 508| 0| 1|\n1|t |f |r | 2| 0| 0| 0|\n 0| 0|f |f | \n407941|section_section_id_key | 0| 508| 403| 2|\n1|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407874|locale | 0| 508| 0| 1|\n1|t |f |r | 2| 0| 0| 0|\n 0| 0|f |f | \n407944|locale_locale_id_key | 0| 508| 403| 2|\n1|f |f |i | 1| 0| \n407885|article_source | 0| 508| 0| 1|\n1|t |f |r | 2| 0| 0| 0|\n 0| 0|f |f | \n407920|article_article_id_key | 0| 508| 403| 2|\n2|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407947|article_source_source_id_key| 0| 508| 403| 2|\n1|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407896|volume | 0| 508| 0| 
1|\n1|t |f |r | 2| 0| 0| 0|\n 0| 0|f |f | \n407950|volume_volume_id_key | 0| 508| 403| 2|\n1|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n407907|issue | 0| 508| 0| 1|\n1|t |f |r | 3| 0| 0| 0|\n 0| 0|f |f | \n407953|issue_issue_id_key | 0| 508| 403| 2|\n1|f |f |i | 1| 0| 0| 0|\n 0| 0|f |f | \n(25 rows)\n\n\nI've appended a dump of this database to the end of this e-mail, if anyone\nelse wants to give it a shot.\n\n(And finally, a note of apology. I think my last e-mail to the list was\nposted three times; I got two BOUNCE messages back from the list, and so I\nresent the message each time. I think the \"From\" header in my e-mails was\nwrong, but apparently it wasn't wrong enough to actually stop the message\nfrom making its way onto the list; it was just wrong enough to generate a\nBOUNCE.)\n\nThanks,\nCharlie\n\n>What's really going on here is that when the optimizer *knows* how small\n>your test tables are, it deliberately chooses nested-loop as being the\n>fastest thing. If it doesn't know, it makes some default assumptions\n>about the sizes of the tables, and with those default sizes it decides\n>that merge-join will be cheaper.\n>\n>So the next question is why apparently similar database situations yield\n>different states of optimizer knowledge. The answer is that creating\n>an index before inserting tuples and creating it afterwards have\n>different side effects on the optimizer's statistics.\n>\n>I've only spent about ten minutes looking into this, so my understanding\n>is no doubt incomplete, but what I've found out is:\n> 1. There are a couple of statistical fields in the system pg_class\n> table, namely relpages and reltuples (number of disk pages and\n> tuples in each relation).\n> 2. There are per-attribute statistics kept in the pg_statistic table.\n> 3. The pg_class statistics fields are updated by a VACUUM (with or\n> without ANALYZE) *and also by CREATE INDEX*. Possibly by other\n> things too ... but plain INSERTs and so forth don't update 'em.\n> 4. The pg_statistics info only seems to be updated by VACUUM ANALYZE.\n>\n>So if you do\n>\tCREATE TABLE\n>\tCREATE INDEX\n>\tinsert tuples\n>then the state of play is that the optimizer thinks the table is empty.\n>(Or, perhaps, it makes its default assumption --- pg_class doesn't seem\n>to have any special representation for \"size of table unknown\" as\n>opposed to \"size of table is zero\", so maybe the optimizer treats\n>reltuples = 0 as meaning it should use a default table size. I haven't\n>looked at that part of the code to find out.)\n>\n>But if you do\n>\tCREATE TABLE\n>\tinsert tuples\n>\tCREATE INDEX\n>the state of play is that the optimizer thinks there are as many tuples\n>in the table as there were when you created the index. This explains\n>the varying behavior in your detailed test case.\n>\n>I'll bet that if you investigate the contents of pg_class and\n>pg_statistic in the two databases you were originally working with,\n>you'll find that they are different. 
But a VACUUM ANALYZE should\n>bring everything into sync, assuming that the actual data in the\n>databases is the same.\n>\n>At any rate, if you still think there's something flaky going on,\n>please have a look at what's happening in these statistical fields;\n>that'll give us more info about whether there's really a bug or not.\n>\n>Also, if the optimizer still wants to use nested loop when it knows\n>there are a *lot* of tuples to be processed, there might be a bug in\n>its equations for the cost of these operations --- how large are your\n>tables?\n>\n>\t\t\tregards, tom lane\n>\n>\n\n\nCREATE SEQUENCE \"article_article_id_seq\" start 10 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE SEQUENCE \"section_section_id_seq\" start 7 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE SEQUENCE \"locale_locale_id_seq\" start 3 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE SEQUENCE \"article_source_source_id_seq\" start 4 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE SEQUENCE \"volume_volume_id_seq\" start 2 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE SEQUENCE \"issue_issue_id_seq\" start 3 increment 1 maxvalue\n2147483647 minvalue 1 cache 1 ;\nCREATE TABLE \"article\" (\n \"article_id\" int4 DEFAULT nextval ( 'article_article_id_seq' ) NOT\nNULL,\n \"section_id\" int4 NOT NULL,\n \"locale_id\" int4 NOT NULL,\n \"article_source_id\" int4,\n \"volume_id\" int4,\n \"issue_id\" int4,\n \"print_page_no\" int4,\n \"print_publ_date\" date,\n \"site_publ_date\" date NOT NULL,\n \"site_publ_time\" datetime NOT NULL,\n \"inputter_id\" int4 NOT NULL,\n \"input_date\" datetime DEFAULT text 'now',\n \"published\" int4 DEFAULT 0);\nCREATE TABLE \"article_text\" (\n \"article_id\" int4 NOT NULL,\n \"headline\" character varying,\n \"subhead\" character varying);\nCREATE TABLE \"section\" (\n \"section_id\" int4 DEFAULT nextval ( 'section_section_id_seq' ) NOT\nNULL,\n \"section_name\" character varying NOT NULL);\nCREATE TABLE \"locale\" (\n \"locale_id\" int4 DEFAULT nextval ( 'locale_locale_id_seq' ) NOT NULL,\n \"locale_name\" character varying NOT NULL);\nCREATE TABLE \"article_source\" (\n \"source_id\" int4 DEFAULT nextval ( 'article_source_source_id_seq' )\nNOT NULL,\n \"source_name\" character varying NOT NULL);\nCREATE TABLE \"volume\" (\n \"volume_id\" int4 DEFAULT nextval ( 'volume_volume_id_seq' ) NOT NULL,\n \"volume_name\" character varying);\nCREATE TABLE \"issue\" (\n \"issue_id\" int4 DEFAULT nextval ( 'issue_issue_id_seq' ) NOT NULL,\n \"issue_name\" character varying,\n \"issue_date\" date NOT NULL);\nINSERT INTO \"article\"\n(\"article_id\",\"section_id\",\"locale_id\",\"article_source_id\",\"volume_id\",\"issu\ne_id\",\"print_page_no\",\"print_publ_date\",\"site_publ_date\",\"site_publ_time\",\"i\nnputter_id\",\"input_date\",\"published\") values\n(10,3,1,4,2,3,4,'04-05-2006','01-28-1999','Thu Jan 28 19:28:40 1999\nPST',100,'Thu Jan 28 19:28:40 1999 PST',0);\nINSERT INTO \"article\"\n(\"article_id\",\"section_id\",\"locale_id\",\"article_source_id\",\"volume_id\",\"issu\ne_id\",\"print_page_no\",\"print_publ_date\",\"site_publ_date\",\"site_publ_time\",\"i\nnputter_id\",\"input_date\",\"published\") values\n(11,3,1,4,2,3,4,'04-05-2006','01-28-1999','Thu Jan 28 19:28:40 1999\nPST',100,'Thu Jan 28 19:28:40 1999 PST',0);\nINSERT INTO \"article_text\" (\"article_id\",\"headline\",\"subhead\") values\n(10,'Mayor Signs Contract With Company','Legally binding document said to\nbe four pages long');\nINSERT INTO 
\"article_text\" (\"article_id\",\"headline\",\"subhead\") values\n(11,'Mayor Cancels Contract','Company Promises to Sue Over Scuttled Deal');\nINSERT INTO \"section\" (\"section_id\",\"section_name\") values (3,'News');\nINSERT INTO \"locale\" (\"locale_id\",\"locale_name\") values (1,'Downtown');\nINSERT INTO \"article_source\" (\"source_id\",\"source_name\") values (4,'The\nTimes');\nINSERT INTO \"volume\" (\"volume_id\",\"volume_name\") values (2,'2');\nINSERT INTO \"issue\" (\"issue_id\",\"issue_name\",\"issue_date\") values\n(3,'25','04-05-2006');\nCREATE UNIQUE INDEX \"article_article_id_key\" on \"article\" using btree (\n\"article_id\" \"int4_ops\" );\nCREATE INDEX \"article_vol_ix\" on \"article\" using btree ( \"volume_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_source_ix\" on \"article\" using btree (\n\"article_source_id\" \"int4_ops\" );\nCREATE INDEX \"article_issue_ix\" on \"article\" using btree ( \"issue_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_locale_ix\" on \"article\" using btree ( \"locale_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_section_ix\" on \"article\" using btree ( \"section_id\"\n\"int4_ops\" );\nCREATE INDEX \"article_text_ix\" on \"article_text\" using btree (\n\"article_id\" \"int4_ops\" );\nCREATE UNIQUE INDEX \"section_section_id_key\" on \"section\" using btree (\n\"section_id\" \"int4_ops\" );\nCREATE UNIQUE INDEX \"locale_locale_id_key\" on \"locale\" using btree (\n\"locale_id\" \"int4_ops\" );\nCREATE UNIQUE INDEX \"article_source_source_id_key\" on \"article_source\"\nusing btree ( \"source_id\" \"int4_ops\" );\nCREATE UNIQUE INDEX \"volume_volume_id_key\" on \"volume\" using btree (\n\"volume_id\" \"int4_ops\" );\nCREATE UNIQUE INDEX \"issue_issue_id_key\" on \"issue\" using btree (\n\"issue_id\" \"int4_ops\" );\n\n",
"msg_date": "Sat, 30 Jan 1999 18:35:04 -0800",
"msg_from": "Charles Hornberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules "
},
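As Tom points out further down the thread, the 11.5 minutes reported above is largely planning time, and it grows steeply with the number of joined relations. One way to see that without changing any settings is to grow the FROM list one table at a time and time each EXPLAIN by hand; the partial queries below reuse the thread's schema but are otherwise invented for illustration.

    EXPLAIN SELECT a.article_id
      FROM article a, article_text d
     WHERE a.article_id = d.article_id;

    EXPLAIN SELECT a.article_id
      FROM article a, article_text d, section b
     WHERE a.article_id = d.article_id
       AND a.section_id = b.section_id;

    -- ...keep adding issue, locale, volume and article_source as in the full
    -- seven-table query above; the time spent inside EXPLAIN (planner work,
    -- not executor work) climbs sharply as the relation count approaches the
    -- GEQO threshold discussed in the next message.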
{
"msg_contents": "See the SET options of psql.\n\ntest=> show geqo\\g\nNOTICE: GEQO is ON beginning with 8 relations\nSHOW VARIABLE\ntest=> \\q\n\nWe turn on geqo at 8 relations. Try:\n\n\tSET GEQO TO 4\n\nand try the query again. Let us know.\n\n\n> At 04:07 PM 1/30/99 -0500, Tom Lane <[email protected]> wrote:\n> >First, you're assuming that a merge-join plan is necessarily better than\n> >a nested-loop plan. That should be true for large queries, but it is\n> >*not* necessarily true for small tables --- when there are only a few\n> >tuples in the tables being scanned, a simple nested loop wins because it\n> >has much less startup overhead. (Or at least that's what our optimizer\n> >thinks; I have not tried to measure this for myself.)\n> \n> OK, I understand that I don't understand whether merge-join plans are\n> necessarily better than nested-loop plans, and that it could make sense to\n> pick one or the other depending on the size of the tables and the number of\n> rows in them. Also, your explanation of how 'vacuum analyze' updates the\n> statistics in pg_class and pg_statistic makes it very clear why I'm seeing\n> one query plan in one DB, and different plan in the other. Thanks for the\n> quick lesson, and my apologies for making it happen on the hackers list.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Jan 1999 22:12:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules"
},
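A compact version of the session Bruce suggests, using the seven-table query from Charles's earlier message. The SET form is copied verbatim from Bruce's note; the setting applies only to the current session, which is why Charles later changes the compiled-in default instead.

    SHOW GEQO;        -- reports the relation count at which GEQO kicks in (8 by default)
    SET GEQO TO 4;    -- as suggested above
    EXPLAIN SELECT a.article_id, a.print_publ_date, b.section_name,
                   c.source_name, d.headline,
                   e.issue_name, f.locale_name, g.volume_name
              FROM article a, section b, article_source c, article_text d,
                   issue e, locale f, volume g
             WHERE a.article_id = d.article_id
               AND a.section_id = b.section_id
               AND a.article_source_id = c.source_id
               AND a.issue_id = e.issue_id
               AND a.locale_id = f.locale_id
               AND a.volume_id = g.volume_id;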
{
"msg_contents": ">We turn on geqo at 8 relations. Try:\n>\n>\tSET GEQO TO 4\n>\n>and try the query again. Let us know.\n\nWell isn't that something! Thanks so much for your help!\n\nI set the GEQO variable to 4 and now the 11.5 minute query executes in 6 seconds with this query plan:\n\nHash Join (cost=21.99 size=152 width=124)\n -> Hash Join (cost=17.48 size=38 width=108)\n -> Hash Join (cost=13.48 size=16 width=92)\n -> Hash Join (cost=10.09 size=8 width=76)\n -> Hash Join (cost=6.66 size=7 width=60)\n -> Nested Loop (cost=3.26 size=6 width=44)\n -> Seq Scan on volume g (cost=1.07 size=2 width=16)\n -> Seq Scan on article a (cost=1.10 size=3 width=28)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on article_text d (cost=1.10 size=3 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on locale f (cost=1.10 size=3 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on issue e (cost=1.07 size=2 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on section b (cost=1.23 size=7 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on article_source c (cost=1.13 size=4 width=16)\n\n\nAre there any recommendations about what value *ought* to be set for GEQO? It seems to me like 'ON=8' is pretty high -- for us, it meant that UNLESS we explicity set that variable for every JOIN query of 6-7 tables, the joins were going to bog down to a total crawl, while sending memory and CPU consumption through the roof (roughly 22MB and 90-95%, respectively, for the entire query-processing period).\n\nWhat we've done is change the default setting in /src/include/optimizer/internals.h and recompiled. (It's the very last line in that file.) Maybe it'd be nice to add that as a command-line option to postmaster?\n\nAlso, we couldn't find the GEQO README, which was mentioned several times in comments in the source code but doesn't appear to have made its way into the distribution tarball. (AFAIK, we don't have a copy anywhere beneath /usr/local/pgsql/.) Maybe it got overlooked when the tarball was balled up?\n\nThanks again. If you'd like me to submit any more information about this \"problem\", please let me know.\n\nCharlie\n\nAt 10:12 PM 1/30/99 -0500, Bruce Momjian wrote:\n>See the SET options of psql.\n>\n>test=> show geqo\\g\n>NOTICE: GEQO is ON beginning with 8 relations\n>SHOW VARIABLE\n>test=> \\q\n>\n>\n>\n>> At 04:07 PM 1/30/99 -0500, Tom Lane <[email protected]> wrote:\n>> >First, you're assuming that a merge-join plan is necessarily better than\n>> >a nested-loop plan. That should be true for large queries, but it is\n>> >*not* necessarily true for small tables --- when there are only a few\n>> >tuples in the tables being scanned, a simple nested loop wins because it\n>> >has much less startup overhead. (Or at least that's what our optimizer\n>> >thinks; I have not tried to measure this for myself.)\n>> \n>> OK, I understand that I don't understand whether merge-join plans are\n>> necessarily better than nested-loop plans, and that it could make sense to\n>> pick one or the other depending on the size of the tables and the number of\n>> rows in them. Also, your explanation of how 'vacuum analyze' updates the\n>> statistics in pg_class and pg_statistic makes it very clear why I'm seeing\n>> one query plan in one DB, and different plan in the other. 
Thanks for the\n>> quick lesson, and my apologies for making it happen on the hackers list.\n>\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n",
"msg_date": "Sat, 30 Jan 1999 21:35:07 -0800",
"msg_from": "Charles Hornberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules"
},
{
"msg_contents": "> >We turn on geqo at 8 relations. Try: > > SET GEQO TO 4 >\n> >and try the query again. Let us know.\n> \n> Well isn't that something! Thanks so much for your help!\n> \n> I set the GEQO variable to 4 and now the 11.5 minute query\n> executes in 6 seconds with this query plan:\n> \n> Hash Join (cost=21.99 size=152 width=124)\n\t width=16)\n> \n> \n> Are there any recommendations about what value *ought* to be\n> set for GEQO? It seems to me like 'ON=8' is pretty high -- for\n> us, it meant that UNLESS we explicity set that variable for\n> every JOIN query of 6-7 tables, the joins were going to bog down\n> to a total crawl, while sending memory and CPU consumption\n> through the roof (roughly 22MB and 90-95%, respectively, for\n> the entire query-processing period).\n\nWe have toyed with lowering it to 6. I think someone said that was too\nlow, but obviously, some queries need a value of 6 or lower. I don't\nunderstand when that is the case, though. Any comments from anyone?\n\nI believe the GEQO README was moved into the documentation in docs. It\nused to be there, but is no longer.\n\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Jan 1999 01:52:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules"
},
{
"msg_contents": ">> Well isn't that something! Thanks so much for your help!\n>> I set the GEQO variable to 4 and now the 11.5 minute query\n>> executes in 6 seconds with this query plan:\n\nMaybe this was already obvious to everyone else, but I just now got it\nthrough my head that Charlie was not complaining about the time to\n*execute* his seven-way join, but about the time to *plan* it.\n\n(In other words, EXPLAIN was taking about as long as actually doing\nthe SELECT, right?)\n\nThe problem here is explained in the Postgres \"Developer's Guide\",\nin the chapter titled \"Genetic Query Optimization in Database Systems\".\nThe regular planner/optimizer does a nearly exhaustive search through\nall possible ways to execute the query --- and the number of possible\nways grows exponentially in the number of joins.\n\nSo you need a heuristic shortcut when there are lots of tables. It may\nnot find the best possible execution plan, but it should find a pretty\ngood one in not too much time. That's what the GEQO code does.\n\n>> Are there any recommendations about what value *ought* to be\n>> set for GEQO? It seems to me like 'ON=8' is pretty high\n\nBruce Momjian <[email protected]> writes:\n> We have toyed with lowering it to 6. I think someone said that was too\n> low, but obviously, some queries need a value of 6 or lower. I don't\n> understand when that is the case, though. Any comments from anyone?\n\nThe docs say that the number being used is just the number of base\ntables mentioned in the query. I wonder whether that is the right\nnumber to look at. In particular, the number of cases that the\nexhaustive searcher would have to look at is not only dependent on the\nnumber of underlying tables, but also on how many indexes there are to\nconsider using.\n\nPerhaps if we develop a slightly more complex measure to compare against\nthe GEQO threshold, we can do better in deciding when to kick in GEQO.\nA brain-dead suggestion is number of tables + number of indexes ...\ndoes anyone know whether that is a good measure of the number of cases\nconsidered in the optimizer?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Jan 1999 13:57:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules "
},
{
"msg_contents": "> Perhaps if we develop a slightly more complex measure to compare against\n> the GEQO threshold, we can do better in deciding when to kick in GEQO.\n> A brain-dead suggestion is number of tables + number of indexes ...\n> does anyone know whether that is a good measure of the number of cases\n> considered in the optimizer?\n\nThat is an excellent suggestion. We have always seen 6-table joins that\nare short with geqo, and some longer. Perhaps that is the cause.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Jan 1999 14:35:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nested loops in joins, ambiguous rewrite rules"
}
] |
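A minimal psql sketch of the per-session workaround discussed in the thread above, using PostgreSQL 6.4-era syntax. The table names are taken from Charlie's query plan; the join qualifications are abbreviated placeholders rather than his real query.

    -- Check the current threshold, then lower it for this session only.
    SHOW GEQO;          -- e.g. "NOTICE: GEQO is ON beginning with 8 relations"
    SET GEQO TO 4;      -- use the genetic optimizer for joins of 4 or more tables

    -- A many-table join like Charlie's should now plan in seconds rather
    -- than minutes (join conditions abbreviated here).
    EXPLAIN SELECT a.article_id
      FROM article a, article_text d, article_source c, section b,
           issue e, locale f, volume g
     WHERE a.article_id = d.article_id;

As noted in the thread, the compiled-in default lives in src/include/optimizer/internals.h, so changing that constant and rebuilding makes a lower threshold permanent.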
[
{
"msg_contents": "Dan Gowin <[email protected]> wrote:\n>Subject: Visionary\n>\n>Berkley types,\n>\tDr. Stonebraker developed some code in 1991 called Visionary. That \n>allowed a user to visually datamine a database engine's data. Does the\n>original\n>source to this project still exist? And could Postgres benefit from it as a\n>datamining tool for Postgres clients?\n\nI think you may be referring to the DataSplash project (see:\nhttp://datasplash.cs.berkeley.edu).\n\nThe next release is due out next month, and should work with PostgreSQL 6.x\nout of the box.\n\nThe contact person is Mybrid Spalding ([email protected]).\n\n\t-Michael Robinson\n\n",
"msg_date": "Sat, 30 Jan 1999 15:51:03 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Visionary"
}
] |
[
{
"msg_contents": ">Subject: Visionary\n>\n>Berkley types,\n> Dr. Stonebraker developed some code in 1991 called Visionary. That \n>allowed a user to visually datamine a database engine's data. Does the\n>original\n>source to this project still exist? And could Postgres benefit from it as a\n>datamining tool for Postgres clients?\n\nI think you may be referring to the DataSplash project (see:\nhttp://datasplash.cs.berkeley.edu).\n\nThe next release is due out next month, and should work with PostgreSQL 6.x\nout of the box.\n\nThe contact person is Mybrid Spalding ([email protected]).\n\n -Michael Robinson\n\n",
"msg_date": "Sat, 30 Jan 1999 16:04:18 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Visionary"
}
] |
[
{
"msg_contents": "Hello PostgreSQL Hackers,\n\nI get this error with the attached program. Is this a bug or is it that\nI'm doing something totally stupid?\n\nThe program opens a connection, starts a transaction, creates a large\nobject with mode INV_READ | INV_WRITE, opens it with the same mode, calls\nlo_write five times with a single 'X', calls lo_lseek to seek to position\n1, calls lo_write to write a single 'y', then calls lo_lseek to seek to\nposition 3 and writes another 'y', then calls lo_close and finally ends\nthe transaction. The error happens when I write the second 'y'.\n\nI have attached a trace to this message too.\n\nThanks in advance for your time.\n\nIan",
"msg_date": "Sat, 30 Jan 1999 12:45:55 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: heap_fetch: xinv19073 relation: ReadBuffer(81aeefe) failed"
}
] |
[
{
"msg_contents": "Dear PostgreSQL Hackers,\n\n ERROR: heap_fetch: xinv19073 relation: ReadBuffer(81aeefe) failed\n\nI get the above error with the attached program. Is it a bug?\n\nThe program opens a connection, starts a transaction, creates a large\nobject with mode INV_READ | INV_WRITE, opens it with the same mode, calls\nlo_write five times with a single 'X', calls lo_lseek to seek to position\n1, calls lo_write to write a single 'y', then calls lo_lseek to seek to\nposition 3 and writes another 'y', then calls lo_close and finally ends\nthe transaction. The error happens when I write the second 'y'.\n\nI have attached a trace to this message too.\n\nThanks in advance for your time.\n\nIan\n\n#include <stdio.h>\r\n#include \"libpq-fe.h\"\r\n#include \"libpq/libpq-fs.h\"\r\n\r\nvoid exec_cmd(PGconn *conn, char *str);\r\n\r\nmain (int argc, char *argv[])\r\n{\r\n PGconn *conn;\r\n int lobj_fd;\r\n char buf[256];\r\n int ret, i;\r\n Oid lobj_id;\r\n\r\n conn = PQconnectdb(\"dbname=test\");\r\n if (PQstatus(conn) != CONNECTION_OK) {\r\n fprintf(stderr, \"Can't connect to backend.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n exec_cmd(conn, \"BEGIN TRANSACTION\");\r\n PQtrace (conn, stdout);\r\n if ((lobj_id = lo_creat(conn, INV_READ | INV_WRITE)) < 0) {\r\n fprintf(stderr, \"Can't create lobj.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n if ((lobj_fd = lo_open(conn, lobj_id, INV_READ | INV_WRITE)) < 0) {\r\n fprintf(stderr, \"Can't open lobj.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n fprintf(stderr, \"lo_open returned fd = %d.\\n\", lobj_fd);\r\n for (i = 0; i < 5; i++) {\r\n if ((ret = lo_write(conn, lobj_fd, \"X\", 1)) != 1) {\r\n fprintf(stderr, \"Can't write lobj.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n }\r\n if ((ret = lo_lseek(conn, lobj_fd, 1, 0)) != 1) {\r\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\r\n fprintf(stderr, \"Can't write lobj.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n if ((ret = lo_lseek(conn, lobj_fd, 3, 0)) != 3) {\r\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\r\n fprintf(stderr, \"Can't write lobj.\\n\");\r\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n ret = lo_close(conn, lobj_fd);\r\n printf(\"lo_close returned %d.\\n\", ret);\r\n if (ret)\r\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\r\n PQuntrace(conn);\r\n exec_cmd(conn, \"END TRANSACTION\");\r\n exit(0);\r\n}\r\n\r\nvoid exec_cmd(PGconn *conn, char *str)\r\n{\r\n PGresult *res;\r\n\r\n if ((res = PQexec(conn, str)) == NULL) {\r\n fprintf(stderr, \"Error executing %s.\\n\", str);\r\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\r\n exit(1);\r\n }\r\n if (PQresultStatus(res) != PGRES_COMMAND_OK) {\r\n fprintf(stderr, \"Error executing %s.\\n\", str);\r\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\r\n PQclear(res);\r\n exit(1);\r\n }\r\n PQclear(res);\r\n}\r\n\nTo backend> Q\r\nTo backend> select proname, oid from pg_proc\t \t\twhere proname = 'lo_open'\t\t\t 
or proname = 'lo_close'\t\t\t or proname = 'lo_creat'\t\t\t or proname = 'lo_unlink'\t\t\t or proname = 'lo_lseek'\t\t\t or proname = 'lo_tell'\t\t\t or proname = 'loread'\t\t\t or proname = 'lowrite'\r\nFrom backend> P\r\nFrom backend> \"blank\"\r\nFrom backend> T\r\nFrom backend (#2)> 2\r\nFrom backend> \"proname\"\r\nFrom backend (#4)> 19\r\nFrom backend (#2)> 32\r\nFrom backend (#4)> -1\r\nFrom backend> \"oid\"\r\nFrom backend (#4)> 26\r\nFrom backend (#2)> 4\r\nFrom backend (#4)> -1\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 11\r\nFrom backend (7)> lo_open\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 952\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 12\r\nFrom backend (8)> lo_close\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 953\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 12\r\nFrom backend (8)> lo_creat\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 957\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 13\r\nFrom backend (9)> lo_unlink\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 964\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 12\r\nFrom backend (8)> lo_lseek\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 956\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 11\r\nFrom backend (7)> lo_tell\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 958\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 10\r\nFrom backend (6)> loread\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 954\r\nFrom backend> D\r\nFrom backend (1)> �\r\nFrom backend (#4)> 11\r\nFrom backend (7)> lowrite\r\nFrom backend (#4)> 7\r\nFrom backend (3)> 955\r\nFrom backend> C\r\nFrom backend> \"SELECT\"\r\nFrom backend> Z\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 957\r\nTo backend (4#)> 1\r\nTo backend (4#)> 4\r\nTo backend (4#)> 393216\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 19201\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 952\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 19201\r\nTo backend (4#)> 4\r\nTo backend (4#)> 393216\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 0\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 956\r\nTo backend (4#)> 3\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 0\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> X\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> X\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> X\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> 
X\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> X\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 956\r\nTo backend (4#)> 3\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 4\r\nTo backend (4#)> 1\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> y\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 1\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 956\r\nTo backend (4#)> 3\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 4\r\nTo backend (4#)> 3\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nFrom backend> V\r\nFrom backend> G\r\nFrom backend (#4)> 4\r\nFrom backend (#4)> 3\r\nFrom backend> 0\r\nFrom backend> Z\r\nTo backend> F \r\nTo backend (4#)> 955\r\nTo backend (4#)> 2\r\nTo backend (4#)> 4\r\nTo backend (4#)> 0\r\nTo backend (4#)> 1\r\nTo backend> y\r\nFrom backend> E\r\nFrom backend> \"ERROR: heap_fetch: xinv19201 relation: ReadBuffer(81aeefe) failed\r\n\"\r\nFrom backend> Z",
"msg_date": "Sat, 30 Jan 1999 12:53:54 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: heap_fetch: xinv19073 relation: ReadBuffer(81aeefe) failed"
}
] |
[
{
"msg_contents": "Oops,\n\nI forgot to say that I'm using PostgreSQL 6.4.2 on a Red Hat 5.1\nIntel box with libc.so.6\n\nIan\n\n",
"msg_date": "Sat, 30 Jan 1999 13:06:44 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: heap_fetch: xinv19201 relation: ReadBuffer(81aeefe) failed"
}
] |
[
{
"msg_contents": "\nTatsuo suggested...\n\n> Or even better, MaxBackendId can be set at the run time such as\n> postmaster's option. Also, it would be nice if we could monitor number\n> of backends currently running. Maybe we should have a new protocol for\n> this kind of puropose?\n\nOracle does this with pseudo-tables. There are about (from my memory)\n15 tables, all starting with v$, that let you look into the internals\nof a running oracle database. Things like mounted tablespaces, locks,\nall sorts of things. One is (I think) just a name,value table for\nmisc parameters. There were even things like writes/second and\nreads/second for performance tuning.\n\nWe did a bunch of nice tk front ends for monitoring oracle simply\nby looking at the v$ tables. \n\nUsing a tables like this (pg_config, pg_backends, pg_transactions,\nwhatever) would be one way to provide this information.\n\nI haven't delved into backend code enough to know if this\nis possible.\n\n-- cary\n\n\n\n",
"msg_date": "Sat, 30 Jan 1999 09:08:21 -0500 (EST)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Postmaster dies with many child processes"
}
] |
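A purely hypothetical sketch of what the v$-style pseudo-table interface Cary describes might look like. Neither pg_backends nor pg_config exists in PostgreSQL at this point; the relation names are simply the ones his message proposes, and the columns are invented for illustration.

    -- Hypothetical: one row per running backend, in the style of Oracle's v$ views.
    SELECT * FROM pg_backends;

    -- Hypothetical: miscellaneous name/value parameters, including a backend limit.
    SELECT name, value FROM pg_config WHERE name = 'max_backends';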
[
{
"msg_contents": "\nI agree with tom lane that a simple approach to running\nout of semaphores is sufficient.\n\nI've run into problems on systems where several systems (Oracle,\nTuxedo, and (i think) something else) all used semaphores. We got\ninto some wierd situations where as the load built up, the system\nwould run out of semaphores. But depending on how things ramped up,\nsometimes Oracle would run out, sometimes Tuxedo. (Ok, I've got to\nput in a disclamer. I *THINK* this is what was going on in this\nsystem. Actually I think it was running out of undo buffers. Same\nthing.).\n\nThe point? IMHO, the safest thing is to have a configurable (command\nline ?) max backends. When postmaster starts, it allocates the\nnecessary number of semaphores for the max number of backends. Backend\nforks beyond this number simply don't happen.\n\nIf the system doesn't have enough, we know right away. If PostgreSQL\nstarts, snarfs up a bunch of semaphores, and then some OTHER system\nruns out of semaphors, then ipcs will pretty quickly show what is\ngoing on. Better this than an indeterminate failure based on which\nsoftware package gets more load first.\n\nJust one man's opinion. \n\n-- cary\n\n\n",
"msg_date": "Sat, 30 Jan 1999 20:09:45 -0500 (EST)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backends and semaphores"
}
] |
[
{
"msg_contents": "[Note to pgsql-hackers: I can't figure out why I have this and Kenneth\ndoesn't. Can someone else figure out why?]\n\nThus spake Kenneth Jacker\n> Thanks for your prompt reply ...\n> \n> |The source is in <PostgreSQL directory>/src/bin/vacuumdb and is simply\n> |the raw file that gets installed in the bin directory.\n> \n> Either we're using different versions of the distribution or I need to\n> re-ftp and re-install everything:\n> \n> Here's what's in *my* src/bin directory:\n> \n> % ls postgresql-6.4.2/src/bin/\n> CVS/ createuser/ initlocation/ pg_id/ pgtclsh/\n> Makefile destroydb/ ipcclean/ pg_passwd/ psql/\n> cleardbdir/ destroyuser/ pg_dump/ pg_version/\n> createdb/ initdb/ pg_encoding/ pgaccess/\n> \n> Unless you have any ideas, I'll guess I'll just give up!\n\nI deleted my src/bin/vacuumdb directory and supped the latest sources.\nThey got recreated. I am copying this to pgsql-hackers as I am fresh out\nof ideas here. Maybe it isn't getting into releases for some reason.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 30 Jan 1999 22:27:38 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb?"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> [Note to pgsql-hackers: I can't figure out why I have this and Kenneth\n> doesn't. Can someone else figure out why?]\n\nvacuumdb was added post-6.4, so it won't appear in 6.4.* tarfiles.\nIt wouldn't appear in your CVS pull either, if you pulled the REL6_4 tag\nrather than the main branch.\n\nThere's no reason the script wouldn't work with 6.4, as far as I can\nsee.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Jan 1999 12:21:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuumdb? "
}
] |
[
{
"msg_contents": "I sent in a patch for createuser almost a week ago and I was wondering\nwhat the procedure is. I haven't seen the patch applied but I also\nhaven't seen it rejected. I understand that not every one of my little\ngems is going to make it in but I hope there is some sort of closure\nmethod built into the process. In this case I think I have a useful\nlittle addition to createuser but if I am alone, it isn't important\nenough to diverge from the standard distribution. I just need to\nknow one way or the other.\n\nIn fact, this is the second time I sent in a similar patch for createuser\nand I never heard anything about the first time either and that was before\n6.4 was released.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 30 Jan 1999 22:42:04 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patches"
},
{
"msg_contents": "> I sent in a patch for createuser almost a week ago and I was wondering\n> what the procedure is. I haven't seen the patch applied but I also\n> haven't seen it rejected. I understand that not every one of my little\n> gems is going to make it in but I hope there is some sort of closure\n> method built into the process. In this case I think I have a useful\n> little addition to createuser but if I am alone, it isn't important\n> enough to diverge from the standard distribution. I just need to\n> know one way or the other.\n> \n> In fact, this is the second time I sent in a similar patch for createuser\n> and I never heard anything about the first time either and that was before\n> 6.4 was released.\n\nI have it. When I am doing my own development, I don't apply patches\nfrom the list. They stay in my mailbox, and get applied before beta\nstarts. If someone else applies it, I remove my copy.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Jan 1999 00:38:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patches"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > I sent in a patch for createuser almost a week ago and I was wondering\n> \n> I have it. When I am doing my own development, I don't apply patches\n> from the list. They stay in my mailbox, and get applied before beta\n> starts. If someone else applies it, I remove my copy.\n\nI see that Marc has gone ahead and committed it now. I guess the problem\nis multiple queues. It would be better if there was one queue that the\ncommitters could work on but I can't think of a good way to make that\nwork. Maybe some sort of PR system.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 31 Jan 1999 08:18:39 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patches"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> I see that Marc has gone ahead and committed it now. I guess the problem\n> is multiple queues. It would be better if there was one queue that the\n> committers could work on but I can't think of a good way to make that\n> work. Maybe some sort of PR system.\n\nI don't think multiple queues per se are a problem; the deficiency I see\nin our patching procedures is lack of visibility of the status of a\nproposed patch. If it's not been applied, is it just because no one\nhas gotten to it yet, or was there an objection from someone? What's\nworse is that one of the people with commit access might miss or forget\nabout such an objection, and commit a bogus patch anyway sometime later.\nWe have enough committers now that I think there's a definite risk here.\n\nIf we wanted to be really organized about this, it'd be cool to have\na central database with an item for each proposed patch and links to\nfollowup discussions. But I'm not sure it's worth the work it would\ntake to set it up and then maintain the entries. Unless we get badly\nbitten by a mistake that such a database would've prevented, it probably\nwon't happen ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Jan 1999 12:40:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patches "
},
{
"msg_contents": "> I don't think multiple queues per se are a problem; the deficiency I see\n> in our patching procedures is lack of visibility of the status of a\n> proposed patch. If it's not been applied, is it just because no one\n> has gotten to it yet, or was there an objection from someone? What's\n> worse is that one of the people with commit access might miss or forget\n> about such an objection, and commit a bogus patch anyway sometime later.\n> We have enough committers now that I think there's a definite risk here.\n> \n> If we wanted to be really organized about this, it'd be cool to have\n> a central database with an item for each proposed patch and links to\n> followup discussions. But I'm not sure it's worth the work it would\n> take to set it up and then maintain the entries. Unless we get badly\n> bitten by a mistake that such a database would've prevented, it probably\n> won't happen ...\n\nI keep them in my mailbox, delete them if there is objection or someone\nelse applies it. Eventually, I apply it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Jan 1999 13:38:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patches"
}
] |
[
{
"msg_contents": "Hi folks...sorry for sending this to this list, but the others\nare not getting me any where. I've run into a problem trying to\nrecover data from a table that has some how been corrupted (I\ndo not know how). Although this is no longer critical (I have\nbeen able to recreate the data from other sources), I would like\nto know, if possible, how to get around this in the future.\n\nThe problem is a 1.6 million row table to which any command\nsuch as select, pg_dump, copy, vacuum fails to go anywhere.\nSpecifically, no error is generated, but the backend just sits\nand spins eating up CPU cycles. The table initially had two\nindices associated with it. In trying to identify the problem,\nI've dropped the indices and attempted to work with the raw\nunderlying table.\n\nSpecifically:\n 1. The table is visible to clients - i.e. you can _attempt_\n a select, pg_dump, etc.\n 2. If a pg_dump is attempted on the table, only the first\n 761 rows are dumped. Thereafter, the server task spins\n forever chewing up CPU cycles and never dumps an\n additional record. In one case (prior to me killing the\n task) I witnessed it consuming 4 hours of CPU time.\n (P200, 128Meg Ram, 90 Meg swap, but never used swap)\n 3. vacuum does the same...If I vacuum the db, it vacuums\n almost everything but this table (i.e. it gets stuck\n on what I think is this table). If I vacuum the table\n directly, the server task spins endlessly.\n 4. Select statements hang forever (same effect)\n\nIn all cases, the memory used by the back-end never grows in\nterms of memory usage once it starts spinning on CPU cycles.\n\nAll other tables in the db behave \"normally\".\n\nAny tools other than vacuum that detect and correct inconsistencies,\nor allow for some form of data recovery on the table?\n\nThomas\n\n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n",
"msg_date": "Sun, 31 Jan 1999 19:22:03 -0500",
"msg_from": "Thomas Reinke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table exists, but not accessible?"
}
] |
[
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Recently I thought about UPDATE operations in READ COMMITED\n> mode a little and found it's very difficult.\n> I read Oracle's manual but couldn't find the algorithm.\n> \n> After I read your posting [READ COMMITTED isolevel is implemented ...],\n> I tested the following [Case-1] in my Oracle database environment(Version\n> 7).\n> I believed that changes of the values of columns used in a QUERY caused\n> re-execution.\n> \n> After I posted my mail,I tested another case(the following [Case -2]) and\n> found that changes of the values of columns used in a QUERY don't\n> necessarily cause re-execution.\n> To tell the truth,I can't understand the result of this case.\n> IMHO this case requires re-execution after re-setting the time of execution\n> and the result should be same as [Case-1] .\n> But Oracle doesn' work as I believe.\n> \n> create table t (id int4,dt int4,name text);\n> insert into t values (10,5,'a0');\n> insert into t values (20,10,'b0');\n> insert into t values (30,15,'c0');\n> \n> id |dt |name\n> ----------------------------\n> 10 |5 |a0\n> 20 |10 |b0\n> 30 |15 |c0\n> \n> \n> session-1 session-2 session-3\n> \n> [Case-1]\n> update t set dt=dt+1,\n> ^^^^^^^^\n> name='c1'\n> where id=30;\n> UPDATE 1\n> update t set dt=dt+2\n> where dt >7;\n> ^^^^^^^^^^^^^\n> (blocked)\n> update t set dt=dt+3,\n> ^^^^^^^^^\n> id=id+1\n> where id=10;\n> UPDATE 1\n> commit;\n> COMMIT\n> commit;\n> COMMIT\n> UPDATE 3\n> ^^^^^^^^^^^^\n\nOps. I'm quite suprized that T2 sees changes made by T3 after\nT2' statement started! What would be results if T3 wouldn't\nmake UPDATE but made INSERT INTO t VALUES (11, 8, 'a1') ?\n \n> [result] id |dt |name\n> ---------------------------\n> 11 |10 |a0\n> 20 |12 |b0\n> 30 |18 |c1\n> \n> If dt=dt+1 ==> dt=dt\n> ^^^^^^^^\n> then UPDATE 3 ==> UPDATE 2\n> ^^^^^^^^^^^^\n> \n> [result] id |dt |name\n> ---------------------------\n> 11 |8 |a0\n> 20 |12 |b0\n> 30 |17 |c1\n> \n\nWhy T2 doesn't change id=11 row now???\nDoes Oracle re-execute _entire_ query after being blocked\nby concurrent transaction T only if T made changes in columns\nused in QUAL?!\n\nYes! Case-2 confirmes this!\n\n> [Case-2]\n> \n> update t set dt=dt+1,\n> ^^^^^^^^\n> name='c1'\n> where id=30;\n> UPDATE 1\n> update t set dt=dt+2\n> where id > 10;\n> ^^^^^^^^^\n> (blocked)\n> update t set dt=dt+3,\n> id=id+1\n> ^^^^^^^^\n> where id=10;\n> UPDATE 1\n> commit;\n> COMMIT\n> commit;\n> COMMIT\n> UPDATE 2\n> ^^^^^^^^^^^^\n> \n> [result] id |dt |name\n> ---------------------------\n> 11 |8 |a0\n> 20 |12 |b0\n> 30 |18 |c1\n\nid is not changed by T1 and so T2 doesn't re-execute query\nafter T1 committed and doesn't see changes made by T3!\nT2 just re-evaluates target list.\n\nOk. Postgres always re-executes query after being blocked,\nbut makes this for single row only and so all other changes\nmade after query started are not visible. I would say that\nwe are more consistent than Oracle -:))\n\nOracle says:\n---\nOracle always enforces statement-level read consistency. \nThis guarantees that the data returned by a single query \nis consistent with respect to the time that the query began.\nTherefore, a query never sees dirty data nor any of the changes \nmade by transactions that commit during query execution. As query \nexecution proceeds, only data committed before the query began is \nvisible to the query. The query does not see changes committed after\nstatement execution begins. 
\n---\n\nYour tests show that Oracle breaks this statement if re-evaluation\nof query' qual is required!\n\nJust wondering what would be the results of these test in\nInformix or Sybase?\nCould someone run them?\n\nVadim\n",
"msg_date": "Mon, 01 Feb 1999 11:54:20 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..."
},
{
"msg_contents": "Hello all,\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Monday, February 01, 1999 1:54 PM\n> To: Hiroshi Inoue\n> Cc: [email protected]\n> Subject: Re: [HACKERS] READ COMMITTED isolevel is implemented ...\n> \n> \n> Hiroshi Inoue wrote:\n> > \n[snip]\n\n> > \n> > create table t (id int4,dt int4,name text);\n> > insert into t values (10,5,'a0');\n> > insert into t values (20,10,'b0');\n> > insert into t values (30,15,'c0');\n> > \n> > id |dt |name\n> > ----------------------------\n> > 10 |5 |a0\n> > 20 |10 |b0\n> > 30 |15 |c0\n> > \n> > \n> > session-1 session-2 session-3\n> > \n> > [Case-1]\n> > update t set dt=dt+1,\n> > ^^^^^^^^\n> > name='c1'\n> > where id=30;\n> > UPDATE 1\n> > update t set dt=dt+2\n> > where dt >7;\n> > ^^^^^^^^^^^^^\n> > (blocked)\n> > update t set dt=dt+3,\n> > ^^^^^^^^^\n> > id=id+1\n> > where id=10;\n> > UPDATE 1\n> > commit;\n> > COMMIT\n> > commit;\n> > COMMIT\n> > UPDATE 3\n> > ^^^^^^^^^^^^\n> \n> Ops. I'm quite suprized that T2 sees changes made by T3 after\n> T2' statement started! What would be results if T3 wouldn't\n> make UPDATE but made INSERT INTO t VALUES (11, 8, 'a1') ?\n>\n\n\tThe result was\n\n\tUPDATE 3\n\n\tid\t|dt\t|name\n\t----------------------------\n\t10\t|5\t|a0\n\t20\t|12\t|b0\n\t30\t|18\t|c1\n\t11\t|10\t|a1\n \n> > [result] id |dt |name\n> > ---------------------------\n> > 11 |10 |a0\n> > 20 |12 |b0\n> > 30 |18 |c1\n> > \n> > If dt=dt+1 ==> dt=dt\n> > ^^^^^^^^\n> > then UPDATE 3 ==> UPDATE 2\n> > ^^^^^^^^^^^^\n> > \n> > [result] id |dt |name\n> > ---------------------------\n> > 11 |8 |a0\n> > 20 |12 |b0\n> > 30 |17 |c1\n> > \n> \n> Why T2 doesn't change id=11 row now???\n> Does Oracle re-execute _entire_ query after being blocked\n> by concurrent transaction T only if T made changes in columns\n> used in QUAL?!\n> \n> Yes! Case-2 confirmes this!\n> \n> > [Case-2]\n> > \n> > update t set dt=dt+1,\n> > ^^^^^^^^\n> > name='c1'\n> > where id=30;\n> > UPDATE 1\n> > update t set dt=dt+2\n> > where id > 10;\n> > ^^^^^^^^^\n> > (blocked)\n> > update t set dt=dt+3,\n> > id=id+1\n> > ^^^^^^^^\n> > where id=10;\n> > UPDATE 1\n> > commit;\n> > COMMIT\n> > commit;\n> > COMMIT\n> > UPDATE 2\n> > ^^^^^^^^^^^^\n> > \n> > [result] id |dt |name\n> > ---------------------------\n> > 11 |8 |a0\n> > 20 |12 |b0\n> > 30 |18 |c1\n> \n> id is not changed by T1 and so T2 doesn't re-execute query\n> after T1 committed and doesn't see changes made by T3!\n> T2 just re-evaluates target list.\n> \n> Ok. Postgres always re-executes query after being blocked,\n> but makes this for single row only and so all other changes\n> made after query started are not visible. I would say that\n> we are more consistent than Oracle -:))\n> \n> Oracle says:\n> ---\n> Oracle always enforces statement-level read consistency. \n> This guarantees that the data returned by a single query \n> is consistent with respect to the time that the query began.\n> Therefore, a query never sees dirty data nor any of the changes \n> made by transactions that commit during query execution. As query \n> execution proceeds, only data committed before the query began is \n> visible to the query. The query does not see changes committed after\n> statement execution begins. 
\n> ---\n> \n> Your tests show that Oracle breaks this statement if re-evaluation\n> of query' qual is required!\n> \n\nI don't think so.\nIt seems that Oracle changes the time that the query began and \nre-executes the query (or processes something like re-execution \nmore effective than re-execution).\n\nSo [ Case-1 ] is reasonable for me.\n\nIn case of dt=dt+1,the time that the query of T2 began was changed \nto the time when T1 was committed and in case of dt=dt,it was not \nchanged from the time when the query command was issued.\nI think that both cases hold column-level read consistency.\n\nBut [ Case-2 ] is a question for me.\n\nWhen did the query of T2 begin ? \nIt seems that only the values of old target dt ( ... set dt=dt+2 ...) are \nat the time when T1 was committed.\t\t ^^^ \n\n> Just wondering what would be the results of these test in\n> Informix or Sybase?\n> Could someone run them?\n\nAnyway the version of Oracle used for my test is very old.\nThe result may be different in newer version.\nCound someone run them in Oracle8 ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 1 Feb 1999 18:17:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] READ COMMITTED isolevel is implemented ..."
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > Oracle says:\n> > ---\n> > Oracle always enforces statement-level read consistency.\n> > This guarantees that the data returned by a single query\n> > is consistent with respect to the time that the query began.\n> > Therefore, a query never sees dirty data nor any of the changes\n> > made by transactions that commit during query execution. As query\n> > execution proceeds, only data committed before the query began is\n> > visible to the query. The query does not see changes committed after\n> > statement execution begins.\n> > ---\n> >\n> > Your tests show that Oracle breaks this statement if re-evaluation\n> > of query' qual is required!\n> >\n> \n> I don't think so.\n> It seems that Oracle changes the time that the query began and\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nWhat's the reason to do this?\n\n> re-executes the query (or processes something like re-execution\n> more effective than re-execution).\n> \n> So [ Case-1 ] is reasonable for me.\n\nWhy?\nIf T2 wouldn't be blocked by T1 then T2 wouldn't see T3 changes\neven if T3 would be committed \"just after\" T2 started - before\nT2 read id=10 row.\n\n> In case of dt=dt+1,the time that the query of T2 began was changed\n> to the time when T1 was committed and in case of dt=dt,it was not\n> changed from the time when the query command was issued.\n> I think that both cases hold column-level read consistency.\n> But [ Case-2 ] is a question for me.\n> \n> When did the query of T2 begin ?\n> It seems that only the values of old target dt ( ... set dt=dt+2 ...) are\n> at the time when T1 was committed. ^^^\n\nAnd this shows that Oracle doesn't re-execute query at all when qual\ncolumns were not changed - just because of this is not required\nfrom any point of view. Oracle just gets new version of row\nand re-evaluates target list - to performe update over new version,\nnot old one.\n\nOk. Please try to run this modified test:\n\ncreate table t (id int4,dt int4,name text);\ninsert into t values (10,5,'a0');\ninsert into t values (20,10,'b0');\ninsert into t values (30,15,'c0');\n\ncreate table updatable (id int4);\ninsert into updatable select id from t;\n\n[Case-2]\n\nupdate t set dt=dt+1,\n name='c1'\n where id=30;\nUPDATE 1\n update t set dt=dt+2\n where id > 10 and \n id in (select * from updatable);\n (blocked)\n\n delete from updatable\n where id = 30;\n DELETE 1\n COMMIT;\n END\nCOMMIT;\nEND\n UPDATE 2\n(actually, I got UPDATE 1 - due to bug in subqueries:\nsubplan' snapshot wasn't initialized, fixed, patch attached).\n select * from t;\n\n id|dt|name\n --+--+----\n 10| 5|a0 \n 20|12|b0 \n 30|18|c1 \n ^^^^^^^^\nupdated... What's the result in Oracle?\n\nVadim",
"msg_date": "Mon, 01 Feb 1999 20:29:52 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..."
},
{
"msg_contents": "Hello all,\n\n> > \n> > > Oracle says:\n> > > ---\n> > > Oracle always enforces statement-level read consistency.\n> > > This guarantees that the data returned by a single query\n> > > is consistent with respect to the time that the query began.\n> > > Therefore, a query never sees dirty data nor any of the changes\n> > > made by transactions that commit during query execution. As query\n> > > execution proceeds, only data committed before the query began is\n> > > visible to the query. The query does not see changes committed after\n> > > statement execution begins.\n> > > ---\n> > >\n> > > Your tests show that Oracle breaks this statement if re-evaluation\n> > > of query' qual is required!\n> > >\n> > \n> > I don't think so.\n> > It seems that Oracle changes the time that the query began and\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> What's the reason to do this?\n>\n\nIn case of SELECT only,read consistency is obvious.\nBut it's ambiguous and difficult for me in case of update or select .. \nfor update.\nThere will be many ways of thinking.\nMine is only one of them.\n\nMy way of thinking is\n\n\tThe values of columns of tuples to be updated/deleted/sele\n\tcted_for_update which appears in the query should be \n\tlatest ones and so they may be different from the values \n\twhich read consistency provides. \n\tThe time that the query began should be changed to \n\tthe time that no such differences can be seen.\n\n> > re-executes the query (or processes something like re-execution\n> > more effective than re-execution).\n> > \n> > So [ Case-1 ] is reasonable for me.\n> \n> Why?\n> If T2 wouldn't be blocked by T1 then T2 wouldn't see T3 changes\n> even if T3 would be committed \"just after\" T2 started - before\n> T2 read id=10 row.\n>\n\n[My thinkging]\n\tIf T2 wouldn't be blocked by T1,T2 woundn't detect the changes\n\tof dt,so we don't have to change the time that the query began.\n\tBut if T2 detects the change of dt which is used in the query(\n\tand the tuple is to be updated),the time that the query began \n\tshould be changed. \n \n> > In case of dt=dt+1,the time that the query of T2 began was changed\n> > to the time when T1 was committed and in case of dt=dt,it was not\n> > changed from the time when the query command was issued.\n> > I think that both cases hold column-level read consistency.\n> > But [ Case-2 ] is a question for me.\n> > \n> > When did the query of T2 begin ?\n> > It seems that only the values of old target dt ( ... set \n> dt=dt+2 ...) are\n> > at the time when T1 was committed. ^^^\n> \n> And this shows that Oracle doesn't re-execute query at all when qual\n> columns were not changed - just because of this is not required\n> from any point of view. Oracle just gets new version of row\n> and re-evaluates target list - to performe update over new version,\n> not old one.\n>\n\nMy way of thinking can't explain this case.\nOracle seems to ignore columns in the targetlist as you say.\n\nBut how about the following case ?\nAfter chainging the query of T2 in [ Case-2 ] to \n \n select dt from t \n where id > 10 for update;\n\nthe result was\n\t\tdt\n\t\t----\n\t\t8\n\t\t10\n\t\t16\n \t\t(3 rows)\n \t \nThis result is reasonable for me.\nWhere is the difference ?\n \n> Ok. 
Please try to run this modified test:\n> \n> create table t (id int4,dt int4,name text);\n> insert into t values (10,5,'a0');\n> insert into t values (20,10,'b0');\n> insert into t values (30,15,'c0');\n> \n> create table updatable (id int4);\n> insert into updatable select id from t;\n> \n> [Case-2]\n> \n> update t set dt=dt+1,\n> name='c1'\n> where id=30;\n> UPDATE 1\n> update t set dt=dt+2\n> where id > 10 and \n> id in (select * from updatable);\n> (blocked)\n> \n> delete from updatable\n> where id = 30;\n> DELETE 1\n> COMMIT;\n> END\n> COMMIT;\n> END\n> UPDATE 2\n> (actually, I got UPDATE 1 - due to bug in subqueries:\n> subplan' snapshot wasn't initialized, fixed, patch attached).\n> select * from t;\n> \n> id|dt|name\n> --+--+----\n> 10| 5|a0 \n> 20|12|b0 \n> 30|18|c1 \n> ^^^^^^^^\n> updated... What's the result in Oracle?\n>\n\nThe result is same as yours.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 2 Feb 1999 15:27:42 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] READ COMMITTED isolevel is implemented ..."
}
] |
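For anyone who wants to repeat Vadim's comparison on another DBMS, here is the subselect variant of Case-2 from the two messages above, consolidated into one script. The BEGIN/COMMIT placement is an assumption about how the three sessions were driven; the final numbers are the ones Vadim and Hiroshi report for PostgreSQL (with Vadim's subquery-snapshot patch) and for Oracle.

    CREATE TABLE t (id int4, dt int4, name text);
    INSERT INTO t VALUES (10,  5, 'a0');
    INSERT INTO t VALUES (20, 10, 'b0');
    INSERT INTO t VALUES (30, 15, 'c0');
    CREATE TABLE updatable (id int4);
    INSERT INTO updatable SELECT id FROM t;

    -- session 1:
    BEGIN;
    UPDATE t SET dt = dt + 1, name = 'c1' WHERE id = 30;

    -- session 2 (blocks on the row session 1 is updating):
    BEGIN;
    UPDATE t SET dt = dt + 2
     WHERE id > 10 AND id IN (SELECT * FROM updatable);

    -- session 3:
    BEGIN;
    DELETE FROM updatable WHERE id = 30;
    COMMIT;

    -- session 1:
    COMMIT;              -- session 2 now continues and reports UPDATE 2

    -- session 2:
    COMMIT;
    SELECT * FROM t;     -- id=30 ends up with dt=18 in both PostgreSQL and Oracle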
[
{
"msg_contents": "Dear Hackers,\n\nCan anyone confirm for me that libpq is re-entrant. Ie, it is safe to\nexecute libpq calls from within a signal handler when that signal could've\ninterrupted a libpq function? Of course I don't expect to be able to use\nthe same PGconn or PGresult structures inside the signal handler as\noutside.\n\nI've looked through the 6.4.2 libpq documentation and it doesn't say,\nalthough the existence of non-blocking calls implies it could be made\nre-entrant quite easily.\n\nMany thanks\nIan\n\n",
"msg_date": "Mon, 1 Feb 1999 08:38:53 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is libpq re-entrant?"
},
{
"msg_contents": "Ian Grant <[email protected]> writes:\n> Can anyone confirm for me that libpq is re-entrant. Ie, it is safe to\n> execute libpq calls from within a signal handler when that signal could've\n> interrupted a libpq function? Of course I don't expect to be able to use\n> the same PGconn or PGresult structures inside the signal handler as\n> outside.\n\nOperations on a single PGconn are definitely not thread-safe, but I\nthink you could use multiple PGconns from different threads without\ntrouble. The only gotcha I know of offhand is that PQconnectdb()\nis not thread-safe because of use of a modifiable static data structure.\nFixing that seems to require changing the API, so I haven't done\nanything about it yet.\n\nOf course this assumes that your underlying libc is thread-safe;\nif malloc(), sprintf(), et al are not thread-safe then better\nforget the whole thing.\n\nThere is a special exception for PQrequestCancel, which is supposed\nto be safely callable from a signal handler even if your libc is\nbrain-dead.\n\nIf you try it and find any other problems let me know; I'll see if\nI can do anything about them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Feb 1999 14:00:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is libpq re-entrant? "
}
] |
[
{
"msg_contents": "> Modified Files:\n> main.html\n> Log Message:\n> Add mention of TEMP tables.\n\nBruce, would this be a good time to start the v6.5 release notes?\nCertainly the temp tables capability would make the short list of\n\"highlighted features\" in the 1-paragraph-per-topic summary above the\none-liners.\n\nDo you want to start the v6.5 section in release.sgml? Also, we\napparently have not yet gotten the v6.4.2 release notes in there either,\nso perhaps you could drop those in also? I'll polish the markup as\nneeded...\n\n - Tom\n",
"msg_date": "Mon, 01 Feb 1999 13:19:41 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] [WEBMASTER] 'www/html main.html'"
},
{
"msg_contents": "> > Modified Files:\n> > main.html\n> > Log Message:\n> > Add mention of TEMP tables.\n> \n> Bruce, would this be a good time to start the v6.5 release notes?\n> Certainly the temp tables capability would make the short list of\n> \"highlighted features\" in the 1-paragraph-per-topic summary above the\n> one-liners.\n> \n> Do you want to start the v6.5 section in release.sgml? Also, we\n> apparently have not yet gotten the v6.4.2 release notes in there either,\n> so perhaps you could drop those in also? I'll polish the markup as\n> needed...\n> \n> - Tom\n> \n\nOK, got to finish temp table and commit them first.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 08:44:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] [WEBMASTER] 'www/html main.html'"
},
{
"msg_contents": ">\n> > > Modified Files:\n> > > main.html\n> > > Log Message:\n> > > Add mention of TEMP tables.\n> >\n> > Bruce, would this be a good time to start the v6.5 release notes?\n> > Certainly the temp tables capability would make the short list of\n> > \"highlighted features\" in the 1-paragraph-per-topic summary above the\n> > one-liners.\n> >\n> > Do you want to start the v6.5 section in release.sgml? Also, we\n> > apparently have not yet gotten the v6.4.2 release notes in there either,\n> > so perhaps you could drop those in also? I'll polish the markup as\n> > needed...\n> >\n> > - Tom\n> >\n>\n> OK, got to finish temp table and commit them first.\n\n Don't know if you noticed my statement in another thread.\n\n I think temp tables could confuse any function that uses the\n prepared/saved plan feature of SPI manager. If such a\n function called once saved a plan for a temp table named\n 'test', and later called again but 'test' is now another\n table, probably the parsetree and plan might point to the old\n (wrong, not anymore existent) table.\n\n Should not be a real drawback against temp tables. But maybe\n something to mention in the release notes.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 1 Feb 1999 20:12:16 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] [WEBMASTER] 'www/html main.html'"
},
{
"msg_contents": "> > OK, got to finish temp table and commit them first.\n> \n> Don't know if you noticed my statement in another thread.\n> \n> I think temp tables could confuse any function that uses the\n> prepared/saved plan feature of SPI manager. If such a\n> function called once saved a plan for a temp table named\n> 'test', and later called again but 'test' is now another\n> table, probably the parsetree and plan might point to the old\n> (wrong, not anymore existent) table.\n\nYes. There are going to be limitations to the temp tables. Not sure\nhow what the limitations are going to be..\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 14:19:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] [WEBMASTER] 'www/html main.html'"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Bruce\" == Bruce Momjian <[email protected]> writes:\n\n Bruce> Yes. There are going to be limitations to the temp tables.\n Bruce> Not sure how what the limitations are going to be..\n\nAs a possible suggestion...I know Sybase requires that temp tables\nhave names that begin with a `#'. This avoids the problem of\nshadowing ordinary names.\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 101 West 15th St #4NN\[email protected] New York, NY 10011\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNrZgIuoW38lmvDvNAQGW0gP9Egug1ICjbI9u93oXnWtWuGFQ+SV9XdEK\nCpSkEiGbFBIB3MgXVc+6sCTbAsL5Y3RVEM3G0p91f/frFZVCpJYwuqvhxM6NHf3U\nVUuHXUIBKwFUJsv/+t687/dlzX3e6jVUGx8BW++SrrpcDOHWmCMBJK+9v+gJqRum\naxGwmGJgBZw=\n=Q37n\n-----END PGP SIGNATURE-----\n",
"msg_date": "01 Feb 1999 21:17:07 -0500",
"msg_from": "Roland Roberts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [COMMITTERS] [WEBMASTER] 'www/html\n\tmain.html'"
},
{
"msg_contents": "-- Start of PGP signed section.\n> >>>>> \"Bruce\" == Bruce Momjian <[email protected]> writes:\n> \n> Bruce> Yes. There are going to be limitations to the temp tables.\n> Bruce> Not sure how what the limitations are going to be..\n> \n> As a possible suggestion...I know Sybase requires that temp tables\n> have names that begin with a `#'. This avoids the problem of\n> shadowing ordinary names.\n\nThat's cheezy. Here is my latest temp trick. Create a table and index,\ncreate temp versions, insert into them, and test temp table removal. \nThis is the regression test. I am about to commit the code.\n\n---------------------------------------------------------------------------\n\t\n\tQUERY: CREATE TABLE temptest(col int);\n\tQUERY: CREATE INDEX i_temptest ON temptest(col);\n\tQUERY: CREATE TEMP TABLE temptest(col int);\n\tQUERY: CREATE INDEX i_temptest ON temptest(col);\n\tQUERY: DROP INDEX i_temptest;\n\tQUERY: DROP TABLE temptest;\n\tQUERY: DROP INDEX i_temptest;\n\tQUERY: DROP TABLE temptest;\n\tQUERY: CREATE TABLE temptest(col int);\n\tQUERY: INSERT INTO temptest VALUES (1);\n\tQUERY: CREATE TEMP TABLE temptest(col int);\n\tQUERY: INSERT INTO temptest VALUES (2);\n\tQUERY: SELECT * FROM temptest;\n\tcol\n\t---\n\t 2\n\t(1 row)\n\t\n\tQUERY: DROP TABLE temptest;\n\tQUERY: SELECT * FROM temptest;\n\tcol\n\t---\n\t 1\n\t(1 row)\n\t\n\tQUERY: DROP TABLE temptest;\n\tQUERY: CREATE TEMP TABLE temptest(col int);\n\t-- restarts backend\n\t\\connect regression \n\tQUERY: SELECT * FROM temptest;\n\tERROR: temptest: Table does not exist.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 22:02:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [COMMITTERS] [WEBMASTER] 'www/html\n\tmain.html'"
},
{
"msg_contents": "> As a possible suggestion...I know Sybase requires that temp tables\n> have names that begin with a `#'. This avoids the problem of\n> shadowing ordinary names.\n\nInteresting. We could do something similar if supporting SQL92\nconventions is impossible, but why not try for something compatible\nfirst?\n\n - Tom\n",
"msg_date": "Tue, 02 Feb 1999 03:30:24 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [COMMITTERS] [WEBMASTER] 'www/html\n\tmain.html'"
}
] |
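A hedged sketch of the saved-plan hazard Jan raises, written in PL/pgSQL. It assumes the plpgsql language is installed in the database; count_test and test are made-up names, and whether the second call actually misbehaves is exactly the open question, not an established result.

    CREATE TEMP TABLE test (col int4);

    CREATE FUNCTION count_test() RETURNS int4 AS '
    DECLARE
        n int4;
    BEGIN
        -- On the first call, SPI prepares and saves a plan referencing "test".
        SELECT count(*) INTO n FROM test;
        RETURN n;
    END;
    ' LANGUAGE 'plpgsql';

    SELECT count_test();      -- the plan for the temp table is now cached
    DROP TABLE test;
    CREATE TEMP TABLE test (col int4);
    SELECT count_test();      -- the cached plan may still point at the dropped table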
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Mon, 1 Feb 1999 21:11:44 +0530 (IST)",
"msg_from": "\"Venkatesh. K\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "I am creating a text file to do file dumps into postgres using the copy command. The copy command\nexecutes fine but when I do a count on the table, I get zero records. But when I do a vacuum\ncommand on that table, here is what is says\n\nsdbm=> vacuum verbose D_TERM_APPROACH;\nNOTICE: --Relation d_term_approach--\nNOTICE: Pages 490: Changed 0, Reapped 490, Empty 0, New 0; Tup 0: Vac 11530, Crash 11530, UnUsed 0,\nMinLen 0, MaxLen 0; Re-using: Free/Avail. Space 3964040/0; EndEmpty/Avail. Pages 490/0. Elapsed 0/1\nsec.\nNOTICE: Rel d_term_approach: Pages: 490 --> 0.\n\nI know the 11530 is the correct number of tuples that should have been added to the table. Any\nideas on what may be happening or how I can debug this problem. I am using version 6.4 on Solaris\n2.5.1.\nThanks\n---------\nChris Williams\nSterling Software\nRome, New York\nPhone: (315) 336-0500\nEmail: [email protected]\n\n",
"msg_date": "Mon, 1 Feb 1999 14:48:23 -0500",
"msg_from": "\"Chris Williams\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems doing a copy to a table"
},
{
"msg_contents": "Chris Williams wrote:\n\n>\n> I am creating a text file to do file dumps into postgres using the copy command. The copy command\n> executes fine but when I do a count on the table, I get zero records. But when I do a vacuum\n> command on that table, here is what is says\n>\n> sdbm=> vacuum verbose D_TERM_APPROACH;\n> NOTICE: --Relation d_term_approach--\n> NOTICE: Pages 490: Changed 0, Reapped 490, Empty 0, New 0; Tup 0: Vac 11530, Crash 11530, UnUsed 0,\n> MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 3964040/0; EndEmpty/Avail. Pages 490/0. Elapsed 0/1\n> sec.\n> NOTICE: Rel d_term_approach: Pages: 490 --> 0.\n>\n> I know the 11530 is the correct number of tuples that should have been added to the table. Any\n> ideas on what may be happening or how I can debug this problem. I am using version 6.4 on Solaris\n> 2.5.1.\n\n The notice about 'Crash 11530' tells that the copy didn't\n execute as fine as you thought. Don't know what causes the\n crash though as long as I can't see some part of what you're\n trying to copy in.\n\n But as a matter of fact, the backend copying in must do\n something very bad at the end of it's COPY. Thus the\n transaction doesn't commit correctly and vacuum just whipes\n out the data which came from a crashed transaction. Whithout\n a correct committed transaction, your whole approach is a big\n noop.\n\n Please try to reproduce the error with a smaller amount of\n (maybe anonymized data) and give a reproducable example of\n what you're in detail do (table schema, test-data and exact\n copy command).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 1 Feb 1999 21:23:26 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems doing a copy to a table"
}
] |
[
{
"msg_contents": "> On Mon, 1 Feb 1999, Bruce Momjian wrote:\n> \n> \tActually, it's still fairly slow. I've got two things that my\n> view is pulling back now, and it's taking quite a while. I'm going to\n> load up a bunch of data and see if it gets any slower. Is there an actual\n> problem happening here?\n\nWith the GEQO problems people were having, I have modified the default\nGEQO start table count from 8 to 6.\n\nPeople are having trouble at values of > 6 for a while, but\nsomeone(Vadim?) objected to setting it to six in the past. With two\npeople having problems today, I wanted to lower it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 15:08:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] big bad join problems"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > On Mon, 1 Feb 1999, Bruce Momjian wrote:\n> >\n> > Actually, it's still fairly slow. I've got two things that my\n> > view is pulling back now, and it's taking quite a while. I'm going to\n> > load up a bunch of data and see if it gets any slower. Is there an actual\n> > problem happening here?\n> \n> With the GEQO problems people were having, I have modified the default\n> GEQO start table count from 8 to 6.\n> \n> People are having trouble at values of > 6 for a while, but\n> someone(Vadim?) objected to setting it to six in the past. With two\n> people having problems today, I wanted to lower it.\n\nYes, it was me. I don't object against 6, but just remember that\nthere were other people having troubles with GEQO and this is\nwhy table count was increased from 6 to 8.\n\nVadim\n",
"msg_date": "Tue, 02 Feb 1999 09:58:03 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] big bad join problems"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > On Mon, 1 Feb 1999, Bruce Momjian wrote:\n> > >\n> > > Actually, it's still fairly slow. I've got two things that my\n> > > view is pulling back now, and it's taking quite a while. I'm going to\n> > > load up a bunch of data and see if it gets any slower. Is there an actual\n> > > problem happening here?\n> > \n> > With the GEQO problems people were having, I have modified the default\n> > GEQO start table count from 8 to 6.\n> > \n> > People are having trouble at values of > 6 for a while, but\n> > someone(Vadim?) objected to setting it to six in the past. With two\n> > people having problems today, I wanted to lower it.\n> \n> Yes, it was me. I don't object against 6, but just remember that\n> there were other people having troubles with GEQO and this is\n> why table count was increased from 6 to 8.\n\nDo you remember what problems?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 22:02:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] big bad join problems"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > On Mon, 1 Feb 1999, Bruce Momjian wrote:\n> > > >\n> > > > Actually, it's still fairly slow. I've got two things that my\n> > > > view is pulling back now, and it's taking quite a while. I'm going to\n> > > > load up a bunch of data and see if it gets any slower. Is there an actual\n> > > > problem happening here?\n> > >\n> > > With the GEQO problems people were having, I have modified the default\n> > > GEQO start table count from 8 to 6.\n> > >\n> > > People are having trouble at values of > 6 for a while, but\n> > > someone(Vadim?) objected to setting it to six in the past. With two\n> > > people having problems today, I wanted to lower it.\n> >\n> > Yes, it was me. I don't object against 6, but just remember that\n> > there were other people having troubles with GEQO and this is\n> > why table count was increased from 6 to 8.\n> \n> Do you remember what problems?\n\nNo. Either the same as now (long planning) or bad plans\n(long execution).\n\nVadim\n",
"msg_date": "Tue, 02 Feb 1999 10:21:17 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] big bad join problems"
},
{
"msg_contents": "On Tue, 2 Feb 1999, Vadim Mikheev wrote:\n\n# > Do you remember what problems?\n# \n# No. Either the same as now (long planning) or bad plans (long\n# execution). \n\n\tI don't have enough data to know whether it's planning well or\nnot, but I can't do two queries at the same time for lack of RAM with\nGEQO, and the web browser times out without it. :) I'm torn. If only I\ncould store a query plan. (subliminal message)\n\n--\nSA, beyond.com My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE \nL_______________________ I hope the answer won't upset her. ____________\n\n",
"msg_date": "Mon, 1 Feb 1999 19:35:38 -0800 (PST)",
"msg_from": "Dustin Sallings <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] big bad join problems"
},
{
"msg_contents": "> > > Yes, it was me. I don't object against 6, but just remember that\n> > > there were other people having troubles with GEQO and this is\n> > > why table count was increased from 6 to 8.\n> > \n> > Do you remember what problems?\n> \n> No. Either the same as now (long planning) or bad plans\n> (long execution).\n\nMy rememberance was that GEQO was slower for some 6-table joins, so it\nwas recommended to keep it at 8. Tom clearly is on the proper track in\nchecking the number of indexes when using GEQO. That should allow us to\nset a proper value that will use GEQO in most/all cases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 22:38:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] big bad join problems"
}
] |
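The thread above lowers the default GEQO table-count threshold from 8 to 6 without spelling out why two extra tables matter so much: the exhaustive planner's work grows roughly with the number of possible join orders, which is factorial in the number of tables. Below is a minimal stand-alone C sketch of that growth rate; it is an illustration only, not planner code, and the loop bound is arbitrary.

    #include <stdio.h>

    /* Print how many join orders an exhaustive search faces as tables are added.
     * The count grows factorially, which is why a threshold that is comfortable
     * at 6 tables can be painful at 8. Illustration only, not PostgreSQL code. */
    int main(void)
    {
        unsigned long orders = 1;

        for (int tables = 1; tables <= 10; tables++)
        {
            orders *= (unsigned long) tables;   /* tables! possible join orders */
            printf("%2d tables: %9lu join orders\n", tables, orders);
        }
        return 0;
    }

At 6 tables there are 720 orders to consider; at 8 there are already 40320, which matches the 6! versus 8! comparison made in the optimizer-speed thread further down.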
[
{
"msg_contents": "OK I found it,\nI search in the planner for the '\\xFF' appending.\nFinally I found in MakeIndexable() in gram.y\n\nAttach a patch which removes the \"<=\" test in USE_LOCALE,\nmight make some queries a bit slower for us \"locale-heads\",\nBUT correct result is more important.\n\n\tregards,\n-- \n-----------------\nG�ran Thyni\nThis is Penguin Country. On a quiet night you can hear Windows NT\nreboot!",
"msg_date": "Mon, 01 Feb 1999 21:29:55 +0100",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": true,
"msg_subject": "New patch (was: tough bug)"
},
{
"msg_contents": "Hi!\n\nOn Mon, 1 Feb 1999, Goran Thyni wrote:\n> Attach a patch which removes the \"<=\" test in USE_LOCALE,\n> might make some queries a bit slower for us \"locale-heads\",\n> BUT correct result is more important.\n\n I've applied your patch and tested on Debian 2.0, Postgres 6.4.2.\nApplied cleanly, compiled. Test for koi8 locale passed well.\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Tue, 2 Feb 1999 19:19:13 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New patch (was: tough bug)"
},
{
"msg_contents": "Applied.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> OK I found it,\n> I search in the planner for the '\\xFF' appending.\n> Finally I found in MakeIndexable() in gram.y\n> \n> Attach a patch which removes the \"<=\" test in USE_LOCALE,\n> might make some queries a bit slower for us \"locale-heads\",\n> BUT correct result is more important.\n> \n> \tregards,\n> -- \n> -----------------\n> G_ran Thyni\n> This is Penguin Country. On a quiet night you can hear Windows NT\n> reboot!\n\n> diff -c pgsql/src/backend/parser/gram.y.orig pgsql/src/backend/parser/gram.y\n> *** pgsql/src/backend/parser/gram.y.orig\tTue Jan 26 07:02:32 1999\n> --- pgsql/src/backend/parser/gram.y\tMon Feb 1 21:16:56 1999\n> ***************\n> *** 5249,5259 ****\n> --- 5249,5265 ----\n> \t\t\t\tleast->val.val.str = match_least;\n> \t\t\t\tmost->val.type = T_String;\n> \t\t\t\tmost->val.val.str = match_most;\n> + #ifdef USE_LOCALE\n> + \t\t\t\tresult = makeA_Expr(AND, NULL,\n> + \t\t\t\t\t\tmakeA_Expr(OP, \"~\", lexpr, rexpr),\n> + \t\t\t\t\t\tmakeA_Expr(OP, \">=\", lexpr, (Node *)least));\n> + #else\n> \t\t\t\tresult = makeA_Expr(AND, NULL,\n> \t\t\t\t\t\tmakeA_Expr(OP, \"~\", lexpr, rexpr),\n> \t\t\t\t\t\tmakeA_Expr(AND, NULL,\n> \t\t\t\t\t\t\tmakeA_Expr(OP, \">=\", lexpr, (Node *)least),\n> \t\t\t\t\t\t\tmakeA_Expr(OP, \"<=\", lexpr, (Node *)most)));\n> + #endif\n> \t\t\t}\n> \t\t}\n> \t}\n> ***************\n> *** 5296,5306 ****\n> --- 5302,5318 ----\n> \t\t\t\tleast->val.val.str = match_least;\n> \t\t\t\tmost->val.type = T_String;\n> \t\t\t\tmost->val.val.str = match_most;\n> + #ifdef USE_LOCALE\n> + \t\t\t\tresult = makeA_Expr(AND, NULL,\n> + \t\t\t\t\t\tmakeA_Expr(OP, \"~~\", lexpr, rexpr),\n> + \t\t\t\t\t\tmakeA_Expr(OP, \">=\", lexpr, (Node *)least));\n> + #else\n> \t\t\t\tresult = makeA_Expr(AND, NULL,\n> \t\t\t\t\t\tmakeA_Expr(OP, \"~~\", lexpr, rexpr),\n> \t\t\t\t\t\tmakeA_Expr(AND, NULL,\n> \t\t\t\t\t\t\tmakeA_Expr(OP, \">=\", lexpr, (Node *)least),\n> \t\t\t\t\t\t\tmakeA_Expr(OP, \"<=\", lexpr, (Node *)most)));\n> + #endif\n> \t\t\t}\n> \t\t}\n> \t}\n> \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Feb 1999 14:20:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New patch (was: tough bug)"
}
] |
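A note on why the "<=" bound removed by the patch above misbehaves under locales: rewriting LIKE 'prefix%' into range tests treats 'prefix' plus a 0xFF byte as an upper bound, which only holds for plain byte-order (strcmp) comparison; under a locale, strcoll() ordering can disagree, so the generated "<=" qualifier may silently exclude matching rows. Here is a small stand-alone C sketch of that mismatch. The strings, the locale, and the resulting sign of strcoll() are assumptions for illustration; actual behavior depends on the collation installed on the system.

    #include <stdio.h>
    #include <string.h>
    #include <locale.h>

    int main(void)
    {
        /* Use whatever collation the environment provides, e.g. LC_COLLATE=sv_SE. */
        setlocale(LC_COLLATE, "");

        const char upper_bound[] = "f\xFF";  /* "f" plus 0xFF, as the old rewrite built it */
        const char candidate[]   = "f\xF6r"; /* "f" + latin-1 o-umlaut + "r"; matches LIKE 'f%' */

        printf("byte order   (strcmp) : %d\n", strcmp(candidate, upper_bound));
        printf("locale order (strcoll): %d\n", strcoll(candidate, upper_bound));

        /* If strcoll() says the candidate sorts above the upper bound while strcmp()
         * says it sorts below, an index range scan bounded by that "<=" test would
         * miss rows the LIKE pattern should match. That is the failure mode the
         * patch avoids by dropping the upper bound when USE_LOCALE is defined. */
        return 0;
    }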
[
{
"msg_contents": "Hi,\n\n I've just committed two very simple patches to copy.c and\n trigger.c which caused backend to grow until transaction end.\n\n trigger.c didn't expected that trigger function could have\n returned another heap tuple that was built inside of trigger\n with SPI_copytuple().\n\n In copy.c I'n not absolutely sure why it was as it was. In\n CopyFrom() the array for the values was palloc()'d once\n before entering the copy loop, and then again at the top of\n the loop. But there was only one pfree() after loop exited.\n\n I've removed the palloc() inside the loop. Seems to work for\n the regression test. Telling here only for the case someone\n encounters problems on COPY FROM.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 1 Feb 1999 21:35:29 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Small patches in copy.c and trigger.c"
},
{
"msg_contents": "> Hi,\n> \n> I've just committed two very simple patches to copy.c and\n> trigger.c which caused backend to grow until transaction end.\n> \n> trigger.c didn't expected that trigger function could have\n> returned another heap tuple that was built inside of trigger\n> with SPI_copytuple().\n> \n> In copy.c I'n not absolutely sure why it was as it was. In\n> CopyFrom() the array for the values was palloc()'d once\n> before entering the copy loop, and then again at the top of\n> the loop. But there was only one pfree() after loop exited.\n> \n> I've removed the palloc() inside the loop. Seems to work for\n> the regression test. Telling here only for the case someone\n> encounters problems on COPY FROM.\n\nI have a morbid curiosity, so I decided to find out how this got into\nthe source. On November 1, 1998, Magus applied a patch:\n\n Here is a first patch to cleanup the backend side of libpq. This patch\n removes all external dependencies on the \"Pfin\" and \"Pfout\" that are\n declared in pqcomm.h. These variables are also changed to \"static\" to\n make sure. Almost all the change is in the handler of the \"copy\" command\n - most other areas of the backend already used the correct functions.\n This change will make the way for cleanup of the internal stuff there -\n now that all the functions accessing the file descriptors are confined\n to a single directory.\n\nSeveral users have complained about 6.4.* COPY slowing down when loading\nrows. This may be the cause. Good job finding it.\n\nIn fact, Magnus added two palloc's, when 'values' already had a\npalloc(). Magnus, we all make goofy mistakes, so don't sweat it. \nHowever, if you had some deeper reason for adding the palloc's, please\nlet us know. Jan, I will make sure there is only one palloc for 'values'\nin CopyFrom().\n\nI am about to commit TEMP tables anyway.\n\n---------------------------------------------------------------------------\n\n\n values = (Datum *) palloc(sizeof(Datum) * attr_count);\n nulls = (char *) palloc(attr_count);\n index_nulls = (char *) palloc(attr_count);\n idatum = (Datum *) palloc(sizeof(Datum) * attr_count);\n byval = (bool *) palloc(attr_count * sizeof(bool)); \n \n for (i = 0; i < attr_count; i++)\n {\n nulls[i] = ' ';\n index_nulls[i] = ' ';\n byval[i] = (bool) IsTypeByVal(attr[i]->atttypid);\n }\n values = (Datum *) palloc(sizeof(Datum) * attr_count);\n \n lineno = 0;\n while (!done)\n {\n values = (Datum *) palloc(sizeof(Datum) * attr_count);\n if (!binary)\n\n\n---------------------------------------------------------------------------\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 18:23:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have a morbid curiosity, so I decided to find out how this got into\n> the source. On November 1, 1998, Magus applied a patch:\n> Here is a first patch to cleanup the backend side of libpq.\n> Several users have complained about 6.4.* COPY slowing down when loading\n> rows. This may be the cause. Good job finding it.\n\nI thought Magnus' changes were only in the current CVS branch, not\nin REL6_4 ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Feb 1999 19:46:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have a morbid curiosity, so I decided to find out how this got into\n> > the source. On November 1, 1998, Magus applied a patch:\n> > Here is a first patch to cleanup the backend side of libpq.\n> > Several users have complained about 6.4.* COPY slowing down when loading\n> > rows. This may be the cause. Good job finding it.\n> \n> I thought Magnus' changes were only in the current CVS branch, not\n> in REL6_4 ?\n\nYou are absolutely right. Sorry Magnus. The COPY complaint I heard\nobviously was not from this.\n\n\nBack to our regularly scheduled program.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 21:58:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": "Hello!\n\nOn Mon, 1 Feb 1999, Jan Wieck wrote:\n> I've just committed two very simple patches to copy.c and\n> trigger.c which caused backend to grow until transaction end.\n\n Any chance I will see the patch? Or even better, postgres 6.4.3? Recently I\ngot a problem loading db dump...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Tue, 2 Feb 1999 12:45:34 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": ">\n> Hello!\n>\n> On Mon, 1 Feb 1999, Jan Wieck wrote:\n> > I've just committed two very simple patches to copy.c and\n> > trigger.c which caused backend to grow until transaction end.\n>\n> Any chance I will see the patch? Or even better, postgres 6.4.3? Recently I\n> got a problem loading db dump...\n\n Must be something different. I've checked v6.4 and v6.4.2,\n and the duplicate palloc() for values isn't there. Since\n triggers are created after loading the data in any dump, that\n one couldn't be it either.\n\n I'll do some memory tests with copy in the released versions\n and if I find something, put a patch onto the ftp server.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Feb 1999 10:54:04 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": ">\n> > Bruce Momjian <[email protected]> writes:\n> > > I have a morbid curiosity, so I decided to find out how this got into\n> > > the source. On November 1, 1998, Magus applied a patch:\n> > > Here is a first patch to cleanup the backend side of libpq.\n> > > Several users have complained about 6.4.* COPY slowing down when loading\n> > > rows. This may be the cause. Good job finding it.\n> >\n> > I thought Magnus' changes were only in the current CVS branch, not\n> > in REL6_4 ?\n>\n> You are absolutely right. Sorry Magnus. The COPY complaint I heard\n> obviously was not from this.\n>\n\n If we don't discover any errors on COPY FROM after some days,\n I think I should fix it in REL6_4 too.\n\n Do we plan to release v6.4.3 sometimes? If so, are there any\n neat things I could add to the v6.4.3 feature patch?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Feb 1999 14:19:25 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": "> > > I thought Magnus' changes were only in the current CVS branch, not\n> > > in REL6_4 ?\n> >\n> > You are absolutely right. Sorry Magnus. The COPY complaint I heard\n> > obviously was not from this.\n> >\n> \n> If we don't discover any errors on COPY FROM after some days,\n> I think I should fix it in REL6_4 too.\n> \n> Do we plan to release v6.4.3 sometimes? If so, are there any\n> neat things I could add to the v6.4.3 feature patch?\n\nYou mean your fixes, right. Magnus's stuff was only in current, and is\nfixed now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Feb 1999 11:35:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
},
{
"msg_contents": ">\n> > > > I thought Magnus' changes were only in the current CVS branch, not\n> > > > in REL6_4 ?\n> > >\n> > > You are absolutely right. Sorry Magnus. The COPY complaint I heard\n> > > obviously was not from this.\n> > >\n> >\n> > If we don't discover any errors on COPY FROM after some days,\n> > I think I should fix it in REL6_4 too.\n> >\n> > Do we plan to release v6.4.3 sometimes? If so, are there any\n> > neat things I could add to the v6.4.3 feature patch?\n>\n> You mean your fixes, right. Magnus's stuff was only in current, and is\n> fixed now.\n\n That one in COPY.\n\n But the other one in ExecBRDeleteTriggers() is still in\n place. If a trigger returns something it created with\n SPI_copytuple() (what's done for PL/pgSQL triggers allways if\n returning NEW or OLD), that heap_tuple isn't pfree()'d until\n transaction end.\n\n Since it's safe to throw it away I'll go ahead and put that\n into REL6_4.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Feb 1999 17:49:53 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Small patches in copy.c and trigger.c"
}
] |
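For readers skimming the leak discussion above, this is a minimal stand-alone sketch of the CopyFrom() allocation pattern in question, with plain malloc() standing in for palloc() and invented row and column counts. It is not the actual backend code, just the shape of the bug and of the fix.

    #include <stdlib.h>

    #define ATTR_COUNT 13      /* slots per row; value invented for the sketch */
    #define NROWS      10000   /* arbitrary number of rows to "load"           */

    int main(void)
    {
        /* Fix: allocate the per-row scratch array once, before the loop. */
        double *values = malloc(sizeof(double) * ATTR_COUNT);
        if (values == NULL)
            return 1;

        for (long row = 0; row < NROWS; row++)
        {
            /* Bug pattern: the old loop re-allocated here every iteration,
             *     values = malloc(sizeof(double) * ATTR_COUNT);
             * With only one free() after the loop, that leaks one array per row,
             * so the process grows until the transaction (here, the program) ends. */
            for (int i = 0; i < ATTR_COUNT; i++)
                values[i] = (double) row;   /* fill in this row's values */
        }

        free(values);   /* matches the single allocation above */
        return 0;
    }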
[
{
"msg_contents": "So who's maintaining the website?\nDrop me a line and let's have a chat about some improvements if you're\ninterested.\n\t-DEJ \n\n> -----Original Message-----\n> > I don't think multiple queues per se are a problem; the \n> deficiency I see\n> > in our patching procedures is lack of visibility of the status of a\n> > proposed patch. If it's not been applied, is it just because no one\n> > has gotten to it yet, or was there an objection from \n> someone? What's\n> > worse is that one of the people with commit access might \n> miss or forget\n> > about such an objection, and commit a bogus patch anyway \n> sometime later.\n> > We have enough committers now that I think there's a \n> definite risk here.\n> > \n> > If we wanted to be really organized about this, it'd be cool to have\n> > a central database with an item for each proposed patch and links to\n> > followup discussions. But I'm not sure it's worth the work it would\n> > take to set it up and then maintain the entries. Unless we \n> get badly\n> > bitten by a mistake that such a database would've \n> prevented, it probably\n> > won't happen ...\n> \n> I keep them in my mailbox, delete them if there is objection \n> or someone\n> else applies it. Eventually, I apply it.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n",
"msg_date": "Mon, 1 Feb 1999 17:30:42 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Patches"
},
{
"msg_contents": "> So who's maintaining the website?\n> Drop me a line and let's have a chat about some improvements if you're\n> interested.\n> \t-DEJ \n\n\nWe discuss the web site on the docs list, but if it's related to\npatches, we can discuss in hackers. I am not the webmaster. You can\nreach him at [email protected].\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 18:33:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patches"
}
] |
[
{
"msg_contents": "I have been looking at optimizer runtime using Charles Hornberger's\nexample case, and what I find is that the number of tables involved in\nthe query is a very inadequate predictor of the optimizer's runtime.\nThus, it's not surprising that we are getting poor results from using\nnumber of tables as the GEQO threshold measure.\n\nI started with Charles' test case as presented, then simplified it by\nremoving indexes from the database. This didn't change the final\nplan, which was a straight nested loop over sequential scans anyway\n(because the tables were so small). But it drastically reduces the\nnumber of cases that the optimizer looks at.\n\nOriginal test case: query involves seven tables with a total of twelve\nindexes (six on the primary table and one on each other table).\n\n7-index test case: I removed all but one of the indexes on the primary\ntable, leaving one index per table.\n\nNo-index test case: I removed all indexes.\n\nIt took a certain amount of patience to run these cases with profiling\nturned on :-(, but here are the results:\n\n\t\t\t7T+12I\t\t7T+7I\t\t7T\n\nRuntime, sec\t\t1800\t\t457\t\t59\nequal() calls\t\t585 mil\t\t157 mil\t\t21.7 mil\nbetter_path() calls\t72546\t\t37418\t\t14025\nbp->path_is_cheaper\t668\t\t668\t\t668\ncreate_hashjoin_path\t198\t\t198\t\t198\ncreate_mergejoin_path\t2358\t\t723\t\t198\ncreate_nestloop_path\t38227\t\t20297\t\t7765\n\nNext, I removed the last table from Charles' query, producing these\ncases:\n\n\t\t\t6T+11I\t\t6T+6I\t\t6T\n\nRuntime, sec\t\t34\t\t12\t\t2.3\nequal() calls\t\t14.3 mil\t4.7 mil\t\t0.65 mil\nbetter_path() calls\t10721\t\t6172\t\t2443\nbp->path_is_cheaper\t225\t\t225\t\t225\ncreate_hashjoin_path\t85\t\t85\t\t85\ncreate_mergejoin_path\t500\t\t236\t\t85\ncreate_nestloop_path\t5684\t\t3344\t\t1354\n\nThe 5T+10I case is down to a couple of seconds of runtime, so I\ndidn't bother to do profiles for 5 tables.\n\nA fairly decent approximation is that the runtime varies as the\nsquare of the number of create_nestloop_path calls. That number\nin turn seems to vary as the factorial of the number of tables,\nwith a weaker but still strong term involving the number of indexes.\nI understand the square and the factorial terms, but the form of\nthe dependency on the index count isn't real clear.\n\nIt might be worthwhile to try a GEQO threshold based on the number of\ntables plus half the number of indexes on those tables. I have no idea\nwhere to find the number of indexes, so I'll just offer the idea for\nsomeone else to try.\n\n\nThe main thing that jumps out from the profiles is that on these complex\nsearches, the optimizer spends practically *all* of its time down inside\nbetter_path, scanning the list of already known access paths to see if a\nproposed new path is a duplicate of one already known. (Most of the time\nit is not, as shown by the small number of times path_is_cheaper gets\ncalled from better_path.) This scanning is O(N^2) in the number of paths\nconsidered, whereas it seems that all the other work is no worse than O(N)\nin the number of paths. 
The bottom of the duplicate check is equal(),\nwhich as you can see is getting called a horrendous number of times.\n\nIt would be worth our while to try to eliminate this mostly-unsuccessful\ncomparison operation.\n\nI wonder whether it would work to simply always add the new path to the\nlist, and rely on the later \"prune\" step to get rid of duplicates?\nThe prune code is evidently far more efficient at getting rid of useless\nentries than better_path is, because it's nowhere near the top of the\nprofile even though it's processing nearly as many paths.\n\nAnybody here know much about how the optimizer works? I'm hesitant to\nhack on it myself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Feb 1999 19:17:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> It took a certain amount of patience to run these cases with profiling\n> turned on :-(, but here are the results:\n\nInteresting numbers.\n\n> \n> \t\t\t7T+12I\t\t7T+7I\t\t7T\n> \n> Runtime, sec\t\t1800\t\t457\t\t59\n> equal() calls\t\t585 mil\t\t157 mil\t\t21.7 mil\n\n> A fairly decent approximation is that the runtime varies as the\n> square of the number of create_nestloop_path calls. That number\n> in turn seems to vary as the factorial of the number of tables,\n> with a weaker but still strong term involving the number of indexes.\n> I understand the square and the factorial terms, but the form of\n> the dependency on the index count isn't real clear.\n\nThis is exactly what I would expect. See optimizer/README. The\nfactorial makes a lot of sense. It evaluates all joins join, removes\nthe cheapest from the list, and goes to find the next best one.\n\n\t8 compares\n\t7 compares\n\t6 compares\n\t...\n\nHere is a good illustration of what you are seeing:\n\n\t> 6!\n\t 720\n\t> 8!\n\t 40320\n\nThat's why I reduced GEQO from 8 to 6.\n\n> \n> It might be worthwhile to try a GEQO threshold based on the number of\n> tables plus half the number of indexes on those tables. I have no idea\n> where to find the number of indexes, so I'll just offer the idea for\n> someone else to try.\n\nSounds like a good idea. I can easily get that information. The\noptimizer does that lower in the code. Perhaps we can just move the\nGEQO test to after the index stuff is created. I will be able to look\nat it after the TEMP stuff.\n\n> The main thing that jumps out from the profiles is that on these complex\n> searches, the optimizer spends practically *all* of its time down inside\n> better_path, scanning the list of already known access paths to see if a\n> proposed new path is a duplicate of one already known. (Most of the time\n> it is not, as shown by the small number of times path_is_cheaper gets\n> called from better_path.) This scanning is O(N^2) in the number of paths\n> considered, whereas it seems that all the other work is no worse than O(N)\n> in the number of paths. The bottom of the duplicate check is equal(),\n> which as you can see is getting called a horrendous number of times.\n> \n> It would be worth our while to try to eliminate this mostly-unsuccessful\n> comparison operation.\n> \n> I wonder whether it would work to simply always add the new path to the\n> list, and rely on the later \"prune\" step to get rid of duplicates?\n> The prune code is evidently far more efficient at getting rid of useless\n> entries than better_path is, because it's nowhere near the top of the\n> profile even though it's processing nearly as many paths.\n\nI think I optimized prune.c white a bit around 6.1. I broke it during\nthe optimization, and Vadim fixed it for me.\n\nOne of my TODO items has been to 100% understand the optimizer. Haven't\nhad time to do that yet. Been on my list for a while.\n\n\n> \n> Anybody here know much about how the optimizer works? I'm hesitant to\n> hack on it myself.\n\nLet me help you.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 21:55:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "OK, I have modified the optimizer to count not only the table, but also\nthe indexes. Patch is applied. The code is:\n\n\n {\n List *temp;\n int paths_to_consider = 0;\n\n foreach(temp, outer_rels)\n {\n RelOptInfo *rel = (RelOptInfo *) lfirst(temp);\n paths_to_consider += length(rel->pathlist);\n }\n\n if ((_use_geqo_) && paths_to_consider >= _use_geqo_rels_)\n /* returns _one_ RelOptInfo, so lcons it */\n return lcons(geqo(root), NIL); \n }\n\n\nIt is my understanding that it is the size of the pathlist that\ndetermines the complexity of the optimization. (Wow, I actually\nunderstood that sentence.)\n\nTom, can you tell me if this looks good, and also give a suggestion for\na possible default value for GEQO optimization. My guess is that 6 now\nis too low. Sounds like you have a good testbed to do this.\n\nOld posting attached. I will look at the functions you mentioned for\npossible improvement.\n\n\n> I have been looking at optimizer runtime using Charles Hornberger's\n> example case, and what I find is that the number of tables involved in\n> the query is a very inadequate predictor of the optimizer's runtime.\n> Thus, it's not surprising that we are getting poor results from using\n> number of tables as the GEQO threshold measure.\n> \n> I started with Charles' test case as presented, then simplified it by\n> removing indexes from the database. This didn't change the final\n> plan, which was a straight nested loop over sequential scans anyway\n> (because the tables were so small). But it drastically reduces the\n> number of cases that the optimizer looks at.\n> \n> Original test case: query involves seven tables with a total of twelve\n> indexes (six on the primary table and one on each other table).\n> \n> 7-index test case: I removed all but one of the indexes on the primary\n> table, leaving one index per table.\n> \n> No-index test case: I removed all indexes.\n> \n> It took a certain amount of patience to run these cases with profiling\n> turned on :-(, but here are the results:\n> \n> \t\t\t7T+12I\t\t7T+7I\t\t7T\n> \n> Runtime, sec\t\t1800\t\t457\t\t59\n> equal() calls\t\t585 mil\t\t157 mil\t\t21.7 mil\n> better_path() calls\t72546\t\t37418\t\t14025\n> bp->path_is_cheaper\t668\t\t668\t\t668\n> create_hashjoin_path\t198\t\t198\t\t198\n> create_mergejoin_path\t2358\t\t723\t\t198\n> create_nestloop_path\t38227\t\t20297\t\t7765\n> \n> Next, I removed the last table from Charles' query, producing these\n> cases:\n> \n> \t\t\t6T+11I\t\t6T+6I\t\t6T\n> \n> Runtime, sec\t\t34\t\t12\t\t2.3\n> equal() calls\t\t14.3 mil\t4.7 mil\t\t0.65 mil\n> better_path() calls\t10721\t\t6172\t\t2443\n> bp->path_is_cheaper\t225\t\t225\t\t225\n> create_hashjoin_path\t85\t\t85\t\t85\n> create_mergejoin_path\t500\t\t236\t\t85\n> create_nestloop_path\t5684\t\t3344\t\t1354\n> \n> The 5T+10I case is down to a couple of seconds of runtime, so I\n> didn't bother to do profiles for 5 tables.\n> \n> A fairly decent approximation is that the runtime varies as the\n> square of the number of create_nestloop_path calls. That number\n> in turn seems to vary as the factorial of the number of tables,\n> with a weaker but still strong term involving the number of indexes.\n> I understand the square and the factorial terms, but the form of\n> the dependency on the index count isn't real clear.\n> \n> It might be worthwhile to try a GEQO threshold based on the number of\n> tables plus half the number of indexes on those tables. 
I have no idea\n> where to find the number of indexes, so I'll just offer the idea for\n> someone else to try.\n> \n> \n> The main thing that jumps out from the profiles is that on these complex\n> searches, the optimizer spends practically *all* of its time down inside\n> better_path, scanning the list of already known access paths to see if a\n> proposed new path is a duplicate of one already known. (Most of the time\n> it is not, as shown by the small number of times path_is_cheaper gets\n> called from better_path.) This scanning is O(N^2) in the number of paths\n> considered, whereas it seems that all the other work is no worse than O(N)\n> in the number of paths. The bottom of the duplicate check is equal(),\n> which as you can see is getting called a horrendous number of times.\n> \n> It would be worth our while to try to eliminate this mostly-unsuccessful\n> comparison operation.\n> \n> I wonder whether it would work to simply always add the new path to the\n> list, and rely on the later \"prune\" step to get rid of duplicates?\n> The prune code is evidently far more efficient at getting rid of useless\n> entries than better_path is, because it's nowhere near the top of the\n> profile even though it's processing nearly as many paths.\n> \n> Anybody here know much about how the optimizer works? I'm hesitant to\n> hack on it myself.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Feb 1999 15:39:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> The main thing that jumps out from the profiles is that on these complex\n> searches, the optimizer spends practically *all* of its time down inside\n> better_path, scanning the list of already known access paths to see if a\n> proposed new path is a duplicate of one already known. (Most of the time\n> it is not, as shown by the small number of times path_is_cheaper gets\n> called from better_path.) This scanning is O(N^2) in the number of paths\n> considered, whereas it seems that all the other work is no worse than O(N)\n> in the number of paths. The bottom of the duplicate check is equal(),\n> which as you can see is getting called a horrendous number of times.\n> \n> It would be worth our while to try to eliminate this mostly-unsuccessful\n> comparison operation.\n> \n> I wonder whether it would work to simply always add the new path to the\n> list, and rely on the later \"prune\" step to get rid of duplicates?\n> The prune code is evidently far more efficient at getting rid of useless\n> entries than better_path is, because it's nowhere near the top of the\n> profile even though it's processing nearly as many paths.\n\n\nYou are definately on to something here. I have added comments and more\nto the optimizer README as I continue exploring options.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 16:58:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> The main thing that jumps out from the profiles is that on these complex\n> searches, the optimizer spends practically *all* of its time down inside\n> better_path, scanning the list of already known access paths to see if a\n> proposed new path is a duplicate of one already known. (Most of the time\n> it is not, as shown by the small number of times path_is_cheaper gets\n> called from better_path.) This scanning is O(N^2) in the number of paths\n> considered, whereas it seems that all the other work is no worse than O(N)\n> in the number of paths. The bottom of the duplicate check is equal(),\n> which as you can see is getting called a horrendous number of times.\n\nThe ironic thing about add_pathlist() is that the slowness is due to the\ncode using a nested loop to merge duplicate RelOptInfo paths, rather\nthan some sort of mergejoin.\n\nI will keep studying it, but from your comments, I can see you\nunderstood the code much better than I had. I am just understanding\nyour conclusions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 18:28:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> I have been looking at optimizer runtime using Charles Hornberger's\n> example case, and what I find is that the number of tables involved in\n> the query is a very inadequate predictor of the optimizer's runtime.\n> Thus, it's not surprising that we are getting poor results from using\n> number of tables as the GEQO threshold measure.\n\nStill digging into the optimizer, but if you want some real eye-opening\nstuff, set OPTIMIZER_DEBUG and look in the postmaster log. A six-table\njoin generates 55k lines of debug info, very nicely formatted. It shows\nwhat we are up against.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 17:23:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> I have been looking at optimizer runtime using Charles Hornberger's\n> example case, and what I find is that the number of tables involved in\n> the query is a very inadequate predictor of the optimizer's runtime.\n> Thus, it's not surprising that we are getting poor results from using\n> number of tables as the GEQO threshold measure.\n\nOK, now I have a specific optimizer question. I looked at all\nreferences to RelOptInfo.pathlist, and though it gets very long and hard\nto check for uniqueness, the only thing I see it is used for it to find\nthe cheapest path.\n\nWhy are we maintaining this huge Path List for every RelOptInfo\nstructure if we only need the cheapest? Why not store only the cheapest\nplan, instead of generating all unique plans, then using only the\ncheapest?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Feb 1999 07:49:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": "> Why are we maintaining this huge Path List for every RelOptInfo\n> structure if we only need the cheapest? Why not store only the \n> cheapest plan, instead of generating all unique plans, then using only \n> the cheapest?\n\nJust guessing here: does it use this same list to determine if a new\nplan is a duplicate of a previously generated plan? Of course, maybe\nthat is not important, since any *cheaper* plan should be different from\nany existing plan, and any plan of the same cost or higher could be\nrejected.\n\nPerhaps having the entire list of plans available makes it easier to\ndebug, especially when the stuff was in lisp (since in that environment\nit is easy to traverse and manipulate these lists interactively)...\n\n - Tom\n",
"msg_date": "Sat, 06 Feb 1999 14:21:40 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
},
{
"msg_contents": ">> Why are we maintaining this huge Path List for every RelOptInfo\n>> structure if we only need the cheapest?\n\nI think Bruce is onto something here... keep only the best-so-far\nnot everything ever generated. That'd get rid of the O(N^2)\ncomparison behavior.\n\nThe only thing I can think of that we might possibly *need* the\nwhole list for is if it is used as a recursion stopper.\n(\"Oh, I already investigated that alternative, no need to go down\nthat path again.\") It did not look to me like the optimizer does\nsuch a thing, but I don't claim to understand the code.\n\nIt seems to me that the search space of possible paths is\nwell-structured and can be enumerated without generation of duplicates.\nYou've got a known set of tables involved in a query, a fixed set of\npossible access methods for each table, and only so many ways to\ncombine them. So the real question here is why does the optimizer\neven need to check for duplicates --- should it not never generate\nany to begin with? The profiles I ran a few days ago show that indeed\nmost of the generated paths are not duplicates, but a small fraction\nare duplicates. Why is that? I'd feel a lot closer to understanding\nwhat's going on if we could explain where the duplicates come from.\n\n\"Thomas G. Lockhart\" <[email protected]> writes:\n> Perhaps having the entire list of plans available makes it easier to\n> debug, especially when the stuff was in lisp (since in that environment\n> it is easy to traverse and manipulate these lists interactively)...\n\nThe optimizer's Lisp heritage is pretty obvious, and in that culture\nbuilding a list of everything you're interested in is just The Way To\nDo Things. I doubt anyone realized that keeping the whole list would\nturn out to be a performance bottleneck.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Feb 1999 12:06:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins) "
},
{
"msg_contents": "> >> Why are we maintaining this huge Path List for every RelOptInfo\n> >> structure if we only need the cheapest?\n> \n> I think Bruce is onto something here... keep only the best-so-far\n> not everything ever generated. That'd get rid of the O(N^2)\n> comparison behavior.\n> \n> The only thing I can think of that we might possibly *need* the\n> whole list for is if it is used as a recursion stopper.\n> (\"Oh, I already investigated that alternative, no need to go down\n> that path again.\") It did not look to me like the optimizer does\n> such a thing, but I don't claim to understand the code.\n> \n> It seems to me that the search space of possible paths is\n> well-structured and can be enumerated without generation of duplicates.\n> You've got a known set of tables involved in a query, a fixed set of\n> possible access methods for each table, and only so many ways to\n> combine them. So the real question here is why does the optimizer\n> even need to check for duplicates --- should it not never generate\n> any to begin with? The profiles I ran a few days ago show that indeed\n> most of the generated paths are not duplicates, but a small fraction\n> are duplicates. Why is that? I'd feel a lot closer to understanding\n> what's going on if we could explain where the duplicates come from.\n\nHere is my guess, and I think it is valid. There are cases where a join\nof two tables may be more expensive than another plan, but the order of\nthe result may be cheaper to use for a later join than the cheaper plan.\nThat is why I suspect they are doing in better_path():\n\n if (samekeys(path->keys, new_path->keys) &&\n equal_path_ordering(&path->p_ordering,\n &new_path->p_ordering))\n {\n old_path = path;\n break;\n }\n \nSo I don't think we can grab the cheapest path right away, but I think\nthe system is keeping around non-cheapest paths with the same result\nordering. I also would like to throw away plans that are so expensive\nthat sorting another plan to the desired ordering would be cheaper than\nusing the plan.\n\nI am only guessing on this. How often does this 'keep non-cheapest path\naround' really help?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Feb 1999 12:51:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer speed and GEQO (was: nested loops in joins)"
}
] |
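A toy sketch of the direction Bruce and Tom converge on above: instead of appending every generated path and scanning the whole pathlist for duplicates (the O(N^2) equal() work inside better_path), remember only the cheapest path seen so far for each distinct output ordering. The integer ordering tag below stands in for real path sort keys, and the whole thing is bookkeeping illustration under that assumption, not planner code.

    #include <stdio.h>

    #define MAX_ORDERINGS 8   /* arbitrary number of distinct sort orders for the demo */

    typedef struct
    {
        int    valid;   /* has any path with this ordering been seen yet? */
        double cost;    /* cheapest cost seen so far for this ordering    */
    } CheapestPath;

    static CheapestPath cheapest[MAX_ORDERINGS];

    /* Remember a candidate only if it beats the cheapest known path that
     * produces the same ordering; otherwise drop it immediately, with no
     * list scan and no equal() calls. */
    static void add_path(int ordering, double cost)
    {
        if (!cheapest[ordering].valid || cost < cheapest[ordering].cost)
        {
            cheapest[ordering].valid = 1;
            cheapest[ordering].cost = cost;
        }
    }

    int main(void)
    {
        add_path(0, 42.0);   /* unordered nested loop                           */
        add_path(1, 57.5);   /* merge join producing sort order 1               */
        add_path(1, 55.0);   /* cheaper plan with the same ordering replaces it */
        add_path(0, 99.0);   /* more expensive unordered plan is discarded      */

        for (int i = 0; i < MAX_ORDERINGS; i++)
            if (cheapest[i].valid)
                printf("ordering %d: cheapest cost %.1f\n", i, cheapest[i].cost);
        return 0;
    }

Keeping one slot per ordering also preserves the case Bruce raises at the end of the thread, where a more expensive path is worth remembering because its sort order may save a sort in a later join.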
[
{
"msg_contents": "I have applied TEMP tables to the tree, with documenation changes and a\nregression test.\n\nI am sure there are going to be some features that break with temp\ntables.\n\nFirst, there is no pg_type tuple to match the temp table name. There is\none to match the system-generated temp table name, pg_temp.pid##.seq##. \nTemp table do not show up in \\d, but you can see them in \\dS as\npg_temp*.\n\nNot sure if sequences/SERIAL/PRIMARY will work with temp tables. We\neither need them to work, or prevent them from being created. Testing\nwill be necessary.\n\nOne item I was not 100% happy with. In backend/catalog/heap.c and\nindex.c, there is code that say:\n\n---------------------------------------------------------------------------\n\n /* invalidate cache so non-temp table is masked by temp */\n if (istemp)\n {\n Oid relid = RelnameFindRelid(relname);\n\n if (relid != InvalidOid)\n {\n /*\n * This is heavy-handed, but appears necessary bjm 1999/02/01\n * SystemCacheRelationFlushed(relid) is not enough either.\n */\n RelationForgetRelation(relid);\n ResetSystemCache();\n } \n }\n \n---------------------------------------------------------------------------\n\nI found that in my regression test where I create a table, and index,\nthen a temp table and index, when I go to delete the temp index and\ntable, the non-temps are deleted unless I use ResetSystemCache().\n\nIf I SystemCacheRelationFlushed(relid), the non-temp table is never\ndeleted from the cache. It only seems to be a problem with indexes. \n\nDoes anyone know why? Currently, it works, but I am not happy with it. \nDoes anyone understand why the cache would not flush? Can someone debug\na call to SystemCacheRelationFlushed(relid) and see why the invalidation\nis not happening on the non-temp relid.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Feb 1999 22:57:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "TEMP tables applied"
}
] |
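A stand-alone sketch of the name shadowing the TEMP-table message above describes (and the regression test quoted earlier in this archive exercises): a per-backend mapping from the user-visible name to the generated pg_temp name is consulted before the regular catalog, so a temp table masks a permanent table of the same name until it is dropped. The structures and names below are invented for illustration; they are not the backend's actual lookup code.

    #include <stdio.h>
    #include <string.h>

    #define MAX_TEMP 32

    typedef struct
    {
        char username[64];   /* name the user typed, e.g. "temptest"   */
        char realname[64];   /* generated name, e.g. "pg_temp.12345.0" */
    } TempMapping;

    static TempMapping temp_map[MAX_TEMP];
    static int         temp_count = 0;

    static void register_temp(const char *username, const char *realname)
    {
        if (temp_count >= MAX_TEMP)
            return;   /* sketch only: silently ignore overflow */
        strncpy(temp_map[temp_count].username, username, sizeof(temp_map[0].username) - 1);
        strncpy(temp_map[temp_count].realname, realname, sizeof(temp_map[0].realname) - 1);
        temp_count++;
    }

    /* Resolve a relation name: per-backend temp mapping first, then fall back
     * to the name as given (i.e. the permanent table in the real catalog). */
    static const char *resolve_relname(const char *name)
    {
        for (int i = 0; i < temp_count; i++)
            if (strcmp(temp_map[i].username, name) == 0)
                return temp_map[i].realname;
        return name;
    }

    int main(void)
    {
        printf("before CREATE TEMP: %s\n", resolve_relname("temptest"));
        register_temp("temptest", "pg_temp.12345.0");
        printf("after CREATE TEMP:  %s\n", resolve_relname("temptest"));
        return 0;
    }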
[
{
"msg_contents": "Hi!\n\ncan somebody see this too?\n\ncreate table t1(i1 int4);\ncreate table t2(i1 int4);\ncreate table t3(i2 int4);\n\ntest=> create rule rm_t1 as on delete to t1 \ntest-> do ( delete from t2 where old.i1 = i1;\ntest-> delete from t3 where old.i1 = i2;);\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nOS = Linux 2.0.35, gcc 2.7.2.3, postgreSQL-6.4.2\n\n\nRegards\nErich\n\n",
"msg_date": "Tue, 2 Feb 1999 06:05:40 +0100 (MET)",
"msg_from": "Erich Stamberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "trouble with rules"
},
{
"msg_contents": ">\n> Hi!\n>\n> can somebody see this too?\n>\n> create table t1(i1 int4);\n> create table t2(i1 int4);\n> create table t3(i2 int4);\n>\n> test=> create rule rm_t1 as on delete to t1\n> test-> do ( delete from t2 where old.i1 = i1;\n> test-> delete from t3 where old.i1 = i2;);\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n>\n>\n> OS = Linux 2.0.35, gcc 2.7.2.3, postgreSQL-6.4.2\n\n That's courios. I can't reproduce it with v6.4 or v6.4.2\n (Linux 2.1.88, gcc 2.7.2.1). Did the checks with the release\n tarballs, not with the REL_6_4 tree (will check that later).\n\n But with the current development tree I get a parse error\n near delete!\n\n I recall that there was something done in the parser about\n parantheses around queries. Have to check it out and if fixed\n add multiple action rules with parantheses to the regression\n test to avoid breakage again in the future.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Feb 1999 11:03:47 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> >\n> > Hi!\n> >\n> > can somebody see this too?\n> >\n> > create table t1(i1 int4);\n> > create table t2(i1 int4);\n> > create table t3(i2 int4);\n> >\n> > test=> create rule rm_t1 as on delete to t1\n> > test-> do ( delete from t2 where old.i1 = i1;\n> > test-> delete from t3 where old.i1 = i2;);\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally before or\n> > while processing the request.\n> > We have lost the connection to the backend, so further processing is\n> > impossible. Terminating.\n> >\n> >\n> > OS = Linux 2.0.35, gcc 2.7.2.3, postgreSQL-6.4.2\n> \n> That's courios. I can't reproduce it with v6.4 or v6.4.2\n> (Linux 2.1.88, gcc 2.7.2.1). Did the checks with the release\n> tarballs, not with the REL_6_4 tree (will check that later).\n\nCASSERT is off?\n\nVadim\n",
"msg_date": "Tue, 02 Feb 1999 17:13:40 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > >\n> > > Hi!\n> > >\n> > > can somebody see this too?\n> > >\n> > > create table t1(i1 int4);\n> > > create table t2(i1 int4);\n> > > create table t3(i2 int4);\n> > >\n> > > test=> create rule rm_t1 as on delete to t1\n> > > test-> do ( delete from t2 where old.i1 = i1;\n> > > test-> delete from t3 where old.i1 = i2;);\n> > > pqReadData() -- backend closed the channel unexpectedly.\n> > > This probably means the backend terminated abnormally before or\n> > > while processing the request.\n> > > We have lost the connection to the backend, so further processing is\n> > > impossible. Terminating.\n> > >\n> > >\n> > > OS = Linux 2.0.35, gcc 2.7.2.3, postgreSQL-6.4.2\n> >\n> > That's courios. I can't reproduce it with v6.4 or v6.4.2\n> > (Linux 2.1.88, gcc 2.7.2.1). Did the checks with the release\n> > tarballs, not with the REL_6_4 tree (will check that later).\n>\n> CASSERT is off?\n\n Yepp - thanks.\n\n Fixed in REL6_4 and CURRENT.\n\n Placed a patch in\n\n ftp://hub.org/pub/patches/multi_action_rule.patch\n\n Now have to look who damaged the parser in CURRENT not any\n longer accepting parentheses for mutiple action rules.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Feb 1999 14:08:05 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> > > > can somebody see this too?\n> > > >\n> > > > create table t1(i1 int4);\n> > > > create table t2(i1 int4);\n> > > > create table t3(i2 int4);\n> > > >\n> > > > test=> create rule rm_t1 as on delete to t1\n> > > > test-> do ( delete from t2 where old.i1 = i1;\n> > > > test-> delete from t3 where old.i1 = i2;);\n> > > > pqReadData() -- backend closed the channel unexpectedly.\n>\n> Now have to look who damaged the parser in CURRENT not any\n> longer accepting parentheses for mutiple action rules.\n\n Has been commented out when INTERSECT came.\n\n Fixed in CURRENT. I hate to but I have to comment on this:\n\n Beeing able to put multiple actions for rules into\n parentheses has been added and RELEASED with v6.4. And\n this syntax is documented in the programmers manual of\n v6.4.\n\n It wasn't hard to reenable it. I just told gram.y that a\n SelectStmt cannot occur in a multiple rule action block.\n It looks to me, that it was taken out only to move\n INTERSECT in the easy way. But this time the easy way is\n IMHO the wrong way.\n\n Removing a documented, released feature is something that\n causes havy trouble for those who want to upgrade to a\n new version.\n\n Next time please keep existing syntax/features until\n there is an agreement of the developers team that it has\n to die.\n\n BTW: There is 1 shift/reduce conflict in gram.y (was there\n before I fixed multi action rules). Who introduced that?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 7 Feb 1999 20:41:10 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> It looks to me, that it was taken out only to move\n> INTERSECT in the easy way. But this time the easy way is\n> IMHO the wrong way.\n> Removing a documented, released feature is something that\n> causes havy trouble for those who want to upgrade to a\n> new version.\n> Next time please keep existing syntax/features until\n> there is an agreement of the developers team that it has\n> to die.\n\nCalm down Jan ;-). I think what happened here is a slightly careless\nmerge of the 6.3 - based INTERSECT/EXPECT code into the current code.\nNot a deliberate removal of a feature, just a foulup.\n\nThis does suggest that we need to be more careful when applying patches\ndeveloped against old system versions.\n\n> BTW: There is 1 shift/reduce conflict in gram.y (was there\n> before I fixed multi action rules). Who introduced that?\n\nYeah, I'm seeing that too. Same cause perhaps? It seems to have\nappeared in rev 2.43, when the INTERSECT/EXPECT code was checked in.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 17:32:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > It looks to me, that it was taken out only to move\n> > INTERSECT in the easy way. But this time the easy way is\n> > IMHO the wrong way.\n> > Removing a documented, released feature is something that\n> > causes havy trouble for those who want to upgrade to a\n> > new version.\n> > Next time please keep existing syntax/features until\n> > there is an agreement of the developers team that it has\n> > to die.\n>\n> Calm down Jan ;-). I think what happened here is a slightly careless\n> merge of the 6.3 - based INTERSECT/EXPECT code into the current code.\n> Not a deliberate removal of a feature, just a foulup.\n\n Was my fault too. I should have added this new syntax to the\n regression (as I did now). That way I would have noticed as\n early as can that something disappeared.\n\n>\n> This does suggest that we need to be more careful when applying patches\n> developed against old system versions.\n\n This does suggest that we need to pay more attention that all\n the nifty things we do are added to the regression suite.\n\n Saying this I've just checked and the examples I've written\n in the rule system section of the programmers manual cause\n the backend to dump core.\n\n Isn't if funny? All I'm telling could be used against me. :-)\n\n>\n> > BTW: There is 1 shift/reduce conflict in gram.y (was there\n> > before I fixed multi action rules). Who introduced that?\n>\n> Yeah, I'm seeing that too. Same cause perhaps? It seems to have\n> appeared in rev 2.43, when the INTERSECT/EXPECT code was checked in.\n\n Hmmm - wasn't there some switch to bison that tells where it\n shifts/reduces. I know most of the features of gdb, but bison\n is a bit hairy for me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 00:10:27 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "I wrote:\n>\n> Tom Lane wrote:\n>\n> > Calm down Jan ;-). I think what happened here is a slightly careless\n> > merge of the 6.3 - based INTERSECT/EXPECT code into the current code.\n> > Not a deliberate removal of a feature, just a foulup.\n>\n> Was my fault too. I should have added this new syntax to the\n> regression (as I did now). That way I would have noticed as\n> early as can that something disappeared.\n>\n> >\n> > This does suggest that we need to be more careful when applying patches\n> > developed against old system versions.\n>\n> This does suggest that we need to pay more attention that all\n> the nifty things we do are added to the regression suite.\n>\n> Saying this I've just checked and the examples I've written\n> in the rule system section of the programmers manual cause\n> the backend to dump core.\n>\n> Isn't if funny? All I'm telling could be used against me. :-)\n\n No, it isn't fun any more and I'm getting angry now >:-(\n\n I've checked it and it turns out, that due to the changes\n that came in with INTERSECT/EXPECT many expressions aren't\n any longer copied when they are added from one parsetree to\n another. Thus, multiple parsetrees reference the same Var\n nodes and if multiple rules get applied during the rule\n system recursion (rewritten trees get rewritten again), later\n rules mangle up the ones referenced in trees the rule system\n is already done with.\n\n I've spent night's to fix this all for v6.4. Added many\n copyObject()'s around things that MUST be copied. Now I find\n them commented out, because it was easier to apply v6.3 based\n development onto the v6.5 sources.\n\n I'll now revert things to do the copyObject() again where it\n has to be done and will add the examples from the programmers\n manual to the regression tests.\n\n Surely this will break the INTERSECT/EXPECT code, because it\n depends on nodes beeing at specific memory locations for\n comparisions. But this is impossible due to the requirements\n of the rule system.\n\n Sorry for that.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 01:14:58 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "On Sun, 07 Feb 1999, you wrote:\n> Hmmm - wasn't there some switch to bison that tells where it\n> shifts/reduces. I know most of the features of gdb, but bison\n> is a bit hairy for me.\n\nbison -v will spit out a *huge* data file describing the parser. Somewhere in\nthere it will tell you where the shift/reduce conflict is occurring.\n\nTaral\n",
"msg_date": "Sun, 7 Feb 1999 19:19:34 -0600",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": ">\n> On Sun, 07 Feb 1999, you wrote:\n> > Hmmm - wasn't there some switch to bison that tells where it\n> > shifts/reduces. I know most of the features of gdb, but bison\n> > is a bit hairy for me.\n>\n> bison -v will spit out a *huge* data file describing the parser. Somewhere in\n> there it will tell you where the shift/reduce conflict is occurring.\n>\n> Taral\n\n Thanks Taral, and bingo - Tom's guess about that it came with\n INTERSECT seems right.\n\n [...]\n State 269 contains 1 shift/reduce conflict.\n [...]\n state 269\n SelectStmt -> select_w_o_sort sort_clause . for_update...\n\n I'm currently committing the turn backs into rewriteManip.c\n and the additional rule system tests.\n\n INTERSECT IS BROKEN NOW!!! The one who's responsible for that\n may contact me to help fixing it by doing the comparisions\n that rely on memory addresses of Var nodes correctly\n according to the requirements of the rule system. I don't\n know enough about how INTERSECT/EXCEPT is expected to work.\n And the regression test, which is passing here now completely\n (only the 4 missing NOTICE in misc) seems not cover it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 03:01:21 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > It looks to me, that it was taken out only to move\n> > INTERSECT in the easy way. But this time the easy way is\n> > IMHO the wrong way.\n> > Removing a documented, released feature is something that\n> > causes havy trouble for those who want to upgrade to a\n> > new version.\n> > Next time please keep existing syntax/features until\n> > there is an agreement of the developers team that it has\n> > to die.\n> \n> Calm down Jan ;-). I think what happened here is a slightly careless\n> merge of the 6.3 - based INTERSECT/EXPECT code into the current code.\n> Not a deliberate removal of a feature, just a foulup.\n> \n> This does suggest that we need to be more careful when applying patches\n> developed against old system versions.\n\nThis is normally caused by Stephan's patches. His patches were\noriginally against 6.3, and he ported them to 6.4, but he normally does\nlots of development without any communication with us, sends us a huge\npatch, and we normally have to clean up the edges somewhat. This patch\nactually caused fewer problems than the HAVING patch he submitted.\n\nIn fact, I didn't even know he was working anymore, and then I recieve\nthis huge patch, with a huge thesis that Thomas is merging into the\ndocs. He is in the Army now, and probably unreachable.\n\nBasically, I think our hands are tied on this one. Not really sure we\ncould have done anything different. The patch is usually beefy enough\nthat is worth our effort to polish it. To his credit, he did more\nregression testing this time, so we have fewer problems. All his stuff\nhas /***S*I***/ next to it, which I will remove once we resolve issues\nwith his patch.\n\n> > BTW: There is 1 shift/reduce conflict in gram.y (was there\n> > before I fixed multi action rules). Who introduced that?\n> \n> Yeah, I'm seeing that too. Same cause perhaps? It seems to have\n> appeared in rev 2.43, when the INTERSECT/EXPECT code was checked in.\n\nSame cause.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 21:53:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "On Sun, Feb 07, 1999 at 08:41:10PM +0100, Jan Wieck wrote:\n> BTW: There is 1 shift/reduce conflict in gram.y (was there\n> before I fixed multi action rules). Who introduced that?\n\nDon't know but it was introduced with the EXCEPT and INTERSECT features.\nI haven't found the time yet to check where it comes from. But the same\nholds for ecpg since I synced both yacc files.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 8 Feb 1999 08:45:16 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> This is normally caused by Stephan's patches. His patches were\n> originally against 6.3, and he ported them to 6.4, but he normally does\n> lots of development without any communication with us, sends us a huge\n> patch, and we normally have to clean up the edges somewhat. This patch\n> actually caused fewer problems than the HAVING patch he submitted.\n\n Porting his patch to v6.4 was not exactly what he did. He\n changed the v6.5 tree in a way that his patch fit's into.\n\n Namely he removed copyObject() in some cases. copyObject() is\n an expensive function. There are only 2 reasons to call it.\n One is that the object in question lives in a memory context\n that could get destroyed before we are done with the object.\n The other is that we are about to change the object but\n others need it unchanged (in the actual case varno's got\n changed).\n\n And he reverted the possibility to group multiple rule\n actions in ()'s.\n\n One good thing it caused is, that I realize I was wrong!\n LIMIT seems to never have been applied to the tree - OOOPS. I\n don't know how this could have happened. Must do it before\n v6.5 BETA because it's FEATURE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 13:00:12 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > This is normally caused by Stephan's patches. His patches were\n> > originally against 6.3, and he ported them to 6.4, but he normally does\n> > lots of development without any communication with us, sends us a huge\n> > patch, and we normally have to clean up the edges somewhat. This patch\n> > actually caused fewer problems than the HAVING patch he submitted.\n\n> \n> Porting his patch to v6.4 was not exactly what he did. He\n> changed the v6.5 tree in a way that his patch fit's into.\n\nYes, I understand the frustration. I had that with HAVING. I\nsymathize. It also bothers me when things are added that are disruptive\nto other code, and I see he did that.\n\nShould we remove his patch? I don't know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 10:20:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Should we remove his patch? I don't know.\n\nThat seems like an overreaction. We need to look at it more carefully,\nbut it's a feature we want no? (I assume INTERSECT/EXCEPT are SQL92\nitems, not something Stefan invented out of whole cloth.) Seems like\nit mostly works and we just need to find the places where it conflicts\nwith other post-6.3 changes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 11:02:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules "
},
{
"msg_contents": ">\n> > Bruce Momjian wrote:\n> >\n> > > This is normally caused by Stephan's patches. His patches were\n> > > originally against 6.3, and he ported them to 6.4, but he normally does\n> > > lots of development without any communication with us, sends us a huge\n> > > patch, and we normally have to clean up the edges somewhat. This patch\n> > > actually caused fewer problems than the HAVING patch he submitted.\n>\n> >\n> > Porting his patch to v6.4 was not exactly what he did. He\n> > changed the v6.5 tree in a way that his patch fit's into.\n>\n> Yes, I understand the frustration. I had that with HAVING. I\n> symathize. It also bothers me when things are added that are disruptive\n> to other code, and I see he did that.\n>\n> Should we remove his patch? I don't know.\n\n I don't think it's possible to remove it easily. Too many\n things have been done after.\n\n I've only noticed while browsing through the code why he did\n comment out those things. He's comparing memoy addresses of\n nodes, what doesn't work any more after copyObject(). If he's\n not available right now, we must fix that part. I can help on\n that, but someone else must tell what queries should produce\n which expected output.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 17:31:09 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I've only noticed while browsing through the code why he did\n> comment out those things. He's comparing memoy addresses of\n> nodes, what doesn't work any more after copyObject(). If he's\n> not available right now, we must fix that part.\n\nIs there more to do than using equal() instead of a plain pointer\ncompare?\n\nThere might be --- for example the collapsing-UNION problem I mentioned\nyesterday is a case where using equal() allows an overly aggressive\noptimization. Where are these comparisons and what are they for?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 11:34:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules "
},
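A standalone sketch of the distinction being asked about here, not backend code: ToyVar, toyvar_copy() and toyvar_equal() below are invented stand-ins for the backend's Var, copyObject() and equal(). The point is only that once a node has been deep-copied, an address comparison stops matching while a structural comparison still does, and the copy can then be rewritten without touching the tree the original belongs to.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Toy stand-in for a parse-tree Var node (not the real backend struct). */
typedef struct ToyVar
{
    int varno;     /* range table entry this Var points at */
    int varattno;  /* attribute number within that entry   */
} ToyVar;

/* Deep copy, in the spirit of copyObject(): the result never shares
 * memory with the original. */
static ToyVar *
toyvar_copy(const ToyVar *src)
{
    ToyVar *dst = malloc(sizeof(ToyVar));

    *dst = *src;
    return dst;
}

/* Structural comparison, in the spirit of equal(): looks at contents,
 * not addresses. */
static bool
toyvar_equal(const ToyVar *a, const ToyVar *b)
{
    return a->varno == b->varno && a->varattno == b->varattno;
}

int
main(void)
{
    ToyVar  orig = {1, 2};
    ToyVar *copy = toyvar_copy(&orig);

    /* Pointer identity is gone as soon as a copy exists ... */
    printf("same address: %s\n", (&orig == copy) ? "yes" : "no");           /* no  */

    /* ... but a structural test still recognises the node ... */
    printf("equal()     : %s\n", toyvar_equal(&orig, copy) ? "yes" : "no"); /* yes */

    /* ... and the copy can be rewritten (say, varno renumbered while a
     * rule is applied) without clobbering the original tree. */
    copy->varno = 99;
    printf("orig.varno  : %d\n", orig.varno);                               /* still 1 */

    free(copy);
    return 0;
}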
{
"msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> > I've only noticed while browsing through the code why he did\n> > comment out those things. He's comparing memoy addresses of\n> > nodes, what doesn't work any more after copyObject(). If he's\n> > not available right now, we must fix that part.\n>\n> Is there more to do than using equal() instead of a plain pointer\n> compare?\n>\n> There might be --- for example the collapsing-UNION problem I mentioned\n> yesterday is a case where using equal() allows an overly aggressive\n> optimization. Where are these comparisons and what are they for?\n\n rewriteHandler.c 1691 and 2908... and rewriteManip.c 175, 403\n and 1068. Now that I've looked closer I see that it are\n assignments. All of them have to do with sublinks and\n lefttree-aggregate issues. Shouldn't be too hard to figure\n out what's right and it will give us some additional queries\n for the rule system checks.\n\n So can someone please tell me how INTERSECT/EXCEPT works?\n\n I'll deuglify the code while working on it then :-}. It's\n really hard to read (must have been written in a 120 char\n wide window or so).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 18:05:55 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Should we remove his patch? I don't know.\n> \n> That seems like an overreaction. We need to look at it more carefully,\n> but it's a feature we want no? (I assume INTERSECT/EXCEPT are SQL92\n> items, not something Stefan invented out of whole cloth.) Seems like\n> it mostly works and we just need to find the places where it conflicts\n> with other post-6.3 changes.\n\nI agree. I was just offering it as an option. Hate to make people\nclean up other people's mess.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 12:26:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> >\n> > [email protected] (Jan Wieck) writes:\n> > > I've only noticed while browsing through the code why he did\n> > > comment out those things. He's comparing memoy addresses of\n> > > nodes, what doesn't work any more after copyObject(). If he's\n> > > not available right now, we must fix that part.\n> >\n> > Is there more to do than using equal() instead of a plain pointer\n> > compare?\n> >\n> > There might be --- for example the collapsing-UNION problem I mentioned\n> > yesterday is a case where using equal() allows an overly aggressive\n> > optimization. Where are these comparisons and what are they for?\n> \n> rewriteHandler.c 1691 and 2908... and rewriteManip.c 175, 403\n> and 1068. Now that I've looked closer I see that it are\n> assignments. All of them have to do with sublinks and\n> lefttree-aggregate issues. Shouldn't be too hard to figure\n> out what's right and it will give us some additional queries\n> for the rule system checks.\n> \n> So can someone please tell me how INTERSECT/EXCEPT works?\n\nAre the regression tests he supplied installed yet. Should be samples\nin there.\n\n> \n> I'll deuglify the code while working on it then :-}. It's\n> really hard to read (must have been written in a 120 char\n> wide window or so).\n\n\nYes. Just run pgindent on any files you want, or I will do it if you\ntell me where. I ran it on some optimizer file. I can easily do a\nwhole directory if no one else is working in there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 12:33:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > So can someone please tell me how INTERSECT/EXCEPT works?\n>\n> Are the regression tests he supplied installed yet. Should be samples\n> in there.\n\n Does not seem so, and if, they don't touch anything I\n reverted.\n\n>\n> >\n> > I'll deuglify the code while working on it then :-}. It's\n> > really hard to read (must have been written in a 120 char\n> > wide window or so).\n>\n>\n> Yes. Just run pgindent on any files you want, or I will do it if you\n> tell me where. I ran it on some optimizer file. I can easily do a\n> whole directory if no one else is working in there.\n\n I'm actually hacking on the TIME QUAL issue. Maybe it turns\n out that my thought's are complete, and if Vadim agrees then\n I would like to move it in. I'm sure I have to touch many\n directories (parser, rewrite, executor, utils/time I have so\n far). So please don't for now.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 19:12:24 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> > Yes. Just run pgindent on any files you want, or I will do it if you\n> > tell me where. I ran it on some optimizer file. I can easily do a\n> > whole directory if no one else is working in there.\n> \n> I'm actually hacking on the TIME QUAL issue. Maybe it turns\n\nWhat's the issue you're talking about ?\n\n> out that my thought's are complete, and if Vadim agrees then\n> I would like to move it in. I'm sure I have to touch many\n> directories (parser, rewrite, executor, utils/time I have so\n> far). So please don't for now.\n\npgindent should be run just before beta, not now, please.\n\nVadim\n",
"msg_date": "Tue, 09 Feb 1999 09:17:57 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> No, it isn't fun any more and I'm getting angry now >:-(\n> \n> I've checked it and it turns out, that due to the changes\n> that came in with INTERSECT/EXPECT many expressions aren't\n> any longer copied when they are added from one parsetree to\n> another. Thus, multiple parsetrees reference the same Var\n> nodes and if multiple rules get applied during the rule\n> system recursion (rewritten trees get rewritten again), later\n> rules mangle up the ones referenced in trees the rule system\n> is already done with.\n> \n> I've spent night's to fix this all for v6.4. Added many\n> copyObject()'s around things that MUST be copied. Now I find\n> them commented out, because it was easier to apply v6.3 based\n> development onto the v6.5 sources.\n> \n> I'll now revert things to do the copyObject() again where it\n> has to be done and will add the examples from the programmers\n> manual to the regression tests.\n> \n> Surely this will break the INTERSECT/EXPECT code, because it\n> depends on nodes beeing at specific memory locations for\n> comparisions. But this is impossible due to the requirements\n> of the rule system.\n\n\nThat is bad. He wants the Var nodes to have the same address. Why\ncan't he use equal() like everyone else? Do you want to review his\npatch, and reverse anything that looks wrong in it. This may be the\nonly way to make sure things are OK. The patch isn't that long.\n\nI had the same frustrations with HAVING six months ago.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Feb 1999 11:33:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
},
{
"msg_contents": "> That is bad. He wants the Var nodes to have the same address. Why\n> can't he use equal() like everyone else? Do you want to review his\n> patch, and reverse anything that looks wrong in it. This may be the\n> only way to make sure things are OK. The patch isn't that long.\n\nBut we should be clear: imho INTERSECT/EXCEPT can be dumped from v6.5 if\nyou find it is too ugly to make work, or if you find it trashed too much\nother stuff. We can re-enable it later after working out the kinks...\n\n - Tom\n",
"msg_date": "Tue, 23 Feb 1999 03:51:54 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] trouble with rules"
}
] |
[
{
"msg_contents": "And another sync-with-gram.y patch.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!",
"msg_date": "Tue, 2 Feb 1999 09:57:21 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ecpg patch"
}
] |
[
{
"msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > > I have a morbid curiosity, so I decided to find out how \n> this got into\n> > > the source. On November 1, 1998, Magus applied a patch:\n> > > Here is a first patch to cleanup the backend side of libpq.\n> > > Several users have complained about 6.4.* COPY slowing \n> down when loading\n> > > rows. This may be the cause. Good job finding it.\n> > \n> > I thought Magnus' changes were only in the current CVS branch, not\n> > in REL6_4 ?\n> \n> You are absolutely right. Sorry Magnus. The COPY complaint I heard\n> obviously was not from this.\n\nPhew - I was going crazy trying to find out why I had touched anything to do\nwith palloc() :-)\nPlus it was not in 6.4...\n\n\n//Magnus\n",
"msg_date": "Tue, 2 Feb 1999 09:59:51 +0100 ",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Small patches in copy.c and trigger.c"
}
] |
[
{
"msg_contents": "I am writing a Postgres interface for Guile. (I know, I am the other one\nof the two people doing this!)\n\nI sent a message at the weekend, but I used mime encoding of the files\nwhich I understand some people have difficulty decoding, so here is my\nreport again, in the clear this time. \n\nI am having problems with my large object interface. In particular I get\nthe error\n\n ERROR: heap_fetch: xinv19073 relation: ReadBuffer(81aeefe) failed\n\nfrom the backend.\n\nI have written a small C program to reproduce the problem, and it follows \nbelow, along with the output of PQtrace.\n\nIn summary, the problem is: \n\n/* Pseudo-C */\n\nconn = PQconnectdb()\nPQexec (conn, BEGIN TRANSACTION)\noid = lo_creat (conn, INV_READ | WRITE)\nfd = lo_open(conn oid, INV_READ | INV_WRITE)\nfor (i = 0; i < 5; i++)\n lo_write(fd, 'X')\nlo_lseek(fd, 1, 0)\nlo_write(fd, 'y')\nlo_lseek(fd, 3, 0)\nlo_write(fd, 'y') /**** error happens here ****/\nlo_close(fd)\nPQexec (conn, END TRANSACTION)\n\nThe real C is:\n\n#include <stdio.h>\n#include \"libpq-fe.h\"\n#include \"libpq/libpq-fs.h\"\n\nvoid exec_cmd(PGconn *conn, char *str);\n\nmain (int argc, char *argv[])\n{\n PGconn *conn;\n int lobj_fd;\n char buf[256];\n int ret, i;\n Oid lobj_id;\n\n conn = PQconnectdb(\"dbname=test\");\n if (PQstatus(conn) != CONNECTION_OK) {\n fprintf(stderr, \"Can't connect to backend.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n exec_cmd(conn, \"BEGIN TRANSACTION\");\n PQtrace (conn, stdout);\n if ((lobj_id = lo_creat(conn, INV_READ | INV_WRITE)) < 0) {\n fprintf(stderr, \"Can't create lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((lobj_fd = lo_open(conn, lobj_id, INV_READ | INV_WRITE)) < 0) {\n fprintf(stderr, \"Can't open lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n fprintf(stderr, \"lo_open returned fd = %d.\\n\", lobj_fd);\n for (i = 0; i < 5; i++) {\n if ((ret = lo_write(conn, lobj_fd, \"X\", 1)) != 1) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n }\n if ((ret = lo_lseek(conn, lobj_fd, 1, 0)) != 1) {\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_lseek(conn, lobj_fd, 3, 0)) != 3) {\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n ret = lo_close(conn, lobj_fd);\n printf(\"lo_close returned %d.\\n\", ret);\n if (ret)\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n PQuntrace(conn);\n exec_cmd(conn, \"END TRANSACTION\");\n exit(0);\n}\n\nvoid exec_cmd(PGconn *conn, char *str)\n{\n PGresult *res;\n\n if ((res = PQexec(conn, str)) == NULL) {\n fprintf(stderr, \"Error executing %s.\\n\", str);\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if (PQresultStatus(res) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"Error executing %s.\\n\", str);\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n 
PQclear(res);\n exit(1);\n }\n PQclear(res);\n}\n\nHere is a trace-log of the whole affair:\n\nTo backend> Q\nTo backend> select proname, oid from pg_proc\t \t\twhere proname = 'lo_open'\t\t\t or proname = 'lo_close'\t\t\t or proname = 'lo_creat'\t\t\t or proname = 'lo_unlink'\t\t\t or proname = 'lo_lseek'\t\t\t or proname = 'lo_tell'\t\t\t or proname = 'loread'\t\t\t o\nr proname = 'lowrite'\n>From backend> P\n>From backend> \"blank\"\n>From backend> T\n>From backend (#2)> 2\n>From backend> \"proname\"\n>From backend (#4)> 19\n>From backend (#2)> 32\n>From backend (#4)> -1\n>From backend> \"oid\"\n>From backend (#4)> 26\n>From backend (#2)> 4\n>From backend (#4)> -1\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 11\n>From backend (7)> lo_open\n>From backend (#4)> 7\n>From backend (3)> 952\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 12\n>From backend (8)> lo_close\n>From backend (#4)> 7\n>From backend (3)> 953\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 12\n>From backend (8)> lo_creat\n>From backend (#4)> 7\n>From backend (3)> 957\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 13\n>From backend (9)> lo_unlink\n>From backend (#4)> 7\n>From backend (3)> 964\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 12\n>From backend (8)> lo_lseek\n>From backend (#4)> 7\n>From backend (3)> 956\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 11\n>From backend (7)> lo_tell\n>From backend (#4)> 7\n>From backend (3)> 958\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 10\n>From backend (6)> loread\n>From backend (#4)> 7\n>From backend (3)> 954\n>From backend> D\n>From backend (1)> �\n>From backend (#4)> 11\n>From backend (7)> lowrite\n>From backend (#4)> 7\n>From backend (3)> 955\n>From backend> C\n>From backend> \"SELECT\"\n>From backend> Z\n>From backend> Z\nTo backend> F \nTo backend (4#)> 957\nTo backend (4#)> 1\nTo backend (4#)> 4\nTo backend (4#)> 393216\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 19201\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 952\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 19201\nTo backend (4#)> 4\nTo backend (4#)> 393216\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 0\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 956\nTo backend (4#)> 3\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 0\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 0\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> X\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> X\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> X\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> X\n>From 
backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> X\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 956\nTo backend (4#)> 3\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 1\nTo backend (4#)> 4\nTo backend (4#)> 0\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> y\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 1\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 956\nTo backend (4#)> 3\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 3\nTo backend (4#)> 4\nTo backend (4#)> 0\n>From backend> V\n>From backend> G\n>From backend (#4)> 4\n>From backend (#4)> 3\n>From backend> 0\n>From backend> Z\nTo backend> F \nTo backend (4#)> 955\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 1\nTo backend> y\n>From backend> E\n>From backend> \"ERROR: heap_fetch: xinv19201 relation: ReadBuffer(81aeefe) failed\n\"\n>From backend> Z\n\n\n\n",
"msg_date": "Tue, 2 Feb 1999 11:15:02 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend problem with large objects"
},
{
"msg_contents": "> I am writing a Postgres interface for Guile. (I know, I am the other one\n> of the two people doing this!)\n> \n> I sent a message at the weekend, but I used mime encoding of the files\n> which I understand some people have difficulty decoding, so here is my\n> report again, in the clear this time. \n> \n> I am having problems with my large object interface. In particular I get\n> the error\n> \n> ERROR: heap_fetch: xinv19073 relation: ReadBuffer(81aeefe) failed\n> \n> from the backend.\n\nReproduced here too. Seems very old and known problem of large object\n(writing into in the middle of a large object does not work).\n---\nTatsuo Ishii\n\n",
"msg_date": "Tue, 02 Feb 1999 22:59:08 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend problem with large objects "
},
{
"msg_contents": "On Tue, 2 Feb 1999, Tatsuo Ishii wrote:\n\n> Reproduced here too. Seems very old and known problem of large object\n> (writing into in the middle of a large object does not work).\n\nMany thanks, does this mean it's not likely to be fixed? If so I'll take\nthis to the documentation list, if there is one. But first, can anyone\nexplain what *is* allowed in lo_write after lo_lseek? Is it OK to\noverwrite a large object for example? \n\nI also note that there is no way to truncate a large object without\nreading the beginning bit and copying it out to another new large object,\nwhich involves it going down the wire to the client and then back again. \nAre there any plans to implement lo_trunc or something? Perhaps this is\ndifficult for the same reason lo_write is difficult inside a large object.\n\nIan\n\n",
"msg_date": "Tue, 2 Feb 1999 20:43:44 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend problem with large objects "
},
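For what it's worth, the copy-out workaround described above can be written entirely with large object calls libpq already exports (lo_creat, lo_open, lo_read, lo_write, lo_close, lo_unlink, all visible in the function lookup at the top of this thread's trace). The sketch below is untested and only meant to show the shape of it: the name truncate_lo is invented, error handling is minimal, and like any large object access it has to run inside a transaction.

#include "libpq-fe.h"
#include "libpq/libpq-fs.h"

/*
 * Hypothetical helper: "truncate" large object lobj_id to newlen bytes by
 * copying its first newlen bytes into a freshly created large object and
 * unlinking the old one.  Returns the Oid of the replacement object, or 0
 * on failure.  Must be called inside a transaction block.
 */
Oid
truncate_lo(PGconn *conn, Oid lobj_id, int newlen)
{
    char    buf[1024];
    int     src, dst, n;
    int     left = newlen;
    Oid     new_id;

    if ((new_id = lo_creat(conn, INV_READ | INV_WRITE)) == 0)
        return 0;
    if ((src = lo_open(conn, lobj_id, INV_READ)) < 0)
        return 0;
    if ((dst = lo_open(conn, new_id, INV_WRITE)) < 0)
        return 0;

    /* Ship the bytes we want to keep through the client and back. */
    while (left > 0)
    {
        int     chunk = left < (int) sizeof(buf) ? left : (int) sizeof(buf);

        if ((n = lo_read(conn, src, buf, chunk)) <= 0)
            break;
        if (lo_write(conn, dst, buf, n) != n)
            return 0;
        left -= n;
    }

    lo_close(conn, src);
    lo_close(conn, dst);

    /* Drop the original; the caller switches to new_id from here on. */
    lo_unlink(conn, lobj_id);
    return new_id;
}

Whether unlinking the original in the same transaction, with descriptors having been open on it, behaves well on 6.4 is exactly the sort of thing that would need checking first.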
{
"msg_contents": "> On Tue, 2 Feb 1999, Tatsuo Ishii wrote:\n> \n> > Reproduced here too. Seems very old and known problem of large object\n> > (writing into in the middle of a large object does not work).\n> \n> Many thanks, does this mean it's not likely to be fixed? If so I'll take\n> this to the documentation list, if there is one. But first, can anyone\n> explain what *is* allowed in lo_write after lo_lseek? Is it OK to\n> overwrite a large object for example? \n\nOk. I think I have found the source of the problem. Please apply\nincluded patches and try again.\n\n> I also note that there is no way to truncate a large object without\n> reading the beginning bit and copying it out to another new large object,\n> which involves it going down the wire to the client and then back again. \n> Are there any plans to implement lo_trunc or something? Perhaps this is\n> difficult for the same reason lo_write is difficult inside a large object.\n\nSeems not too difficult, but I don't have time to do that.\n---\nTatsuo Ishii\n\n----------------------------- cut here ----------------------------------\n*** postgresql-6.4.2/src/backend/storage/large_object/inv_api.c.orig\tSun Dec 13 14:08:19 1998\n--- postgresql-6.4.2/src/backend/storage/large_object/inv_api.c\tThu Feb 4 22:02:43 1999\n***************\n*** 545,555 ****\n \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\telse\n \t\t{\n! \t\t\tif (obj_desc->offset > obj_desc->highbyte)\n \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\t\telse\n \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n! \t\t\tReleaseBuffer(buffer);\n \t\t}\n \n \t\t/* move pointers past the amount we just wrote */\n--- 545,561 ----\n \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\telse\n \t\t{\n! \t\tif (obj_desc->offset > obj_desc->highbyte) {\n \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n+ \t\t\t\tReleaseBuffer(buffer);\n+ \t\t\t}\n \t\t\telse\n \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n! \t\t\t/* inv_wrold() has already issued WriteBuffer()\n! \t\t\t which has decremented local reference counter\n! \t\t\t (LocalRefCount). So we should not call\n! \t\t\t ReleaseBuffer() here. -- Tatsuo 99/2/4\n! \t\t\tReleaseBuffer(buffer); */\n \t\t}\n \n \t\t/* move pointers past the amount we just wrote */\n",
"msg_date": "Thu, 04 Feb 1999 23:14:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend problem with large objects "
},
{
"msg_contents": "\nApplied manually. The patch did not apply cleanly, and needed a &tuple\nin inv_wrold, not tuple.\n\n\n> > On Tue, 2 Feb 1999, Tatsuo Ishii wrote:\n> > \n> > > Reproduced here too. Seems very old and known problem of large object\n> > > (writing into in the middle of a large object does not work).\n> > \n> > Many thanks, does this mean it's not likely to be fixed? If so I'll take\n> > this to the documentation list, if there is one. But first, can anyone\n> > explain what *is* allowed in lo_write after lo_lseek? Is it OK to\n> > overwrite a large object for example? \n> \n> Ok. I think I have found the source of the problem. Please apply\n> included patches and try again.\n> \n> > I also note that there is no way to truncate a large object without\n> > reading the beginning bit and copying it out to another new large object,\n> > which involves it going down the wire to the client and then back again. \n> > Are there any plans to implement lo_trunc or something? Perhaps this is\n> > difficult for the same reason lo_write is difficult inside a large object.\n> \n> Seems not too difficult, but I don't have time to do that.\n> ---\n> Tatsuo Ishii\n> \n> ----------------------------- cut here ----------------------------------\n> *** postgresql-6.4.2/src/backend/storage/large_object/inv_api.c.orig\tSun Dec 13 14:08:19 1998\n> --- postgresql-6.4.2/src/backend/storage/large_object/inv_api.c\tThu Feb 4 22:02:43 1999\n> ***************\n> *** 545,555 ****\n> \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n> \t\telse\n> \t\t{\n> ! \t\t\tif (obj_desc->offset > obj_desc->highbyte)\n> \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n> \t\t\telse\n> \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n> ! \t\t\tReleaseBuffer(buffer);\n> \t\t}\n> \n> \t\t/* move pointers past the amount we just wrote */\n> --- 545,561 ----\n> \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n> \t\telse\n> \t\t{\n> ! \t\tif (obj_desc->offset > obj_desc->highbyte) {\n> \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n> + \t\t\t\tReleaseBuffer(buffer);\n> + \t\t\t}\n> \t\t\telse\n> \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n> ! \t\t\t/* inv_wrold() has already issued WriteBuffer()\n> ! \t\t\t which has decremented local reference counter\n> ! \t\t\t (LocalRefCount). So we should not call\n> ! \t\t\t ReleaseBuffer() here. -- Tatsuo 99/2/4\n> ! \t\t\tReleaseBuffer(buffer); */\n> \t\t}\n> \n> \t\t/* move pointers past the amount we just wrote */\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 09:52:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend problem with large objects"
},
{
"msg_contents": "On Thu, 4 Feb 1999, Bruce Momjian wrote:\n\n> Applied manually. The patch did not apply cleanly, and needed a &tuple\n> in inv_wrold, not tuple.\n\nIn the 4.6.2 release there are no &tuple arguments to inv_wrold around the\npatch. Perhaps there is a patch you have applied that I need? Please see\nbelow:\n\n> > > On Tue, 2 Feb 1999, Tatsuo Ishii wrote:\n> > > \n> > Ok. I think I have found the source of the problem. Please apply\n> > included patches and try again.\n\nMany thanks indeed for this. Unfortunately it doesn't completely work: it\nfixes the problem as reported, but when, instead of writing five\ncharacters, one at a time, I write five at once, the backend dies in\nthe same place it did before. Here's the C code slightly modified to\nreproduce the problem:\n\n#include <stdio.h>\n#include \"libpq-fe.h\"\n#include \"libpq/libpq-fs.h\"\n\nvoid exec_cmd(PGconn *conn, char *str);\n\nmain (int argc, char *argv[])\n{\n PGconn *conn;\n int lobj_fd;\n char buf[256];\n int ret, i;\n Oid lobj_id;\n\n conn = PQconnectdb(\"dbname=test\");\n if (PQstatus(conn) != CONNECTION_OK) {\n fprintf(stderr, \"Can't connect to backend.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n exec_cmd(conn, \"BEGIN TRANSACTION\");\n PQtrace (conn, stdout);\n if ((lobj_id = lo_creat(conn, INV_READ | INV_WRITE)) < 0) {\n fprintf(stderr, \"Can't create lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((lobj_fd = lo_open(conn, lobj_id, INV_READ | INV_WRITE)) < 0) {\n fprintf(stderr, \"Can't open lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n fprintf(stderr, \"lo_open returned fd = %d.\\n\", lobj_fd);\n/*\n for (i = 0; i < 5; i++) {\n*/\n if ((ret = lo_write(conn, lobj_fd, \"XXXXX\", 5)) != 5) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n/*\n }\n*/\n if ((ret = lo_lseek(conn, lobj_fd, 1, 0)) != 1) {\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_lseek(conn, lobj_fd, 3, 0)) != 3) {\n fprintf(stderr, \"error (%d) lseeking in large object.\\n\", ret);\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if ((ret = lo_write(conn, lobj_fd, \"y\", 1)) != 1) {\n fprintf(stderr, \"Can't write lobj.\\n\");\n fprintf(stderr, \"ERROR: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n ret = lo_close(conn, lobj_fd);\n printf(\"lo_close returned %d.\\n\", ret);\n if (ret)\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n PQuntrace(conn);\n exec_cmd(conn, \"END TRANSACTION\");\n exit(0);\n}\n\nvoid exec_cmd(PGconn *conn, char *str)\n{\n PGresult *res;\n\n if ((res = PQexec(conn, str)) == NULL) {\n fprintf(stderr, \"Error executing %s.\\n\", str);\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n exit(1);\n }\n if (PQresultStatus(res) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"Error executing %s.\\n\", str);\n fprintf(stderr, \"Error message: %s\\n\", PQerrorMessage(conn));\n PQclear(res);\n exit(1);\n }\n PQclear(res);\n}\n\n\n",
"msg_date": "Thu, 4 Feb 1999 22:50:33 +0000 (GMT)",
"msg_from": "Ian Grant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend problem with large objects"
},
{
"msg_contents": ">On Thu, 4 Feb 1999, Bruce Momjian wrote:\n>\n>> Applied manually. The patch did not apply cleanly, and needed a &tuple\n>> in inv_wrold, not tuple.\n>\n>In the 4.6.2 release there are no &tuple arguments to inv_wrold around the\n>patch. Perhaps there is a patch you have applied that I need? Please see\n>below:\n\nMy patches are for 6.4.2. Bruce is talking about current. Oh, I don't\nknow what version of PostgreSQL you are using.\n\n>> > > On Tue, 2 Feb 1999, Tatsuo Ishii wrote:\n>> > > \n>> > Ok. I think I have found the source of the problem. Please apply\n>> > included patches and try again.\n>\n>Many thanks indeed for this. Unfortunately it doesn't completely work: it\n>fixes the problem as reported, but when, instead of writing five\n>characters, one at a time, I write five at once, the backend dies in\n>the same place it did before. Here's the C code slightly modified to\n>reproduce the problem:\n\nGive me some time. I'm not sure if I could solve new problem, though.\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 05 Feb 1999 10:05:25 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend problem with large objects "
},
{
"msg_contents": "> >Many thanks indeed for this. Unfortunately it doesn't completely work: it\n> >fixes the problem as reported, but when, instead of writing five\n> >characters, one at a time, I write five at once, the backend dies in\n> >the same place it did before. Here's the C code slightly modified to\n> >reproduce the problem:\n> \n> Give me some time. I'm not sure if I could solve new problem, though.\n> --\n> Tatsuo Ishii\n\nI think I have fixed the problem you mentioned. Ian, could you apply\nincluded patches and test again? Note that those are for 6.4.2 and\nadditions to the previous patches.\n\nBTW, lobj strangely added a 0 filled disk block at the head of the\nheap. As a result, even 1-byte-user-data lobj consumes at least 16384\nbytes (2 disk blocks)! Included patches also fix this problem.\n\nTo Bruce:\nThanks for taking care of my previous patches for current. If\nincluded patch is ok, I will make one for current.\n\n---------------------------- cut here ---------------------------------\n*** postgresql-6.4.2/src/backend/storage/large_object/inv_api.c.orig\tSun Dec 13 14:08:19 1998\n--- postgresql-6.4.2/src/backend/storage/large_object/inv_api.c\tFri Feb 12 20:21:05 1999\n***************\n*** 624,648 ****\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n- \t\t\tScanKeyData skey;\n- \n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n- \t\t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n- \t\t\t\t\t\t\t\t Int32GetDatum(obj_desc->offset));\n \t\t\tobj_desc->iscan =\n \t\t\t\tindex_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n \t\t}\n- \n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n--- 630,655 ----\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n+ \t\tScanKeyData skey;\n+ \n+ \t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n+ \t\t\t\t Int32GetDatum(obj_desc->offset));\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n \t\t\tobj_desc->iscan =\n \t\t\t\tindex_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n+ \t\t} else {\n+ \t\t\tindex_rescan(obj_desc->iscan, false, &skey);\n \t\t}\n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n***************\n*** 666,672 ****\n \t\t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t\t &res->heap_iptr, buffer);\n \t\t\tpfree(res);\n! \t\t} while (tuple == (HeapTuple) NULL);\n \n \t\t/* remember this tid -- we may need it for later reads/writes */\n \t\tItemPointerCopy(&tuple->t_ctid, &obj_desc->htid);\n--- 673,679 ----\n \t\t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t\t &res->heap_iptr, buffer);\n \t\t\tpfree(res);\n! 
\t\t} while (!HeapTupleIsValid(tuple));\n \n \t\t/* remember this tid -- we may need it for later reads/writes */\n \t\tItemPointerCopy(&tuple->t_ctid, &obj_desc->htid);\n***************\n*** 675,680 ****\n--- 682,691 ----\n \t{\n \t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t &(obj_desc->htid), buffer);\n+ \t\tif (!HeapTupleIsValid(tuple)) {\n+ \t\t elog(ERROR,\n+ \t\t \"inv_fetchtup: heap_fetch failed\");\n+ \t\t}\n \t}\n \n \t/*\n***************\n*** 746,757 ****\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0)\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \telse\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \n! \tpage = BufferGetPage(buffer);\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n--- 757,771 ----\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0) {\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \t\tpage = BufferGetPage(buffer);\n! \t}\n! \telse {\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \t\tpage = BufferGetPage(buffer);\n! \t\tPageInit(page, BufferGetPageSize(buffer), 0);\n! \t}\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n***************\n*** 865,876 ****\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \t\tif (nblocks > 0)\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\telse\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n \n- \t\tnewpage = BufferGetPage(newbuf);\n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n--- 879,894 ----\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \t\tif (nblocks > 0) {\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\t\tnewpage = BufferGetPage(newbuf);\n! \t\t}\n! \t\telse {\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n+ \t\t\tnewpage = BufferGetPage(newbuf);\n+ \t\t\tPageInit(newpage, BufferGetPageSize(newbuf), 0);\n+ \t\t}\n \n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n***************\n*** 973,978 ****\n--- 991,999 ----\n \tWriteBuffer(buffer);\n \tif (newbuf != buffer)\n \t\tWriteBuffer(newbuf);\n+ \n+ \t/* Tuple id is no longer valid */\n+ \tItemPointerSetInvalid(&(obj_desc->htid));\n \n \t/* done */\n \treturn nwritten;\n",
"msg_date": "Fri, 12 Feb 1999 23:12:07 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend problem with large objects "
}
] |
[
{
"msg_contents": "Hello.\n\nRecently some users started to complain about:\nWarning: PostgresSQL query failed: FATAL 1: btree: lost page in the chain of duplicates\n\nI noticed that VACUUM removes this problem, but no making\nVACUUM I've got:\nNOTICE: BlowawayRelationBuffers(confread, 45): block 154 is referenced (private 0, last 0,\nglobal 1)\nFATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\n\nWhat it can be? I'm using 6.3.2\n\nAndrey\n\n\n",
"msg_date": "Tue, 2 Feb 1999 15:15:20 +0300",
"msg_from": "������������������ ��������������������� <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM problem"
},
{
"msg_contents": "On Tue, 2 Feb 1999, XXXXXX XXXXXXX wrote:\n\n> Hello.\n> \n> Recently some users started to complain about:\n> Warning: PostgresSQL query failed: FATAL 1: btree: lost page in the chain of duplicates\n> \n> I noticed that VACUUM removes this problem, but no making\n> VACUUM I've got:\n> NOTICE: BlowawayRelationBuffers(confread, 45): block 154 is referenced (private 0, last 0,\n> global 1)\n> FATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\n> \n> What it can be? I'm using 6.3.2\n\n\tA relatively old version of PostgreSQL...upgrade to v6.4.2, as I\nbelieve this was fixed several months ago, in v6.4...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 2 Feb 1999 08:38:51 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM problem"
}
] |
[
{
"msg_contents": "subscribe\n\n--------------------------------------------------------------------------------\nSebesty�n Zolt�n <[email protected]>\tI'm believing that the Holy Spirit is\n\t\t\t\t\tgonna allow the hand, and the foot, and\nMAKE INSTALL NOT WAR\t\t\tthe mouth, just to begin to speak, and\n to minister, and to heal coordinated by\n\t\t\t\t\tthe head.\n\nI use UNIX because reboots are for hardware upgrades.\n\n\t\t -- Waiting for FreeBSD 3.1, not Godot --\n\n",
"msg_date": "Tue, 2 Feb 1999 22:48:29 +0100 (CET)",
"msg_from": "Sebestyen Zoltan <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Why don't you regress test it on the CURRENT tree and if it doesn't\nbreak anything submit it? I thought that's what Beta was for (to get\nthe bugs worked out of the new features), but I'll trust your judgement\non what you think it'll break. \n\t-DEJ \n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, February 02, 1999 5:16 PM\n> To: [email protected]\n> Cc: [email protected]; [email protected]; [email protected]\n> Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n> \n> \n> > > We haven't started beta yet. Anything on LIMIT?\n> > \n> > LIMIT is in there and was during entire v6.5 development.\n> > But ORDER BY suppressing sort using index wasn't.\n> > \n> \n> Great.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n",
"msg_date": "Tue, 2 Feb 1999 17:22:45 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.5 beta and ORDER BY patch"
}
] |
[
{
"msg_contents": "\nHi,\n\nI'm having serious problems with postgresql and TCL as a procedural language.\n\nI did a clean compilantion and install --with-tcl.\nI did an initdb, created the language and a demo function found on the site.\n\nThis is the result:\n\na=> create function pltcl_call_handler() returns opaque\na-> as '/usr/postgres/lib/pltcl.so'\na-> language 'C';\nCREATE\na=>\na=> create trusted procedural language 'pltcl'\na-> handler pltcl_call_handler\na-> lancompiler 'PL/Tcl';\nCREATE\na=> CREATE FUNCTION tcl_max (int4, int4) RETURNS int4 AS '\na'> if {$1 > $2} {return $1}\na'> return $2\na'> ' LANGUAGE 'pltcl';\nCREATE\na=> select tcl_max(3,7);\nERROR: Load of file /usr/postgres/lib/pltcl.so failed: /usr/lib/libtcl8.0.so:\nundefined symbol: stat\n\na=> select tcl_max(3,7);\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while\nprocessing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n----------\n\nThe same happens on two other 6.4.0 - 6.4.2 installations. All machines are\nIntel based RedHat 5.2 Linuxes with kernel 2.0.36, tcl-8.0.3.\nI wasn't able to test this with the current snapshot because I hadn't been able\nto initdb.\n\nAny guess ?\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Wed, 03 Feb 1999 00:57:16 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/TCL bug (?)"
},
{
"msg_contents": "Daniele Orlandi wrote:\n\n> Hi,\n>\n> I'm having serious problems with postgresql and TCL as a procedural language.\n>\n> I did a clean compilantion and install --with-tcl.\n> I did an initdb, created the language and a demo function found on the site.\n>\n> This is the result:\n>\n> a=> create function pltcl_call_handler() returns opaque\n> a-> as '/usr/postgres/lib/pltcl.so'\n> a-> language 'C';\n> CREATE\n> a=>\n> a=> create trusted procedural language 'pltcl'\n> a-> handler pltcl_call_handler\n> a-> lancompiler 'PL/Tcl';\n> CREATE\n> a=> CREATE FUNCTION tcl_max (int4, int4) RETURNS int4 AS '\n> a'> if {$1 > $2} {return $1}\n> a'> return $2\n> a'> ' LANGUAGE 'pltcl';\n> CREATE\n\n Anything correct so far.\n\n> a=> select tcl_max(3,7);\n> ERROR: Load of file /usr/postgres/lib/pltcl.so failed: /usr/lib/libtcl8.0.so:\n> undefined symbol: stat\n\n There's something wrong with your Tcl installation. The Tcl\n shared lib does not correctly reference the library where\n stat() is in. It was linked with a missing -l...\n\n You might try to link pltcl.so with the additional library.\n Try adding -lm -lc.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 3 Feb 1999 10:54:32 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL/TCL bug (?)"
}
] |
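To follow up on Jan's diagnosis above, a few commands can confirm whether libtcl (or pltcl.so) was linked without the libraries that provide stat(), and sketch the extra -l flags he suggests. The paths, object file names and exact link line are assumptions; the real invocation depends on how Tcl and pltcl were built on the machine.

# which symbols does the Tcl shared library leave unresolved?
nm -D /usr/lib/libtcl8.0.so | grep ' U '

# which shared libraries is pltcl.so linked against?
ldd /usr/postgres/lib/pltcl.so

# hypothetical relink of pltcl.so adding the libraries Jan mentions
gcc -shared -o pltcl.so pltcl.o -L/usr/lib -ltcl8.0 -lm -lc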
[
{
"msg_contents": "Stephen Kogge <[email protected]> wrote (quite some time ago):\n> \tIn short Sun's libc does not allow 64 bit I/O (*print* functions)\n> and atol does not detect overflow. When the regression tests are run\n> several tests fail due to the 64 bit I/O problems. gcc does the internal \n> operations but can't print :-( I will be happy if someone can tell me that\n> with the sunos libc this can be done.\n\nI have just rejiggered the int8 support so that it only depends on\nsnprintf instead of sprintf and sscanf. The interesting thing about\nthis is that we have our own version of snprintf that we use if the\nplatform's C library hasn't got snprintf/vsnprintf --- and our version\nknows about %lld. So if you have a compiler that offers working 64-bit\narithmetic, you don't need any help from the C library to make int8 go.\n\nI've verified that int8 now passes regression test on HPUX 9.* with gcc\n--- and configure's 64-bit arithmetic check passes with HP's cc too,\nso I bet that that case works as well, but I haven't bothered to run it\nthrough regression yet.\n\nSunOS 4.1.* doesn't have snprintf either, so int8 should work there too\nas long as you build with gcc. I don't have the time to try that right\nnow; do you want to?\n\nWe could conceivably extend configure so that if you have working 64-bit\narithmetic but the platform's C library supplies non-long-long-aware\nsnprintf and vsnprintf, configure would reject those routines and use\nour own. Right at the moment I'm not aware of any platforms fitting\nthat combination of circumstances, so I have not bothered to add the\nextra complexity to configure. (Anyone who does run into this\ncombination can change the configuration results by hand, of course.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Feb 1999 19:37:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Postgres for Sunos 4.1.4 "
}
] |
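A standalone check of the capability this change relies on - a 64-bit-capable compiler plus a long-long-aware snprintf formatting int8 values - might look like the sketch below. It is not code from the Postgres tree; it only demonstrates the %lld formatting the rewritten int8 support depends on (with Postgres substituting its own snprintf when the C library's cannot do this).

#include <stdio.h>

int
main(void)
{
    long long   big = 4611686018427387904LL;  /* 2^62: needs true 64-bit arithmetic */
    char        buf[32];

    /* a long-long-aware snprintf prints the full value;
     * a 32-bit-only implementation typically mangles it */
    snprintf(buf, sizeof(buf), "%lld", big);
    printf("%s\n", buf);
    return 0;
}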
[
{
"msg_contents": "Wow!\n\nWhat are the odds...\n\nI had just created a C function which would allow\nthe database to determine wether or not a file exists\nsomewhere on the filesystem (readable by user\npostgres, of course) and return 0 if not and\n1 if the file exists. I then used the function in \na VIEW and used a MS Access front-end to make \npretty reports from the view, and I arrived at the\nexact same symbol missing from the dynamic linker,\nsince I used stat() to test for the existence of the\nfile. I wound up using fopen() (a poor alternative)\nin the custom function, because the backend didn't\nhave any linker problems with it. I spent hours\nworking under the assumption that I was using the\nwrong compiler flags with gcc and ld, and finally\ngave up trying to use stat(). Note that this could\nstill be the case...\n\nI have the EXACT same configuration as you:\n\nIntel\nLinux RedHat 5.2\nKernel 2.0.36\nPostgreSQL 6.4\n\nInteresting....\n\n---Daniele Orlandi <[email protected]> wrote:\n>\n> \n> Hi,\n> \n> I'm having serious problems with postgresql and TCL as a procedural\nlanguage.\n> \n> I did a clean compilantion and install --with-tcl.\n> I did an initdb, created the language and a demo function found on\nthe site.\n> \n> This is the result:\n> \n> a=> create function pltcl_call_handler() returns opaque\n> a-> as '/usr/postgres/lib/pltcl.so'\n> a-> language 'C';\n> CREATE\n> a=>\n> a=> create trusted procedural language 'pltcl'\n> a-> handler pltcl_call_handler\n> a-> lancompiler 'PL/Tcl';\n> CREATE\n> a=> CREATE FUNCTION tcl_max (int4, int4) RETURNS int4 AS '\n> a'> if {$1 > $2} {return $1}\n> a'> return $2\n> a'> ' LANGUAGE 'pltcl';\n> CREATE\n> a=> select tcl_max(3,7);\n> ERROR: Load of file /usr/postgres/lib/pltcl.so failed:\n/usr/lib/libtcl8.0.so:\n> undefined symbol: stat\n> \n> a=> select tcl_max(3,7);\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before\nor while\n> processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> ----------\n> \n> The same happens on two other 6.4.0 - 6.4.2 installations. All\nmachines are\n> Intel based RedHat 5.2 Linuxes with kernel 2.0.36, tcl-8.0.3.\n> I wasn't able to test this with the current snapshot because I\nhadn't been able\n> to initdb.\n> \n> Any guess ?\n> \n> Bye!\n> \n> -- \n> Daniele\n> \n>\n-------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n>\n-------------------------------------------------------------------------------\n> \n> \n\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 2 Feb 1999 21:45:38 -0800 (PST)",
"msg_from": "Marcus Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PL/TCL bug (?)"
}
] |
[
{
"msg_contents": "src/template/alpha_cc seems missing in 6.4.2 but does exist in current.\nWhat about 6.4_REL? I belive that file should exist in any version of\nsource tree.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 03 Feb 1999 16:02:00 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "template/alpha_cc"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> src/template/alpha_cc seems missing in 6.4.2 but does exist in current.\n> What about 6.4_REL? I belive that file should exist in any version of\n> source tree.\n\n$ cvs log alpha_cc\n\nRCS file: /usr/local/cvsroot/pgsql/src/template/alpha_cc,v\nWorking file: alpha_cc\nhead: 1.1\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\nkeyword substitution: kv\ntotal revisions: 1; selected revisions: 1\ndescription:\n----------------------------\nrevision 1.1\ndate: 1998/12/18 07:08:02; author: momjian; state: Exp;\netc etc...\n\nThere's no REL6_4 tag, therefore this file does not exist as far as\nthe 6.4.* branch is concerned. (Same deal as with discussion of\nvacuumdb a day or two ago.) Bruce could've applied a REL6_4 tag to\nthe file when he created it, but did not.\n\nDo we want to continue updating the REL6_4 branch for stuff like this?\nOr is it time to declare 6.4.2 the last of that branch and press forward\nwith 6.5 beta test?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Feb 1999 11:53:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
},
{
"msg_contents": "On Wed, 3 Feb 1999, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> > src/template/alpha_cc seems missing in 6.4.2 but does exist in current.\n> > What about 6.4_REL? I belive that file should exist in any version of\n> > source tree.\n> \n> $ cvs log alpha_cc\n> \n> RCS file: /usr/local/cvsroot/pgsql/src/template/alpha_cc,v\n> Working file: alpha_cc\n> head: 1.1\n> branch:\n> locks: strict\n> access list:\n> symbolic names:\n> keyword substitution: kv\n> total revisions: 1; selected revisions: 1\n> description:\n> ----------------------------\n> revision 1.1\n> date: 1998/12/18 07:08:02; author: momjian; state: Exp;\n> etc etc...\n> \n> There's no REL6_4 tag, therefore this file does not exist as far as\n> the 6.4.* branch is concerned. (Same deal as with discussion of\n> vacuumdb a day or two ago.) Bruce could've applied a REL6_4 tag to\n> the file when he created it, but did not.\n> \n> Do we want to continue updating the REL6_4 branch for stuff like this?\n> Or is it time to declare 6.4.2 the last of that branch and press forward\n> with 6.5 beta test?\n\nNothing new gets added to the REL6_4 branch, period. Personally, there\nalso will never be a v6.4.3, so the REL6_4 branch effectively died the day\nv6.4.2 was released...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 3 Feb 1999 13:48:23 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
},
{
"msg_contents": ">On Wed, 3 Feb 1999, Tom Lane wrote:\n>\n>> Tatsuo Ishii <[email protected]> writes:\n>> > src/template/alpha_cc seems missing in 6.4.2 but does exist in current.\n>> > What about 6.4_REL? I belive that file should exist in any version of\n>> > source tree.\n>> \n>> $ cvs log alpha_cc\n>> \n>> RCS file: /usr/local/cvsroot/pgsql/src/template/alpha_cc,v\n>> Working file: alpha_cc\n>> head: 1.1\n>> branch:\n>> locks: strict\n>> access list:\n>> symbolic names:\n>> keyword substitution: kv\n>> total revisions: 1; selected revisions: 1\n>> description:\n>> ----------------------------\n>> revision 1.1\n>> date: 1998/12/18 07:08:02; author: momjian; state: Exp;\n>> etc etc...\n>> \n>> There's no REL6_4 tag, therefore this file does not exist as far as\n>> the 6.4.* branch is concerned. (Same deal as with discussion of\n>> vacuumdb a day or two ago.) Bruce could've applied a REL6_4 tag to\n>> the file when he created it, but did not.\n>> \n>> Do we want to continue updating the REL6_4 branch for stuff like this?\n>> Or is it time to declare 6.4.2 the last of that branch and press forward\n>> with 6.5 beta test?\n>\n>Nothing new gets added to the REL6_4 branch, period. Personally, there\n>also will never be a v6.4.3, so the REL6_4 branch effectively died the day\n>v6.4.2 was released...\n\nThat's fine. I'm just curious why that file once existed in 6.4,\nvanished in 6.4.2, then appears in current.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Feb 1999 10:01:42 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
},
{
"msg_contents": "On Thu, 4 Feb 1999, Tatsuo Ishii wrote:\n\n> >On Wed, 3 Feb 1999, Tom Lane wrote:\n> >\n> >> Tatsuo Ishii <[email protected]> writes:\n> >> > src/template/alpha_cc seems missing in 6.4.2 but does exist in current.\n> >> > What about 6.4_REL? I belive that file should exist in any version of\n> >> > source tree.\n> >> \n> >> $ cvs log alpha_cc\n> >> \n> >> RCS file: /usr/local/cvsroot/pgsql/src/template/alpha_cc,v\n> >> Working file: alpha_cc\n> >> head: 1.1\n> >> branch:\n> >> locks: strict\n> >> access list:\n> >> symbolic names:\n> >> keyword substitution: kv\n> >> total revisions: 1; selected revisions: 1\n> >> description:\n> >> ----------------------------\n> >> revision 1.1\n> >> date: 1998/12/18 07:08:02; author: momjian; state: Exp;\n> >> etc etc...\n> >> \n> >> There's no REL6_4 tag, therefore this file does not exist as far as\n> >> the 6.4.* branch is concerned. (Same deal as with discussion of\n> >> vacuumdb a day or two ago.) Bruce could've applied a REL6_4 tag to\n> >> the file when he created it, but did not.\n> >> \n> >> Do we want to continue updating the REL6_4 branch for stuff like this?\n> >> Or is it time to declare 6.4.2 the last of that branch and press forward\n> >> with 6.5 beta test?\n> >\n> >Nothing new gets added to the REL6_4 branch, period. Personally, there\n> >also will never be a v6.4.3, so the REL6_4 branch effectively died the day\n> >v6.4.2 was released...\n> \n> That's fine. I'm just curious why that file once existed in 6.4,\n> vanished in 6.4.2, then appears in current.\n\nAccording to the log file, this was only created on Dec 18th, after v6.4\nwas released (see below)...looking at the v6.4.2 distribution, there is a\ntemplate for just 'alpha' that was added June 12, 1998...this split\ndoesn't appear to have been done for/before teh v6.4.2 split though...\n\n\n> cvs log alpha_cc | more\n\nRCS file: /usr/local/cvsroot/pgsql/src/template/alpha_cc,v\nWorking file: alpha_cc\nhead: 1.1\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\nkeyword substitution: kv\ntotal revisions: 1; selected revisions: 1\ndescription:\n----------------------------\nrevision 1.1\ndate: 1998/12/18 07:08:02; author: momjian; state: Exp;\nAttached is a patch with some fixes that (I think that) should go into\n6.4.1. Here is the list:\n\n- The type int8 now works. In fact, the bug(s) were in\nsrc/backend/port/snprintf.c, so int8 is probably broken in every platform\nthat hasn't a native snprintf/vsnprintf. The type itself worked as\nexpected, only the output was wrong. Anyway, this patch should be checked\nin other platforms.\n\n- The regression tests for int2 and int4, which were broken due to\ndifferences in the error messages, are fixed.\n\n- The regression test for float8, which was broken in the reference\nplatform, is also fixed. I don't know if the new file (float8-OSF1.out)\nwill work on other platforms, but it might be worth to try it.\n\n- Two new template files are provided (alpha_cc, which includes\noptimization, and alpha_gcc), and src/templates/.similar is updated\naccordingly. 
src/templates/alpha should be removed from the distribution.\n*IMPORTANT NOTE*: I don't know if you can use gcc to compile postgres;\nI've written the alpha_gcc file because alpha_cc has some flags that are\nspecific to DEC C.\n\n- There is a (very basic) Digital Unix specific FAQ in\ndoc/FAQ_DigitalUnix.\n\n--\n-------------------------------------------------------------------\nPedro José Lobo Perea                  Tel: +34 91 336 78 19\n=============================================================================\n\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 3 Feb 1999 23:15:18 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
},
{
"msg_contents": "On Wed, 3 Feb 1999, The Hermit Hacker wrote:\n\n>On Thu, 4 Feb 1999, Tatsuo Ishii wrote:\n>\n>> >On Wed, 3 Feb 1999, Tom Lane wrote:\n>> >\n>> >Nothing new gets added to the REL6_4 branch, period. Personally, there\n>> >also will never be a v6.4.3, so the REL6_4 branch effectively died the day\n>> >v6.4.2 was released...\n>> \n>> That's fine. I'm just curious why that file once existed in 6.4,\n>> vanished in 6.4.2, then appears in current.\n>\n>According to the log file, this was only created on Dec 18th, after v6.4\n>was released (see below)...looking at the v6.4.2 distribution, there is a\n>template for just 'alpha' that was added June 12, 1998...this split\n>doesn't appear to have been done for/before teh v6.4.2 split though...\n\nEhm, let me clarify this thing. This file is part of a patch I submitted a\nfew days (or maybe hours) before 6.4.2 was released. Bruce applied the\nwhole patch to the 6.5 tree, but only a part to the 6.4.2 tree,\nbecause he considered that the whole patch was too \"dangerous\" to apply\n(because there wasn't time to test whether it affected the other\nplatforms). So, the 6.4.2 tree has template/alpha, and the 6.5 tree\nhas template/alpha_cc and template/alpha_gcc, but not template/alpha.\n\nSince then, I haven't had the time to download and test the 6.5 branch to\nsee how things are now (for DU 4, at least). I will do it ASAP.\n\nBTW, Compaq has changed the operating system's name from \"Digital Unix\" to\n\"Tru64 Unix\". I will update the docs to reflect this change when I have\nthe time.\n\n-- \n-------------------------------------------------------------------\nPedro Jos� Lobo Perea Tel: +34 91 336 78 19\nCentro de C�lculo Fax: +34 91 331 92 29\nEUIT Telecomunicaci�n - UPM e-mail: [email protected]\n\n",
"msg_date": "Thu, 4 Feb 1999 11:29:41 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
},
{
"msg_contents": "Thank you for your explanatiopn. Now everything seems clear. My\nunderstanding that 6.4 had alpla_cc was apparently wrong. Sorry for\nthe confusion.\n\n> Ehm, let me clarify this thing. This file is part of a patch I submitted a\n> few days (or maybe hours) before 6.4.2 was released. Bruce applied the\n> whole patch to the 6.5 tree, but only a part to the 6.4.2 tree,\n> because he considered that the whole patch was too \"dangerous\" to apply\n> (because there wasn't time to test whether it affected the other\n> platforms). So, the 6.4.2 tree has template/alpha, and the 6.5 tree\n> has template/alpha_cc and template/alpha_gcc, but not template/alpha.\n> \n> Since then, I haven't had the time to download and test the 6.5 branch to\n> see how things are now (for DU 4, at least). I will do it ASAP.\n> \n> BTW, Compaq has changed the operating system's name from \"Digital Unix\" to\n> \"Tru64 Unix\". I will update the docs to reflect this change when I have\n> the time.\n\nOh, \"Tru64 Unix\"? How can I pronounce it?:-)\n---\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Feb 1999 23:13:02 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] template/alpha_cc "
}
] |
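For readers unfamiliar with the CVS mechanics discussed above, the commands below show how to inspect a file's tags and, hypothetically, how a file created after a release could still be attached to the release branch. The revision number and the choice of applying REL6_4 as a branch tag are assumptions; only the inspection commands are meant literally.

# show the revisions and symbolic tags a file carries
cvs log src/template/alpha_cc
cvs status -v src/template/alpha_cc

# hypothetical: attach the release branch tag to an existing revision
cvs tag -b -r 1.1 REL6_4 src/template/alpha_cc

# a checkout of the release branch would then include the file
cvs checkout -r REL6_4 pgsql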
[
{
"msg_contents": "And another sync-with-gram.y patch.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!",
"msg_date": "Wed, 3 Feb 1999 08:57:46 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ecpg patch"
},
{
"msg_contents": "\nApplied...\n\nOn Wed, 3 Feb 1999, Michael Meskes wrote:\n\n> And another sync-with-gram.y patch.\n> \n> Michael\n> -- \n> Michael Meskes | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\n> Tel.: (+49) 2431/72651 | Use Debian GNU/Linux!\n> Email: [email protected] | Use PostgreSQL!\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Feb 1999 00:58:09 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ecpg patch"
}
] |
[
{
"msg_contents": "Hi,\nI update my postgresql 6.3.x to 6.4.2 this day and surprise, the java\nprogram that I use to use Blob don't match. \nWhen I try to create a new blob, the postmaster tell me that's an error\noccur and that I must to reconnect (it kill my psql application )\nSo I dowmgrad my postgresql, isn't normal ? (it seems the same with\npostgresql.snapshot )\n\nThanks a lot for your help\nNicols Prochazka\n\n\n\n\n--------------------------------------------------------\nNicolas PROCHAZKA\nmailto:[email protected]\nhttp://maxibus.info.unicaen.fr/~prochazka\n\n\"Si y'avait que des objecteurs de conscience sur terre,\n Pour sur, y'aurait moins d'militaires\"\n--------------------------------------------------------\n\n",
"msg_date": "Wed, 3 Feb 1999 14:27:06 +0100 (CET)",
"msg_from": "Nicolas Prochazka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Blob BUG Since 6.4.2"
}
] |
[
{
"msg_contents": "\nHi,\n\nI sometimes need to update records without a SET clause, just to fire a trigger\nthat actually changes the tuple.\n\nCurrently I do:\n\nUPDATE relation SET anyfield=anyfield WHERE whereclause;\n\nIt would be nice to be able to do:\n\nUPDATE relation WHERE whereclause;\n\nI don't know if this will break something, and I'd like to make the change\nmyself, but I'm still a newbie in the postgres internals. Please consider it a\nlowest-priority entry to be put in the whis-list :^)\n\nThanks!\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Wed, 03 Feb 1999 17:36:43 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Minor change for UPDATE"
}
] |
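To make the use case above concrete, here is a small sketch of the pattern Daniele describes: a BEFORE UPDATE trigger does the real work, and the UPDATE exists only to fire it. The table and column names are hypothetical, and PL/pgSQL is just one possible trigger language; the no-op SET is the workaround the proposed syntax would remove.

-- hypothetical trigger that actually changes the tuple
CREATE FUNCTION touch_row() RETURNS opaque AS '
BEGIN
    NEW.updated := now();   -- "updated" is an assumed timestamp column
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER touch_row_trig BEFORE UPDATE ON relation
    FOR EACH ROW EXECUTE PROCEDURE touch_row();

-- today: a dummy assignment is needed just to fire the trigger
UPDATE relation SET anyfield = anyfield WHERE anyfield IS NOT NULL;

-- proposed: no SET clause at all
-- UPDATE relation WHERE anyfield IS NOT NULL;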
[
{
"msg_contents": "Some compilation problem found while compiling\nPostgreSql-6.4.2 on OSF1 V4.0 878 alpha\n\n1. configure (problem because . not in $PATH)\n 743c743\n< . ./conftest.sh\n---\n> . conftest.sh\n\n2. gram.y did not compilei by yacc (on FreeBSD too)\n\n# woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y \n# yacc: f - maximum table size exceeded\n\nfixed by using bison \n\n3. src/interfaces/libpq and all dependent sources (like perl5 interface)\ndidn't compile because native cc don't support tag const\nfixed by\n #define const \nin apropriate places\n\n4. (old bug) Inastalation fail if prefix directory \n(e.g /usr/local/pgsql) doesn't exists\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], AIM: Samersoff\n http://devnull.wplus.net\n\n",
"msg_date": "Wed, 3 Feb 1999 20:43:46 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compiling on DEC Alpha OSF1"
}
] |
[
{
"msg_contents": "Could anyone tell me whether the EXEC SQL PREPARE statement is to be\ninterpreted by the preprocessor or during run-time? That is can it be placed\noutside a function? Does it have to stand above the DECLARE or EXECUTE\nstatement that references it or does it have to be executed prior to this\nstatement? \n\nThe DECLARE statement AFAIK is to be processed by the preprocessor.\nHopefully that one is correct too. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 3 Feb 1999 19:29:52 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "preprocessor question: prepare statement"
},
{
"msg_contents": "> Could anyone tell me whether the EXEC SQL PREPARE statement is to be\n> interpreted by the preprocessor or during run-time? That is can it be \n> placed outside a function? Does it have to stand above the DECLARE or \n> EXECUTE statement that references it or does it have to be executed \n> prior to this statement?\n> The DECLARE statement AFAIK is to be processed by the preprocessor.\n> Hopefully that one is correct too. :-)\n\nPretty sure that PREPARE is a run-time thing, since you can dynamically\nbuild the sql statement fed to PREPARE.\n\nIn fact, looking at my Ingres docs:\n\n\"Dynamic SQL has four statements that are exclusively used in a dynamic\nprogram: 'execute immediate', 'prepare', 'execute', and 'describe'.\"\n\nIn another section, it says:\n\n\"The 'prepare' statement tells the DBMS to encode the dynamically built\nstatement and assign it the specified name. After a statement is\nprepared, the program can execute the statement one or more times within\na transaction by issuing the 'execute' statement and specifying the\nstatement name. This method improves performance if your program must\nexecute the same statement many times in a transaction. When you commit\na transaction, all statements that were prepared during the transaction\nare discarded.\"\n\nHope this helps...\n\n - Tom\n",
"msg_date": "Thu, 04 Feb 1999 02:47:20 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] preprocessor question: prepare statement"
},
{
"msg_contents": "On Thu, Feb 04, 1999 at 02:47:20AM +0000, Thomas G. Lockhart wrote:\n> Pretty sure that PREPARE is a run-time thing, since you can dynamically\n> build the sql statement fed to PREPARE.\n\nThat is also possible with a compile time statement as you give it a\nvariable name not the variable data as argument.\n\nBut all in all I agree, and I will try to implement it that way.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 4 Feb 1999 11:30:12 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] preprocessor question: prepare statement"
},
{
"msg_contents": "Okay, I got it going as long as no variables are involved, a statement is\nprepared only once and not deallocated. Hopefully I get it complete before\nfreezing 6.5, but I'm not sure I make that deadline.\n\nAm I correct that the wildcard used to represent a variable is\nimplementation defined? I know Oracle uses something like :var1, :var2 etc.\nThis requires parsing of the statement (right now I just store it for later\nuse). I'd prefer to use \";;\" since this is was ecpg uses internally. What is\nused on other systems?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 4 Feb 1999 13:13:42 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] preprocessor question: prepare statement"
}
] |
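As a reference point for the discussion above, dynamic statements in standard embedded SQL usually look like the sketch below. The table, column and host variable names are made up, and the '?' parameter marker is the SQL-standard style - the thread itself notes that the marker ecpg will accept was still being decided, so this is not a statement about ecpg's final syntax.

EXEC SQL BEGIN DECLARE SECTION;
    char  stmt[256];
    int   val;
EXEC SQL END DECLARE SECTION;

    /* build the statement text at run time, prepare it once ... */
    sprintf(stmt, "UPDATE mytable SET flag = 1 WHERE id = ?");
    EXEC SQL PREPARE upd FROM :stmt;

    /* ... then execute it as often as needed with different values */
    val = 42;
    EXEC SQL EXECUTE upd USING :val;

    /* and release it when done */
    EXEC SQL DEALLOCATE PREPARE upd;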
[
{
"msg_contents": "Some compilation problem found while compiling\nPostgreSQL-6.4.2 on OSF1 V4.0 878 alpha\n\n1. configure (problem because . not in $PATH)\n 743c743\n < . ./conftest.sh\n ---\n > . conftest.sh\n\n2. gram.y did not compile by yacc (on FreeBSD too)\n\n# woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y \n# yacc: f - maximum table size exceeded\n\nfixed by using bison \n\n3. src/interfaces/libpq and all dependent sources (like perl5 interface)\ndidn't compile because native cc don't support tag const\nfixed by\n #define const \n in apropriate places\n\n 4. (old bug) Inastalation fail if prefix directory \n (e.g /usr/local/pgsql) doesn't exists\n\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], AIM: Samersoff\n http://devnull.wplus.net\n\n",
"msg_date": "Wed, 3 Feb 1999 21:51:59 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "DEC OSF1 Compilation problems"
},
{
"msg_contents": "> Some compilation problem found while compiling\n> PostgreSQL-6.4.2 on OSF1 V4.0 878 alpha\n> \n> 1. configure (problem because . not in $PATH)\n> 743c743\n> < . ./conftest.sh\n> ---\n> > . conftest.sh\n\nFixed in current tree.\n\n> \n> 2. gram.y did not compile by yacc (on FreeBSD too)\n> \n> # woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y \n> # yacc: f - maximum table size exceeded\n> \n> fixed by using bison \n\nNeed bison, though gram.c is newer than gram.y, or it should be in the\ntar file.\n\n> \n> 3. src/interfaces/libpq and all dependent sources (like perl5 interface)\n> didn't compile because native cc don't support tag const\n> fixed by\n> #define const \n> in apropriate places\n\nHard to figure they don't support const.\n\n> \n> 4. (old bug) Inastalation fail if prefix directory \n> (e.g /usr/local/pgsql) doesn't exists\n\nI am fixing this now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Feb 1999 14:12:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> 4. (old bug) Inastalation fail if prefix directory \n>> (e.g /usr/local/pgsql) doesn't exists\n\n> I am fixing this now.\n\nAre you sure it's a good idea to do anything other than fail?\n\nIn a typical setup, the parent directory of the install target\n(ie, /usr/local) is going to be owned by root. If you are able\nto create the install target it means the installation is being\ndone as root. We do *not* want to encourage people to install\nas root, methinks.\n\nI think the correct response is to leave the software alone and\nfix the documentation to remind people to create the top-level\ndirectory (as root) before running \"make install\" (as postgres).\n/usr/local/pgsql should be owned by postgres, so the install can\nproceed from that point without root privileges.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Feb 1999 15:06:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> 4. (old bug) Inastalation fail if prefix directory \n> >> (e.g /usr/local/pgsql) doesn't exists\n> \n> > I am fixing this now.\n> \n> Are you sure it's a good idea to do anything other than fail?\n> \n> In a typical setup, the parent directory of the install target\n> (ie, /usr/local) is going to be owned by root. If you are able\n> to create the install target it means the installation is being\n> done as root. We do *not* want to encourage people to install\n> as root, methinks.\n> \n> I think the correct response is to leave the software alone and\n> fix the documentation to remind people to create the top-level\n> directory (as root) before running \"make install\" (as postgres).\n> /usr/local/pgsql should be owned by postgres, so the install can\n> proceed from that point without root privileges.\n\nOK, I started to agree with you after I tried adding it. I will leave\nit alone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Feb 1999 15:10:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> > 2. gram.y did not compile by yacc (on FreeBSD too)\n> > fixed by using bison\n> Need bison, though gram.c is newer than gram.y, or it should be in the\n> tar file.\n\nWe've gotten a few reports on this, so I'll guess that we have a too-new\ngram.y in the distribution (again) :(\n\nThe workaround, besides installing bison, is to type\n\n $ touch backend/parser/gram.c\n\nbefore typing\n\n $ make\n $ make install\n\n - Tom\n",
"msg_date": "Thu, 04 Feb 1999 02:52:41 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> > 3. src/interfaces/libpq and all dependent sources (like perl5 interface)\n> > didn't compile because native cc don't support tag const\n> > fixed by\n> > #define const \n> > in apropriate places\n> \n> Hard to figure they don't support const.\n\nSorry, it's my mistake, better say - problem around supporting const\n(on Solaris it looks to be the same, but I'm not sure)\n\nThis is compiler message\n\n----------------------------------------\ncc: Error: fe-connect.c, line 173: In this declaration, \nparameter 1 has a different type than specified in an earlier\n declaration of this function.\nPQconnectdb(const char *conninfo)\n^\ncc: Error: fe-connect.c, line 173: In this declaration, \nthe type of \"PQconnectdb\" is not compatible with the type of a \nprevious declaration of \"PQconnectdb\" at line number 153 in file libpq-fe.h.\n----------------------------------------------------------------------------\n\nReal declaration\n\n (fe-connect.c:173) :\nPGconn * PQconnectdb(const char *conninfo)\n\n(libpq-fe.h:153):\n extern PGconn *PQconnectdb(const char *conninfo);\n \n--\nDmitry Samersoff\n DM\\S, [email protected], AIM: Samersoff\n http://devnull.wplus.net\n\n",
"msg_date": "Thu, 4 Feb 1999 12:25:09 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "On Wed, 3 Feb 1999, Dmitry Samersoff wrote:\n\n>Some compilation problem found while compiling\n>PostgreSQL-6.4.2 on OSF1 V4.0 878 alpha\n>\n>1. configure (problem because . not in $PATH)\n> 743c743\n> < . ./conftest.sh\n> ---\n> > . conftest.sh\n\nI don't know why this happens in 6.4.2, because it didn't in 6.4.1. You\nhave applied the right fix/workaround.\n>\n>2. gram.y did not compile by yacc (on FreeBSD too)\n>\n># woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y \n># yacc: f - maximum table size exceeded\n>\n>fixed by using bison \n\nI had always used bison. I will add this to the DU FAQ.\n\n>3. src/interfaces/libpq and all dependent sources (like perl5 interface)\n>didn't compile because native cc don't support tag const\n>fixed by\n> #define const \n> in apropriate places\n\nThe right fix is to add \"-std\" to the CFLAGS line in template/alpha. This\nis already fixed in 6.5 beta.\n\n-- \n-------------------------------------------------------------------\nPedro Jos� Lobo Perea Tel: +34 91 336 78 19\nCentro de C�lculo Fax: +34 91 331 92 29\nEUIT Telecomunicaci�n - UPM e-mail: [email protected]\n\n",
"msg_date": "Thu, 4 Feb 1999 11:00:50 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> >2. gram.y did not compile by yacc (on FreeBSD too)\n> ># woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y\n> ># yacc: f - maximum table size exceeded\n> >fixed by using bison\n> I had always used bison. I will add this to the DU FAQ.\n\nThis should not be required in principle, but it is easy with cvs to\naccidentally get the time tags on gram.y and gram.c out of sync, so that\na \"cvs checkout\" causes Make to think gram.c needs to be rebuilt. I\nthink that v6.4.2 ended up with the times out of sync, as have other\nreleases in the past.\n\nThe other large parser, for ecpg, should probably ship both .y and .c\nfiles, but does not yet, so perhaps bison needs to be used anyway. We\nshould fix this in an upcoming release.\n\nTo fix the cvs checkout problem, we might consider having a canned\nroutine which updates these time tags after a cvs checkout and before\nthe tar file is constructed...\n\n - Tom\n",
"msg_date": "Fri, 05 Feb 1999 03:24:22 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> > >2. gram.y did not compile by yacc (on FreeBSD too)\n> > ># woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y\n> > ># yacc: f - maximum table size exceeded\n> > >fixed by using bison\n> > I had always used bison. I will add this to the DU FAQ.\n> \n> This should not be required in principle, but it is easy with cvs to\n> accidentally get the time tags on gram.y and gram.c out of sync, so that\n> a \"cvs checkout\" causes Make to think gram.c needs to be rebuilt. I\n> think that v6.4.2 ended up with the times out of sync, as have other\n> releases in the past.\n> \n> The other large parser, for ecpg, should probably ship both .y and .c\n> files, but does not yet, so perhaps bison needs to be used anyway. We\n> should fix this in an upcoming release.\n> \n> To fix the cvs checkout problem, we might consider having a canned\n> routine which updates these time tags after a cvs checkout and before\n> the tar file is constructed...\n\nIs there some way to do this fixup in the makefile?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 22:38:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "On Thu, Feb 04, 1999 at 10:38:06PM -0500, Bruce Momjian wrote:\n> > The other large parser, for ecpg, should probably ship both .y and .c\n> > files, but does not yet, so perhaps bison needs to be used anyway. We\n> > should fix this in an upcoming release.\n\nI have included both files in my latest patch.\n\n> Is there some way to do this fixup in the makefile?\n\nTell me what to do.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 5 Feb 1999 13:32:03 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> \n> > >2. gram.y did not compile by yacc (on FreeBSD too)\n> > ># woland(dms)~/postgresql-6.4.2/src/backend/parser>yacc -d gram.y\n> > ># yacc: f - maximum table size exceeded\n> > >fixed by using bison\n> > I had always used bison. I will add this to the DU FAQ.\n> \n> This should not be required in principle, but it is easy with cvs to\n> accidentally get the time tags on gram.y and gram.c out of sync, so that\n> a \"cvs checkout\" causes Make to think gram.c needs to be rebuilt. I\n> think that v6.4.2 ended up with the times out of sync, as have other\n> releases in the past.\n\nIf someone with access to CVSROOT on the CVS server would be so kind,\nit's easy to set up a rule in commitinfo to automatically create\ngram.c from gram.y, thus forcing a new, properly timestamped version\nof gram.c to be in the archive.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "5 Feb 1999 08:28:39 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> > To fix the cvs checkout problem, we might consider having a canned\n> > routine which updates these time tags after a cvs checkout and \n> > before the tar file is constructed...\n> Is there some way to do this fixup in the makefile?\n\nWell, maybe, but that would defeat the purpose of the makefile. Unless\nwe had a \"make release-tarball\" at the top level? That could have some\nembedded file touches in it...\n\nI like geek+'s suggestion to build with a commit rule in cvs, if it can\nbe done without having the whole tree set up and made. otoh this would\nincrease the maintenance effort a bit, since we would have to maintain\nthe rules.\n\n - Tom\n",
"msg_date": "Fri, 05 Feb 1999 13:51:20 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> I have included both files in my latest patch.\n\nGreat. Bruce and scrappy, whoever applies this will need to add this as\na new file in cvs. At the moment the file is named y.tab.c (and\ny.tab.h), but we might want to consider renaming it as is done in the\nmain parser to keep the names unique within the installation (for\nexample, y.tab.c is probably also a temporary file in\nsrc/backend/parser/).\n\n> > Is there some way to do this fixup in the makefile?\n> Tell me what to do.\n\nDoing this in the local makefile is probably dangerous or at least\nannoying. Let's not be hasty in adopting a fix for this out of sync\nproblem. We should remember that any heuristic like this might also mask\nthe fact that we have forgotten to update the gram.c before a release.\n\nimho the best way to ensure sync is for Bruce, myself, and anyone else\nwho commits parser stuff to commit gram.y and scan.l first, then gram.c\nand scan.c afterwards. The cvs time tags will be consistant then.\n\nAlso, our pre-release checking apparently does not alway catch this\nproblem; perhaps we should figure out a way to build with a dummy\nyacc/bison for this final verification, so things barf if it is actually\ninvoked.\n\n - Tom\n",
"msg_date": "Fri, 05 Feb 1999 14:02:20 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> On Thu, Feb 04, 1999 at 10:38:06PM -0500, Bruce Momjian wrote:\n> > > The other large parser, for ecpg, should probably ship both .y and .c\n> > > files, but does not yet, so perhaps bison needs to be used anyway. We\n> > > should fix this in an upcoming release.\n> \n> I have included both files in my latest patch.\n> \n> > Is there some way to do this fixup in the makefile?\n> \n> Tell me what to do.\n\nI don't know how to do it reliably. If someone edits gram.y, we want\ngram.c to be regenerated, but we don't if gram.y is the one from the tar\nfile.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 12:24:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "On Fri, Feb 05, 1999 at 02:02:20PM +0000, Thomas G. Lockhart wrote:\n> Great. Bruce and scrappy, whoever applies this will need to add this as\n> a new file in cvs. At the moment the file is named y.tab.c (and\n> y.tab.h), but we might want to consider renaming it as is done in the\n> main parser to keep the names unique within the installation (for\n> example, y.tab.c is probably also a temporary file in\n> src/backend/parser/).\n\nI did that already. They are named preproc.c resp. preproc.h now.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sat, 6 Feb 1999 12:47:32 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
}
] |
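One way to implement the "touch the derived files before rolling the tarball" idea floated above is a small pre-release make target. The target name and the exact file list are illustrative assumptions (the thread names gram.c/scan.c for the backend parser and preproc.c/preproc.h for ecpg); the point is only that the generated C files must end up newer than their .y/.l sources so end users never need yacc or bison.

# hypothetical target run from src/ before building the release tarball
# (recipe lines must be indented with a tab)
release-prep:
	touch backend/parser/gram.c backend/parser/scan.c
	touch interfaces/ecpg/preproc/preproc.c interfaces/ecpg/preproc/preproc.h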
[
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > > So, I guess my question is: how costly are joins? I've heard that\n> > > > Postgres pretty much \"pukes\" (in terms of speed) when you're trying \n> > > > to do anything more than 6 table joins in one query. This leads\n> > > > me to believe that joins are fairly costly... ???? \n> > > \n> > > i've noticed a pretty drastic slowdown going from a 4 table join\n> > > (instantaneous) to a 5 table join (15-20 seconds) i don't think i've\n> > > tried 6 tables yet with the same database. \n> \n> i'll attach the output of my query explains if you want to look at them\n> in detail, but here's a quick overview: all of my tables that are being\n> joined have 125000-200000 records, all have a primary key, and the join\n> is done by matching the primary key in the respective tables. i'm doing\n> selects on 4,5,6,7,8 tables testing both with and without GEQO. 7 and 8\n> tables caused a crash (apparently out of memory), 4-6 were pretty\n> reasonable. they all came up with sensible plans, namely:\n> Nested Loop \n> -> Nested Loop \n> -> Nested Loop \n> -> Index Scan \n> -> Index Scan \n> -> Index Scan \n> -> Index Scan \n> with the respective levels of index scans/nested loops for each number\n> of tables in the join.\n> \n> although i did find some improvement by lowering the GEQO threshold to\n> 6, it wasn't really the answer i was hoping for. using GEQO was the\n> loser (time wise) each time on similar selects joining 4 and 5 tables. \n> GEQO came up with a better plan (lower cost), but took longer to come up\n> with the plan, making it the loser unless you can prepare the statement\n> beforehand (which would be nice and i know has been discussed already). \n> apparently 4 is the magic number for \"instantaneous\" joins in my case, 5\n> gets you up to about 5 seconds (if you type the query right), and 6 is\n> just nasty without GEQO (it's still churning whereas with GEQO it took\n> ~20 seconds) \n\nSo 6 was your magic number for GEQO in 6.4, though I should comment that\nin 6.5, we will use tables+indexes as the geqo start value, so with 5\ntables and one index on each you had combined value of 10.\n\n> \n> so now the question is, what is the relationship between cost and how\n> long a query takes? for example, one of my queries show up with\n> cost=14387.49 size=165779 width=664(6 table join w/o GEQO); another\n> shows up as cost=14383.39 size=165779 width=644 (5 table join w/o GEQO)\n> yet the first one takes several minutes whereas the second only takes a\n> few seconds. i don't really know what other questions i should be\n> asking here. my guess is that there's some kind of memory limit that's\n> being hit causing the jump from 4->5 to take more time than it\n> apparently should causing it to take multiple (and therefore slower)\n> steps to get the end result. is this a possibility? or am i way off\n> base?\n\n[CC to hackers.]\n\nThe cost really is just for comparison to other plans. Not sure you can\nreally make any meaning out of the number, and geqo probably uses a\ndifferent measurement for cost. The 4-5 setting is about what I\nexpected, and I realize GEQO is still too slow for large joins. I am\nlooking at what can be done with this, in trying to make non-geqo faster\nfor joins in the 4-10 range, so perhaps we can speed those up, and use\ngeqo only for really large joins. 
I hope to have something for 6.5, but\nam still researching.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Feb 1999 16:27:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Cost: Big Tables vs. Organized Separation of Data"
}
] |
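For anyone wanting to reproduce the measurements discussed above, the GEQO threshold mentioned ("lowering it to 6") is adjusted with the SET command; the syntax shown is the 6.4-era form as best recalled, and the five-table join is a hypothetical stand-in for the poster's query, so treat this as a sketch rather than exact documentation.

-- let the genetic optimizer take over once 6 relations are involved
SET GEQO TO 'ON=6';

EXPLAIN SELECT a.id
FROM t1 a, t2 b, t3 c, t4 d, t5 e
WHERE a.id = b.id AND b.id = c.id AND c.id = d.id AND d.id = e.id;

-- and the same plan with the exhaustive optimizer for comparison
SET GEQO TO 'OFF';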
[
{
"msg_contents": "I am trying to understand the optimizer, and am doing cleanup at the\nsame time.\n\nWho would call a structure HInfo, and calling everything a clause doesn't\nhelp.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Feb 1999 21:12:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "cleanup of optimizer"
}
] |
[
{
"msg_contents": "I have applied this. D'Arcy feels it is safe, and I think I agree now.\n\nIf anyone has a problem, let us know. He believes it completes\nconstification of the libpq API.\n\n\n> I am sending this patch to hackers because I think it needs some\n> discussion before being added. I'm not 100% sure that there\n> isn't some internal issue with making these changes but so far\n> it seems to work for me.\n> \n> In interfaces/libpq/libpq-fe.h there are some structures that include\n> char pointers. Often one would expect the user to send const strings\n> to the functions using these pointers. The following keeps external\n> programs from failing when full error checking is enabled.\n> \n> \n> *** ../src.original/./interfaces/libpq/libpq-fe.h\tSat Jan 16 07:33:49 1999\n> --- ./interfaces/libpq/libpq-fe.h\tFri Jan 22 07:14:21 1999\n> ***************\n> *** 100,108 ****\n> \t\tpqbool\t\thtml3;\t\t/* output html tables */\n> \t\tpqbool\t\texpanded;\t/* expand tables */\n> \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n> ! \t\tchar\t *fieldSep;\t/* field separator */\n> ! \t\tchar\t *tableOpt;\t/* insert to HTML <table ...> */\n> ! \t\tchar\t *caption;\t/* HTML <caption> */\n> \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n> \t\t\t\t\t\t\t\t * field names */\n> \t} PQprintOpt;\n> --- 100,108 ----\n> \t\tpqbool\t\thtml3;\t\t/* output html tables */\n> \t\tpqbool\t\texpanded;\t/* expand tables */\n> \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n> ! \t\tconst char *fieldSep;\t/* field separator */\n> ! \t\tconst char *tableOpt;\t/* insert to HTML <table ...> */\n> ! \t\tconst char *caption;\t/* HTML <caption> */\n> \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n> \t\t\t\t\t\t\t\t * field names */\n> \t} PQprintOpt;\n> ***************\n> *** 113,124 ****\n> */\n> \ttypedef struct _PQconninfoOption\n> \t{\n> ! \t\tchar\t *keyword;\t/* The keyword of the option\t\t\t*/\n> ! \t\tchar\t *envvar;\t/* Fallback environment variable name\t*/\n> ! \t\tchar\t *compiled;\t/* Fallback compiled in default value\t*/\n> ! \t\tchar\t *val;\t\t/* Options value\t\t\t\t\t\t*/\n> ! \t\tchar\t *label;\t\t/* Label for field in connect dialog\t*/\n> ! \t\tchar\t *dispchar;\t/* Character to display for this field\t*/\n> \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n> \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n> \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n> --- 113,124 ----\n> */\n> \ttypedef struct _PQconninfoOption\n> \t{\n> ! \t\tconst char\t*keyword;\t/* The keyword of the option\t\t\t*/\n> ! \t\tconst char\t*envvar;\t/* Fallback environment variable name\t*/\n> ! \t\tconst char\t*compiled;\t/* Fallback compiled in default value\t*/\n> ! \t\tchar\t\t*val;\t\t/* Options value\t\t\t\t\t\t*/\n> ! \t\tconst char\t*label;\t\t/* Label for field in connect dialog\t*/\n> ! \t\tconst char\t*dispchar;\t/* Character to display for this field\t*/\n> \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n> \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n> \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n> *** ../src.original/./interfaces/libpq/fe-print.c\tFri Jan 22 07:02:10 1999\n> --- ./interfaces/libpq/fe-print.c\tFri Jan 22 07:03:09 1999\n> ***************\n> *** 681,687 ****\n> \t\tp = border;\n> \t\tif (po->standard)\n> \t\t{\n> ! \t\t\tchar\t *fs = po->fieldSep;\n> \n> \t\t\twhile (*fs++)\n> \t\t\t\t*p++ = '+';\n> --- 681,687 ----\n> \t\tp = border;\n> \t\tif (po->standard)\n> \t\t{\n> ! 
\t\t\tconst char\t *fs = po->fieldSep;\n> \n> \t\t\twhile (*fs++)\n> \t\t\t\t*p++ = '+';\n> ***************\n> *** 693,699 ****\n> \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n> \t\t\tif (po->standard || (j + 1) < nFields)\n> \t\t\t{\n> ! \t\t\t\tchar\t *fs = po->fieldSep;\n> \n> \t\t\t\twhile (*fs++)\n> \t\t\t\t\t*p++ = '+';\n> --- 693,699 ----\n> \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n> \t\t\tif (po->standard || (j + 1) < nFields)\n> \t\t\t{\n> ! \t\t\t\tconst char\t *fs = po->fieldSep;\n> \n> \t\t\t\twhile (*fs++)\n> \t\t\t\t\t*p++ = '+';\n> *** ../src.original/./interfaces/libpq/fe-connect.c\tFri Jan 22 07:04:03 1999\n> --- ./interfaces/libpq/fe-connect.c\tFri Jan 22 07:13:09 1999\n> ***************\n> *** 48,54 ****\n> static void freePGconn(PGconn *conn);\n> static void closePGconn(PGconn *conn);\n> static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n> ! static char *conninfo_getval(char *keyword);\n> static void conninfo_free(void);\n> static void defaultNoticeProcessor(void *arg, const char *message);\n> \n> --- 48,54 ----\n> static void freePGconn(PGconn *conn);\n> static void closePGconn(PGconn *conn);\n> static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n> ! static const char *conninfo_getval(const char *keyword);\n> static void conninfo_free(void);\n> static void defaultNoticeProcessor(void *arg, const char *message);\n> \n> ***************\n> *** 172,179 ****\n> PGconn *\n> PQconnectdb(const char *conninfo)\n> {\n> ! \tPGconn\t *conn;\n> ! \tchar\t *tmp;\n> \n> \t/* ----------\n> \t * Allocate memory for the conn structure\n> --- 172,179 ----\n> PGconn *\n> PQconnectdb(const char *conninfo)\n> {\n> ! \tPGconn\t\t *conn;\n> ! \tconst char\t *tmp;\n> \n> \t/* ----------\n> \t * Allocate memory for the conn structure\n> ***************\n> *** 284,291 ****\n> PGconn *\n> PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n> {\n> ! \tPGconn\t *conn;\n> ! \tchar\t *tmp;\n> \n> \t/* An error message from some service we call. */\n> \tbool\t\terror = FALSE;\n> --- 284,291 ----\n> PGconn *\n> PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n> {\n> ! \tPGconn\t\t*conn;\n> ! \tconst char\t*tmp;\n> \n> \t/* An error message from some service we call. */\n> \tbool\t\terror = FALSE;\n> ***************\n> *** 1137,1143 ****\n> \tchar\t *pname;\n> \tchar\t *pval;\n> \tchar\t *buf;\n> ! \tchar\t *tmp;\n> \tchar\t *cp;\n> \tchar\t *cp2;\n> \tPQconninfoOption *option;\n> --- 1137,1143 ----\n> \tchar\t *pname;\n> \tchar\t *pval;\n> \tchar\t *buf;\n> ! \tconst char *tmp;\n> \tchar\t *cp;\n> \tchar\t *cp2;\n> \tPQconninfoOption *option;\n> ***************\n> *** 1343,1350 ****\n> }\n> \n> \n> ! static char *\n> ! conninfo_getval(char *keyword)\n> {\n> \tPQconninfoOption *option;\n> \n> --- 1343,1350 ----\n> }\n> \n> \n> ! static const char *\n> ! conninfo_getval(const char *keyword)\n> {\n> \tPQconninfoOption *option;\n> \n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Feb 1999 22:22:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding some const keywords to external interfaces"
}
] |
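The patch in the thread above changes conninfo_getval() so that it both takes and returns const char *. As a minimal, illustrative sketch of what that buys callers (this is simplified stand-in code, not libpq's actual option table or function body), a const-correct lookup lets string literals be passed and prevents callers from scribbling on the table's values:

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for libpq's option table (assumption). */
    typedef struct { const char *keyword; const char *val; } ConnOption;

    static ConnOption options[] = {
        {"host", "localhost"},
        {"dbname", "template1"},
        {NULL, NULL}
    };

    /* Neither modifies the keyword nor hands back a writable pointer. */
    static const char *
    conninfo_getval(const char *keyword)
    {
        const ConnOption *opt;

        for (opt = options; opt->keyword != NULL; opt++)
            if (strcmp(opt->keyword, keyword) == 0)
                return opt->val;
        return NULL;
    }

    int
    main(void)
    {
        /* A string literal can now be passed without a cast or warning. */
        printf("host = %s\n", conninfo_getval("host"));
        return 0;
    }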
[
{
"msg_contents": "Let's you create a table again.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!",
"msg_date": "Thu, 4 Feb 1999 11:52:03 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "small bug fix for ecpg"
},
{
"msg_contents": "> Let's you create a table again.\n> \n> Michael\n> -- \n> Michael Meskes | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\n> Tel.: (+49) 2431/72651 | Use Debian GNU/Linux!\n> Email: [email protected] | Use PostgreSQL!\n\nI am sure my temp table stuff is going to mess you up. Sorry. No way\nto add that stuff without affecting you.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 12:15:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] small bug fix for ecpg"
},
{
"msg_contents": "On Thu, Feb 04, 1999 at 12:15:06PM -0500, Bruce Momjian wrote:\n> I am sure my temp table stuff is going to mess you up. Sorry. No way\n\nNot exactly. I merged your changes in it and producerd a bug. My fault.\n\n> to add that stuff without affecting you.\n\nNo problem.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 5 Feb 1999 08:53:53 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] small bug fix for ecpg"
}
] |
[
{
"msg_contents": "> >Could anyone tell me whether the EXEC SQL PREPARE statement is to be\n> >interpreted by the preprocessor or during run-time?\n> \n> run time\n> \n> >That is can it be placed outside a function?\n> \n> yes ?\n> \n> > Does it have to stand above the DECLARE or EXECUTE\n> > statement that references it or does it have to be executed prior to\n> this\n> > statement? \n> \n> no, only has to be executed prior \n> \n> > The DECLARE statement AFAIK is to be processed by the preprocessor.\n> > Hopefully that one is correct too. :-)\n> \n> yes :)\n> \n> Andreas\n> \n",
"msg_date": "Thu, 4 Feb 1999 12:47:54 +0100 ",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] preprocessor question: prepare statement"
},
{
"msg_contents": "On Thu, Feb 04, 1999 at 12:47:54PM +0100, Zeugswetter Andreas IZ5 wrote:\n> > run time\n\nGood. That's how it works now.\n\n> > >That is can it be placed outside a function?\n> > \n> > yes ?\n\nNo. :-) Only declarations can be put outside I think.\n\n> > no, only has to be executed prior \n\nGood.\n\n> > > The DECLARE statement AFAIK is to be processed by the preprocessor.\n> > > Hopefully that one is correct too. :-)\n> > \n> > yes :)\n\nThanks.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 4 Feb 1999 19:28:17 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] preprocessor question: prepare statement"
}
] |
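To make the ordering discussed above concrete, here is a minimal ECPG-style sketch: PREPARE is an executable statement, so it lives inside a function and only has to run before the EXECUTE (or DECLARE) that references it. The connection target, statement name and table are invented for illustration, and the syntax is present-day embedded SQL rather than anything guaranteed to pass the 1999 preprocessor:

    #include <stdio.h>

    EXEC SQL INCLUDE sqlca;

    int
    main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        char *stmt = "INSERT INTO demo (id) VALUES (?)";   /* hypothetical table */
        int   id   = 1;
        EXEC SQL END DECLARE SECTION;

        EXEC SQL CONNECT TO template1;

        /* Run-time statement: only needs to execute before the EXECUTE below. */
        EXEC SQL PREPARE ins_demo FROM :stmt;
        EXEC SQL EXECUTE ins_demo USING :id;

        EXEC SQL COMMIT;
        EXEC SQL DISCONNECT;
        return 0;
    }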
[
{
"msg_contents": "Hi,\n\n I'm continuing with the pooled palloc() stuff and am stuck\n with a very strange thing. I've reverted my changes to\n palloc() and am doing all the memory block pool handling now\n in aset.c.\n\n The benefit from this will be that I later can easily make\n palloc() etc. macros.\n\n The new version of the AllocSet...() functions does not use\n ordered set. it manages the block pools itself. Has the same\n 10% speedup and I expect some more from the macro version of\n palloc(). It aligns small allocations to power of 2 for\n better reusability of free'd chunks which are held in 8\n different free lists per alloc set depending on their size.\n It lost the ability of AllocSetDump() - who need's that?\n\n First I found some bad places where memory is used after it\n has been free'd. One was in the portal manager with a portal\n memory context struct! I'm pretty sure that I found all\n because I tested by memset() 'ing all memory on\n AllocSetFree() and AllocSetReset() with different values.\n\n The strange behaviour now is that depending on the blocksize\n and the limit for block/single alloction I use for the pools,\n the portals_p2 regression test fails or not. The failure is\n that the cursor foo24 does not return data if the pools\n blocksize is greater/equal 16K and the smallchunk limit is\n 2K. It returns the correct row if one of them is less. More\n irritating is that it only fails if run inside 'make\n runtest'. If I put multiple portals_p2 tests into the tests\n list, all fail the same. But if the test is run manually with\n the same psql switches, it succeeds.\n\n All this behaviour is identical on two Linux 2.1.88\n installations. One has gcc-2.8.1 and glibc-2.0.13, the other\n gcc-2.7.2.1 and libc.5.\n\n I have absolutely no clue what's going on here. Anyone an\n idea how to track this down?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 4 Feb 1999 22:40:18 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "strange behaviour on pooled alloc"
},
{
"msg_contents": "> Hi,\n> \n> I'm continuing with the pooled palloc() stuff and am stuck\n> with a very strange thing. I've reverted my changes to\n> palloc() and am doing all the memory block pool handling now\n> in aset.c.\n> \n> The benefit from this will be that I later can easily make\n> palloc() etc. macros.\n\nSounds good.\n\n> The new version of the AllocSet...() functions does not use\n> ordered set. it manages the block pools itself. Has the same\n> 10% speedup and I expect some more from the macro version of\n> palloc(). It aligns small allocations to power of 2 for\n> better reusability of free'd chunks which are held in 8\n> different free lists per alloc set depending on their size.\n> It lost the ability of AllocSetDump() - who need's that?\n\nNo one.\n\n> \n> First I found some bad places where memory is used after it\n> has been free'd. One was in the portal manager with a portal\n> memory context struct! I'm pretty sure that I found all\n> because I tested by memset() 'ing all memory on\n> AllocSetFree() and AllocSetReset() with different values.\n\n\nGood.\n\n> The strange behaviour now is that depending on the blocksize\n> and the limit for block/single alloction I use for the pools,\n> the portals_p2 regression test fails or not. The failure is\n> that the cursor foo24 does not return data if the pools\n> blocksize is greater/equal 16K and the smallchunk limit is\n> 2K. It returns the correct row if one of them is less. More\n> irritating is that it only fails if run inside 'make\n> runtest'. If I put multiple portals_p2 tests into the tests\n> list, all fail the same. But if the test is run manually with\n> the same psql switches, it succeeds.\n> \n> All this behaviour is identical on two Linux 2.1.88\n> installations. One has gcc-2.8.1 and glibc-2.0.13, the other\n> gcc-2.7.2.1 and libc.5.\n> \n> I have absolutely no clue what's going on here. Anyone an\n> idea how to track this down?\n\nMy recommendation is to apply the fix and let others debug it. Someone\nwill find the cause. Just give them a reproducable test case. In many\ncases, more eyes or another OS shows the error much clearer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Feb 1999 16:55:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behaviour on pooled alloc"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > The strange behaviour now is that depending on the blocksize\n> > and the limit for block/single alloction I use for the pools,\n> > the portals_p2 regression test fails or not.\n> > [...]\n> > I have absolutely no clue what's going on here. Anyone an\n> > idea how to track this down?\n>\n> My recommendation is to apply the fix and let others debug it. Someone\n> will find the cause. Just give them a reproducable test case. In many\n> cases, more eyes or another OS shows the error much clearer.\n\n New version of AllocSet...() functions is committed. palloc()\n is a macro now. The memory eating problem of COPY FROM,\n INSERT ... SELECT and UPDATES on a table that has constraints\n is fixed (new file nodes/freefuncs.c).\n\n The settings in aset.c aren't optimal for now, because the\n settings in place force the portals_p2 test to fail (at least\n here). Some informations for those who want to take a look at\n it follow.\n\n Reproducing the bug:\n\n The bug can be reproduced after the regression test has\n been run by running only portals_p2.sql.\n\n To cause the error, the postmaster must be started with\n -B64 (default) and at least one environment variable\n (e.g. PGDATESTYLE), that causes psql to send a SET on\n connection must be set.\n\n If -B is greater than 64, AllocSetAlloc() put's the\n allocation for the buffer reference counts in the\n execution state EState into it's own malloc() area, not\n into a smallchunk block. The problem disappears.\n\n If the ALLOC_BLOCK_SIZE (in aset.c) is changed to 8192,\n the problem also disappears.\n\n If none of the mentioned environment variables is set,\n the BEGIN from the regression test is the first command\n sent to the backend and the problem disappears too. But\n adding a simple BEGIN; END; to the top of the test forces\n it to appear again, so it isn't in the variable setting\n code.\n\n Guessings:\n\n The symptom is that in the case of many portals on a big\n table rows that are there don't show up. Each cursor\n declaration results in it's own ExecutorStart(), where\n the buffer reference count is saved into the newly\n created execution state and reset to zero. Later on\n ExecutorEnd() these states are restored.\n\n These disappearing rows might have to do with unpinned\n buffers that are expected to be pinned.\n\n Since it depends on whether the allocation for the saved\n reference counts is taken from a block or allocated\n separately, I think some counts get corrupted from\n somewhere else.\n\n It also depends on the blocksize, one more point that it\n might be from somewhere else because the refcount areas\n must live in the same block with some other allocation\n together.\n\n I'll keep on debugging, but would be very appreciated if\n someone could help.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 6 Feb 1999 18:28:16 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behaviour on pooled alloc"
}
] |
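For readers trying to picture the allocator change in the thread above, here is a minimal sketch of the power-of-two rounding Jan describes, mapping a request onto one of the eight per-set free lists. The count of eight lists comes from his description; the smallest chunk size and the handling of oversized requests are assumptions:

    #include <stddef.h>

    #define ALLOC_MINBITS   4       /* assumption: smallest chunk is 16 bytes */
    #define ALLOC_FREELISTS 8       /* eight free lists, per the description  */

    /* Round a request up to the next power of two and pick the free list
     * that holds chunks of that size.  Requests larger than the biggest
     * chunk class would be handled separately (e.g. their own block). */
    static int
    freelist_index(size_t size)
    {
        int     idx = 0;
        size_t  chunk = (size_t) 1 << ALLOC_MINBITS;

        while (chunk < size && idx < ALLOC_FREELISTS - 1)
        {
            chunk <<= 1;
            idx++;
        }
        return idx;
    }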
[
{
"msg_contents": "Could someone point me to where the operators are defined?\nI'm trying to define a lower for varchar. \"Why?\", you may ask, so that\nI can create a functional index on the lower of a varchar column.\nThanks,\n\t-DEJ\n",
"msg_date": "Thu, 4 Feb 1999 17:06:29 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "where to declare system operators"
}
] |
[
{
"msg_contents": "I've been seeing the following behavior with the CVS sources for the\nlast several days:\n\n$ psql regression\nregression=> SELECT * INTO TABLE ramp FROM road ;\nSELECT\nregression=> \\d ramp\nCouldn't find table ramp!\nregression=> SELECT * INTO TABLE ramp FROM road ;\nERROR: ramp relation already exists\nregression=> SELECT * FROM ramp ;\n< works fine, produces plenty of output... >\n\nIf you exit psql and start a new session, \"ramp\" has disappeared\nentirely. This is causing the create_view regression test to fail,\nsince it depends on a table made like this in a prior test.\n\nMy guess is that this is an unwanted side effect of the \"temp table\"\ncode. Is anyone else seeing the same, or am I looking for a\nplatform-specific bug? (gcc 2.7.2.2 on HPUX 9.03, for the record.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Feb 1999 23:53:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT INTO TABLE busted?"
},
{
"msg_contents": "> I've been seeing the following behavior with the CVS sources for the\n> last several days:\n> \n> $ psql regression\n> regression=> SELECT * INTO TABLE ramp FROM road ;\n> SELECT\n> regression=> \\d ramp\n> Couldn't find table ramp!\n> regression=> SELECT * INTO TABLE ramp FROM road ;\n> ERROR: ramp relation already exists\n> regression=> SELECT * FROM ramp ;\n> < works fine, produces plenty of output... >\n> \n> If you exit psql and start a new session, \"ramp\" has disappeared\n> entirely. This is causing the create_view regression test to fail,\n> since it depends on a table made like this in a prior test.\n> \n> My guess is that this is an unwanted side effect of the \"temp table\"\n> code. Is anyone else seeing the same, or am I looking for a\n> platform-specific bug? (gcc 2.7.2.2 on HPUX 9.03, for the record.)\n\nVery likely the TEMP table stuff, though I think the changes only take\naffect when you do a temp table. I am passing the create_view\nregression test here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 00:26:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO TABLE busted?"
},
{
"msg_contents": "Have you done a fresh initdb recently?\n\n> I've been seeing the following behavior with the CVS sources for the\n> last several days:\n> \n> $ psql regression\n> regression=> SELECT * INTO TABLE ramp FROM road ;\n> SELECT\n> regression=> \\d ramp\n> Couldn't find table ramp!\n> regression=> SELECT * INTO TABLE ramp FROM road ;\n> ERROR: ramp relation already exists\n> regression=> SELECT * FROM ramp ;\n> < works fine, produces plenty of output... >\n> \n> If you exit psql and start a new session, \"ramp\" has disappeared\n> entirely. This is causing the create_view regression test to fail,\n> since it depends on a table made like this in a prior test.\n> \n> My guess is that this is an unwanted side effect of the \"temp table\"\n> code. Is anyone else seeing the same, or am I looking for a\n> platform-specific bug? (gcc 2.7.2.2 on HPUX 9.03, for the record.)\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 00:27:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO TABLE busted?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am passing the create_view regression test here.\n\nI was afraid you would say that. I'll start digging.\n\n> Have you done a fresh initdb recently? \n\nYup, this is starting from \"distclean\" sources and an empty\ninstall directory.\n\nI'm seeing several other regress failures that seem to be caused by\nthe same SELECT INTO problem. Most are the same \"no such table\"\nfailure in a later test session, but the \"transactions\" test\nactually coredumps:\n\nQUERY: BEGIN;\nQUERY: SELECT * INTO TABLE xacttest FROM aggtest;\nQUERY: INSERT INTO xacttest (a, b) VALUES (777, 777.777);\nQUERY: END;\nQUERY: SELECT a FROM xacttest WHERE a > 100;\npqReadData() -- backend closed the channel unexpectedly.\n\nI did a little bit of corefile-entrails-examining and found\nthat heapgettuple was hitting a bad pointer, but ran out of\nenergy before getting further than that. If that rings any\nbells please let me know. I won't have time to look at this\nmore until Saturday.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Feb 1999 01:13:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO TABLE busted? "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I am passing the create_view regression test here.\n> \n> I was afraid you would say that. I'll start digging.\n> \n> > Have you done a fresh initdb recently? \n> \n> Yup, this is starting from \"distclean\" sources and an empty\n> install directory.\n> \n> I'm seeing several other regress failures that seem to be caused by\n> the same SELECT INTO problem. Most are the same \"no such table\"\n> failure in a later test session, but the \"transactions\" test\n> actually coredumps:\n\nOne idea. Can you cvs a version before the temp table stuff, and see if\nthey pass?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 12:20:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO TABLE busted?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I'm seeing several other regress failures that seem to be caused by\n>> the same SELECT INTO problem.\n\n> One idea. Can you cvs a version before the temp table stuff, and see if\n> they pass?\n\nThey were all passing a week or so ago. Since it works for you (and\napparently for the rest of the crew), I have to assume there's some\nplatform-dependent bug in the new temp table stuff. I'll try to dig\ninto it tomorrow --- but if you have any thoughts about the most likely\nplaces to look, please let me know.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Feb 1999 13:07:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO TABLE busted? "
},
{
"msg_contents": "I found part of the SELECT INTO TABLE problem: at line 2872 of gram.y,\n\n\tn->istemp = (bool)((A_Const *)lfirst($4))->val.val.ival;\n\nthe cast should be to Value* not A_Const*. On my machine, an\nuninitialized field is picked up and dropped into n->istemp,\nso that the system sometimes interprets SELECT INTO TABLE\nas SELECT INTO TEMP TABLE. Ooops.\n\nWith this fix, the regression tests pass again (except the \"don't know\nwhether nodes of type 600 are equal\" problem is still there).\n\nHowever, I can now report that there's a second bug involving\ntrying to access a temp table after end of transaction.\nThe query series (in the regression database)\n\nBEGIN;\nSELECT * INTO TEMP TABLE xacttest FROM aggtest;\nINSERT INTO xacttest (a, b) VALUES (777, 777.777);\nEND;\nSELECT a FROM xacttest WHERE a > 100;\n\ncrashes the backend. It seems to think that xacttest still exists,\nbut it chokes trying to retrieve tuples from it. (Whether a non-temp\ntable xacttest exists doesn't seem to matter, btw.)\n\nAm I right in thinking that the temp table should disappear\nat END TRANSACTION?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Feb 1999 15:41:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: SELECT INTO TABLE busted? "
}
] |
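Tom's gram.y fix above is a one-token change in the grammar action: the list element is read through the wrong node type, so an uninitialized field lands in n->istemp. A sketch of the before/after follows (a grammar-file fragment, not standalone code; the corrected field path is inferred from the node layouts, since the message only says the cast should be to Value *):

    /* Before: treats the element as an A_Const and picks up garbage */
    n->istemp = (bool) ((A_Const *) lfirst($4))->val.val.ival;

    /* After: the element is really a Value node */
    n->istemp = (bool) ((Value *) lfirst($4))->val.ival;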
[
{
"msg_contents": "Due to consistently losing email via usa.net I have changed my\nsubscriptions. I try to do all postgresql related stuff as\[email protected] for now.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 5 Feb 1999 08:54:46 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Email change"
}
] |
[
{
"msg_contents": "[ Redirected to hackers, as it's now way off-topic for pgsql-interfaces ]\n\nPeter T Mount <[email protected]> writes:\n> [ trying to test whether Postgres can split a huge database into\n> multiple 2-Gb files ]\n> The problem I have is that it takes 4 hours for a table to reach 2Gb on my\n> system, so it's a slow process :-(\n\nI had a thought about this --- is there a #define somewhere that sets\nthe size at which the system decides it needs to split a table?\n(If not, shouldn't there be?) If there is, you could build a debug\ncopy of Postgres in which the value is set very small, maybe a meg or\nless, and then it'd be easy to test the file-splitting behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Feb 1999 12:36:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Postgres Limitations "
},
{
"msg_contents": "On Fri, 5 Feb 1999, Tom Lane wrote:\n\n> [ Redirected to hackers, as it's now way off-topic for pgsql-interfaces ]\n> \n> Peter T Mount <[email protected]> writes:\n> > [ trying to test whether Postgres can split a huge database into\n> > multiple 2-Gb files ]\n> > The problem I have is that it takes 4 hours for a table to reach 2Gb on my\n> > system, so it's a slow process :-(\n> \n> I had a thought about this --- is there a #define somewhere that sets\n> the size at which the system decides it needs to split a table?\n> (If not, shouldn't there be?) If there is, you could build a debug\n> copy of Postgres in which the value is set very small, maybe a meg or\n> less, and then it'd be easy to test the file-splitting behavior.\n\nI've already found it, and I was thinking along those lines.\n\nThe define was last changed when we got variable block sizes, and there's\nsome lengthy comments by it describing how it's value's calculated.\n\nAnyhow, I'm planning on working on this tomorrow morning.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Fri, 5 Feb 1999 22:19:21 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Postgres Limitations "
}
] |
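The constant Tom asks about turns out to be RELSEG_SIZE in backend/storage/smgr/md.c (see the patch later in this section). A debug-only override that splits tables after roughly a megabyte might look like the following; the 1 MB figure is just an example value for testing, not something to ship:

    /* backend/storage/smgr/md.c, testing only */
    #ifndef LET_OS_MANAGE_FILESIZE
    #define RELSEG_SIZE     (1048576 / BLCKSZ)   /* split after ~1 MB instead of 2 GB */
    #endif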
[
{
"msg_contents": "You are correct. I have modified the current sources. Good eye.\n\n\n\n> Hi Bruce,\n> \tI've been looking over the code in .../backend/utils/adt/selfuncs.c\n> and there are a number of functions in there like btreesel() that have\n> code\n> like this\n> \n> float64\n> btreesel(...)\n> {\n> float64 result;\n> float64data resultData;\n> \n> ...\n> ...\n> \n> resultData = 1.0/3.0;\n> result = &resultData;\n> \n> ...\n> ...\n> \n> return result;\n> }\n> \n> I don't pretend to understand all the contextual details about how the\n> function is used, but It seems that the return result is a pointer to an\n> invalid stack location. The storage is allocated as auto storage during\n> the function call, and with dynamic range restricted to the execution\n> of the function body.\n> \n> The same thing is used in hashsel() as well, though not in btreenpage()\n> which\n> uses palloc() to allocate heap storage for the result.\n> \n> Sorry for the informal form of the notice, but the machine that I work\n> on is a long way from the internet, and that makes compiling and mailing\n> a patch a pain.\n> \n> Bernie\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Feb 1999 12:48:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres bug?"
}
] |
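A sketch of the broken pattern Bernie reports and the fix that was applied, following the palloc() approach the report says btreenpage() already uses (argument lists elided; float64 is a pointer to float64data, as the original code implies):

    float64
    btreesel(/* ... selectivity arguments elided ... */)
    {
        /* Broken pattern: the address of an auto variable escapes the call.
         *
         *     float64data resultData;
         *     resultData = 1.0 / 3.0;
         *     return &resultData;        (dangling once the frame is gone)
         */

        /* Fixed pattern: allocate the result on the palloc heap instead. */
        float64 result = (float64) palloc(sizeof(float64data));

        *result = 1.0 / 3.0;
        return result;
    }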
[
{
"msg_contents": "Create user is requiring an Expiration Date.\n\nWhat is the function at oid 948 named varchar really supposed to be\ndoing? Whatever it is I don't think it's succeeding.\n\nAnd I'll ask about creating a lower/upper function for varchar once\nmore. Everything that I try either produces incorrect results or a core\ndump.\n\t-DEJ\n",
"msg_date": "Fri, 5 Feb 1999 12:54:21 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bugs in snapshot"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a bug in 6.4.2 which seems to be\nrelated to the char(n) type and shows up\nif one assigns a zero-length default value.\n\nHere is an example:\n\n\ntest=> create table t1 (\ntest-> str1 char(2) default '', <---- note this one\ntest-> str2 text default '',\ntest-> str3 text default ''\ntest-> );\nCREATE\n\ntest=> insert into t1 values ('aa', 'string2', 'string3');\nINSERT 91278 1\ntest=> insert into t1 (str3) values ('string3');\nINSERT 91279 1\ntest=>test=> select * from t1;\nBackend message type 0x44 arrived while idle\nBackend message type 0x44 arrived while idle\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nIf the table is created as\n\ncreate table t1 (\n str1 char(2) default ' ',\n str2 text default '',\n str3 text default ''\n);\n\nthe crash doesn't happen.\n\nRegards\nErich\n\n",
"msg_date": "Fri, 5 Feb 1999 20:30:11 +0100 (MET)",
"msg_from": "Erich Stamberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "char(n) default '' crashes server"
},
{
"msg_contents": "Yes, this crashes under the current backend code too. We are looking at\nit.\n\n\n> Hi,\n> \n> I found a bug in 6.4.2 which seems to be\n> related to the char(n) type and shows up\n> if one assigns a zero-length default value.\n> \n> Here is an example:\n> \n> \n> test=> create table t1 (\n> test-> str1 char(2) default '', <---- note this one\n> test-> str2 text default '',\n> test-> str3 text default ''\n> test-> );\n> CREATE\n> \n> test=> insert into t1 values ('aa', 'string2', 'string3');\n> INSERT 91278 1\n> test=> insert into t1 (str3) values ('string3');\n> INSERT 91279 1\n> test=>test=> select * from t1;\n> Backend message type 0x44 arrived while idle\n> Backend message type 0x44 arrived while idle\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> If the table is created as\n> \n> create table t1 (\n> str1 char(2) default ' ',\n> str2 text default '',\n> str3 text default ''\n> );\n> \n> the crash doesn't happen.\n> \n> Regards\n> Erich\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Mar 1999 09:52:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] char(n) default '' crashes server"
},
{
"msg_contents": "\nThis is still an open item. The crash still happens.\n\n> Hi,\n> \n> I found a bug in 6.4.2 which seems to be\n> related to the char(n) type and shows up\n> if one assigns a zero-length default value.\n> \n> Here is an example:\n> \n*************> \n> test=> create table t1 (\n> test-> str1 char(2) default '', <---- note this one\n> test-> str2 text default '',\n> test-> str3 text default ''\n> test-> );\n> CREATE\n> \n> test=> insert into t1 values ('aa', 'string2', 'string3');\n> INSERT 91278 1\n> test=> insert into t1 (str3) values ('string3');\n> INSERT 91279 1\n> test=>test=> select * from t1;\n> Backend message type 0x44 arrived while idle\n> Backend message type 0x44 arrived while idle\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> If the table is created as\n> \n> create table t1 (\n> str1 char(2) default ' ',\n> str2 text default '',\n> str3 text default ''\n> );\n> \n> the crash doesn't happen.\n> \n> Regards\n> Erich\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 9 May 1999 09:02:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] char(n) default '' crashes server"
},
{
"msg_contents": "\nThis was fixed in 6.5.\n\n\n> Hi,\n> \n> I found a bug in 6.4.2 which seems to be\n> related to the char(n) type and shows up\n> if one assigns a zero-length default value.\n> \n> Here is an example:\n> \n> \n> test=> create table t1 (\n> test-> str1 char(2) default '', <---- note this one\n> test-> str2 text default '',\n> test-> str3 text default ''\n> test-> );\n> CREATE\n> \n> test=> insert into t1 values ('aa', 'string2', 'string3');\n> INSERT 91278 1\n> test=> insert into t1 (str3) values ('string3');\n> INSERT 91279 1\n> test=>test=> select * from t1;\n> Backend message type 0x44 arrived while idle\n> Backend message type 0x44 arrived while idle\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> If the table is created as\n> \n> create table t1 (\n> str1 char(2) default ' ',\n> str2 text default '',\n> str3 text default ''\n> );\n> \n> the crash doesn't happen.\n> \n> Regards\n> Erich\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 21:50:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] char(n) default '' crashes server"
}
] |
[
{
"msg_contents": "Hi,\nI saw a message a couple of weeks ago from someone having problems with\nlarger than 2GB tables. I have similar problems.\n\nPostgreSQL: anon-cvs as of today (2/5/1999)\nOS: Redhat Linux 5.2 (running 2.0.35)\n\nI created a database called mcrl, and a table called mcrl3_1.\nI copied in a set of 450MB of data twice(which comes to pg file size of\n2.4GB or so).\n\nWhen it hit 2GB I got this message:\n mcrl=> copy mcrl3_1 FROM '/home/gjerde/mcrl/MCR3_1.txt';\n ERROR: mcrl3_1: cannot extend\n\nThe table file looks like this:\n[postgres@snowman mcrl]$ ls -l mcrl*\n-rw------- 1 postgres postgres 2147482624 Feb 5 16:49 mcrl3_1\n\nIt did NOT create the .1 file however, which I did see when I tried this\non 6.4.2(but still didn't work).\n\nI looked around in the code(specifically src/backend/storage/smgr/*.c),\nbut couldn't figure too much of it out. I'll have to figure out how\npostgres handles the database files first..\n\nHope this helps,\nOle Gjerde\n\n",
"msg_date": "Fri, 5 Feb 1999 17:07:38 -0600 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "I just found out that you guys have been discussing this problem.. oops..\nI looked through the mailing-list archive, and didn't find any posts, but\nfebruary isn't in the archive yet :)\n\nOle Gjerde\n\n",
"msg_date": "Fri, 5 Feb 1999 20:27:08 -0600 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Ignore me :) Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "I may be dating myself really badly here, but isn't there a hard limit\non\nthe file system at 2Gig? I thought the file size attribute in Unix is\nrepresented as a 32 bit signed long, which happens to be a max value\nof 2147483648. If I'm right, it means the problem is fundamentally\nwith the file system, not with PostGres, and you won't solve this\nunless the os supports larger files.\n\[email protected] wrote:\n> \n> Hi,\n> I saw a message a couple of weeks ago from someone having problems with\n> larger than 2GB tables. I have similar problems.\n> \n> PostgreSQL: anon-cvs as of today (2/5/1999)\n> OS: Redhat Linux 5.2 (running 2.0.35)\n> \n> I created a database called mcrl, and a table called mcrl3_1.\n> I copied in a set of 450MB of data twice(which comes to pg file size of\n> 2.4GB or so).\n> \n> When it hit 2GB I got this message:\n> mcrl=> copy mcrl3_1 FROM '/home/gjerde/mcrl/MCR3_1.txt';\n> ERROR: mcrl3_1: cannot extend\n> \n> The table file looks like this:\n> [postgres@snowman mcrl]$ ls -l mcrl*\n> -rw------- 1 postgres postgres 2147482624 Feb 5 16:49 mcrl3_1\n> \n> It did NOT create the .1 file however, which I did see when I tried this\n> on 6.4.2(but still didn't work).\n> \n> I looked around in the code(specifically src/backend/storage/smgr/*.c),\n> but couldn't figure too much of it out. I'll have to figure out how\n> postgres handles the database files first..\n> \n> Hope this helps,\n> Ole Gjerde\n\n-- \n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n",
"msg_date": "Sat, 06 Feb 1999 01:46:24 -0500",
"msg_from": "Thomas Reinke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Sat, 6 Feb 1999, Thomas Reinke wrote:\n\n> I may be dating myself really badly here, but isn't there a hard limit\n> on\n> the file system at 2Gig? I thought the file size attribute in Unix is\n> represented as a 32 bit signed long, which happens to be a max value\n> of 2147483648. If I'm right, it means the problem is fundamentally\n> with the file system, not with PostGres, and you won't solve this\n> unless the os supports larger files.\n\nPostgreSQL has internal code that is supposed to automagically break up a\ntable into 2gb chunks so that thsi isn't a problem...\n> \n> [email protected] wrote:\n> > \n> > Hi,\n> > I saw a message a couple of weeks ago from someone having problems with\n> > larger than 2GB tables. I have similar problems.\n> > \n> > PostgreSQL: anon-cvs as of today (2/5/1999)\n> > OS: Redhat Linux 5.2 (running 2.0.35)\n> > \n> > I created a database called mcrl, and a table called mcrl3_1.\n> > I copied in a set of 450MB of data twice(which comes to pg file size of\n> > 2.4GB or so).\n> > \n> > When it hit 2GB I got this message:\n> > mcrl=> copy mcrl3_1 FROM '/home/gjerde/mcrl/MCR3_1.txt';\n> > ERROR: mcrl3_1: cannot extend\n> > \n> > The table file looks like this:\n> > [postgres@snowman mcrl]$ ls -l mcrl*\n> > -rw------- 1 postgres postgres 2147482624 Feb 5 16:49 mcrl3_1\n> > \n> > It did NOT create the .1 file however, which I did see when I tried this\n> > on 6.4.2(but still didn't work).\n> > \n> > I looked around in the code(specifically src/backend/storage/smgr/*.c),\n> > but couldn't figure too much of it out. I'll have to figure out how\n> > postgres handles the database files first..\n> > \n> > Hope this helps,\n> > Ole Gjerde\n> \n> -- \n> ------------------------------------------------------------\n> Thomas Reinke Tel: (416) 460-7021\n> Director of Technology Fax: (416) 598-2319\n> E-Soft Inc. http://www.e-softinc.com\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 6 Feb 1999 05:03:17 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "Thomas Reinke wrote:\n> \n> I may be dating myself really badly here, but isn't there a hard limit\n> on\n> the file system at 2Gig? I thought the file size attribute in Unix is\n> represented as a 32 bit signed long, which happens to be a max value\n> of 2147483648. If I'm right, it means the problem is fundamentally\n> with the file system, not with PostGres, and you won't solve this\n> unless the os supports larger files.\n\nThere is logic insid PostgreSQL to overflof to nex file at 2GB, but \napparently this is currently broken.\n\nAFAIK, there are people working on it now\n\n--------------\nHannu\n",
"msg_date": "Sat, 06 Feb 1999 14:37:43 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Sat, 6 Feb 1999, Hannu Krosing wrote:\n\n> Thomas Reinke wrote:\n> > \n> > I may be dating myself really badly here, but isn't there a hard limit\n> > on\n> > the file system at 2Gig? I thought the file size attribute in Unix is\n> > represented as a 32 bit signed long, which happens to be a max value\n> > of 2147483648. If I'm right, it means the problem is fundamentally\n> > with the file system, not with PostGres, and you won't solve this\n> > unless the os supports larger files.\n> \n> There is logic insid PostgreSQL to overflof to nex file at 2GB, but \n> apparently this is currently broken.\n> \n> AFAIK, there are people working on it now\n\nYes, me ;-)\n\nI have an idea where the failure is occuring, but I'm still testing the\nrelavent parts of the code.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sun, 7 Feb 1999 13:20:14 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Sun, 7 Feb 1999, Peter T Mount wrote:\n\n> On Sat, 6 Feb 1999, Hannu Krosing wrote:\n> \n> > Thomas Reinke wrote:\n> > > \n> > > I may be dating myself really badly here, but isn't there a hard limit\n> > > on\n> > > the file system at 2Gig? I thought the file size attribute in Unix is\n> > > represented as a 32 bit signed long, which happens to be a max value\n> > > of 2147483648. If I'm right, it means the problem is fundamentally\n> > > with the file system, not with PostGres, and you won't solve this\n> > > unless the os supports larger files.\n> > \n> > There is logic insid PostgreSQL to overflof to nex file at 2GB, but \n> > apparently this is currently broken.\n> > \n> > AFAIK, there are people working on it now\n> \n> Yes, me ;-)\n> \n> I have an idea where the failure is occuring, but I'm still testing the\n> relavent parts of the code.\n\nWell, just now I think I know what's going on.\n\nFirst, I've reduced the size that postgres breaks the file to 2Mb (256\nblocks). I then ran the test script that imports some large records into a\ntest table.\n\nAs expected, the splitting of the file works fine. So the code isn't\nbroken. What I think is happening is that the code extends the table, then\ntests to see if it's at the 2Gig limit, and when it is, creates the next\nfile for that table.\n\nHowever, I think the OS has problems with a file exactly 2Gb in size.\n\nI've attached a patch that should reduce the max table size by 1 block.\nThis should prevent us from hitting the physical limit.\n\nNote: I haven't tested this patch yet!\n\nIt compiles but, because the test takes 4 hours for my machine to reach\n2Gb, and I have a few other things to do today, I'll run it overnight.\n\nHopefully, first thing tomorrow, we'll know if it works.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n*** ./backend/storage/smgr/md.c.orig Mon Feb 1 17:55:57 1999\n--- ./backend/storage/smgr/md.c Sun Feb 7 14:48:35 1999\n***************\n*** 77,86 ****\n *\n * 19 Mar 98 darrenk\n *\n */\n \n #ifndef LET_OS_MANAGE_FILESIZE\n! #define RELSEG_SIZE ((8388608 / BLCKSZ) * 256)\n #endif\n \n /* routines declared here */\n--- 77,91 ----\n *\n * 19 Mar 98 darrenk\n *\n+ * After testing, we need to add one less block to the file, otherwise\n+ * we extend beyond the 2-gig limit.\n+ *\n+ * 07 Feb 99 Peter Mount\n+ *\n */\n \n #ifndef LET_OS_MANAGE_FILESIZE\n! #define RELSEG_SIZE (((8388608 / BLCKSZ) * 256)-BLCKSZ)\n #endif\n \n /* routines declared here */\n\n\n",
"msg_date": "Sun, 7 Feb 1999 14:57:18 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
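One note on the patch quoted above: RELSEG_SIZE is a count of blocks, so subtracting BLCKSZ takes BLCKSZ blocks (64 MB at the default 8K block size) off the limit rather than the single block the patch comment intends. Reducing the limit by exactly one block would look like the sketch below; as the rest of the thread shows, backing off by a much larger margin is the safer choice anyway:

    #ifndef LET_OS_MANAGE_FILESIZE
    #define RELSEG_SIZE     (((8388608 / BLCKSZ) * 256) - 1)   /* one block short of 2 GB */
    #endif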
{
"msg_contents": "Peter T Mount <[email protected]> writes:\n> As expected, the splitting of the file works fine. So the code isn't\n> broken. What I think is happening is that the code extends the table, then\n> tests to see if it's at the 2Gig limit, and when it is, creates the next\n> file for that table.\n\n> However, I think the OS has problems with a file exactly 2Gb in size.\n\nOh! I didn't realize that we were trying to extend the file to\n*exactly* 2Gb. Indeed that's a very dangerous thing to do: the file\nsize in bytes will be 0x80000000, which will appear to be negative when\nviewed as a signed 32-bit integer; unless your OS is very careful about\nsigned vs. unsigned arithmetic, it will break.\n\nFor that matter it's not impossible that our own code contains similar\nproblems, if it does much calculating with byte offsets into the file.\n(The pushups that darrenk had to do in order to calculate RELSEG_SIZE\nin the first place should have suggested to him that running right at\nthe overflow limit was not such a hot idea...)\n\nI'd suggest setting the limit a good deal less than 2Gb to avoid any\nrisk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n\nAnd change the comment while you're at it, not just the code ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 13:03:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
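A small standalone check of the arithmetic Tom describes: with the old limit of 262144 8K blocks, a segment reaches exactly 2^31 bytes, which no longer fits in a signed 32-bit file offset:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        int64_t nblocks = 262144;            /* old RELSEG_SIZE with BLCKSZ = 8192 */
        int64_t blcksz  = 8192;
        int64_t bytes   = nblocks * blcksz;  /* 2147483648 = 0x80000000 */

        printf("segment size: %lld bytes\n", (long long) bytes);
        printf("fits in a signed 32-bit offset? %s\n",
               bytes <= INT32_MAX ? "yes" : "no");
        return 0;
    }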
{
"msg_contents": "> For that matter it's not impossible that our own code contains similar\n> problems, if it does much calculating with byte offsets into the file.\n> (The pushups that darrenk had to do in order to calculate RELSEG_SIZE\n> in the first place should have suggested to him that running right at\n> the overflow limit was not such a hot idea...)\n\nNot my code to begin with...\n\nRELSEG_SIZE was always there hard-coded to 262144 to assume the block\nsize would be 8k. At the time of my changes, I didn't think thru what\nit was for, I only changed the code that was there to calculate it and\nget the same value as before for variable disc block sizes.\n\nI agree that running right at the limit is a Bad Thing, but analyzing\nthat wasn't my main area of concern with that patch.\n\ndarrenk\n",
"msg_date": "Sun, 7 Feb 1999 13:39:59 -0500",
"msg_from": "\"Stupor Genius\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Sun, 7 Feb 1999, Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> > As expected, the splitting of the file works fine. So the code isn't\n> > broken. What I think is happening is that the code extends the table, then\n> > tests to see if it's at the 2Gig limit, and when it is, creates the next\n> > file for that table.\n> \n> > However, I think the OS has problems with a file exactly 2Gb in size.\n> \n> Oh! I didn't realize that we were trying to extend the file to\n> *exactly* 2Gb. Indeed that's a very dangerous thing to do: the file\n> size in bytes will be 0x80000000, which will appear to be negative when\n> viewed as a signed 32-bit integer; unless your OS is very careful about\n> signed vs. unsigned arithmetic, it will break.\n> \n> For that matter it's not impossible that our own code contains similar\n> problems, if it does much calculating with byte offsets into the file.\n> (The pushups that darrenk had to do in order to calculate RELSEG_SIZE\n> in the first place should have suggested to him that running right at\n> the overflow limit was not such a hot idea...)\n> \n> I'd suggest setting the limit a good deal less than 2Gb to avoid any\n> risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n\nThat might be an idea.\n\nI've just re-synced by copy of the cvs source, so I'll set it there, and\nwe'll know by the morning if it's worked or not.\n\n> And change the comment while you're at it, not just the code ;-)\n\nWill do :-)\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sun, 7 Feb 1999 19:42:52 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Sun, 7 Feb 1999, Stupor Genius wrote:\n\n> > For that matter it's not impossible that our own code contains similar\n> > problems, if it does much calculating with byte offsets into the file.\n> > (The pushups that darrenk had to do in order to calculate RELSEG_SIZE\n> > in the first place should have suggested to him that running right at\n> > the overflow limit was not such a hot idea...)\n> \n> Not my code to begin with...\n> \n> RELSEG_SIZE was always there hard-coded to 262144 to assume the block\n> size would be 8k. At the time of my changes, I didn't think thru what\n> it was for, I only changed the code that was there to calculate it and\n> get the same value as before for variable disc block sizes.\n> \n> I agree that running right at the limit is a Bad Thing, but analyzing\n> that wasn't my main area of concern with that patch.\n\nI agree with you. I think that the original error stemmed from when\nRELSEG_SIZE was originally set.\n\nAnyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which\nworks out to be 1.6Gb. That should stay well away from the overflow\nproblem.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sun, 7 Feb 1999 21:02:32 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Sun, 7 Feb 1999, Peter T Mount wrote:\n> Anyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which\n> works out to be 1.6Gb. That should stay well away from the overflow\n> problem.\n\nHi,\nI just did a checkout of the cvs code, hardcoded RELSEG_SIZE to 243968,\nand it works beautifully now!\n\nI imported about 2.2GB of data(table file size) and it looks like this:\n-rw------- 1 postgres postgres 1998585856 Feb 7 16:22 mcrl3_1\n-rw------- 1 postgres postgres 219611136 Feb 7 16:49 mcrl3_1.1\n-rw------- 1 postgres postgres 399368192 Feb 7 16:49\nmcrl3_1_partnumber_index\n\nAnd it works fine.. I did some selects on data that should have ended up\nin the .1 file, and it works great. The best thing about it, is that it\nseems at least as fast as MSSQL on the same data, if not faster..\n\nIt did take like 45 minutes to create that index.. Isn't that a bit\nlong(AMD K6-2 350MHz)? :)\n\nSuggestion: How hard would it be to make copy tablename FROM 'somefile'\ngive some feedback? Either some kind of percentage or just print out\nsomething after each 10k row chunks or something like that.\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Sun, 7 Feb 1999 17:01:09 -0600 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Sun, 7 Feb 1999 [email protected] wrote:\n\n> On Sun, 7 Feb 1999, Peter T Mount wrote:\n> > Anyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which\n> > works out to be 1.6Gb. That should stay well away from the overflow\n> > problem.\n> \n> Hi,\n> I just did a checkout of the cvs code, hardcoded RELSEG_SIZE to 243968,\n> and it works beautifully now!\n\nProblem here is that RELSEG_SIZE is dependent on the block size. Seeing we\ncan increase the block size from 8k, this would break.\n\nAs I type, my machine is populating the test table.\n\n> I imported about 2.2GB of data(table file size) and it looks like this:\n> -rw------- 1 postgres postgres 1998585856 Feb 7 16:22 mcrl3_1\n> -rw------- 1 postgres postgres 219611136 Feb 7 16:49 mcrl3_1.1\n> -rw------- 1 postgres postgres 399368192 Feb 7 16:49\n> mcrl3_1_partnumber_index\n> \n> And it works fine.. I did some selects on data that should have ended up\n> in the .1 file, and it works great. The best thing about it, is that it\n> seems at least as fast as MSSQL on the same data, if not faster..\n\nThis is what I got when I tested it using a reduced file size. It's what\nmade me decide to reduce the size by 1 in the patch I posted earlier.\n\nHowever, I'm using John's suggestion of reducing the file size a lot more,\nto ensure we don't hit any math errors, etc. So the max file size is about\n1.6Gb.\n\n> It did take like 45 minutes to create that index.. Isn't that a bit\n> long(AMD K6-2 350MHz)? :)\n\nWell, it's taking my poor old P133 about 2 hours to hit 2Gb at the moment.\n\n> Suggestion: How hard would it be to make copy tablename FROM 'somefile'\n> give some feedback? Either some kind of percentage or just print out\n> something after each 10k row chunks or something like that.\n\nAttached is the test script I'm using, minus the data file.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf",
"msg_date": "Sun, 7 Feb 1999 23:43:53 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "Peter T Mount <[email protected]> writes:\n>> I just did a checkout of the cvs code, hardcoded RELSEG_SIZE to 243968,\n>> and it works beautifully now!\n\n> Problem here is that RELSEG_SIZE is dependent on the block size. Seeing we\n> can increase the block size from 8k, this would break.\n\nOf course it should really be defined as\n\n#define RELSEG_SIZE\t\t(2000000000 / BLCKSZ)\n\nfor some suitable magic constant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 19:06:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "Thus spake Tom Lane\n> I'd suggest setting the limit a good deal less than 2Gb to avoid any\n> risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n\nWhy not make it substantially lower by default? Makes it easier to split\na database across spindles. Even better, how about putting extra extents\ninto different directories like data/base.1, data/base.2, etc? Then as\nthe database grows you can add drives, move the extents into them and\nmount the new drives. The software doesn't even notice the change.\n\nJust a thought.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 7 Feb 1999 19:14:00 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Tom Lane\n> > I'd suggest setting the limit a good deal less than 2Gb to avoid any\n> > risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n> \n> Why not make it substantially lower by default? Makes it easier to split\n> a database across spindles. Even better, how about putting extra extents\n> into different directories like data/base.1, data/base.2, etc? Then as\n> the database grows you can add drives, move the extents into them and\n> mount the new drives. The software doesn't even notice the change.\n\nIt would be also a great way to help optimization if indexes were in \na separate directory from the tables.\n\nAnd of course our current way of keeping all the large object files in\none\ndirectory (even _the same_ with other data) sucks. \n\nIt has kept me away from using large objects at all, as I've heard that \nLinux (or rather ext2fs) is not very good at dealing with huge\ndirectories.\nAn I have no use for only a few large objects ;)\n\nThere have been suggestions about splitting up the large object storage\nby \nthe hex representation of the oid value (= part of current filename),\nbut a good start would be to put them just in a separate directory under \npg_data. The temp files are also good candidates for putting in a\nseparate \ndirectory.\n\nThe next step would be of course dataspaces, probably most easyly \nimplemented as directories:\n\nCREATE DATASPACE PG_DATA1 STORAGE='/mnt/scsi.105.7/data1'; \nSET DEFAULT_DATASPACE TO PG_DATA1;\nCREATE TABLE ... IN DATASPACE PG_DATA; \nCREATE INDEX ... ;\n\nThen we would'nt have to move and symlink them tables and indexes\nmanually.\n\n--------------\nHannu\n",
"msg_date": "Mon, 08 Feb 1999 02:35:19 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Thus spake Tom Lane\n>> I'd suggest setting the limit a good deal less than 2Gb to avoid any\n>> risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n\n> Why not make it substantially lower by default?\n\nConfigure-time option, anyone ;-) ?\n\n> Makes it easier to split\n> a database across spindles. Even better, how about putting extra extents\n> into different directories like data/base.1, data/base.2, etc?\n\nThis could be a pretty good idea. Right now, if you need to split a\ndatabase across multiple filesystems, you have to do a bunch of tedious\nhand manipulation of symlinks. With an option like this, you could\nautomatically distribute your larger tables across filesystems...\nset up the subdirectories as symlinks once, and forget it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 22:04:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "> On Sun, 7 Feb 1999, Peter T Mount wrote:\n> > Anyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which\n> > works out to be 1.6Gb. That should stay well away from the overflow\n> > problem.\n> \n> Hi,\n> I just did a checkout of the cvs code, hardcoded RELSEG_SIZE to 243968,\n> and it works beautifully now!\n> \n> I imported about 2.2GB of data(table file size) and it looks like this:\n> -rw------- 1 postgres postgres 1998585856 Feb 7 16:22 mcrl3_1\n> -rw------- 1 postgres postgres 219611136 Feb 7 16:49 mcrl3_1.1\n> -rw------- 1 postgres postgres 399368192 Feb 7 16:49\n> mcrl3_1_partnumber_index\n\nGreat. This has been on the TODO list for quite some time. Glad it is\nfixed.\n\n> \n> And it works fine.. I did some selects on data that should have ended up\n> in the .1 file, and it works great. The best thing about it, is that it\n> seems at least as fast as MSSQL on the same data, if not faster..\n> \n> It did take like 45 minutes to create that index.. Isn't that a bit\n> long(AMD K6-2 350MHz)? :)\n> \n> Suggestion: How hard would it be to make copy tablename FROM 'somefile'\n> give some feedback? Either some kind of percentage or just print out\n> something after each 10k row chunks or something like that.\n\nWe could, but it would then make the output file larger.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 22:07:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "> > mcrl3_1_partnumber_index\n> > \n> > And it works fine.. I did some selects on data that should have ended up\n> > in the .1 file, and it works great. The best thing about it, is that it\n> > seems at least as fast as MSSQL on the same data, if not faster..\n> \n> This is what I got when I tested it using a reduced file size. It's what\n> made me decide to reduce the size by 1 in the patch I posted earlier.\n> \n> However, I'm using John's suggestion of reducing the file size a lot more,\n> to ensure we don't hit any math errors, etc. So the max file size is about\n> 1.6Gb.\n\nI can imagine people finding that strange. It it really needed. Is\nthere some math that could overflow with a larger value?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 22:09:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Sun, 7 Feb 1999, Bruce Momjian wrote:\n\n> > > mcrl3_1_partnumber_index\n> > > \n> > > And it works fine.. I did some selects on data that should have ended up\n> > > in the .1 file, and it works great. The best thing about it, is that it\n> > > seems at least as fast as MSSQL on the same data, if not faster..\n> > \n> > This is what I got when I tested it using a reduced file size. It's what\n> > made me decide to reduce the size by 1 in the patch I posted earlier.\n> > \n> > However, I'm using John's suggestion of reducing the file size a lot more,\n> > to ensure we don't hit any math errors, etc. So the max file size is about\n> > 1.6Gb.\n>\n> I can imagine people finding that strange. It it really needed. Is\n> there some math that could overflow with a larger value?\n\nNot sure. My original choice was to subtract 1 from the calculated\nmaximum, which meant it would split just before the 2Gb limit.\n\nHowever, running with the value set at the lower value:\n\n 1998585856 Feb 8 02:25 /opt/db/base/test/smallcat\n 599007232 Feb 8 03:21 /opt/db/base/test/smallcat.1\n\nTotal 26653000 rows loaded\n\nWould anyone really notice the lower value?\n\nPerhaps we could make this another compile time setting, like the block\nsize?\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 8 Feb 1999 06:35:01 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "> Not sure. My original choice was to subtract 1 from the calculated\n> maximum, which meant it would split just before the 2Gb limit.\n> \n> However, running with the value set at the lower value:\n> \n> 1998585856 Feb 8 02:25 /opt/db/base/test/smallcat\n> 599007232 Feb 8 03:21 /opt/db/base/test/smallcat.1\n> \n> Total 26653000 rows loaded\n> \n> Would anyone really notice the lower value?\n> \n> Perhaps we could make this another compile time setting, like the block\n> size?\n\nI guess all I am saying is I prefer the max-1 value. Seems more\nlogical. Could be set in config.h.in, though.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 10:16:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> However, I'm using John's suggestion of reducing the file size a lot more,\n>> to ensure we don't hit any math errors, etc. So the max file size is about\n>> 1.6Gb.\n\n> I can imagine people finding that strange. It it really needed. Is\n> there some math that could overflow with a larger value?\n\nWell, that's the question all right --- are you sure that there's not?\nI think \"max - 1 blocks\" is pushing it, since code that computes\nsomething like \"the byte offset of the block after next\" would fail.\nEven if there isn't any such code today, it seems possible that there\nmight be someday.\n\nI'd be comfortable with 2 billion (2000000000) bytes as the filesize\nlimit, or Andreas' proposal of 1Gb.\n\nI also like the proposals to allow the filesize limit to be configured\neven lower to ease splitting huge tables across filesystems.\n\nTo make that work easily, we really should adopt a layout where the data\nfiles don't all go in the same directory. Perhaps the simplest is:\n\n* First or only segment of a table goes in top-level data directory,\n same as now.\n\n* First extension segment is .../data/1/tablename.1, second is\n .../data/2/tablename.2, etc. (Using numbers for the subdirectory\n names prevents name conflict with ordinary tables.)\n\nThen, just configuring the filesize limit small (a few tens/hundreds\nof MB) and setting up symlinks for the subdirectories data/1, data/2,\netc gets the job done.\n\nStarting to feel old --- I remember when a \"few tens of MB\" was a\nmonstrous hard disk, never mind a single file ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 11:18:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
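The arithmetic-overflow worry in the message above is easy to make concrete. Below is a minimal, self-contained C sketch (not PostgreSQL source; the block size and the candidate segment sizes are simply the numbers quoted in this thread) showing that 262144 blocks of 8K is exactly 2^31 bytes and so overflows signed 32-bit byte arithmetic, while 200000 blocks (~1.6Gb) and 131072 blocks (1Gb) leave headroom:

    /* Illustrative only: with int32 byte offsets, 262144 * 8192 = 2^31
     * already overflows, so "offset of the block after the last one"
     * cannot be represented.  200000 and 131072 blocks stay well clear. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const int32_t blcksz = 8192;
        const int32_t relseg[] = { 262144, 200000, 131072 };

        for (int i = 0; i < 3; i++)
        {
            int64_t bytes = (int64_t) relseg[i] * blcksz;
            printf("RELSEG_SIZE=%d -> %lld bytes (%s int32)\n",
                   relseg[i], (long long) bytes,
                   bytes > INT32_MAX ? "overflows" : "fits in");
        }
        return 0;
    }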
{
"msg_contents": "On Mon, 8 Feb 1999, Bruce Momjian wrote:\n\n> > Not sure. My original choice was to subtract 1 from the calculated\n> > maximum, which meant it would split just before the 2Gb limit.\n> > \n> > However, running with the value set at the lower value:\n> > \n> > 1998585856 Feb 8 02:25 /opt/db/base/test/smallcat\n> > 599007232 Feb 8 03:21 /opt/db/base/test/smallcat.1\n> > \n> > Total 26653000 rows loaded\n> > \n> > Would anyone really notice the lower value?\n> > \n> > Perhaps we could make this another compile time setting, like the block\n> > size?\n> \n> I guess all I am saying is I prefer the max-1 value. Seems more\n> logical. Could be set in config.h.in, though.\n\nThat's what I thought when I posted the small patch. However, there now\nseems to be a consensus for a smaller segment size. Toms (for some reason\nI called him John yesterday?) idea of 200000 (1.6Gb) works, and I know it\nworks ok on smaller segment sizes (I used 2Mb segments to see that it\nworked past the second segment).\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 8 Feb 1999 19:19:19 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Mon, 8 Feb 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> However, I'm using John's suggestion of reducing the file size a lot more,\n> >> to ensure we don't hit any math errors, etc. So the max file size is about\n> >> 1.6Gb.\n> \n> > I can imagine people finding that strange. It it really needed. Is\n> > there some math that could overflow with a larger value?\n> \n> Well, that's the question all right --- are you sure that there's not?\n> I think \"max - 1 blocks\" is pushing it, since code that computes\n> something like \"the byte offset of the block after next\" would fail.\n> Even if there isn't any such code today, it seems possible that there\n> might be someday.\n> \n> I'd be comfortable with 2 billion (2000000000) bytes as the filesize\n> limit, or Andreas' proposal of 1Gb.\n\nI'm starting to like Andreas' proposal as the new default.\n\n> I also like the proposals to allow the filesize limit to be configured\n> even lower to ease splitting huge tables across filesystems.\n> \n> To make that work easily, we really should adopt a layout where the data\n> files don't all go in the same directory. Perhaps the simplest is:\n> \n> * First or only segment of a table goes in top-level data directory,\n> same as now.\n> \n> * First extension segment is .../data/1/tablename.1, second is\n> .../data/2/tablename.2, etc. (Using numbers for the subdirectory\n> names prevents name conflict with ordinary tables.)\n\nHow about dropping the suffix, so you would have:\n\n\t.../data/2/tablename\n\nDoing that doesn't mean having to increase the filename buffer size, just\nthe format and arg order (from %s.%d to %d/%s).\n\nI'd think we could add a test when the new segment is created for the\nsymlink/directory. If it doesn't exist, then create it. Otherwise a poor\nunsuspecting user would have their database fall over, not realising where\nthe error is.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 8 Feb 1999 20:25:17 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "\nPeter T Mount wrote:\n\n> How about dropping the suffix, so you would have:\n> \n> .../data/2/tablename\n> \n> Doing that doesn't mean having to increase the filename buffer size, just\n> the format and arg order (from %s.%d to %d/%s).\n> \n> I'd think we could add a test when the new segment is created for the\n> symlink/directory. If it doesn't exist, then create it. Otherwise a poor\n> unsuspecting user would have their database fall over, not realising where\n> the error is.\n\nThis sounds like attempting to solve the \"2 Gig limit\" along with data\ndistribution across multiple drives at the same time. I like the\nsolution\nto the first part, but I'm not entirely sure that this is an effective\nsolution to data distribution.\n\nSpecifically, distribution of data should be administerable in some\nfashion. So, that means you would want to control how much data\nis placed on each drive. Further, you would not want to fill up\none drive before floating over to the next\n\nConsider 10 1 Gig tables split across two 6 gig drives. Doesn't work\nvery well if the overlow only happens after 1 Gig. Then, consider\n10 1 Gig tables split across a 3 Gig and an 8 Gig drive. Again, doesn't\nwork very well with ANY sort of fixed size splitting scheme.\n\nI'd suggest making the max file size 1 Gig default, configurable\nsomeplace, and solving the data distribution as a separate effort.\n\nThomas\n\n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n",
"msg_date": "Mon, 08 Feb 1999 16:34:22 -0500",
"msg_from": "Thomas Reinke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "Peter T Mount wrote:\n>> How about dropping the suffix, so you would have:\n>> .../data/2/tablename\n>> Doing that doesn't mean having to increase the filename buffer size, just\n>> the format and arg order (from %s.%d to %d/%s).\n\nI thought of that also, but concluded it was a bad idea, because it\nmeans you cannot symlink several of the /n subdirectories to the same\nplace. It also seems just plain risky/errorprone to have different\nfiles named the same thing...\n\n>> I'd think we could add a test when the new segment is created for the\n>> symlink/directory. If it doesn't exist, then create it.\n\nAbsolutely, the system would need to auto-create a /n subdirectory if\none didn't already exist.\n\nThomas Reinke <[email protected]> writes:\n> ... I'm not entirely sure that this is an effective\n> solution to data distribution.\n\nWell, I'm certain we could do better if we wanted to put some direct\neffort into that issue, but we can get a usable scheme this way with\npractically no effort except writing a little how-to documentation.\n\nAssume you have N big tables where you know what N is. (You probably\nhave a lot of little tables as well, which we assume can be ignored for\nthe purposes of space allocation.) If you configure the max file size\nas M megabytes, the toplevel data directory will have M * N megabytes\nof stuff (plus little files). If all the big tables are about the same\nsize, say K * M meg apiece, then you wind up with K-1 subdirectories\neach also containing M * N meg, which you can readily scatter across\ndifferent filesystems by setting up the subdirectories as symlinks.\nIn practice the later subdirectories are probably less full because\nthe big tables aren't all equally big, but you can put more of them\non a single filesystem to make up for that.\n\nIf N varies considerably over time then this scheme doesn't work so\nwell, but I don't see any scheme that would cope with a very variable\ndatabase without physically moving files around every so often.\n\nWhen we get to the point where people are routinely complaining what\na pain in the neck it is to manage big databases this way, it'll be\ntime enough to improve the design and write some scripts to help\nrearrange files on the fly. Right now, I would just like to see a\nscheme that doesn't require the dbadmin to symlink each individual\ntable file in order to split a big database. (It could probably be\nargued that even doing that much is ahead of the demand, but since\nit's so cheap to provide this little bit of functionality we might\nas well do it.)\n\n> I'd suggest making the max file size 1 Gig default, configurable\n> someplace, and solving the data distribution as a separate effort.\n\nWe might actually be saying the same thing, if by that remark you\nmean that we can come back later and write \"real\" data distribution\nmanagement tools. I'm just pointing out that given a configurable\nmax file size we can have a primitive facility almost for free.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 18:55:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Mon, 8 Feb 1999, Tom Lane wrote:\n\n> Peter T Mount wrote:\n> >> How about dropping the suffix, so you would have:\n> >> .../data/2/tablename\n> >> Doing that doesn't mean having to increase the filename buffer size, just\n> >> the format and arg order (from %s.%d to %d/%s).\n> \n> I thought of that also, but concluded it was a bad idea, because it\n> means you cannot symlink several of the /n subdirectories to the same\n> place. It also seems just plain risky/errorprone to have different\n> files named the same thing...\n\nThat's true.\n\n[snip]\n\n> >> I'd think we could add a test when the new segment is created for the\n> >> symlink/directory. If it doesn't exist, then create it.\n> \n> Absolutely, the system would need to auto-create a /n subdirectory if\n> one didn't already exist.\n> \n> > I'd suggest making the max file size 1 Gig default, configurable\n> > someplace, and solving the data distribution as a separate effort.\n> \n> We might actually be saying the same thing, if by that remark you\n> mean that we can come back later and write \"real\" data distribution\n> management tools. I'm just pointing out that given a configurable\n> max file size we can have a primitive facility almost for free.\n\nWe are saying the same thing. To implement having the %d/%s.%d format we'd\nneed to just add 11 bytes to the temporary buffer (keeping the same\ncapacity as the cuurent code).\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 9 Feb 1999 21:07:49 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
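The layout the thread converges on here (keep the numeric suffix, place segment n under a numbered subdirectory, and create that subdirectory on demand so it can be a symlink to another filesystem) can be sketched in a few lines of plain C. This is only an illustration of the idea, not the actual storage-manager code; the buffer sizes, permissions, and the example table name "smallcat" (taken from the test earlier in the thread) are assumptions:

    /* Sketch of the "data/<n>/<table>.<n>" segment naming discussed above.
     * Not PostgreSQL source; the mkdir()/EEXIST handling is an assumption. */
    #include <stdio.h>
    #include <errno.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Build the path of segment 'segno' of 'relname', creating the numbered
     * subdirectory (which may already be a symlink) if it is missing. */
    static int segment_path(char *buf, size_t len, const char *relname, int segno)
    {
        if (segno == 0)                  /* first segment stays in the top dir */
            return snprintf(buf, len, "%s", relname);

        char dir[32];
        snprintf(dir, sizeof(dir), "%d", segno);
        if (mkdir(dir, 0700) != 0 && errno != EEXIST)
            return -1;                   /* could not create the subdirectory */

        return snprintf(buf, len, "%d/%s.%d", segno, relname, segno);
    }

    int main(void)
    {
        char path[256];
        for (int segno = 0; segno < 3; segno++)
            if (segment_path(path, sizeof(path), "smallcat", segno) > 0)
                printf("segment %d -> %s\n", segno, path);
        return 0;
    }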
{
"msg_contents": "Say guys,\n\nI just noticed that RELSEG_SIZE still hasn't been reduced per the\ndiscussion from early February. Let's make sure that doesn't slip\nthrough the cracks, OK?\n\nI think Peter Mount was supposed to be off testing this issue.\nPeter, did you learn anything further?\n\nWe should probably apply the patch to REL6_4 as well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Mar 1999 12:51:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
}
] |
[
{
"msg_contents": "\"Bryan White\" <[email protected]> writes:\n> The documentation for CREATE INDEX implies that functions are allowed in\n> index definitions but when I execute:\n> create unique index idxtest on customer (lower(email));\n> the result is:\n> ERROR: DefineIndex: (null) class not found\n> Should this work? Do I have the syntax wrong?\n\nI tried this wih 6.4.2 and found that it was only accepted if I\nexplicitly identified which index operator class to use:\n\nplay=> create table customer (email text);\nCREATE\nplay=> create unique index idxtest on customer (lower(email));\nERROR: DefineIndex: class not found\nplay=> create unique index idxtest on customer (lower(email) text_ops);\nCREATE\nplay=>\n\nThat'll do as a workaround for Bryan, but isn't this a bug? Surely\nthe system ought to know what type the result of lower() is...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Feb 1999 12:27:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Functional Indexes"
},
{
"msg_contents": "On Sat, Feb 06, 1999 at 12:27:47PM -0500, Tom Lane wrote:\n> \"Bryan White\" <[email protected]> writes:\n> > The documentation for CREATE INDEX implies that functions are allowed in\n> > index definitions but when I execute:\n> > create unique index idxtest on customer (lower(email));\n> > the result is:\n> > ERROR: DefineIndex: (null) class not found\n> > Should this work? Do I have the syntax wrong?\n> \n> I tried this wih 6.4.2 and found that it was only accepted if I\n> explicitly identified which index operator class to use:\n> \n> play=> create table customer (email text);\n> CREATE\n> play=> create unique index idxtest on customer (lower(email));\n> ERROR: DefineIndex: class not found\n> play=> create unique index idxtest on customer (lower(email) text_ops);\n> CREATE\n> play=>\n> \n> That'll do as a workaround for Bryan, but isn't this a bug? Surely\n> the system ought to know what type the result of lower() is...\n> \n> \t\t\tregards, tom lane\n\nI still have a problem with that ... edited typescript follows\n\nfunweb=> \\d userdat\nTable = userdat\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| username | varchar() not null | 30 |\n...\n+----------------------------------+----------------------------------+-------+\nIndex: userdat_pkey\nfunweb=> create unique index userdat_idx2 on userdat (lower(username)\nvarchar_ops);\nERROR: BuildFuncTupleDesc: function 'lower(varchar)' does not exist\n\nThis error message looks very bogus to me.\n\n-- \n \n Regards,\n\t\t\t\t\t\t \n Sascha Schumann | \n Consultant | finger [email protected]\n | for PGP public key\n",
"msg_date": "Mon, 8 Feb 1999 01:28:26 +0100",
"msg_from": "Sascha Schumann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Functional Indexes"
},
{
"msg_contents": "On Mon, 8 Feb 1999, Sascha Schumann wrote:\n\n> On Sat, Feb 06, 1999 at 12:27:47PM -0500, Tom Lane wrote:\n> > \"Bryan White\" <[email protected]> writes:\n> > > The documentation for CREATE INDEX implies that functions are allowed in\n> > > index definitions but when I execute:\n> > > create unique index idxtest on customer (lower(email));\n> > > the result is:\n> > > ERROR: DefineIndex: (null) class not found\n> > > Should this work? Do I have the syntax wrong?\n> > \n> > I tried this wih 6.4.2 and found that it was only accepted if I\n> > explicitly identified which index operator class to use:\n> > \n> > play=> create table customer (email text);\n> > CREATE\n> > play=> create unique index idxtest on customer (lower(email));\n> > ERROR: DefineIndex: class not found\n> > play=> create unique index idxtest on customer (lower(email) text_ops);\n> > CREATE\n> > play=>\n> > \n> > That'll do as a workaround for Bryan, but isn't this a bug? Surely\n> > the system ought to know what type the result of lower() is...\n> > \n> > \t\t\tregards, tom lane\n> \n> I still have a problem with that ... edited typescript follows\n> \n> funweb=> \\d userdat\n> Table = userdat\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | username | varchar() not null | 30 |\n> ...\n> +----------------------------------+----------------------------------+-------+\n> Index: userdat_pkey\n> funweb=> create unique index userdat_idx2 on userdat (lower(username)\n> varchar_ops);\n> ERROR: BuildFuncTupleDesc: function 'lower(varchar)' does not exist\n> \n> This error message looks very bogus to me.\n> \n> -- \n> \n> Regards,\n> \t\t\t\t\t\t \n> Sascha Schumann | \n> Consultant | finger [email protected]\n> | for PGP public key\n> \n\nI don't think lower is defined for varchar arguments. consider redefining\nusername as type text and using text_ops.\n\nThis method worked on my system:\n\nstocks=> create table temptext (a text, b varchar(20));\nCREATE\nstocks=> create index itemptext on temptext using btree(lower(a) text_ops) ;\nCREATE\n\nYour error reproduced:\n\nstocks=> create index i2temptext on temptext using btree(lower(b) text_ops) ;\nERROR: BuildFuncTupleDesc: function 'lower(varchar)' does not exist\n\nExcerpt from function definitions( both return value and argument are text \ntypes):\n\ntext |lower |text |lowercase \n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n",
"msg_date": "Sun, 7 Feb 1999 21:42:23 -0500 (EST)",
"msg_from": "Marc Howard Zuckman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Functional Indexes"
},
{
"msg_contents": "Marc Howard Zuckman <[email protected]> writes:\n> On Mon, 8 Feb 1999, Sascha Schumann wrote:\n>> funweb=> create unique index userdat_idx2 on userdat (lower(username)\n>> varchar_ops);\n>> ERROR: BuildFuncTupleDesc: function 'lower(varchar)' does not exist\n>> \n>> This error message looks very bogus to me.\n\n> I don't think lower is defined for varchar arguments. consider redefining\n> username as type text and using text_ops.\n\nI think Marc is right. Someone was working on adding lower() to the\navailable ops for varchar for 6.5, but it's not there in 6.4.\n\nYou can get lower() to work on varchar source data in a simple\nSELECT, but that's some sort of hack that involves the system\nknowing that text and varchar have the same physical representation\nso it's OK to use a function that takes text on a varchar column.\nThe type matching requirements for functional indexes are tighter.\n\nNote to hackers: is there a good reason why indexes are more\nrestrictive? Offhand it seems like the same physical-equivalence\ntrick could be applied.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 22:18:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Functional Indexes "
},
{
"msg_contents": "> \n> Note to hackers: is there a good reason why indexes are more\n> restrictive? Offhand it seems like the same physical-equivalence\n> trick could be applied.\n\nWhat do you mean by restrictive. If you mean:\n\n\t* allow creation of functional indexes to use default types\n\nIt is on the TODO list, and has been for a while.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 22:48:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Functional Indexes"
},
{
"msg_contents": "> > I don't think lower is defined for varchar arguments.\n> Note to hackers: is there a good reason why indexes are more\n> restrictive? Offhand it seems like the same physical-equivalence\n> trick could be applied.\n\nWell, we should have a combination of \"binary compatible\" and type\ncoersion to make this fly, just as we have in other places for v6.4. I\ndidn't realize this index code was there, so never looked at it.\n\nIf someone else doesn't get to it, I'll try to look at it before or\nduring 6.5beta...\n\n - Tom\n",
"msg_date": "Mon, 08 Feb 1999 15:56:12 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Functional Indexes"
},
{
"msg_contents": "On the TODO list:\n\n\t* allow creation of functional indexes to use default types\n\n\n> \"Bryan White\" <[email protected]> writes:\n> > The documentation for CREATE INDEX implies that functions are allowed in\n> > index definitions but when I execute:\n> > create unique index idxtest on customer (lower(email));\n> > the result is:\n> > ERROR: DefineIndex: (null) class not found\n> > Should this work? Do I have the syntax wrong?\n> \n> I tried this wih 6.4.2 and found that it was only accepted if I\n> explicitly identified which index operator class to use:\n> \n> play=> create table customer (email text);\n> CREATE\n> play=> create unique index idxtest on customer (lower(email));\n> ERROR: DefineIndex: class not found\n> play=> create unique index idxtest on customer (lower(email) text_ops);\n> CREATE\n> play=>\n> \n> That'll do as a workaround for Bryan, but isn't this a bug? Surely\n> the system ought to know what type the result of lower() is...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Mar 1999 09:53:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Functional Indexes"
}
] |
[
{
"msg_contents": "\nERROR: RestrictionClauseSelectivity: bad value 163645.593750\n\nThe query is:\n\nSELECT p.first_name, p.last_name, t.title, t.rundate, t.app_version,\np.email \n FROM sw_password p, tools t \n WHERE p.userid = t.userid \n AND t.category = 'tools' \n ORDER by t.title;\n\nSomething wrong with that taht I'm not seeing? :( \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 7 Feb 1999 02:24:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "One I've never seen before:"
},
{
"msg_contents": "\nquick appendum to my own...\n\nthe 'category' field was created with an 'alter table' command, if that\nhelps any...\n\neven from psql, it comes back:\n\npostgresql=> select category from tools where category = 'projects';\nERROR: RestrictionClauseSelectivity: bad value 163645.593750\n\nIf I do it without the where clause, or a where cluase on any other field,\nall appears well, its only teh one I created with 'alter table' that is\n\"screwed\"...\n\nNeat...if I rename the table to something else, the problem goes away. If\nI rename it to old_tools, it still exists, but if I rename it to software,\nthe problem disappears...*raised eyebrows*\n\n and the table looks like:\n\npostgresql=> \\d tools\n\nTable = tools\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| userid | text | var |\n| title | text | var |\n| url | text | var |\n| platform | text | var |\n| app_version | text | var |\n| pg_version | text | var |\n| contact_name | text | var |\n| contact_email | text | var |\n| institution | text | var |\n| company_url | text | var |\n| keywords | text | var |\n| description | text | var |\n| rundate | datetime | 8 |\n| category | text | var |\n+----------------------------------+----------------------------------+-------+\n\nOn Sun, 7 Feb 1999, The Hermit Hacker wrote:\n\n> \n> ERROR: RestrictionClauseSelectivity: bad value 163645.593750\n> \n> The query is:\n> \n> SELECT p.first_name, p.last_name, t.title, t.rundate, t.app_version,\n> p.email \n> FROM sw_password p, tools t \n> WHERE p.userid = t.userid \n> AND t.category = 'tools' \n> ORDER by t.title;\n> \n> Something wrong with that taht I'm not seeing? :( \n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 7 Feb 1999 02:39:18 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
},
{
"msg_contents": "> \n> ERROR: RestrictionClauseSelectivity: bad value 163645.593750\n> \n> The query is:\n> \n> SELECT p.first_name, p.last_name, t.title, t.rundate, t.app_version,\n> p.email \n> FROM sw_password p, tools t \n> WHERE p.userid = t.userid \n> AND t.category = 'tools' \n> ORDER by t.title;\n> \n> Something wrong with that taht I'm not seeing? :( \n\nRerun vacuum analyze. Somehow, bad selectivity/disbursion values are\ngetting into the system tables.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 07:33:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
},
{
"msg_contents": "\nYou are correct...that appears to have fixed it. Is this a problem that\nshould be investigated, or not to worry about?\n\nOn Sun, 7 Feb 1999, Bruce Momjian wrote:\n\n> > \n> > ERROR: RestrictionClauseSelectivity: bad value 163645.593750\n> > \n> > The query is:\n> > \n> > SELECT p.first_name, p.last_name, t.title, t.rundate, t.app_version,\n> > p.email \n> > FROM sw_password p, tools t \n> > WHERE p.userid = t.userid \n> > AND t.category = 'tools' \n> > ORDER by t.title;\n> > \n> > Something wrong with that taht I'm not seeing? :( \n> \n> Rerun vacuum analyze. Somehow, bad selectivity/disbursion values are\n> getting into the system tables.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 7 Feb 1999 10:55:22 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
},
{
"msg_contents": "> \n> You are correct...that appears to have fixed it. Is this a problem that\n> should be investigated, or not to worry about?\n> \n\nI am worried about it. My guess is that ALTER TABLE is putting random\ndate into the attdisbursion column. I will check on it when I finish\nwith the optimizer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 15:08:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
},
{
"msg_contents": "> \n> quick appendum to my own...\n> \n> the 'category' field was created with an 'alter table' command, if that\n> helps any...\n> \n> even from psql, it comes back:\n> \n> postgresql=> select category from tools where category = 'projects';\n> ERROR: RestrictionClauseSelectivity: bad value 163645.593750\n> \n> If I do it without the where clause, or a where cluase on any other field,\n> all appears well, its only teh one I created with 'alter table' that is\n> \"screwed\"...\n> \n> Neat...if I rename the table to something else, the problem goes away. If\n> I rename it to old_tools, it still exists, but if I rename it to software,\n> the problem disappears...*raised eyebrows*\n> \n> and the table looks like:\n\nFixed. Problem was I was not initializing the new column properly.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Feb 1999 12:31:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
},
{
"msg_contents": "Again. Fixed.\n\n\n> \n> You are correct...that appears to have fixed it. Is this a problem that\n> should be investigated, or not to worry about?\n> \n> On Sun, 7 Feb 1999, Bruce Momjian wrote:\n> \n> > > \n> > > ERROR: RestrictionClauseSelectivity: bad value 163645.593750\n> > > \n> > > The query is:\n> > > \n> > > SELECT p.first_name, p.last_name, t.title, t.rundate, t.app_version,\n> > > p.email \n> > > FROM sw_password p, tools t \n> > > WHERE p.userid = t.userid \n> > > AND t.category = 'tools' \n> > > ORDER by t.title;\n> > > \n> > > Something wrong with that taht I'm not seeing? :( \n> > \n> > Rerun vacuum analyze. Somehow, bad selectivity/disbursion values are\n> > getting into the system tables.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Feb 1999 12:32:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] One I've never seen before:"
}
] |
[
{
"msg_contents": "Vadim,\n\nI wrote:\n> New version of AllocSet...() functions is committed. palloc()\n> is a macro now. The memory eating problem of COPY FROM,\n> INSERT ... SELECT and UPDATES on a table that has constraints\n> is fixed (new file nodes/freefuncs.c).\n>\n> The settings in aset.c aren't optimal for now, because the\n> settings in place force the portals_p2 test to fail (at least\n> here). Some informations for those who want to take a look at\n> it follow.\n>\n> Reproducing the bug:\n> [...]\n>\n> Guessings:\n> [...]\n\n The guess that some memory get's corrupted where right. The\n area affected is something else.\n\n> I'll keep on debugging, but would be very appreciated if\n> someone could help.\n\n I love gdb! Wasn't easy to find, and would have been\n impossible without such a nice, smart tool.\n\n The problem was caused by QuerySnapshot beeing free()'d if a\n transaction uses many portals. In ExecutorStart(), the actual\n QuerySnapshot is placed into the execution state, and that is\n used subsequently for HeapTupleSatisfies() deep down during\n ExecutePlan().\n\n In the case of portals, every DECLARE CURSOR results in a\n call to ExecutorStart(). The corresponding ExecutorEnd(),\n where the execution state is thrown away, is delayed until\n CLOSE.\n\n The tcop calls SetQuerySnapshot() before any call to\n ExecutorStart(). So if someone opens many cursors,\n SetQuerySnapshot() free's snapshot data that will later be\n needed when FETCH has to scan relations.\n\n I have modified ExecutorStart() so it makes it's private copy\n of the actual QuerySnapshot in it's own executor memory\n context. Could you please comment if what is in QuerySnapshot\n at the time of ExecutorStart() get's or should get modified\n anywhere during the execution of a plan. The name snapshot\n tells me NOT. But you're the one to judge.\n\n Please have a look at it and comment.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 7 Feb 1999 15:08:41 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behaviour on pooled alloc (fwd)"
},
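The fix described above amounts to a deep copy: the executor keeps its own copy of the snapshot, including the array of in-progress transaction ids, in storage it owns, so a later SetQuerySnapshot() releasing the shared snapshot cannot pull the data out from under an open cursor. Below is a stripped-down, self-contained sketch of that idea; it uses plain malloc instead of the executor's memory context, and the struct is a simplified stand-in, not the real snapshot layout:

    /* Toy illustration of the "private snapshot copy" idea from this thread.
     * The struct and functions are simplified stand-ins, not PostgreSQL code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef unsigned int TxnId;

    typedef struct ToySnapshot
    {
        TxnId  xmin;        /* oldest transaction still running */
        TxnId  xmax;        /* next transaction id at snapshot time */
        int    xcnt;        /* number of in-progress transactions */
        TxnId *xip;         /* their ids */
    } ToySnapshot;

    /* Deep copy owned by the caller: replacing or freeing the original later
     * (as a per-query SetQuerySnapshot() would) cannot invalidate the copy. */
    static ToySnapshot *copy_snapshot(const ToySnapshot *src)
    {
        ToySnapshot *dst = malloc(sizeof(*dst));
        if (dst == NULL)
            return NULL;
        *dst = *src;
        dst->xip = NULL;
        if (src->xcnt > 0)
        {
            dst->xip = malloc(src->xcnt * sizeof(TxnId));
            if (dst->xip == NULL) { free(dst); return NULL; }
            memcpy(dst->xip, src->xip, src->xcnt * sizeof(TxnId));
        }
        return dst;
    }

    int main(void)
    {
        TxnId running[] = { 101, 105 };
        ToySnapshot shared = { 100, 110, 2, running };
        ToySnapshot *mine = copy_snapshot(&shared);   /* the cursor keeps this */

        shared.xcnt = 0;           /* the shared snapshot gets replaced/freed */
        shared.xip = NULL;
        if (mine != NULL)
        {
            printf("private copy still sees %d in-progress xids\n", mine->xcnt);
            free(mine->xip);
            free(mine);
        }
        return 0;
    }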
{
"msg_contents": "Jan Wieck wrote:\n> \n> I have modified ExecutorStart() so it makes it's private copy\n> of the actual QuerySnapshot in it's own executor memory\n> context. Could you please comment if what is in QuerySnapshot\n> at the time of ExecutorStart() get's or should get modified\n> anywhere during the execution of a plan. The name snapshot\n> tells me NOT. But you're the one to judge.\n\nYou're correct. Alternativly, we could use some refcount\nin Snapshot structure...\n\nVadim\n",
"msg_date": "Mon, 08 Feb 1999 10:12:15 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behaviour on pooled alloc (fwd)"
},
{
"msg_contents": ">\n> Jan Wieck wrote:\n> >\n> > I have modified ExecutorStart() so it makes it's private copy\n> > of the actual QuerySnapshot in it's own executor memory\n> > context. Could you please comment if what is in QuerySnapshot\n> > at the time of ExecutorStart() get's or should get modified\n> > anywhere during the execution of a plan. The name snapshot\n> > tells me NOT. But you're the one to judge.\n>\n> You're correct. Alternativly, we could use some refcount\n> in Snapshot structure...\n\n As it is now, the snapshot data exists in the same memory\n context as the execution state which isn't free'd explicitly\n (auto done on AllocSetReset() when memory context dies). No\n need to add another bookkeeping that's hard to track.\n\n So pooled memory allocation is done now.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 12:03:56 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behaviour on pooled alloc (fwd)"
}
] |
[
{
"msg_contents": "For discussion:\n\n I've committed two little changes to CURRENT and REL6_4.\n\n 1. ExecBRDeleteTriggers() now free's the return tuple from\n the trigger procedure if that isn't the tuple given to it\n as OLD. This caused PL/pgSQL triggers BEFORE DELETE to\n let backend grow until transaction end.\n\n 2. The CHECK constraint qualification trees are only\n generated once per query in ExecRelCheck(). EState has a\n new field es_result_relation_constraints (a List**).\n ExecConstraints() and ExecRelCheck() get the actual\n EState as parameter.\n\n The fact that the constraints qualifications where read\n in with nodeToString() from ccbin for every single tuple\n also caused the backend to grow until transaction end.\n\n Especially the second fix is important. We already had\n reports from users of v6.4.2 who ran out of memory when doing\n a COPY FROM to tables that have CHECK constraints. And it\n would also occur on INSERT and UPDATES for those tables if\n many rows affected.\n\n Now that we are going to start v6.5 BETA, isn't it good to\n put out v6.4.3 before the hot time begins? And if we find\n bugs during v6.5 BETA that could also be fixed in REL6_4, do\n so and put out a v6.4.4 the same time we release v6.5.1.\n\n I think this would be a good release strategy. From my\n experience with one of the biggest commercial software\n products, SAP R/3, I know that it is a safe way for\n productional (mission critical) installations to follow\n releases this way:\n\n 1. Wait for the first bugfix release of a new version.\n\n 2. Use the time between the first and the second bugfix\n release to adapt new features into the local\n applications.\n\n 3. Test the second bugfix release with the result of step 2\n and upgrade production. If the first bugfix release can\n stand for a time long enough without further bugs\n reported, use that for this step.\n\n 4. Follow subsequent bugfix releases if the fixes in them do\n or could be expected to happen in the production.\n\n Doing it this way means, that a mission critical installation\n will use v6.4.* until some time after we've put out at least\n v6.5.1. Thus, we should care about them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 7 Feb 1999 18:30:43 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "v6.4.3 ?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Now that we are going to start v6.5 BETA, isn't it good to\n> put out v6.4.3 before the hot time begins?\n\nI'm beginning to agree with Jan about this. I have in fact been\nthinking that I wasn't going to be in any big hurry to install 6.5\non my company's mission-critical server, because of the size of the\nchanges being put in place (MVCC etc). We ran 6.4 in early alpha\nstage because we had to --- we were getting bitten by 6.3.2 bugs ---\nbut 6.4 has been pretty stable for us and so we're probably going\nto take a wait-and-see attitude about 6.5.\n\nI don't want to see the Postgres group put a *lot* of time into\nmaintaining back-rev versions, but when we can easily retrofit an\nimportant bugfix into the prior release we should probably do it.\n\nI do say that back-rev maintenance should be bugfixes only, no\nfeature upgrades. Adding features would not only be more work,\nbut it would go against the whole point of the exercise, which is\nto provide as stable a release as we possibly can.\n\nThe good news is that Postgres is getting used for real,\nmission-critical work. Every project should have such problems ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 13:23:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ? "
},
{
"msg_contents": "> 3. Test the second bugfix release with the result of step 2\n> and upgrade production. If the first bugfix release can\n> stand for a time long enough without further bugs\n> reported, use that for this step.\n> \n> 4. Follow subsequent bugfix releases if the fixes in them do\n> or could be expected to happen in the production.\n> \n> Doing it this way means, that a mission critical installation\n> will use v6.4.* until some time after we've put out at least\n> v6.5.1. Thus, we should care about them.\n\nNow I see why you patching against 6.4.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 15:33:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "On Sun, 7 Feb 1999, Bruce Momjian wrote:\n\n> > 3. Test the second bugfix release with the result of step 2\n> > and upgrade production. If the first bugfix release can\n> > stand for a time long enough without further bugs\n> > reported, use that for this step.\n> > \n> > 4. Follow subsequent bugfix releases if the fixes in them do\n> > or could be expected to happen in the production.\n> > \n> > Doing it this way means, that a mission critical installation\n> > will use v6.4.* until some time after we've put out at least\n> > v6.5.1. Thus, we should care about them.\n> \n> Now I see why you patching against 6.4.\n\nThe arguments for a v6.4.3 make sense to me...tell me when you wish for\nthis to be created, and it shall be done :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 7 Feb 1999 16:45:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> > Doing it this way means, that a mission critical installation\n> > will use v6.4.* until some time after we've put out at least\n> > v6.5.1. Thus, we should care about them.\n>\n> Now I see why you patching against 6.4.\n\n That's the reason. One of the biggest drawbacks against\n Postgres is (for many companies at least), that you can't buy\n support.\n\n If we only care a little for the last official release\n branch, it'll be much better than any payable support for a\n commercial product. Yes, I believe it - we have the power.\n\n I recall some replies to recent problem reports on v6.3.2\n where we told \"upgrade to v6.4.2\" (shame on us). That's\n easier said than done, if someone has big applications\n breaking on new features.\n\n I know that it isn't true, but these \"upgrade to another\n release\" answers instead of \"install bugfix release vX.X.X\"\n look like we don't care about anyone who really uses\n Postgres, not only playing around with it just for fun.\n That's the sound of M$.\n\n Patching against v6.4 takes time. It must be done manually\n nearly all the time since patches don't apply clean. But it's\n the only way to give Postgres a real good reputation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 7 Feb 1999 22:06:03 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Marc G. Fournier wrote:\n\n> On Sun, 7 Feb 1999, Bruce Momjian wrote:\n>\n> > Now I see why you patching against 6.4.\n>\n> The arguments for a v6.4.3 make sense to me...tell me when you wish for\n> this to be created, and it shall be done :)\n\n A week or so before we start v6.5 BETA will be enough so I\n have the time to build the v6.4.3 feature patch before having\n to respond to v6.5 BETA calls.\n\n Don't remember what's all fixed between v6.4.2 and now.\n\n Does anyone else know about bugs that are still in the REL6_4\n branch and could be fixed without adding features?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 7 Feb 1999 22:30:00 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Don't remember what's all fixed between v6.4.2 and now.\n> Does anyone else know about bugs that are still in the REL6_4\n> branch and could be fixed without adding features?\n\nI just checked in the \". conftest.sh\" -> \". ./conftest.sh\" fix to\nconfigure, which several people have complained of (6.4.2 fails\nif \".\" is not in your PATH at configure time).\n\nWe have to be pretty careful with these back-rev patches, since they\ntypically aren't going to get much testing in the back version's\nCVS tree. So I'm leery of applying anything that hasn't been tested\nfor a while in the development branch.\n\nFor example, the patch I just applied to CURRENT to link libpgtcl.so\nwith -lcrypt perhaps ought to go into REL6_4 --- but I'm afraid to do\nthat until we verify that it works on a variety of platforms. It fixes\nthings on teo's Linux box but I worry that it might actually break things\nelsewhere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 17:51:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ? "
},
{
"msg_contents": "I will update the stuff in the REL6_4 tree for 6.4.3.\n\n> On Sun, 7 Feb 1999, Bruce Momjian wrote:\n> \n> > > 3. Test the second bugfix release with the result of step 2\n> > > and upgrade production. If the first bugfix release can\n> > > stand for a time long enough without further bugs\n> > > reported, use that for this step.\n> > > \n> > > 4. Follow subsequent bugfix releases if the fixes in them do\n> > > or could be expected to happen in the production.\n> > > \n> > > Doing it this way means, that a mission critical installation\n> > > will use v6.4.* until some time after we've put out at least\n> > > v6.5.1. Thus, we should care about them.\n> > \n> > Now I see why you patching against 6.4.\n> \n> The arguments for a v6.4.3 make sense to me...tell me when you wish for\n> this to be created, and it shall be done :)\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 18:40:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "On Sun, Feb 07, 1999 at 05:51:08PM -0500, Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > Don't remember what's all fixed between v6.4.2 and now.\n> > Does anyone else know about bugs that are still in the REL6_4\n> > branch and could be fixed without adding features?\n> \n> I just checked in the \". conftest.sh\" -> \". ./conftest.sh\" fix to\n> configure, which several people have complained of (6.4.2 fails\n> if \".\" is not in your PATH at configure time).\n> \n> We have to be pretty careful with these back-rev patches, since they\n> typically aren't going to get much testing in the back version's\n> CVS tree. So I'm leery of applying anything that hasn't been tested\n> for a while in the development branch.\n> \n> For example, the patch I just applied to CURRENT to link libpgtcl.so\n> with -lcrypt perhaps ought to go into REL6_4 --- but I'm afraid to do\n> that until we verify that it works on a variety of platforms. It fixes\n> things on teo's Linux box but I worry that it might actually break things\n> elsewhere.\n\nJust a short note:\n\n-lcrypt is available on glibc 2 systems only. \n\nAnd yes please release a 6.4.3. As a administrator of some sites which rely on\nPostgreSQL heavily I would never use a zero-numbered version. The risk is just\ntoo high and too unforseeable.\n\nAn example: Before 6.4, we used a table called \"user\" without any problems.\nUnfortunately, this was not possible in 6.4.x, because it became a reserved\nkeyword there. Having one release together with maintenance updates minimizes\nthe risk of getting features you don't want.\n\n-- \n \n Regards,\n\t\t\t\t\t\t \n Sascha Schumann | \n Consultant | finger [email protected]\n | for PGP public key\n",
"msg_date": "Mon, 8 Feb 1999 01:15:19 +0100",
"msg_from": "Sascha Schumann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> We have to be pretty careful with these back-rev patches, since they\n> typically aren't going to get much testing in the back version's\n> CVS tree. So I'm leery of applying anything that hasn't been tested\n> for a while in the development branch.\n\n Over careful! They might rely on new features that aren't\n there. So it's better to wait for a v6.4.* based bug report\n and fix them by hand instead of appying complete patches that\n fix it on the fly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 02:07:32 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> That's the reason. One of the biggest drawbacks against\n> Postgres is (for many companies at least), that you can't buy\n> support.\n> \n> If we only care a little for the last official release\n> branch, it'll be much better than any payable support for a\n> commercial product. Yes, I believe it - we have the power.\n> \n> I recall some replies to recent problem reports on v6.3.2\n> where we told \"upgrade to v6.4.2\" (shame on us). That's\n> easier said than done, if someone has big applications\n> breaking on new features.\n\nI agree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 21:41:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> Marc G. Fournier wrote:\n> \n> > On Sun, 7 Feb 1999, Bruce Momjian wrote:\n> >\n> > > Now I see why you patching against 6.4.\n> >\n> > The arguments for a v6.4.3 make sense to me...tell me when you wish for\n> > this to be created, and it shall be done :)\n> \n> A week or so before we start v6.5 BETA will be enough so I\n> have the time to build the v6.4.3 feature patch before having\n> to respond to v6.5 BETA calls.\n> \n> Don't remember what's all fixed between v6.4.2 and now.\n> \n> Does anyone else know about bugs that are still in the REL6_4\n> branch and could be fixed without adding features?\n\nOK. I will make a 6.5 changes list, and you can see what we have. Do\nyou want to make it as a patch, or a full release with a release number?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 21:45:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Sascha Schumann <[email protected]> writes:\n> On Sun, Feb 07, 1999 at 05:51:08PM -0500, Tom Lane wrote:\n>> For example, the patch I just applied to CURRENT to link libpgtcl.so\n>> with -lcrypt perhaps ought to go into REL6_4 --- but I'm afraid to do\n>> that until we verify that it works on a variety of platforms.\n\n> Just a short note:\n> -lcrypt is available on glibc 2 systems only. \n\n-lcrypt exists on a variety of systems --- you're showing undue\nLinux-centrism here. But what I did was to set up the makefiles\nso that -lcrypt is included in the shared library link commands\nonly if configure found that libcrypt exists. We've done this\nfor libpq itself since 6.4, and have not heard complaints, so\nI'm probably being overly conservative to worry that it might\nbreak the higher-level libraries.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 22:09:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ? "
},
{
"msg_contents": "On Sun, Feb 07, 1999 at 10:06:03PM +0100, Jan Wieck wrote:\n> I recall some replies to recent problem reports on v6.3.2\n> where we told \"upgrade to v6.4.2\" (shame on us). That's\n> easier said than done, if someone has big applications\n> breaking on new features.\n\nAgreed.\n\n> I know that it isn't true, but these \"upgrade to another\n> release\" answers instead of \"install bugfix release vX.X.X\"\n> look like we don't care about anyone who really uses\n> Postgres, not only playing around with it just for fun.\n> That's the sound of M$.\n\nAnd once more I agree.\n\n> Patching against v6.4 takes time. It must be done manually\n> nearly all the time since patches don't apply clean. But it's\n> the only way to give Postgres a real good reputation.\n\nI like this approach. However, when d we stop maintaining the old version? I\nthink there are still people using 6.3 who cannot simply upgrade to 6.4. So\ndo we create 6.3.3 too? Or are there no open bug reports on 6.3 anymore?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 8 Feb 1999 08:47:33 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Hi all\n\n> > That's the reason. One of the biggest drawbacks against\n> > Postgres is (for many companies at least), that you can't buy\n> > support.\n\nIMHO ...\n\nWell, yes one can, one may just need to look around a bit... and pay\ncommercial support prices.\n\nExample:\nAs for my self I feel confident that I could provide such support, having\nbeen using Postgres+ since Postgres 0.95? (3?4 years ago?). I charge\n$25/hour, but have been considering going to $30/hour. While I've yet to\nget a PostgreSQL specific job, I have had some other Linux based jobs.\n\nI'll bet that there are many people scattered around the globe that can\nalso provide such support.\n\nPerhaps it might be of benefit to both the PostgreSQL image and\ncompanies considering using PostgreSQL if there where a support team\naround the world that could be called on. So, that said, what does every\none think of the idea of maintaining on the PostgreSQL web site a contact\nlist of \"Authorized Support Centers\"?\n\nPresto, instant commercial support around the globe.\n\nIn addition, some time back there was a lot of talk about how to raise\nsome money to support PostgreSQL development. Well, what if 2%? 5%? or\nso of money earned (related to PostgreSQL) from such official support\ncenters went back to the PostgreSQL development group (what ever that\nis.). \n\nPostgreSQL is one of the top 2 or 3 choices to run on Linux, maybe the 1st\nchoice. And Linux is now exploding in popularity, both in the home and\nin the company (I just came from a job where half the computers where\nLinux boxes -- ~40 employees, all with computers.)\n\nSo, this means that the need for such support will become acute in the\nnext year or two, and if PostgreSQL already has such a system in place and\ndebugged, then that alone could put PostgreSQL way out ahead of all the\nother choices.\n\nJust my thoughts on the matter.\n\nThanks for reading this far,\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\n\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n",
"msg_date": "Mon, 8 Feb 1999 08:32:01 -0500 (EST)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Commercial support, was Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Terry Mackintosh wrote:\n\n>\n> Hi all\n>\n> > > That's the reason. One of the biggest drawbacks against\n> > > Postgres is (for many companies at least), that you can't buy\n> > > support.\n>\n> IMHO ...\n>\n> Well, yes one can, one may just need to look around a bit... and pay\n> commercial support prices.\n>\n> Example:\n> As for my self I feel confident that I could provide such support, having\n> been using Postgres+ since Postgres 0.95? (3?4 years ago?). I charge\n> $25/hour, but have been considering going to $30/hour. While I've yet to\n> get a PostgreSQL specific job, I have had some other Linux based jobs.\n>\n> [...]\n\n Nice idea.\n\n But a word of caution seems appropriate.\n\n Commercial support doesn't mean only that you can hire\n someone who takes care about your actual problems with the\n product. It also means that there is someone you can bill if\n that product caused big damage to you (product warranty).\n\n Commercial support doesn't mean only that you hire someone on\n a T/M base (time and material). It also means that you can\n sign a support contract with a regular payment and have\n written down response- and maximum problem-to-fix times,\n escalation levels etc.\n\n For these issues (and there are more) you would need an\n assurance in the background (or a big company). But this\n requires that you have quality assurance management on top of\n the development. And that you have aggreed procedures where\n escalation levels from your support activate the core\n developers in specified times to solve problems. And it\n requires that you have more precise product specifications\n telling what the product can and where it's limits are.\n Otherwise you wouldn't be able to pay the assurance.\n\n There are already distributions of Linux out where you can\n buy commercial support with them. They stay behind the\n bleeding edge of development and are offered by companies,\n that have their own development team apart from the internet\n community.\n\n Looking at how we are organized (or better unorganized), all\n this high level commercial support seems far away.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 15:11:49 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Commercial support, was Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "\n\nJan Wieck wrote:\n> \n> Terry Mackintosh wrote:\n> \n> >\n> > Hi all\n> >\n> > > > That's the reason. One of the biggest drawbacks against\n> > > > Postgres is (for many companies at least), that you can't buy\n> > > > support.\n> >\n> > IMHO ...\n> >\n> > Well, yes one can, one may just need to look around a bit... and pay\n> > commercial support prices.\n> >\n> > Example:\n> > As for my self I feel confident that I could provide such support, having\n> > been using Postgres+ since Postgres 0.95? (3?4 years ago?). I charge\n> > $25/hour, but have been considering going to $30/hour. While I've yet to\n> > get a PostgreSQL specific job, I have had some other Linux based jobs.\n> >\n> > [...]\n> \n> Nice idea.\n> \n> But a word of caution seems appropriate.\n> \n> Commercial support doesn't mean only that you can hire\n> someone who takes care about your actual problems with the\n> product. It also means that there is someone you can bill if\n> that product caused big damage to you (product warranty).\n> \n> Commercial support doesn't mean only that you hire someone on\n> a T/M base (time and material). It also means that you can\n> sign a support contract with a regular payment and have\n> written down response- and maximum problem-to-fix times,\n> escalation levels etc.\n> \n\nUsage decisions also depend on one other MAJOR factor, which Linux has\nconquered, but I personally feel that PostGres is still a bit shy on:\nreliability. We use PostGres commercially, and quite frankly have a\ntough time with it, because of consistent failures with it. Although\nthe price is right, and we hope to stick with PostGres as it matures\ninto a more robust product, others would not touch it when you consider\nthe following reliability problems (admittedly all reported on 6.3):\n\n 1. Tables \"disappearing\" while still listed in the db directory\n (but no longer visible from the client)\n 2. Tables being corrupted (i.e. not selectable, not vacuumable,\n not exportable)\n 3. Vacuum commands that take longer to run after one day of table\n updates than if the table was to be dumped and reloaded\n (e.g. table with 1.7 million rows, 200,000 rows being updated\n each day)\n 4. An inability to run multiple clients simultaneously without\n having the backends choke and kick everybody out (we've had\n to implement a lock manager that restricts db access to one\n client at a time) (Part of the test suite should be an 8 hour\n or so load test that has multiple clients reading/writing\n to the same/different tables...might be surprised what you\n find)\n 5. Memory leaks/poor mem management in various components that need \n to be worked around (vacuum, insert of existing rec into\n uniquely indexed table)\n \n\nLinux is successful because it is reliable, and because many folks are\nWILLING to risk an OS that has the perception of being\nunsupported, if once they install it it will run cleanly. However,\nanyone using a database for any sort of serious application will\ngenerally have a more stringent set of criteria that they apply to\ntheir selection process. I.e. PostGres is tackling a tougher market\nthan Linux is tackling, and it will have to be correspondingly more\nmature in order to enjoy the same success.\n\nReword? 
We would be happier if someone were to iron all the problems\nout of postgres that make it unreliable, and not very robust, than\nif someone were to provide commercial support (which will NOT fix\nthe aforementioned problems!)\n\nThomas\n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n",
"msg_date": "Mon, 08 Feb 1999 09:56:50 -0500",
"msg_from": "Thomas Reinke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commercial support, was Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "On Mon, 8 Feb 1999, Michael Meskes wrote:\n\n> On Sun, Feb 07, 1999 at 10:06:03PM +0100, Jan Wieck wrote:\n> > I recall some replies to recent problem reports on v6.3.2\n> > where we told \"upgrade to v6.4.2\" (shame on us). That's\n> > easier said than done, if someone has big applications\n> > breaking on new features.\n> \n> Agreed.\n> \n> > I know that it isn't true, but these \"upgrade to another\n> > release\" answers instead of \"install bugfix release vX.X.X\"\n> > look like we don't care about anyone who really uses\n> > Postgres, not only playing around with it just for fun.\n> > That's the sound of M$.\n> \n> And once more I agree.\n> \n> > Patching against v6.4 takes time. It must be done manually\n> > nearly all the time since patches don't apply clean. But it's\n> > the only way to give Postgres a real good reputation.\n> \n> I like this approach. However, when d we stop maintaining the old version? I\n> think there are still people using 6.3 who cannot simply upgrade to 6.4. So\n> do we create 6.3.3 too? Or are there no open bug reports on 6.3 anymore?\n\nOne version back, in my opinion...v6.5's release kills anythingolder then\nv6.4, v6.6's release kills anything older then v6.5, etc...\n\nAnd, no, we aren't \"supporting\" v6.3 at this time, since this is a new\nthing...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 8 Feb 1999 13:50:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> I like this approach. However, when d we stop maintaining the old version? I\n> think there are still people using 6.3 who cannot simply upgrade to 6.4. So\n> do we create 6.3.3 too? Or are there no open bug reports on 6.3 anymore?\n\nWe keep releasing versions until no show-stopper bugs exist.\n\n6.4.* had one on COPY bug that Jan is fixing. The others have\nworkarounds, and we are so quick at pointing people to these\nwork-arounds, I don't think they warrant a new release.\n\nWe get so many improvements with each release, it is tough to back-patch\nthose into the tree. At some point, the 6.3.* tree would match the\n6.4.* tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 13:13:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> We keep releasing versions until no show-stopper bugs exist.\n> 6.4.* had one on COPY bug that Jan is fixing. The others have\n> workarounds, and we are so quick at pointing people to these\n> work-arounds, I don't think they warrant a new release.\n\nI posted one or two patches into the /pub/patches directory, and am not\nsure whether I bothered installing them into the v6.4.x tree since it\nmay have been declared dead at that point. Be sure and check (or get a\nmessage from me after I've checked) that they have been applied before\nreleasing v6.4.3...\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 02:49:45 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> > We keep releasing versions until no show-stopper bugs exist.\n> > 6.4.* had one on COPY bug that Jan is fixing. The others have\n> > workarounds, and we are so quick at pointing people to these\n> > work-arounds, I don't think they warrant a new release.\n> \n> I posted one or two patches into the /pub/patches directory, and am not\n> sure whether I bothered installing them into the v6.4.x tree since it\n> may have been declared dead at that point. Be sure and check (or get a\n> message from me after I've checked) that they have been applied before\n> releasing v6.4.3...\n> \n\nWe can't just do 6.4.2 patches, can we? That would be nice. We getting\nnear starting beta, and will not have lots of time to test things.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 22:01:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "> We can't just do 6.4.2 patches, can we? That would be nice. We \n> getting near starting beta, and will not have lots of time to test \n> things.\n\nPersonally, I would choose to post patches, since as you point out we\nare really focused on v6.5beta. We *still* need a patch convention with\na .../patches/ directory shipped with Postgres, and with routines to\nhelp create and apply the patches.\n\nI would suggest that for the future we create a top level directory\ncalled \"patches\", and within that directory have two routines, perhaps\nCreatePatch, CreatePackage, and ApplyPatch. CreatePatch would create a\nsubdirectory, look for all .orig files and make separate patch files for\neach inside the subdirectory. Type a README and post a tar file of the\nsubdirectory at ftp://postgresql.org/pub/patches/. On the other end,\nApplyPatch could print the README, prompt for an \"OK to continue?\", and\napply the specified patches.\n\nIf there is already a common package for doing this we could use that of\ncourse.\n\nI might have time to do this for v6.5 unless someone else wants to give\nit a shot. I have some code which could form the basis for this (and I\nknow that Bruce has something in the source tree which he uses but which\nimho does not have enough of the features mentioned above; my current\ncode is lacking some of these also)...\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 14:51:24 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> Personally, I would choose to post patches, since as you point out we\n> are really focused on v6.5beta. We *still* need a patch convention with\n> a .../patches/ directory shipped with Postgres, and with routines to\n> help create and apply the patches.\n\nThe trouble with maintaining a pile of independent patches is that you\nhave cross-patch dependencies: patch B fails to apply unless patch A\nwas previously installed, or applies but fails to work right, etc etc.\nWorse, an installation reporting a problem might be running a slightly\ndifferent set of patches than anyone else, complicating the diagnosis\nsubstantially.\n\nSo, while tossing a patch into a \"patches\" directory sounds easy,\nI fear it will lead to more effort overall, both for developers and\nusers. It also doesn't advance the goal of creating super-stable\nversions for people who have chosen to run a back revision for\nreliability reasons.\n\nI think the approach already discussed is better: apply bug fixes\n(but not feature additions) to the previous release's CVS branch\nwhenever practical, and put out a dot-N version every so often.\n\n> I would suggest that for the future we create a top level directory\n> called \"patches\", and within that directory have two routines, perhaps\n> CreatePatch, CreatePackage, and ApplyPatch.\n\nThis would be worth doing in order to simplify life for people working\non the code --- it'd be easier to package up and mail in a set of\nchanges, and easier to apply them. But I don't think it's an answer\nfor back-rev maintenance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Feb 1999 11:26:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ? "
},
{
"msg_contents": "> \"Thomas G. Lockhart\" <[email protected]> writes:\n> > Personally, I would choose to post patches, since as you point out we\n> > are really focused on v6.5beta. We *still* need a patch convention with\n> > a .../patches/ directory shipped with Postgres, and with routines to\n> > help create and apply the patches.\n> \n> The trouble with maintaining a pile of independent patches is that you\n> have cross-patch dependencies: patch B fails to apply unless patch A\n> was previously installed, or applies but fails to work right, etc etc.\n> Worse, an installation reporting a problem might be running a slightly\n> different set of patches than anyone else, complicating the diagnosis\n> substantially.\n\nMy optimizer fix will not affect other patches. If we only have a few\npatches, they will not bump into each other.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Feb 1999 11:38:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "On Tue, 9 Feb 1999, Tom Lane wrote:\n\n> \"Thomas G. Lockhart\" <[email protected]> writes:\n> > Personally, I would choose to post patches, since as you point out we\n> > are really focused on v6.5beta. We *still* need a patch convention with\n> > a .../patches/ directory shipped with Postgres, and with routines to\n> > help create and apply the patches.\n> \n> The trouble with maintaining a pile of independent patches is that you\n> have cross-patch dependencies: patch B fails to apply unless patch A\n> was previously installed, or applies but fails to work right, etc etc.\n> Worse, an installation reporting a problem might be running a slightly\n> different set of patches than anyone else, complicating the diagnosis\n> substantially.\n> \n> So, while tossing a patch into a \"patches\" directory sounds easy,\n> I fear it will lead to more effort overall, both for developers and\n> users. It also doesn't advance the goal of creating super-stable\n> versions for people who have chosen to run a back revision for\n> reliability reasons.\n> \n> I think the approach already discussed is better: apply bug fixes\n> (but not feature additions) to the previous release's CVS branch\n> whenever practical, and put out a dot-N version every so often.\n\nI kinda agree with this one also...as Tom's says, at least ppl can't say\n\"Patch B broke things\", but they didn't apply Patch A, which B's fixes\nused features from...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 9 Feb 1999 23:21:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ? "
},
{
"msg_contents": "> I kinda agree with this one also...as Tom's says, at least ppl can't \n> say \"Patch B broke things\", but they didn't apply Patch A, which B's \n> fixes used features from...\n\n*shrug* I don't have a strong feeling about it, and would be happy to\ncarry along two versions. Especially if one or a few people wanted to\nadopt the earlier version to ensure that patches get tested before being\napplied and to ensure that builds get tested before release. Over long\nperiods of time we will accumulate more than a few patches, and there\nneeds to be some way to ensure the integrity and ongoing testing of the\nolder tree between releases.\n\nSo, are we releasing v6.4.3? If so, when? I'll need to scrub it for\noutstanding patches...\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 04:09:13 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.4.3 ?"
}
] |
[
{
"msg_contents": "The equal() updates I installed yesterday (to fix the \"don't know\nwhether nodes of type 600 are equal\" problem) have had an unintended\nside effect.\n\nAm I right in thinking that UNION (without ALL) is defined to do a\nDISTINCT on its result, so that duplicates are removed even if the\nduplicates both came from the same source table? That's what 6.4.2\ndoes, but I do not know if it's strictly kosher according to the SQL\nspec.\n\nIf so, the code is now busted, because with the equal() extension in\nplace, cnfify() is able to recognize and remove duplicate select\nclauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\njust \"SELECT xxx\" ... and that doesn't mean the same thing.\n\nAn actual example: given the data\n\nplay=> select a from tt;\na\n-\n1\n1\n2\n3\n(4 rows)\n\nUnder 6.4.2 I get:\n\nplay=> select a from tt union select a from tt;\na\n-\n1\n2\n3\n(3 rows)\n\nNote lack of duplicate \"1\". Under current sources I get:\n\nttest=> select a from tt union select a from tt;\na\n-\n1\n1\n2\n3\n(4 rows)\n\nsince the query is effectively reduced to just \"select a from tt\".\n\nAssuming that 6.4.2 is doing the Right Thing, I see two possible fixes:\n(1) simplify equal() to say that two T_Query nodes are never equal, or\n(2) modify the planner so that the \"select distinct\" operation is\ninserted explicitly, and will thus happen even if the UNIONed selects\nare collapsed into just one.\n\n(1) is a trivial fix of course, but it worries me --- maybe someday\nwe will need equal() to give an honest answer for Query nodes.\nBut I don't have the expertise to apply (2), and it seems like rather\na lot of work for a boundary case that isn't really interesting in\npractice.\n\nComments? *Is* 6.4.2 behaving according to the SQL spec?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Feb 1999 13:46:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oops, I seem to have changed UNION's behavior"
},
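Until the folding is fixed there is a usable workaround: the collapse only happens when both arms of the UNION parse to identical queries, and in exactly that case the duplicate-free result SQL92 asks for is the same as a plain duplicate removal over a single arm. A minimal sketch, reusing the tt table from the example above:

    -- returns 1, 2 and 3: the duplicate-free result, however the UNION gets rewritten
    SELECT DISTINCT a FROM tt;

UNIONs whose arms differ are not affected, since equal() then reports the two Query nodes as different and nothing is dropped.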
{
"msg_contents": "Hello!\n\nOn Sun, 7 Feb 1999, Tom Lane wrote:\n> The equal() updates I installed yesterday (to fix the \"don't know\n> whether nodes of type 600 are equal\" problem) have had an unintended\n> side effect.\n> \n> Am I right in thinking that UNION (without ALL) is defined to do a\n> DISTINCT on its result, so that duplicates are removed even if the\n> duplicates both came from the same source table? That's what 6.4.2\n> does, but I do not know if it's strictly kosher according to the SQL\n> spec.\n\n Yes, this is standard. My books (primary, Gruber) say UNION should work\nthis way - UNION without ALL implies DISTINCT.\n\n> If so, the code is now busted, because with the equal() extension in\n> place, cnfify() is able to recognize and remove duplicate select\n> clauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\n> just \"SELECT xxx\" ... and that doesn't mean the same thing.\n> \n> An actual example: given the data\n> \n> play=> select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> Under 6.4.2 I get:\n> \n> play=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> Note lack of duplicate \"1\". Under current sources I get:\n> \n> ttest=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> since the query is effectively reduced to just \"select a from tt\".\n\n I am sure my books did not consider such case as UNION that could be\notimized this way. Not sure what is Right Thing here...\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 8 Feb 1999 11:45:16 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "> The equal() updates I installed yesterday (to fix the \"don't know\n> whether nodes of type 600 are equal\" problem) have had an unintended\n> side effect.\n> Am I right in thinking that UNION (without ALL) is defined to do a\n> DISTINCT on its result, so that duplicates are removed even if the\n> duplicates both came from the same source table? That's what 6.4.2\n> does, but I do not know if it's strictly kosher according to the SQL\n> spec.\n\nYup, that's the way it should be...\n\n> If so, the code is now busted, because with the equal() extension in\n> place, cnfify() is able to recognize and remove duplicate select\n> clauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\n> just \"SELECT xxx\" ... and that doesn't mean the same thing.\n\n:(\n\n> Assuming that 6.4.2 is doing the Right Thing, I see two possible \n> fixes:\n> (1) simplify equal() to say that two T_Query nodes are never equal, or\n> (2) modify the planner so that the \"select distinct\" operation is\n> inserted explicitly, and will thus happen even if the UNIONed selects\n> are collapsed into just one.\n\nSounds to me like (2) is the correct way to do it, as long as the\nexplicit \"sort/unique\" happens only if the query is collapsed. I haven't\nlooked at this code, and have no experience with cnfify(), but at the\ntime it decides to collapse into a single select, can't it ensure that a\nsort/unique is done instead? Or is that what you are suggesting?\nWouldn't want to have two sort/unique operations on top of each other...\n\n - Tom\n",
"msg_date": "Mon, 08 Feb 1999 16:33:52 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "\nWas there a conclusion on this?\n\n\n> The equal() updates I installed yesterday (to fix the \"don't know\n> whether nodes of type 600 are equal\" problem) have had an unintended\n> side effect.\n> \n> Am I right in thinking that UNION (without ALL) is defined to do a\n> DISTINCT on its result, so that duplicates are removed even if the\n> duplicates both came from the same source table? That's what 6.4.2\n> does, but I do not know if it's strictly kosher according to the SQL\n> spec.\n> \n> If so, the code is now busted, because with the equal() extension in\n> place, cnfify() is able to recognize and remove duplicate select\n> clauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\n> just \"SELECT xxx\" ... and that doesn't mean the same thing.\n> \n> An actual example: given the data\n> \n> play=> select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> Under 6.4.2 I get:\n> \n> play=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> Note lack of duplicate \"1\". Under current sources I get:\n> \n> ttest=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> since the query is effectively reduced to just \"select a from tt\".\n> \n> Assuming that 6.4.2 is doing the Right Thing, I see two possible fixes:\n> (1) simplify equal() to say that two T_Query nodes are never equal, or\n> (2) modify the planner so that the \"select distinct\" operation is\n> inserted explicitly, and will thus happen even if the UNIONed selects\n> are collapsed into just one.\n> \n> (1) is a trivial fix of course, but it worries me --- maybe someday\n> we will need equal() to give an honest answer for Query nodes.\n> But I don't have the expertise to apply (2), and it seems like rather\n> a lot of work for a boundary case that isn't really interesting in\n> practice.\n> \n> Comments? *Is* 6.4.2 behaving according to the SQL spec?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 9 May 1999 09:14:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "> > Am I right in thinking that UNION (without ALL) is defined to do a\n> > DISTINCT on its result, so that duplicates are removed even if the\n> > duplicates both came from the same source table? That's what 6.4.2\n> > does, but I do not know if it's strictly kosher according to the SQL\n> > spec.\n\n(Just in case this is still active)\n\nYes, this is the right behavior according to SQL92...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 09 May 1999 15:05:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> Am I right in thinking that UNION (without ALL) is defined to do a\n>>>> DISTINCT on its result, so that duplicates are removed even if the\n>>>> duplicates both came from the same source table? That's what 6.4.2\n>>>> does, but I do not know if it's strictly kosher according to the SQL\n>>>> spec.\n\n> (Just in case this is still active)\n\n> Yes, this is the right behavior according to SQL92...\n\nOK, then 6.5 is still broken :-(. I know a lot more about the planner\nthan I did then, so I will see if I can fix it \"right\" --- that is,\nwithout taking out equal()'s ability to detect equality of Query nodes.\n\nIf that seems too hard/risky, I will just lobotomize equal() instead.\n\nThanks for the reminder, Bruce --- I had forgotten about this issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 13:31:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior "
},
{
"msg_contents": "On Sun, 9 May 1999, Thomas Lockhart wrote:\n\n> > > Am I right in thinking that UNION (without ALL) is defined to do a\n> > > DISTINCT on its result, so that duplicates are removed even if the\n> > > duplicates both came from the same source table? That's what 6.4.2\n> > > does, but I do not know if it's strictly kosher according to the SQL\n> > > spec.\n> \n> Yes, this is the right behavior according to SQL92...\n\nIn which case something should put a DISTINCT on queries using UNION...\nsince making T_Query nodes never equal is a deoptimization.\n\nTaral\n\n",
"msg_date": "Sun, 9 May 1999 15:52:17 -0500 (CDT)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> >>>> Am I right in thinking that UNION (without ALL) is defined to do a\n> >>>> DISTINCT on its result, so that duplicates are removed even if the\n> >>>> duplicates both came from the same source table? That's what 6.4.2\n> >>>> does, but I do not know if it's strictly kosher according to the SQL\n> >>>> spec.\n> \n> > (Just in case this is still active)\n> \n> > Yes, this is the right behavior according to SQL92...\n> \n> OK, then 6.5 is still broken :-(. I know a lot more about the planner\n> than I did then, so I will see if I can fix it \"right\" --- that is,\n> without taking out equal()'s ability to detect equality of Query nodes.\n> \n> If that seems too hard/risky, I will just lobotomize equal() instead.\n> \n> Thanks for the reminder, Bruce --- I had forgotten about this issue.\n\nHey, that's why I keep 500 messages in my PostgreSQL mailbox.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:26:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "\nCan someone comment on this? Is it still an issue with cnf'ify removing\nduplicate cases?\n\n\n\n> The equal() updates I installed yesterday (to fix the \"don't know\n> whether nodes of type 600 are equal\" problem) have had an unintended\n> side effect.\n> \n> Am I right in thinking that UNION (without ALL) is defined to do a\n> DISTINCT on its result, so that duplicates are removed even if the\n> duplicates both came from the same source table? That's what 6.4.2\n> does, but I do not know if it's strictly kosher according to the SQL\n> spec.\n> \n> If so, the code is now busted, because with the equal() extension in\n> place, cnfify() is able to recognize and remove duplicate select\n> clauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\n> just \"SELECT xxx\" ... and that doesn't mean the same thing.\n> \n> An actual example: given the data\n> \n> play=> select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> Under 6.4.2 I get:\n> \n> play=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> Note lack of duplicate \"1\". Under current sources I get:\n> \n> ttest=> select a from tt union select a from tt;\n> a\n> -\n> 1\n> 1\n> 2\n> 3\n> (4 rows)\n> \n> since the query is effectively reduced to just \"select a from tt\".\n> \n> Assuming that 6.4.2 is doing the Right Thing, I see two possible fixes:\n> (1) simplify equal() to say that two T_Query nodes are never equal, or\n> (2) modify the planner so that the \"select distinct\" operation is\n> inserted explicitly, and will thus happen even if the UNIONed selects\n> are collapsed into just one.\n> \n> (1) is a trivial fix of course, but it worries me --- maybe someday\n> we will need equal() to give an honest answer for Query nodes.\n> But I don't have the expertise to apply (2), and it seems like rather\n> a lot of work for a boundary case that isn't really interesting in\n> practice.\n> \n> Comments? *Is* 6.4.2 behaving according to the SQL spec?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 21:51:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this? Is it still an issue with cnf'ify removing\n> duplicate cases?\n\nThis is still an open bug. I looked at it and concluded that a clean\nsolution probably requires modifying the parsetree structure for UNIONs.\n(Currently, UNION, INTERSECT, EXCEPT are converted to OR, AND, AND NOT\nstructures with queries as arguments ... which would be OK, except that\nthe semantics aren't really the same, and the difference between UNION\nand UNION ALL doesn't fit into that operator set at all.)\n\nThat looked like a lot of work for what is surely a low-priority bug,\nso I haven't gotten around to it yet. Need more round tuits.\n\nIn the meantime, it should be on the TODO list:\n * SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n\n\t\t\tregards, tom lane\n\n>> The equal() updates I installed yesterday (to fix the \"don't know\n>> whether nodes of type 600 are equal\" problem) have had an unintended\n>> side effect.\n>> \n>> Am I right in thinking that UNION (without ALL) is defined to do a\n>> DISTINCT on its result, so that duplicates are removed even if the\n>> duplicates both came from the same source table? That's what 6.4.2\n>> does, but I do not know if it's strictly kosher according to the SQL\n>> spec.\n>> \n>> If so, the code is now busted, because with the equal() extension in\n>> place, cnfify() is able to recognize and remove duplicate select\n>> clauses. That is, \"SELECT xxx UNION SELECT xxx\" will be folded to\n>> just \"SELECT xxx\" ... and that doesn't mean the same thing.\n>> \n>> An actual example: given the data\n>> \n>> play=> select a from tt;\n>> a\n>> -\n>> 1\n>> 1\n>> 2\n>> 3\n>> (4 rows)\n>> \n>> Under 6.4.2 I get:\n>> \n>> play=> select a from tt union select a from tt;\n>> a\n>> -\n>> 1\n>> 2\n>> 3\n>> (3 rows)\n>> \n>> Note lack of duplicate \"1\". Under current sources I get:\n>> \n>> ttest=> select a from tt union select a from tt;\n>> a\n>> -\n>> 1\n>> 1\n>> 2\n>> 3\n>> (4 rows)\n>> \n>> since the query is effectively reduced to just \"select a from tt\".\n>> \n>> Assuming that 6.4.2 is doing the Right Thing, I see two possible fixes:\n>> (1) simplify equal() to say that two T_Query nodes are never equal, or\n>> (2) modify the planner so that the \"select distinct\" operation is\n>> inserted explicitly, and will thus happen even if the UNIONed selects\n>> are collapsed into just one.\n>> \n>> (1) is a trivial fix of course, but it worries me --- maybe someday\n>> we will need equal() to give an honest answer for Query nodes.\n>> But I don't have the expertise to apply (2), and it seems like rather\n>> a lot of work for a boundary case that isn't really interesting in\n>> practice.\n>> \n>> Comments? *Is* 6.4.2 behaving according to the SQL spec?\n>> \n>> regards, tom lane\n>> \n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 07 Jul 1999 09:46:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Oops, I seem to have changed UNION's behavior "
}
] |
[
{
"msg_contents": "On Fri, Feb 05, 1999 at 02:02:20PM +0000, Thomas G. Lockhart wrote:\n> Great. Bruce and scrappy, whoever applies this will need to add this as\n> a new file in cvs. At the moment the file is named y.tab.c (and\n> y.tab.h), but we might want to consider renaming it as is done in the\n> main parser to keep the names unique within the installation (for\n> example, y.tab.c is probably also a temporary file in\n> src/backend/parser/).\n\nI did that already. They are named preproc.c resp. preproc.h now.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sun, 7 Feb 1999 20:04:36 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "> I did that already. They are named preproc.c resp. preproc.h now.\n\nGreat! I'm working with an aging development tree, so didn't see the\nchange...\n\n - Tom\n",
"msg_date": "Mon, 08 Feb 1999 15:44:44 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
},
{
"msg_contents": "On Mon, Feb 08, 1999 at 03:44:44PM +0000, Thomas G. Lockhart wrote:\n> > I did that already. They are named preproc.c resp. preproc.h now.\n> \n> Great! I'm working with an aging development tree, so didn't see the\n> change...\n\nIt hasn't been comitted so you couldn't see it even with an up-to-date copy.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 8 Feb 1999 19:53:55 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEC OSF1 Compilation problems"
}
] |
[
{
"msg_contents": "I seem to recall someone recently was about to start work on libpq++.\n\nIs that correct? If not, I will need to do it, but I don't want to\nduplicate effort.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But the LORD is in his holy temple; let all the earth \n keep silence before him.\" Habakkuk 2:20 \n\n\n",
"msg_date": "Sun, 07 Feb 1999 22:29:44 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq++"
},
{
"msg_contents": "\nOn 07-Feb-99 Oliver Elphick wrote:\n> I seem to recall someone recently was about to start work on libpq++.\n> \n> Is that correct? If not, I will need to do it, but I don't want to\n> duplicate effort.\n\nI had made the comment but I don't know if anyone else did also. I still\nhaven't had the time to even look at it but would still like to.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sun, 07 Feb 1999 17:33:18 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] libpq++"
}
] |
[
{
"msg_contents": "I have added code to psql so \\p\\g, or other backslash combinations work\non the same line.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Feb 1999 23:29:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Added \\p\\g to psql"
}
] |
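A minimal usage sketch of what the change allows, assuming a query has already been typed into the buffer without a terminating semicolon: the two meta-commands can now be stacked on one input line,

    SELECT datname FROM pg_database
    \p\g

so \p echoes the query buffer and \g immediately sends it, in a single step instead of on two separate lines.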
[
{
"msg_contents": "On my todo list I found two missing commands:\n\nexec sql free\nexec sql alloc\n\nI remember seeing them in an (Informix?) example. But I cannot remember what\nthey should do? They do not seem to be standard either.\n\nDoes anyone one know more about this?\n\nAlso there is the exec sql type statement. At least Oracle has this typedef\nequivalent. The syntax is:\n\nexec sql type <c-type> is <ora-type> \n\nIn my examples this is followed by the keyword \"REFERENCE\". I have no idea\nwhether this is required or optional. Does anyone know that? And what does\nit mean?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 8 Feb 1999 11:28:31 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Embedded SQL question"
}
] |
[
{
"msg_contents": "Hello!\n\n I have a database, and there are a lot of tables with serial fields.\nCurrently, pg_dump is not generating code to reload value of correposnding\nsequence.\n Is there any chance pg_dump will generate such code?\n\n Well, to overcome this I am trying to create a function do\ninitialization myself. I am trying to write:\n\nDROP FUNCTION max_id(text, text);\n\nCREATE FUNCTION max_id(text, text) RETURNS int2 AS\n '\n DECLARE\n table ALIAS FOR $1;\n field ALIAS FOR $2;\n myres int;\n BEGIN\n SELECT MAX(field) INTO myres FROM table;\n RETURN myres;\n END;\n '\nLANGUAGE 'plpgsql';\n\nSELECT max_id('motd', 'msg_id');\n\n but when I pass this to postgres I got:\n\nDROP FUNCTION max_id(text, text);\nDROP\n\nCREATE FUNCTION max_id(text, text) RETURNS int2 AS\n '\n DECLARE\n table ALIAS FOR $1;\n field ALIAS FOR $2;\n myres int;\n BEGIN\n SELECT MAX(field) INTO myres FROM table;\n RETURN myres;\n END;\n '\nLANGUAGE 'plpgsql';\nCREATE\n\nSELECT max_id('motd', 'msg_id');\nERROR: parser: parse error at or near \"$2\"\n\n How can I write the function? I don't want to create a function for\nevery sequence, I want a function that takes table and field as parameters.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 8 Feb 1999 13:29:44 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequence dump/reload"
}
] |
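The parse error above is not about the function syntax itself: plpgsql passes its variables into queries as parameters, and a parameter can only stand where a value is allowed, never where a table or column name is required, so the SELECT inside the function chokes on the parameter that lands in the FROM clause. Until pg_dump covers sequence values, a per-sequence reset after reloading is the simpler route. A minimal sketch for the motd example; the sequence name follows the usual <table>_<column>_seq naming that SERIAL produces (as in the article_article_id_seq seen earlier), so substitute the name actually reported by \d:

    -- find the highest key that was loaded
    SELECT max(msg_id) FROM motd;              -- suppose this returns 1234

    -- recreate the sequence so the next nextval() continues past it
    DROP SEQUENCE motd_msg_id_seq;
    CREATE SEQUENCE motd_msg_id_seq START 1235;

If the installed release already provides setval(), the drop/create pair can presumably be collapsed into a single SELECT setval('motd_msg_id_seq', 1234).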
[
{
"msg_contents": "Hello,\n\nI'm interested by writing a JAVA interface, looking like pgaccess, but I\n\nhave some problems because I don't understand the whole links between\ncomponents in postgres. For example, I don't know how to get list of\ntables in a database with a SQL request. Where is it possible to find\nthese informations?\n\nThanks\n\nPascal GEND\n\n\n\n",
"msg_date": "Mon, 08 Feb 1999 11:56:42 +0100",
"msg_from": "Pascal GEND <[email protected]>",
"msg_from_op": true,
"msg_subject": "writing a JAVA interface for postgres"
}
] |
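There is no dedicated SQL statement for this; the table list lives in the system catalog pg_class, one row per relation. A minimal sketch, roughly the kind of query psql itself issues for its \d listing, relying on the convention that system catalogs are the relations whose names start with pg_:

    SELECT relname
    FROM pg_class
    WHERE relkind = 'r'          -- 'r' = ordinary relations (tables)
      AND relname !~ '^pg_'      -- leave out the system catalogs
    ORDER BY relname;

From Java, the JDBC driver's standard DatabaseMetaData.getTables() call should return the same information, to the extent the driver implements it.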
[
{
"msg_contents": "\t> Anyhow, I'm about to start the test, using RELSEG_SIZE set to\n243968 which\n\t> works out to be 1.6Gb. That should stay well away from the\noverflow problem.\n\nHow about using 1 Gb. A lot of Unices have the ulimit set to 1 Gb by\ndefault.\nIt would also be nice for the looks, easy to calculate size, nicer to\nstorage managers,\netc ....\n\nAndreas\n\n",
"msg_date": "Mon, 8 Feb 1999 13:05:39 +0100 ",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
},
{
"msg_contents": "On Mon, 8 Feb 1999, Zeugswetter Andreas IZ5 wrote:\n\n> \t> Anyhow, I'm about to start the test, using RELSEG_SIZE set to\n> 243968 which\n> \t> works out to be 1.6Gb. That should stay well away from the\n> overflow problem.\n> \n> How about using 1 Gb. A lot of Unices have the ulimit set to 1 Gb by\n> default.\n> It would also be nice for the looks, easy to calculate size, nicer to\n> storage managers,\n> etc ....\n\nCould be an idea.\n\nHow about making it a compile time option (like the blocksize), defaulting\nto 1Gb?\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 8 Feb 1999 19:16:57 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 "
}
] |
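For reference, RELSEG_SIZE in the stock sources is a count of disk blocks, so the on-disk segment size is RELSEG_SIZE times the block size. Assuming the default BLCKSZ of 8192 bytes, a 1 Gb segment works out to

    1073741824 / 8192 = 131072 blocks

which would be the natural default value if the limit is made a compile-time option as suggested above.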
[
{
"msg_contents": "Hello!\n\n A week ago I reported a problem with VACUUM ANALYZE on linux and memory\nerror. Three good guys saw my database and two of them for VACUUM problem,\nI hope (Tom Lane and Thomas Lockhart).\n Have you reproduced the case?\n\n I ran VACUUM ANALYZE on table basis. Results:\n\n----- Script -----\nVACUUM ANALYZE sections;\nVACUUM ANALYZE subsections;\nVACUUM ANALYZE positions;\nVACUUM ANALYZE cities;\nVACUUM ANALYZE districts;\nVACUUM ANALYZE shop_types;\nVACUUM ANALYZE shops;\nVACUUM ANALYZE producers;\nVACUUM ANALYZE products;\nVACUUM ANALYZE correspondents;\nVACUUM ANALYZE shop_corr;\nVACUUM ANALYZE money4corr;\nVACUUM ANALYZE raw_maillog;\nVACUUM ANALYZE corr_mail_errors;\nVACUUM ANALYZE pos_rating;\nVACUUM ANALYZE motd;\nVACUUM ANALYZE central;\nVACUUM ANALYZE bad_data;\nVACUUM ANALYZE today_history;\nVACUUM ANALYZE currencies;\nVACUUM ANALYZE currency_exch;\nVACUUM ANALYZE param_int;\nVACUUM ANALYZE param_str;\nVACUUM ANALYZE param_float;\nVACUUM ANALYZE param_datetime;\nVACUUM ANALYZE palette;\nVACUUM ANALYZE units;\nVACUUM ANALYZE mail_ecod;\n\n----- Log -----\nVACUUM\nVACUUM\nVACUUM\nVACUUM\nVACUUM\nVACUUM\nVACUUM\nVACUUM\n\n----- Errors -----\n\nVACUUM ANALYZE sections;\nVACUUM ANALYZE subsections;\nVACUUM ANALYZE positions;\nVACUUM ANALYZE cities;\nVACUUM ANALYZE districts;\nVACUUM ANALYZE shop_types;\nVACUUM ANALYZE shops;\nVACUUM ANALYZE producers;\nVACUUM ANALYZE products;\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n----- End -----\n\n If I remove \"products\" from script, the next table to fail is \"palette\". If\nI remove \"palette\" - all goes well.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 8 Feb 1999 15:37:37 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n> Hello!\n> A week ago I reported a problem with VACUUM ANALYZE on linux and memory\n> error. Three good guys saw my database and two of them for VACUUM problem,\n> I hope (Tom Lane and Thomas Lockhart).\n> Have you reproduced the case?\n\nOh! I'm sorry, I thought I saw a report that someone had already fixed\nthe problem, so I didn't look at it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 11:21:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": ">\n> Oleg Broytmann <[email protected]> writes:\n> > Hello!\n> > A week ago I reported a problem with VACUUM ANALYZE on linux and memory\n> > error. Three good guys saw my database and two of them for VACUUM problem,\n> > I hope (Tom Lane and Thomas Lockhart).\n> > Have you reproduced the case?\n>\n> Oh! I'm sorry, I thought I saw a report that someone had already fixed\n> the problem, so I didn't look at it.\n\n Maybe a little misunderstanding. Oleg reported a memory\n exhaustion problem on COPY FROM in the same context (which\n also happened on large updates). I've tracked that down in\n the executor. It was because his table had a CHECK clause and\n that got stringToNode()'ed for each single tuple.\n\n This problem is fixed in CURRENT along with a speedup of\n factor 2++ for the case of CHECK on large ranges. The check's\n are only once stringToNode()'ed now and live until the\n executor's memory context get's destroyed (the portal level\n the plan is executed in).\n\n I don't know if the same caused the VACUUM problem. Oleg,\n could you please check against the CURRENT source tree if the\n problem still exists?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 17:43:45 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "> I reported a problem with VACUUM ANALYZE on linux and memory\n> error. Three good guys saw my database and two of them for VACUUM \n> problem, I hope (Tom Lane and Thomas Lockhart).\n> Have you reproduced the case?\n\nI understood someone else was starting to look at it, so I have not\n(yet). Can do so later if needed...\n\n - Tom\n",
"msg_date": "Mon, 08 Feb 1999 17:35:02 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Hello!\n\n I'll try CURRENT a bit later (things gonna get real slow these days :).\nBut I am sure these are two different problems.\n\n First, I had memory problem on a big table, and VACUUM ANALYZE problem\non two very small tables (few lines).\n\n Second, I have memory problem on 3 systems - RedHat 5.1 on Pentium,\nDebian 2.0 on Pentium, and Solaris on Ultra-1.\n But I have VACUUM ANALYZE problem only on linucies.\n\n BTW, I noted bot linucies are glibc2-based. It would be interesting to\ntry libc5-based system. May be we can narrow the problem down to\nglibc2-based linux?\n Have someone libc5-based linux ready to test?\n\nOn Mon, 8 Feb 1999, Jan Wieck wrote:\n> >\n> > Oleg Broytmann <[email protected]> writes:\n> > > Hello!\n> > > A week ago I reported a problem with VACUUM ANALYZE on linux and memory\n> > > error. Three good guys saw my database and two of them for VACUUM problem,\n> > > I hope (Tom Lane and Thomas Lockhart).\n> > > Have you reproduced the case?\n> >\n> > Oh! I'm sorry, I thought I saw a report that someone had already fixed\n> > the problem, so I didn't look at it.\n> \n> Maybe a little misunderstanding. Oleg reported a memory\n> exhaustion problem on COPY FROM in the same context (which\n> also happened on large updates). I've tracked that down in\n> the executor. It was because his table had a CHECK clause and\n> that got stringToNode()'ed for each single tuple.\n> \n> This problem is fixed in CURRENT along with a speedup of\n> factor 2++ for the case of CHECK on large ranges. The check's\n> are only once stringToNode()'ed now and live until the\n> executor's memory context get's destroyed (the portal level\n> the plan is executed in).\n> \n> I don't know if the same caused the VACUUM problem. Oleg,\n> could you please check against the CURRENT source tree if the\n> problem still exists?\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n\n\n",
"msg_date": "Tue, 9 Feb 1999 11:13:29 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "> Second, I have memory problem on 3 systems - RedHat 5.1 on Pentium,\n> Debian 2.0 on Pentium, and Solaris on Ultra-1.\n> But I have VACUUM ANALYZE problem only on linucies.\n> BTW, I noted bot linucies are glibc2-based. It would be interesting to\n> try libc5-based system. May be we can narrow the problem down to\n> glibc2-based linux?\n> Have someone libc5-based linux ready to test?\n\nI can test on libc5 if you still see trouble after you have verified\nJan's fixes for your memory exhaution problem.\n\nLet me know...\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 14:55:00 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Hello!\n\nOn Tue, 9 Feb 1999, Thomas G. Lockhart wrote:\n> I can test on libc5 if you still see trouble after you have verified\n> Jan's fixes for your memory exhaution problem.\n\n I've downloaded latest snapshot (9 Feb) and reproduced the problem with\nVACUUM ANALYZE on Debian 2.0 (glibc2).\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Thu, 11 Feb 1999 19:35:35 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n>> I can test on libc5 if you still see trouble after you have verified\n>> Jan's fixes for your memory exhaution problem.\n\n> I've downloaded latest snapshot (9 Feb) and reproduced the problem with\n> VACUUM ANALYZE on Debian 2.0 (glibc2).\n\nI am not able to reproduce the problem on HPUX, using either current\nsources or 6.4.2. Looks like it must be platform specific.\n\nCould you build the backend with -g and send a gdb backtrace from the\ncorefile produced when the crash occurs?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Feb 1999 21:06:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "On Thu, 11 Feb 1999, Tom Lane wrote:\n> I am not able to reproduce the problem on HPUX, using either current\n> sources or 6.4.2. Looks like it must be platform specific.\n\n Of course it is platform-specific. I reported the problem on\nglibc2-based linucies, but the same database works fine (and allows VACUUM\nANALYZE) on sparc-solaris.\n Don't know about libc5 linux - I have no one in hand.\n\n> Could you build the backend with -g and send a gdb backtrace from the\n> corefile produced when the crash occurs?\n\n I'll do it this Saturday.\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Fri, 12 Feb 1999 13:23:15 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Hi!\n\nOn Thu, 11 Feb 1999, Tom Lane wrote:\n> Could you build the backend with -g and send a gdb backtrace from the\n> corefile produced when the crash occurs?\n\n Problem compiling with -g:\n\nmake[3]: Entering directory\n`/usr/local/src/PostgreSQL/postgresql-6.4.2/src/interfaces/ecpg/preproc'\ngcc -I../../../include -I../../../backend -g -Wall -Wmissing-prototypes\n-I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=4 -DPATCHLEVEL=4\n-DINCLUDE_PATH=\\\"/usr/local/stow/pgsql-debug/include\\\" \n-c y.tab.c -o y.tab.o\npreproc.y:2389: parse error at end of input\npreproc.y:20: warning: `struct_level' defined but not used\npreproc.y:22: warning: `QueryIsRule' defined but not used\npreproc.y:23: warning: `actual_type' defined but not used\npreproc.y:24: warning: `actual_storage' defined but not used\npreproc.y:219: warning: `remove_variables' defined but not used\npreproc.y:254: warning: `reset_variables' defined but not used\npreproc.y:263: warning: `add_variable' defined but not used\npreproc.y:332: warning: `make1_str' defined but not used\npreproc.y:341: warning: `make2_str' defined but not used\npreproc.y:353: warning: `cat2_str' defined but not used\npreproc.y:366: warning: `make3_str' defined but not used\npreproc.y:380: warning: `cat3_str' defined but not used\npreproc.y:396: warning: `make4_str' defined but not used\npreproc.y:412: warning: `cat4_str' defined but not used\npreproc.y:431: warning: `make5_str' defined but not used\npreproc.y:449: warning: `cat5_str' defined but not used\npreproc.y:471: warning: `make_name' defined but not used\npreproc.y:481: warning: `output_statement' defined but not used\nmake[3]: *** [y.tab.o] Error 1\nmake[3]: Leaving directory\n`/usr/local/src/PostgreSQL/postgresql-6.4.2/src/interfaces/ecpg/preproc'\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Sat, 13 Feb 1999 14:50:32 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "> > Could you build the backend with -g and send a gdb backtrace from \n> > the corefile produced when the crash occurs?\n> Problem compiling with -g:\n\nI'd be suprised if \"-g\" would do that to you. Are you sure the input\nfile is well-formed?\n\n - Tom\n",
"msg_date": "Sat, 13 Feb 1999 14:41:24 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Hi!\n\nOn Sat, 13 Feb 1999, Thomas G. Lockhart wrote:\n> > > Could you build the backend with -g and send a gdb backtrace from \n> > > the corefile produced when the crash occurs?\n> > Problem compiling with -g:\n> \n> I'd be suprised if \"-g\" would do that to you. Are you sure the input\n> file is well-formed?\n\n May be not enough memory or disk space. In the beginnig of next week\nI'll have new linux box, so I'll retry.\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Sat, 13 Feb 1999 17:50:13 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
{
"msg_contents": "Hello!\n\n Sorry for making this late :(\n\nOn Thu, 11 Feb 1999, Tom Lane wrote:\n> Could you build the backend with -g and send a gdb backtrace from the\n> corefile produced when the crash occurs?\n\n I have compiled with -g, but postgres didn't produce core. Do I need\nsomething special on startup to generate core on crash?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 24 Feb 1999 12:44:48 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n> I have compiled with -g, but postgres didn't produce core. Do I need\n> something special on startup to generate core on crash?\n\nOrdinarily not, but perhaps you have a shell 'limit' setting in place\nthat prevents a corefile from being made? I think csh has such a\nsetting but I forget the details. Anyway, if postmaster is started from\na shell with any limit variables enabled, they will apply to the\nbackends too.\n\nOr you might just not be looking in the right place. Backend crashes\nproduce corefiles in the database subdirectory, eg,\n/usr/local/pgsql/data/base/MyDatabase/core\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Feb 1999 09:56:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
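The shell 'limit' Tom mentions is the per-process core-file size limit, and a backend inherits it from however the postmaster was started; an init script often leaves it at zero, which silently suppresses core files. A small standalone program to inspect and raise it (the C-level equivalent of "ulimit -c unlimited") could look like this:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("core limit: current=%ld max=%ld\n",
           (long) rl.rlim_cur, (long) rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;          /* allow cores up to the hard limit */
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        perror("setrlimit");

    return 0;
}

A process started with a zero soft limit will show "current=0" here, and no backend crash under it can leave a corefile until the limit is raised.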
{
"msg_contents": "Hi!\n\nOn Wed, 24 Feb 1999, Tom Lane wrote:\n\n> Oleg Broytmann <[email protected]> writes:\n> > I have compiled with -g, but postgres didn't produce core. Do I need\n> > something special on startup to generate core on crash?\n> \n> Ordinarily not, but perhaps you have a shell 'limit' setting in place\n> that prevents a corefile from being made? I think csh has such a\n\n I am using bash all the time.\n\n> setting but I forget the details. Anyway, if postmaster is started from\n> a shell with any limit variables enabled, they will apply to the\n> backends too.\n\n Ok, I'll retest this.\n\n> Or you might just not be looking in the right place. Backend crashes\n> produce corefiles in the database subdirectory, eg,\n> /usr/local/pgsql/data/base/MyDatabase/core\n\n I search with find / -name core. I got /dev/core and\n/usr/src/linux/.../core :)\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 24 Feb 1999 18:03:11 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "On Wed, 24 Feb 1999, Oleg Broytmann wrote:\n\n> > Or you might just not be looking in the right place. Backend crashes\n> > produce corefiles in the database subdirectory, eg,\n> > /usr/local/pgsql/data/base/MyDatabase/core\n> \n> I search with find / -name core. I got /dev/core and\n> /usr/src/linux/.../core :)\n\nTry this instead:\n\n# find / -name '*.core'\n\nand you should find the other core dumps.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 24 Feb 1999 10:28:48 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Hi!\n\n I ran postmaster from command line (usually I run it from /etc/init.d/),\nconnected to it and ran VACUUM ANALYZE.\n It worked.\n\n I don't know should I use :) or :( - it failed on production server and\nworked on debugging server...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 24 Feb 1999 18:47:11 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Followup to myself...\n\nOn Wed, 24 Feb 1999, Oleg Broytmann wrote:\n> I ran postmaster from command line (usually I run it from /etc/init.d/),\n> connected to it and ran VACUUM ANALYZE.\n> It worked.\n\n I tested the following way:\n\n1. Run postmaster without parameters; connect and run VACUUM ANALYZE -\nworked.\n\n2. Run postmaster -b -D/usr/local/pgsql/data -o -Fe\n and run VACUUM ANALYZE - worked\n\n3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)\n and run VACUUM ANALYZE - worked\n\n (I took these parameters from script /etc/init.d/postgres)\n\n4. Run /etc/init.d/postgres start\n and run VACUUM ANALYZE - failed, no core file.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 24 Feb 1999 18:58:29 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n> 3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)\n> and run VACUUM ANALYZE - worked\n> (I took these parameters from script /etc/init.d/postgres)\n> 4. Run /etc/init.d/postgres start\n> and run VACUUM ANALYZE - failed, no core file.\n\nSo there is something different about the environment of your postmaster\nwhen it's started by init.d versus when it's started by hand. Now you\njust have to figure out what.\n\nI thought of environment variables, ulimit settings,\nownership/permission settings ... but it's not clear why any of these\nwould affect VACUUM in particular yet leave you able to do other stuff\nsuccessfully. Puzzling.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Feb 1999 13:35:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Hello!\n\nOn Thu, 11 Feb 1999, Tom Lane wrote:\n> Could you build the backend with -g and send a gdb backtrace from the\n> corefile produced when the crash occurs?\n\n I have a problem getting core - postgres didn't produces core. Recently\nI got a suggestion (from Vadim) to attach gdb to a running process and\ndebug it this way. Ok, done.\n To remind of the problem - I have a problem running VACUUM ANALYZE on a\nglibc2 linux (Debian 2.0). On solaris it is Ok (and I got a report it is Ok\non HP-UX).\n Here is the traceback. The problem is in strcoll, don't understand why.\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x40119587 in strcoll ()\n(gdb) where\n#0 0x40119587 in strcoll ()\n#1 0x816cadd in varstr_cmp (arg1=0x4020fccc \" \", len1=0, \n arg2=0x8268604 \" \", len2=0) at varlena.c:511\n#2 0x816b31d in bpcharlt (arg1=0x4020fcc8 \"\\n\", arg2=0x8268600 \"\\n\")\n at varchar.c:504\n#3 0x80a1bb7 in vc_attrstats (onerel=0x8264378, vacrelstats=0x8262a10, \n tuple=0x4020fc90) at vacuum.c:1630\n#4 0x809ffec in vc_scanheap (vacrelstats=0x8262a10, onerel=0x8264378, \n vacuum_pages=0xbfffcf84, fraged_pages=0xbfffcf78) at vacuum.c:806\n#5 0x809f773 in vc_vacone (relid=32600, analyze=1 '\\001', va_cols=0x0)\n at vacuum.c:504\n#6 0x809ed84 in vc_vacuum (VacRelP=0x0, analyze=1 '\\001', va_cols=0x0)\n at vacuum.c:257\n#7 0x809ec3e in vacuum (vacrel=0x0, verbose=0 '\\000', analyze=1 '\\001', \n va_spec=0x0) at vacuum.c:160\n#8 0x8138c43 in ProcessUtility (parsetree=0x82464e0, dest=Remote)\n at utility.c:644\n#9 0x8135388 in pg_exec_query_dest (query_string=0xbfffd0d4 \"vacuum\nanalyze;\", \n dest=Remote, aclOverride=0 '\\000') at postgres.c:758\n#10 0x8135264 in pg_exec_query (query_string=0xbfffd0d4 \"vacuum analyze;\")\n at postgres.c:699\n#11 0x813677e in PostgresMain (argc=6, argv=0xbffff15c, real_argc=8, \n real_argv=0xbffffafc) at postgres.c:1645\n#12 0x8111561 in DoBackend (port=0x82110f0) at postmaster.c:1541\n#13 0x8110f35 in BackendStartup (port=0x82110f0) at postmaster.c:1312\n#14 0x811012f in ServerLoop () at postmaster.c:757\n#15 0x810fb5e in PostmasterMain (argc=8, argv=0xbffffafc) at\npostmaster.c:563\n#16 0x80c8c32 in main (argc=8, argv=0xbffffafc) at main.c:93\n\n> \n> \t\t\tregards, tom lane\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 15 Mar 1999 15:23:19 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n> To remind of the problem - I have a problem running VACUUM ANALYZE on a\n> glibc2 linux (Debian 2.0). On solaris it is Ok (and I got a report it is Ok\n> on HP-UX).\n> Here is the traceback. The problem is in strcoll, don't understand why.\n\n> Program received signal SIGSEGV, Segmentation fault.\n> 0x40119587 in strcoll ()\n> (gdb) where\n> #0 0x40119587 in strcoll ()\n> #1 0x816cadd in varstr_cmp (arg1=0x4020fccc \" \", len1=0, \n> arg2=0x8268604 \" \", len2=0) at varlena.c:511\n> #2 0x816b31d in bpcharlt (arg1=0x4020fcc8 \"\\n\", arg2=0x8268600 \"\\n\")\n> at varchar.c:504\n\nSure looks like strcoll is broken on your platform. Build a little test\nprogram and see if strcoll(\"\", \"\") coredumps ... if the traceback is\naccurate, that's what was getting passed to it.\n\nBTW, why in the world is varstr_cmp written to duplicate the strings\nit's passed, rather than just handing them off to strcoll() as-is?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Mar 1999 10:18:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Hi!\n\nOn Mon, 15 Mar 1999, Tom Lane wrote:\n> Sure looks like strcoll is broken on your platform. Build a little test\n> program and see if strcoll(\"\", \"\") coredumps ... if the traceback is\n> accurate, that's what was getting passed to it.\n\n Will test it...\n\n> BTW, why in the world is varstr_cmp written to duplicate the strings\n> it's passed, rather than just handing them off to strcoll() as-is?\n\n I got the code... No, I didn't \"got\" it - I found the code. Initially it\nwas written by Oleg Bartunov, and I extended it a bit for all char types\n(initial code worked only with \"text\" type).\n\n> \t\t\tregards, tom lane\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 15 Mar 1999 18:25:13 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "> Oleg Broytmann <[email protected]> writes:\n> > To remind of the problem - I have a problem running VACUUM ANALYZE on a\n> > glibc2 linux (Debian 2.0). On solaris it is Ok (and I got a report it is Ok\n> > on HP-UX).\n> > Here is the traceback. The problem is in strcoll, don't understand why.\n> \n> > Program received signal SIGSEGV, Segmentation fault.\n> > 0x40119587 in strcoll ()\n> > (gdb) where\n> > #0 0x40119587 in strcoll ()\n> > #1 0x816cadd in varstr_cmp (arg1=0x4020fccc \" \", len1=0, \n> > arg2=0x8268604 \" \", len2=0) at varlena.c:511\n> > #2 0x816b31d in bpcharlt (arg1=0x4020fcc8 \"\\n\", arg2=0x8268600 \"\\n\")\n> > at varchar.c:504\n> \n> Sure looks like strcoll is broken on your platform. Build a little test\n> program and see if strcoll(\"\", \"\") coredumps ... if the traceback is\n> accurate, that's what was getting passed to it.\n> \n> BTW, why in the world is varstr_cmp written to duplicate the strings\n> it's passed, rather than just handing them off to strcoll() as-is?\n\nIt appears he is unsure whether the string is null-terminated, and he is\nright in not assuming that. We have strncmp, but there is no strncoll().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Mar 1999 10:34:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux"
},
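Bruce's point is why varstr_cmp() cannot hand its arguments to strcoll() directly: the backend's variable-length values are counted, not NUL-terminated, and there is no strncoll(). A self-contained sketch of the copy-then-collate approach (illustrative only, not the backend's actual code) looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Compare two counted (possibly non-NUL-terminated) strings with the
 * locale's collation by copying them into temporary terminated buffers. */
static int counted_strcoll(const char *a, int lena, const char *b, int lenb)
{
    char *ta = malloc(lena + 1);
    char *tb = malloc(lenb + 1);
    int   r;

    if (ta == NULL || tb == NULL) {
        free(ta); free(tb);
        return 0;                 /* real code would report an error here */
    }
    memcpy(ta, a, lena); ta[lena] = '\0';
    memcpy(tb, b, lenb); tb[lenb] = '\0';
    r = strcoll(ta, tb);
    free(ta); free(tb);
    return r;
}

int main(void)
{
    const char data[] = {'a', 'b', 'c', 'X'};   /* counted data, no terminator */
    printf("%d\n", counted_strcoll(data, 3, "abd", 3));
    return 0;
}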
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> BTW, why in the world is varstr_cmp written to duplicate the strings\n>> it's passed, rather than just handing them off to strcoll() as-is?\n\n> It appears he is unsure whether the string is null-terminated, and he is\n> right in not assuming that.\n\nOh, of course. Excuse momentary brain fade :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Mar 1999 10:37:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "On Mon, 15 Mar 1999, Tom Lane wrote:\n> Sure looks like strcoll is broken on your platform. Build a little test\n> program and see if strcoll(\"\", \"\") coredumps ... if the traceback is\n> accurate, that's what was getting passed to it.\n\n#include <stdio.h>\n#include <string.h>\n\nint main()\n{\n printf(\"strcoll: %d\\n\", strcoll(\"\", \"\"));\n\treturn 0;\n}\n\n prints: \"strcoll: 0\". No core.\n\n> \t\t\tregards, tom lane\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 15 Mar 1999 18:51:32 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
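Oleg's test exercises strcoll() in the default "C" locale only. Since a backend built with locale support runs under whatever LC_COLLATE the postmaster inherited at startup, a slightly closer reproduction would switch locales first. This is only a guess at where the glibc2 difference might hide, but it also fits the earlier observation that the failure depended on how the postmaster was started:

#include <stdio.h>
#include <string.h>
#include <locale.h>

int main(void)
{
    /* Adopt the collation locale from the environment; an init script
     * and an interactive shell may hand the process different settings. */
    const char *loc = setlocale(LC_COLLATE, "");

    printf("LC_COLLATE: %s\n", loc ? loc : "(setlocale failed)");
    printf("strcoll(\"\", \"\") = %d\n", strcoll("", ""));
    printf("strcoll(\"abc\", \"abd\") = %d\n", strcoll("abc", "abd"));
    return 0;
}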
{
"msg_contents": "Hello!\n\n> > To remind of the problem - I have a problem running VACUUM ANALYZE on a\n> > glibc2 linux (Debian 2.0). On solaris it is Ok (and I got a report it is Ok\n> > on HP-UX).\n\n I have upgradede Debian 2.0 to 2.1 and the problem mysteriously gone\naway!\n\n I am using the word \"mysteriously\" because:\n-- I have not upgraded kernel (yet) - I am still running 2.0.34\n-- I have not upgraded glibc2 - both 2.0 and 2.1 are based upon libc-2.0.7\n-- I have not upgraded nor recompiled postgres.\n\n Yes, this fix my problem, but what next? It seems suspicious to me, so I\ncan expect other glibc2-related problems.\n BTW, I already reported yet another problem with glibc2 - bug with\ncomplex join (actually, not so complex - 4 tables). I overcame the error by\nrewriting the query into correlated subquery with EXISTS. I'll test if\nDebian upgrade \"mysteriously\" fix the problem too.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n",
"msg_date": "Sun, 21 Mar 1999 18:42:59 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
},
{
"msg_contents": "Hi!\n\n Followup to myself...\n\nOn Sun, 21 Mar 1999, Oleg Broytmann wrote:\n> BTW, I already reported yet another problem with glibc2 - bug with\n> complex join (actually, not so complex - 4 tables). I overcame the error by\n> rewriting the query into correlated subquery with EXISTS. I'll test if\n> Debian upgrade \"mysteriously\" fix the problem too.\n\n No, the join still bugs (it return 0 rows, where rewrote query returns\nsome number of rows, and these rows seems to me pretty good - either there\nis a bug in join or I rewrote the query in a wrong way, and got correct\nresults :)\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Sun, 21 Mar 1999 18:50:29 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux "
}
] |
[
{
"msg_contents": "Could anyone tell me why a term like 'int' is not a keyword?\n\nAlso I need a list of postgresql types so I know which ones should be\naccepted by ecpg.\n\nFinally I wonder whether we should make all ecpg keywords keywords for the\nbackend too. Or else we could end up with queries expressable via psql but\nnot via ecpg.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 8 Feb 1999 13:59:54 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keywords"
},
{
"msg_contents": "> Could anyone tell me why a term like 'int' is not a keyword?\n> \n> Also I need a list of postgresql types so I know which ones should be\n> accepted by ecpg.\n\nWe don't reserve the type names as keywords, and because they can create\ntheir own types, it wouldn't make sense.\n\n> \n> Finally I wonder whether we should make all ecpg keywords keywords for the\n> backend too. Or else we could end up with queries expressable via psql but\n> not via ecpg.\n\nSeems like more work than it's worth, no? Why not have psql queries\nthat can't be done in ecpg. Most commercial embedded sql's have a more\nlimited keyword set, don't they?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 13:15:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "> Could anyone tell me why a term like 'int' is not a keyword?\n\nWhat Bruce sez...\n\n> Also I need a list of postgresql types so I know which ones should be\n> accepted by ecpg.\n\nCheck the chapter on data types in the new html/hardcopy User's Guide.\nBut since you can define new types, I'm not sure whatever you are\nplanning is general enough. The main parser gram.y has to support\nseveral different kinds of type syntax, for SQL92 date/time (e.g. TIME\nWITH TIME ZONE), character strings (e.g. CHARACTER VARYING, numeric\ntypes (e.g. FLOAT(6)) and others (e.g. INTEGER).\n\n> Finally I wonder whether we should make all ecpg keywords keywords for \n> the backend too. Or else we could end up with queries expressable via \n> psql but not via ecpg.\n\nNot out of the question. We could then share keywords.c between the two\ninterfaces.\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 03:02:32 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "On Mon, Feb 08, 1999 at 01:15:09PM -0500, Bruce Momjian wrote:\n> We don't reserve the type names as keywords, and because they can create\n> their own types, it wouldn't make sense.\n\nI don't exactly understand that. For instance the 'int' keyword will still\nbe reserved, isn't it?\n\n> Seems like more work than it's worth, no? Why not have psql queries\n> that can't be done in ecpg. Most commercial embedded sql's have a more\n> limited keyword set, don't they?\n\nI have no idea. Anyone else?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 9 Feb 1999 10:48:07 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> On Mon, Feb 08, 1999 at 01:15:09PM -0500, Bruce Momjian wrote:\n> > We don't reserve the type names as keywords, and because they can create\n> > their own types, it wouldn't make sense.\n> \n> I don't exactly understand that. For instance the 'int' keyword will still\n> be reserved, isn't it?\n> \n\nJust tested:\n\nhannu=> create table int(int int);\nCREATE\n\nThough:\n\nhannu=> create table int4(int4 int4);\nERROR: TypeCreate: type int4 already defined\n \nSo it's probably not reserved ;)\n\n--------------\nHannu\n",
"msg_date": "Wed, 10 Feb 1999 01:16:16 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Michael Meskes wrote:\n> >\n> > On Mon, Feb 08, 1999 at 01:15:09PM -0500, Bruce Momjian wrote:\n> > > We don't reserve the type names as keywords, and because they can \n> > > create their own types, it wouldn't make sense.\n> > I don't exactly understand that. For instance the 'int' keyword will \n> > still be reserved, isn't it?\n> hannu=> create table int(int int);\n> CREATE\n> hannu=> create table int4(int4 int4);\n> ERROR: TypeCreate: type int4 already defined\n> So it's probably not reserved ;)\n\nINT is an SQL92 reserved word. But it is not a reserved word in\nPostgres, since the usage as a reserved word would be exclusively as a\ntype name. In Postgres, the parser does not require a type name to be\nexplicitly defined as a keyword (which would make it a de facto reserved\nword) since we allow type extensibility. Parsing it explicitly as a\nkeyword does not buy us any new functionality (since we allow type names\nwhich are definitely *not* keywords anyway), so we don't do it.\n\nHowever, it is handled in a special way: in contexts where one would\nexpect a type name, \"int\" is translated to \"int4\" explicitly (very early\non, from gram.y). Otherwise it is not translated.\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 02:28:10 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
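The special handling Thomas describes happens inside the backend grammar, where a few SQL92 type names are rewritten to internal names while everything else is passed through so user-defined types keep working. The standalone sketch below shows the general idea only; the mapping table is illustrative, not the actual list in gram.y.

#include <stdio.h>
#include <string.h>

/* Map a handful of SQL92 type names to internal names; anything
 * unrecognized is passed through unchanged, so user-defined types
 * need no keyword entry at all. */
static const char *xlate_sql_type(const char *name)
{
    static const char *map[][2] = {
        { "int",      "int4"   },
        { "integer",  "int4"   },
        { "smallint", "int2"   },
        { "float",    "float8" },
        { "real",     "float4" },
    };
    size_t i;

    for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
        if (strcmp(name, map[i][0]) == 0)
            return map[i][1];
    return name;                  /* unknown: assume a user-defined type */
}

int main(void)
{
    printf("%s\n", xlate_sql_type("int"));      /* -> int4 */
    printf("%s\n", xlate_sql_type("mytype"));   /* -> mytype, untouched */
    return 0;
}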
{
"msg_contents": "On Wed, Feb 10, 1999 at 02:28:10AM +0000, Thomas G. Lockhart wrote:\n> However, it is handled in a special way: in contexts where one would\n> expect a type name, \"int\" is translated to \"int4\" explicitly (very early\n> on, from gram.y). Otherwise it is not translated.\n\nAnd int4 is reserved? Is is not in keywords.c though.\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 10 Feb 1999 07:47:56 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "> > However, it is handled in a special way: in contexts where one would\n> > expect a type name, \"int\" is translated to \"int4\" explicitly (very \n> > early on, from gram.y). Otherwise it is not translated.\n> And int4 is reserved? Is is not in keywords.c though.\n\nNo! That's the point, really; there are very few keywords which are type\nnames. afaik the only cases where type names show up as keywords are\nwhere SQL92 syntax requires special handling, such as \"TIMESTAMP WITH\nTIME ZONE\" and \"NATIONAL CHARACTER VARYING\". And even in those cases\nI've tried to allow as broad a usage as possible, so even though\nsomething is a keyword it may be allowed as a column name, type name, or\nidentifier unless prohibited by the yacc one-token-lookahead\nlimitations.\n\nLook in the hardcopy or html User's Guide for the chapter on \"Syntax\",\nwhich lists the SQL92 and Postgres reserved words. I neglected to add\nthat chapter to the \"integrated docs\" (postgres.html) in the last\nrelease, so you will need to look at user.html to find it. Also, it\ndoesn't include recent changes from Vadim et al for MVCC, which I expect\nwill add a few keywords eventually.\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 15:11:38 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> On Wed, Feb 10, 1999 at 02:28:10AM +0000, Thomas G. Lockhart wrote:\n> > However, it is handled in a special way: in contexts where one would\n> > expect a type name, \"int\" is translated to \"int4\" explicitly (very early\n> > on, from gram.y). Otherwise it is not translated.\n> \n> And int4 is reserved? Is is not in keywords.c though.\n\nNo it's not, it just happens that it is already defined as a type, \nand in postgres defining a table also defines a type. \n\nIn some senses table and type in postgres mean the same thing.\n\n----------------------\nHannu\n",
"msg_date": "Wed, 10 Feb 1999 17:23:51 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Keywords"
},
{
"msg_contents": "On Wed, Feb 10, 1999 at 03:11:38PM +0000, Thomas G. Lockhart wrote:\n> No! That's the point, really; there are very few keywords which are type\n> names. afaik the only cases where type names show up as keywords are\n\nI see.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 10 Feb 1999 19:30:34 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Keywords"
}
] |
[
{
"msg_contents": "While update table whithin fetch forward loop \ncause infinite fetching of updated touple?\n\n(code fragment below cause infinite \ndisplayng record with oid 20864)\n\n\n----------------------------------------------------------------------------\nPQexec(_conn, \"BEGIN\");\nPQexec(_conn, \"DECLARE curr1 CURSOR FOR select oid, name from domains;\");\n\nwhile(1)\n{\n _res = PQexec(_conn, \"FETCH FORWARD 1 IN curr1\");\n if( PQresultStatus(_res) != PGRES_TUPLES_OK ) break;\n\n PQexec(_conn, \"update domains set type = 3 where oid = 20864\" );\n printf(\"oid: %s name: %s\\n\", PQgetvalue(_res,0,0),PQgetvalue(_res,0,1));\n}\n---------------------------------------------------------------------------\n\n-- \nDmitry Samersoff\nDM\\S, [email protected], AIM: Samersoff\nhttp://devnull.wplus.net\n\n",
"msg_date": "Mon, 8 Feb 1999 16:08:43 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq questuion"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n\n>\n> While update table whithin fetch forward loop\n> cause infinite fetching of updated touple?\n>\n> (code fragment below cause infinite\n> displayng record with oid 20864)\n>\n>\n> ----------------------------------------------------------------------------\n> PQexec(_conn, \"BEGIN\");\n> PQexec(_conn, \"DECLARE curr1 CURSOR FOR select oid, name from domains;\");\n>\n> while(1)\n> {\n> _res = PQexec(_conn, \"FETCH FORWARD 1 IN curr1\");\n> if( PQresultStatus(_res) != PGRES_TUPLES_OK ) break;\n>\n> PQexec(_conn, \"update domains set type = 3 where oid = 20864\" );\n> printf(\"oid: %s name: %s\\n\", PQgetvalue(_res,0,0),PQgetvalue(_res,0,1));\n> }\n> ---------------------------------------------------------------------------\n\n Which Postgres version?\n\n I guess it is a side effect from the visibility of tuples\n (records) in conjunction with portals (cursors).\n\n The reason must be that the the UPDATE inside the loop add's\n new tuples to the table at it's end. It also issues a\n CommandCounterIncrement(), so the new tuples get visible to\n the already running scan for the portal.\n\n I planned to get my hands onto the visibility code in tqual.c\n after v6.5 to prepare the system for deferred queries. I\n have some things about it in mind, but must discuss the\n details with Vadim before I start implementing it. They\n interfere with MVCC.\n\n Should I go for it before v6.5?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 15:46:57 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq questuion"
},
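Given the visibility behaviour Jan describes, one client-side way to avoid re-fetching rows you have just updated is to separate the passes: read everything from the cursor first, then issue the UPDATEs. A hedged sketch against Dmitry's 'domains' table follows; the connection string is a placeholder and error handling is kept minimal.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");      /* adjust as needed */
    PGresult *res;
    char      query[256];
    char    **oids = NULL;
    int       n = 0;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE curr1 CURSOR FOR select oid, name from domains"));

    /* Pass 1: only read; remember which rows need updating. */
    for (;;) {
        res = PQexec(conn, "FETCH FORWARD 1 IN curr1");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
            PQclear(res);
            break;
        }
        printf("oid: %s name: %s\n", PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
        oids = realloc(oids, (n + 1) * sizeof(char *));
        oids[n++] = strdup(PQgetvalue(res, 0, 0));
        PQclear(res);
    }
    PQclear(PQexec(conn, "CLOSE curr1"));

    /* Pass 2: the cursor is gone, so the updates cannot feed back into it. */
    for (i = 0; i < n; i++) {
        sprintf(query, "update domains set type = 3 where oid = %s", oids[i]);
        PQclear(PQexec(conn, query));
        free(oids[i]);
    }
    free(oids);

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}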
{
"msg_contents": "Hello all,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> Sent: Monday, February 08, 1999 11:47 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [HACKERS] libpq questuion\n> \n> \n> Dmitry Samersoff wrote:\n> \n> >\n> > While update table whithin fetch forward loop\n> > cause infinite fetching of updated touple?\n> >\n> > (code fragment below cause infinite\n> > displayng record with oid 20864)\n> >\n> >\n> > \n> ------------------------------------------------------------------\n> ----------\n> > PQexec(_conn, \"BEGIN\");\n> > PQexec(_conn, \"DECLARE curr1 CURSOR FOR select oid, name from \n> domains;\");\n> >\n> > while(1)\n> > {\n> > _res = PQexec(_conn, \"FETCH FORWARD 1 IN curr1\");\n> > if( PQresultStatus(_res) != PGRES_TUPLES_OK ) break;\n> >\n> > PQexec(_conn, \"update domains set type = 3 where oid = 20864\" );\n> > printf(\"oid: %s name: %s\\n\", \n> PQgetvalue(_res,0,0),PQgetvalue(_res,0,1));\n> > }\n> > \n> ------------------------------------------------------------------\n> ---------\n> \n> Which Postgres version?\n> \n> I guess it is a side effect from the visibility of tuples\n> (records) in conjunction with portals (cursors).\n> \n> The reason must be that the the UPDATE inside the loop add's\n> new tuples to the table at it's end. It also issues a\n> CommandCounterIncrement(), so the new tuples get visible to\n> the already running scan for the portal.\n>\n\nCurrent cursors are strangely sensitive in some cases.\nSo 4 months ago,I requested INSENSITIVE cursors..\nI'm very happy if you will implement INSENSITIVE cursors.\n\nAnd all cursors will be INSENSITIVE even if there are no \nINSENSITIVE keywords ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 9 Feb 1999 17:03:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] libpq questuion"
}
] |
[
{
"msg_contents": "D'Arcy wrote\n> \n> Thus spake Tom Lane\n> > I'd suggest setting the limit a good deal less than 2Gb to avoid any\n> > risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.\n> \n> Why not make it substantially lower by default? Makes it easier to split\n> a database across spindles. Even better, how about putting extra extents\n> into different directories like data/base.1, data/base.2, etc? Then as\n> the database grows you can add drives, move the extents into them and\n> mount the new drives. The software doesn't even notice the change.\n> \n> Just a thought.\n> \n\nA good one. Could be extended to large objects, too. One of my reasons\nfor not using large objects is that they all end up in the same directory\n(with all the other data files). Things work much better if the number\nof files in a directory is kept to a 3 digit value. Plus depending\non how the subdirectories are assigned it makes it easier to split\nacross drives. Hashing the oid to an 8 bit value might be a start.\n\nWith data tables and indexes it would still be nice to retain the\nhuman-understandable names. \n\n-- cary\n",
"msg_date": "Mon, 8 Feb 1999 08:15:54 -0500 (EST)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0"
}
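Cary's large-object idea boils down to deriving a small bucket number from the object's oid and using it as a subdirectory, so no single directory has to hold every file. Below is a standalone sketch of such a layout; the path scheme and the hash are made up for illustration and are not how the backend currently names these files.

#include <stdio.h>

typedef unsigned int Oid;

/* Hash the oid down to 8 bits and use it as a subdirectory, so each
 * directory ends up with roughly 1/256 of the large-object files. */
static void lo_path(char *buf, size_t bufsize, const char *datadir, Oid loid)
{
    unsigned int bucket = (loid ^ (loid >> 8) ^ (loid >> 16)) & 0xFF;
    snprintf(buf, bufsize, "%s/lobj/%02x/xinv%u", datadir, bucket, loid);
}

int main(void)
{
    char path[256];
    lo_path(path, sizeof(path), "/usr/local/pgsql/data/base/mydb", 20864);
    printf("%s\n", path);
    return 0;
}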
] |
[
{
"msg_contents": "\nI brought this up long ago, and the idea wasn't well received,\nbut now, with both lots of people using PostgreSQL in production\nenvironments AND massive (all good) improvements to the system\ngoing on, is it time to consider split stable/development lines\nof development? The cries for a 6.4.3 seem to indicate some\ninterest. 6.5 looks like a BIG change.\n\n-- cary\n",
"msg_date": "Mon, 8 Feb 1999 08:20:24 -0500 (EST)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Version numbering"
}
] |
[
{
"msg_contents": "Jan,\n\tMy company Rakekniven provides professional support\npackages for our business line of PostgreSQL servers. But\nthis is in addition to our servers that we build and test\nby hand. In other words, the servers that we warranty\nhave been thoroughly tested (a generation behind) and are\na combination of hardware/software that we feel comfortable\nwith. Needless to say we spell out in detail, in the\nmaintenance contract, what the limitations of the software\nare and what actions we are willing to provide to the\ncustomer, including, replacing the server.\n\tBut you must also remember that all software companies,\nincluding Microsoft, warrant only the media the software\ncomes on and take absolutely no responsibility for the\nuse (damage) that may arise from using the software. Take\na look at one of your professional software licenses and\nread it. You will find that according to these licenses\nthat most commercial software is no better than shareware\n(License wise).\n\n\tThat's why, in the licenses, you see these big clauses \nthat say this software is not suitable for use in Nuclear \nPower reactor's, etc... As an example, read one of IBM's\nMainframe License's, IBM only guaranties that the hardware\nis free from all major defect's, without defining what those\ndefect's are. And IBM reserved the right to replace that\nhardware at there discretion.\n\nD. Gowin\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: Monday, February 08, 1999 9:12 AM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Commercial support, was Re: [HACKERS] v6.4.3 ?\n\n\nTerry Mackintosh wrote:\n\n>\n> Hi all\n>\n> > > That's the reason. One of the biggest drawbacks against\n> > > Postgres is (for many companies at least), that you can't buy\n> > > support.\n>\n> IMHO ...\n>\n> Well, yes one can, one may just need to look around a bit... and pay\n> commercial support prices.\n>\n> Example:\n> As for my self I feel confident that I could provide such support, having\n> been using Postgres+ since Postgres 0.95? (3?4 years ago?). I charge\n> $25/hour, but have been considering going to $30/hour. While I've yet to\n> get a PostgreSQL specific job, I have had some other Linux based jobs.\n>\n> [...]\n\n Nice idea.\n\n But a word of caution seems appropriate.\n\n Commercial support doesn't mean only that you can hire\n someone who takes care about your actual problems with the\n product. It also means that there is someone you can bill if\n that product caused big damage to you (product warranty).\n\n Commercial support doesn't mean only that you hire someone on\n a T/M base (time and material). It also means that you can\n sign a support contract with a regular payment and have\n written down response- and maximum problem-to-fix times,\n escalation levels etc.\n\n For these issues (and there are more) you would need an\n assurance in the background (or a big company). But this\n requires that you have quality assurance management on top of\n the development. And that you have aggreed procedures where\n escalation levels from your support activate the core\n developers in specified times to solve problems. And it\n requires that you have more precise product specifications\n telling what the product can and where it's limits are.\n Otherwise you wouldn't be able to pay the assurance.\n\n There are already distributions of Linux out where you can\n buy commercial support with them. 
They stay behind the\n bleeding edge of development and are offered by companies,\n that have their own development team apart from the internet\n community.\n\n Looking at how we are organized (or better unorganized), all\n this high level commercial support seems far away.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 09:52:07 -0500 ",
"msg_from": "Dan Gowin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Commercial support, was Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Dan Gowin wrote:\n> But you must also remember that all software companies,\n> including Microsoft, warrant only the media the software\n> comes on and take absolutely no responsibility for the\n> use (damage) that may arise from using the software. Take\n> a look at one of your professional software licenses and\n> read it. You will find that according to these licenses\n> that most commercial software is no better than shareware\n> (License wise).\n\nThat's not necessarily true. While it is almost always true for\nconsumer-off-the-shelf software, there is plenty of software that\ndoesn't fit into that category. Quite a few software companies will\nsign support contracts (IBM is one) where they will take responsibility\nfor damage that may arise from the use of the software. This is also\nthe case for many industrial software packages. Granted, PostgreSQL\ndoesn't really fall into any of these categories, but these types of\nwarratees *do* exist.\n\n-- \nNick Bastin - RBB Systems, Inc.\nOut hme0, through the Cat5K, Across the ATM backbone, through the\nfirewall, past the provider, hit the router, down the fiber, off another\nrouter... Nothing but net.\n",
"msg_date": "Mon, 08 Feb 1999 10:06:12 -0500",
"msg_from": "Nick Bastin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commercial support, was Re: [HACKERS] v6.4.3 ?"
}
] |
[
{
"msg_contents": "Hi all\n\nAs all interested in this thread have presumably read the other posts,\nthey will not be repeated here. \n\nSeveral good points both pro and con have been raised.\n\nOne such point is the idea that PostgreSQL is not up to commercial\nstandards, maybe a few versions back this was true, but not now.\nAfter all, commercial standards are not very high, take a look at M$ --\nSQL Server, VFP -- these products have there share of problems, I know,\nI've used them. So you core developers, don't be bashful, you've done a\nfine job and have a great program to offer the world that is as good or\nbetter then any commercial product out there.\n\nAnd as one person pointed out, commercial licenses are very limited.\n\nAnd as for all the details of such support, yes it would all need to be\nworked out, that is what I meant by \"debugged\" in the orig. post.\nJan raised a good list of starting issues.\nMaybe some sort of \"Certification Program\" as well?\n\nBut over all, I have found PostgreSQL to be as good or better then any\ncommercial db products I've used.\n\nAlso, this idea is for the long term, and as such is looking forward to\nthe time when no only will PostgreSQL be even more mature then it is\nnow, but also such support will be in high demand, and will need to be\nalready in place and worked out, not just being thought about.\n\nThere is that saying:\n\"Dig the well BEFORE you need the water.\"\n\nCommercial support is a well that all open-source software of any\npopularity and size are going to need to have dug in the years to come.\nAnd yes, while we are no where near it, it is some thing that we should\nstart walking toward now.\n\nA local paper here (St. Pete Times) just ran a collection of articles\nabout Linux and related software. Over all it was very positive, the only\nreal down side to the article was that it indicated that help was very\nhard to find. This is THE major problem that is still left to over come\nin the open source community. Yes, if you are already plugged into\nthings, help is easy to find, but if you are just getting started, it all\nseems very far away with no help in sight. And if your a company,\nchances are that this will be the main reason not to use open source\n(I've seen exactly that happen).\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\n\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n",
"msg_date": "Mon, 8 Feb 1999 11:00:37 -0500 (EST)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commercial support, things considered"
}
] |
[
{
"msg_contents": "It's time,\n\n as we've seen on the recent question, the time qualification\n code has to be modified again. I think a little extension to\n Vadim's SnapshotData could do the trick. Following is what I\n so far have in mind to support deferred queries later in\n v6.6.\n\n 1. Add a command counter field to the SnapshotData struct.\n The command counter in the snapshot is that used in heap\n scanning instead of the global command counter.\n\n 3. Add QueryID fields to the querytree and rangetable entry\n structures.\n\n 3. Create a new global memory context \"Snapshot\". The\n lifetime of this memory context is one transaction (at\n every transaction end/abort an AllocSetReset() is issued\n on it).\n\n 4. Create a new internal counter, the QueryCounter. The\n counter is also reset between transactions. At parse\n time, the query and all it's initial RTE's get the same,\n new QueryCounter. When the rule system generates new\n queries, only the RTE's coming with the rule (except NEW\n and OLD) get the QueryId of the new query. All others\n remain as they are. For every QueryId an entry in the\n \"Snapshot\" context is created, which holds the number of\n RTE's using this snapshot. RTE's in different queries\n (copied by rules) count multipe.\n\n 5. On ExecutorStart(), the actual QuerySnapshot data is\n copied into the \"Snapshot\" context and held in the array\n of Snapshot pointers. The CommandId of the snapshot it\n set to the current command ID.\n\n 6. The executor uses the saved snapshots on\n heap_beginscan(). The RTE's QueryID tells, which of the\n snapshots to use. This way, every Scan node in a plan can\n have a different snapshot and command ID. So we have\n different visibilities in one query execution.\n\n 7. On ExecutorEnd() the snapshot's reference counts is\n decremented and unused snapshot's thrown away.\n\n In v6.6 we could also implement the suggested named\n snapshots. This only requires that a query Id can be\n associated with a name. The CREATE SNAPSHOT utilities query\n Id is that of the snapshot and during parse this Id is placed\n into the RTE's. Named snapshots are never freed until\n transaction end or FREE SNAPSHOT.\n\n This should be the core functionality that makes deferred\n queries possible at all. And it must solve the problem with\n the portal where inserts/updates inside the fetch loop get\n visible too. Since the portals heapgettup() will use the\n command counter from the CREATE CURSOR instead of the current\n command counter, the portal will not see them. The portal\n will see the database exactly in the state at CREATE CURSOR\n time. But another SELECT issued after an UPDATE in the same\n transaction will, as it is supposed to.\n\n Have I forgotten something?\n\n Vadim, please comment on this.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 17:25:26 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "TIME QUALIFICATION"
},
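As a reading aid only, here is roughly what the first points of Jan's list could look like as data structures: a snapshot that carries its own command counter (plus the reference count proposed in the follow-up discussion), and a range-table entry that records which query's snapshot it should be scanned with. Field names and layout are invented for illustration and are not the actual backend declarations.

#include <stdio.h>

typedef unsigned int TransactionId;
typedef unsigned int CommandId;

typedef struct SnapshotData {
    TransactionId  xmin;        /* all xacts below this are finished */
    TransactionId  xmax;        /* all xacts at/above this are running */
    TransactionId *xip;         /* in-progress xacts between xmin and xmax */
    int            xcnt;
    CommandId      curcid;      /* proposed: scan command counter (point 1) */
    int            refcount;    /* proposed: how many RTEs still use it */
} SnapshotData;

typedef struct RangeTblEntrySketch {
    const char    *relname;
    int            queryId;     /* proposed: which query's snapshot to use (point 2) */
    SnapshotData  *snapshot;    /* NULL would mean "use the current snapshot" */
} RangeTblEntrySketch;

int main(void)
{
    SnapshotData snap = {100, 105, NULL, 0, 3, 1};
    RangeTblEntrySketch rte = {"t1", 0, &snap};

    printf("%s scans with command id %u\n", rte.relname, rte.snapshot->curcid);
    return 0;
}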
{
"msg_contents": "Jan Wieck wrote:\n> \n> 1. Add a command counter field to the SnapshotData struct.\n> The command counter in the snapshot is that used in heap\n> scanning instead of the global command counter.\n\nOk. For the SnapshotNow and SnapshotSelf, used in catalog scans,\nglobal command counter will be used, as now.\n\n> \n> 3. Add QueryID fields to the querytree and rangetable entry\n> structures.\n> \n> 3. Create a new global memory context \"Snapshot\". The\n> lifetime of this memory context is one transaction (at\n> every transaction end/abort an AllocSetReset() is issued\n> on it).\n> \n> 4. Create a new internal counter, the QueryCounter. The\n> counter is also reset between transactions. At parse\n> time, the query and all it's initial RTE's get the same,\n> new QueryCounter. When the rule system generates new\n> queries, only the RTE's coming with the rule (except NEW\n> and OLD) get the QueryId of the new query. All others\n> remain as they are. For every QueryId an entry in the\n> \"Snapshot\" context is created, which holds the number of\n> RTE's using this snapshot. RTE's in different queries\n> (copied by rules) count multipe.\n> \n> 5. On ExecutorStart(), the actual QuerySnapshot data is\n> copied into the \"Snapshot\" context and held in the array\n> of Snapshot pointers. The CommandId of the snapshot it\n> set to the current command ID.\n> \n> 6. The executor uses the saved snapshots on\n> heap_beginscan(). The RTE's QueryID tells, which of the\n> snapshots to use. This way, every Scan node in a plan can\n> have a different snapshot and command ID. So we have\n> different visibilities in one query execution.\n> \n> 7. On ExecutorEnd() the snapshot's reference counts is\n> decremented and unused snapshot's thrown away.\n\nIt seems too complex to me. I again propose to use refcount\ninside snapshot itself to prevent free-ing of snapshots.\nBenefits: no copying in Executor, no QueryId --> Snapshot\nlookup. Just add pointer to RTE. Parser will put NULL there:\nas flag that current snapshot has to be used. ExecutorStart\nand deffered rules will increment refcount of current snapshot.\nDeffered rules will also set snapshot pointers of appropriate\nRTEs (to the current snapshot).\n\n> In v6.6 we could also implement the suggested named\n> snapshots. This only requires that a query Id can be\n> associated with a name. The CREATE SNAPSHOT utilities query\n> Id is that of the snapshot and during parse this Id is placed\n ^^^^^^^^^^^^\nSnapshot names have to be resolved by Executor or just before\nexecution: someday we'll implement stored procedures/functions:\nno parsing before execution...\nWe could add bool to RTE: use name from snapshot pointer\nto get real snapshot. Or something like this.\n\n> into the RTE's. Named snapshots are never freed until\n> transaction end or FREE SNAPSHOT.\n> \n> This should be the core functionality that makes deferred\n> queries possible at all. And it must solve the problem with\n> the portal where inserts/updates inside the fetch loop get\n> visible too. Since the portals heapgettup() will use the\n> command counter from the CREATE CURSOR instead of the current\n> command counter, the portal will not see them. The portal\n> will see the database exactly in the state at CREATE CURSOR\n> time. But another SELECT issued after an UPDATE in the same\n> transaction will, as it is supposed to.\n\nNice.\n\nVadim\n",
"msg_date": "Tue, 09 Feb 1999 13:57:46 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Vadim wrote:\n\n> It seems too complex to me. I again propose to use refcount\n> inside snapshot itself to prevent free-ing of snapshots.\n> Benefits: no copying in Executor, no QueryId --> Snapshot\n> lookup. Just add pointer to RTE. Parser will put NULL there:\n> as flag that current snapshot has to be used. ExecutorStart\n> and deffered rules will increment refcount of current snapshot.\n> Deffered rules will also set snapshot pointers of appropriate\n> RTEs (to the current snapshot).\n\n Yes, the parser could allways put NULL into the RTE's\n snapshot name field (or the name later for named snapshots).\n But it's the rewrite system that has to tell for unnamed\n snapshots, which ones have to be used on which RTE's.\n\n Let's have two simple tables with a rule (and assume in the\n following that snapshot includes scan command Id):\n\n create table t1 (a int4);\n create table t2 (b int4);\n\n create rule r1 as on delete to t1\n do delete from t2 where b = old.a;\n\n We execute the following commands:\n\n begin;\n delete from t1 where a = 5;\n insert into t2 values (5);\n commit;\n\n If 5 is in t2 after commit depends on if the rule is deferred\n or not. If it isn't deferred, 5 should be there, otherwise\n not.\n\n The rule will create a parsetree like this:\n\n delete from t2 where t1.a = 5 and b = t1.a;\n\n So the tree has a rangetable containing t2 and t1 (along with\n some other unused entries). But only the rule system knows,\n that the RTE for t2 came from the rule and must be scanned\n with the visibility of commit time while t1 came from the\n original query and must be scanned with the visibility that\n was when the original delete from t1 was executed (they are\n already deleted, but the rule actions scan must find em).\n\n And there could also be rules fired on t2. This results in\n recursive rewriting and it's not that easy to foresee the\n order in which all these commands will then get executed.\n During recursion there is no difference between a command\n coming from the user and one that is already generated by\n another rule.\n\n The problem here is, that the RTE's in a rule generated query\n resulting from the former command (that fired them) must get\n scanned against the snapshot of the time when the former\n command get's executed. But the RTE's coming from the rule\n action itself must get the snapshot when the rules command is\n executed. Only this way the quals added to the rule from the\n former command will see what the former command saw.\n\n The executor cannot know where all the RTE's where coming\n from. Except we have a QueryId and associate the QueryId with\n a snapshot at the time of execution. And I think we must do\n this lookup, because the order commands are executed will not\n be the same as they got created. The executor only has to\n override the RTE's snapshot if the RTE's snapshot name isn't\n NULL.\n\n>\n> > In v6.6 we could also implement the suggested named\n> > snapshots. This only requires that a query Id can be\n> > associated with a name. The CREATE SNAPSHOT utilities query\n> > Id is that of the snapshot and during parse this Id is placed\n> ^^^^^^^^^^^^\n> Snapshot names have to be resolved by Executor or just before\n> execution: someday we'll implement stored procedures/functions:\n> no parsing before execution...\n> We could add bool to RTE: use name from snapshot pointer\n> to get real snapshot. Or something like this.\n\n That's a point I forgot and I will allready have that problem\n in prepared SPI plans. 
One SPI query could also fire rules\n resulting in multiple plans.\n\n So I must change it. The parser rewrite combo allways put's\n QueryId's starting with 0 into the queries and their RTE's,\n telling which RTE's belong to which queries execution times.\n And these must then be offset when the plans get actually\n added to either the current execution tree list or the\n deferred execution tree list.\n\n And SPI must know about deferred queries, because for\n prepared plans, one must get deferred for every SPI_execp()\n call. It's not the rewriter who's managing the deferred tree\n list. It must be it's caller. So the deferred information is\n part of the querytree.\n\n Better now, thanks Vadim.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 9 Feb 1999 10:49:02 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Vadim wrote:\n> \n> > It seems too complex to me. I again propose to use refcount\n> > inside snapshot itself to prevent free-ing of snapshots.\n> > Benefits: no copying in Executor, no QueryId --> Snapshot\n> > lookup. Just add pointer to RTE. Parser will put NULL there:\n> > as flag that current snapshot has to be used. ExecutorStart\n ^^^^^^^^^^^^^^^^\nNote: \"current\" here is \"when actual execution will start\", not\n\"when query was parsed/rewritten\". ExecutorStart will substitute\nQuerySnapshot for NULL snapshot pointers.\n\n> > and deffered rules will increment refcount of current snapshot.\n ^^^^^^^^^^^^^^^^\nI.e. - QuerySnapshot - as it was when rewriting/execution starts.\n\nSorry, I think that my explanation was bad, hope to fix this -:)\n\n> > Deffered rules will also set snapshot pointers of appropriate\n> > RTEs (to the current snapshot).\n> \n> Yes, the parser could allways put NULL into the RTE's\n> snapshot name field (or the name later for named snapshots).\n> But it's the rewrite system that has to tell for unnamed\n> snapshots, which ones have to be used on which RTE's.\n\nOf course!\n\n> Let's have two simple tables with a rule (and assume in the\n> following that snapshot includes scan command Id):\n> \n> create table t1 (a int4);\n> create table t2 (b int4);\n> \n> create rule r1 as on delete to t1\n> do delete from t2 where b = old.a;\n> \n> We execute the following commands:\n> \n> begin;\n> delete from t1 where a = 5;\n> insert into t2 values (5);\n> commit;\n> \n> If 5 is in t2 after commit depends on if the rule is deferred\n> or not. If it isn't deferred, 5 should be there, otherwise\n> not.\n> \n> The rule will create a parsetree like this:\n> \n> delete from t2 where t1.a = 5 and b = t1.a;\n> \n> So the tree has a rangetable containing t2 and t1 (along with\n> some other unused entries). But only the rule system knows,\n> that the RTE for t2 came from the rule and must be scanned\n> with the visibility of commit time while t1 came from the\n> original query and must be scanned with the visibility that\n> was when the original delete from t1 was executed (they are\n> already deleted, but the rule actions scan must find em).\n\nAnd so for deffered rules rewrite system will:\n\n1. set t2' RTE snapshot pointer to NULL - this will guarantee\n that snapshot of execution time (commit or set immediate time)\n will be used;\n2. set t1' RTE snapshot pointer to current QuerySnapshot \n (and increment its refcount).\n\n> And there could also be rules fired on t2. This results in\n> recursive rewriting and it's not that easy to foresee the\n> order in which all these commands will then get executed.\n> During recursion there is no difference between a command\n> coming from the user and one that is already generated by\n> another rule.\n> \n> The problem here is, that the RTE's in a rule generated query\n> resulting from the former command (that fired them) must get\n> scanned against the snapshot of the time when the former\n> command get's executed. But the RTE's coming from the rule\n ^^^^^^^^^^^^^^^^^^^^^^^^\nSo - you use QuerySnapshot as it was in this time.\n\n> action itself must get the snapshot when the rules command is\n\nSet RTE' snapshot pointer to NULL.\n\n> executed. Only this way the quals added to the rule from the\n> former command will see what the former command saw.\n> \n> The executor cannot know where all the RTE's where coming\n> from. 
Except we have a QueryId and associate the QueryId with\n> a snapshot at the time of execution. And I think we must do\n> this lookup, because the order commands are executed will not\n> be the same as they got created. The executor only has to\n> override the RTE's snapshot if the RTE's snapshot name isn't\n> NULL.\n\n+ set NULL snapshot pointers to QuerySnapshot.\n\nVadim\n",
"msg_date": "Tue, 09 Feb 1999 19:08:48 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Vadim wrote:\n\n> > > inside snapshot itself to prevent free-ing of snapshots.\n> > > Benefits: no copying in Executor, no QueryId --> Snapshot\n> > > lookup. Just add pointer to RTE. Parser will put NULL there:\n> > > as flag that current snapshot has to be used. ExecutorStart\n> ^^^^^^^^^^^^^^^^\n> Note: \"current\" here is \"when actual execution will start\", not\n> \"when query was parsed/rewritten\". ExecutorStart will substitute\n> QuerySnapshot for NULL snapshot pointers.\n\n Yepp - snapshots are allways built just before execution\n starts.\n\n The reason why I need the QueryId and it's lookup is that the\n time of ExecutorStart() for one query hasn't anything to do\n with where it was coming from or when it has been\n parsed/rewritten. Due to the rewriting, RTE's in different\n queries have relationships. Only the rewrite system knows\n them, and the only place where this information could be\n stored is the RTE. All RTE's that are related to each other\n across queries must use the same snapshot when they get\n scanned.\n\n> And so for deffered rules rewrite system will:\n>\n> 1. set t2' RTE snapshot pointer to NULL - this will guarantee\n> that snapshot of execution time (commit or set immediate time)\n> will be used;\n> 2. set t1' RTE snapshot pointer to current QuerySnapshot\n> (and increment its refcount).\n\n At parse/rewrite time there is no actual snapshot. And for\n SPI prepared plan, the snapshot to use will be different for\n each execution. The RTE cannot hold the snapshot itself. It\n could only tell, which of all the snapshots created during a\n transaction to use for it.\n\n>\n> > And there could also be rules fired on t2. This results in\n> > recursive rewriting and it's not that easy to foresee the\n> > order in which all these commands will then get executed.\n> > During recursion there is no difference between a command\n> > coming from the user and one that is already generated by\n> > another rule.\n> >\n> > The problem here is, that the RTE's in a rule generated query\n> > resulting from the former command (that fired them) must get\n> > scanned against the snapshot of the time when the former\n> > command get's executed. But the RTE's coming from the rule\n> ^^^^^^^^^^^^^^^^^^^^^^^^\n> So - you use QuerySnapshot as it was in this time.\n>\n> > action itself must get the snapshot when the rules command is\n>\n> Set RTE' snapshot pointer to NULL.\n>\n> > executed. Only this way the quals added to the rule from the\n> > former command will see what the former command saw.\n> >\n> > The executor cannot know where all the RTE's where coming\n> > from. Except we have a QueryId and associate the QueryId with\n> > a snapshot at the time of execution. And I think we must do\n> > this lookup, because the order commands are executed will not\n> > be the same as they got created. The executor only has to\n> > override the RTE's snapshot if the RTE's snapshot name isn't\n> > NULL.\n>\n> + set NULL snapshot pointers to QuerySnapshot.\n\n That way, the executor would have to set all the snapshot\n pointers in related RTE's of other queries (not yet executed)\n too so they point to the same snapshot. I can only think\n about an ordered set to link all the related RTE's to each\n other. 
That would be some kind of ordered set over the\n related RTE's, but I would get into deep trouble when copying\n rangetables during rewrite or SPI_saveplan() to keep these\n set's alive.\n\n Maybe I'm not able to explain exactly enough what I have\n vaguely in mind how it could work. But after you've helped\n not to forget prepared plans I think I have all the odds and\n ends to build it.\n\n I'll hack around a little. Then let's discuss the final\n details while having a prototype to look at.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 9 Feb 1999 13:58:27 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> > And so for deffered rules rewrite system will:\n> >\n> > 1. set t2' RTE snapshot pointer to NULL - this will guarantee\n> > that snapshot of execution time (commit or set immediate time)\n> > will be used;\n> > 2. set t1' RTE snapshot pointer to current QuerySnapshot\n> > (and increment its refcount).\n> \n> At parse/rewrite time there is no actual snapshot. And for\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nOh, you're right. This is true for prepared plans.\n\n> SPI prepared plan, the snapshot to use will be different for\n> each execution. The RTE cannot hold the snapshot itself. It\n> could only tell, which of all the snapshots created during a\n> transaction to use for it.\n> \n...\n> \n> Maybe I'm not able to explain exactly enough what I have\n> vaguely in mind how it could work. But after you've helped\n> not to forget prepared plans I think I have all the odds and\n> ends to build it.\n> \n> I'll hack around a little. Then let's discuss the final\n> details while having a prototype to look at.\n\nOk. If you feel that QueryIds is easier way to go then do it.\nIn any case some preprocessing of plan tree just before execution\nwill be required.\nBTW, why not use CommandIds ?\n\nVadim\n",
"msg_date": "Wed, 10 Feb 1999 09:28:11 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Vadim wrote:\n\n> Ok. If you feel that QueryIds is easier way to go then do it.\n> In any case some preprocessing of plan tree just before execution\n> will be required.\n> BTW, why not use CommandIds ?\n\n CommandId is the order in which plans get executed and\n snapshots created. But it isn't the order in which the plans\n got created. There could easily hundreds of CommandId's been\n created until a deferred query executes. Some of it's RTE's\n must get the QuerySnapshot and scanCommandId of an earlier\n executed plan. But at the time it will be saved for deferred\n execution, I cannot foresee the CommandId it's parents will\n get.\n\n And the case of cascaded rules? Initial query fires rule\n action 1 which in turn fires rule action 2. Now initial query\n executes and fires trigger which executes it's own commands.\n Thus, the parent of action 2 will not get the second\n CommandId of the transaction.\n\n A plan get's associated with a CommandId at the time it's\n execution starts. So it's useless to tell the relationship\n between RTE's.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 10 Feb 1999 10:26:34 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Just a little question:\n\n>\n> Vadim wrote:\n>\n> > Ok. If you feel that QueryIds is easier way to go then do it.\n> > In any case some preprocessing of plan tree just before execution\n> > will be required.\n> > BTW, why not use CommandIds ?\n>\n\n For that preprocessing required to associate RTE's with\n snapshots I need the exact output of the rewrite system at\n ExecutorStart() time, so I'm able to find all the RTE's.\n\n So far I've found that the planner NULL's out subselects in\n _make_subplan(). A first (very little) test showed, that it\n seems to work without doing so too. Who coded that and why\n is it done there?\n\n Does anyone know about other places in the code mucking with\n the parsetrees after I'm done with them in the rewrite\n system?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 11 Feb 1999 11:03:16 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "> For that preprocessing required to associate RTE's with\n> snapshots I need the exact output of the rewrite system at\n> ExecutorStart() time, so I'm able to find all the RTE's.\n> \n> So far I've found that the planner NULL's out subselects in\n> _make_subplan(). A first (very little) test showed, that it\n> seems to work without doing so too. Who coded that and why\n> is it done there?\n> \n> Does anyone know about other places in the code mucking with\n> the parsetrees after I'm done with them in the rewrite\n> system?\n\nVadim originally wrote the file. Is he splitting up the subqueries?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 11 Feb 1999 10:44:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> For that preprocessing required to associate RTE's with\n> snapshots I need the exact output of the rewrite system at\n> ExecutorStart() time, so I'm able to find all the RTE's.\n> \n> So far I've found that the planner NULL's out subselects in\n> _make_subplan(). A first (very little) test showed, that it\n> seems to work without doing so too. Who coded that and why\n> is it done there?\n\nDo you mean subselect.c:147 line:\n\n\tslink->subselect = NULL; /* cool ?! */\n\n? \n\nAs I remember this is done to don't copy subquery' Query\nnode in copyObject when copying plan. Actually, using \nQuery node in Executor is annoying me for ~ 1 year:\nExecutor need not in entire Query node, only in range table!\nUsing Query in Executor is bad for prepared plans: we do\ncopying and storing of mostly useless Query node...\nThis would be nice to have some new TopPlan node with\nupmost plan, range table and some other things (I had to\nadd some fields to Plan node itself for subqueries while\nthese fields should be only in topmost plan node) and get rid\nof using Query in Executor. Query is result of parsing,\nplan is result of planning and source for execution.\n\nIf you don't want to implement TopPlan node then you could\nallocate new SubLink node in subselect.c:_make_subplan()\nto be used in node->sublink...\n\nVadim\n",
"msg_date": "Fri, 12 Feb 1999 00:04:33 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "> Do you mean subselect.c:147 line:\n>\n> slink->subselect = NULL; /* cool ?! */\n>\n> ?\n\n Exactly that!\n\n>\n> As I remember this is done to don't copy subquery' Query\n> node in copyObject when copying plan. Actually, using\n> Query node in Executor is annoying me for ~ 1 year:\n> Executor need not in entire Query node, only in range table!\n> Using Query in Executor is bad for prepared plans: we do\n> copying and storing of mostly useless Query node...\n> This would be nice to have some new TopPlan node with\n> upmost plan, range table and some other things (I had to\n> add some fields to Plan node itself for subqueries while\n> these fields should be only in topmost plan node) and get rid\n> of using Query in Executor. Query is result of parsing,\n> plan is result of planning and source for execution.\n>\n> If you don't want to implement TopPlan node then you could\n> allocate new SubLink node in subselect.c:_make_subplan()\n> to be used in node->sublink...\n\n Ah - I see.\n\n So I assume the sublink->subselect, that's copied into the\n plan, is totally obsolete too at that point. The subplan has\n it's own rangetable, which is the same as the (not used) one\n in the subselect.\n\n I think I should tidy up that all to finally pass only plan\n into executor before going ahead with the deferred query\n stuff. It doesn't make sense to spend much efford now to\n prepare the system for deferred queries. It depends too much\n on where the RTE's are and how we organize them.\n\n New TopPlan could be passed down the executor instead of\n querytree. It might hold a List of rangetables. Plan and\n SubPlan then have an index telling which nth() rangetable of\n TopPlan to use for it.\n\n This would make execution preprocessing for snapshot->RTE\n assignment very easy because there's only one place to find\n ALL RTE's (no need to traverse down a tree). And it would\n substantial lower the amount of data to copy in SPI, since it\n must not save the Querytree at all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 11 Feb 1999 19:55:15 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> So I assume the sublink->subselect, that's copied into the\n> plan, is totally obsolete too at that point. The subplan has\n> it's own rangetable, which is the same as the (not used) one\n> in the subselect.\n\nExactly.\n\n> I think I should tidy up that all to finally pass only plan\n> into executor before going ahead with the deferred query\n> stuff. It doesn't make sense to spend much efford now to\n> prepare the system for deferred queries. It depends too much\n> on where the RTE's are and how we organize them.\n> \n> New TopPlan could be passed down the executor instead of\n> querytree. It might hold a List of rangetables. Plan and\n> SubPlan then have an index telling which nth() rangetable of\n> TopPlan to use for it.\n> \n> This would make execution preprocessing for snapshot->RTE\n> assignment very easy because there's only one place to find\n> ALL RTE's (no need to traverse down a tree). And it would\n> substantial lower the amount of data to copy in SPI, since it\n> must not save the Querytree at all.\n\nNice!\n\nVadim\n",
"msg_date": "Fri, 12 Feb 1999 09:26:00 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TIME QUALIFICATION"
}
] |
[
{
"msg_contents": ">Nick Bastin - RBB Systems, Inc.\n>Out hme0, through the Cat5K, Across the ATM backbone, through the\n>firewall, past the provider, hit the router, down the fiber, off another\n>router... Nothing but net.\n>Nick,\n>\n>That's not necessarily true. While it is almost always true for\n>consumer-off-the-shelf software, there is plenty of software that\n>doesn't fit into that category. Quite a few software companies will\n>sign support contracts (IBM is one) where they will take responsibility\n>for damage that may arise from the use of the software. This is also\n>the case for many industrial software packages. Granted, PostgreSQL\n>doesn't really fall into any of these categories, but these types of\n>warratees *do* exist.\n>\nNick, \n\tI didn't say that those contracts don't exist. But\nthey are usually in conjunction with some specific application.\nFor example, Oracle's general license on there database\nproduct's is written in such a way that they cannot be\nheld accountable for anything a customer may do with there\ndatabase. The reason's are simple, Oracle couldn't possibly\ncome up with all of the possible scenario's that their\ngeneral purpose software could fail in. \n\tBut, on the flip side, Oracle has an Iron clad warranty \non the use of \"Oracle Financials\". And you can be assured\nwhat a DBA can and cannot do are very strictly defined within\nthat contract. And if you violate that contract in any minor\nway and \"Oracle Financials\" has any problem, the lawyer's\nwill use this as a way out.\n\tAs for comparing commercial database software to PostgreSQL\nreliability. I had to rebuild a Oracle 7.0 database engine\ntwo weeks ago because it was corrupt. And last week I spent\ntwo days exporting/importing a Oracle 7.3 two tablespaces \nbecause of some corruption of some type. And I did this\nafter Oracle's tech staff suggested it. \n\tWhat does all of this mean. Well, Oracle (Commercial)\ndatabase packages have some of these same problems that plague\nPostgreSQL. The only difference is they are less frequent and\nare generally tied to the development cycle. I tend to think\nof this as a evolution cycle and Postgres's cycle is on \nsteroids.\n\n\nMy two cents.\nD.\n\n\n",
"msg_date": "Mon, 8 Feb 1999 11:26:52 -0500 ",
"msg_from": "Dan Gowin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Commercial support, was Re: [HACKERS] v6.4.3 ?"
},
{
"msg_contents": "Dan Gowin <[email protected]> writes:\n> \tWhat does all of this mean. Well, Oracle (Commercial)\n> database packages have some of these same problems that plague\n> PostgreSQL. The only difference is they are less frequent and\n> are generally tied to the development cycle. I tend to think\n> of this as a evolution cycle and Postgres's cycle is on \n> steroids.\n\nThat it is.\n\nI think most people see rapid release cycles as part of the success\nstory for open-source software, so I'm not eager to try to slow down\nthe cycle. But we do need to consider that many users of Postgres\nwill be more concerned about stability and reliability than having\nthe latest whizzy features. Making a commitment to maintain the\nprior major release as well as the current one seems like a good\nanswer.\n\nI see a number of different specific issues that are getting lumped\ntogether under the notion of \"commercial support\":\n\n1. Personal attention to a particular installation, guaranteed\nresponse to a problem, etc. These are things that can and should\nbe handled by a distributed network of support consultants as Terry\nwas suggesting.\n\n2. Bug fixing and feature additions in the supported product. (This\nis different from \"support\" in the sense of correcting an admin's\nmistake, figuring out a workaround, or otherwise supporting a specific\ninstallation. I'm thinking about changes that require developer-grade\nunderstanding of the code.) I don't really think we need to do\nanything differently here, other than have a higher level of effort on\nback-patching prior releases. Like most open-source projects we are\nfar *better* than commercial practice in this area.\n\n3. Commercial guarantees/warrantees/indemnifications, ie, someone pays\nyou money if the thing doesn't work for you. I don't think this is\ngoing to happen, at least not for the core Postgres development. Who\nin their right mind would warrantee something they didn't have 100%\ncontrol over? If there's really a demand for it then probably\ncompanies will offer packages based on back-rev Postgres releases that\nthey have tested like mad and hired a few programmers specifically to\nfix bugs in. (They'd probably want to put it on a back-rev OS,\ntoo...)\n\nWe should try to encourage qualified people to become support\nconsultants to deal with point #1. I don't think this group can\nor should do anything about #3 though --- that looks like a splinter\nproject to me. I don't mind someone else taking it up, but it would\nbe a distraction from the core development project for us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 11:56:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commercial support, was Re: [HACKERS] v6.4.3 ? "
}
] |
[
{
"msg_contents": "I am fixing p_ordering(now called path_order) to properly display in\noutNode/readNode and friends.\n\nThis structure is never output during debug runs. Without the display,\nit is hard to see if the optimizer is doing what it should.\n\nMy suspicion is that RelOptInfo is retaining many duplicate pathlist\nPath's that have duplicate/useless PathOrder/key combinations, causing\noptimization of large queries to be very slow.\n\nI'll keep going.\n\nIs it me, or have things gotten very busy in the last week on the\nhackers list, with tons of new features coming in?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 13:23:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer problems"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > Is it me, or have things gotten very busy in the last week on the\n> > hackers list, with tons of new features coming in?\n> \n> As usual the number of FEATURES submitted is reverse\n> proportional to the time left until FEATURE FREEZE :-)\n> \n> I think the next time we should avoid this by discussing\n> about to start BETA on [CORE] instead of [HACKERS] (just a\n> joke - don't panic folks).\n\nWhat? We should announce beta from the housetops just after every\nrelease. Who knows how many new features we could get. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 14:31:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer problems"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Is it me, or have things gotten very busy in the last week on the\n> hackers list, with tons of new features coming in?\n\n As usual the number of FEATURES submitted is reverse\n proportional to the time left until FEATURE FREEZE :-)\n\n I think the next time we should avoid this by discussing\n about to start BETA on [CORE] instead of [HACKERS] (just a\n joke - don't panic folks).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 8 Feb 1999 20:32:13 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer problems"
}
] |
[
{
"msg_contents": "I think I have solved the optimizer problem. It appears in samekeys(). \nCan someone check that function, and see if you come up with the same\nfix I do (without knowing my fix)?\n\nA 9-table join that used to run for minutes and fail now completes in\nseconds! I want to commit this, but want confirmation from someone else\nthat my fix is correct.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 16:28:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "samekeys"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think I have solved the optimizer problem. It appears in samekeys(). \n> Can someone check that function, and see if you come up with the same\n> fix I do (without knowing my fix)?\n\n\"member\" -> \"equal\", perhaps?\n\nI looked at that before and thought it was a little strange, but I\ndidn't and still don't understand the data structures being compared.\n\nI also wondered whether the two lists ought not be required to be\nexactly the same length, rather than allowing keys2 to be longer.\n\n\n> A 9-table join that used to run for minutes and fail now completes in\n> seconds!\n\nPick some smaller joins and see whether the optimizer still finds\nthe same answer...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 19:02:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] samekeys "
}
] |
[
{
"msg_contents": "Just for PgSQL's development group think about....\n\nI made a mistake typing a query that generates a strange result (Very strange). \n\nThe query: select text('12345678'::float8);\n\nIt returns a date in datetime format !!!!!!\n\nIf you use: select ('12345678'::float8)::text; everything runs well.\n\n\n",
"msg_date": "Mon, 8 Feb 1999 20:21:11 -0200",
"msg_from": "\"Ricardo J.C.Coelho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "A mistake generates strange result"
},
{
"msg_contents": "\nHow do I unsubscribe?\n\nSorry to post this to the entire list, but it your messages don't\nhave unsubscription info at the bottom, and the person who subscribed\n([email protected]) isn't a valid user here (I am the postmaster).\n\n-Joel Henderson\n [email protected]\n\n\n\n",
"msg_date": "Mon, 8 Feb 1999 17:57:03 -0800 (PST)",
"msg_from": "Joel Parker Henderson <[email protected]>",
"msg_from_op": false,
"msg_subject": "How do I unsubscribe?"
},
{
"msg_contents": "I have a parallel inheritance going on,\nso I was wondering if there was a way\nto re-name a derived column? This would\nmake my design clearer.\n-----------------\n\nCREATE TABLE B\n( NAME VARCHAR(10) );\n\nCREATE TABLE C\n( ... ) INHERITS(B);\n\nCREATE TABLE X\n( \n A VARCHAR(10),\n B VARCHAR(10),\n CONSTRAINT FOREIGN KEY (B) REFERENCES B(OID)\n);\n\nCREATE TABLE Y \n( B AS C, /* Syntatic Sugar */\n D VARCHAR(10),\n CONSTRAINT FOREIGN KEY (C) REFERENCES C(OID)\n) INHERITS(X)\n\n\nHere, I've added the syntax \"AS\" to show that\ncolumn A in table X, is called B in the \nderived table Y.\n\nThank you for your thoughts.\n\n:) Clark Evans\n",
"msg_date": "Tue, 09 Feb 1999 04:40:06 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Using As with Inheritance?"
},
{
"msg_contents": "\n\tHi !\n\n\"Ricardo J.C.Coelho\" <[email protected]> writes:\n> Just for PgSQL's development group think about....\n> I made a mistake typing a query that generates a strange result\n> (Very strange). \n\n> The query: select text('12345678'::float8);\n> It returns a date in datetime format !!!!!!\n\n\tI didn't found any function of name \"text\" that converts\nfloat8 to text. So I think Postgres made an implicit cast of the data\nto datatime. So: String->Float8->DateTime->Text. Stranger : I didn't\nfound any function to cinvert float to text !\n\n> If you use: select ('12345678'::float8)::text; everything runs well.\n\n\tHere, you made an explicit cast, without the use of any\nfunction. So your data is casted well.\n\n\tHope this helps !\n",
"msg_date": "09 Feb 1999 09:55:23 +0100",
"msg_from": "[email protected] (=?ISO-8859-1?Q?St=E9phane?= Dupille)",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] A mistake generates strange result"
},
{
"msg_contents": "Unfortunately, that is true, at least for Postgres 6.4.0:\ntemplate1=> select text('12345678'::float8);\n\ntext \n-----------------------------\nTue May 23 00:21:18 2000 EEST\n(1 row)\n\nPlease, guys, take care of this small bug:-)\nAleksey\n\nOn 9 Feb 1999, [ISO-8859-1] St�phane Dupille wrote:\n\n> \n> \tHi !\n> \n> \"Ricardo J.C.Coelho\" <[email protected]> writes:\n> > Just for PgSQL's development group think about....\n> > I made a mistake typing a query that generates a strange result\n> > (Very strange). \n> \n> > The query: select text('12345678'::float8);\n> > It returns a date in datetime format !!!!!!\n> \n> \tI didn't found any function of name \"text\" that converts\n> float8 to text. So I think Postgres made an implicit cast of the data\n> to datatime. So: String->Float8->DateTime->Text. Stranger : I didn't\n> found any function to cinvert float to text !\n> \n> > If you use: select ('12345678'::float8)::text; everything runs well.\n> \n> \tHere, you made an explicit cast, without the use of any\n> function. So your data is casted well.\n> \n> \tHope this helps !\n> \n> \n\n",
"msg_date": "Wed, 10 Feb 1999 15:31:01 +0200 (IST)",
"msg_from": "Postgres DBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] A mistake generates strange result"
},
{
"msg_contents": ">>>> The query: select text('12345678'::float8);\n>>>> It returns a date in datetime format !!!!!!\n\nYup, I see it here also with 6.4.2.\n\nThe current development sources seem OK however:\n\nregression=> select text('12345678'::float8);\n text\n--------\n12345678\n(1 row)\n\nSo it should be fixed in 6.5. (Thomas, could this be back-patched\ninto 6.4.3?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Feb 1999 10:11:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] A mistake generates strange result "
},
{
"msg_contents": "> \n> Unfortunately, that is true, at least for Postgres 6.4.0:\n> template1=> select text('12345678'::float8);\n> \n> text \n> -----------------------------\n> Tue May 23 00:21:18 2000 EEST\n> (1 row)\n> \n> Please, guys, take care of this small bug:-)\n> Aleksey\n\nWorks in the current tree:\n\ntest=> select text('12345678'::float8);\n text\n--------\n12345678\n(1 row)\n\ntest=>\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 10 Feb 1999 10:25:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] A mistake generates strange result"
},
{
"msg_contents": "> Unfortunately, that is true, at least for Postgres 6.4.0:\n> template1=> select text('12345678'::float8);\n> -----------------------------\n> Tue May 23 00:21:18 2000 EEST\n> Please, guys, take care of this small bug:-)\n\ntgl=> select text('12345678'::float8);\n text\n--------\n12345678\n(1 row)\n\nAlready done. From the cvs log:\n\nrevision 1.84\ndate: 1998/11/17 14:36:51; author: thomas; state: Exp; lines: +34 -15\nAdd text<->float8 and text<->float4 conversion functions.\nThis will fix the problem reported by Jose' Soares\n when trying to cast a float to text.\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 15:26:49 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] A mistake generates strange result"
},
{
"msg_contents": "> So it should be fixed in 6.5. (Thomas, could this be back-patched\n> into 6.4.3?)\n\nNot likely, since it defined a couple of new procedures in the system\ntables to do explicit string to float8 conversions. \n\nThe only v6.4.3-compatible patch we can make is to remove the \"binary\nequivalence\" between datetime and float8, which I had put in to allow\nmore date arithmetic (a lazy solution, but it seemed a good idea at the\ntime :/ I have a patch to do that, but have not applied it to either\ntree yet.\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 15:38:15 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] A mistake generates strange result"
},
{
"msg_contents": "St�phane Dupille ha scritto:\n\n> Hi !\n>\n> \"Ricardo J.C.Coelho\" <[email protected]> writes:\n> > Just for PgSQL's development group think about....\n> > I made a mistake typing a query that generates a strange result\n> > (Very strange).\n>\n> > The query: select text('12345678'::float8);\n> > It returns a date in datetime format !!!!!!\n>\n> I didn't found any function of name \"text\" that converts\n> float8 to text. So I think Postgres made an implicit cast of the data\n> to datatime. So: String->Float8->DateTime->Text. Stranger : I didn't\n> found any function to cinvert float to text !\n>\n> > If you use: select ('12345678'::float8)::text; everything runs well.\n>\n> Here, you made an explicit cast, without the use of any\n> function. So your data is casted well.\n>\n> Hope this helps !\n\nThis seems like a bug, because there's no a text(float8) built-in function.\n\nhygea=> select text('12345678'::float8);\ntext\n----------------------\n2000-05-22 23:21:18+02\n\nbut if you create the function like:\n\ncreate function text(float8) returns text as\n'\nbegin\n return $1;\nend;\n' language 'plpgsql';\nCREATE\n\nselect text('12345678.2'::float8);\n text\n----------\n12345678.2\n(1 row)\n\n\n - Jose' -\n\nAnd behold, I tell you these things that ye may learn wisdom; that ye may\nlearn that when ye are in the service of your fellow beings ye are only\nin the service of your God. - Mosiah 2:17 -\n\n\n\n \nStéphane Dupille ha scritto:\n Hi !\n\"Ricardo J.C.Coelho\" <[email protected]> writes:\n> Just for PgSQL's development group think about....\n> I made a mistake typing a query that generates a strange result\n> (Very strange).\n> The query: select text('12345678'::float8);\n> It returns a date in datetime format !!!!!!\n I didn't found any function\nof name \"text\" that converts\nfloat8 to text. So I think Postgres made an implicit cast of the data\nto datatime. So: String->Float8->DateTime->Text. Stranger : I didn't\nfound any function to cinvert float to text !\n> If you use: select ('12345678'::float8)::text; \neverything runs well.\n Here, you made an explicit\ncast, without the use of any\nfunction. So your data is casted well.\n Hope this helps !\nThis seems like a bug, because there's no a text(float8) built-in\nfunction.\nhygea=> select text('12345678'::float8);\ntext\n----------------------\n2000-05-22 23:21:18+02\nbut if you create the function like:\ncreate function text(float8) returns text as\n'\nbegin\n return $1;\nend;\n' language 'plpgsql';\nCREATE\nselect text('12345678.2'::float8);\n text\n----------\n12345678.2\n(1 row)\n \n \n- Jose' -\nAnd behold, I tell you these things that ye may learn wisdom; that\nye may\nlearn that when ye are in the service of your fellow beings ye\nare only\nin the service of your God. \n- Mosiah 2:17 -",
"msg_date": "Fri, 12 Feb 1999 15:06:50 +0100",
"msg_from": "\"jose' soares\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] A mistake generates strange result"
}
] |
[
{
"msg_contents": "HAVING appears not to recognise alias names for columns.\n\nExamples:\n** Using aggregate in HAVING clause works **\nbray=> select p.brand,avg(s.free) as f from product as p, stock as s where \np.id = s.product group by p.brand having avg(s.free) > 100;\nbrand | f\n------------+----\nAVOCA | 660\nBULK | 122\n...\n |5481\n(14 rows)\n\n** Using various kinds of column names in HAVING clause does not work **\nbray=> select p.brand,avg(s.free) as f from product as p, stock as s where \np.id = s.product group by p.brand having f > 100;\nERROR: attribute 'f' not found\nbray=> select p.brand,avg(s.free) as f from product as p, stock as s where \np.id = s.product group by p.brand having 2 > 100;\nbrand|f\n-----+-\n(0 rows)\n\nbray=> select p.brand,avg(s.free) from product as p, stock as s where p.id = \ns.product group by p.brand having 2 > 100;\nbrand|avg\n-----+---\n(0 rows)\n\nIs this a known bug?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The heavens declare the glory of God; and the \n firmament showeth his handiwork.\" Psalms 19:1\n\n\n",
"msg_date": "Tue, 09 Feb 1999 00:02:11 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "HAVING bug in 6.4.2"
}
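A minimal sketch of workarounds for the alias problem Oliver reports above, reusing his table and column names. The first form simply repeats the aggregate expression in HAVING, which his own first example shows working; the second materialises the grouped result with SELECT ... INTO and filters it in a second query. The scratch table name brand_avg is invented here, and it is an assumption (not verified against 6.4.2) that SELECT ... INTO combined with GROUP BY is accepted by that release.

    -- repeat the aggregate expression instead of the alias
    select p.brand, avg(s.free) as f
    from product as p, stock as s
    where p.id = s.product
    group by p.brand
    having avg(s.free) > 100;

    -- or materialise the grouped result and filter it afterwards
    -- (brand_avg is a hypothetical scratch table)
    select p.brand, avg(s.free) as f
    into table brand_avg
    from product as p, stock as s
    where p.id = s.product
    group by p.brand;

    select brand, f from brand_avg where f > 100;
    drop table brand_avg;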
] |
[
{
"msg_contents": "Seen with current sources and with 6.4.2:\n\nWhen datestyle = 'Postgres,European', the datetime parser will accept\ndates written in either US or Euro order:\n\nregression=> set datestyle = 'Postgres,European';\nSET VARIABLE\nregression=> select 'Wed 06 Jan 16:10:00 1999 EST'::datetime;\n?column?\n----------------------------\nWed 06 Jan 16:10:00 1999 EST\n(1 row)\n\nregression=> select 'Wed Jan 06 16:10:00 1999 EST'::datetime;\n?column?\n----------------------------\nWed 06 Jan 16:10:00 1999 EST\n(1 row)\n\nBut when datestyle = 'Postgres,US' it won't:\n\nregression=> set datestyle = 'Postgres,US';\nSET VARIABLE\nregression=> select 'Wed Jan 06 16:10:00 1999 EST'::datetime;\n?column?\n----------------------------\nWed Jan 06 16:10:00 1999 EST\n(1 row)\nregression=> select 'Wed 06 Jan 16:10:00 1999 EST'::datetime;\nERROR: Bad datetime external representation 'Wed 06 Jan 16:10:00 1999 EST'\n\nA bug, no??\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Feb 1999 20:34:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Datetime input-parsing shortcoming"
},
{
"msg_contents": "> Seen with current sources and with 6.4.2:\n> When datestyle = 'Postgres,European', the datetime parser will accept\n> dates written in either US or Euro order:\n> But when datestyle = 'Postgres,US' it won't:\n> regression=> select 'Wed 06 Jan 16:10:00 1999 EST'::datetime;\n> ERROR: Bad datetime external representation\n> A bug, no??\n\nSi, though I would prefer to think of it as a \"feature omission\" since\nit would have accepted a date sometimes (if the day of month was greater\nthan 12) :)\n\npostgres=> show DateStyle;\nNOTICE: DateStyle is Postgres with US (NonEuropean) conventions\npostgres=> select 'Wed 06 Jan 16:10:00 1999 EST'::datetime;\n----------------------------\nWed Jan 06 21:10:00 1999 GMT\n(1 row)\n\nWill apply to the development tree soon...\n\n - Tom",
"msg_date": "Tue, 09 Feb 1999 15:06:49 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Datetime input-parsing shortcoming"
}
] |
[
{
"msg_contents": "I have found the problem with joining large tables in the optimizer.\n\nThere were too many Path's kept by RelOptInfo, even though many of the\npaths had identical keys. Because the cheapest was always chosen, this\ndid not affect the final Plan, but did cause lots of wasted time during\noptimizer processing.\n\nThe code in samekeys originally said:\n\n if (!member(lfirst(key1)), lfirst(key2)))\n\nwhile member() actually wanted a Node and a List. The code below is the\nsingle change that made all the difference, though there is some\nadditional code I added to clean up things and add debugging. I am not\nfinished with checked out the optimizer yet, but this should do it. I\ncan supply a 6.4 and 6.3 patch.\n\nThis will greatly speed up large join processing, and I am sorry I did\nnot do this fix earlier. I could post some impressive speed\nimprovements, but I will ask others who have real data tables to post\nsome timings. Should be impressive.\n\n---------------------------------------------------------------------------\n\n\nbool\nsamekeys(List *keys1, List *keys2)\n{\n List *key1,\n *key2;\n\n for (key1 = keys1, key2 = keys2; key1 != NIL && key2 != NIL;\n key1 = lnext(key1), key2 = lnext(key2))\n if (!member(lfirst((List *)lfirst(key1)), lfirst(key2)))\n return false;\n\n \n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Feb 1999 22:56:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer fix"
}
] |
[
{
"msg_contents": "I have modified samekeys() again. It is now:\n\n for (key1 = keys1, key2 = keys2;\n key1 != NIL && key2 != NIL;\n key1 = lnext(key1), key2 = lnext(key2))\n {\n for (key1a = lfirst(key1), key2a = lfirst(key2);\n key1a != NIL && key2a != NIL;\n key1a = lnext(key1a), key2a = lnext(key2a))\n if (!equal(lfirst(key1a), lfirst(key2a)))\n return false;\n if (key1a != NIL)\n return false;\n }\n\nThis basically says that key1, which is the old key, has to match key2\nfor the length of key1. If key2 has extra keys after that, that is\nfine. We will still consider the keys equal. The old code obviously\nwas broken and badly thought out. One side benefit of this is that\nunorder joins that are the same cost as ordered joins are discarded, in\nthe hopes the ordered joins can be used later on.\n\nOTIMIZER_DEBUG now shows:\n\t\n\t(9 8 ): size=1 width=8\n\t path list:\n\t MergeJoin size=1 cost=0.000000\n\t clauses=(x7.y = x8.y)\n\t sortouter=1 sortinner=1\n\t SeqScan(8) size=0 cost=0.000000\n\t SeqScan(9) size=0 cost=0.000000\n\t MergeJoin size=1 cost=0.000000\n\t clauses=(x7.y = x8.y)\n\t sortouter=1 sortinner=1\n\t SeqScan(9) size=0 cost=0.000000\n\t SeqScan(8) size=0 cost=0.000000\n\t cheapest path:\n\t MergeJoin size=1 cost=0.000000\n\t clauses=(x7.y = x8.y)\n\t sortouter=1 sortinner=1\n\t SeqScan(8) size=0 cost=0.000000\n\t SeqScan(9) size=0 cost=0.000000\n\nWhich is correct. The old code had many more plans for this simple join,\nperhaps 12, causing large optimizer growth with many tables.\n\nWe now have an OPTDUP_DEBUG option to show key and PathOrder duplication\ntesting.\n\nI am unsure if samekeys should just test the first key for equality, or\nthe full length of key1 as I have done. Does the optimizer make use of\nthe second key of a merged RelOptInfo for subsequent joins? My guess is\nthat it does. For example, I keep plans that have key orders of 1,2,3\nand 1,3,2. If only the first key is used in subsequent joins, I could\njust keep the cheapest of the two.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Feb 1999 01:44:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "samekeys"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> This basically says that key1, which is the old key, has to match key2\n> for the length of key1. If key2 has extra keys after that, that is\n> fine. We will still consider the keys equal. The old code obviously\n> was broken and badly thought out.\n> ...\n> I am unsure if samekeys should just test the first key for equality, or\n> the full length of key1 as I have done.\n\nThe comment in front of samekeys claimed:\n\n * It isn't necessary to check that each sublist exactly contain\n * the same elements because if the routine that built these\n * sublists together is correct, having one element in common\n * implies having all elements in common.\n\nWas that wrong? Or, perhaps, it was once right but no longer?\nIt sounded like fragile coding to me, but I didn't have reason\nto know it was broken...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Feb 1999 11:04:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] samekeys "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > This basically says that key1, which is the old key, has to match key2\n> > for the length of key1. If key2 has extra keys after that, that is\n> > fine. We will still consider the keys equal. The old code obviously\n> > was broken and badly thought out.\n> > ...\n> > I am unsure if samekeys should just test the first key for equality, or\n> > the full length of key1 as I have done.\n> \n> The comment in front of samekeys claimed:\n> \n> * It isn't necessary to check that each sublist exactly contain\n> * the same elements because if the routine that built these\n> * sublists together is correct, having one element in common\n> * implies having all elements in common.\n> \n> Was that wrong? Or, perhaps, it was once right but no longer?\n> It sounded like fragile coding to me, but I didn't have reason\n> to know it was broken...\n\nI think it was wrong. It clearly was not passing the right parameters.\nAs far as I know (1,2,3) and (3,2,1) are not the same. Their test would\njust take '1' and see if it is in (3,2,1).\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Feb 1999 11:35:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] samekeys"
}
] |
[
{
"msg_contents": "\n\t> On my todo list I found two missing commands:\n\t> \n\t> exec sql free\n\nfree is the opposite to prepare, a statement stays prepared until free is\ncalled.\nThis will release all associated resources. In my Informix manual it sais,\nthat \nit will not affect an already declared cursor on this statement.\n\nexec sql prepare d_id from :stmt;\n....\nexec sql free d_id; \n\n\t> exec sql alloc\n\nAll I find for this is:\nexec sql allocate descriptor :descname [with max :colmax];\nallocates memory for a system descriptor area (sqlca) for a maximum of\n\tcolmax columns (default 100) for use with a describe statement.\n\nAndreas\n\n",
"msg_date": "Tue, 9 Feb 1999 10:14:26 +0100 ",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Embedded SQL question"
},
{
"msg_contents": "On Tue, Feb 09, 1999 at 10:14:26AM +0100, Zeugswetter Andreas IZ5 wrote:\n> \n> free is the opposite to prepare, a statement stays prepared until free is\n> called.\n> This will release all associated resources. In my Informix manual it sais,\n> that \n> it will not affect an already declared cursor on this statement.\n> \n> exec sql prepare d_id from :stmt;\n> ....\n> exec sql free d_id; \n\nOh, that's easy. :-) I already have it with the standard syntax 'exec sql\ndeallocate prepare'.\n\n> \t> exec sql alloc\n> \n> All I find for this is:\n> exec sql allocate descriptor :descname [with max :colmax];\n> allocates memory for a system descriptor area (sqlca) for a maximum of\n> \tcolmax columns (default 100) for use with a describe statement.\n\nGuess this has to wait some more.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 9 Feb 1999 20:09:03 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Embedded SQL question"
}
] |
[
{
"msg_contents": "\t> and solving the data distribution as a separate effort.\n\nI also think, that this issue can not be solved with splitting the table\nextents\nto different directories. I would leave that issue to a Tablespace\nimplementation.\nWhat I like though is the idea that all table files have the extent number\nin the \nfilename, starting with tabname.1, and not having tabname (1. extent)\ntabname.1\n(2. extent).\n\nAsside from that, I think anybody having a (non blob) table of 2-4 Gb and\nabove \nshould start thinking of a redesign of his data model. Often the solution is\nto have \ne.g. one table per year and a union all view, so that clients can access all\ndata\nwithout even noticing. I think smart rewrite rules can be implemented, so\nthat updates,\ninserts and deletes are routed to the correct table (let's call it\nfragment).\n\nAndreas\n",
"msg_date": "Tue, 9 Feb 1999 10:46:39 +0100 ",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
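A minimal sketch of the fragment-plus-view arrangement Andreas describes above: one table per year, a UNION ALL view for reading, and a rewrite rule that routes inserts to the proper fragment. All names are invented for illustration, only one fragment's insert rule is shown, and it is an assumption (not confirmed in this thread) that the rule system of this era accepts views over UNION ALL and conditional DO INSTEAD rules in exactly this form.

    -- hypothetical yearly fragments with identical layout
    create table sales_1998 (saledate date, amount float8);
    create table sales_1999 (saledate date, amount float8);

    -- clients see a single relation
    create view sales as
        select * from sales_1998
        union all
        select * from sales_1999;

    -- route inserts for 1999 to the proper fragment; analogous rules
    -- would cover the other fragments and update/delete
    create rule sales_ins_1999 as on insert to sales
        where new.saledate >= '1999-01-01'
        do instead
        insert into sales_1999 values (new.saledate, new.amount);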
{
"msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> Asside from that, I think anybody having a (non blob) table of 2-4 Gb and\n> above should start thinking of a redesign of his data model. Often the \n> solution is to have e.g. one table per year and a union all view, so that\n> clients can access all data without even noticing.\n\nOracle approaches this problem from the other end. In ver 8.x you can\ndefine\nvirtual tables (or some name like that), which are actually views of\nexisting \ntables. These act mostly as ordinary tables - you can define indexes on\nthem,\ninsert/delet/update, views, etc. - except that the data is actually\nstored in \nthe main table.\n\n> I think smart rewrite rules can be implemented, so that updates,\n> inserts and deletes are routed to the correct table (let's call it\n> fragment).\n\nProbably, but why must one do that extra work which should be done by\nthe \ndatabase (in an ideal world) ?\n\nOracles virtual tables are probably 'smart rewrite rules', just the user \ndoes not have to be too smart to use them.\n\n----------------\nHannu\n",
"msg_date": "Tue, 09 Feb 1999 14:33:00 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "On Tue, 9 Feb 1999, Zeugswetter Andreas IZ5 wrote:\n\n> \t> and solving the data distribution as a separate effort.\n> \n> I also think, that this issue can not be solved with splitting the\n> table extents to different directories. I would leave that issue to a\n> Tablespace implementation. What I like though is the idea that all\n> table files have the extent number in the filename, starting with\n> tabname.1, and not having tabname (1. extent) tabname.1 (2. extent).\n\nThis is something we need to thrash out here ;-)\n \n> Asside from that, I think anybody having a (non blob) table of 2-4 Gb\n> and above should start thinking of a redesign of his data model. Often\n> the solution is to have e.g. one table per year and a union all view,\n> so that clients can access all data without even noticing. I think\n> smart rewrite rules can be implemented, so that updates, inserts and\n> deletes are routed to the correct table (let's call it fragment).\n\nThe senario I can think of, is a database containing astronomical\nobservations (aka tass http://www.tass-survey.org)\n\nYou have one row per object, and this table is indexed against not just\nthe position of the object, but its observed photometric properties, etc.\n\nIt would be pretty difficult to maintain seeing what the expected volume\nof data is going to be.\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n",
"msg_date": "Tue, 9 Feb 1999 13:56:11 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Problems with >2GB tables on Linux 2.0"
},
{
"msg_contents": "Hannu Krossing wrote:\n>\n> Oracles virtual tables are probably 'smart rewrite rules', just the user\n> does not have to be too smart to use them.\n\n Look at our programmers manual and into the rules regression\n test. The examples given there show, that the functionality\n is already part of the Postgres rule system.\n\n Just the interface to setup the scenario isn't that easy.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 9 Feb 1999 19:38:10 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Problems with >2GB tables on Linux 2.0"
}
] |
[
{
"msg_contents": "Hi,\n\nI am new in Pgsql-Hacker mailing list. I didn't have any answer from others \nlists.\n\nI tried to create a table with a timestamp field as part of primary key. \nPgsql doesn't have an \"ops_name\" for timestamp. You will see this when you \nuse create table. DON'T DO THIS WITH YOUR REGULAR DATABASE. Create a \nseparate one.\nIf you create the table without primary key and after create an unique \nindex with abstime_ops, everything will run well.\nHowever if you use primary key clause, the table can't be dropped or \ncreated again. Look the sequence above.\n\n\tcreate table TBL (FLD1 int2, FLD2 timestamp, FLD3 text, primary \nkey(FLD1,FLD2));\n--> Pgsql will not create because FLD2 is timestamp\n\tcreate table TBL (FLD1 int2, FLD2 timestamp, FLD3 text);\n--> Pgsql said: Relation TBL already exist.\n\tdrop table TBL;\n--> Pgsql said: Relation TBL don't exist. (So strange).\n\nI tried vacuum too, but TBL still was there. The only way was: dump \ndatabase, destroydb and createdb it again.\n\nI looked into database files. TBL name appears in pg_type_typname_index, \npg_class_relname_index, pg_type.\n\nSeems to me that PgSQL creates the table, try to create the index, but when \nthe problems occurs, the \"rollback\" of create table is not completed.\n\nWhat do you think about this ? Is Hackers the right place to send this ?\n\nI'm using RedHat 5.2 (Intel) with Pgsql 6.4.2\n\nThanks.\n\nRicardo Coelho.\n\n",
"msg_date": "Tue, 9 Feb 1999 11:35:58 -0200",
"msg_from": "\"Ricardo J.C.Coelho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Timestamp fileds into index "
},
{
"msg_contents": "> I tried to create a table with a timestamp field as part of primary \n> key. Pgsql doesn't have an \"ops_name\" for timestamp. You will see this \n> when you use create table. DON'T DO THIS WITH YOUR REGULAR DATABASE.\n\nI don't think this creates permanent damage; see below.\n\n> If you create the table without primary key and after create an unique\n> index with abstime_ops, everything will run well.\n> However if you use primary key clause, the table can't be dropped or\n> created again. Look the sequence above.\n> \n> create table TBL (FLD1 int2, FLD2 timestamp, FLD3 text,\n> primary key(FLD1,FLD2));\n> --> Pgsql will not create because FLD2 is timestamp\n> create table TBL (FLD1 int2, FLD2 timestamp, FLD3 text);\n> --> Pgsql said: Relation TBL already exist.\n> drop table TBL;\n> --> Pgsql said: Relation TBL don't exist. (So strange).\n\npostgres=> create table TBL (FLD1 int2, FLD2 timestamp, FLD3 text,\nprimary key(FLD1,FLD2));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index tbl_pkey\nfor table tbl\nERROR: Can't find a default operator class for type 1296.\npostgres=> drop table tbl;\nERROR: Relation 'tbl' does not exist\npostgres=> \\q\ngolem$ psql\nWelcome to the POSTGRESQL interactive sql monitor:\npostgres=> drop table tbl;\nERROR: Relation 'tbl' does not exist\npostgres=> create table TBL (FLD1 int2, FLD2 timestamp, FLD3 text);\nCREATE\npostgres=> drop table tbl;\nDROP\n\n> I tried vacuum too, but TBL still was there. The only way was: dump\n> database, destroydb and createdb it again.\n\nI think you just needed to exit your session and restart. See above.\n\n> Seems to me that PgSQL creates the table, try to create the index, but \n> when the problems occurs, the \"rollback\" of create table is not \n> completed. What do you think about this ?\n\nYour analysis is probably correct.\n\n> Is Hackers the right place to send this ?\n\nYes.\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 15:20:02 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Timestamp fileds into index"
}
] |
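For reference, the workaround Ricardo describes reduces to the two statements below.
This is only a sketch of his report against 6.4.2 (using the abstime_ops operator
class on a timestamp column is his observation, not something verified here), and
the composite (FLD1, FLD2) key from his original CREATE TABLE is left aside.

    -- Create the table without the PRIMARY KEY clause...
    CREATE TABLE tbl (fld1 int2, fld2 timestamp, fld3 text);
    -- ...then build the unique index by hand with an explicit operator class.
    CREATE UNIQUE INDEX tbl_fld2_key ON tbl (fld2 abstime_ops);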
[
{
"msg_contents": "So I'm not the only person to see a bug like\nthis?\n\nD.\n\n-----Original Message-----\nFrom: Ricardo J.C.Coelho [mailto:[email protected]]\nSent: Tuesday, February 09, 1999 8:36 AM\nTo: '[email protected]'\nSubject: [HACKERS] Timestamp fileds into index \n\n\nHi,\n\nI am new in Pgsql-Hacker mailing list. I didn't have any answer from others \nlists.\n\nI tried to create a table with a timestamp field as part of primary key. \nPgsql doesn't have an \"ops_name\" for timestamp. You will see this when you \nuse create table. DON'T DO THIS WITH YOUR REGULAR DATABASE. Create a \nseparate one.\nIf you create the table without primary key and after create an unique \nindex with abstime_ops, everything will run well.\nHowever if you use primary key clause, the table can't be dropped or \ncreated again. Look the sequence above.\n\n\tcreate table TBL (FLD1 int2, FLD2 timestamp, FLD3 text, primary \nkey(FLD1,FLD2));\n--> Pgsql will not create because FLD2 is timestamp\n\tcreate table TBL (FLD1 int2, FLD2 timestamp, FLD3 text);\n--> Pgsql said: Relation TBL already exist.\n\tdrop table TBL;\n--> Pgsql said: Relation TBL don't exist. (So strange).\n\nI tried vacuum too, but TBL still was there. The only way was: dump \ndatabase, destroydb and createdb it again.\n\nI looked into database files. TBL name appears in pg_type_typname_index, \npg_class_relname_index, pg_type.\n\nSeems to me that PgSQL creates the table, try to create the index, but when \nthe problems occurs, the \"rollback\" of create table is not completed.\n\nWhat do you think about this ? Is Hackers the right place to send this ?\n\nI'm using RedHat 5.2 (Intel) with Pgsql 6.4.2\n\nThanks.\n\nRicardo Coelho.\n\n",
"msg_date": "Tue, 9 Feb 1999 09:59:16 -0500 ",
"msg_from": "Dan Gowin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Timestamp fileds into index "
}
] |
[
{
"msg_contents": "I am trying to drop an index and I get the following error:\n\nsdbm=> drop index dafif_runway_arpt_ident;\nERROR: IndexSupportInitialize: corrupted catalogs\n\nAny idea on how to clean up my database?\nThanks\n---------\nChris Williams\nSterling Software\nRome, New York\nPhone: (315) 336-0500\nEmail: [email protected]\n\n",
"msg_date": "Tue, 9 Feb 1999 10:14:30 -0500",
"msg_from": "\"Chris Williams\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error dropping indexes"
}
] |
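The thread ends without an answer; as a purely illustrative first step, one might
check whether the catalog rows for the index are still consistent. The queries
below use only long-standing catalog columns (pg_class.oid, pg_class.relname,
pg_index.indexrelid, pg_index.indrelid) and old-style join syntax; they merely show
whether the pg_class and pg_index entries for the index are both present, and are
not a fix for the reported error.

    SELECT oid, relname, relkind
      FROM pg_class
     WHERE relname = 'dafif_runway_arpt_ident';

    SELECT i.indexrelid, i.indrelid
      FROM pg_index i, pg_class c
     WHERE c.relname = 'dafif_runway_arpt_ident'
       AND i.indexrelid = c.oid;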
[
{
"msg_contents": "> Thank Tom. It worked here.\n> By the way, is there a fix to CHECK clause of CREATE TABLE don't use \n> the whole memory of the computer ?\n\nIf the memory problem is observed when you try reloading a table, I\nbelieve that it was just fixed yesterday (by Jan?). It will certainly be\nfixed for v6.5, and I think they may be considering releasing a source\npatch for v6.4.2.\n\nCheck the mailing list archives for recent traffic, or subscribe to\nhackers and look for new traffic on this topic...\n\nGood luck.\n\n - Tom\n",
"msg_date": "Tue, 09 Feb 1999 15:52:47 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RES: [HACKERS] Timestamp fileds into index"
}
] |
[
{
"msg_contents": "A NEXTSTEP3.3 user reported some porting problems.\n\n1. #if FALSE problem\n\nFor example in src/include/utils/int8.h:\n\n\t#if FALSE\n\textern int64 *int28 (int16 val);\n\textern int16 int82(int64 * val);\n\t\n\t#endif\n\nUnfortunately in NEXTSTEP FALSE has been already defined as:\n\n\t#define\tFALSE\t((boolean_t) 0)\n\nWhat about using #if 0 or #if PG_FALSE or whatever instead of #if\nFALSE?\n\n\n2. Datum problem\n\nNEXTSTEP has its own \"Datum\" type and of course it coflicts with\nPostgreSQL's Datum. Possible solution might be put below into c.h:\n\n#ifdef NeXT\n#undef Datum\n#define Datum PG_Datum\n#define DatumPtr PG_DatumPtr\n#endif\n\n\nComments?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 10 Feb 1999 11:02:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "NEXTSTEP porting problems"
},
{
"msg_contents": "> A NEXTSTEP3.3 user reported some porting problems.\n> \n> 1. #if FALSE problem\n> \n> For example in src/include/utils/int8.h:\n> \n> \t#if FALSE\n> \textern int64 *int28 (int16 val);\n> \textern int16 int82(int64 * val);\n> \t\n> \t#endif\n> \n> Unfortunately in NEXTSTEP FALSE has been already defined as:\n> \n> \t#define\tFALSE\t((boolean_t) 0)\n> \n> What about using #if 0 or #if PG_FALSE or whatever instead of #if\n> FALSE?\n> \n\nDone, by you, I think.\n\n\n> \n> 2. Datum problem\n> \n> NEXTSTEP has its own \"Datum\" type and of course it coflicts with\n> PostgreSQL's Datum. Possible solution might be put below into c.h:\n> \n> #ifdef NeXT\n> #undef Datum\n> #define Datum PG_Datum\n> #define DatumPtr PG_DatumPtr\n> #endif\n> \n> \n> Comments?\n\nIs Datum a #define on NextStep. Can we just #undef it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Mar 1999 10:01:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NEXTSTEP porting problems"
},
{
"msg_contents": ">> A NEXTSTEP3.3 user reported some porting problems.\n>> \n>> 1. #if FALSE problem\n>> \n>> For example in src/include/utils/int8.h:\n>> \n>> \t#if FALSE\n>> \textern int64 *int28 (int16 val);\n>> \textern int16 int82(int64 * val);\n>> \t\n>> \t#endif\n>> \n>> Unfortunately in NEXTSTEP FALSE has been already defined as:\n>> \n>> \t#define\tFALSE\t((boolean_t) 0)\n>> \n>> What about using #if 0 or #if PG_FALSE or whatever instead of #if\n>> FALSE?\n>> \n>\n>Done, by you, I think.\n\nYes. Marc has applied my patch.\n\n>> 2. Datum problem\n>> \n>> NEXTSTEP has its own \"Datum\" type and of course it coflicts with\n>> PostgreSQL's Datum. Possible solution might be put below into c.h:\n>> \n>> #ifdef NeXT\n>> #undef Datum\n>> #define Datum PG_Datum\n>> #define DatumPtr PG_DatumPtr\n>> #endif\n>> \n>> \n>> Comments?\n>\n>Is Datum a #define on NextStep. Can we just #undef it?\n\nI will ask the NextStep user.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Mar 1999 10:29:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NEXTSTEP porting problems "
}
] |
[
{
"msg_contents": "It seems that casting varchar and bpchar does not work.\n\nI create two tables.\n\ncreate table main (id char(4), sub_id char(4), caption char(10));\ncreate table sub (sub_id varchar(10), sub_caption varchar(10));\n\nThen do a select.\n\nselect main.id, sub.sub_caption, main.caption from main, sub\n\twhere main.sub_id=sub.sub_id;\n\nI get an error(this is normal, I guess).\n\nERROR: There is more than one possible operator '=' for types 'bpchar' and 'varchar'\n\tYou will have to retype this query using an explicit cast\n\nSo I try some castings.\n\ntest=> select main.id, sub.sub_caption, main.caption from main, sub where main.sub_id::varchar =sub.sub_id;\ntest=> ERROR: There is more than one possible operator '=' for types 'bpchar' and 'varchar'\n\tYou will have to retype this query using an explicit cast\ntest=> select main.id, sub.sub_caption, main.caption from main, sub where main.sub_id=sub.sub_id::bpchar;\nERROR: There is more than one possible operator '=' for types 'bpchar' and 'varchar'\n\tYou will have to retype this query using an explicit cast\ntest=> select main.id, sub.sub_caption, main.caption from main, sub where cast(main.sub_id as varchar) =sub.sub_id;\nERROR: There is more than one possible operator '=' for types 'bpchar' and 'varchar'\n\tYou will have to retype this query using an explicit cast\ntest=> select main.id, sub.sub_caption, main.caption from main, sub where main.sub_id=cast(sub.sub_id as bpchar);\nERROR: There is more than one possible operator '=' for types 'bpchar' and 'varchar'\n\tYou will have to retype this query using an explicit cast\n\nDo we have a problem with casting? BTW, this is 6.4.2. I have not\ntried current yet.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 10 Feb 1999 11:43:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "cannot cast bpchar and varchar"
},
{
"msg_contents": "> It seems that casting varchar and bpchar does not work.\n\nHmmm. It seems to be a combination of two problems:\n\n1) not knowing which type should be preferred (or how to compare varchar\nand bpchar; should I strip the blank padding from a bpchar? Probably\nso...).\n\n2) swallowing the type coersions since bpchar, varchar, and text are\nconsidered to be binary compatible. I may have seen another instance of\nthis being a problem, and perhaps should figure out how to propagate the\nnew coerced type into the query rather than dropping it.\n\nWill look at this. Question: how *should* we compare bpchar and varchar?\nIt may be that we should have some explicit comparison or coersion\nroutines to make things work smoothly.\n\nThanks for the report.\n\n - Tom\n",
"msg_date": "Wed, 10 Feb 1999 04:32:24 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cannot cast bpchar and varchar"
},
{
"msg_contents": ">Hmmm. It seems to be a combination of two problems:\n>\n>1) not knowing which type should be preferred (or how to compare varchar\n>and bpchar; should I strip the blank padding from a bpchar? Probably\n>so...).\n>\n>2) swallowing the type coersions since bpchar, varchar, and text are\n>considered to be binary compatible. I may have seen another instance of\n>this being a problem, and perhaps should figure out how to propagate the\n>new coerced type into the query rather than dropping it.\n>\n>Will look at this. Question: how *should* we compare bpchar and varchar?\n>It may be that we should have some explicit comparison or coersion\n>routines to make things work smoothly.\n\nNot sure. I will check some SQL books at home.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 10 Feb 1999 17:59:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cannot cast bpchar and varchar "
},
{
"msg_contents": "> >Will look at this. Question: how *should* we compare bpchar and varchar?\n> >It may be that we should have some explicit comparison or coersion\n> >routines to make things work smoothly.\n> \n> Not sure. I will check some SQL books at home.\n\nAccording to the standard, the result of comparison between a fixed\nlength char (bpchar) and a variable length char (varchar or text) may\nvary depending on an attribute \"PAD SPACE\" or \"NO PAD\" of the\nCOLLATION. Since we do not have COLLATION (yet), we need to have\nanother way to decide which scheme (PAD SPACE or NO PAD) should be\nemployed. Possible solution might be:\n\no decide at compile time. always use one of them at runtime.\n\no decide at runtime. new set command or an environment variable might\n be used.\n\nComments?\n---\nTatsuo Ishii\n",
"msg_date": "Thu, 11 Feb 1999 14:08:40 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cannot cast bpchar and varchar "
}
] |
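To make the PAD SPACE / NO PAD question concrete, here is Tatsuo's schema with one
hypothetical row per table (the inserted values are invented; only the table
definitions come from his report). main.sub_id is stored blank padded as 'ab  '
while sub.sub_id stores 'ab'; whether the join qualification should treat the two
values as equal is precisely the padding semantics the thread leaves open.

    CREATE TABLE main (id char(4), sub_id char(4), caption char(10));
    CREATE TABLE sub (sub_id varchar(10), sub_caption varchar(10));

    INSERT INTO main VALUES ('0001', 'ab', 'first');
    INSERT INTO sub VALUES ('ab', 'sub ab');

    SELECT main.id, sub.sub_caption, main.caption
      FROM main, sub
     WHERE main.sub_id = sub.sub_id;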
[
{
"msg_contents": "Hello,\n\nI am running 6.4.2 on Sparc/Solaris 2.7. Occasionally (far more\noften than I would like - which is never :), the postmaster refuses\nconnections. Upon investigation, there are 48 postmaster processes\nrunning. If one kills the parent and restarts, all is well for some\ntime. Attached is a chunk of output from postmaster, unfortunately\nit does not timestamp, so it is hard to see when what happened. I'm\nmost disturbed about the device space message. None of my disks are\neven close to full, nor have been...\n\nHelp! Advice? Ways to gather further info? Is this a Solaris 7\nthing?\n\nAlso, is the 48 number configurable? It almost looks as though that\nis the maximum number of outstanding queries I can have at one time?\n\nTIA,\n\nDwD\n--\nDaryl W. Dunbar\nhttp://www.com, Where the Web Begins!\nmailto:[email protected]",
"msg_date": "Tue, 9 Feb 1999 22:18:22 -0500",
"msg_from": "\"Daryl W. Dunbar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and Solaris 7?"
},
{
"msg_contents": "On Tue, 9 Feb 1999, Daryl W. Dunbar wrote:\n\n> Hello,\n> \n> I am running 6.4.2 on Sparc/Solaris 2.7. Occasionally (far more\n> often than I would like - which is never :), the postmaster refuses\n> connections. Upon investigation, there are 48 postmaster processes\n> running. If one kills the parent and restarts, all is well for some\n> time. Attached is a chunk of output from postmaster, unfortunately\n> it does not timestamp, so it is hard to see when what happened. I'm\n> most disturbed about the device space message. None of my disks are\n> even close to full, nor have been...\n> \n> Help! Advice? Ways to gather further info? Is this a Solaris 7\n> thing?\n\nWe just got our Solaris 7 CDs, and I'm going to be installing it on one of\nour quieter machines in coming weeks, but...\n\nhave you tried doing a 'truss -p' on the parent process?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 9 Feb 1999 23:22:39 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7?"
},
{
"msg_contents": "Reproduced here too. (Solaris 2.6)\n\n>On Tue, 9 Feb 1999, Daryl W. Dunbar wrote:\n>\n>> Hello,\n>> \n>> I am running 6.4.2 on Sparc/Solaris 2.7. Occasionally (far more\n>> often than I would like - which is never :), the postmaster refuses\n>> connections. Upon investigation, there are 48 postmaster processes\n>> running. If one kills the parent and restarts, all is well for some\n>> time. Attached is a chunk of output from postmaster, unfortunately\n>> it does not timestamp, so it is hard to see when what happened. I'm\n>> most disturbed about the device space message. None of my disks are\n>> even close to full, nor have been...\n>> \n>> Help! Advice? Ways to gather further info? Is this a Solaris 7\n>> thing?\n>\n>We just got our Solaris 7 CDs, and I'm going to be installing it on one of\n>our quieter machines in coming weeks, but...\n>\n>have you tried doing a 'truss -p' on the parent process?\n\nIncreasing # of semaphores should solve the problem, I guess. I'm\ngoing to try that as soon as I find the way to increase semaphores.\n---\nTatsuo Ishii\n",
"msg_date": "Wed, 10 Feb 1999 13:01:22 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "\nI'm also getting these types of errors in the log:\nNOTICE: Deadlock detected -- See the lock(l) manual page for a\npossible cause.\nERROR: WaitOnLock: error on wakeup - Aborting this transaction\nNOTICE: Deadlock detected -- See the lock(l) manual page for a\npossible cause.\nERROR: WaitOnLock: error on wakeup - Aborting this transaction\n\nalso some of these:\nIpcSemaphoreCreate: semget failed (No space left on device)\nkey=5432614, num=16, permission=600\nIpcSemaphoreCreate: semget failed (No space left on device)\nkey=5432714, num=16, permission=600\n\nand many, many of these:\nFATAL: pq_putnchar: fputc() failed: errno=32\nFATAL: pq_putnchar: fputc() failed: errno=32\n\nMy disks are virtually empty and I can't see where I'm getting a\nbroken pipe.\n\nAll communications are local using DBI/DBD::Pg w/Perl 5.005_02.\n\nAny thoughts? I'm thinking it might have to do with sockets/streams\nsince 7 uses sockets and all prior solaris 2.x used streams. Maybe\nit's time to override some configure options?\n\nDwD\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tatsuo Ishii\nSent: Tuesday, February 09, 1999 11:01 PM\nTo: The Hermit Hacker\nCc: Daryl W. Dunbar; [email protected]\nSubject: Re: [HACKERS] PostgreSQL and Solaris 7?\n\n\nReproduced here too. (Solaris 2.6)\n\n>On Tue, 9 Feb 1999, Daryl W. Dunbar wrote:\n>\n>> Hello,\n>>\n>> I am running 6.4.2 on Sparc/Solaris 2.7. Occasionally (far more\n>> often than I would like - which is never :), the postmaster\nrefuses\n>> connections. Upon investigation, there are 48 postmaster\nprocesses\n>> running. If one kills the parent and restarts, all is well for\nsome\n>> time. Attached is a chunk of output from postmaster,\nunfortunately\n>> it does not timestamp, so it is hard to see when what happened.\nI'm\n>> most disturbed about the device space message. None of my disks\nare\n>> even close to full, nor have been...\n>>\n>> Help! Advice? Ways to gather further info? Is this a Solaris 7\n>> thing?\n>\n>We just got our Solaris 7 CDs, and I'm going to be installing it on\none of\n>our quieter machines in coming weeks, but...\n>\n>have you tried doing a 'truss -p' on the parent process?\n\nIncreasing # of semaphores should solve the problem, I guess. I'm\ngoing to try that as soon as I find the way to increase semaphores.\n---\nTatsuo Ishii\n\n",
"msg_date": "Tue, 9 Feb 1999 23:16:20 -0500",
"msg_from": "\"Daryl W. Dunbar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "On Wed, 10 Feb 1999, Tatsuo Ishii wrote:\n\n> Reproduced here too. (Solaris 2.6)\n> \n> >On Tue, 9 Feb 1999, Daryl W. Dunbar wrote:\n> >\n> >> Hello,\n> >> \n> >> I am running 6.4.2 on Sparc/Solaris 2.7. Occasionally (far more\n> >> often than I would like - which is never :), the postmaster refuses\n> >> connections. Upon investigation, there are 48 postmaster processes\n> >> running. If one kills the parent and restarts, all is well for some\n> >> time. Attached is a chunk of output from postmaster, unfortunately\n> >> it does not timestamp, so it is hard to see when what happened. I'm\n> >> most disturbed about the device space message. None of my disks are\n> >> even close to full, nor have been...\n> >> \n> >> Help! Advice? Ways to gather further info? Is this a Solaris 7\n> >> thing?\n> >\n> >We just got our Solaris 7 CDs, and I'm going to be installing it on one of\n> >our quieter machines in coming weeks, but...\n> >\n> >have you tried doing a 'truss -p' on the parent process?\n> \n> Increasing # of semaphores should solve the problem, I guess. I'm\n> going to try that as soon as I find the way to increase semaphores.\n\n>From oen of our servers at work, using Oracle\n\n/etc/system:\n\nset shmsys:shminfo_shmmax=16777216\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=100\nset shmsys:shminfo_shmseg=51\n\n\nThere are appropriate values for sem also...I actually have a document at\nwork that explains it all...let me see if I can dig it up and add it to\nthe WWW site or something...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 10 Feb 1999 00:16:30 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "On Wed, 10 Feb 1999, The Hermit Hacker wrote:\n> /etc/system:\n> \n> set shmsys:shminfo_shmmax=16777216\n> set shmsys:shminfo_shmmin=1\n> set shmsys:shminfo_shmmni=100\n> set shmsys:shminfo_shmseg=51\n\n Don't forget to reboot. The file /etc/system is read by kernel only on\nboot.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 10 Feb 1999 11:20:32 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": ">> Increasing # of semaphores should solve the problem, I guess. I'm\n>> going to try that as soon as I find the way to increase semaphores.\n>\n>>From oen of our servers at work, using Oracle\n>\n>/etc/system:\n>\n>set shmsys:shminfo_shmmax=16777216\n>set shmsys:shminfo_shmmin=1\n>set shmsys:shminfo_shmmni=100\n>set shmsys:shminfo_shmseg=51\n>\n>\n>There are appropriate values for sem also...I actually have a document at\n>work that explains it all...let me see if I can dig it up and add it to\n>the WWW site or something...\n\nGreat.\n\nI checked my Solaris box using sysdef and got:\n\n* IPC Semaphores\n*\n 10 entries in semaphore map (SEMMAP)\n 10 semaphore identifiers (SEMMNI)\n 60 semaphores in system (SEMMNS)\n 30 undo structures in system (SEMMNU)\n 25 max semaphores per id (SEMMSL)\n 10 max operations per semop call (SEMOPM)\n 10 max undo entries per process (SEMUME)\n 32767 semaphore maximum value (SEMVMX)\n 16384 adjust on exit max value (SEMAEM)\n\nThere are so many tunable paramters! I expect cleaner explanations for \nthese kernel variables from Marc's document.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 10 Feb 1999 17:56:50 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>> Increasing # of semaphores should solve the problem, I guess. I'm\n>>> going to try that as soon as I find the way to increase semaphores.\n\nRight --- the messages Daryl is seeing indicate he's running out of\nsemaphores or semaphore IDs, not file space. (Someone was too lazy\nto create a separate kernel errno for out-of-semaphores.)\n\n: IpcSemaphoreCreate: semget failed (No space left on device)\n ^^^^^^^^^^^^^\n\nMy guess is that the \"WaitOnLock\" and \"stuck spinlock\" complaints are\nartifacts of not being able to recover from the out-of-semaphores\ncondition. I hope to make this a little more robust in time for 6.5.\n\n> I checked my Solaris box using sysdef and got:\n\n> 10 entries in semaphore map (SEMMAP)\n> 10 semaphore identifiers (SEMMNI)\n> 60 semaphores in system (SEMMNS)\n> 25 max semaphores per id (SEMMSL)\n\nThese settings are far too small if you hope to go beyond a couple dozen\nPostgres backends. Postgres requires one semaphore per backend, which\nit (presently) allocates in groups of 16. Thus you cannot get past 48\nbackends with these kernel settings --- starting the 49th backend requires\nallocating semaphores 49-64, but your system is set up to allow only 60\nsemas total.\n\n(If your platform doesn't have a TEST_AND_SET implementation then\nseveral more semas are needed for spinlock emulation, but I assume\nthat's not a problem on Solaris.)\n\nSEMMNI should also be bumped up, since you could not get past 10*16\nbackends with it set at 10 --- and that's not allowing for anything\nelse to be using semaphores! It'd be foolish not to leave at least\na couple dozen semas and sema IDs free at Postgres' peak usage.\n\nI dunno what SEMMAP is (no such parameter in my kernel) but it\nprobably needs to be at least as large as SEMMNI, possibly larger.\n\nTo run more than 64 backends you will also need to increase Postgres'\ninternal MaxBackendId constant. Somewhere along here you are also\nlikely to run into other kernel configuration limits, like the total\nnumber of processes, total processes for a single user, total number\nof open files, etc. These are all fixable but you don't want to\nreboot the system to install new values very often.\n\nWe need a chapter in the installation guide that covers all this stuff\nin more detail... offhand I don't even know how many open files to\nallow per backend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Feb 1999 10:01:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "In order to increase the number of semaphores, I added the following\nentries to /etc/system and rebooted:\n*\n* Increase the total number of Semaphores per user and system\nset semsys:seminfo_semmap=128\nset semsys:seminfo_semmni=128\nset semsys:seminfo_semmns=8192\nset semsys:seminfo_semmnu=8192\nset semsys:seminfo_semmsl=64\nset semsys:seminfo_semopm=32\nset semsys:seminfo_semume=32\n\nThese values are considerably larger than the defaults but more in\nline with the Linux defaults. Please note I have 512MB of memory on\nmy Sparc, so I was not concerned about allocating too much table\nspace.\n\nThanks all for your help, this seems to have solved my problem.\n\nDwD\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, February 10, 1999 10:01 AM\n> To: [email protected]\n> Cc: Daryl W. Dunbar; [email protected]\n> Subject: Re: [HACKERS] PostgreSQL and Solaris 7?\n>\n>\n> Tatsuo Ishii <[email protected]> writes:\n> >>> Increasing # of semaphores should solve the problem,\n> I guess. I'm\n> >>> going to try that as soon as I find the way to\n> increase semaphores.\n>\n> Right --- the messages Daryl is seeing indicate he's\n> running out of\n> semaphores or semaphore IDs, not file space. (Someone\n> was too lazy\n> to create a separate kernel errno for out-of-semaphores.)\n>\n> : IpcSemaphoreCreate: semget failed (No space left on device)\n> ^^^^^^^^^^^^^\n>\n> My guess is that the \"WaitOnLock\" and \"stuck spinlock\"\n> complaints are\n> artifacts of not being able to recover from the out-of-semaphores\n> condition. I hope to make this a little more robust in\n> time for 6.5.\n>\n> > I checked my Solaris box using sysdef and got:\n>\n> > 10 entries in semaphore map (SEMMAP)\n> > 10 semaphore identifiers (SEMMNI)\n> > 60 semaphores in system (SEMMNS)\n> > 25 max semaphores per id (SEMMSL)\n>\n> These settings are far too small if you hope to go beyond\n> a couple dozen\n> Postgres backends. Postgres requires one semaphore per\n> backend, which\n> it (presently) allocates in groups of 16. Thus you\n> cannot get past 48\n> backends with these kernel settings --- starting the 49th\n> backend requires\n> allocating semaphores 49-64, but your system is set up to\n> allow only 60\n> semas total.\n>\n> (If your platform doesn't have a TEST_AND_SET implementation then\n> several more semas are needed for spinlock emulation, but I assume\n> that's not a problem on Solaris.)\n>\n> SEMMNI should also be bumped up, since you could not get\n> past 10*16\n> backends with it set at 10 --- and that's not allowing\n> for anything\n> else to be using semaphores! It'd be foolish not to\n> leave at least\n> a couple dozen semas and sema IDs free at Postgres' peak usage.\n>\n> I dunno what SEMMAP is (no such parameter in my kernel) but it\n> probably needs to be at least as large as SEMMNI, possibly larger.\n>\n> To run more than 64 backends you will also need to\n> increase Postgres'\n> internal MaxBackendId constant. Somewhere along here you are also\n> likely to run into other kernel configuration limits,\n> like the total\n> number of processes, total processes for a single user,\n> total number\n> of open files, etc. These are all fixable but you don't want to\n> reboot the system to install new values very often.\n>\n> We need a chapter in the installation guide that covers\n> all this stuff\n> in more detail... offhand I don't even know how many open files to\n> allow per backend.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 10 Feb 1999 13:08:56 -0500",
"msg_from": "\"Daryl W. Dunbar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "> > I checked my Solaris box using sysdef and got:\n> \n> > 10 entries in semaphore map (SEMMAP)\n> > 10 semaphore identifiers (SEMMNI)\n> > 60 semaphores in system (SEMMNS)\n> > 25 max semaphores per id (SEMMSL)\n> \n> These settings are far too small if you hope to go beyond a couple dozen\n> Postgres backends. Postgres requires one semaphore per backend, which\n> it (presently) allocates in groups of 16. Thus you cannot get past 48\n> backends with these kernel settings --- starting the 49th backend requires\n> allocating semaphores 49-64, but your system is set up to allow only 60\n> semas total.\n> \n> (If your platform doesn't have a TEST_AND_SET implementation then\n> several more semas are needed for spinlock emulation, but I assume\n> that's not a problem on Solaris.)\n> \n> SEMMNI should also be bumped up, since you could not get past 10*16\n> backends with it set at 10 --- and that's not allowing for anything\n> else to be using semaphores! It'd be foolish not to leave at least\n> a couple dozen semas and sema IDs free at Postgres' peak usage.\n> \n> I dunno what SEMMAP is (no such parameter in my kernel) but it\n> probably needs to be at least as large as SEMMNI, possibly larger.\n\nOk. If I consider 64 backends, at least following settings would be\nrequired from your suggestion:\n\n64 entries in semaphore map (SEMMAP)\n64 semaphore identifiers (SEMMNI)\n64 semaphores in system (SEMMNS)\n25 max semaphores per id (SEMMSL)\n\nIs this correct?\n\n> To run more than 64 backends you will also need to increase Postgres'\n> internal MaxBackendId constant. Somewhere along here you are also\n> likely to run into other kernel configuration limits, like the total\n> number of processes, total processes for a single user, total number\n> of open files, etc. These are all fixable but you don't want to\n> reboot the system to install new values very often.\n> \n> We need a chapter in the installation guide that covers all this stuff\n> in more detail... offhand I don't even know how many open files to\n> allow per backend.\n\nPostgreSQL seems to eat up file descriptor as many as possible. I\nobserve 30-40 or more files are opened by a backend. This is\ndefinitelty a problem when thinking about large number of backends.\n\nMy solution is using limit or ulimit command to lower the number of\navaliable file descriptors when starting postmaster. The lowered\nnumber must be 20 or greater, since PostgreSQL's \"virtual file\ndescriptor\" system reserves at least 20 open files per\nbackend(backend/storage/file/fd.c).\n---\nTatsuo Ishii\n",
"msg_date": "Thu, 11 Feb 1999 14:08:36 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Ok. If I consider 64 backends, at least following settings would be\n> required from your suggestion:\n\n> 64 entries in semaphore map (SEMMAP)\n> 64 semaphore identifiers (SEMMNI)\n> 64 semaphores in system (SEMMNS)\n> 25 max semaphores per id (SEMMSL)\n\n> Is this correct?\n\nNo. You do need SEMMNS >= 64 of course, but Postgres only needs a\nsema identifier for each block of 16 semas, so SEMMNI >= 4 will work.\nAccording to my references, the recommended value of SEMMAP is SEMMNI+2\n(it's for keeping track of unused \"holes\" between allocated sema-ID\ngroups, so that seems like it ought to be enough). SEMMSL could be as\nlow as 16, though I see no reason to reduce the default value.\n\nIn reality, of course, you had better leave some slop for other Unix\nprograms to be able to grab semas of their own. I'd suggest at least\ndoubling the minimum SEMMNS and SEMMNI. (On my HP box, ipcs shows\nvarious root-owned subsystems using 6 sema IDs with a total of 8 semas.\nSo I'd need at least SEMMNS = 72, SEMMNI = 10 to run 64 backends ---\nwith no margin for error.)\n\nI never understood why the default sema configuration values were so\nsmall anyway --- it's not like a semaphore uses a huge amount of kernel\nspace...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Feb 1999 10:29:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
},
{
"msg_contents": "\n\nAs promised, sorry for the delay...\n\nhttp://sunsolve.Sun.COM/private-cgi/us/doc2html?infodoc/2270+91876015215168\n\nThis gives an explnation of it all for Solaris...\n\nOn Thu, 11 Feb 1999, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Ok. If I consider 64 backends, at least following settings would be\n> > required from your suggestion:\n> \n> > 64 entries in semaphore map (SEMMAP)\n> > 64 semaphore identifiers (SEMMNI)\n> > 64 semaphores in system (SEMMNS)\n> > 25 max semaphores per id (SEMMSL)\n> \n> > Is this correct?\n> \n> No. You do need SEMMNS >= 64 of course, but Postgres only needs a\n> sema identifier for each block of 16 semas, so SEMMNI >= 4 will work.\n> According to my references, the recommended value of SEMMAP is SEMMNI+2\n> (it's for keeping track of unused \"holes\" between allocated sema-ID\n> groups, so that seems like it ought to be enough). SEMMSL could be as\n> low as 16, though I see no reason to reduce the default value.\n> \n> In reality, of course, you had better leave some slop for other Unix\n> programs to be able to grab semas of their own. I'd suggest at least\n> doubling the minimum SEMMNS and SEMMNI. (On my HP box, ipcs shows\n> various root-owned subsystems using 6 sema IDs with a total of 8 semas.\n> So I'd need at least SEMMNS = 72, SEMMNI = 10 to run 64 backends ---\n> with no margin for error.)\n> \n> I never understood why the default sema configuration values were so\n> small anyway --- it's not like a semaphore uses a huge amount of kernel\n> space...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 11 Feb 1999 15:07:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Solaris 7? "
}
] |
[
{
"msg_contents": "\n\t> What about using #if 0 or #if PG_FALSE or whatever instead of #if\n\t> FALSE?\n\nI think the consensus was to use:\n\n#ifdef NOT_USED\n\nin these cases\n\nAndreas\n",
"msg_date": "Wed, 10 Feb 1999 11:07:26 +0100",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] NEXTSTEP porting problems"
}
] |
[
{
"msg_contents": "This is not a worthwhile announcement!!\n\nPostgreSQL Software Database wrote:\n> \n> The following entry has been added to the Software Database:\n> \n> Title: Department of Pure and Applied Mathematics of L'Aquila WEB Site\n> Version: N/A\n> Platform: Digital Alpha with Digital Unix\n> \n> For more information, check out http://www.postgresql.org/software\n",
"msg_date": "Wed, 10 Feb 1999 08:13:14 -0800",
"msg_from": "Bruce Korb <[email protected]>",
"msg_from_op": true,
"msg_subject": "ENOUGH IS ENOUGH: [ANNOUNCE] Database Entry: "
}
] |
[
{
"msg_contents": "I am using postgres 6.4.2 on BSD/OS 3.1 with a Greek locale that I\nhave developed. I knew that regexes with postgress would not work because\nof something I did but a posting from another follow from Sweden gave me a\nclue that the problem must be with the regex package and not the locale.\n\nSo I investigated the code and found out the pg_isdigit(int ch),\npg_isalpha(int ch) and the associated functions do a comparison of\ncharacters as ints. I changed a few crucial points with a cast to\n(unsigned char) and voila , regexs in Greek with full locale support. My\nguess is that an int != unsigned char when comparing, the sign bit is\nprobably the culprit.\n\nPlease test the patch on some other language too, Swedish or Finish\nwould be a nice touch.\n\nPatch follows, but it is trivial really. \n\n--------------------------------------------------------------------------------\n*** regcomp.c\tTue Sep 1 07:31:25 1998\n--- regcomp.c.patched\tWed Feb 10 19:57:11 1999\n***************\n*** 1038,1046 ****\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn tolower(ch);\n \telse if (pg_islower(ch))\n! \t\treturn toupper(ch);\n \telse\n /* peculiar, but could happen */\n \t\treturn ch;\n--- 1038,1046 ----\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn tolower((unsigned char)ch);\n \telse if (pg_islower(ch))\n! \t\treturn toupper((unsigned char)ch);\n \telse\n /* peculiar, but could happen */\n \t\treturn ch;\n***************\n*** 1055,1067 ****\n static void\n bothcases(p, ch)\n struct parse *p;\n! int\t\t\tch;\n {\n \tpg_wchar *oldnext = p->next;\n \tpg_wchar *oldend = p->end;\n \tpg_wchar\tbracket[3];\n \n! \tassert(othercase(ch) != ch);/* p_bracket() would recurse */\n \tp->next = bracket;\n \tp->end = bracket + 2;\n \tbracket[0] = ch;\n--- 1055,1067 ----\n static void\n bothcases(p, ch)\n struct parse *p;\n! int\t\tch;\n {\n \tpg_wchar *oldnext = p->next;\n \tpg_wchar *oldend = p->end;\n \tpg_wchar\tbracket[3];\n \n! \tassert(othercase(ch) != (unsigned char)ch);/* p_bracket() would recurse */\n \tp->next = bracket;\n \tp->end = bracket + 2;\n \tbracket[0] = ch;\n***************\n*** 1084,1090 ****\n {\n \tcat_t\t *cap = p->g->categories;\n \n! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != ch)\n \t\tbothcases(p, ch);\n \telse\n \t{\n--- 1084,1090 ----\n {\n \tcat_t\t *cap = p->g->categories;\n \n! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != (unsigned char)ch)\n \t\tbothcases(p, ch);\n \telse\n \t{\n***************\n*** 1862,1868 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n #else\n! \treturn (isdigit(c));\n #endif\n }\n \n--- 1862,1868 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n #else\n! \treturn (isdigit((unsigned char)c));\n #endif\n }\n \n***************\n*** 1872,1878 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n #else\n! \treturn (isalpha(c));\n #endif\n }\n \n--- 1872,1878 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n #else\n! \treturn (isalpha((unsigned char)c));\n #endif\n }\n \n***************\n*** 1882,1888 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n #else\n! \treturn (isupper(c));\n #endif\n }\n \n--- 1882,1888 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n #else\n! 
\treturn (isupper((unsigned char)c));\n #endif\n }\n \n***************\n*** 1892,1897 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n #else\n! \treturn (islower(c));\n #endif\n }\n--- 1892,1897 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n #else\n! \treturn (islower((unsigned char)c));\n #endif\n }\n\n-- \nIncredible Networks LTD Angelos Karageorgiou\n20 Karea st, +30.1.92.12.312 (voice)\n116 36 Athens, Greece. +30.1.92.12.314 (fax)\nhttp://www.incredible.com [email protected] (e-mail)\n\n",
"msg_date": "Wed, 10 Feb 1999 19:44:03 +0200 (EET)",
"msg_from": "Angelos Karageorgiou <[email protected]>",
"msg_from_op": true,
"msg_subject": "regex + locale bug"
}
] |
[
{
"msg_contents": "Hi,\n\n could someone please enlighten me on RECEIPE? What is it and\n why doesn't it call the regular rule rewrite system after\n parsing the statements?\n\n Is it a way to bypass normal production rules (constraint\n deletes etc.)?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 10 Feb 1999 19:06:21 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "RECEIPE"
},
{
"msg_contents": "> Hi,\n> \n> could someone please enlighten me on RECEIPE? What is it and\n> why doesn't it call the regular rule rewrite system after\n> parsing the statements?\n> \n> Is it a way to bypass normal production rules (constraint\n> deletes etc.)?\n> \n\nI believe it is a disabled feature. See commands/recipe.c. It is\nrelated to TIOGA and is not used. Perhaps the addition of #ifdef TIOGA\nwould allow you to disable that code like we did with the other stuff. \nI can't see any code that deals with recipe anywhere meaningful.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 10 Feb 1999 13:14:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RECEIPE"
}
] |
[
{
"msg_contents": "I am using postgres 6.4.2 on BSD/OS 3.1 with a Greek locale that I\nhave developed. I knew that regexes with postgress would not work because\nof something I did but a posting from another follow from Sweden gave me a\nclue that the problem must be with the regex package and not the locale.\n\nSo I investigated the code and found out the pg_isdigit(int ch),\npg_isalpha(int ch) and the associated functions do a comparison of\ncharacters as ints. I changed a few crucial points with a cast to\n(unsigned char) and voila , regexs in Greek with full locale support. My\nguess is that an int != unsigned char when comparing, the sign bit is\nprobably the culprit.\n\nPlease test the patch on some other language too, Swedish or Finish\nwould be a nice touch.\n\nPatch follows, but it is trivial really.\n---------------------------------------------------------------------------------\n*** regcomp.c\tTue Sep 1 07:31:25 1998\n--- regcomp.c.patched\tWed Feb 10 19:57:11 1999\n***************\n*** 1038,1046 ****\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn tolower(ch);\n \telse if (pg_islower(ch))\n! \t\treturn toupper(ch);\n \telse\n /* peculiar, but could happen */\n \t\treturn ch;\n--- 1038,1046 ----\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn tolower((unsigned char)ch);\n \telse if (pg_islower(ch))\n! \t\treturn toupper((unsigned char)ch);\n \telse\n /* peculiar, but could happen */\n \t\treturn ch;\n***************\n*** 1055,1067 ****\n static void\n bothcases(p, ch)\n struct parse *p;\n! int\t\t\tch;\n {\n \tpg_wchar *oldnext = p->next;\n \tpg_wchar *oldend = p->end;\n \tpg_wchar\tbracket[3];\n \n! \tassert(othercase(ch) != ch);/* p_bracket() would recurse */\n \tp->next = bracket;\n \tp->end = bracket + 2;\n \tbracket[0] = ch;\n--- 1055,1067 ----\n static void\n bothcases(p, ch)\n struct parse *p;\n! int\t\tch;\n {\n \tpg_wchar *oldnext = p->next;\n \tpg_wchar *oldend = p->end;\n \tpg_wchar\tbracket[3];\n \n! \tassert(othercase(ch) != (unsigned char)ch);/* p_bracket() would recurse */\n \tp->next = bracket;\n \tp->end = bracket + 2;\n \tbracket[0] = ch;\n***************\n*** 1084,1090 ****\n {\n \tcat_t\t *cap = p->g->categories;\n \n! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != ch)\n \t\tbothcases(p, ch);\n \telse\n \t{\n--- 1084,1090 ----\n {\n \tcat_t\t *cap = p->g->categories;\n \n! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != (unsigned char)ch)\n \t\tbothcases(p, ch);\n \telse\n \t{\n***************\n*** 1862,1868 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n #else\n! \treturn (isdigit(c));\n #endif\n }\n \n--- 1862,1868 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n #else\n! \treturn (isdigit((unsigned char)c));\n #endif\n }\n \n***************\n*** 1872,1878 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n #else\n! \treturn (isalpha(c));\n #endif\n }\n \n--- 1872,1878 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n #else\n! \treturn (isalpha((unsigned char)c));\n #endif\n }\n \n***************\n*** 1882,1888 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n #else\n! \treturn (isupper(c));\n #endif\n }\n \n--- 1882,1888 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n #else\n! 
\treturn (isupper((unsigned char)c));\n #endif\n }\n \n***************\n*** 1892,1897 ****\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n #else\n! \treturn (islower(c));\n #endif\n }\n--- 1892,1897 ----\n #ifdef MULTIBYTE\n \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n #else\n! \treturn (islower((unsigned char)c));\n #endif\n }\n",
"msg_date": "Wed, 10 Feb 1999 21:15:45 +0200 (EET)",
"msg_from": "Angelos Karageorgiou <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "Hello!\n\n Next time you'll send a patch could you use tools in\n .../src/tools/make_diff\n\n I've applied the patch to 6.4.2 on Debian 2.0 and ran locale test on\nkoi8-r locale. The locale test before the patch passed and test after patch\npassed as well. I didn't note any difference. What difference you expected?\n\n Please supply data for locale test (look into .../src/test/locale). This\nis not related to your patch, we're just collecting test data.\n\nOn Wed, 10 Feb 1999, Angelos Karageorgiou wrote:\n\n> I am using postgres 6.4.2 on BSD/OS 3.1 with a Greek locale that I\n> have developed. I knew that regexes with postgress would not work because\n> of something I did but a posting from another follow from Sweden gave me a\n> clue that the problem must be with the regex package and not the locale.\n> \n> So I investigated the code and found out the pg_isdigit(int ch),\n> pg_isalpha(int ch) and the associated functions do a comparison of\n> characters as ints. I changed a few crucial points with a cast to\n> (unsigned char) and voila , regexs in Greek with full locale support. My\n> guess is that an int != unsigned char when comparing, the sign bit is\n> probably the culprit.\n> \n> Please test the patch on some other language too, Swedish or Finish\n> would be a nice touch.\n> \n> Patch follows, but it is trivial really.\n> ---------------------------------------------------------------------------------\n> *** regcomp.c\tTue Sep 1 07:31:25 1998\n> --- regcomp.c.patched\tWed Feb 10 19:57:11 1999\n> ***************\n> *** 1038,1046 ****\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower(ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper(ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> --- 1038,1046 ----\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower((unsigned char)ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper((unsigned char)ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> ***************\n> *** 1055,1067 ****\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> --- 1055,1067 ----\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != (unsigned char)ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> ***************\n> *** 1084,1090 ****\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> --- 1084,1090 ----\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != (unsigned char)ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> ***************\n> *** 1862,1868 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! \treturn (isdigit(c));\n> #endif\n> }\n> \n> --- 1862,1868 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! 
\treturn (isdigit((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1872,1878 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! \treturn (isalpha(c));\n> #endif\n> }\n> \n> --- 1872,1878 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! \treturn (isalpha((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1882,1888 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper(c));\n> #endif\n> }\n> \n> --- 1882,1888 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1892,1897 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower(c));\n> #endif\n> }\n> --- 1892,1897 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower((unsigned char)c));\n> #endif\n> }\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Thu, 11 Feb 1999 18:22:10 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: your mail"
},
{
"msg_contents": "\nDid we reject this 'unsigned' patch, folks? I seem to remember someone\nobjecting to it.\n\n\n> I am using postgres 6.4.2 on BSD/OS 3.1 with a Greek locale that I\n> have developed. I knew that regexes with postgress would not work because\n> of something I did but a posting from another follow from Sweden gave me a\n> clue that the problem must be with the regex package and not the locale.\n> \n> So I investigated the code and found out the pg_isdigit(int ch),\n> pg_isalpha(int ch) and the associated functions do a comparison of\n> characters as ints. I changed a few crucial points with a cast to\n> (unsigned char) and voila , regexs in Greek with full locale support. My\n> guess is that an int != unsigned char when comparing, the sign bit is\n> probably the culprit.\n> \n> Please test the patch on some other language too, Swedish or Finish\n> would be a nice touch.\n> \n> Patch follows, but it is trivial really.\n> ---------------------------------------------------------------------------------\n> *** regcomp.c\tTue Sep 1 07:31:25 1998\n> --- regcomp.c.patched\tWed Feb 10 19:57:11 1999\n> ***************\n> *** 1038,1046 ****\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower(ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper(ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> --- 1038,1046 ----\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower((unsigned char)ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper((unsigned char)ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> ***************\n> *** 1055,1067 ****\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> --- 1055,1067 ----\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != (unsigned char)ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> ***************\n> *** 1084,1090 ****\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> --- 1084,1090 ----\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != (unsigned char)ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> ***************\n> *** 1862,1868 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! \treturn (isdigit(c));\n> #endif\n> }\n> \n> --- 1862,1868 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! \treturn (isdigit((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1872,1878 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! \treturn (isalpha(c));\n> #endif\n> }\n> \n> --- 1872,1878 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! 
\treturn (isalpha((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1882,1888 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper(c));\n> #endif\n> }\n> \n> --- 1882,1888 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1892,1897 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower(c));\n> #endif\n> }\n> --- 1892,1897 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower((unsigned char)c));\n> #endif\n> }\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Mar 1999 10:03:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: your mail"
},
{
"msg_contents": ">Did we reject this 'unsigned' patch, folks? I seem to remember someone\n>objecting to it.\n[snip]\n>> ***************\n>> *** 1862,1868 ****\n>> #ifdef MULTIBYTE\n>> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n>> #else\n>> ! \treturn (isdigit(c));\n>> #endif\n>> }\n>> \n>> --- 1862,1868 ----\n>> #ifdef MULTIBYTE\n>> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n>> #else\n>> ! \treturn (isdigit((unsigned char)c));\n>> #endif\n>> }\n\nAccording to the ANSI/C standard the argument to isdigit (or some\nother friends) must have the value of either an unsigned char or\n*EOF*. That's why the argument is typed to int, I guess. This patch\nseems to break the rule?\n\nBTW, I would like to propose yet another patches for the problem. This \nseems to work on FreeBSD and Linux. Angelos, can you test it on your\nplatform (is it a BSD/OS?)?\n--\nTatsuo Ishii\n\n*** regcomp.c~\tTue Sep 1 13:31:25 1998\n--- regcomp.c\tThu Mar 11 16:51:28 1999\n***************\n*** 95,101 ****\n \tstatic void p_b_eclass(struct parse * p, cset *cs);\n \tstatic pg_wchar p_b_symbol(struct parse * p);\n \tstatic char p_b_coll_elem(struct parse * p, int endc);\n! \tstatic char othercase(int ch);\n \tstatic void bothcases(struct parse * p, int ch);\n \tstatic void ordinary(struct parse * p, int ch);\n \tstatic void nonnewline(struct parse * p);\n--- 95,101 ----\n \tstatic void p_b_eclass(struct parse * p, cset *cs);\n \tstatic pg_wchar p_b_symbol(struct parse * p);\n \tstatic char p_b_coll_elem(struct parse * p, int endc);\n! \tstatic unsigned char othercase(int ch);\n \tstatic void bothcases(struct parse * p, int ch);\n \tstatic void ordinary(struct parse * p, int ch);\n \tstatic void nonnewline(struct parse * p);\n***************\n*** 1032,1049 ****\n - othercase - return the case counterpart of an alphabetic\n == static char othercase(int ch);\n */\n! static char\t\t\t\t\t\t/* if no counterpart, return ch */\n othercase(ch)\n int\t\t\tch;\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn tolower(ch);\n \telse if (pg_islower(ch))\n! \t\treturn toupper(ch);\n \telse\n /* peculiar, but could happen */\n! \t\treturn ch;\n }\n \n /*\n--- 1032,1049 ----\n - othercase - return the case counterpart of an alphabetic\n == static char othercase(int ch);\n */\n! static unsigned char\t\t/* if no counterpart, return ch */\n othercase(ch)\n int\t\t\tch;\n {\n \tassert(pg_isalpha(ch));\n \tif (pg_isupper(ch))\n! \t\treturn (unsigned char)tolower(ch);\n \telse if (pg_islower(ch))\n! \t\treturn (unsigned char)toupper(ch);\n \telse\n /* peculiar, but could happen */\n! \t\treturn (unsigned char)ch;\n }\n \n /*\n",
"msg_date": "Tue, 16 Mar 1999 11:16:28 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: your mail "
},
{
"msg_contents": "\nI think we decided against this, right?\n\n> >Did we reject this 'unsigned' patch, folks? I seem to remember someone\n> >objecting to it.\n> [snip]\n> >> ***************\n> >> *** 1862,1868 ****\n> >> #ifdef MULTIBYTE\n> >> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> >> #else\n> >> ! \treturn (isdigit(c));\n> >> #endif\n> >> }\n> >> \n> >> --- 1862,1868 ----\n> >> #ifdef MULTIBYTE\n> >> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> >> #else\n> >> ! \treturn (isdigit((unsigned char)c));\n> >> #endif\n> >> }\n> \n> According to the ANSI/C standard the argument to isdigit (or some\n> other friends) must have the value of either an unsigned char or\n> *EOF*. That's why the argument is typed to int, I guess. This patch\n> seems to break the rule?\n> \n> BTW, I would like to propose yet another patches for the problem. This \n> seems to work on FreeBSD and Linux. Angelos, can you test it on your\n> platform (is it a BSD/OS?)?\n> --\n> Tatsuo Ishii\n> \n> *** regcomp.c~\tTue Sep 1 13:31:25 1998\n> --- regcomp.c\tThu Mar 11 16:51:28 1999\n> ***************\n> *** 95,101 ****\n> \tstatic void p_b_eclass(struct parse * p, cset *cs);\n> \tstatic pg_wchar p_b_symbol(struct parse * p);\n> \tstatic char p_b_coll_elem(struct parse * p, int endc);\n> ! \tstatic char othercase(int ch);\n> \tstatic void bothcases(struct parse * p, int ch);\n> \tstatic void ordinary(struct parse * p, int ch);\n> \tstatic void nonnewline(struct parse * p);\n> --- 95,101 ----\n> \tstatic void p_b_eclass(struct parse * p, cset *cs);\n> \tstatic pg_wchar p_b_symbol(struct parse * p);\n> \tstatic char p_b_coll_elem(struct parse * p, int endc);\n> ! \tstatic unsigned char othercase(int ch);\n> \tstatic void bothcases(struct parse * p, int ch);\n> \tstatic void ordinary(struct parse * p, int ch);\n> \tstatic void nonnewline(struct parse * p);\n> ***************\n> *** 1032,1049 ****\n> - othercase - return the case counterpart of an alphabetic\n> == static char othercase(int ch);\n> */\n> ! static char\t\t\t\t\t\t/* if no counterpart, return ch */\n> othercase(ch)\n> int\t\t\tch;\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower(ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper(ch);\n> \telse\n> /* peculiar, but could happen */\n> ! \t\treturn ch;\n> }\n> \n> /*\n> --- 1032,1049 ----\n> - othercase - return the case counterpart of an alphabetic\n> == static char othercase(int ch);\n> */\n> ! static unsigned char\t\t/* if no counterpart, return ch */\n> othercase(ch)\n> int\t\t\tch;\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn (unsigned char)tolower(ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn (unsigned char)toupper(ch);\n> \telse\n> /* peculiar, but could happen */\n> ! \t\treturn (unsigned char)ch;\n> }\n> \n> /*\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 9 May 1999 20:43:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: your mail"
}
] |
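An aside on the C rule at issue in this thread: the <ctype.h> classifiers (isalpha(), isdigit(), tolower(), ...) take an int whose value must be representable as an unsigned char or be EOF, so passing a plain char that has been sign-extended to a negative value (as happens with 8-bit Greek, Swedish, or Finnish letters on platforms where char is signed) is undefined behaviour. The minimal, generic sketch below shows only that safe calling pattern — it is not PostgreSQL's actual regcomp.c code, and the wrapper name safe_isalpha and the ISO 8859-7 test byte are illustrative assumptions.

/* Minimal sketch: calling <ctype.h> classifiers safely with 8-bit data. */
#include <ctype.h>
#include <locale.h>
#include <stdio.h>

/* Convert the character to unsigned char *before* it widens to int,
 * so a high-bit letter arrives as 128..255 rather than as a negative value. */
static int safe_isalpha(char c)
{
    return isalpha((unsigned char) c);
}

int main(void)
{
    setlocale(LC_CTYPE, "");   /* honour the user's 8-bit locale          */
    char ch = (char) 0xE1;     /* Greek small alpha in ISO 8859-7         */
    printf("isalpha: %d\n", safe_isalpha(ch));
    return 0;
}

Both patches in the thread apply this idea in different places: the first casts the arguments handed to the classifiers and to tolower()/toupper(), while Tatsuo's version instead narrows othercase()'s return value to unsigned char.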
[
{
"msg_contents": "What should be the results of this little test:\n\ndrop table t1;\ndrop table t2;\ncreate table t1 (a int, b int);\ncreate table t2 (a int, c int);\ninsert into t1(a, b) VALUES (1, 1); \ninsert into t1(a, b) VALUES (1, 2); \ninsert into t1(a, b) VALUES (1, 3); \ninsert into t1(a, b) VALUES (1, 4); \ninsert into t1(a, b) VALUES (2, 1); \ninsert into t1(a, b) VALUES (2, 2); \ninsert into t1(a, b) VALUES (2, 3); \ninsert into t1(a, b) VALUES (3, 1); \ninsert into t1(a, b) VALUES (3, 2); \nselect * from t1;\ninsert into t2 (a, c) select distinct a, 0 from t1; \nselect * from t2;\nupdate t2 set c = c+b from t1 where t1.a=t2.a;\nselect * from t2;\n\nMy guess would be:\na|c\n-+--\n1|10\n2|6\n3|3\n(3 rows)\nHow does the standard address this?\n",
"msg_date": "Wed, 10 Feb 1999 13:45:53 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible bug on update"
}
] |
[
{
"msg_contents": "Hi,\n\nWhy \"||\" operator is not associative ?\n\nselect 'A' || 'B' || 'C'; results in a parse error at second \"||\".\n\nIf you force the association, it works: select ('A' || 'B') || 'C';\n\nThe same thing happen with mod \"%\".\n\nRicardo Coelho.\n\n",
"msg_date": "Wed, 10 Feb 1999 19:47:29 -0200",
"msg_from": "\"Ricardo J.C.Coelho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multiples concatenation operator (||)"
},
{
"msg_contents": "> Hi,\n> \n> Why \"||\" operator is not associative ?\n> \n> select 'A' || 'B' || 'C'; results in a parse error at second \"||\".\n> \n> If you force the association, it works: select ('A' || 'B') || 'C';\n> \n> The same thing happen with mod \"%\".\n\nIt is a problem with the grammer not understanding associativity with\nnon-standard operators like ||. No good solution for it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 10 Feb 1999 18:29:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Multiples concatenation operator (||)"
}
] |