[ { "msg_contents": "Hi\nI have a postgres compiled with -mb=KOI8, store cyrillic data, get in the\nclient using different encoding (SET ENCODING=WIN1251, KOI8). The question\nis: what should return stored procedure if I want to see it's output in\ndifferent encodings? Win1251, KOI8, UTF-8, something else?\n\nThanks in advance\nAndriy Korud, Lviv, Ukraine.\n\n", "msg_date": "5 Nov 1999 13:24:44 +0200", "msg_from": "\"Andrij Korud\" <[email protected]>", "msg_from_op": true, "msg_subject": "Encoding in UDF's" }, { "msg_contents": "On 5 Nov 1999, Andrij Korud wrote:\n> I have a postgres compiled with -mb=KOI8, store cyrillic data, get in the\n> client using different encoding (SET ENCODING=WIN1251, KOI8). The question\n> is: what should return stored procedure if I want to see it's output in\n> different encodings? Win1251, KOI8, UTF-8, something else?\n\n Never tried it myself, but I'd expect windows-1251...\n\n SET PGCLIENTENCODING=WIN, of course.\n\nOleg.\n---- \n Oleg Broytmann Foundation for Effective Policies [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 5 Nov 1999 14:32:49 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Encoding in UDF's" }, { "msg_contents": "> I have a postgres compiled with -mb=KOI8, store cyrillic data, get in the\n> client using different encoding (SET ENCODING=WIN1251, KOI8). The question\n\t\t\t\t ~~~~~~~~CLIENTENCODING\n> is: what should return stored procedure if I want to see it's output in\n> different encodings? Win1251, KOI8, UTF-8, something else?\n\nI assume you mean \"stored procedure\" as user defined functions.\n\nSince you have created your database with KOI8, any proccessing within\nthe backend is done in KOI8 no matter what client encodings are used.\n\nSo the answer is:\nYour fuction would see the input data in KOI8 and should return data\nin KOI8 too.\n---\nTatsuo Ishii\n", "msg_date": "Sat, 06 Nov 1999 13:47:02 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Encoding in UDF's " } ]
[ { "msg_contents": "Now that the 6.5.3 release is official, I am announcing and releasing RPMs for\nversion 6.5.3. There is only one major enhancement -- the addition of an\nallowed architecture -- armv41, which was not available before. From my web\npage: \n-------------------------------------------------------\nRPM's are now available for version 6.5.3. Changes from 6.5.2-1 to\n6.5.3-1 are:\n\n* Improved README.rpm includes an explanation of the steps in an upgrade,\nincludes instructions on running the regression tests, and other minor\nenhancements \n\n* pgaccess 0.98 is now pulled in from the main tarball instead of a separate\ntarball \n\n* The patchsets were consolidated somewhat -- as 6.5.3 now includes some\nof the patches that were already included with the 6.5.2-1 rpms, fewer patches\nare required. \n\n* The source RPM should now build on StrongARM boxes (such as Corel\nNetwinders). Many thanks to Mark Knox ([email protected]) for the patches\nnecessary for StrongARM. \n\n* Now shipping the odbcinst.ini file that had been inadvertently left out of\nprevious RPM's. This file is now in /usr/lib/pgsql.\n--------------------------------------------------------\nAs always, let me know of any problems. The source RPM and Intel binary \nRPMs prebuilt for RedHat 5.2 and RedHat 6.x are available from\nhttp://www.ramifordistat.net/postgres . The wget tool is a great way to mirror\nthese RPMs.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 PEter 4:11\n", "msg_date": "Fri, 5 Nov 1999 20:57:51 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 6.5.3 STABLE RPMs released." }, { "msg_contents": "\n\tNow that the 6.5.3 release is official, here is the status of the\nLinux/Alpha pataches for PostgreSQL. Since this was only a minor bugfix\nrelease, the Linux/Alpha patches for 6.5.2 apply to the 6.5.3 tarball\ncleanly. The resulting, compiled postgres binaries pass all regression\ntests save for the standard off by one in nth decimal place geometry\ndifference, and a minor sort order difference in rules. Therefore, there\nwill be no 6.5.3 release of the Linux/Alpha patches, just use the 6.5.2\nones, which you can get from my web page. \n\tAs usual, if you hit snags with 6.5.3 and Linux/Alpha, let me\nknow, and I will do what I can to help you out.\n\t\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n", "msg_date": "Sat, 6 Nov 1999 21:15:43 -0600 (CST)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL 6.5.3 Linux/Alpha Update" } ]
[ { "msg_contents": "Hi,\n\nI have a problem updating my local working copy of PostgreSQL with CVS.\n\nI did the following\n\ncd /usr/local/src\nexport CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\ncvs login\ncvs -z3 co -P pgsql\n\nI get the entire tree in directory /usr/local/src/pgsql, which is what I\nwant, although I got the following messages:\n\ncvs checkout: in directory pgsql:\ncvs checkout: cannot open CVS/Entries for reading: No such file or directory\ncvs server: Updating pgsql\nU pgsql/COPYRIGHT\ncvs checkout: cannot open CVS/Entries.Log: No such file or directory\nU pgsql/HISTORY\ncvs checkout: cannot open CVS/Entries.Log: No such file or directory\nU pgsql/INSTALL\ncvs checkout: cannot open CVS/Entries.Log: No such file or directory\nU pgsql/README\ncvs checkout: cannot open CVS/Entries.Log: No such file or directory\nU pgsql/register.txt\ncvs checkout: cannot open CVS/Entries.Log: No such file or directory\ncvs server: Updating pgsql/MIGRATION\ncvs server: Updating pgsql/contrib\nU pgsql/contrib/Makefile\nU pgsql/contrib/README\ncvs server: Updating pgsql/contrib/apache_logging\nU pgsql/contrib/apache_logging/README\nU pgsql/contrib/apache_logging/apachelog.sql\nU pgsql/contrib/apache_logging/httpconf.txt\n...\n\nAfter the starting errors all files are checked out successfully. I have no\n/usr/local/src/pgsql/CVS directory as I have /usr/local/src/pgsql/src/CVS, etc.\n\nIf I now want to update to the current version, I type\n\ncd /usr/local/src/pgsql\ncvs -z3 update -d -P\n\nI get the following messages:\n\ncvs update: cannot open CVS/Entries for reading: No such file or directory\ncvs [update aborted]: no repository\n\nWhich is correct, as the /usr/local/src/pgsql/CVS directory was not created\nwhen I checked out the source in the first place.\n\nOK, fine I said, let's try this:\n\ncd /usr/local/src/pgsql/src\ncvs -z3 update -d -P src\n\nThen, I get:\n\ncvs [server aborted]: absolute pathname `/usr/local/src/pgsql' illegal for\nserver\n\nFor\n\ncvs -z3 update -d -P\n\nit's the same thing.\n\nAnybody knows what I do wrong?\n\nBTW I use CVS 1.10\n\nThanx,\n\nJeroen\n\n", "msg_date": "Sat, 06 Nov 1999 18:13:43 +0100", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with CVS" } ]
[ { "msg_contents": "Greetings. I recently encountered a problem updating an array element on\na temp table. Instead of updating just the specified array element, it\ncopies the array values onto all the tuples. The command works as\nexpected with regular tables.\n\n\ncreate temp table tmpArray (\n id int4,\n val int4[]\n);\nCREATE\ninsert into tmpArray values (1, '{1,2,3,4}');\nINSERT 24630506 1\ninsert into tmpArray values (2, '{4,3,2,1}');\nINSERT 24630507 1\ninsert into tmpArray values (3, '{9,10,11,12}');\nINSERT 24630508 1\nselect * from tmpArray;\nid|val\n--+------------\n 1|{1,2,3,4}\n 2|{4,3,2,1}\n 3|{9,10,11,12}\n(3 rows)\n\nupdate tmpArray set val[3] = 7;\nUPDATE 3\nselect * from tmpArray;\nid|val\n--+---------\n 1|{1,2,7,4}\n 2|{1,2,7,4}\n 3|{1,2,7,4}\n(3 rows)\n\ndrop table tmpArray;\nDROP\nEOF\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * ICQ 352499 * www.munn.com\n\n", "msg_date": "Sat, 6 Nov 1999 15:09:54 -0500 (EST)", "msg_from": "Kristofer Munn <[email protected]>", "msg_from_op": true, "msg_subject": "Arrays broken on temp tables" }, { "msg_contents": "> Greetings. I recently encountered a problem updating an array element on\n> a temp table. Instead of updating just the specified array element, it\n> copies the array values onto all the tuples. The command works as\n> expected with regular tables.\n> \n> \n> create temp table tmpArray (\n> id int4,\n> val int4[]\n> );\n> CREATE\n> insert into tmpArray values (1, '{1,2,3,4}');\n> INSERT 24630506 1\n> insert into tmpArray values (2, '{4,3,2,1}');\n> INSERT 24630507 1\n> insert into tmpArray values (3, '{9,10,11,12}');\n> INSERT 24630508 1\n> select * from tmpArray;\n> id|val\n> --+------------\n> 1|{1,2,3,4}\n> 2|{4,3,2,1}\n> 3|{9,10,11,12}\n> (3 rows)\n> \n> update tmpArray set val[3] = 7;\n> UPDATE 3\n> select * from tmpArray;\n> id|val\n> --+---------\n> 1|{1,2,7,4}\n> 2|{1,2,7,4}\n> 3|{1,2,7,4}\n> (3 rows)\n> \n> drop table tmpArray;\n> DROP\n> EOF\n> \n> - K\n\nBug confirmed. Wow, that is strange. There isn't anything about temp\ntable that would suggest this would happen. I will keep the bug report\nand try to figure it out in the future.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 Nov 1999 15:51:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Bug confirmed. Wow, that is strange. There isn't anything about temp\n> table that would suggest this would happen.\n\nI see it too. explain shows something pretty fishy:\n\nregression=> explain update tmpArray set val[3] = 7;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=43043.00 rows=1000000 width=22)\n -> Seq Scan on pg_temp.2904.0 (cost=43.00 rows=1000 width=12)\n -> Seq Scan on tmparray (cost=43.00 rows=1000 width=10)\n\nEXPLAIN\n\nI'm betting that something in the array code is somehow bypassing the\nnormal table lookup mechanism, and is managing to see the underlying\ntemp-table name that should be hidden from it. 
Will look further...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Nov 1999 16:12:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> I'm betting that something in the array code is somehow bypassing the\n> normal table lookup mechanism, and is managing to see the underlying\n> temp-table name that should be hidden from it. Will look further...\n\nYup, here it is, in parse_target.c:\n\n /*\n * If there are subscripts on the target column, prepare an\n * array assignment expression. This will generate an array value\n * that the source value has been inserted into, which can then\n * be placed in the new tuple constructed by INSERT or UPDATE.\n * Note that transformArraySubscripts takes care of type coercion.\n */\n if (indirection)\n {\n Attr *att = makeNode(Attr);\n Node *arrayBase;\n ArrayRef *aref;\n\n att->relname = pstrdup(RelationGetRelationName(rd)->data);\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nNext question is what to do about it --- the original table name\ndoesn't seem to be conveniently available in this routine. A quick\nsearch for other uses of RelationGetRelationName shows other places\nthat may have related bugs. Possibly, temprel.c needs to provide\na reverse-lookup routine that will give back the user name of a table\nthat might be a temp table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Nov 1999 16:33:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> > I'm betting that something in the array code is somehow bypassing the\n> > normal table lookup mechanism, and is managing to see the underlying\n> > temp-table name that should be hidden from it. Will look further...\n> \n> Yup, here it is, in parse_target.c:\n> \n> /*\n> * If there are subscripts on the target column, prepare an\n> * array assignment expression. This will generate an array value\n> * that the source value has been inserted into, which can then\n> * be placed in the new tuple constructed by INSERT or UPDATE.\n> * Note that transformArraySubscripts takes care of type coercion.\n> */\n> if (indirection)\n> {\n> Attr *att = makeNode(Attr);\n> Node *arrayBase;\n> ArrayRef *aref;\n> \n> att->relname = pstrdup(RelationGetRelationName(rd)->data);\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Next question is what to do about it --- the original table name\n> doesn't seem to be conveniently available in this routine. A quick\n> search for other uses of RelationGetRelationName shows other places\n> that may have related bugs. Possibly, temprel.c needs to provide\n> a reverse-lookup routine that will give back the user name of a table\n> that might be a temp table?\n\nWell, I now wonder whether I did the right thing in adding temp tables\nthe way I did. Is there a better way. The current code maps to\noriginal name to temp name on opens using the relcache. That way, the\noriginal name is passed all through the code. When we print an error\nmessage, we use the user-supplied name, not the temp name.\n\nHowever, if the code reaches directly into the pg_class tuple and pulls\nout the name, it will see the temp name.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 Nov 1999 16:54:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, I now wonder whether I did the right thing in adding temp tables\n> the way I did. Is there a better way.\n\nI don't think there's anything wrong with the basic temp table design.\nWe've just discovered an oversight: given a Relation entry, there's no\nway to get back the original table name, and sometimes you need to.\n\nI'm inclined to think that RelationGetRelationName should be replaced\nby two access macros: one to give back the \"physical\" rel name (same\nas the current macro) and one to give back the \"logical\" name, which'd\nbe different in the case of a temp table. We'd need to extend relcache\nentries to include the logical name as an additional field. Then we'd\nneed to look at all the uses of RelationGetRelationName to see which\nones should be which. There might be some direct accesses to\nrel->rd_rel->relname as well :-( which need to be found and fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Nov 1999 17:33:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Well, I now wonder whether I did the right thing in adding temp tables\n> > the way I did. Is there a better way.\n> \n> I don't think there's anything wrong with the basic temp table design.\n> We've just discovered an oversight: given a Relation entry, there's no\n> way to get back the original table name, and sometimes you need to.\n> \n> I'm inclined to think that RelationGetRelationName should be replaced\n> by two access macros: one to give back the \"physical\" rel name (same\n> as the current macro) and one to give back the \"logical\" name, which'd\n> be different in the case of a temp table. We'd need to extend relcache\n> entries to include the logical name as an additional field. Then we'd\n> need to look at all the uses of RelationGetRelationName to see which\n> ones should be which. There might be some direct accesses to\n> rel->rd_rel->relname as well :-( which need to be found and fixed.\n\nOK, one more comment.\n\nBecause both physical and logical names map to the same oid, in _most_\ncases it doesn't matter if RelationGetRelationName returns the physical\nname.\n\nAny idea why the physical name causes a problem in this area of the\ncode?\n\nAlso, I believe I replaced most cases of rd_rel->relname with\nRelationGetRelationName during one of my cleanups long ago. Seems I had\nnot done this case because I see lots of them. Adding macro call now.\n\nBTW, it is quite easy to add reverse lookup in cache if that will fix\nthings.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 07:58:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I don't think there's anything wrong with the basic temp table design.\n>> We've just discovered an oversight: given a Relation entry, there's no\n>> way to get back the original table name, and sometimes you need to.\n\n> OK, one more comment.\n> Because both physical and logical names map to the same oid, in _most_\n> cases it doesn't matter if RelationGetRelationName returns the physical\n> name.\n> Any idea why the physical name causes a problem in this area of the\n> code?\n\nThe problem is that the rangetable code doesn't realize that the logical\nand physical names refer to the same table, so when the\nsubscript-processing code generates a reference to\n<physicaltablename>.<attribute> the parser generates a second RTE for\nthe physical table name, in addition to the already-existing RTE for the\nlogical table name. This causes the planner to generate a join, because\nit can see no difference between this situation and\n\tFROM tablename, tablename aliasname\nwhich *should* cause a join. But the join causes each tuple to be\nprocessed multiple times, which is the wrong thing for this case.\n\nThere is more than one way we could attack this, but I think the\ncleanest answer will be to make it possible to extract a logical\ntable name from a relcache entry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 21:26:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> The problem is that the rangetable code doesn't realize that the logical\n> and physical names refer to the same table, so when the\n> subscript-processing code generates a reference to\n> <physicaltablename>.<attribute> the parser generates a second RTE for\n> the physical table name, in addition to the already-existing RTE for the\n> logical table name. This causes the planner to generate a join, because\n> it can see no difference between this situation and\n> \tFROM tablename, tablename aliasname\n> which *should* cause a join. But the join causes each tuple to be\n> processed multiple times, which is the wrong thing for this case.\n> \n> There is more than one way we could attack this, but I think the\n> cleanest answer will be to make it possible to extract a logical\n> table name from a relcache entry.\n\nWell, as I remember, the good news is that our code was fine, and the\noriginal poster just missed the WHERE clause on the update. So I guess\nthat gets us off the hook for a while.\n\nHowever, now looking at the posting again:\n\n\thttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-11/msg00213.html\n\nI am confused again.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Nov 1999 21:37:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> There is more than one way we could attack this, but I think the\n>> cleanest answer will be to make it possible to extract a logical\n>> table name from a relcache entry.\n\n> Well, as I remember, the good news is that our code was fine, and the\n> original poster just missed the WHERE clause on the update. So I guess\n> that gets us off the hook for a while.\n> However, now looking at the posting again:\n> \thttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-11/msg00213.html\n> I am confused again.\n\nNo, our code is *not* OK. It's true that the original example was given\nwithout a WHERE clause, whereas a practical UPDATE would usually have a\nWHERE clause; but that has nothing to do with whether the planner will\ngenerate a join or not. If a join is done then the wrong things will\nhappen, WHERE or no WHERE.\n\nThe bottom line here is that we mustn't generate separate RTEs for the\nlogical and physical table names.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 23:29:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> No, our code is *not* OK. It's true that the original example was given\n> without a WHERE clause, whereas a practical UPDATE would usually have a\n> WHERE clause; but that has nothing to do with whether the planner will\n> generate a join or not. If a join is done then the wrong things will\n> happen, WHERE or no WHERE.\n> \n> The bottom line here is that we mustn't generate separate RTEs for the\n> logical and physical table names.\n\nAre you saying a join on a temp table will not work? Can you give an\nexample?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Nov 1999 23:33:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The bottom line here is that we mustn't generate separate RTEs for the\n>> logical and physical table names.\n\n> Are you saying a join on a temp table will not work?\n\nNot at all; I'm saying that it's incorrect to generate a join for a\nsimple UPDATE. What we had was\n\n\tUPDATE table SET arrayfield[sub] = val;\n\nwhich is really implemented as (more or less)\n\n\tUPDATE table SET arrayfield = ARRAYINSERT(arrayfield, sub, val);\n\nwhich works fine as long as you apply the computation and update once\nper tuple in the table (or once per tuple selected by WHERE, if there\nis one). But for a temp table, what really gets emitted from the\nparser is effectively like\n\n\tUPDATE logtable SET arrayfield = arrayinsert(phytable.field,\n\t sub, val)\n\tFROM logtable phytable;\n\nThis is a Cartesian join, meaning that each tuple in\nlogtable-as-destination will be processed in combination with each tuple\nin logtable-as-phytable. The particular case Kristofer reported\nimplements the join as a nested loop with logtable-as-destination as the\ninner side of the join. 
So, each target tuple gets updated once with\nan arrayfield value computed off each available source tuple --- and\nwhen the dust settles, they've all got the value computed from the last\nsource tuple. That's why they're all the same in his bug report.\n\nAdding a WHERE clause limits the damage, but the target tuples will all\nstill get the same value, if I'm visualizing the behavior correctly.\nIt's the wrong thing in any case; the very best you could hope for is \nthat the tuples all manage to get the right values after far more\nprocessing than necessary. There should be no join for a simple UPDATE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 00:04:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> The bottom line here is that we mustn't generate separate RTEs for the\n> >> logical and physical table names.\n> \n> > Are you saying a join on a temp table will not work?\n> \n> Not at all; I'm saying that it's incorrect to generate a join for a\n> simple UPDATE. What we had was\n> \n> \tUPDATE table SET arrayfield[sub] = val;\n> \n> which is really implemented as (more or less)\n> \n> \tUPDATE table SET arrayfield = ARRAYINSERT(arrayfield, sub, val);\n> \n> which works fine as long as you apply the computation and update once\n> per tuple in the table (or once per tuple selected by WHERE, if there\n> is one). But for a temp table, what really gets emitted from the\n> parser is effectively like\n> \n> \tUPDATE logtable SET arrayfield = arrayinsert(phytable.field,\n> \t sub, val)\n> \tFROM logtable phytable;\n> \n> This is a Cartesian join, meaning that each tuple in\n> logtable-as-destination will be processed in combination with each tuple\n> in logtable-as-phytable. The particular case Kristofer reported\n> implements the join as a nested loop with logtable-as-destination as the\n> inner side of the join. So, each target tuple gets updated once with\n> an arrayfield value computed off each available source tuple --- and\n> when the dust settles, they've all got the value computed from the last\n> source tuple. That's why they're all the same in his bug report.\n> \n> Adding a WHERE clause limits the damage, but the target tuples will all\n> still get the same value, if I'm visualizing the behavior correctly.\n> It's the wrong thing in any case; the very best you could hope for is \n> that the tuples all manage to get the right values after far more\n> processing than necessary. There should be no join for a simple UPDATE.\n\nOK, I see it now. They are assigning the relname at this point using\nthe in-tuple relname, which is the physical name, not the logical name.\n\nIf I look at all calls to RelationGetRelationName(), I can see several\nproblem cases where the code it assigning the rel/refname based on the\nin-tuple name.\n\nIdeas? Should i add reverse-lookup code in temprel.c, and make the\nlookups happen for those cases?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 00:28:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> But for a temp table, what really gets emitted from the\n>> parser is effectively like\n>> \n>> UPDATE logtable SET arrayfield = arrayinsert(phytable.field,\n>> sub, val)\n>> FROM logtable phytable;\n>> \n> OK, I see it now. They are assigning the relname at this point using\n> the in-tuple relname, which is the physical name, not the logical name.\n\nRight, the array-element-update code needs to generate a reference\nusing the logical name.\n\n> If I look at all calls to RelationGetRelationName(), I can see several\n> problem cases where the code it assigning the rel/refname based on the\n> in-tuple name.\n\nI suspected as much, but I haven't grovelled through the calls in\ndetail. Some of them probably really do want the physical name,\nwhile others need the logical name.\n\n> Ideas? Should i add reverse-lookup code in temprel.c, and make the\n> lookups happen for those cases?\n\nWe could do it that way, but as the code stands, relcache.c is\nresponsible for the forward lookup (you just pass a rel name to\nheap_openr without worrying if it is a temp rel name or not).\nSo I think relcache.c ought to provide a function or macro to\ngo the other way: produce a logical relname from a Relation pointer.\n\nWhether that's implemented by copying the originally given relname\ninto the relcache entry, or by asking temprel.c each time, is purely\na local optimization inside relcache.c --- it's a straight speed-for-\nspace tradeoff. Before choosing, we should look at the uses of\nRelationGetRelationName() to see if any of them that need to be\nfetching the logical name are in performance-critical paths.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 01:18:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables " }, { "msg_contents": "\nI believe this is fixed was fixed by my RelationGetRelationName and\nRelationGetPhysicalRelationName.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> The bottom line here is that we mustn't generate separate RTEs for the\n> >> logical and physical table names.\n> \n> > Are you saying a join on a temp table will not work?\n> \n> Not at all; I'm saying that it's incorrect to generate a join for a\n> simple UPDATE. What we had was\n> \n> \tUPDATE table SET arrayfield[sub] = val;\n> \n> which is really implemented as (more or less)\n> \n> \tUPDATE table SET arrayfield = ARRAYINSERT(arrayfield, sub, val);\n> \n> which works fine as long as you apply the computation and update once\n> per tuple in the table (or once per tuple selected by WHERE, if there\n> is one). But for a temp table, what really gets emitted from the\n> parser is effectively like\n> \n> \tUPDATE logtable SET arrayfield = arrayinsert(phytable.field,\n> \t sub, val)\n> \tFROM logtable phytable;\n> \n> This is a Cartesian join, meaning that each tuple in\n> logtable-as-destination will be processed in combination with each tuple\n> in logtable-as-phytable. The particular case Kristofer reported\n> implements the join as a nested loop with logtable-as-destination as the\n> inner side of the join. 
So, each target tuple gets updated once with\n> an arrayfield value computed off each available source tuple --- and\n> when the dust settles, they've all got the value computed from the last\n> source tuple. That's why they're all the same in his bug report.\n> \n> Adding a WHERE clause limits the damage, but the target tuples will all\n> still get the same value, if I'm visualizing the behavior correctly.\n> It's the wrong thing in any case; the very best you could hope for is \n> that the tuples all manage to get the right values after far more\n> processing than necessary. There should be no join for a simple UPDATE.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 21:57:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays broken on temp tables" } ]
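The two accessors Bruce names in the final message can be sketched as follows. This is a simplified reconstruction of the idea hashed out above — a physical-name macro plus a logical-name lookup that reverse-maps temp names — not the committed code; in particular the reverse-lookup routine's name and signature are assumptions based on the temprel.c proposal in the thread.

    #include "postgres.h"
    #include "utils/rel.h"      /* Relation */
    #include <string.h>

    /* Physical name: exactly what the pg_class tuple stores,
     * e.g. "pg_temp.2904.0" as seen in the EXPLAIN output above. */
    #define RelationGetPhysicalRelationName(relation) \
        ((relation)->rd_rel->relname.data)

    /* Assumed temprel.c reverse lookup: map a physical temp name back
     * to the user-visible name the table was created under. */
    extern char *get_temp_rel_by_physicalname(const char *relname);

    /* Logical name: what the user typed.  Parser code such as the
     * array-assignment path in parse_target.c must use this form, so
     * the generated column reference matches the existing range table
     * entry instead of spawning a second RTE and a bogus self-join. */
    static char *
    relation_logical_name(Relation relation)
    {
        char *physical = RelationGetPhysicalRelationName(relation);

        if (strncmp(physical, "pg_temp.", 8) == 0)
            return get_temp_rel_by_physicalname(physical);
        return physical;
    }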
[ { "msg_contents": "It's doing exactly what you told it to.\n\n>> update tmpArray set val[3] = 7;\n>> UPDATE 3\n>> select * from tmpArray;\n>> id|val\n>> --+---------\n>> 1|{1,2,7,4}\n>> 2|{1,2,7,4}\n>> 3|{1,2,7,4}\n>> (3 rows)\n\nYou didn't specify which rows to update, so it updates all. Try:\n\nupdate tmpArray set val[3] = 7 where id = 2;\n\nThis should only update one row.\n\nMikeA\n", "msg_date": "Sat, 6 Nov 1999 22:33:48 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Arrays broken on temp tables" } ]
[ { "msg_contents": "Hi,\n\nI have a small problem compiling the new psql code.\n\nThe dependency in the makefile on the sgml files seems to\nbe failing because of the '*'.\n\nIf I remove that dependancy everything is OK.\n\nPerhaps it's my version of make.\n\nbash-2.03$ make --version\nGNU Make version 3.77, by Richard Stallman and Roland McGrath.\nCopyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98\n\nAnyway, here's the section of the make log.\n\nKeith.\n\nmake[2]: Entering directory `/export/home/pgsql/src/bin/psql'\nmake -C ../../interfaces/libpq libpq.a\nmake[3]: Entering directory `/export/home/pgsql/src/interfaces/libpq'\nmake[3]: `libpq.a' is up to date.\nmake[3]: Leaving directory `/export/home/pgsql/src/interfaces/libpq'\ngcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n-Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c command.c -o \ncommand.o\ngcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n-Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c common.c -o \ncommon.o\nmake[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', needed \nby `sql_help.h'. Stop.\nmake[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n\n", "msg_date": "Sat, 6 Nov 1999 20:38:11 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "New psql compile problem." }, { "msg_contents": "> make[2]: Entering directory `/export/home/pgsql/src/bin/psql'\n> make -C ../../interfaces/libpq libpq.a\n> make[3]: Entering directory `/export/home/pgsql/src/interfaces/libpq'\n> make[3]: `libpq.a' is up to date.\n> make[3]: Leaving directory `/export/home/pgsql/src/interfaces/libpq'\n> gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n> -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c command.c -o \n> command.o\n> gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n> -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c common.c -o \n> common.o\n> make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', needed \n> by `sql_help.h'. Stop.\n> make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n\nDo you have sgml files in that directory? You should.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 Nov 1999 15:46:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql compile problem." }, { "msg_contents": "On Sat, 6 Nov 1999, Bruce Momjian wrote:\n\n> > make[2]: Entering directory `/export/home/pgsql/src/bin/psql'\n> > make -C ../../interfaces/libpq libpq.a\n> > make[3]: Entering directory `/export/home/pgsql/src/interfaces/libpq'\n> > make[3]: `libpq.a' is up to date.\n> > make[3]: Leaving directory `/export/home/pgsql/src/interfaces/libpq'\n> > gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n> > -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c command.c -o \n> > command.o\n> > gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n> > -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c common.c -o \n> > common.o\n> > make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', needed \n> > by `sql_help.h'. 
Stop.\n> > make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n> \n> Do you have sgml files in that directory? You should.\n\nThe intend was that the sql_help.h would be prepared before distribution,\nso people that don't have docs or don't have Perl or other weird problems\ndon't get that sort of problem, because after all it *is* a hack. We\nalready do the same with the pre-bisoned parsers.\n\nPerhaps there is still a problem if the docs are installed one second\nlater than the psql subtree. How do you handle that with the parsers? Do\nit the same here.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 12:50:25 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql compile problem." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The intend was that the sql_help.h would be prepared before distribution,\n> so people that don't have docs or don't have Perl or other weird problems\n> don't get that sort of problem, because after all it *is* a hack. We\n> already do the same with the pre-bisoned parsers.\n\nRight. src/tools/release_prep is the script that generates derived\nfiles that need to be valid in the distributed tarball. You should\nsubmit a patch that fixes that script to do \"make sql_help.h\" along\nwith its other duties.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 21:31:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql compile problem. " } ]
[ { "msg_contents": "Sorry, all; ignore the last mail; I was having a stupid attack.\n\n-----Original Message-----\nFrom: Kristofer Munn\nTo: [email protected]\nSent: 99/11/06 10:09\nSubject: [HACKERS] Arrays broken on temp tables\n\nGreetings. I recently encountered a problem updating an array element\non\na temp table. Instead of updating just the specified array element, it\ncopies the array values onto all the tuples. The command works as\nexpected with regular tables.\n\n\ncreate temp table tmpArray (\n id int4,\n val int4[]\n);\nCREATE\ninsert into tmpArray values (1, '{1,2,3,4}');\nINSERT 24630506 1\ninsert into tmpArray values (2, '{4,3,2,1}');\nINSERT 24630507 1\ninsert into tmpArray values (3, '{9,10,11,12}');\nINSERT 24630508 1\nselect * from tmpArray;\nid|val\n--+------------\n 1|{1,2,3,4}\n 2|{4,3,2,1}\n 3|{9,10,11,12}\n(3 rows)\n\nupdate tmpArray set val[3] = 7;\nUPDATE 3\nselect * from tmpArray;\nid|val\n--+---------\n 1|{1,2,7,4}\n 2|{1,2,7,4}\n 3|{1,2,7,4}\n(3 rows)\n\ndrop table tmpArray;\nDROP\nEOF\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * ICQ 352499 *\nwww.munn.com\n\n\n************\n", "msg_date": "Sat, 6 Nov 1999 23:08:00 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Arrays broken on temp tables" } ]
[ { "msg_contents": "\n>From: Bruce Momjian <[email protected]>\n>\n>> make[2]: Entering directory `/export/home/pgsql/src/bin/psql'\n>> make -C ../../interfaces/libpq libpq.a\n>> make[3]: Entering directory `/export/home/pgsql/src/interfaces/libpq'\n>> make[3]: `libpq.a' is up to date.\n>> make[3]: Leaving directory `/export/home/pgsql/src/interfaces/libpq'\n>> gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n>> -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c command.c \n-o \n>> command.o\n>> gcc -I../../interfaces/libpq -I../../include -I../../backend -Wall \n>> -Wmissing-prototypes -g -O2 -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG -c common.c \n-o \n>> common.o\n>> make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', \nneeded \n>> by `sql_help.h'. Stop.\n>> make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n>\n>Do you have sgml files in that directory? You should.\n\nPlenty of them...\n\nIt seems a strange sort of dependancy though, with a '*', sort\nof saying we're dependant on anything that happens to be in the\ndirectory. Not the usual sort of thing you see in makefiles.\n\nKeith.\n\nmtcc:[/export/home/pgsql/src/bin/psql](42)% pwd\n/export/home/pgsql/src/bin/psql\nmtcc:[/export/home/pgsql/src/bin/psql](43)% ls ../../../doc/src/sgml/ref/*.sgml\n../../../doc/src/sgml/ref/abort.sgml\n../../../doc/src/sgml/ref/allfiles.sgml\n../../../doc/src/sgml/ref/alter_table.sgml\n../../../doc/src/sgml/ref/alter_user.sgml\n../../../doc/src/sgml/ref/begin.sgml\n../../../doc/src/sgml/ref/close.sgml\n.\n.\n.\n../../../doc/src/sgml/ref/vacuum.sgml\n../../../doc/src/sgml/ref/vacuumdb.sgml\nmtcc:[/export/home/pgsql/src/bin/psql](44)% \n\n", "msg_date": "Sat, 6 Nov 1999 23:01:06 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New psql compile problem." }, { "msg_contents": "Keith Parks <[email protected]> writes:\n>>> make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', \n>>> needed \n>>> by `sql_help.h'. Stop.\n>>> make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n\n>> Do you have sgml files in that directory? You should.\n\n> Plenty of them...\n>\n> It seems a strange sort of dependancy though, with a '*', sort\n> of saying we're dependant on anything that happens to be in the\n> directory. Not the usual sort of thing you see in makefiles.\n\nBut it's just the right thing in this case, since Peter doesn't want\npsql to be dependent on exactly what set of ref .sgml files there are.\n\nThis makefile coding does depend on wildcard expansion in dependency\nlists, which is a GNU-make ism that probably doesn't get a lot of\ntesting. What version of make are you running?\n\nIt might be worth changing the rule to use explicit wildcard expansion,\n\nsql_help.h: $(wildcard ../../../doc/src/sgml/ref/*.sgml) create_help.pl\n\nin case some versions of make need that extra cue to do the right thing...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Nov 1999 20:09:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql compile problem. " } ]
[ { "msg_contents": "There is a bug fix in the cvs tree to a problem that prevents pgsql from\nhandling single quotes in strings. The solution is described here:\n\n http://www.postgresql.org/mhonarc/pgsql-interfaces/1999-09/msg00154.html\n\nAs a newcomer to PostgreSQL, I lost several hours to this problem - not\ngood since I am trying to decide if my company should switch from\nInterbase to PostgreSQL. I notice that the bug fix didn't make 6.5.3,\ncould it be included in 6.5.4 so we don't have to keep patching our\nsource?\n\nPlease don't flame me too hard if I posted to the wrong group - it seems\nto me that the gurus who specify releases hang out here!\n\nSteve\n\n\n", "msg_date": "Sat, 06 Nov 1999 22:20:07 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql quote bug, put in 6.5.4?" }, { "msg_contents": "> There is a bug fix in the cvs tree to a problem that prevents pgsql from\n> handling single quotes in strings. The solution is described here:\n> \n> http://www.postgresql.org/mhonarc/pgsql-interfaces/1999-09/msg00154.html\n> \n> As a newcomer to PostgreSQL, I lost several hours to this problem - not\n> good since I am trying to decide if my company should switch from\n> Interbase to PostgreSQL. I notice that the bug fix didn't make 6.5.3,\n> could it be included in 6.5.4 so we don't have to keep patching our\n> source?\n> \n> Please don't flame me too hard if I posted to the wrong group - it seems\n> to me that the gurus who specify releases hang out here!\n\nOK, fix applied to 6.5.* tree. The fix was already applied to 7.0 tree.\n\nWe did not patch 6.5.* tree at first because even though it fixed a bug,\nthere was no confirmation from anyone else that that there were no\nadverse side-affects.\n\nOnly very safe patches are put in 6.5.* because little testing is\nperformed on the minor releases.\n\nWith your confirmation, it is safe to put in 6.5.* now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 07:18:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql quote bug, put in 6.5.4?" } ]
[ { "msg_contents": "\n>From: Tom Lane <[email protected]>\n>\n>Keith Parks <[email protected]> writes:\n>>>> make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', \n>>>> needed \n>>>> by `sql_help.h'. Stop.\n>>>> make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n>\n>>> Do you have sgml files in that directory? You should.\n>\n>> Plenty of them...\n>>\n>> directory. Not the usual sort of thing you see in makefiles.\n>\n>\n>This makefile coding does depend on wildcard expansion in dependency\n>lists, which is a GNU-make ism that probably doesn't get a lot of\n>testing. What version of make are you running?\n>\n>It might be worth changing the rule to use explicit wildcard expansion,\n>\n>sql_help.h: $(wildcard ../../../doc/src/sgml/ref/*.sgml) create_help.pl\n>\n>in case some versions of make need that extra cue to do the right thing...\n>\n\nMy make is :-\n\nbash-2.03$ make --version\nGNU Make version 3.77, by Richard Stallman and Roland McGrath.\nCopyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98\n\nso it's fairly recent. ( binary dist from the sunfreeware site. ) \n\nYour suggestion above fixes the problem so it might as well be\nincluded in the official makefile.\n\n*** src/bin/psql/Makefile.in.orig\tSun Nov 7 10:39:26 1999\n--- src/bin/psql/Makefile.in\tSun Nov 7 10:40:30 1999\n***************\n*** 46,52 ****\n help.o: sql_help.h\n \n ifneq ($(strip $(PERL)),) \n! sql_help.h: ../../../doc/src/sgml/ref/*.sgml create_help.pl\n \t$(PERL) create_help.pl sql_help.h \n else\n sql_help.h:\n--- 46,52 ----\n help.o: sql_help.h\n \n ifneq ($(strip $(PERL)),) \n! sql_help.h: $(wildcard ../../../doc/src/sgml/ref/*.sgml) create_help.pl\n \t$(PERL) create_help.pl sql_help.h \n else\n sql_help.h:\n\n\nFurther testing shows that make fails in this case but is OK in others.\n\n\tbash-2.03$ cat Makefile2\n\ttmplist: /tmp/*\n \tls /tmp > tmplist\n\tbash-2.03$ make -f Makefile2\n\tls /tmp > tmplist\n\n\nI've just downloaded the source for GNU make 3.78.1 and built myself.\nThis version makes the above problem case without problems.\n\n\nKeith.\n\nI'm ccing this to [email protected] for info.\n\nSteve, This was the version I had installed.\n\n% pkginfo -l -d ./make-3.77-sol7-sparc-local\n PKGINST: SMCmake\n NAME: make\n CATEGORY: application\n ARCH: sparc\n VERSION: 3.77\n BASEDIR: /usr/local\n VENDOR: Free Software Foundation\n PSTAMP: Steve Christensen\n EMAIL: [email protected]\n STATUS: spooled\n FILES: 52 spooled pathnames\n 6 directories\n 1 executables\n 2 package information files\n 4633 blocks used (approx)\n\n \n\n", "msg_date": "Sun, 7 Nov 1999 12:56:37 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New psql compile problem. " }, { "msg_contents": "\nApplied.\n\n\n> \n> >From: Tom Lane <[email protected]>\n> >\n> >Keith Parks <[email protected]> writes:\n> >>>> make[2]: *** No rule to make target `../../../doc/src/sgml/ref/*.sgml', \n> >>>> needed \n> >>>> by `sql_help.h'. Stop.\n> >>>> make[2]: Leaving directory `/export/home/pgsql/src/bin/psql'\n> >\n> >>> Do you have sgml files in that directory? You should.\n> >\n> >> Plenty of them...\n> >>\n> >> directory. Not the usual sort of thing you see in makefiles.\n> >\n> >\n> >This makefile coding does depend on wildcard expansion in dependency\n> >lists, which is a GNU-make ism that probably doesn't get a lot of\n> >testing. 
What version of make are you running?\n> >\n> >It might be worth changing the rule to use explicit wildcard expansion,\n> >\n> >sql_help.h: $(wildcard ../../../doc/src/sgml/ref/*.sgml) create_help.pl\n> >\n> >in case some versions of make need that extra cue to do the right thing...\n> >\n> \n> My make is :-\n> \n> bash-2.03$ make --version\n> GNU Make version 3.77, by Richard Stallman and Roland McGrath.\n> Copyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98\n> \n> so it's fairly recent. ( binary dist from the sunfreeware site. ) \n> \n> Your suggestion above fixes the problem so it might as well be\n> included in the official makefile.\n> \n> *** src/bin/psql/Makefile.in.orig\tSun Nov 7 10:39:26 1999\n> --- src/bin/psql/Makefile.in\tSun Nov 7 10:40:30 1999\n> ***************\n> *** 46,52 ****\n> help.o: sql_help.h\n> \n> ifneq ($(strip $(PERL)),) \n> ! sql_help.h: ../../../doc/src/sgml/ref/*.sgml create_help.pl\n> \t$(PERL) create_help.pl sql_help.h \n> else\n> sql_help.h:\n> --- 46,52 ----\n> help.o: sql_help.h\n> \n> ifneq ($(strip $(PERL)),) \n> ! sql_help.h: $(wildcard ../../../doc/src/sgml/ref/*.sgml) create_help.pl\n> \t$(PERL) create_help.pl sql_help.h \n> else\n> sql_help.h:\n> \n> \n> Further testing shows that make fails in this case but is OK in others.\n> \n> \tbash-2.03$ cat Makefile2\n> \ttmplist: /tmp/*\n> \tls /tmp > tmplist\n> \tbash-2.03$ make -f Makefile2\n> \tls /tmp > tmplist\n> \n> \n> I've just downloaded the source for GNU make 3.78.1 and built myself.\n> This version makes the above problem case without problems.\n> \n> \n> Keith.\n> \n> I'm ccing this to [email protected] for info.\n> \n> Steve, This was the version I had installed.\n> \n> % pkginfo -l -d ./make-3.77-sol7-sparc-local\n> PKGINST: SMCmake\n> NAME: make\n> CATEGORY: application\n> ARCH: sparc\n> VERSION: 3.77\n> BASEDIR: /usr/local\n> VENDOR: Free Software Foundation\n> PSTAMP: Steve Christensen\n> EMAIL: [email protected]\n> STATUS: spooled\n> FILES: 52 spooled pathnames\n> 6 directories\n> 1 executables\n> 2 package information files\n> 4633 blocks used (approx)\n> \n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 10:56:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql compile problem." } ]
[ { "msg_contents": "I am confused by nameout(). There are a number of places where table\nnames are output using nameout(), and many other cases where they are\njust output without calling nameout. Can someone explain why the dash\nis important? I can see the pstrdup as being important, but not in all\nof the cases where nameout is called.\n\n---------------------------------------------------------------------------\n\n/*\n * nameout - converts internal reprsentation to \"...\"\n */\nchar *\nnameout(NameData *s)\n{ \n if (s == NULL)\n return \"-\";\n else \n return pstrdup(s->data);\n}\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 09:16:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "What is nameout() for?" } ]
[ { "msg_contents": "Hi all,\n\nI was wondering why all the regression tests failed for me so i ran one\nin the interactive mode.\n\n\nmtcc:[/usr/local/pgsql/src/test/regress](73)% /usr/local/pgsql/bin/psql \nregression\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nregression=> \\i sql/boolean.sql \n\nregression=>\n\nI got nothing onscreen and no work was done.\n\nAfter some digging I found that in non interactive mode psql\nstops processing a file as soon as it gets to a blank line.\n\nThis seems to be where it goes wrong. (mainloop.c)\n\n/* No more input. Time to quit, or \\i done */\nif (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n\nWhen a blank line is encountered in the input \n\n\tline = gets_fromFile(source);\n\t\nreturns an empty string ('\\0') and terminates the processing.\n\nwith the if clause reduced to checking for line == NULL psql\ndoes the work but fails badly due to the differences between\nresults and expected. (comments, QUERY:, echo processing)\n\nIs the intention to modify expected to agree with the new\nresults output, or fix psql to output in the expected format?\n\nKeith.\n\n", "msg_date": "Sun, 7 Nov 1999 16:51:57 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "New psql input mode problems" }, { "msg_contents": "> Hi all,\n> \n> I was wondering why all the regression tests failed for me so i ran one\n> in the interactive mode.\n> \n> \n> mtcc:[/usr/local/pgsql/src/test/regress](73)% /usr/local/pgsql/bin/psql \n> regression\n> Welcome to psql, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> regression=> \\i sql/boolean.sql \n> \n> regression=>\n> \n> I got nothing onscreen and no work was done.\n> \n> After some digging I found that in non interactive mode psql\n> stops processing a file as soon as it gets to a blank line.\n> \n> This seems to be where it goes wrong. (mainloop.c)\n> \n> /* No more input. Time to quit, or \\i done */\n> if (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n> \n> When a blank line is encountered in the input \n> \n> \tline = gets_fromFile(source);\n> \t\n> returns an empty string ('\\0') and terminates the processing.\n> \n> with the if clause reduced to checking for line == NULL psql\n> does the work but fails badly due to the differences between\n> results and expected. (comments, QUERY:, echo processing)\n\n> \n> Is the intention to modify expected to agree with the new\n> results output, or fix psql to output in the expected format?\n\nGood question. We need to know if people like the current output\nformat, or the old one better?\n\nLooks like your change in testing just for NULL is correct, and I will\napply a patch.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 12:55:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql input mode problems" }, { "msg_contents": "On Sun, 7 Nov 1999, Bruce Momjian wrote:\n\n> > Hi all,\n> > \n> > I was wondering why all the regression tests failed for me so i ran one\n> > in the interactive mode.\n\nI warned y'all about this. Too late now ;)\n\n> > After some digging I found that in non interactive mode psql\n> > stops processing a file as soon as it gets to a blank line.\n> > \n> > This seems to be where it goes wrong. (mainloop.c)\n> > \n> > /* No more input. Time to quit, or \\i done */\n> > if (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n\nThis line was there in the old source as well.\n\n> > \n> > When a blank line is encountered in the input \n> > \n> > \tline = gets_fromFile(source);\n> > \t\n> > returns an empty string ('\\0') and terminates the processing.\n\nSame in the old one.\n\n> > \n> > with the if clause reduced to checking for line == NULL psql\n> > does the work but fails badly due to the differences between\n> > results and expected. (comments, QUERY:, echo processing)\n\nAs I said, that part was not my idea. I'll look into that though.\n\n> \n> > \n> > Is the intention to modify expected to agree with the new\n> > results output, or fix psql to output in the expected format?\n\nHow about using the old psql for regression testing?\n\n> \n> Good question. We need to know if people like the current output\n> format, or the old one better?\n\nI send in several examples. If no one comments, that's silent approval.\nI am really hesitant to put in a compatibility format, but I might just do\nthat until new regression tests are out and you ask me to.\n\n> \n> Looks like your change in testing just for NULL is correct, and I will\n> apply a patch.\n> \n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 13:13:17 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql input mode problems" } ]
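Keith's reduced test can be modelled in isolation. The toy below mirrors the termination logic loosely (names and structure are simplifications of mainloop.c, not the committed fix): gets_fromFile() yields "" for a blank line and NULL only at true end of input, so the two cases must not be conflated.

    #include <stdbool.h>
    #include <stddef.h>

    static bool
    end_of_input(const char *line, bool interactive)
    {
        (void) interactive;

        /* Buggy form: a blank line ended a non-interactive source too:
         *   return line == NULL || (!interactive && *line == '\0');
         * Keith's reduction: only real end-of-input stops the loop. */
        return line == NULL;
    }

With the reduced test, a blank line inside a \i script is passed through like any other line instead of silently ending the script.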
[ { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> Has anyone noticed a few weirdnesses with psql under 6.5.3? Psql\n> doesn't know how to handle mixed case object names, at least on my machine\n> (RH 6.1). Anyone else have this problem or is it just my machine? I don't\n> recall 6.5.2 having this problem.\n\nThis is news to me.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 12:30:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and 6.5.3" }, { "msg_contents": "\n Has anyone noticed a few weirdnesses with psql under 6.5.3? Psql\ndoesn't know how to handle mixed case object names, at least on my machine\n(RH 6.1). Anyone else have this problem or is it just my machine? I don't\nrecall 6.5.2 having this problem.\n\n\n---\nDamond Walker\n\n\n", "msg_date": "Sun, 7 Nov 1999 10:45:19 -0800", "msg_from": "\"Damond Walker\" <[email protected]>", "msg_from_op": false, "msg_subject": "psql and 6.5.3" } ]
[ { "msg_contents": "\n>From: Bruce Momjian <[email protected]>\n>\n>> Hi all,\n>> \n>> I was wondering why all the regression tests failed for me so i ran one\n>> in the interactive mode.\n>> \n>> \n>> mtcc:[/usr/local/pgsql/src/test/regress](73)% /usr/local/pgsql/bin/psql \n>> regression\n>> Welcome to psql, the PostgreSQL interactive terminal.\n>> \n>> Type: \\copyright for distribution terms\n>> \\h for help with SQL commands\n>> \\? for help on internal slash commands\n>> \\g or terminate with semicolon to execute query\n>> \\q to quit\n>> \n>> regression=> \\i sql/boolean.sql \n>> \n>> regression=>\n>> \n>> I got nothing onscreen and no work was done.\n>> \n>> After some digging I found that in non interactive mode psql\n>> stops processing a file as soon as it gets to a blank line.\n>> \n>> This seems to be where it goes wrong. (mainloop.c)\n>> \n>> /* No more input. Time to quit, or \\i done */\n>> if (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n>> \n>> When a blank line is encountered in the input \n>> \n>> \tline = gets_fromFile(source);\n>> \t\n>> returns an empty string ('\\0') and terminates the processing.\n>> \n>> with the if clause reduced to checking for line == NULL psql\n>> does the work but fails badly due to the differences between\n>> results and expected. (comments, QUERY:, echo processing)\n>\n>> \n>> Is the intention to modify expected to agree with the new\n>> results output, or fix psql to output in the expected format?\n>\n>Good question. We need to know if people like the current output\n>format, or the old one better?\n>\n>Looks like your change in testing just for NULL is correct, and I will\n>apply a patch.\n\nBruce,\n\nI hope Peter can confirm that?\n\nOne concern is that, currently, we cannot run the regression tests\nand are therefore blind to any breakage from patches.\n\nI don't do much for postgresql but I do like to keep an eye out\nfor breakage on my 2 platforms (SPARC Solaris 7 and S/Linux)\n\nKeith.\n\n", "msg_date": "Sun, 7 Nov 1999 18:09:42 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New psql input mode problems" }, { "msg_contents": "> \n> Bruce,\n> \n> I hope Peter can confirm that?\n> \n> One concern is that, currently, we cannot run the regression tests\n> and are therefore blind to any breakage from patches.\n> \n> I don't do much for postgresql but I do like to keep an eye out\n> for breakage on my 2 platforms (SPARC Solaris 7 and S/Linux)\n\nYes, I am stuck too and not able to test my patches. Let's see what\npeople say about the new format.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 14:26:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql input mode problems" } ]
[ { "msg_contents": "Hello,\n\nI was writing a query using intersect and came across a strang error. \nIndependently, the two queries work fine but fail to compile when\nintersected. My first instinct was to rewrite the query with an\nin clause, and that too failed in even a stranger way. I've stripped\ndown the queries to the most basic case of failure. I'm running 6.5.3\non a RedHat 6.0 PII. I've included a little snippet of code to reproduce \nthe problem. I'm expecting to hear that you can't have aggregates in \nIN clauses until the rewrite engine gets fixed -- discussed in previous\nposts. I'm more hopefull that the intersection problem will be easy to\nsolve.\n\n/* create test tables and test data */\ncreate table test1 (id int);\ncreate table test2 (id int, fk int);\ninsert into test1 values (1);\ninsert into test1 values (2);\ninsert into test2 values (1,100);\ninsert into test2 values (1,102);\ninsert into test2 values (2,100);\ninsert into test2 values (3,101);\n\n/* QUERY 1: this query works */\nselect id from test1;\n\n/* QUERY 2: this query works */\nselect id from test2 group by id having count(fk) = 2;\n\n/* QUERY 3: intersected, the queries fail with:\n * ERROR: SELECT/HAVING requires aggregates to be valid \n * NOTE: reversing the order of the intersection works */\nselect id from test1 \n\tintersect \nselect id from test2 group by id having count(fk) = 2;\n\n\n/* QUERY 4: using \"QUERY 2\" as an in clause you get a more confusing error:\n * ERROR: rewrite: aggregate column of view must be at rigth side in qual */\nselect id from test1 where id in\n (select id from test2 group by id having count(fk) = 2);\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Sun, 7 Nov 1999 12:54:37 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "IN clause and INTERSECT not behaving as expected" }, { "msg_contents": "Okay, I've looked into this a little more and found that the rewrite \nengine converts UNION INTERSECT and EXCEPT queries to semantiacally \nequivalent queries that use IN and NOT IN subselects. \nSee: backend/rewrite/rewriteHandler.c, line 2821.\n\nSo, my hope that the intersection problem will be easier to solve than\nthe sub select problem is incorrect. I'm still confused by the error\nmessage about \"views\" with the IN clause. I'll look into that some more.\n\nOn Sun, Nov 07, 1999 at 12:54:37PM -0600, Brian Hirt wrote:\n> Hello,\n> \n> I was writing a query using intersect and came across a strang error. \n> Independently, the two queries work fine but fail to compile when\n> intersected. My first instinct was to rewrite the query with an\n> in clause, and that too failed in even a stranger way. I've stripped\n> down the queries to the most basic case of failure. I'm running 6.5.3\n> on a RedHat 6.0 PII. I've included a little snippet of code to reproduce \n> the problem. I'm expecting to hear that you can't have aggregates in \n> IN clauses until the rewrite engine gets fixed -- discussed in previous\n> posts. 
I'm more hopefull that the intersection problem will be easy to\n> solve.\n> \n> /* create test tables and test data */\n> create table test1 (id int);\n> create table test2 (id int, fk int);\n> insert into test1 values (1);\n> insert into test1 values (2);\n> insert into test2 values (1,100);\n> insert into test2 values (1,102);\n> insert into test2 values (2,100);\n> insert into test2 values (3,101);\n> \n> /* QUERY 1: this query works */\n> select id from test1;\n> \n> /* QUERY 2: this query works */\n> select id from test2 group by id having count(fk) = 2;\n> \n> /* QUERY 3: intersected, the queries fail with:\n> * ERROR: SELECT/HAVING requires aggregates to be valid \n> * NOTE: reversing the order of the intersection works */\n> select id from test1 \n> \tintersect \n> select id from test2 group by id having count(fk) = 2;\n> \n> \n> /* QUERY 4: using \"QUERY 2\" as an in clause you get a more confusing error:\n> * ERROR: rewrite: aggregate column of view must be at rigth side in qual */\n> select id from test1 where id in\n> (select id from test2 group by id having count(fk) = 2);\n> \n> -- \n> The world's most ambitious and comprehensive PC game database project.\n> \n> http://www.mobygames.com\n> \n> ************\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Sun, 7 Nov 1999 13:39:13 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] IN clause and INTERSECT not behaving as expected" }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> /* QUERY 1: this query works */\n> select id from test1;\n\n> /* QUERY 2: this query works */\n> select id from test2 group by id having count(fk) = 2;\n\n> /* QUERY 3: intersected, the queries fail with:\n> * ERROR: SELECT/HAVING requires aggregates to be valid \n> * NOTE: reversing the order of the intersection works */\n> select id from test1 \n> \tintersect \n> select id from test2 group by id having count(fk) = 2;\n\n> /* QUERY 4: using \"QUERY 2\" as an in clause you get a more confusing error:\n> * ERROR: rewrite: aggregate column of view must be at rigth side in qual */\n> select id from test1 where id in\n> (select id from test2 group by id having count(fk) = 2);\n\nThese are both bugs, I think. I committed rewriter fixes that take care\nof query 4 (the rewriter mistakenly thought that having count(*) inside\nWHERE was a bad thing even if the aggregate function was inside a\nsubselect). I am not seeing any failure from query 3 either in current\nsources, though I am not sure if that was the same bug or a different one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 21:56:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IN clause and INTERSECT not behaving as expected " }, { "msg_contents": "\nCan anyone comment on this?\n\n> Hello,\n> \n> I was writing a query using intersect and came across a strang error. \n> Independently, the two queries work fine but fail to compile when\n> intersected. My first instinct was to rewrite the query with an\n> in clause, and that too failed in even a stranger way. I've stripped\n> down the queries to the most basic case of failure. I'm running 6.5.3\n> on a RedHat 6.0 PII. I've included a little snippet of code to reproduce \n> the problem. I'm expecting to hear that you can't have aggregates in \n> IN clauses until the rewrite engine gets fixed -- discussed in previous\n> posts. 
I'm more hopefull that the intersection problem will be easy to\n> solve.\n> \n> /* create test tables and test data */\n> create table test1 (id int);\n> create table test2 (id int, fk int);\n> insert into test1 values (1);\n> insert into test1 values (2);\n> insert into test2 values (1,100);\n> insert into test2 values (1,102);\n> insert into test2 values (2,100);\n> insert into test2 values (3,101);\n> \n> /* QUERY 1: this query works */\n> select id from test1;\n> \n> /* QUERY 2: this query works */\n> select id from test2 group by id having count(fk) = 2;\n> \n> /* QUERY 3: intersected, the queries fail with:\n> * ERROR: SELECT/HAVING requires aggregates to be valid \n> * NOTE: reversing the order of the intersection works */\n> select id from test1 \n> \tintersect \n> select id from test2 group by id having count(fk) = 2;\n> \n> \n> /* QUERY 4: using \"QUERY 2\" as an in clause you get a more confusing error:\n> * ERROR: rewrite: aggregate column of view must be at rigth side in qual */\n> select id from test1 where id in\n> (select id from test2 group by id having count(fk) = 2);\n> \n> -- \n> The world's most ambitious and comprehensive PC game database project.\n> \n> http://www.mobygames.com\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 21:40:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IN clause and INTERSECT not behaving as expected" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can anyone comment on this?\n\nThe given cases seem to work in current sources...\n\n\t\t\tregards, tom lane\n\n>> /* create test tables and test data */\n>> create table test1 (id int);\n>> create table test2 (id int, fk int);\n>> insert into test1 values (1);\n>> insert into test1 values (2);\n>> insert into test2 values (1,100);\n>> insert into test2 values (1,102);\n>> insert into test2 values (2,100);\n>> insert into test2 values (3,101);\n>> \n>> /* QUERY 1: this query works */\n>> select id from test1;\n>> \n>> /* QUERY 2: this query works */\n>> select id from test2 group by id having count(fk) = 2;\n>> \n>> /* QUERY 3: intersected, the queries fail with:\n>> * ERROR: SELECT/HAVING requires aggregates to be valid \n>> * NOTE: reversing the order of the intersection works */\n>> select id from test1 \n>> intersect \n>> select id from test2 group by id having count(fk) = 2;\n>> \n>> \n>> /* QUERY 4: using \"QUERY 2\" as an in clause you get a more confusing error:\n>> * ERROR: rewrite: aggregate column of view must be at rigth side in qual */\n>> select id from test1 where id in\n>> (select id from test2 group by id having count(fk) = 2);\n", "msg_date": "Mon, 29 Nov 1999 23:16:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IN clause and INTERSECT not behaving as expected " } ]
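Until running a build with Tom's rewriter fixes, one possible 6.5.x workaround is to materialize the aggregate result first, so the outer query never sees a HAVING clause at all. A sketch using the test tables above (the temp-table name is invented here, and if SELECT ... INTO TEMP is not available on a given build, an ordinary table works the same way):

```sql
-- Materialize the HAVING result so neither IN nor INTERSECT
-- has to push an aggregate through the rewriter.
SELECT id INTO TEMP matched
FROM test2 GROUP BY id HAVING count(fk) = 2;

SELECT id FROM test1 WHERE id IN (SELECT id FROM matched);

DROP TABLE matched;
```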
[ { "msg_contents": "--- Bruce Momjian <[email protected]> wrote:\n> I am confused by nameout(). There are a number of places where table\n> names are output using nameout(), and many other cases where they are\n> just output without calling nameout. Can someone explain why the dash\n> is important? I can see the pstrdup as being important, but not in all\n> of the cases where nameout is called.\n> \n>\n---------------------------------------------------------------------------\n> \n> /*\n> * nameout - converts internal reprsentation to \"...\"\n> */\n> char *\n> nameout(NameData *s)\n> { \n> if (s == NULL)\n> return \"-\";\n> else \n> return pstrdup(s->data);\n> }\n> \n\nActually, I have 'C' question regarding the above code. Where does the\n\"-\" live in RAM? Does the compiler generated a data hunk such that this\nstring will be apart of the final executable and each invocation of this\nroutine would result in a pointer to that 'global' location being\nreturned? \nOr does it allocate the memory for, and initialize, the \"-\" on the stack? \nIf so, isn't returning a \"-\" a dangerous act?\n\nIn fact, isn't returning a \"-\" dangerous either way without the \nprotoype being:\n\nconst char *nameout(NameData *s);\n^^^^^\n\nSorry to drift off topice, but I was just curious,\n\nMike Mascari\n([email protected])\n\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Sun, 7 Nov 1999 12:38:34 -0800 (PST)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] What is nameout() for?" }, { "msg_contents": "At 12:38 PM 11/7/99 -0800, Mike Mascari wrote:\n\n>Actually, I have 'C' question regarding the above code. Where does the\n>\"-\" live in RAM? Does the compiler generated a data hunk such that this\n>string will be apart of the final executable and each invocation of this\n>routine would result in a pointer to that 'global' location being\n>returned? \n\nYes.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 07 Nov 1999 12:47:51 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What is nameout() for?" }, { "msg_contents": "> Actually, I have 'C' question regarding the above code. Where does the\n> \"-\" live in RAM? Does the compiler generated a data hunk such that this\n> string will be apart of the final executable and each invocation of this\n> routine would result in a pointer to that 'global' location being\n> returned? \n> Or does it allocate the memory for, and initialize, the \"-\" on the stack? \n> If so, isn't returning a \"-\" a dangerous act?\n> \n> In fact, isn't returning a \"-\" dangerous either way without the \n> protoype being:\n\nOne copy, usually in the text segment because it is ready-only.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 15:48:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What is nameout() for?" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> Actually, I have 'C' question regarding the above code. Where does the\n> \"-\" live in RAM? 
Does the compiler generated a data hunk such that this\n> string will be apart of the final executable and each invocation of this\n> routine would result in a pointer to that 'global' location being\n> returned? \n> Or does it allocate the memory for, and initialize, the \"-\" on the stack? \n> If so, isn't returning a \"-\" a dangerous act?\n\nAs Bruce already explained, the existing code returns a pointer to a\nconstant string \"-\" sitting somewhere in the program's text segment\n(or data segment, possibly, depending on your compiler). So it's OK\nin the sense that the pointer still points at well-defined memory\neven after the function returns. But I believe the code is bogus\nanyway, because one path returns palloc'd storage and the other\ndoesn't. If the caller pfree'd the returned pointer, it'd work\njust until nameout was given a NULL pointer; then it'd coredump.\n\n> In fact, isn't returning a \"-\" dangerous either way without the \n> protoype being:\n\n> const char *nameout(NameData *s);\n> ^^^^^\n\nThat's a different issue: if the caller tries to *modify* the returned\nstring, should the compiler complain? If the caller tries that, and\nthe compiler doesn't complain, and the compiler puts the constant string\n\"-\" into data segment, then you've got trouble: that supposedly constant\nstring will get changed and will no longer look like \"-\" on its next\nuse. (Shades of Fortran II :-(.) But I'm not very worried about that\nin practice, because most of the developers use gcc which puts constant\nstring in text segment. Any attempt to modify a constant string will\ninstantly coredump under gcc, so the logic error will be found and fixed\nbefore long.\n\nThe trouble with declaring nameout and similar functions to return\nconst char * is that C (and C++) don't distinguish \"thou shalt not\nmodify\" from \"thou shalt not free\". Ideally we'd like to declare\nnameout as returning a string that the caller can't modify, but can\nfree when no longer needed. We can't do that unfortunately...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 22:13:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What is nameout() for? " } ]
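To make Tom's storage-contract point concrete: one way to make both branches behave the same, so that callers may always pfree() the result, would be the following. This is a sketch of the idea, not necessarily how the tree was actually fixed:

```c
/*
 * nameout - converts internal representation to "..."
 *
 * Both branches return palloc'd storage, so a caller that pfree()s
 * the result no longer coredumps when the input happened to be NULL.
 */
char *
nameout(NameData *s)
{
    if (s == NULL)
        return pstrdup("-");    /* copy the constant instead of returning it */
    return pstrdup(s->data);
}
```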
[ { "msg_contents": "Here's the situation...\n\nI had a few tables left over from a small 6.5.2 database. By '\\d'ing I can\nsee a table named Employee in the list. If I try to select from the table I\nget the following...\n\n\\d returns...\n\n | damond | Employee | table |\n\nselect * from employee returns....\n\nERROR: employee: Table does not exist.\n\n---\nDamond Walker\n\n\n", "msg_date": "Sun, 7 Nov 1999 13:55:26 -0800", "msg_from": "\"Damond Walker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and 6.5.3" }, { "msg_contents": "> I had a few tables left over from a small 6.5.2 database.\n> | damond | Employee | table |\n> select * from employee returns....\n> ERROR: employee: Table does not exist.\n\nGotta use\n\n select * from \"Employee\";\n\nand it has always been this way (since we implemented mixed-case\ncapabilities anyway...).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 07 Nov 1999 23:37:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and 6.5.3" } ]
[ { "msg_contents": "I have overhauled the use of RelationGetRelationName() to it is used\nmore often.\n\nI have added a new macro NameStr to get a char * from Name. No more:\n\n\tvar.data\n\t&var\n\tvar->data\n\nto access Name as a character string.\n\nAlso, several calls to nameout() were removed and changed to\npstrdup(NameStr(var)). Clearer what is going on.\n\nCan't run regression tests because of psql changes, but at least initdb\nworked, and I can query my database.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Nov 1999 18:03:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New NameStr() macro, RelationGetRelationName fixes" }, { "msg_contents": "On 1999-11-07, Bruce Momjian mentioned:\n\n> I have overhauled the use of RelationGetRelationName() to it is used\n> more often.\n> \n> I have added a new macro NameStr to get a char * from Name. No more:\n> \n> \tvar.data\n> \t&var\n> \tvar->data\n> \n> to access Name as a character string.\n> \n> Also, several calls to nameout() were removed and changed to\n> pstrdup(NameStr(var)). Clearer what is going on.\n> \n> Can't run regression tests because of psql changes, but at least initdb\n> worked, and I can query my database.\n\nUse the old one.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 22:22:58 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New NameStr() macro, RelationGetRelationName fixes" } ]
[ { "msg_contents": "There are new border styles.\n\nI think I prefer \\pset border 1 as the default. What do other people\nthink? People have been slow to comment on the new psql features.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 01:06:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "new Psql \\pset border" }, { "msg_contents": "On Mon, 8 Nov 1999, Bruce Momjian wrote:\n\n> There are new border styles.\n> \n> I think I prefer \\pset border 1 as the default. What do other people\n> think? People have been slow to comment on the new psql features.\n\nLast time I checked this was the default. It was supposed to look as\nsimilar as possible to what is in there now. (But of course much prettier\nIMHO ;)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 13:02:35 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new Psql \\pset border" }, { "msg_contents": "> On Mon, 8 Nov 1999, Bruce Momjian wrote:\n> \n> > There are new border styles.\n> > \n> > I think I prefer \\pset border 1 as the default. What do other people\n> > think? People have been slow to comment on the new psql features.\n> \n> Last time I checked this was the default. It was supposed to look as\n> similar as possible to what is in there now. (But of course much prettier\n> IMHO ;)\n\nIt is not the default now, I think. Looks like border 0 is default. \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 10:56:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] new Psql \\pset border" }, { "msg_contents": "> > On Mon, 8 Nov 1999, Bruce Momjian wrote:\n> > \n> > > There are new border styles.\n> > > \n> > > I think I prefer \\pset border 1 as the default. What do other people\n> > > think? People have been slow to comment on the new psql features.\n> > \n> > Last time I checked this was the default. It was supposed to look as\n> > similar as possible to what is in there now. (But of course much prettier\n> > IMHO ;)\n> \n> It is not the default now, I think. Looks like border 0 is default. \n\nI take that back. Looks like border 1 is the default. I must have\ngotten confused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 15:41:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] new Psql \\pset border" }, { "msg_contents": "> On Mon, 8 Nov 1999, Bruce Momjian wrote:\n> \n> > There are new border styles.\n> > \n> > I think I prefer \\pset border 1 as the default. What do other people\n> > think? People have been slow to comment on the new psql features.\n> \n> Last time I checked this was the default. It was supposed to look as\n> similar as possible to what is in there now. 
(But of course much prettier\n> IMHO ;)\n\nYes, I see 1 as the default now. Not sure how I got so confused.\n\nI like the up-arrow history from previous sessions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 15:41:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] new Psql \\pset border" } ]
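For anyone who has not built the new psql yet, the styles come out roughly like this. The session below is sketched from the descriptions in these threads, so exact spacing and messages may differ:

```
peter=> \pset border 1
Border style is 1.
peter=> SELECT 1 AS one;
 one
-----
   1
(1 row)

peter=> \pset border 2
Border style is 2.
peter=> SELECT 1 AS one;
+-----+
| one |
+-----+
|   1 |
+-----+
(1 row)
```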
[ { "msg_contents": "Just need to know what sql statement to use to list the tables in a database. I\nneed to do this from C.\n", "msg_date": "Mon, 8 Nov 1999 00:29:50 -0800", "msg_from": "\"Matt M.\" <[email protected]>", "msg_from_op": true, "msg_subject": "listing tables" } ]
[ { "msg_contents": "... at least on this machine.\n\n$ make -C backend\nmake: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend'\nmake -C access all \nmake[1]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access'\n<snip>\nmake[1]: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access'\nmake -C bootstrap all \nmake[1]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/bootstrap'\nmake[1]: Nothing to be done for `all'.\nmake[1]: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/bootstrap'\nmake -C catalog all \nmake[1]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/catalog'\nmake[1]: Nothing to be done for `all'.\nmake[1]: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/catalog'\nmake -C commands all \nmake[1]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/commands'\nmake -C .. parse.h\nmake[2]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend'\nfor i in access bootstrap catalog commands executor lib libpq main parser nodes optimizer port postmaster regex rewrite storage tcop utils; do make -C $i parser/parse.h; done\nmake[3]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access'\nmake -C common parser/parse.h\nmake[4]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access/common'\nmake[4]: *** No rule to make target `parser/parse.h'. Stop.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nmake[4]: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access/common'\nmake[3]: *** [parser/parse.h] Error 2\nmake[3]: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access'\n... recursive death ...\nmake: *** [commands.dir] Error 2\nmake: Leaving directory `/home/fenix0/eh99/e99re41/pgsql/src/backend'\n\nI vaguely recall that this file might be intended to be built by bison.\nThe potentially relevant lines from Makefile.global are:\nYFLAGS= -y -d\nYACC= /usr/sup/gnu/bin/bison\n\n`uname -a`\nSunOS Krokodil 5.5.1 Generic_103640-23 sun4m sparc SUNW,SPARCstation-4\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 15:37:02 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Backend build fails in current" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ...\n> make -C common parser/parse.h\n> make[4]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access/common'\n> make[4]: *** No rule to make target `parser/parse.h'. Stop.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAre you still seeing this? I didn't see it with a pull from CVS\nyesterday. If you are, what version of make are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 02:53:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Backend build fails in current " }, { "msg_contents": "On Sat, 13 Nov 1999, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > ...\n> > make -C common parser/parse.h\n> > make[4]: Entering directory `/home/fenix0/eh99/e99re41/pgsql/src/backend/access/common'\n> > make[4]: *** No rule to make target `parser/parse.h'. Stop.\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Are you still seeing this? I didn't see it with a pull from CVS\n> yesterday. If you are, what version of make are you using?\n\nAffirmative. 
Same problem.\n\nGNU Make version 3.74, by Richard Stallman and Roland McGrath.\n\nThat's a little old it seems, but I don't have any power to upgrade it on\nthis particular machine. It should certainly be possible to fix the make\nfiles, since requiring GNU make is already a hassle for some, but\nrequiring the latest version might be too much to ask for?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 13 Nov 1999 14:36:31 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Backend build fails in current " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Are you still seeing this? I didn't see it with a pull from CVS\n>> yesterday. If you are, what version of make are you using?\n\n> Affirmative. Same problem.\n\n> GNU Make version 3.74, by Richard Stallman and Roland McGrath.\n\n> That's a little old it seems,\n\nIt is. I'd suggest leaning on your sysadmin to get it updated to\nsomething current (3.78.1 is current I think).\n\nIn the meantime, please try the attached patch. If it seems to\nstraighten out the behavior on your make, I'll commit it.\n\n\t\t\tregards, tom lane\n\n*** src/backend/Makefile.orig\tSun Mar 7 18:05:56 1999\n--- src/backend/Makefile\tSat Nov 13 09:43:17 1999\n***************\n*** 116,127 ****\n # make files in our subdirectories.\n \n parse.h: parser/parse.h\n- \t$(MAKE) -C parser parse.h\n \tcp parser/parse.h .\n \n! fmgr.h:\n! \t$(MAKE) -C utils fmgr.h\n \tcp utils/fmgr.h .\n \n #############################################################################\n clean:\n--- 116,131 ----\n # make files in our subdirectories.\n \n parse.h: parser/parse.h\n \tcp parser/parse.h .\n \n! parser/parse.h:\n! \t$(MAKE) -C parser parse.h\n! \n! fmgr.h: utils/fmgr.h\n \tcp utils/fmgr.h .\n+ \n+ utils/fmgr.h:\n+ \t$(MAKE) -C utils fmgr.h\n \n #############################################################################\n clean:\n", "msg_date": "Sat, 13 Nov 1999 09:50:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Backend build fails in current " } ]
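In case the shape of the fix is hard to see through the diff markers: the idea is to give each generated header an explicit rule of its own, so an older GNU make that descends into a subdirectory is never asked to build `parser/parse.h` without a recipe for it. Distilled from the patch above (same file names; the real Makefile has more context around it):

```make
# Dependents just name the generated header; the header has its own rule.
parse.h: parser/parse.h
	cp parser/parse.h .

parser/parse.h:
	$(MAKE) -C parser parse.h
```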
[ { "msg_contents": "Peter Eisentraut <[email protected]>\n>On Sun, 7 Nov 1999, Bruce Momjian wrote:\n>\n>> > This seems to be where it goes wrong. (mainloop.c)\n>> > \n>> > /* No more input. Time to quit, or \\i done */\n>> > if (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n>\n>This line was there in the old source as well.\n\nJust checked and you're right..\n\n>\n>> > \n>> > When a blank line is encountered in the input \n>> > \n>> > \tline = gets_fromFile(source);\n>> > \t\n>> > returns an empty string ('\\0') and terminates the processing.\n>\n>Same in the old one.\n\nI think somehow the old version was subtly different.\n\nThe old version of psql read and processed the whole of\neach regression sql file. The new version stops processing\nas soon as it sees a blank line.\n\nI'm not sure if there's something different with the '\\n'\nhandling in the new version. If the '\\n' is being stripped\nout that could make all the difference between \"\\n\", which\nwould not terminate and \"\" which would.\n\n>\n>> > \n>> > with the if clause reduced to checking for line == NULL psql\n>> > does the work but fails badly due to the differences between\n>> > results and expected. (comments, QUERY:, echo processing)\n>\n>As I said, that part was not my idea. I'll look into that though.\n\nThis is a difficult one, I never was any good at decisions.\n\nThere are a number of key differences.\n\nQuery text OLD echoed prefixed with \"QUERY:\" NEW echoed as read.\nFormatting OLD slightly different to NEW. (alignment and '-'s)\n\n\nOLD:\n\nQUERY: SELECT 1 AS one;\none\n---\n 1\n(1 row)\n\n\nNEW:\n\n\nSELECT 1 AS one;\n one\n-----\n 1\n(1 row)\n\n\n>\n>> \n>> > \n>> > Is the intention to modify expected to agree with the new\n>> > results output, or fix psql to output in the expected format?\n>\n>How about using the old psql for regression testing?\n\nI like the new one better :-)\n\n>\n>> \n>> Good question. We need to know if people like the current output\n>> format, or the old one better?\n>\n>I send in several examples. If no one comments, that's silent approval.\n>I am really hesitant to put in a compatibility format, but I might just do\n>that until new regression tests are out and you ask me to.\n>\n\nIf it was not too difficult this would be one way to go.\n\nWe really need the regression output maintainer (Tom?) to comment\nhere. I'd hate to have to do the 1st compare between the new output\nand old expected by eye.\n\n\nKeith.\n\n", "msg_date": "Mon, 8 Nov 1999 18:01:45 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New psql input mode problems" }, { "msg_contents": "On 1999-11-08, Keith Parks mentioned:\n\n> Peter Eisentraut <[email protected]>\n> >On Sun, 7 Nov 1999, Bruce Momjian wrote:\n> >\n> >> > This seems to be where it goes wrong. (mainloop.c)\n> >> > \n> >> > /* No more input. Time to quit, or \\i done */\n> >> > if (line == NULL || (!pset->cur_cmd_interactive && *line == '\\0'))\n> >\n> >This line was there in the old source as well.\n> \n> Just checked and you're right..\n> \n> >\n> >> > \n> >> > When a blank line is encountered in the input \n> >> > \n> >> > \tline = gets_fromFile(source);\n> >> > \t\n> >> > returns an empty string ('\\0') and terminates the processing.\n> >\n> >Same in the old one.\n> \n> I think somehow the old version was subtly different.\n> \n> The old version of psql read and processed the whole of\n> each regression sql file. 
The new version stops processing\n> as soon as it sees a blank line.\n> \n> I'm not sure if there's something different with the '\\n'\n> handling in the new version. If the '\\n' is being stripped\n> out that could make all the difference between \"\\n\", which\n> would not terminate and \"\" which would.\n\nWe have a winner!\n\nThe new gets_fromFile strips the trailing newline, so an empty line in a\nfile really comes in as an empty line. I don't see any reason why the\ncheck was done as it was in the first place, so the correct line in\nmainloop.c should be:\n\n /* No more input. Time to quit, or \\i done */\n if (line == NULL)\n { \n\nas suggested.\n\n> \n> >\n> >> > \n> >> > with the if clause reduced to checking for line == NULL psql\n> >> > does the work but fails badly due to the differences between\n> >> > results and expected. (comments, QUERY:, echo processing)\n> >\n> >As I said, that part was not my idea. I'll look into that though.\n> \n> This is a difficult one, I never was any good at decisions.\n> \n> There are a number of key differences.\n> \n> Query text OLD echoed prefixed with \"QUERY:\" NEW echoed as read.\n\nThis behaviour is plagiarized from command shells.\n\n> Formatting OLD slightly different to NEW. (alignment and '-'s)\n> \n> \n> OLD:\n> \n> QUERY: SELECT 1 AS one;\n> one\n> ---\n> 1\n> (1 row)\n> \n> \n> NEW:\n> \n> \n> SELECT 1 AS one;\n> one\n> -----\n> 1\n> (1 row)\n> \n> \n> >\n> >> \n> >> > \n> >> > Is the intention to modify expected to agree with the new\n> >> > results output, or fix psql to output in the expected format?\n> >\n> >How about using the old psql for regression testing?\n> \n> I like the new one better :-)\n\nLife sucks ;)\n\n> \n> >\n> >> \n> >> Good question. We need to know if people like the current output\n> >> format, or the old one better?\n> >\n> >I send in several examples. If no one comments, that's silent approval.\n> >I am really hesitant to put in a compatibility format, but I might just do\n> >that until new regression tests are out and you ask me to.\n> >\n> \n> If it was not too difficult this would be one way to go.\n> \n> We really need the regression output maintainer (Tom?) to comment\n> here. I'd hate to have to do the 1st compare between the new output\n> and old expected by eye.\n\nHere's an idea: Why don't the regression tests use a single-user postgres\nbackend? That way you have even more control over internals, you can run\nit on an uninstalled database, and you don't rely on the particularities\nof the output of some obscure front-end.\n\nBut I'm not going to tailor psql around the regression tests. That's the\nwrong direction. Just run it once with the old one and then with the new\none and put those results in the current tree as a temporary solution.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 22:08:37 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql input mode problems" }, { "msg_contents": "> We have a winner!\n> \n> The new gets_fromFile strips the trailing newline, so an empty line in a\n> file really comes in as an empty line. I don't see any reason why the\n> check was done as it was in the first place, so the correct line in\n> mainloop.c should be:\n> \n> /* No more input. Time to quit, or \\i done */\n> if (line == NULL)\n> { \n> \n> as suggested.\n\nGood. 
Already done.\n\n> Here's an idea: Why don't the regression tests use a single-user postgres\n> backend? That way you have even more control over internals, you can run\n> it on an uninstalled database, and you don't rely on the particularities\n> of the output of some obscure front-end.\n\nIt is generally unsafe because they don't share locking with other\nbackends, and some table are shared between databases.\n\n\n> \n> But I'm not going to tailor psql around the regression tests. That's the\n> wrong direction. Just run it once with the old one and then with the new\n> one and put those results in the current tree as a temporary solution.\n\nYes, I am sure that will be done soon.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 16:22:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New psql input mode problems" } ]
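A self-contained demonstration of the effect Keith and Peter pin down here, for anyone who wants to see it outside of psql. Plain stdio stand-ins replace the psql routines, so this is an illustration rather than the actual code:

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char    buf[1024];
    int     interactive = 0;            /* \i processing is non-interactive */

    while (fgets(buf, sizeof(buf), stdin) != NULL)
    {
        buf[strcspn(buf, "\n")] = '\0'; /* the new gets_fromFile strips this */

        /* the old test: a stripped blank line now looks like end of input */
        if (!interactive && buf[0] == '\0')
        {
            printf("old test would stop here, mid-file\n");
            continue;                   /* the fix: keep reading instead */
        }
        printf("processing: %s\n", buf);
    }
    return 0;                           /* NULL from fgets() is the real EOF */
}
```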
[ { "msg_contents": "\n\n> > CREATE USER sql command updates the file, but an UPDATE on pg_shadow\n> > does not.\n> \n> How about INSERT INTO pg_shadow? Or how do you judge the \n> following excerpt\n> from the createuser script:\n> \n> QUERY=\"insert into pg_shadow \\\n> (usename, usesysid, usecreatedb, usetrace, usesuper, \n> usecatupd) \\\n> values \\\n> ('$NEWUSER', $SYSID, '$CANCREATE', 'f', '$CANADDUSER','f')\"\n> \n> Fortunately (perhaps), I am getting rid of this as we're \n> speaking. The one\n> feature the createuser script has over the CREATE USER \"SQL\" \n> command is\n> that you can pick your system ID. Ignoring the question whether or not\n> this has any real purpose, it seems this is almost like \n> rolling dice since\n\nThe sysid is essential for one of the authentication methods available in\nPostgreSQL\n(was it ident, I forgot) where the unix system password was used.\n\nAndreas\n", "msg_date": "Mon, 8 Nov 1999 23:00:29 +0100 ", "msg_from": "Zeugswetter Andreas SEV <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: [GENERAL] users in Postgresql" }, { "msg_contents": "On Mon, 8 Nov 1999, Zeugswetter Andreas SEV wrote:\n\n> > speaking. The one\n> > feature the createuser script has over the CREATE USER \"SQL\" \n> > command is\n> > that you can pick your system ID. Ignoring the question whether or not\n> > this has any real purpose, it seems this is almost like \n> > rolling dice since\n> \n> The sysid is essential for one of the authentication methods available in\n> PostgreSQL\n> (was it ident, I forgot) where the unix system password was used.\n\nCan't be ident, since I am running it with differing user ids. Perhaps\nsome odd usage of password authentication, but I don't use that too much.\n\nHence, can anyone comment on this?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 9 Nov 1999 10:18:27 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> The sysid is essential for one of the authentication methods available in\n>> PostgreSQL\n>> (was it ident, I forgot) where the unix system password was used.\n\n> Can't be ident, since I am running it with differing user ids. Perhaps\n> some odd usage of password authentication, but I don't use that too much.\n> Hence, can anyone comment on this?\n\nAFAIK it's not *essential* to make Postgres and Unix UIDs the same\n... but I think it is convenient to do so from an admin standpoint.\n(One less set of numbers to keep track of, and one fewer way to get\nconfused about who is who.) I would not like to see you remove a\nfeature that makes it easy to do that.\n\nOf course there's no value in it if you are running a setup in which\nnot all the Postgres users have Unix-system accounts. But that doesn't\nmean there is no value in it for installations where there is such a\ncorrespondence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 21:40:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql " }, { "msg_contents": "On 1999-11-10, Tom Lane mentioned:\n\n> AFAIK it's not *essential* to make Postgres and Unix UIDs the same\n> ... 
but I think it is convenient to do so from an admin standpoint.\n> (One less set of numbers to keep track of, and one fewer way to get\n> confused about who is who.) I would not like to see you remove a\n> feature that makes it easy to do that.\n> \n> Of course there's no value in it if you are running a setup in which\n> not all the Postgres users have Unix-system accounts. But that doesn't\n> mean there is no value in it for installations where there is such a\n> correspondence.\n\nExcuse my ignorance once again, but\n\n1) Why bother about those sysids at all? To the end user/administrator they\nhave about the same informational value as the oid of the float4 type. As\nlong as you always write \"float4\" or \"username\" you don't have to bother.\n\n2) The mere fact of mentioning or even prompting for these ids confuses users.\n\n3) If you really \"keep track\" of user ids (Unix or PostgreSQL) you really don't\nhave enough users or a really superior brain.\n\n4) The purpose of the wrapper scripts was to provide \"wrappers\" around the\nvarious SQL commands. If you do something in the scripts that you can't do in\n\"SQL\" then we'll never stop having these confused users that at one point\nalmost caused us to remove the scripts altogether.\n\n5) How exactly are you supposed to set the uid? The behaviour of\nINSERT INTO pg_shadow VALUES (...);\nand\nUPDATE pg_shadow SET usesysid = ...;\nis non-deterministic at best, unfortunately. The proper fix (ignoring the first\n4 points above) would be to provide an extension to CREATE/ALTER USER, and\n*then* we can extend the scripts that way.\n\nI seems to me that the scripts were written before there even was a CREATE USER\ncommand and then the functionality was just carried over without much\ncontemplation.\n\nWell, okay, everyone that wants to set their PostgreSQL user id\nexplicitly, send me a note and I'll put it back in, which ever way.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 11 Nov 1999 22:33:08 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 5) How exactly are you supposed to set the uid? The behaviour of\n> INSERT INTO pg_shadow VALUES (...);\n> and\n> UPDATE pg_shadow SET usesysid = ...;\n> is non-deterministic at best, unfortunately.\n\nINSERT seems to have worked fine in the old version of createuser...\nbut I agree CREATE/ALTER USER ought to have the same functionality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 16:36:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql " }, { "msg_contents": "> Well, okay, everyone that wants to set their PostgreSQL user id\n> explicitly, send me a note and I'll put it back in, which ever way.\n\nI thought that Tom Lane was representing me just fine, so was keeping\nquiet ;)\n\nAn aside on procedures: on a change like this, I might have expected a\ndiscussion on functionality *before* the patch was developed, since it\nchanges a seemingly fundamental feature. 
Though I haven't thought of a\nstrong, or even weak, argument for why id matching is necessary, it is\na topic about which there has been no discussion in the past, so I\ndidn't realize I needed an opinion until now.\n\nAnother aside: I'd like to think that most good ideas which stand the\ntest of an extended discussion will get a consensus to form. So if you\nreally think this is a step forward then keep talking about it; don't\ngive up too soon...\n\nBack on topic: If there is currently no apparent need for a link\nbetween Postgres user ids and external system ids, it is the case that\nthis is an obvious mechanism to make that link. So if someday a user\nor a system feature needs it, it is already there and has been so from\nday 1. afaik other DBs have a similar attribute for users.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 12 Nov 1999 07:39:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql" }, { "msg_contents": "On Fri, 12 Nov 1999, Thomas Lockhart wrote:\n\n> > Well, okay, everyone that wants to set their PostgreSQL user id\n> > explicitly, send me a note and I'll put it back in, which ever way.\n> \n> I thought that Tom Lane was representing me just fine, so was keeping\n> quiet ;)\n\nOkay, vote noted.\n\n> An aside on procedures: on a change like this, I might have expected a\n> discussion on functionality *before* the patch was developed, since it\n> changes a seemingly fundamental feature. Though I haven't thought of a\n> strong, or even weak, argument for why id matching is necessary, it is\n> a topic about which there has been no discussion in the past, so I\n> didn't realize I needed an opinion until now.\n\nIt seems to me that especially in the code I'm digging around now, there\nare a lot of way old things lying around (think Postgres95). When I ask if\nsomeone actually uses them I usually get responses like \"No, we can't yank\nit, someone might be using it\", which doesn't answer my question at all.\n\nThus I found it more effective to threaten removal first and then see if\nsomeone speaks up.\n\n> Another aside: I'd like to think that most good ideas which stand the\n> test of an extended discussion will get a consensus to form. So if you\n> really think this is a step forward then keep talking about it; don't\n> give up too soon...\n\nI just remember the heart-breaking discussion about the pg_ prefix ;)\n\nWell, I outlined my points: 1) It confuses users, 2) It doesn't match the\nSQL, 3) user IDs are internal representations that you should not be able\nto mess with with user-level tools, 4) If you can pick it, you should also\nbe able to change it later. But you can't, really.\n\n> Back on topic: If there is currently no apparent need for a link\n> between Postgres user ids and external system ids, it is the case that\n> this is an obvious mechanism to make that link. So if someday a user\n> or a system feature needs it, it is already there and has been so from\n> day 1. afaik other DBs have a similar attribute for users.\n\nThis is based on the premise that it would somehow be useful to link Unix\nand PostgreSQL users. 
In that case this would certainly be needed.\n\nHowever, this would be a significant step backwards, since database users\nare in general not equal to system users, most importantly since clients\nmight run on completely different systems than the server.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 12 Nov 1999 12:16:23 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [GENERAL] users in Postgresql" } ]
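For reference, the CREATE/ALTER USER extension Peter alludes to in point 5 might look like this. The syntax is hypothetical; nothing of the sort is implemented as of this thread, and the INSERT-into-pg_shadow form quoted above is the only working mechanism:

```sql
-- Hypothetical: carry the sysid in the user commands themselves,
-- instead of INSERTing into pg_shadow directly.
CREATE USER fred WITH SYSID 501 PASSWORD 'secret';
ALTER USER fred WITH SYSID 502;
```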
[ { "msg_contents": ">From FAQ_DEV:\n\n\"pgindent will format source files to match our standard format, which\n has four-space tabs, and an indenting format specified by flags to the\n your operating system's utility indent.\"\n\nThen why are all files indented with eight spaces? I personally like the\nfour spaces, straight bsd style in Emacs and -orig in indent. But at least\nit should be consistent.\n\nAlso, how can I prevent this from happening:\n\nvoid\nprint_copyright(void)\n{\n puts(\n \"\n PostgreSQL Data Base Management System\n \n Copyright(c) 1996 - 9 PostgreSQL Global Development Group\n \ninstead of\n\nvoid \nprint_copyright(void) \n{ \n puts( \n\" \nPostgreSQL Data Base Management System\n\nCopyright(c) 1996 - 9 PostgreSQL Global Development Group\n\n?\nLooks really ugly in the output.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 Nov 1999 23:55:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Indent" }, { "msg_contents": "> >From FAQ_DEV:\n> \n> \"pgindent will format source files to match our standard format, which\n> has four-space tabs, and an indenting format specified by flags to the\n> your operating system's utility indent.\"\n> \n> Then why are all files indented with eight spaces? I personally like the\n> four spaces, straight bsd style in Emacs and -orig in indent. But at least\n> it should be consistent.\n\nMy guess is that you have tabs set to four spaces in your editor. \nChange it to 4-space tabs and you will be fine.\n\n> \n> Also, how can I prevent this from happening:\n> \n> void\n> print_copyright(void)\n> {\n> puts(\n> \"\n> PostgreSQL Data Base Management System\n> \n> Copyright(c) 1996 - 9 PostgreSQL Global Development Group\n> \n> instead of\n> \n> void \n> print_copyright(void) \n> { \n> puts( \n> \" \n> PostgreSQL Data Base Management System\n> \n> Copyright(c) 1996 - 9 PostgreSQL Global Development Group\n> \n> ?\n> Looks really ugly in the output.\n\nYes, it certainly does. I changed it the quotes to \" \" newline \" \",\nand committed the cleanup. Ran it through pgindent and it looks fine\nnow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Nov 1999 20:19:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> \"pgindent will format source files to match our standard format, which\n> has four-space tabs, and an indenting format specified by flags to the\n> your operating system's utility indent.\"\n> Then why are all files indented with eight spaces?\n\nThe FAQ isn't clear on this at all. pg_indent *assumes* that all tabs\nwill be four spaces. 
One must set vi or emacs to tab every four spaces\nfor things to look right.\n\nApparently we never send source code directly to a printer ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 03:09:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> > \"pgindent will format source files to match our standard format, which\n> > has four-space tabs, and an indenting format specified by flags to the\n> > your operating system's utility indent.\"\n> > Then why are all files indented with eight spaces?\n> \n> The FAQ isn't clear on this at all. pg_indent *assumes* that all tabs\n> will be four spaces. One must set vi or emacs to tab every four spaces\n> for things to look right.\n\nDEV FAQ says:\n\n<I>pgindent</I> will format source files to match our standard format,\nwhich has four-space tabs, \n\n> \n> Apparently we never send source code directly to a printer ;)\n\nI run it through vgrind or use crisp to print with color syntax\nhighlighting. However, I do convert the tabs to 4-character spaces\nbefore printing. It is a pain, but indenting/unindenting without\ntab=indent level is a pain, and 8-space tabs are too large.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Nov 1999 22:55:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> > > \"pgindent will format source files to match our standard format, which\n> > > has four-space tabs, and an indenting format specified by flags to the\n> > > your operating system's utility indent.\"\n> > > Then why are all files indented with eight spaces?\n> > The FAQ isn't clear on this at all. pgindent *assumes* that all tabs\n> > will be four spaces. 
One must set vi or emacs to tab every four spaces\n> > for things to look right.\n> DEV FAQ says:\n> <I>pgindent</I> will format source files to match our standard format,\n> which has four-space tabs,\n\nRight, I saw this the first time ;)\n\nMy point is that this statement is ambiguous, particularly for those\nwho didn't grow up in Pennsylvania or some other all-English locale.\n\"four-space tabs\" could imply that all tabs are filled with four\nspaces, but in fact pgindent replaces every four spaces with a tab.\n\nI haven't stumbled across any mention of setting text editors or\nprinting, which if it was mentioned might reduce the possibility for\nconfusion.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 04:20:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> My point is that this statement is ambiguous, particularly for those\n> who didn't grow up in Pennsylvania or some other all-English locale.\n> \"four-space tabs\" could imply that all tabs are filled with four\n> spaces, but in fact pgindent replaces every four spaces with a tab.\n> \n> I haven't stumbled across any mention of setting text editors or\n> printing, which if it was mentioned might reduce the possibility for\n> confusion.\n\nNew DEV FAQ reads:\n\nOur standard format is to indent each code level with one tab, where\neach tab is four spaces. You will need to set your editor to display\ntabs as four spaces. <I>pgindent</I> will the format code by specifying\nflags to your operating system's utility <I>indent.</I><P> \n<I>pgindent</I> is run on all source files just before each beta test\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Nov 1999 23:58:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "Thus spake Thomas Lockhart\n> The FAQ isn't clear on this at all. pg_indent *assumes* that all tabs\n> will be four spaces. One must set vi or emacs to tab every four spaces\n> for things to look right.\n> \n> Apparently we never send source code directly to a printer ;)\n\nOr else we pipe it through \"pr -e4\" first.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 10 Nov 1999 05:59:26 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "Hi Peter,\n\tThis puzzled me for some time too, I am guessing that you are using\nemacs to edit the code, and you are being tripped up by emacs use of 8\nspaces for a tab symbol. You can change the emacs tab width by\nconfiguring the emacs variable tab-width. You can do this on the\nbuffer that you are editing by using C-h v tab-width to pull up a buffer\nthat allows you to customize the variable, or you can use a hook to\nc-mode to set it whenever you are editing a c-file. 
I have the following\nlines in my .emacs file\n\n;;; Set tab-width in c-mode to 4 spaces\n(add-hook 'c-mode-hook\n (function (lambda () (set tab-width 4))))\n\n\nI think that there are some other small differences between emacs c-mode\nformatting and the indent formatting that PostgreSQL uses, but the\n8-space\ntab is the only one that I have fixed. \n\nBernie\n", "msg_date": "Wed, 10 Nov 1999 19:31:38 +0000", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "UNSUBSCRIBE ME FROM THIS LIST!!!!!!!!!!!!!\n\n", "msg_date": "Wed, 10 Nov 1999 19:15:09 -0500", "msg_from": "\"Frederick Cheeseborough\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "Bernard Frankpitt <[email protected]> writes:\n> I have the following lines in my .emacs file\n> ;;; Set tab-width in c-mode to 4 spaces\n> (add-hook 'c-mode-hook\n> (function (lambda () (set tab-width 4))))\n> I think that there are some other small differences between emacs c-mode\n> formatting and the indent formatting that PostgreSQL uses, but the\n> 8-space tab is the only one that I have fixed.\n\nI think I've mentioned this before, but I have the following function\nfor adapting emacs to the Postgres code-formatting conventions:\n\n; Cmd to set tab stops &etc for working with PostgreSQL code\n(defun pgsql-mode ()\n \"Set PostgreSQL C indenting conventions in current buffer.\"\n (interactive)\n (c-mode)\t\t\t\t; ensure we're in electric-C mode\n (setq tab-width 4)\n (c-set-style \"bsd\")\n (c-set-offset 'case-label '+)\n)\n\nCurrently I invoke this command by hand when looking at a Postgres file.\nI've been meaning to set up a load-time hook to invoke it automatically\nupon visiting a .c or .h file within my ~postgres directory tree, but\nhaven't got round to that yet.\n\nAs far as I've noticed, the only significant shortcoming of this mode\ndefinition is that it doesn't know that \"foreach\" should be treated as a\nblock-beginning keyword. This is also fixable with a little elisp\nhacking, but that hasn't got to the top of the to-do list either. For\nnow I just remember to manually unindent the \"{\" right below \"foreach\",\nand then it carries on correctly for the rest of the block.\n\nObFlameBait: Personally I think we should switch over to standard\n8-column tabs, but Bruce is apparently still using some medieval editor\nin which physical tab widths dictate logical indent levels :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 1999 21:03:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent " }, { "msg_contents": "> ObFlameBait: Personally I think we should switch over to standard\n> 8-column tabs, but Bruce is apparently still using some medieval editor\n> in which physical tab widths dictate logical indent levels :-(\n\nVadim is also in agreement on this. Not many editors can handle\ncases where tab size is different from indent size. Emacs obviously\ncan, and Tom has enjoyed pointing out. :-)\n\nI am willing to re-open the discussion if we people would prefer\nsomething else.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Nov 1999 21:30:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "On Wed, 10 Nov 1999, Bruce Momjian wrote:\n\n> > ObFlameBait: Personally I think we should switch over to standard\n> > 8-column tabs, but Bruce is apparently still using some medieval editor\n> > in which physical tab widths dictate logical indent levels :-(\n> \n> Vadim is also in agreement on this. Not many editors can handle\n> cases where tab size is different from indent size. Emacs obviously\n> can, and Tom has enjoyed pointing out. :-)\n> \n> I am willing to re-open the discussion if we people would prefer\n> something else.\n\nTrying to redefine a tab to be 4 spaces is asking for trouble. How about\nmaking pgindent replace 8 spaces with a tab and 4 spaces with, well, 4\nspaces. This is how emacs handles bsd indent style if you don't change the\ntab sizes. And all other editors should be fine with this.\n\nPersonally, I'm always in favour of using no tabs at all, because of this\nvery problem, and just because they annoy me. But I tend to be alone with\nthat position.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 11 Nov 1999 16:37:47 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> ;;; Set tab-width in c-mode to 4 spaces\n> (add-hook 'c-mode-hook\n> (function (lambda () (set tab-width 4))))\n\nTypo:\n\nset -> setq (from Tom Lane's code)\n\nI'm now a happy emacs camper :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 11 Nov 1999 15:56:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "Peter Eisentraut wrote:\n> Personally, I'm always in favour of using no tabs at all, because of this\n> very problem, and just because they annoy me. But I tend to be alone with\n> that position.\n\nNo you are not :). Twenty odd years of software development have taught me\nto remove the tab key from my keyboard. Two spaces do the trick for me and\ntry to avoid at all costs lines over 80 cols. (I use emacs in vi[per] mode),\neven on win98... Oh how I wish we were back in the days of TECO.\n--------\nRegards\nTheo\n", "msg_date": "Thu, 11 Nov 1999 21:53:31 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "At 09:53 PM 11/11/99 +0200, Theo Kramer wrote:\n>Peter Eisentraut wrote:\n>> Personally, I'm always in favour of using no tabs at all, because of this\n>> very problem, and just because they annoy me. But I tend to be alone with\n>> that position.\n>\n>No you are not :). Twenty odd years of software development have taught me\n>to remove the tab key from my keyboard. Two spaces do the trick for me and\n>try to avoid at all costs lines over 80 cols. (I use emacs in vi[per] mode),\n>even on win98... Oh how I wish we were back in the days of TECO.\n\nTECO! If you were to dig up a copy of the original manual for\nDec's OS/8 TECO, you'd see a reference to me on the first page. Not\nby name, but by organization, for having done the first version.\n\nSheesh, those where old days.\n\nAnyway, I too avoid tabs. 
Life's too short to figure out how to\nmake all the tools I use tab the way I want them to.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 11 Nov 1999 15:16:15 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" } ]
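To make the convention under discussion concrete, here is a small invented C fragment (not taken from the Postgres tree) laid out per the settings described above: BSD brace style, one tab per indent level with tabs displayed four columns wide, and case labels indented one level (Tom Lane's c-set-offset 'case-label '+ setting). Tabs are rendered as spaces below so the shape survives any editor:

/*
 * Illustrative only -- a hypothetical function showing the layout the
 * pgindent conventions produce.  Each indent level is one tab,
 * displayed here as four columns.
 */
static int
classify(int x)
{
    int     result;

    switch (x)
    {
        case 0:
            result = -1;
            break;
        default:
            result = 2 * x;
            break;
    }
    return result;
}

Viewed with 8-column tabs instead, the same code simply appears twice as deeply indented, which is the "eight spaces" confusion that opened this thread.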
[ { "msg_contents": "I agree with you Bruce, as replacing the current protocol with IIOP\nwould IMHO be a bad idea, and possibly not be implementable on some\nplatforms.\n\nPS: Theres a Corba/Java example in the source. Works only for Java2 (as\nit has a limited orb).\n\nPeter\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\t09 November 1999 02:10\n> To:\[email protected]\n> Cc:\[email protected]; [email protected]\n> Subject:\tRe: [HACKERS] CORBA STATUS\n> \n> > Is there room for me to work on this project in such a way that it\n> is\n> > adequate for my masters. If anyone is working on this, or has a good\n> > knowledge of the current status of the CORBA implementation for\n> PGsql can\n> > you please let me know, so I can know whether to get started on this\n> or not.\n> > The reference thread for my initial point of contact with Marc G.\n> Fournier\n> > and Bruce Momjian and how they think I should attack the project is\n> -\n> >\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1999-09/msg00076.html\n> \n> I know of no one working on it. There were two ideas as I remember. \n> One was to replace our existing client/server communication with\n> corba,\n> and the second was to have a corba server that accepted corba requests\n> and sent them to a PostgreSQL server. I prefer this second approach\n> as\n> I think CORBA may be too much overhead for people who don't need it. \n> Our current client/server communication is quite efficient.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n> ************\n", "msg_date": "Tue, 9 Nov 1999 08:40:04 -0000 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] CORBA STATUS" } ]
[ { "msg_contents": "> Anyway, I know that at least one ORB, TAO, runs on many more types of\n> platforms than Postgres does (e.g. VxWorks, Lynx, Solaris, NT, ...),\n> though Postgres runs on more Unix-style platforms. But that particular\n> ORB is not a good candidate for us, for reasons I already mentioned\n> (C++, large build size, poor configure support).\n\nOnly a small note (I don't know details): there is an implementation of\nCORBA for the Gnome desktop called ORBit (small, in C, fast), please see\nhttp://www.labs.redhat.com/orbit/\n\n\t\t\tDan\n", "msg_date": "Tue, 9 Nov 1999 16:19:01 +0100 ", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] CORBA STATUS" } ]
[ { "msg_contents": "IMHO, I think this would be better for the short term.\n\nPeter\n\n> -----Original Message-----\n> From:\tDmitry Samersoff [SMTP:[email protected]]\n> Sent:\t09 November 1999 16:56\n> To:\tThomas Lockhart\n> Cc:\[email protected]; [email protected];\n> Brian E Gallew; The Hermit Hacker\n> Subject:\tRe: [HACKERS] CORBA STATUS\n\t[Peter Mount] [snip]\n> \n> May be it is better make directory CORBA under interfaces subtree\n> and time-to-time put objects for differend ORB's inside, \n> into separate directory.\n> \n> Probably, It's better to make separate configure for \n> some parts of postgres distributions to allow users to build/upgrade\n> parts of postgres i.e psql or perl interface \n> \n> \n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n> ************\n", "msg_date": "Tue, 9 Nov 1999 21:30:26 -0000 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] CORBA STATUS" } ]
[ { "msg_contents": "--- Peter Eisentraut <[email protected]> wrote:\n> From FAQ_DEV:\n> \n> \"pgindent will format source files to match our standard format, which\n> has four-space tabs, and an indenting format specified by flags to the\n> your operating system's utility indent.\"\n> \n> Then why are all files indented with eight spaces? I personally like the\n> four spaces, straight bsd style in Emacs and -orig in indent. But at\n> least\n> it should be consistent.\n> \n> Also, how can I prevent this from happening:\n> \n> void\n> print_copyright(void)\n> {\n> puts(\n> \"\n> PostgreSQL Data Base Management System\n> \n> Copyright(c) 1996 - 9 PostgreSQL Global Development\n> Group\n> \n> instead of\n> \n> void \n> \n> print_copyright(void) \n> \n> { \n> \n> puts( \n> \n> \" \n> PostgreSQL Data Base Management System\n> \n> Copyright(c) 1996 - 9 PostgreSQL Global Development Group\n> \n> ?\n> Looks really ugly in the output.\n\nAmen. I hold myself to several rules when writing code, one\nof which is that no single line exceed 80 characters in \nlength, which is of rare occurence in the backend code.\n\nMike Mascari\n([email protected])\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Tue, 9 Nov 1999 17:24:19 -0800 (PST)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> Amen. I hold myself to several rules when writing code, one\n> of which is that no single line exceed 80 characters in \n> length, which is of rare occurence in the backend code.\n\nWith 4-space tabs, the code is pretty good. \n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Nov 1999 20:55:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" }, { "msg_contents": "> \n> > Amen. I hold myself to several rules when writing code, one\n> > of which is that no single line exceed 80 characters in \n> > length, which is of rare occurence in the backend code.\n> \n> With 4-space tabs, the code is pretty good. \n\nTabs vary from system to system, printer to printer etc. For the sake of\nreadability I have learnt to ignore the tab key when writing code and use\n2 spaces for indenting and 80 columns... my tuppence worth.\n\nRegards\nTheo\n", "msg_date": "Wed, 10 Nov 1999 10:15:52 +0200 (SAST)", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indent" } ]
[ { "msg_contents": "I've (finally) copied Lamar's 6.5.3 RPMs to\npostgresql.org/pub/{SRPMS,RPMS}.\n\nVince, could we update our web page? It currently mentions v6.5.2 on\nthe main page as \"the current release\" and as having RPMs available on\nour web site; both of these version numbers need to be bumped up.\n\nPerhaps on the \"Download PostgreSQL\" page, we could add explicit\nmention of RPMs in the second paragraph; something like:\n\n The latest source is available from the _primary site_ via _FTP_.\n RPMs for RedHat Linux are available, both _source_ and _binary_, \n thanks to _Lamar Owens_.\n\nwhere the new hyperlinks are\n_source_ -> ftp://ftp.postgresql.org/pub/SRPMS/\n_binary_ -> ftp://ftp.postgresql.org/pub/RPMS/\n_Lamar Owens_ -> http://www.ramifordistat.net\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 04:49:09 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.3 RPMs are on ftp site" }, { "msg_contents": "On Wed, 10 Nov 1999, Thomas Lockhart wrote:\n\n> I've (finally) copied Lamar's 6.5.3 RPMs to\n> postgresql.org/pub/{SRPMS,RPMS}.\n> \n> Vince, could we update our web page? It currently mentions v6.5.2 on\n> the main page as \"the current release\" and as having RPMs available on\n> our web site; both of these version numbers need to be bumped up.\n> \n> Perhaps on the \"Download PostgreSQL\" page, we could add explicit\n> mention of RPMs in the second paragraph; something like:\n> \n> The latest source is available from the _primary site_ via _FTP_.\n> RPMs for RedHat Linux are available, both _source_ and _binary_, \n> thanks to _Lamar Owens_.\n> \n> where the new hyperlinks are\n> _source_ -> ftp://ftp.postgresql.org/pub/SRPMS/\n> _binary_ -> ftp://ftp.postgresql.org/pub/RPMS/\n> _Lamar Owens_ -> http://www.ramifordistat.net\n\nDone. I gave Lamar his own paragraph.\n\nFor now I also dropped the link to the patches directory since it's\na major version behind and I dropped the bindist link since it's now\nthree minor versions behind and only has one binary in it. If either\nsuddenly become active I can put them back in easy enough. All of \nthis is on the Download page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 10 Nov 1999 06:00:37 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.3 RPMs are on ftp site" }, { "msg_contents": "On Tue, 09 Nov 1999, Thomas Lockhart wrote:\n> I've (finally) copied Lamar's 6.5.3 RPMs to\n> postgresql.org/pub/{SRPMS,RPMS}.\n\nAlright!\n\n> The latest source is available from the _primary site_ via _FTP_.\n> RPMs for RedHat Linux are available, both _source_ and _binary_, \n> thanks to _Lamar Owens_.\n\nOwen, with no 's'. 
Otherwise fine.\n\nI know, splitting hairs -- especially since Owen and Owens is in reality the\nsame family (from two brothers, John and Robert, who came over from Ireland in\nthe mid 1700's....but this is not a genealogy list ;-)).\n\n--\nLamar Owen\nWGCR Internet Radio \n1 Peter 4:11\n", "msg_date": "Wed, 10 Nov 1999 09:14:27 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.3 RPMs are on ftp site" }, { "msg_contents": "On Wed, 10 Nov 1999, Lamar Owen wrote:\n\n> On Tue, 09 Nov 1999, Thomas Lockhart wrote:\n> > I've (finally) copied Lamar's 6.5.3 RPMs to\n> > postgresql.org/pub/{SRPMS,RPMS}.\n> \n> Alright!\n> \n> > The latest source is available from the _primary site_ via _FTP_.\n> > RPMs for RedHat Linux are available, both _source_ and _binary_, \n> > thanks to _Lamar Owens_.\n> \n> Owen, with no 's'. Otherwise fine.\n> \n> I know, splitting hairs -- especially since Owen and Owens is in reality the\n> same family (from two brothers, John and Robert, who came over from Ireland in\n> the mid 1700's....but this is not a genealogy list ;-)).\n\nOh, plural! No problem then :) Ok, it's fixed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 10 Nov 1999 09:29:14 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.3 RPMs are on ftp site" } ]
[ { "msg_contents": "Surely with autoconf, this becomes less of an issue. I mean, if this is the\nonly major problem (which it likely isn't, but anyway), then the whole\nexercise shouldn't be that hard.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Thomas Lockhart [mailto:[email protected]]\n>> Sent: Wednesday, November 10, 1999 6:25 AM\n>> To: The Hermit Hacker\n>> Cc: [email protected]; PostgreSQL-development;\n>> [email protected]\n>> Subject: [INTERFACES] Re: [HACKERS] CORBA STATUS\n>> \n>> \n>> > Wait...when we talked about this months back, I swore that \n>> one of the\n>> > conclusions *was* that this was possible...it would \n>> involve us doing\n>> > wrapper functions in our code that were defined in an \n>> include file based\n>> > on which ORB implementation was used...?\n>> > Basically...\n>> > pg_<corba function> maps to <insert mico corba function here>\n>> > or <insert orbit corba function here>\n>> > or <insert other implementation \n>> function here>\n>> > Has this ability changed? *raised eyebrow*\n>> \n>> No, this probably is not necessary since the C or C++ mappings for\n>> function calls in Corba are very well defined. \n>> \n>> What is not fully specified in the Corba standard is, for example,\n>> which header files (and by what names) will be generated by the IDL\n>> stubber, so each Orb has, or might have, different conventions for\n>> include files. This probably impacts server-side code a bit more than\n>> clients.\n>> \n>> There is some interest for some Orbs to try lining up the header file\n>> names, but I don't know how feasible it is in the short term.\n>> \n>> We could probably isolate this into Postgres-specific header files,\n>> but there will probably be Orb-specific #ifdef blocks in those\n>> headers.\n>> \n>> - Thomas\n>> \n>> -- \n>> Thomas Lockhart\t\t\t\t\n>> [email protected]\n>> South Pasadena, California\n>> \n>> ************\n>> \n", "msg_date": "Wed, 10 Nov 1999 09:15:01 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Re: [HACKERS] CORBA STATUS" } ]
[ { "msg_contents": "Goran Thyni <[email protected]> writes:\n>I found that there is a fundamental problem\n>concerning the difference in process models\n>in pgsql and the POA (Portable Object Adaptor)\n>in CORBA implementations.\n>\n>AFAICS, POA assumes a threaded server while\n>PgSQL uses a traditional forking model.\n\nThis is not the case. The POA assumes a nestable, multiplexed call \ninterface. The POA server can receive multiple requests from multiple\nclients (or even multiple simultaneous requests from one client), and,\nif single threaded, is allowed to simply queue them and service each \nrequest in natural order.\n\nIf something bad happens (say, transaction deadlock, or whatever), the \nPOA server just spits out the appropriate exceptions, and the clients \nfigure out what to do next.\n\nSpeaking of which, exception handling is the one area where CORBA completely\nembarrasses the current FE/BE protocol. As PostgreSQL starts climbing\nthe database value chain, people will probably like to see error handling\nthat doesn't core-dump clients and backends.\n\n\t-Michael Robinson\n\nP.S. On the off chance this will get answered the second time around, \nis there any particular reason for minx::int4 to be an illegal cast?\n\n", "msg_date": "Wed, 10 Nov 1999 20:18:09 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "Michael Robinson wrote:\n> Goran Thyni <[email protected]> writes:\n> >AFAICS, POA assumes a threaded server while\n> >PgSQL uses a traditional forking model.\n> \n> This is not the case. The POA assumes a nestable, multiplexed call\n> interface. The POA server can receive multiple requests from multiple\n> clients (or even multiple simultaneous requests from one client), and,\n> if single threaded, is allowed to simply queue them and service each\n> request in natural order.\n\nOK,\nI went on hearsay, got confused by the code.\nThank you for clearifying.\n\nBut the issue remains,\nif you fork in a connection handler (after accept())\nyou got two servers competing on both connections.\n\nI outline a model for how CORBA could be implemented\nwithout rewriting the whole server and make it optional\nfor platforms not supporting CORBA.\n\nAttached below is a first attempt with sketchy pseudo-code.\nI hope it is understandable.\n\nregards,\n-- \n-----------------\nG�ran Thyni\nOn quiet nights you can hear Windows NT reboot!", "msg_date": "Wed, 10 Nov 1999 21:47:43 +0100", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" } ]
[ { "msg_contents": "Hi, Guys . I just installed the latest POSTGRES binary 6.5.2 for\nREDHAT 6.0 and reading the HISTORY file, it says that the deadlock\nproblem with the timeout was fixed in 6.3.\nSo, I went and open two psql sessions and tried to update the same\nrecord inside a transaction (one for each session). The thing is that\nthe other session that is waiting for the record to be released by the\nfirst session keeps waiting forever - Ther is no even a timeout deadlock\nmessage.\n\nIs there any parameter to be setup before you fire the postmaster\nproccess in order to get deadlock messages back to the client ?\n\nI thought it was fixed ...\n\nThis was the response from Thomas Lockhart\n\n> And it is. Your case is not a classic \"deadlock\" in the database\n> sense, but is just one client waiting for another to finish. That is\n> different than deadlock, in which a client *could never possibly\n> finish* because it is holding a lock that another client is waiting\n> for, while also waiting for a lock that the second client is holding.\n\nThanks Thomas for your promptly answer..\n\n So, is there anyway for the client connection to get out of the waiting\nproccess for the other connection to release the record?\nPresently, my application looks like it is hung up, so a user would think that\nsomething wrong with the client software. It would be pretty useful for the\nbackend to send a message saying that the record is locked so the front end is\nable to decide to do something else or retry to get the record or even be able\nto put a message box saying \"waiting for record to be released\" .\n\nHow I can acomplish this with the present POSTGRES locking mechanism ?\n\nThanks,\nJuan Carlos Vergara\[email protected]\n\n\n\n\n\n", "msg_date": "Wed, 10 Nov 1999 12:38:51 -0500", "msg_from": "\"Juan Vergara\" <[email protected]>", "msg_from_op": true, "msg_subject": "Locking record behaviour when using transaction -BEGIN/END" } ]
[ { "msg_contents": "Solaris 2.6/sparc; postgres 6.5.1\n\ndns=> create table test (zone int4, net cidr, unique(zone, net));\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\nCREATE\ndns=> insert into test (zone, net) values (1, '1.2.3/24');\nINSERT 21750 1\ndns=> insert into test (zone, net) values (1, '2.3.4/24');\nINSERT 21751 1\ndns=> insert into test (zone, net) values (1, '1.2.3/24');\nINSERT 21752 1\ndns=> insert into test (zone, net) values (1, '2.3.4/24');\nERROR: Cannot insert a duplicate key into a unique index\ndns=> select * from test;\nzone|net \n- ----+--------\n 1|1.2.3/24\n 1|2.3.4/24\n 1|1.2.3/24\n(3 rows)\n\n\nOnce a unique error is reported, uniqueness seems to be maintained.\nAlso, if you enter 4 values, then try a duplicate, it all works.\n\nThe threshold seems to be 3.\n\nA select before the duplicate add also seems to fix it.\n\n~f\n\n\n", "msg_date": "Thu, 11 Nov 1999 01:14:03 -0800", "msg_from": "Frank Cusack <[email protected]>", "msg_from_op": true, "msg_subject": "uniqueness not always correct" }, { "msg_contents": "Frank Cusack wrote:\n> \n> Solaris 2.6/sparc; postgres 6.5.1\n> \n> dns=> create table test (zone int4, net cidr, unique(zone, net));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21750 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> INSERT 21751 1\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21752 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> ERROR: Cannot insert a duplicate key into a unique index\n\nYes, I reproduced this (Solaris 2.5/sparc). \nSeems like CIDR problem(??!):\n\nais=> create table test (zone int4, net int4, unique(zone, net));\n ^^^^\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\nCREATE\nais=> insert into test (zone, net) values (1, 1);\nINSERT 7712479 1\nais=> insert into test (zone, net) values (1, 2);\nINSERT 7712480 1\nais=> insert into test (zone, net) values (1, 1);\nERROR: Cannot insert a duplicate key into a unique index\n\nVadim\n", "msg_date": "Thu, 11 Nov 1999 17:07:25 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Yes, I reproduced this (Solaris 2.5/sparc). \n> Seems like CIDR problem(??!):\n\nYes. Looks like the low-order bits of a CIDR address are garbage,\nbut network_cmp() compares them as though all bits are significant.\nSo, indeed, it may think two different instances of '1.2.3/24'\nare not equal.\n\nThe regular inet comparison functions at least *try* to mask out\ngarbage bits, but I think they get it wrong too --- they should be\ntaking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\nbits to compare. They don't. 
Thus, for example,\n\nregression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n?column?\n--------\nf\n(1 row)\n\nwhich looks wrong to me.\n\nIn short, it's a bug in the inet data types, not a generic problem\nwith unique indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 11:57:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct " }, { "msg_contents": "I'm not sure that a '<' comparison is really meaningful for inet/cidr?\nAt least not the '<' comparison you are doing. For networks (cf hosts),\nthe only really meanininful operators are '<<' (contained within), etc.\n\nA nice easy fix might be to make sure that the unmasked portion of the\ndata is set to all 0's when storing the data.\n\n~f\nps. I'm not subscribed to the lists so this will probably bounce. Please\nrepost for me.\n\n>>>>> On Thu, 11 Nov 1999, \"Tom\" == Tom Lane wrote:\n\n Tom> Vadim Mikheev <[email protected]> writes:\n\n +> Yes, I reproduced this (Solaris 2.5/sparc). Seems like CIDR\n +> problem(??!):\n\n Tom> Yes. Looks like the low-order bits of a CIDR address are garbage,\n Tom> but network_cmp() compares them as though all bits are significant.\n Tom> So, indeed, it may think two different instances of '1.2.3/24' are\n Tom> not equal.\n\n Tom> The regular inet comparison functions at least *try* to mask out\n Tom> garbage bits, but I think they get it wrong too --- they should be\n Tom> taking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\n Tom> bits to compare. They don't. Thus, for example,\n\n Tom> regression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n Tom> ?column?\n Tom> --------\n Tom> f\n Tom> (1 row)\n\n Tom> which looks wrong to me.\n\n Tom> In short, it's a bug in the inet data types, not a generic problem\n Tom> with unique indexes.\n\n Tom> regards, tom lane\n>>>>> On Thu, 11 Nov 1999,\n>>>>> \"Tom\" == Tom Lane wrote:\n\n Tom> Vadim Mikheev <[email protected]> writes:\n\n +> Yes, I reproduced this (Solaris 2.5/sparc).\n +> Seems like CIDR problem(??!):\n\n Tom> Yes. Looks like the low-order bits of a CIDR address are garbage,\n Tom> but network_cmp() compares them as though all bits are significant.\n Tom> So, indeed, it may think two different instances of '1.2.3/24'\n Tom> are not equal.\n\n Tom> The regular inet comparison functions at least *try* to mask out\n Tom> garbage bits, but I think they get it wrong too --- they should be\n Tom> taking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\n Tom> bits to compare. They don't. 
Thus, for example,\n\n Tom> regression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n Tom> ?column?\n Tom> --------\n Tom> f\n Tom> (1 row)\n\n Tom> which looks wrong to me.\n\n Tom> In short, it's a bug in the inet data types, not a generic problem\n Tom> with unique indexes.\n\n Tom> regards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 12:50:59 -0800", "msg_from": "Frank Cusack <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct " }, { "msg_contents": "> Frank Cusack wrote:\n> > \n> > Solaris 2.6/sparc; postgres 6.5.1\n> > \n> > dns=> create table test (zone int4, net cidr, unique(zone, net));\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> > CREATE\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21750 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > INSERT 21751 1\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21752 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > ERROR: Cannot insert a duplicate key into a unique index\n> \n> Yes, I reproduced this (Solaris 2.5/sparc). \n> Seems like CIDR problem(??!):\n\nI see a more serious problem in the current source tree:\n\t\n\ttest=> create table test (zone int4, net cidr, unique(zone, net));\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n\tCREATE\n\ttest=> insert into test (zone, net) values (1, '1.2.3/24');\n\tERROR: fmgr_info: function 0: cache lookup failed\n\nSeems something is broken with CIDR, but not INET:\n\t\n\ttest=> create table test2 (x inet unique(x)); \n\tERROR: parser: parse error at or near \"(\"\n\ttest=> create table test2 (x inet, unique(x));\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test2_x_key' for table 'test2'\n\tCREATE\n\ttest=> insert into test2 values ('1.2.3.4/24');\n\tINSERT 19180 1\n\ttest=> create table test3 (x cidr, unique(x));\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test3_x_key' for table 'test3'\n\tCREATE\n\ttest=> insert into test3 values ('1.2.3.4/24');\n\tERROR: fmgr_info: function 0: cache lookup failed\n\nThe problem appears to be in _bt_mkscankey() and index_getprocid().\n\nAny ideas?\n\nBacktrace shows:\n\n---------------------------------------------------------------------------\n\n#0 elog (lev=-1, fmt=0x817848e \"fmgr_info: function %u: cache lookup failed\")\n at elog.c:94\n#1 0x8135a47 in fmgr_info (procedureId=0, finfo=0x830a060) at fmgr.c:225\n#2 0x80643f9 in ScanKeyEntryInitialize (entry=0x830a058, flags=0, \n attributeNumber=2, procedure=0, argument=137404148) at scankey.c:65\n#3 0x8083e70 in _bt_mkscankey (rel=0x8312230, itup=0x8309ee8) at nbtutils.c:56\n#4 0x8079989 in _bt_doinsert (rel=0x8312230, btitem=0x8309ee8, \n index_is_unique=1 '\\001', heapRel=0x82dfd38) at nbtinsert.c:52\n#5 0x807eabe in btinsert (rel=0x8312230, datum=0x8309b28, \n nulls=0x830a020 \" \", ht_ctid=0x8309e2c, heapRel=0x82dfd38) at nbtree.c:358\n#6 0x81358d8 in fmgr_c (finfo=0x80476e8, values=0x80476f8, \n isNull=0x80476df \"\") at fmgr.c:146\n#7 0x8135c25 in fmgr (procedureId=331) at fmgr.c:336\n#8 0x8073c6d in index_insert (relation=0x8312230, datum=0x8309b28, \n nulls=0x830a020 \" \", heap_t_ctid=0x8309e2c, heapRel=0x82dfd38)\n at indexam.c:211\n#9 0x80ae3d9 in ExecInsertIndexTuples (slot=0x8309bf8, tupleid=0x8309e2c, \n estate=0x8309950, is_update=0) at execUtils.c:1206\n#10 0x80aa77e in ExecAppend (slot=0x8309bf8, tupleid=0x0, 
estate=0x8309950)\n at execMain.c:1178\n#11 0x80aa60e in ExecutePlan (estate=0x8309950, plan=0x83098b0, \n operation=CMD_INSERT, offsetTuples=0, numberTuples=0, \n direction=ForwardScanDirection, destfunc=0x817cdc4) at execMain.c:1024\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 8 Dec 1999 05:57:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "\nCan someone comment on this? Can someone submit a patch? I remember\nsomething about not clearing bits somewhere.\n\nI can't reproduce the problem on BSD/OS.\n\n\n> Frank Cusack wrote:\n> > \n> > Solaris 2.6/sparc; postgres 6.5.1\n> > \n> > dns=> create table test (zone int4, net cidr, unique(zone, net));\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> > CREATE\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21750 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > INSERT 21751 1\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21752 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > ERROR: Cannot insert a duplicate key into a unique index\n> \n> Yes, I reproduced this (Solaris 2.5/sparc). \n> Seems like CIDR problem(??!):\n> \n> ais=> create table test (zone int4, net int4, unique(zone, net));\n> ^^^^\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> ais=> insert into test (zone, net) values (1, 1);\n> INSERT 7712479 1\n> ais=> insert into test (zone, net) values (1, 2);\n> INSERT 7712480 1\n> ais=> insert into test (zone, net) values (1, 1);\n> ERROR: Cannot insert a duplicate key into a unique index\n> \n> Vadim\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 8 Dec 1999 06:45:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Someone submitted this patch. It should fix your problem. 
It will\nappear in the next release.\n\n> Solaris 2.6/sparc; postgres 6.5.1\n> \n> dns=> create table test (zone int4, net cidr, unique(zone, net));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21750 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> INSERT 21751 1\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21752 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> ERROR: Cannot insert a duplicate key into a unique index\n> dns=> select * from test;\n> zone|net \n> - ----+--------\n> 1|1.2.3/24\n> 1|2.3.4/24\n> 1|1.2.3/24\n> (3 rows)\n> \n> \n> Once a unique error is reported, uniqueness seems to be maintained.\n> Also, if you enter 4 values, then try a duplicate, it all works.\n> \n> The threshold seems to be 3.\n> \n> A select before the duplicate add also seems to fix it.\n> \n> ~f\n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 15 Dec 1999 20:29:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Fixed by recently submitted patch.\n\n\n> Vadim Mikheev <[email protected]> writes:\n> > Yes, I reproduced this (Solaris 2.5/sparc). \n> > Seems like CIDR problem(??!):\n> \n> Yes. Looks like the low-order bits of a CIDR address are garbage,\n> but network_cmp() compares them as though all bits are significant.\n> So, indeed, it may think two different instances of '1.2.3/24'\n> are not equal.\n> \n> The regular inet comparison functions at least *try* to mask out\n> garbage bits, but I think they get it wrong too --- they should be\n> taking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\n> bits to compare. They don't. Thus, for example,\n> \n> regression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n> ?column?\n> --------\n> f\n> (1 row)\n> \n> which looks wrong to me.\n> \n> In short, it's a bug in the inet data types, not a generic problem\n> with unique indexes.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 15 Dec 1999 20:29:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Never mind. Patch is fir mac, not inet.\n\n> Vadim Mikheev <[email protected]> writes:\n> > Yes, I reproduced this (Solaris 2.5/sparc). \n> > Seems like CIDR problem(??!):\n> \n> Yes. Looks like the low-order bits of a CIDR address are garbage,\n> but network_cmp() compares them as though all bits are significant.\n> So, indeed, it may think two different instances of '1.2.3/24'\n> are not equal.\n> \n> The regular inet comparison functions at least *try* to mask out\n> garbage bits, but I think they get it wrong too --- they should be\n> taking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\n> bits to compare. They don't. 
Thus, for example,\n> \n> regression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n> ?column?\n> --------\n> f\n> (1 row)\n> \n> which looks wrong to me.\n> \n> In short, it's a bug in the inet data types, not a generic problem\n> with unique indexes.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 15 Dec 1999 20:42:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Never mind. Patch is fir mac, not inet. Sorry.\n\nProblem still exists.\n\n> Solaris 2.6/sparc; postgres 6.5.1\n> \n> dns=> create table test (zone int4, net cidr, unique(zone, net));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21750 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> INSERT 21751 1\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21752 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> ERROR: Cannot insert a duplicate key into a unique index\n> dns=> select * from test;\n> zone|net \n> - ----+--------\n> 1|1.2.3/24\n> 1|2.3.4/24\n> 1|1.2.3/24\n> (3 rows)\n> \n> \n> Once a unique error is reported, uniqueness seems to be maintained.\n> Also, if you enter 4 values, then try a duplicate, it all works.\n> \n> The threshold seems to be 3.\n> \n> A select before the duplicate add also seems to fix it.\n> \n> ~f\n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 15 Dec 1999 20:43:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Hello,\n\n I have modified the translate function in order to improve its\ncompatibility with Oracle. 
It now supports the replacement of multiple\ncharacters and it will also shorten the length of the string when characters\nare replaced with nothing.\n\n[Note: The arguments are different from the original translate]\nCan this function replace the existing function in the distribution?\n\n-------NEW FUNCTION--------------------------------------\ntext *\ntranslate(text *string, text *from, text *to)\n{\n    text *ret;\n    char *ptr_ret, *from_ptr, *to_ptr, *source, *target, *temp, rep;\n    int m, fromlen, tolen, retlen, i;\n\n    if ((string == (text *) NULL) ||\n        ((m = VARSIZE(string) - VARHDRSZ) <= 0))\n        return string;\n\n    target = (char *) palloc(VARSIZE(string) - VARHDRSZ);\n    source = VARDATA(string);\n    temp = target;\n\n    fromlen = VARSIZE(from) - VARHDRSZ;\n    from_ptr = VARDATA(from);\n    tolen = VARSIZE(to) - VARHDRSZ;\n    to_ptr = VARDATA(to);\n    retlen = 0;\n    while (m--)\n    {\n        rep = *source;\n        for (i = 0; i < fromlen; i++)\n        {\n            if (from_ptr[i] == *source)\n            {\n                if (i < tolen)\n                    rep = to_ptr[i];\n                else\n                    rep = 0;    /* no counterpart in 'to': drop the char */\n                break;\n            }\n        }\n        if (rep != 0)\n        {\n            *target++ = rep;\n            retlen++;\n        }\n        source++;\n    }\n\n    ret = (text *) palloc(retlen + VARHDRSZ);\n    VARSIZE(ret) = retlen + VARHDRSZ;\n    ptr_ret = VARDATA(ret);\n    for (i = 0; i < retlen; i++)\n        *ptr_ret++ = temp[i];\n    pfree(temp);    /* 'temp', not 'target': target was advanced past the start */\n    return ret;\n}\n\n\nThanks,\nEdwin S. Ramirez\n\n\n", "msg_date": "Thu, 16 Dec 1999 09:24:14 -0500", "msg_from": "Edwin Ramirez <[email protected]>", "msg_from_op": false, "msg_subject": "Oracle Compatibility (Translate function)" }, { "msg_contents": "> I have modified the translate function in order to improve its\n> compatibility with Oracle. It now supports the replacement of \n> multiple characters and it will also shorten the length of the string \n> when characters are replaced with nothing.\n> [Note: The arguments are different from the original translate]\n> Can this function replace the existing function in the distribution?\n\nafaik yes. Does anyone have a problem with this (it allows\nsubstitution of multiple characters)? I think the system tables will\nneed to be updated; I'll do this within the next week or so if no one\nelse has already taken this on.\n\nbtw, there is some chance that when we go to native support for\nNATIONAL CHARACTER etc then TRANSLATE() will need to become SQL92\ncompliant (and basically a different function). But that is an issue\nfor later, and we may be able to solve it without having to give up on\nthe Oracle version.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 17 Dec 1999 17:21:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oracle Compatibility (Translate function)" }, { "msg_contents": "I have just applied a user patch to fix this reported problem.\n\n\n> Vadim Mikheev <[email protected]> writes:\n> > Yes, I reproduced this (Solaris 2.5/sparc). \n> > Seems like CIDR problem(??!):\n> \n> Yes. Looks like the low-order bits of a CIDR address are garbage,\n> but network_cmp() compares them as though all bits are significant.\n> So, indeed, it may think two different instances of '1.2.3/24'\n> are not equal.\n> \n> The regular inet comparison functions at least *try* to mask out\n> garbage bits, but I think they get it wrong too --- they should be\n> taking the smaller of ip_bits(a1) and ip_bits(a2) as the number of\n> bits to compare. They don't. 
Thus, for example,\n> \n> regression=> select '1.2.5/16'::cidr < '1.2.3/24'::cidr;\n> ?column?\n> --------\n> f\n> (1 row)\n> \n> which looks wrong to me.\n> \n> In short, it's a bug in the inet data types, not a generic problem\n> with unique indexes.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 7 Mar 2000 17:50:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have just applied a user patch to fix this reported problem.\n\nIf you had read the followup, you would have seen that I have doubts\nabout this patch, and in fact Ryan acknowledges that it probably doesn't\ndo the right thing for INET. I think there is more work to do here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Mar 2000 18:22:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have just applied a user patch to fix this reported problem.\n> \n> If you had read the followup, you would have seen that I have doubts\n> about this patch, and in fact Ryan acknowledges that it probably doesn't\n> do the right thing for INET. I think there is more work to do here.\n\nReversed out.\n\n\"I never met a patch I didn't like.\" :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 7 Mar 2000 20:44:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] uniqueness not always correct" }, { "msg_contents": "This bug appears to still exist in 7.0:\n\t\n\ttest=> create table test (zone int4, net cidr, unique(zone, net));\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key'\n\tfor table 'test'\n\tCREATE\n\ttest=> insert into test (zone, net) values (1, '1.2.3/24');\n\tINSERT 22157 1\n\ttest=> insert into test (zone, net) values (1, '2.3.4/24');\n\tINSERT 22158 1\n\ttest=> select * from test;\n\t zone | net \n\t------+----------\n\t 1 | 1.2.3/24\n\t 1 | 2.3.4/24\n\t(2 rows)\n\t\n\ttest=> insert into test (zone, net) values (1, '2.3.4/24');\n\tINSERT 22159 1\n\ttest=> \n\n\n> Solaris 2.6/sparc; postgres 6.5.1\n> \n> dns=> create table test (zone int4, net cidr, unique(zone, net));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21750 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> INSERT 21751 1\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21752 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> ERROR: Cannot insert a duplicate key into a unique index\n> dns=> select * from test;\n> zone|net \n> - ----+--------\n> 1|1.2.3/24\n> 1|2.3.4/24\n> 1|1.2.3/24\n> (3 rows)\n> \n> \n> Once a unique error is reported, uniqueness seems to be maintained.\n> Also, if you enter 4 values, then try a duplicate, it all works.\n> \n> The threshold seems to be 3.\n> \n> A select before the duplicate add also seems to fix it.\n> \n> ~f\n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 19:28:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct" }, { "msg_contents": "This is Vadim's comment on the bug.\n\n\n> Frank Cusack wrote:\n> > \n> > Solaris 2.6/sparc; postgres 6.5.1\n> > \n> > dns=> create table test (zone int4, net cidr, unique(zone, net));\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> > CREATE\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21750 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > INSERT 21751 1\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21752 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > ERROR: Cannot insert a duplicate key into a unique index\n> \n> Yes, I reproduced this (Solaris 2.5/sparc). 
\n> Seems like CIDR problem(??!):\n> \n> ais=> create table test (zone int4, net int4, unique(zone, net));\n> ^^^^\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> ais=> insert into test (zone, net) values (1, 1);\n> INSERT 7712479 1\n> ais=> insert into test (zone, net) values (1, 2);\n> INSERT 7712480 1\n> ais=> insert into test (zone, net) values (1, 1);\n> ERROR: Cannot insert a duplicate key into a unique index\n> \n> Vadim\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 19:28:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> This bug appears to still exist in 7.0:\n> \ttest=> create table test (zone int4, net cidr, unique(zone, net));\n\nYeah. IIRC, the issue is that the CIDR data-type-specific btree\ncomparison function looks at all bits in the datatype, including bits\nthat are past the specified length (/24, here) and weren't necessarily\nzeroed by the datatype input routine. It's not clear whether the\ncomparator or the input routine or both are wrong --- *should* those\nbits be significant, or not?\n\nThe discussion about how to fix it bogged down, and apparently\nno one did anything. I recall feeling that we had some confusion\nbetween what the semantics of CIDR and INET types ought to be,\nbut I don't understand them well enough to know what they should do.\nRight now the same operators are used for both, which seems like it\ncan't be right.\n\nI was hoping someone would dig through the archives or talk to Paul\nVixie again and come away with a clear understanding of the semantics\nof these two datatypes (and why we need two, if we do).\n\nAlternatively, if no one cares enough about these types to even\nunderstand what they should do, maybe we should rip 'em out?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 22:13:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct " }, { "msg_contents": "Tom Lane writes:\n\n[CIDR and INET]\n> Alternatively, if no one cares enough about these types to even\n> understand what they should do, maybe we should rip 'em out?\n\nActually, I'm a happy user of these types, so that would certainly make me\nunhappy...\n\nCIDR stores network addresses, so '10.8/16' might be some network. INET\nstores both host addresses and, optionally, the network it's in, so\n'10.8.7.6/16' is the given host in the network '10.8/16'. Alternatively,\nINET '10.8.7.6' is just a host with no network. 
IMO, there is one of two\nbugs in the CIDR input routine:\n\n1) '10.8.7.6/16' in not rejected\n\n2) Since it is accepted, at least the hidden fields need to be zeroed.\n\n(But note that this bug is only exposed when you use the type improperly\nin the first place.)\n\nUsing the same operators for cidr and inet is fine as long as this is\nfixed.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 3 Jun 2000 01:48:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct " }, { "msg_contents": "I can confirm this is fixed in the current source tree, to be released\nas 7.1 in a few months:\n\n\ntest=> create table test (zone int4, net cidr, unique(zone, net));\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key'\nfor table 'test'\nCREATE\ntest=> insert into test (zone, net) values (1, '1.2.3/24');\nINSERT 19822 1\ntest=> insert into test (zone, net) values (1, '2.3.4/24');\nINSERT 19823 1\ntest=> insert into test (zone, net) values (1, '1.2.3/24');\nERROR: Cannot insert a duplicate key into unique index test_zone_key\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\ntest=> insert into test (zone, net) values (1, '2.3.4/24');\nERROR: Cannot insert a duplicate key into unique index test_zone_key\ntest=> \n\n\n> Solaris 2.6/sparc; postgres 6.5.1\n> \n> dns=> create table test (zone int4, net cidr, unique(zone, net));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21750 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> INSERT 21751 1\n> dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> INSERT 21752 1\n> dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> ERROR: Cannot insert a duplicate key into a unique index\n> dns=> select * from test;\n> zone|net \n> - ----+--------\n> 1|1.2.3/24\n> 1|2.3.4/24\n> 1|1.2.3/24\n> (3 rows)\n> \n> \n> Once a unique error is reported, uniqueness seems to be maintained.\n> Also, if you enter 4 values, then try a duplicate, it all works.\n> \n> The threshold seems to be 3.\n> \n> A select before the duplicate add also seems to fix it.\n> \n> ~f\n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Sep 2000 22:16:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct" }, { "msg_contents": "Yes, I can confirm this is now fixed.\n\n\n> Frank Cusack wrote:\n> > \n> > Solaris 2.6/sparc; postgres 6.5.1\n> > \n> > dns=> create table test (zone int4, net cidr, unique(zone, net));\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> > CREATE\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21750 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > INSERT 21751 1\n> > dns=> insert into test (zone, net) values (1, '1.2.3/24');\n> > INSERT 21752 1\n> > dns=> insert into test (zone, net) values (1, '2.3.4/24');\n> > ERROR: Cannot insert a duplicate key into a unique index\n> \n> Yes, I reproduced this (Solaris 2.5/sparc). 
\n> Seems like CIDR problem(??!):\n> \n> ais=> create table test (zone int4, net int4, unique(zone, net));\n> ^^^^\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_zone_key' for table 'test'\n> CREATE\n> ais=> insert into test (zone, net) values (1, 1);\n> INSERT 7712479 1\n> ais=> insert into test (zone, net) values (1, 2);\n> INSERT 7712480 1\n> ais=> insert into test (zone, net) values (1, 1);\n> ERROR: Cannot insert a duplicate key into a unique index\n> \n> Vadim\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Sep 2000 22:18:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: uniqueness not always correct" } ]
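A note on the fix discussed in this thread: Peter's option (2) - zeroing the
"hidden" bits past the prefix length at input time - could be sketched roughly
as below. This is a minimal sketch; the function name and the plain 4-byte
IPv4 buffer are assumptions for illustration, not the actual backend routines.

    static void
    cidr_zero_hidden_bits(unsigned char ip[4], int bits)
    {
        int     i;

        for (i = 0; i < 4; i++)
        {
            if (bits >= 8)
                bits -= 8;      /* whole octet is significant */
            else if (bits > 0)
            {
                /* keep only the high-order bits covered by the mask */
                ip[i] &= (unsigned char) (0xFF << (8 - bits));
                bits = 0;
            }
            else
                ip[i] = 0;      /* octet lies entirely past the mask */
        }
    }

With something like this applied in the input routine, '10.8.7.6/16' would be
stored as '10.8/16', so a btree comparator that looks at all bits in the
datatype (as Tom describes) would then give consistent results for unique
indexes.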
[ { "msg_contents": "Hello!\n\n Ross J. Reedstrom, current maintainer of PostgreSQL Database Adapter for\nZope (based, of course, on PyGreSQL), said very positive words about\nPostgreSQL and Postgres team:\n\n\"My experience with the PostgreSQL developers and mailing lists are very\nsimilar to the Zope lists: lots of clueful, helpful people (some of them\nare the same people ;-) Bugs exist (what software is perfect?), but get\ndiagnosed and fixed very rapidly. Features are added (and limitations\nremoved) on a weekly basis. I get the feeling that the pgsql project has\nturned a corner in the last year or two: the code base has been cleaned\nup, making it easier to understand, and contribute code. New developers\nare being attracted, and quickly becoming contributers. It feels like\nit's reaching critical mass. The pgsql-hackers list reminds me of the\nlinux-kernel list from about '92 or so, in that respect.\n\nIf you haven't looked at PostgreSQL since it's Postgres95 days, it's\ndefinitely time to look again.\"\n\n http://www.egroups.com/list/zope/md185412286.html\n\n Yes, I was one of those who sent him \"works for me\" message. :)\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n\n", "msg_date": "Thu, 11 Nov 1999 15:45:42 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL and Zope" } ]
[ { "msg_contents": "I have found that typing:\n\n\ttest=> select * from pg_class\\p\\g\n\nno longer works. I honors the \\p, but ignores the \\g.\n\nAny ideas Peter?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 13:21:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "psql and \\p\\g" }, { "msg_contents": "I see that \\e no longer works as expected:\n\n\ttest=> select * from pg_class;\n\t...\n\ttest=> \\e\n\nand in the editor, the query is not appearing. The 'select' query\nshould appear in the editor because I have not entered a non-backslash\ncommand to clear the query buffer.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 13:27:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "failure of \\e in psql" }, { "msg_contents": "On 1999-11-11, Bruce Momjian mentioned:\n\n> I have found that typing:\n> \n> \ttest=> select * from pg_class\\p\\g\n> \n> no longer works. I honors the \\p, but ignores the \\g.\n> \n> Any ideas Peter?\n\nselect * from foo \\p \\g\n\nThis was done to normalize the grammar a little bit (haha, very\nfunny). In particular it allows this sort of stuff:\n=> select * from foo \\p \\o out.txt \\g \\\\ select * from foo 2 \\x \\g\netc.\n\nIs it *really* necessary to be able to omit the space?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 11 Nov 1999 22:36:26 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql and \\p\\g" }, { "msg_contents": "On 1999-11-11, Bruce Momjian mentioned:\n\n> I see that \\e no longer works as expected:\n> \n> \ttest=> select * from pg_class;\n> \t...\n> \ttest=> \\e\n> \n> and in the editor, the query is not appearing. The 'select' query\n> should appear in the editor because I have not entered a non-backslash\n> command to clear the query buffer.\n\nWell, once you send it, it's sent and gone. You have to edit it before you\nsend it. I guess I'm not following you. Of course you should somehow be\nable to re-edit the previous query. Hmm.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 11 Nov 1999 22:37:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failure of \\e in psql" }, { "msg_contents": "> On 1999-11-11, Bruce Momjian mentioned:\n> \n> > I have found that typing:\n> > \n> > \ttest=> select * from pg_class\\p\\g\n> > \n> > no longer works. I honors the \\p, but ignores the \\g.\n> > \n> > Any ideas Peter?\n> \n> select * from foo \\p \\g\n> \n> This was done to normalize the grammar a little bit (haha, very\n> funny). In particular it allows this sort of stuff:\n> => select * from foo \\p \\o out.txt \\g \\\\ select * from foo 2 \\x \\g\n> etc.\n> \n> Is it *really* necessary to be able to omit the space?\n> \n\nYes, I believe it is, in the sense that many people are used to doing\nthem together. 
Can a backslash trigger some separation of commands, or\nat least \\p\\g be recognized correctly. I don't think there are other\nmeaningful combinations.\n\nCan you add a unix-style timestamp for \\T?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 16:50:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql and \\p\\g" }, { "msg_contents": "> On 1999-11-11, Bruce Momjian mentioned:\n> \n> > I see that \\e no longer works as expected:\n> > \n> > \ttest=> select * from pg_class;\n> > \t...\n> > \ttest=> \\e\n> > \n> > and in the editor, the query is not appearing. The 'select' query\n> > should appear in the editor because I have not entered a non-backslash\n> > command to clear the query buffer.\n> \n> Well, once you send it, it's sent and gone. You have to edit it before you\n> send it. I guess I'm not following you. Of course you should somehow be\n> able to re-edit the previous query. Hmm.\n\nThat is how it used to work. You run the query, get an error, and \\e\npulls it into the editor for fixing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 16:51:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: failure of \\e in psql" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This was done to normalize the grammar a little bit (haha, very\n>> funny). In particular it allows this sort of stuff:\n>> => select * from foo \\p \\o out.txt \\g \\\\ select * from foo 2 \\x \\g\n>> etc.\n>> \n>> Is it *really* necessary to be able to omit the space?\n\n> Yes, I believe it is, in the sense that many people are used to doing\n> them together. Can a backslash trigger some separation of commands, or\n> at least \\p\\g be recognized correctly. I don't think there are other\n> meaningful combinations.\n\nIt'd probably be sufficient if backslash-commands that never take\nparameters can be adjacent to a following backslash command.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 17:16:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and \\p\\g " }, { "msg_contents": "> On 1999-11-11, Bruce Momjian mentioned:\n> \n> > I see that \\e no longer works as expected:\n> > \n> > \ttest=> select * from pg_class;\n> > \t...\n> > \ttest=> \\e\n> > \n> > and in the editor, the query is not appearing. The 'select' query\n> > should appear in the editor because I have not entered a non-backslash\n> > command to clear the query buffer.\n> \n> Well, once you send it, it's sent and gone. You have to edit it before you\n> send it. I guess I'm not following you. Of course you should somehow be\n> able to re-edit the previous query. 
Hmm.\n\nPeter, before I go hunting around, can you tell me any other things psql\nused to do that it doesn't do anymore?\n\nWe had hand-tuned psql over the years, and it would be good to know what\nfeatures no longer exist so we can decide if they are needed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Nov 1999 01:07:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: failure of \\e in psql" }, { "msg_contents": "On Thu, 11 Nov 1999, Bruce Momjian wrote:\n\n> Can you add a unix-style timestamp for \\T?\n\nDo you mean \\echo `date` ?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 12 Nov 1999 11:04:03 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql and \\p\\g" }, { "msg_contents": "On Fri, 12 Nov 1999, Bruce Momjian wrote:\n\n> Peter, before I go hunting around, can you tell me any other things psql\n> used to do that it doesn't do anymore?\n\nWell, let's put it this way: Everythings that used to work, that people\nfound useful, and that doesn't work anymore is a bug. That's what it's all\nabout after all.\n\nHowever: About the \\e thing I simply didn't know. The \\p\\g was removed for\nconsistency. You might also be interested to know that \\E no longer\nexists, because I couldn't make sense of it. Also \\d* is slated for\nimplementation but no one wanted to respond to my request to explain what\nthis is actually supposed to do. That's all I can come up with right now.\n\n> We had hand-tuned psql over the years, and it would be good to know what\n> features no longer exist so we can decide if they are needed.\n\nWell, I really comes down to what Tom said, doesn't it: If the docs don't\nmatch the code, the code it wrong. And it will get fixed. A lot of those\n\"tunings\" seemed to be of the nature \"If I put \\o after \\x I want it to do\n<foo> instead\".\n\nThat doesn't mean that they were bad of course, but the purpose of all of\nthis was to put a consistent face on things.\n\nHaving said that, if I mess it up I'll fix it of course.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 12 Nov 1999 11:11:17 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failure of \\e in psql" }, { "msg_contents": "> On Thu, 11 Nov 1999, Bruce Momjian wrote:\n> \n> > Can you add a unix-style timestamp for \\T?\n> \n> Do you mean \\echo `date` ?\n> \n\nOh, very nifty. Never mind. I didn't see that.\n\nSeems you have added more powerful flags to take over some of the old\nflag usage. If you want to remove some of the older psql flags and\nrequire them to use your newer syntax that allows more functionality,\nyou can do it.\n\nIf you want, just print an error message for the old flag showing them\nthe new syntax to use and we can remove the messages after a few\nreleases.\n\nHowever, some of the more popular flags should probably be left in.\nMaybe it should just be left alone. Not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Nov 1999 11:54:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql and \\p\\g" }, { "msg_contents": "> On Fri, 12 Nov 1999, Bruce Momjian wrote:\n> \n> > Peter, before I go hunting around, can you tell me any other things psql\n> > used to do that it doesn't do anymore?\n> \n> Well, let's put it this way: Everythings that used to work, that people\n> found useful, and that doesn't work anymore is a bug. That's what it's all\n> about after all.\n> \n> However: About the \\e thing I simply didn't know. The \\p\\g was removed for\n> consistency. You might also be interested to know that \\E no longer\n> exists, because I couldn't make sense of it. Also \\d* is slated for\n> implementation but no one wanted to respond to my request to explain what\n> this is actually supposed to do. That's all I can come up with right now.\n\nFirst, let me say I am very glad you overhauled psql. It was very\nneeded, and I like the new functionality. Already learned \\echo `date`.\nQuite handy and very flexible.\n\nI was just curious if there was any stuff you found confusing and\nskipped so we could comment on it all at once.\n\nWe have fixed the pager off by default, and looks like \\p\\g and \\e need\nwork, but that is small compared to how much functionality does work\nperfectly. I personally found the \\e and \\p\\g stuff very tricky to\nimplement.\n\n[Of course, with the new output, I am going to have to re-do every one\nof my SQL query outputs for the book. :-) ]\n\nNo idea what \\d* does, nor \\E. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Nov 1999 12:00:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: failure of \\e in psql" }, { "msg_contents": "Peter, I no longer see the pg_description descriptions when using the\n\\do, \\df, and \\dT commands.\n\nThe commands are much less useful without the descriptions. Seems \\dd\nwith a string is much smarter, and pulls descriptions based on string\nmatching.\n\nInteresting that \\dd shows descriptions of everything.\n\nNot sure how to recommend you change this. The new \\df and \\do\ndisplays are much clearer without the descriptions. It seems \\df and\n\\do show additional information about argument types and return values,\nwhile \\dd shows comments.\n\nMaybe just add descriptions to \\dT, and suggest people use \\dd to get\ninfo about specific operators of functions? But that kind of messes the\nclarity of using \\dd for descriptions.\n\nI am stumped. Maybe it can't be improved.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Nov 1999 11:35:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Failure to show descriptions in \\df command" }, { "msg_contents": "You can turn the descriptions on by typing \\set description on (or \\set\ndescription foo, or whatever, as long as it's something), for example, in\nyour ~/.psqlrc (or in your .psqlrc-7.0.0 if you don't want to interfere\nwith the current version).\n\nThe reason for having descriptions off by default was that in a number of\nviews (I recall functions and operators), they don't fit on the screen\nvery nicely. On the other hand, the \\dd command always shows descriptions,\nbecause it's sort of the built-in manual, but it doesn't show anything\nelse (argument types, etc.).\n\nRead the fine (SGML) manual ;)\n\n\t-Peter\n\n\nOn Sat, 13 Nov 1999, Bruce Momjian wrote:\n\n> Peter, I no longer see the pg_description descriptions when using the\n> \\do, \\df, and \\dT commands.\n> \n> The commands are much less useful without the descriptions. Seems \\dd\n> with a string is much smarter, and pulls descriptions based on string\n> matching.\n> \n> Interesting that \\dd shows descriptions of everything.\n> \n> Not sure how to recommend you change this. The new \\df and \\do\n> displays are much clearer without the descriptions. It seems \\df and\n> \\do show additional information about argument types and return values,\n> while \\dd shows comments.\n> \n> Maybe just add descriptions to \\dT, and suggest people use \\dd to get\n> info about specific operators of functions? But that kind of messes the\n> clarity of using \\dd for descriptions.\n> \n> I am stumped. Maybe it can't be improved.\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 13 Nov 1999 18:15:53 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failure to show descriptions in \\df command" }, { "msg_contents": "> You can turn the descriptions on by typing \\set description on (or \\set\n> description foo, or whatever, as long as it's something), for example, in\n> your ~/.psqlrc (or in your .psqlrc-7.0.0 if you don't want to interfere\n> with the current version).\n\nOK. I noticed that the existance of an .psqlrc file causes an extra\nnewline to be printed on startup before the first prompt. Is that\nintentional?\n\n> \n> The reason for having descriptions off by default was that in a number of\n> views (I recall functions and operators), they don't fit on the screen\n> very nicely. On the other hand, the \\dd command always shows descriptions,\n> because it's sort of the built-in manual, but it doesn't show anything\n> else (argument types, etc.).\n\nGot it. Yes, much clearer for \\df and \\do. I noticed that using \\set\ndescription on and then using \\dT generates an error of:\n\t\n\ttest=> \\set description on\n\ttest=> \\dT\n\tERROR: Relation 'p' does not exist\n\n\nAlso, the \\set commands don't seem to complain about bad commands:\n\n\ttest=> \\set figgle\n\ttest=> \n\nIs that intentional?\n\n> \n> Read the fine (SGML) manual ;)\n> \n\nThat was part of my problem. I hadn't figured out how to generate html\nfrom the sgml ref stuff. I just spent some time and figured out I have\nto issue the 'make' command from the upper sgml directory because there\nis no Makefile in the sgml/ref directory. 
I can view them fine now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Nov 1999 13:19:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Failure to show descriptions in \\df command" } ]
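A rough sketch of Tom's suggestion from this thread - letting backslash
commands that never take parameters be immediately followed by another
backslash command, so that \p\g parses like \p \g. The command list and all
names here are assumptions for illustration, not psql's actual scanner code.

    #include <string.h>

    static int
    takes_no_parameters(const char *cmd, size_t len)
    {
        /* assumed set; the real list would come from psql's command table */
        static const char *noargs[] = {"p", "x", "q", NULL};
        int         i;

        for (i = 0; noargs[i] != NULL; i++)
            if (strlen(noargs[i]) == len && strncmp(cmd, noargs[i], len) == 0)
                return 1;
        return 0;
    }

    /*
     * Scan one backslash command word; "line" points just past the
     * leading backslash.  Returns a pointer to the end of the word.
     */
    static const char *
    scan_command_word(const char *line)
    {
        const char *p = line;

        while (*p != '\0' && *p != ' ' && *p != '\t')
        {
            /* a new '\\' ends the word early only after a parameterless command */
            if (*p == '\\' && p > line &&
                takes_no_parameters(line, (size_t) (p - line)))
                break;
            p++;
        }
        return p;
    }

So for the input p\g the scanner returns the word p, leaving \g to be picked
up as the next command; commands that do take parameters are never split
this way.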
[ { "msg_contents": "\n\nHi,\n\n in TODO is the item \"Allow compression of large fields or a \ncompressed field type\". It is good idea, but it prabably needs \nbinary field firstly (or not?). \n\n I see the inv_api and other LO routines, and my idea is add support \nfor bzip2 stream to the inv_api and allow in current LO routines used \ncompression. It si good idea? \n\n\t\t\t\t\t\tKarel Zak\n\n \n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n", "msg_date": "Thu, 11 Nov 1999 19:55:56 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "compression in LO and other fields" }, { "msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> I see the inv_api and other LO routines, and my idea is add support \n> for bzip2 stream to the inv_api and allow in current LO routines used \n> compression. It si good idea? \n\nLO is a dead end. What we really want to do is eliminate tuple-size\nrestrictions and then have large ordinary fields (probably of type\nbytea) in regular tuples. I'd suggest working on compression in that\ncontext, say as a new data type called \"bytez\" or something like that.\nbytez would act just like bytea except the on-disk representation would\nbe compressed. A compressed variant of type \"text\" would also be useful.\nIn the long run this will be much more useful and easier to work with\nthan adding another frammish to large objects.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 16:08:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "> LO is a dead end. What we really want to do is eliminate tuple-size\n> restrictions and then have large ordinary fields (probably of type\n> bytea) in regular tuples. I'd suggest working on compression in that\n> context, say as a new data type called \"bytez\" or something like that.\n\nIt sounds ideal but I remember that Vadim said inserting a 2GB record\nis not good idea since it will be written into the log too. If it's a\nnecessary limitation from the point of view of WAL, we have to accept\nit, I think.\n\nBTW, I still don't have enough time to run the huge sort tests on\n6.5.x. Probably I would have chance next week to do that...\n---\nTatsuo Ishii\n", "msg_date": "Fri, 12 Nov 1999 11:00:08 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "Tatsuo Ishii wrote:\n\n> > LO is a dead end. What we really want to do is eliminate tuple-size\n> > restrictions and then have large ordinary fields (probably of type\n> > bytea) in regular tuples. I'd suggest working on compression in that\n> > context, say as a new data type called \"bytez\" or something like that.\n>\n> It sounds ideal but I remember that Vadim said inserting a 2GB record\n> is not good idea since it will be written into the log too. 
If it's a\n> necessary limitation from the point of view of WAL, we have to accept\n> it, I think.\n\n Just in case someone wants to implement a complete compressed\n data type (including comparison functions, operators and\n a default operator class for indexing).\n\n I already made some tests with a type I called 'lztext'\n locally. Only the input-/output-functions exist so far and\n as the name might suggest, it would be an alternative for\n 'text'. It uses a simple but fast, byte oriented LZ backward\n pointing method. No Huffman coding or variable offset/size\n tagging. First byte of a chunk tells bitwise if the next\n following 8 items are raw bytes to copy or 12 bit offset, 4\n bit size copy information. That is max back offset 4096 and\n max match size 17 bytes. (A decompressor sketch for this\n layout appears at the end of this thread.)\n\n What made it my preferred method was the fact that\n decompression is done entirely using the already decompressed\n portion of the data, so it does not need any code tables or\n the like at that time.\n\n It is really FASTEST on decompression, which I assume would\n be the most often used operation on huge data types. With\n some care, comparison could be done on the fly while\n decompressing two values, so that the entire comparison can\n be aborted at the occurrence of the first difference.\n\n The compression rates aren't that gigantic. I've got 30-50%\n for rule plan strings (size limit on views!!!). And the\n method used only allows for buffer back references of 4K\n offsets at most, so the rate will not grow for larger data\n chunks. That's a heavy tradeoff between compression rate and\n no memory leakage for sure and speed, I know, but I prefer\n not to force it, instead I usually use a bigger hammer (the\n tuple size limit is still our original problem - and another\n IBM 72GB disk doing 22-37 MB/s will make any compressing data\n type obsolete then).\n\n Sorry for the compression specific slang here. Well, anyone\n interested in the code?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Nov 1999 04:32:58 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "> The compression rates aren't that gigantic. I've got 30-50%\n> for rule plan strings (size limit on views!!!). And the\n> method used only allows for buffer back references of 4K\n> offsets at most, so the rate will not grow for larger data\n> chunks. That's a heavy tradeoff between compression rate and\n> no memory leakage for sure and speed, I know, but I prefer\n> not to force it, instead I usually use a bigger hammer (the\n> tuple size limit is still our original problem - and another\n> IBM 72GB disk doing 22-37 MB/s will make any compressing data\n> type obsolete then).\n> \n> Sorry for the compression specific slang here. Well, anyone\n> interested in the code?\n\nIn contrib?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Nov 1999 22:50:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> LO is a dead end. What we really want to do is eliminate tuple-size\n>> restrictions and then have large ordinary fields (probably of type\n>> bytea) in regular tuples. I'd suggest working on compression in that\n>> context, say as a new data type called \"bytez\" or something like that.\n\n> It sounds ideal but I remember that Vadim said inserting a 2GB record\n> is not a good idea since it will be written into the log too. If it's a\n> necessary limitation from the point of view of WAL, we have to accept\n> it, I think.\n\nLO won't make that any better: the data still goes into a table.\nYou'd have 2GB worth of WAL entries either way.\n\nThe only thing LO would do for you is divide the data into block-sized\ntuples, so there would be a bunch of little WAL entries instead of one\nbig one. But that'd probably be easy to duplicate too. If we implement\nbig tuples by chaining together disk-block-sized segments, which seems\nlike the most likely approach, couldn't WAL log each segment as a\nseparate log entry? If so, there's almost no difference between LO and\ninline field for logging purposes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 01:14:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "\nOn Fri, 12 Nov 1999, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> >> LO is a dead end. What we really want to do is eliminate tuple-size\n> >> restrictions and then have large ordinary fields (probably of type\n> >> bytea) in regular tuples. I'd suggest working on compression in that\n> >> context, say as a new data type called \"bytez\" or something like that.\n\n--- cut ---\n> \n> The only thing LO would do for you is divide the data into block-sized\n> tuples, so there would be a bunch of little WAL entries instead of one\n> big one. But that'd probably be easy to duplicate too. If we implement\n> big tuples by chaining together disk-block-sized segments, which seems\n> like the most likely approach, couldn't WAL log each segment as a\n> separate log entry? If so, there's almost no difference between LO and\n> inline field for logging purposes.\n> \n\nI'm not sure that LO is a dead end for every user. Big (blob) fields\ngo through the SQL engine, but why - if I don't need to use this data as\ntypical SQL data (I don't need to index or search in, for example, gif\nfiles)? It would be a pity if LO development went down. I still think\nthat LO compression is not a bad idea :-)\n\nOther eventual compression questions:\n\n* some applications allow the use of a compressed stream over slow networks \n between client<->server - what about PostgreSQL? 
\n\n* MySQL's dump can make a compressed dump file directly, which is good -\n what about PostgreSQL?\n\n\t\t\t\t\t\tKarel\n\n \n\n", "msg_date": "Fri, 12 Nov 1999 10:16:21 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "\nOn Fri, 12 Nov 1999, Jan Wieck wrote:\n\n> Just in case someone wants to implement a complete compressed\n> data type (including comparison functions, operators and\n> a default operator class for indexing).\n> \n> I already made some tests with a type I called 'lztext'\n> locally. Only the input-/output-functions exist so far and\n> as the name might suggest, it would be an alternative for\n> 'text'. It uses a simple but fast, byte oriented LZ backward\n> pointing method. No Huffman coding or variable offset/size\n> tagging. First byte of a chunk tells bitwise if the next\n> following 8 items are raw bytes to copy or 12 bit offset, 4\n> bit size copy information. That is max back offset 4096 and\n> max match size 17 bytes.\n\nIs it your original implementation, or do you use some existing compression \ncode? I tried bzip2, but the output from this algorithm is totally binary, and \nI don't know how to use this in PgSQL if all the backend routines\n(in/out) use *char (yes, I'm a newbie at PgSQL hacking :-). \n\n> \n> What made it my preferred method was the fact that\n> decompression is done entirely using the already decompressed\n> portion of the data, so it does not need any code tables or\n> the like at that time.\n> \n> It is really FASTEST on decompression, which I assume would\n> be the most often used operation on huge data types. With\n> some care, comparison could be done on the fly while\n> decompressing two values, so that the entire comparison can\n> be aborted at the occurrence of the first difference.\n> \n> The compression rates aren't that gigantic. I've got 30-50%\n\nIsn't it a problem that your implementation compresses all the data at once?\nTypically compression uses a stream, and compresses only a small buffer \nin each cycle.\n\n> for rule plan strings (size limit on views!!!). And the\n> method used only allows for buffer back references of 4K\n> offsets at most, so the rate will not grow for larger data\n> chunks. That's a heavy tradeoff between compression rate and\n> no memory leakage for sure and speed, I know, but I prefer\n> not to force it, instead I usually use a bigger hammer (the\n> tuple size limit is still our original problem - and another\n> IBM 72GB disk doing 22-37 MB/s will make any compressing data\n> type obsolete then).\n> \n> Sorry for the compression specific slang here. Well, anyone\n> interested in the code?\n\nYes, for me - I'm finishing the to_char()/to_data() ora-compatible routines \n(Thomas, you still quiet?) and this is a new appeal for me :-)\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 12 Nov 1999 10:38:55 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "> > It sounds ideal but I remember that Vadim said inserting a 2GB record\n> > is not a good idea since it will be written into the log too. If it's a\n> > necessary limitation from the point of view of WAL, we have to accept\n> > it, I think.\n> \n> LO won't make that any better: the data still goes into a table.\n> You'd have 2GB worth of WAL entries either way.\n\nWhat I had in mind was an LO that is not under transaction control. 
I\nwould not say this is a good thing, but I'm afraid we might need this\nkind of beast in WAL.\n\n> The only thing LO would do for you is divide the data into block-sized\n> tuples, so there would be a bunch of little WAL entries instead of one\n> big one. But that'd probably be easy to duplicate too. If we implement\n> big tuples by chaining together disk-block-sized segments, which seems\n> like the most likely approach, couldn't WAL log each segment as a\n> separate log entry? If so, there's almost no difference between LO and\n> inline field for logging purposes.\n\nRight.\n\nBTW, does anybody know how BLOBs are handled by WAL in commercial\nDBMSs?\n---\nTatsuo Ishii\n", "msg_date": "Fri, 12 Nov 1999 22:42:22 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "Karel Zak - Zakkr wrote:\n\n> On Fri, 12 Nov 1999, Jan Wieck wrote:\n>\n> > I already made some tests with a type I called 'lztext'\n> > locally. Only the input-/output-functions exist so far and\n>\n> I is your original implementation or you use any current compression\n> code? I try bzip2, but output from this algorithm is total binary,\n> I don't know how this use in PgSQL if in backend are all routines\n> (in/out) use *char (yes, I'am newbie for PgSQL hacking:-).\n\n The internal storage format is based on an article I found\n at:\n\n http://www.neutralzone.org/home/faqsys/docs/slz_art.txt\n\n Simple Compression using an LZ buffer\n Part 3 Revision 1.d:\n An introduction to compression on the Amiga by Adisak Pochanayon\n\n Freely Distributable as long as reproduced completely.\n Copyright 1993 Adisak Pochanayon\n\n I've written the code from scratch.\n\n The internal representation is binary, for sure. It's a\n PostgreSQL variable length data format as usual.\n\n I don't know if there's a compression library available that\n fit's our need. First and most important it must have a\n license that permits us to include it in the distribution\n under our existing license. Second it's implementation must\n not cause any problems in the backend like memory leakage or\n the like.\n\n> > The compression rates aren't that giantic. I've got 30-50%\n>\n> Not is problem, that your implementation compress all data at once?\n> Typically compression use a stream, and compress only small a buffer\n> in any cycle.\n\n No, that's no problem. On type input, the original value is\n completely in memory given as a char*, and the internal\n representation is returned as a palloc()'d Datum. For output\n it's vice versa.\n\n O.K. some details on the compression rate. I've used 112\n .html files with a total size of 1188346 bytes this time.\n The smallest one was 131 bytes, the largest one 114549 bytes\n and most of the files are somewhere between 3-12K.\n\n Compression results on the binary level are:\n\n gzip -9 outputs 398180 bytes (66.5% rate)\n\n gzip -1 outputs 447597 bytes (62.3% rate)\n\n my code outputs 529420 bytes (55.4% rate)\n\n Html input might be somewhat optimal for Adisak's storage\n format, but taking into account that my source implementing\n the type input and output functions is smaller than 600\n lines, I think 11% difference to a gzip -9 is a good result\n anyway.\n\n> > Sorry for the compression specific slang here. Well, anyone\n> > interested in the code?\n>\n> Yes, for me - I finish to_char()/to_data() ora compatible routines\n> (Thomas, you still quiet?) 
and this is new appeal for me :-)\n\n Bruce suggested the contrib area, but I'm not sure if that's\n the right place. If it goes into the distribution at all, I'd\n like to use this data type for rule plan strings and function\n source text in the system catalogs. I don't expect we'll have\n a general solution for tuples split across multiple blocks\n for v7.0. And using lztext for rules and function sources\n would lower some FRP's. But using it in the catalogs requires\n to be builtin.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Nov 1999 14:58:32 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Html input might be somewhat optimal for Adisak's storage\n> format, but taking into account that my source implementing\n> the type input and output functions is smaller than 600\n> lines, I think 11% difference to a gzip -9 is a good result\n> anyway.\n\nThese strike me as very good results. I'm not at all sure that using\ngzip or bzip would give much better results in practice in Postgres,\nbecause those compressors are optimized for relatively large files,\nwhereas a compressed-field datatype would likely be getting relatively\nsmall field values to work on. (So your test data set is probably a\ngood one for our purposes --- do the numbers change if you exclude\nall the files over, say, 10K?)\n\n> Bruce suggested the contrib area, but I'm not sure if that's\n> the right place. If it goes into the distribution at all, I'd\n> like to use this data type for rule plan strings and function\n> source text in the system catalogs.\n\nRight, if we are going to bother with it at all, we should put it\ninto the core so that we can use it for rule plans.\n\n> I don't expect we'll have\n> a general solution for tuples split across multiple blocks\n> for v7.0.\n\nI haven't given up hope of that yet --- but even if we do, compressing\nthe data is an attractive choice to reduce the frequency with which\ntuples must be split across blocks.\n\n\nIt occurred to me last night that applying compression to individual\nfields might not be the best approach. Certainly a \"bytez\" data type\nis the easiest thing to fit into the existing system, but it's leaving\nsome space savings on the table. What about compressing the *whole*\ndata contents of a tuple on-disk, as a single entity? That should save\nmore space than field-by-field compression. It could be triggered in\nthe tuple storage routines whenever the uncompressed size exceeds some\nthreshold. (We'd need a flag in the tuple header to indicate compressed\ndata, but I think there are bits to spare.) When we get around to\nhaving split tuples, the code would still be useful because it'd be\napplied as a first resort before splitting a large tuple; it'd reduce\nthe frequency of splits and the number of sections big tuples get split\ninto. All automatic and transparent, too --- the user doesn't have to\nchange data declarations at all.\n\nAlso, if we do it that way, then it would *automatically* apply to\nboth regular tuples and LO, because the current LO implementation is\njust tuples. 
(Tatsuo's idea of a non-transaction-controlled LO would\nneed extra work, of course, if we decide that's a good idea...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 09:44:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "On Fri, 12 Nov 1999, Jan Wieck wrote:\n\n> I don't know if there's a compression library available that\n> fit's our need. First and most important it must have a\n> license that permits us to include it in the distribution\n> under our existing license. Second it's implementation must\n> not cause any problems in the backend like memory leakage or\n> the like.\n\nIs this something that could be a configure option? Put the stubs in\nplace, and if someone wants to enable that feature, they can install the\ncompression library first and run with it?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 12 Nov 1999 10:49:31 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Html input might be somewhat optimal for Adisak's storage\n> > format, but taking into account that my source implementing\n> > the type input and output functions is smaller than 600\n> > lines, I think 11% difference to a gzip -9 is a good result\n> > anyway.\n>\n> These strike me as very good results. I'm not at all sure that using\n> gzip or bzip would give much better results in practice in Postgres,\n> because those compressors are optimized for relatively large files,\n> whereas a compressed-field datatype would likely be getting relatively\n> small field values to work on. (So your test data set is probably a\n> good one for our purposes --- do the numbers change if you exclude\n> all the files over, say, 10K?)\n\n Will give it a try.\n\n> It occurred to me last night that applying compression to individual\n> fields might not be the best approach. Certainly a \"bytez\" data type\n> is the easiest thing to fit into the existing system, but it's leaving\n> some space savings on the table. What about compressing the *whole*\n> data contents of a tuple on-disk, as a single entity? That should save\n> more space than field-by-field compression.\n\n But it requires decompression of every tuple into palloc()'d\n memory during heap access. AFAIK, the heap access routines\n currently return a pointer to the tuple inside the shm\n buffer. Don't know what it's performance impact would be.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Nov 1999 15:50:02 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "> Yes, for me - I finish to_char()/to_data() ora compatible routines\n> (Thomas, you still quiet?) and this is new appeal for me :-)\n\nAck! When I saw this I rolled up my primary Netscape window and found\nyou're almost-completed reply underneath. 
I've sent it along now.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 12 Nov 1999 14:54:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Marc G. Fournier wrote:\n\n> On Fri, 12 Nov 1999, Jan Wieck wrote:\n>\n> > I don't know if there's a compression library available that\n> > fit's our need. First and most important it must have a\n> > license that permits us to include it in the distribution\n> > under our existing license. Second it's implementation must\n> > not cause any problems in the backend like memory leakage or\n> > the like.\n>\n> Is this something that could be a configure option? Put the stubs in\n> place, and if someone wants to enable that feature, they can install the\n> compression library first and run with it?\n\n If using the new type in system catalogs, the option could\n only be what kind of compression to use. And we need our own\n default compression code shipped anyway.\n\n Of course, it could depend on the config what types are used\n in the syscat. But making the catalog headers things that are\n shipped as a .in isn't really that good IMHO.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Nov 1999 15:57:04 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": ">\n>BTW, does anybody know how BLOBs are handled by WAL in commercial\n>DBMSs?\n\nDec/RDB stores them in it's equivalent of the WAL; full rollback etc is\nsupported. If you load lots of large blobs, you need to make sure you have\nenough disk space for the journal copies.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n\n", "msg_date": "Sat, 13 Nov 1999 02:07:23 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> It occurred to me last night that applying compression to individual\n>> fields might not be the best approach. Certainly a \"bytez\" data type\n>> is the easiest thing to fit into the existing system, but it's leaving\n>> some space savings on the table. What about compressing the *whole*\n>> data contents of a tuple on-disk, as a single entity? That should save\n>> more space than field-by-field compression.\n\n> But it requires decompression of every tuple into palloc()'d\n> memory during heap access. AFAIK, the heap access routines\n> currently return a pointer to the tuple inside the shm\n> buffer. Don't know what it's performance impact would be.\n\nGood point, but the same will be needed when a tuple is split across\nmultiple blocks. 
I would expect that (given a reasonably fast\ndecompressor) there will be a net performance *gain* due to having\nless disk I/O to do. Also, this won't be happening for \"every\" tuple,\njust those exceeding a size threshold --- we'd be able to tune the\nthreshold value to trade off speed and space.\n\nOne thing that does occur to me is that we need to store the\nuncompressed as well as the compressed data size, so that the\nworking space can be palloc'd before starting the decompression.\n\nAlso, in case it wasn't clear, I was envisioning leaving the tuple\nheader uncompressed, so that time quals etc can be checked before\ndecompressing the tuple data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 10:12:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "\nOn Fri, 12 Nov 1999, Jan Wieck wrote:\n\n> \n> I don't know if there's a compression library available that\n> fit's our need. First and most important it must have a\n> license that permits us to include it in the distribution\n\n IMHO the bzip2 compression algorithm is free and open source - README\nfrom the bzip2 source:\n\n\"bzip2-0.9.5 is distributed under a BSD-style license. For details,\nsee the file LICENSE\"\n\n\n> Bruce suggested the contrib area, but I'm not sure if that's\n> the right place. If it goes into the distribution at all, I'd\n\nIs there any space (on the postgresql ftp?) for this unstable code? A good \nproject has an incoming ftp area for devel. versions...\n\nIf you don't move this code to contrib, please send it to me (as a patch?). \n\n\t\t\t\t\t\tKarel\n\n\n", "msg_date": "Fri, 12 Nov 1999 16:18:30 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n>\n> > But it requires decompression of every tuple into palloc()'d\n> > memory during heap access. AFAIK, the heap access routines\n> > currently return a pointer to the tuple inside the shm\n> > buffer. Don't know what it's performance impact would be.\n>\n> Good point, but the same will be needed when a tuple is split across\n> multiple blocks. I would expect that (given a reasonably fast\n> decompressor) there will be a net performance *gain* due to having\n> less disk I/O to do. Also, this won't be happening for \"every\" tuple,\n> just those exceeding a size threshold --- we'd be able to tune the\n> threshold value to trade off speed and space.\n\n Right, this time it's your good point. All of the problems\n will be there on tuple split implementation.\n\n The major problem I see is that a palloc()'d tuple should be\n pfree()'d after the fetcher is done with it. Since they are\n actually in a buffer, the fetcher doesn't have to care.\n\n> One thing that does occur to me is that we need to store the\n> uncompressed as well as the compressed data size, so that the\n> working space can be palloc'd before starting the decompression.\n\n Yepp - and I'm doing so. Only during compression the result\n size isn't known. But there is a well known maximum, that is\n the header overhead plus the data size times 1.125 plus 2 bytes\n (totally worst case on uncompressable data). 
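(To spell that maximum out: with one control byte governing every 8 items,
N bytes that do not compress at all are emitted as N literal bytes plus
ceil(N/8) control bytes, i.e. N * 1.125 rounded up, plus the 2 extra bytes
mentioned above. As a sketch - the macro name is illustrative, not the
actual lztext code:

    #define LZ_MAX_COMPRESSED(srclen)  ((srclen) + ((srclen) + 7) / 8 + 2)

plus the header overhead gives a safe palloc() size before compressing.)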
And a general\n mechanism working on the tuple level would fallback to store\n uncompressed data in the case the compressed size is bigger.\n\n> Also, in case it wasn't clear, I was envisioning leaving the tuple\n> header uncompressed, so that time quals etc can be checked before\n> decompressing the tuple data.\n\n Of course.\n\n Well, you asked for the rates on the smaller html files only.\n 78 files, 131 bytes min, 10000 bytes max, 4582 bytes avg,\n 357383 bytes total.\n\n gzip -9 outputs 145659 bytes (59.2%)\n gzip -1 outputs 155113 bytes (56.6%)\n my code outputs 184109 bytes (48.5%)\n\n 67 files, 2000 bytes min, 10000 bytes max, 5239 bytes avg,\n 351006 bytes total.\n\n gzip -9 outputs 141772 bytes (59.6%)\n gzip -1 outputs 151150 bytes (56.9%)\n my code outputs 179428 bytes (48.9%)\n\n The threshold will surely be a tuning parameter of interest.\n Another tuning option must be to allow/deny compression per\n table at all. Then we could have both options, using a\n compressing field type to define which portion of a tuple to\n compress, or allow to compress the entire tuples.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Nov 1999 16:41:10 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "On Fri, Nov 12, 1999 at 10:16:21AM +0100, Karel Zak - Zakkr wrote:\n> \n> \n> Other eventual compression questions:\n> \n> * MySQL dump allow make compressed dump file, it is good, and PostgreSQL?\n> \n\npg_dump database | gzip >db.out.gz\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Fri, 12 Nov 1999 09:59:55 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "\n\nOn Fri, 12 Nov 1999, Ross J. Reedstrom wrote:\n\n> On Fri, Nov 12, 1999 at 10:16:21AM +0100, Karel Zak - Zakkr wrote:\n> > \n> > \n> > Other eventual compression questions:\n> > \n> > * MySQL dump allow make compressed dump file, it is good, and PostgreSQL?\n> > \n> \n> pg_dump database | gzip >db.out.gz\n\n\n Thank... :-)) \n\n But mysqldump --compress is very nice.\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Fri, 12 Nov 1999 17:15:05 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> The major problem I see is that a palloc()'d tuple should be\n> pfree()'d after the fetcher is done with it. Since they are\n> in buffer actually, the fetcher doesn't have to care.\n\nI think this may not be as big a problem as it looks. Most places\nin the executor keep tuples in TupleTableSlots, which are responsible\nfor pfree'ing the tuple if (and only if) necessary; all that code is\nready for this change already. There are probably some routines in\nheapam/indexam that assume they only work with tuples that never need\nto be freed, but I don't think the fixes will be pervasive. 
And we're\ngoing to have to do that work in any case to support big tuples\n(assuming we do it by splitting tuples into segments that fit in disk\npages).\n\n> And a general\n> mechanism working on the tuple level would fallback to store\n> uncompressed data in the case the compressed size is bigger.\n\nRight. Another possible place for speed-vs-space tuning would be to\nstore the uncompressed representation unless the compressed version is\nat least X percent smaller, not just at-least-one-byte smaller.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 11:19:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "> > Bruce suggested the contrib area, but I'm not sure if that's\n> > the right place. If it goes into the distribution at all, I'd\n> > like to use this data type for rule plan strings and function\n> > source text in the system catalogs.\n> \n> Right, if we are going to bother with it at all, we should put it\n> into the core so that we can use it for rule plans.\n\nAgreed. I suggested contrib so the code doesn't just disappear.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Nov 1999 12:10:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "At 05:15 PM 11/12/99 +0100, Karel Zak - Zakkr wrote:\n\n>> pg_dump database | gzip >db.out.gz\n\n> Thank... :-)) \n\n> But mysqldump --compress is very nice.\n\nWhy add functionality for something that can be done so\neasily by piping the output of pg_dump?\n\nThis is exactly the kind of coupling of tools that pipes\nwere invented for.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 12 Nov 1999 10:09:57 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Of course.\n> \n> Well, you asked for the rates on the smaller html files only.\n> 78 files, 131 bytes min, 10000 bytes max, 4582 bytes avg,\n> 357383 bytes total.\n> \n> gzip -9 outputs 145659 bytes (59.2%)\n> gzip -1 outputs 155113 bytes (56.6%)\n> my code outputs 184109 bytes (48.5%)\n> \n> 67 files, 2000 bytes min, 10000 bytes max, 5239 bytes avg,\n> 351006 bytes total.\n> \n> gzip -9 outputs 141772 bytes (59.6%)\n> gzip -1 outputs 151150 bytes (56.9%)\n> my code outputs 179428 bytes (48.9%)\n> \n> The threshold will surely be a tuning parameter of interest.\n> Another tuning option must be to allow/deny compression per\n> table at all. Then we could have both options, using a\n> compressing field type to define which portion of a tuple to\n> compress, or allow to compress the entire tuples.\n\nThe next step would be tweaking the costs for sequential scans vs.\nindex scans.\n\nI guess that the indexes would stay uncompressed ?\n\n------\nHannu\n", "msg_date": "Sat, 13 Nov 1999 00:02:54 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 05:15 PM 11/12/99 +0100, Karel Zak - Zakkr wrote:\n> \n> >> pg_dump database | gzip >db.out.gz\n> \n> > Thank... :-))\n> \n> > But mysqldump --compress is very nice.\n> \n> Why add functionality for something that can be done so\n> easily by piping the output of pg_dump?\n> \n> This is exactly the kind of coupling of tools that pipes\n> were invented for.\n\nExactly!\n\nAnother version of the same is used when dumping databases \nbigger than 2GB (or whatever the file system size limit is)\n\njust do:\n\npg_dump database | gzip | split -b 1000000000 db.out.gz.\n\nand restore it using\n\ncat db.out.gz* | gunzip | psql\n\nyou could also do other fancy things with your dumps - send \nthem directly to tape storage in Japan or whatever ;)\n\nIf you need that functionality often enough then write a shell \nscript.\n\n---------------\nHannu\n", "msg_date": "Sat, 13 Nov 1999 00:11:12 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "> \n> Jan Wieck wrote:\n> > \n> > Of course.\n> > \n> > Well, you asked for the rates on the smaller html files only.\n> > 78 files, 131 bytes min, 10000 bytes max, 4582 bytes avg,\n> > 357383 bytes total.\n> > \n> > gzip -9 outputs 145659 bytes (59.2%)\n> > gzip -1 outputs 155113 bytes (56.6%)\n> > my code outputs 184109 bytes (48.5%)\n> > \n> > 67 files, 2000 bytes min, 10000 bytes max, 5239 bytes avg,\n> > 351006 bytes total.\n> > \n> > gzip -9 outputs 141772 bytes (59.6%)\n> > gzip -1 outputs 151150 bytes (56.9%)\n> > my code outputs 179428 bytes (48.9%)\n> > \n> > The threshold will surely be a tuning parameter of interest.\n> > Another tuning option must be to allow/deny compression per\n> > table at all. 
Then we could have both options, using a\n> compressing field type to define which portion of a tuple to\n> compress, or allow to compress the entire tuples.\n\nThe next step would be tweaking the costs for sequential scans vs.\nindex scans.\n\nI guess that the indexes would stay uncompressed ?\n\n------\nHannu\n", "msg_date": "Sat, 13 Nov 1999 00:02:54 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 05:15 PM 11/12/99 +0100, Karel Zak - Zakkr wrote:\n> \n> >> pg_dump database | gzip >db.out.gz\n> \n> > Thank... :-))\n> \n> > But mysqldump --compress is very nice.\n> \n> Why add functionality for something that can be done so\n> easily by piping the output of pg_dump?\n> \n> This is exactly the kind of coupling of tools that pipes\n> were invented for.\n\nExactly !\n\nAnother version of the same is used when dumping databases \nbigger than 2GB (or whatever the file system size limit is)\n\njust do:\n\npg_dump database | gzip | split -b 1000000000 db.out.gz.\n\nand restore it using\n\ncat db.out.gz* | gunzip > psql\n\nyou could also do other fancy things with your dumps - send \nthem directly to tape storage in Japan or whatever ;)\n\nIf you need that functionality often enough then write a shell \nscript.\n\n---------------\nHannu\n", "msg_date": "Sat, 13 Nov 1999 00:11:12 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "> \n> Jan Wieck wrote:\n> > \n> > Of course.\n> > \n> > Well, you asked for the rates on the smaller html files only.\n> > 78 files, 131 bytes min, 10000 bytes max, 4582 bytes avg,\n> > 357383 bytes total.\n> > \n> > gzip -9 outputs 145659 bytes (59.2%)\n> > gzip -1 outputs 155113 bytes (56.6%)\n> > my code outputs 184109 bytes (48.5%)\n> > \n> > 67 files, 2000 bytes min, 10000 bytes max, 5239 bytes avg,\n> > 351006 bytes total.\n> > \n> > gzip -9 outputs 141772 bytes (59.6%)\n> > gzip -1 outputs 151150 bytes (56.9%)\n> > my code outputs 179428 bytes (48.9%)\n> > \n> > The threshold will surely be a tuning parameter of interest.\n> > Another tuning option must be to allow/deny compression per\n> > table at all. Then we could have both options, using a\n> > compressing field type to define which portion of a tuple to\n> > compress, or allow to compress the entire tuples.\n> \n> The next step would be tweaking the costs for sequential scans vs.\n> index scans.\n> \n> I guess that the indexes would stay uncompressed ?\n> \n> ------\n> Hannu\n> \n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n", "msg_date": "Sat, 13 Nov 1999 02:43:47 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "On Fri, 12 Nov 1999, Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> It occurred to me last night that applying compression to individual\n> >> fields might not be the best approach. Certainly a \"bytez\" data type\n> >> is the easiest thing to fit into the existing system, but it's leaving\n> >> some space savings on the table. 
What about compressing the *whole*\n> >> data contents of a tuple on-disk, as a single entity? That should save\n> >> more space than field-by-field compression.\n> \n> > But it requires decompression of every tuple into palloc()'d\n> > memory during heap access. AFAIK, the heap access routines\n> > currently return a pointer to the tuple inside the shm\n> > buffer. Don't know what it's performance impact would be.\n> \n> Good point, but the same will be needed when a tuple is split across\n> multiple blocks. I would expect that (given a reasonably fast\n> decompressor) there will be a net performance *gain* due to having\n> less disk I/O to do. \n\n\tRight now, we're dealing in theory...my concern is what Jan points\nout \"what it's performance impact would be\"...how much harder would it\nbe to extend our \"CREATE TABLE\" syntax to do something like:\n\nCREATE TABLE classname ( .. ) compressed;\n\n\tOr something similar? Something that leaves the ability to do\nthis in the core, but makes the use of this the choice of the admin? \n\n\t*Assuming* that I'm also reading this thread correctly, it should\nalmost be extended into \"ALTER TABLE classname SET COMPRESSED on;\", or\nsomething like that. Where all new records are *written* compressed (or\nuncompressed), but any read checks if compressed size == uncompressed\nsize, and decompresses accordingly...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 12 Nov 1999 22:02:10 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "Ech - wrong key :-)\n\nHannu Krosing wrote:\n\n> Jan Wieck wrote:\n> >\n>\n> The next step would be tweaking the costs for sequential scans vs.\n> index scans.\n>\n> I guess that the indexes would stay uncompressed ?\n\n    I'm sure about this. On a database of significant size,\n    anyone indexing a field with a possible size over 100 bytes\n    is doing something wrong (and only idiots go above 500\n    bytes). They are IMPLEMENTING a not well thought out database\n    DESIGN. A database engine should support indices on bigger\n    fields, but it's still a bad schema and thus idiotic.\n\n    Currently, we don't check the size of indexed fields. And the\n    only problems I've seen with it were some reports that huge\n    PL functions could not be created because there was an unused\n    (idiotic) index on the prosrc attribute and they exceeded the\n    4K limit for index tuples. I've removed this index already in\n    the v7.0 tree. The ?bug? in the btree code, failing to split\n    a page if the key values exceed 4K, is still there. But I\n    don't think anyone really cares for it.\n\n    Thus, I assume there aren't many idiots out there. And I\n    don't expect that anyone would ever create an index on a\n    compressed data type.\n\n    ?bug? -> The difference between a bug and a feature is\n    DOCUMENTATION. Thomas, would you please add this limit on\n    index tuples to the doc's so we have a new FEATURE to tell in\n    the v7.0 announcement?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sat, 13 Nov 1999 03:25:02 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Marc G. Fournier wrote:\n\n> On Fri, 12 Nov 1999, Tom Lane wrote:\n> > [email protected] (Jan Wieck) writes:\n> > > Tom Lane wrote:\n\n> Right now, we're dealing theory...my concern is what Jan points\n> out \"what it's performance impact would be\"...would much harder would it\n> be to extent our \"CREATE TABLE\" syntax to do something like:\n>\n> CREATE TABLE classname ( .. ) compressed;\n>\n> Or something similar? Something that leaves the ability to do\n> this in the core, but makes the use of this the choice of the admin?\n\n Yepp, exactly that's what I meant with making tuple\n compression a per table option. Obviously, ALTER TABLE ...\n must be supported too - that's simply a parser -> utility ->\n flip flag in pg_class thing (90% cut&paste).\n\n I think the part on deciding what to compress is easy,\n because the flag telling if heap access should try to\n compress a tuple on append (i.e. INSERT or UPDATE) has to be\n in pg_class. And the content of a relations pg_class entry is\n somewhere below the Relation struct (thus already known after\n heap_open).\n\n The idea was to use another bit in the tuple header to tell\n if an existing heap tuple's data is compressed or not. So the\n heap fetching allways looks at the bit in the tuple header,\n and the heap appending looks at the flag in the relation\n pointer. That's exactly what you want, no?\n\n The major part is to make all callers of heap_fetch() and\n sisters treat in memory decompressed (or from block split\n reconstructed) tuples the right way.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sat, 13 Nov 1999 03:51:48 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> The idea was to use another bit in the tuple header to tell\n> if an existing heap tuple's data is compressed or not. So the\n> heap fetching allways looks at the bit in the tuple header,\n> and the heap appending looks at the flag in the relation\n> pointer. That's exactly what you want, no?\n\nRight. Compressed tuples must be unambiguously marked as such on-disk.\nWhether to compress a tuple when writing it out is a decision that\ncan be made on-the-fly, using strategies that could change from time\nto time, without invalidating the data that's already out there or\naffecting the tuple-reading code.\n\nIf we choose to provide also a way of compressing individual fields\nrather than whole tuples, it would be good to provide the same\nflexibility at the field level. Some tuples might contain the field\nin compressed form, some in uncompressed form. 
The reading logic\nshould not need to be aware of the way that the writing logic chooses\nwhich to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 22:45:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields " }, { "msg_contents": "\n\nTom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> >> LO is a dead end. What we really want to do is eliminate tuple-size\n> >> restrictions and then have large ordinary fields (probably of type\n> >> bytea) in regular tuples. I'd suggest working on compression in that\n> >> context, say as a new data type called \"bytez\" or something like that.\n>\n> > It sounds ideal but I remember that Vadim said inserting a 2GB record\n> > is not good idea since it will be written into the log too. If it's a\n> > necessary limitation from the point of view of WAL, we have to accept\n> > it, I think.\n>\n> LO won't make that any better: the data still goes into a table.\n> You'd have 2GB worth of WAL entries either way.\n>\n> The only thing LO would do for you is divide the data into block-sized\n> tuples, so there would be a bunch of little WAL entries instead of one\n> big one. But that'd probably be easy to duplicate too. If we implement\n> big tuples by chaining together disk-block-sized segments, which seems\n> like the most likely approach, couldn't WAL log each segment as a\n> separate log entry? If so, there's almost no difference between LO and\n> inline field for logging purposes.\n>\n\nI don't know LO well.\nBut seems LO allows partial update.\nBig tuples\nIf so,isn't it a significant difference ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Tue, 16 Nov 1999 17:41:22 +0900", "msg_from": "inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" }, { "msg_contents": "Sorry,\nI sent a mail by mistake.\nIgnore my previous mail.\n\ninoue wrote:\n\n> Tom Lane wrote:\n>\n> > Tatsuo Ishii <[email protected]> writes:\n> > >> LO is a dead end. What we really want to do is eliminate tuple-size\n> > >> restrictions and then have large ordinary fields (probably of type\n> > >> bytea) in regular tuples. I'd suggest working on compression in that\n> > >> context, say as a new data type called \"bytez\" or something like that.\n> >\n> > > It sounds ideal but I remember that Vadim said inserting a 2GB record\n> > > is not good idea since it will be written into the log too. If it's a\n> > > necessary limitation from the point of view of WAL, we have to accept\n> > > it, I think.\n> >\n> > LO won't make that any better: the data still goes into a table.\n> > You'd have 2GB worth of WAL entries either way.\n> >\n> > The only thing LO would do for you is divide the data into block-sized\n> > tuples, so there would be a bunch of little WAL entries instead of one\n> > big one. But that'd probably be easy to duplicate too. If we implement\n> > big tuples by chaining together disk-block-sized segments, which seems\n> > like the most likely approach, couldn't WAL log each segment as a\n> > separate log entry? 
If so, there's almost no difference between LO and\n> > inline field for logging purposes.\n> >\n>\n> I don't know LO well.\n> But seems LO allows partial update.\n> Big tuples\n> If so,isn't it a significant difference ?\n>\n> Regards.\n>\n> Hiroshi Inoue\n> [email protected]\n\n", "msg_date": "Tue, 16 Nov 1999 17:45:43 +0900", "msg_from": "inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compression in LO and other fields" } ]
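To make the write-side policy in this thread concrete, here is a minimal sketch of the decision Tom and Jan converge on: try to compress the tuple body, keep the result only if it beats a savings threshold, and record the outcome in a per-tuple header flag so readers never need to know the policy. All names here (pg_compress, HEAP_COMPRESSED, prepare_tuple_data, the 10 percent threshold) are illustrative assumptions, not actual backend symbols.

    #include <stdlib.h>

    #define HEAP_COMPRESSED 0x8000   /* hypothetical t_infomask bit */
    #define MIN_SAVINGS_PCT 10       /* keep compressed form only if
                                      * roughly 10% smaller */

    /* Stand-in for whatever compressor gets chosen; returns the
     * compressed size, or 0 if the input cannot be squeezed into
     * dstlen bytes. */
    extern size_t pg_compress(const char *src, size_t srclen,
                              char *dst, size_t dstlen);

    /* Decide at write time what to store, flagging the choice for readers. */
    char *
    prepare_tuple_data(const char *data, size_t len,
                       size_t *outlen, unsigned *infomask)
    {
        size_t  target = len - (len * MIN_SAVINGS_PCT) / 100;
        char   *buf = malloc(target);
        size_t  clen = buf ? pg_compress(data, len, buf, target) : 0;

        if (clen > 0)
        {
            *infomask |= HEAP_COMPRESSED;   /* readers test only this bit */
            *outlen = clen;
            return buf;
        }
        free(buf);                          /* not enough savings: store raw */
        *infomask &= ~HEAP_COMPRESSED;
        *outlen = len;
        return (char *) data;
    }

Because the flag travels with each tuple, the threshold (or the whole policy) can change later without invalidating tuples already on disk, which is exactly the property Tom argues for above.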
[ { "msg_contents": "Hi\n\nI have a single table with two views. The table effectively contains both\nmaster and detail info (legacy stuff I'm afraid). The query in question is\nused to see if any records exist in the detail that do not exist in the\nmaster. The table and index definition is as follows\n\n create table accounts (\n domain text,\n registrationtype char\n /* Plus a couple of other irrelevant fields */\n );\n\n create index domain_idx on accounts (domain);\n create index domain_type_idx on accounts (domain, registrationtype);\n\nThe views are\n\n create view accountmaster as SELECT * from accounts where registrationtype =\n'N';\n create view accountdetail as SELECT * from accounts where registrationtype <>\n'N';\n\nThe query is\n\n select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n\nI started the query about 5 hours ago and it is still running. I did the same\non Informix Online 7 and it took less than two minutes...\n\nMy system details are\n postgres: 6.5.3\n O/S: RH6.0 Kernel 2.2.5-15smp\n\nExplain shows the following\n\n explain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster) limit 10;\n NOTICE: QUERY PLAN:\n\n Seq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Seq Scan on accounts (cost=3667.89 rows=33373 width=12)\n\n EXPLAIN\n\nThe number of records in the two views are\n\n psql -c \"select count(*) from accountmaster\" coza;\n count\n -----\n 45527\n (1 row)\n\n psql -c \"select count(*) from accountdetail\" coza;\n count\n -----\n 22803\n\nI know of exactly one record (I put it there myself) that satisfies the\nselection criteria.\n\nAny ideas would be appreciated\n\n--------\nRegards\nTheo\n\nPS We have it running live at http://co.za (commercial domains in South Africa).\n", "msg_date": "Thu, 11 Nov 1999 21:41:09 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Slow - grindingly slow - query" }, { "msg_contents": "\nWhat does:\n\nexplain select domain from accountdetail \n\twhere domain not in ( \n\t\tselect domain from accountmaster);\n\nshow?\n\nAlso, did you do a 'vacuum analyze' on the tables?\n\nAlso, how about if you get rid of the views\n\nSELECT domain FROM account\nWHERE registrationtype <> 'N';\n\n*shakes head* am I missing something here? I'm reading your SELECT and\n'CREATE VIEW's and don't they negate each other? *scratch head*\n\nIf I'm reading your select properly, and with the amount of sleep I've had\nrecently, its possible I'm not...\n\nThe subselect is saying give me all domains whose registration type = 'N'.\nThe select itself is saying give me all domains whoe registration type <>\n'N' (select accountdetail.domain from accountdetail), and narrow that\nlisting down further to only include those domains whose registration type\n<> 'N'?\n\nEither I'm reading this *totally* wrong, or you satisfy that condition\nujust by doing a 'SELECT domain FROM accountdetail;' ...\n\nNo?\n\nOn Thu, 11 Nov 1999, Theo Kramer wrote:\n\n> Hi\n> \n> I have a single table with two views. The table effectively contains both\n> master and detail info (legacy stuff I'm afraid). The query in question is\n> used to see if any records exist in the detail that do not exist in the\n> master. 
The table and index definition is as follows\n> \n> create table accounts (\n> domain text,\n> registrationtype char\n> /* Plus a couple of other irrelevant fields */\n> );\n> \n> create index domain_idx on accounts (domain);\n> create index domain_type_idx on accounts (domain, registrationtype);\n> \n> The views are\n> \n> create view accountmaster as SELECT * from accounts where registrationtype =\n> 'N';\n> create view accountdetail as SELECT * from accounts where registrationtype <>\n> 'N';\n> \n> The query is\n> \n> select accountdetail.domain from accountdetail where\n> accountdetail.domain not in\n> (select accountmaster.domain from accountmaster);\n> \n> I started the query about 5 hours ago and it is still running. I did the same\n> on Informix Online 7 and it took less than two minutes...\n> \n> My system details are\n> postgres: 6.5.3\n> O/S: RH6.0 Kernel 2.2.5-15smp\n> \n> Explain shows the following\n> \n> explain select accountdetail.domain from accountdetail where\n> accountdetail.domain not in\n> (select accountmaster.domain from accountmaster) limit 10;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on accounts (cost=3667.89 rows=34958 width=12)\n> SubPlan\n> -> Seq Scan on accounts (cost=3667.89 rows=33373 width=12)\n> \n> EXPLAIN\n> \n> The number of records in the two views are\n> \n> psql -c \"select count(*) from accountmaster\" coza;\n> count\n> -----\n> 45527\n> (1 row)\n> \n> psql -c \"select count(*) from accountdetail\" coza;\n> count\n> -----\n> 22803\n> \n> I know of exactly one record (I put it there myself) that satisfies the\n> selection criteria.\n> \n> Any ideas would be appreciated\n> \n> --------\n> Regards\n> Theo\n> \n> PS We have it running live at http://co.za (commercial domains in South Africa).\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Thu, 11 Nov 1999 16:33:47 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Theo Kramer wrote:\n> \n> Hi\n> \n> I have a single table with two views. The table effectively contains both\n> master and detail info (legacy stuff I'm afraid). The query in question is\n> used to see if any records exist in the detail that do not exist in the\n> master. 
The table and index definition is as follows\n> \n> create table accounts (\n> domain text,\n> registrationtype char\n> /* Plus a couple of other irrelevant fields */\n> );\n> \n> create index domain_idx on accounts (domain);\n> create index domain_type_idx on accounts (domain, registrationtype);\n\ntry using\n create index registrationtype_index on accounts (registrationtype);\n\n------\nHannu\n", "msg_date": "Thu, 11 Nov 1999 22:41:31 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> What does:\n> \n> explain select domain from accountdetail\n> where domain not in (\n> select domain from accountmaster);\n> \n> show?\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Seq Scan on accounts (cost=3667.89 rows=33373 width=12)\n\nEXPLAIN\n\n\n> Also, did you do a 'vacuum analyze' on the tables?\n\nYes - should have mentioned that.\n \n> Also, how about if you get rid of the views\n> \n> SELECT domain FROM account\n> WHERE registrationtype <> 'N';\n> \n> *shakes head* am I missing something here? I'm reading your SELECT and\n> 'CREATE VIEW's and don't they negate each other? *scratch head*\n\nNo - a domain can both be new (registrationtype 'N') and updated \n(registrationtype 'U') ie. one or more rows with the same domain with one row\ncontaining a domain with registrationtype 'N' and zero or more rows containing\nthe same domain with registrationtype not 'N'. The reason for the <> 'N' and \nnot just = 'U' is that we have a couple of rows with registrationtype set to\nsomething else.\n \n> The subselect is saying give me all domains whose registration type = 'N'.\n> The select itself is saying give me all domains whoe registration type <>\n> 'N' (select accountdetail.domain from accountdetail), and narrow that\n> listing down further to only include those domains whose registration type\n> <> 'N'?\n> \n> Either I'm reading this *totally* wrong, or you satisfy that condition\n> ujust by doing a 'SELECT domain FROM accountdetail;' ...\n> \n> No?\n\nNo :). See above\n\n--------\nRegards\nTheo\n", "msg_date": "Thu, 11 Nov 1999 22:50:14 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Hannu Krosing wrote:\n> try using\n> create index registrationtype_index on accounts (registrationtype);\n\nOK did that, and am rerunning the query. 
\n\nThe explain now shows\n explain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n NOTICE: QUERY PLAN:\n\n Seq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Index Scan using registrationtype_idx on accounts (cost=2444.62\nrows=33373 width=12)\n\n EXPLAIN\n\n\nWill let you all know when it completes.\n--------\nRegards\nTheo\n", "msg_date": "Thu, 11 Nov 1999 23:10:20 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> The query is\n\n> select accountdetail.domain from accountdetail where\n> accountdetail.domain not in\n> (select accountmaster.domain from accountmaster);\n\nTry something like\n\n select accountdetail.domain from accountdetail where\n not exists (select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\n\nI believe this is in the FAQ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 1999 16:56:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query " }, { "msg_contents": "Tom Lane wrote:\n> \n> Theo Kramer <[email protected]> writes:\n> > The query is\n> \n> > select accountdetail.domain from accountdetail where\n> > accountdetail.domain not in\n> > (select accountmaster.domain from accountmaster);\n\nThis takes more than 5 hours and 30 minutes.\n\n> Try something like\n> \n> select accountdetail.domain from accountdetail where\n> not exists (select accountmaster.domain from accountmaster where\n> accountmaster.domain = accountdetail.domain);\n\nThis takes 5 seconds - wow!\n\n> I believe this is in the FAQ...\n\nWill check out the FAQs. 
Many thanks.\n--------\nRegards\nTheo\n", "msg_date": "Fri, 12 Nov 1999 07:09:15 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Theo Kramer wrote:\n> \n> > Try something like\n> >\n> > select accountdetail.domain from accountdetail where\n> > not exists (select accountmaster.domain from accountmaster where\n> > accountmaster.domain = accountdetail.domain);\n> \n> This takes 5 seconds - wow!\n\n> I did the same on Informix Online 7 and it took less than two minutes...\n ^^^^^^^^^^^\nCould you run the query above in Informix?\nHow long would it take to complete?\n\nVadim\n", "msg_date": "Fri, 12 Nov 1999 12:52:29 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Vadim wrote:\n\n> > I did the same on Informix Online 7 and it took less than two minutes...\n>\n> Could you run the query above in Informix?\n> How long would it take to complete?\n\nI include both explain and timing for the queries for both postgres and\nInformix.\n\nExplain from postgres for the two queries.\n------------------------------------------\n\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\nNOTICE: QUERY PLAN:\n\nSeq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Index Scan using registrationtype_idx on accounts (cost=2444.62 rows=33373 width=12)\n\nEXPLAIN\n\n\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\nNOTICE: QUERY PLAN:\n\nSeq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Index Scan using domain_type_idx on accounts (cost=2.04 rows=1 width=12)\n\nEXPLAIN\n\nExplain from informix online 7 for the two queries\n--------------------------------------------------\n\nQUERY:\n------\nselect accountdetail.domain from accountdetail where\n accountdetail.domain not in (select accountmaster.domain from accountmaster)\n\nEstimated Cost: 8995\nEstimated # of Rows Returned: 47652\n\n1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: (informix.accounts.domain != ALL <subquery> AND informix.accounts.registrationtype != 'N' ) \n\n Subquery:\n ---------\n Estimated Cost: 4497\n Estimated # of Rows Returned: 5883\n\n 1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: informix.accounts.registrationtype = 'N' \n\n\nQUERY:\n------\nselect accountdetail.domain from accountdetail where\n accountdetail.domain not in (select accountmaster.domain from accountmaster)\n\nEstimated Cost: 4510\nEstimated # of Rows Returned: 58810\n\n1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: (informix.accounts.domain != ALL <subquery> AND informix.accounts.registrationtype != 'N' ) \n\n Subquery:\n ---------\n Estimated Cost: 12\n Estimated # of Rows Returned: 10\n\n 1) informix.accounts: INDEX PATH\n\n (1) Index Keys: registrationtype \n Lower Index Filter: informix.accounts.registrationtype = 'N' \n\n\nTiming from postgres 6.5.3 for the two queries\n----------------------------------------------\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n\nGreater than 5 hours and 30 minutes\n\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n 
select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\n\n0.00user 0.01system 0:04.75elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nTiming from Informix Online 7 for the two queries\n----------------------------------------------\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n\n0.03user 0.01system 0:10.35elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\n\n0.03user 0.00system 0:03.56elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nThe machine is a Pentium II 400 MHz with Fast Wide SCSI and is the same\nfor both Informix and Postgres. Informix uses Linux I/O ie. it does not\nuse a raw partition. The datasets are the same.\n\nRegards\nTheo\n", "msg_date": "Fri, 12 Nov 1999 10:04:58 +0200 (SAST)", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "> > > select accountdetail.domain from accountdetail where\n> > > accountdetail.domain not in\n> > > (select accountmaster.domain from accountmaster);\n> \n> This takes more than 5 hours and 30 minutes.\n> \n> > select accountdetail.domain from accountdetail where\n> > not exists (select accountmaster.domain from accountmaster where\n> > accountmaster.domain = accountdetail.domain);\n> \n> This takes 5 seconds - wow!\n> \n\nI have a general comment/question here. Why do in/not in clauses seem\nto perform so slowly? I've noticed this type of behavior with with my \nsystem also. I think the above queries will always return the exact \nsame results regardless of the data. From looking at the query plan \nwith explain, it's clear the second query makes better use of the \nindexes. Can't the rewrite engine recognize a simple case like the \none above and rewrite it to use exists and not exists with the proper \njoins? Or possibly the optimizer can generate a better plan? Sometimes \nit's not so easy to just change a query in the code. Sometimes you can't\nchange the code because you only have executables and sometimes you are\nusing a tool that automatically generates SQL using in clauses. \nAdditionally, since intersect and union get rewritten as in clauses they \nsuffer the same performance problems. \n\n-brian\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Fri, 12 Nov 1999 03:49:01 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> Can't the rewrite engine recognize a simple case like the \n> one above and rewrite it to use exists and not exists with the proper \n> joins? Or possibly the optimizer can generate a better plan?\n\nThis is on the TODO list, and will get done someday. 
IMHO it's not as\nurgent as a lot of the planner/optimizer's other shortcomings, because\nit can usually be worked around by revising the query.\n\nIf it's bugging you enough to go fix it now, contributions are always\nwelcome ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 09:58:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query " }, { "msg_contents": "On Fri, Nov 12, 1999 at 09:58:14AM -0500, Tom Lane wrote:\n> Brian Hirt <[email protected]> writes:\n> > Can't the rewrite engine recognize a simple case like the \n> > one above and rewrite it to use exists and not exists with the proper \n> > joins? Or possibly the optimizer can generate a better plan?\n> \n> This is on the TODO list, and will get done someday. IMHO it's not as\n> urgent as a lot of the planner/optimizer's other shortcomings, because\n> it can usually be worked around by revising the query.\n> \n> If it's bugging you enough to go fix it now, contributions are always\n> welcome ;-)\n> \n\nOkay, what would be the correct approach to solving the problem, \nand where would be a good place to start? I've only been on this list\nfor a few weeks, so I've missed discussion on the approach to solving \nthis problem. Should this change be localized to just the planner? \nShould the rewrite system be creating a different query tree? Will both \nneed to be changed? If a lot of work is being done to this part of \nthe system, is now a bad time to try this work?\n\nI'm willing to jump in to this, but I may take a while to figure it out \nand ask a lot of questions that are obvious to the hardened postgres \nprogrammer. I'm not familiar with the postgres code, yet. \n\n\n-brian\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Fri, 12 Nov 1999 12:02:07 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> On Fri, Nov 12, 1999 at 09:58:14AM -0500, Tom Lane wrote:\n>> If it's bugging you enough to go fix it now, contributions are always\n>> welcome ;-)\n\n> Okay, what would be the correct approach to solving the problem, \n> and where would be a good place to start? I've only been on this list\n> for a few weeks, so I've missed discussion on the approach to solving \n> this problem. Should this change be localized to just the planner? \n> Should the rewrite system be creating a different query tree? Will both \n> need to be changed? If a lot of work is being done to this part of \n> the system, is now a bad time to try this work?\n\nWell, actually, figuring out how & where to do it is the trickiest part\nof the work. Might not be the best project for a newbie backend-hacker\nto start with :-(.\n\nAfter a few moments' thought, it seems to me that this issue might be\nclosely intertwined with the OUTER JOIN stuff that Thomas is working on\nand the querytree representation redesign that Jan and I have been\nmuttering about (but not yet actually doing anything about). We want\nto handle SELECT ... WHERE expr IN (SELECT ...) like a join, but the\nsemantics aren't exactly the same as a conventional join, so it might\nbe that the thing needs to be rewritten as a special join type. 
In\nthat case it'd fit right in with OUTER JOIN, I suspect.\n\nThe Informix EXPLAIN results that Theo Kramer posted (a few messages\nback in this thread) are pretty interesting too. If I'm reading that\nprintout right, Informix is not any smarter than we are about choosing\nthe scan types for the outer and inner queries; and yet they have a much\nfaster runtime for the WHERE IN query. I speculate that they are doing\nthe physical matching of outer and inner tuples in a smarter way than we\nare --- perhaps they are doing one scan of the inner query and entering\nall the values into a hashtable that's then probed for each outer tuple.\n(As opposed to rescanning the inner query for each outer tuple, as we\ncurrently do.) If that's the answer, then it could probably be\nimplemented as a localized change: rewrite the SubPlan node executor to\nlook more like the HashJoin node executor. This isn't perfect --- it\nwouldn't pick up the possibility of a merge-style join --- but it would\nbe better than what we have for a lot less work than the \"full\" solution.\n\nThis is all shooting from the hip; I haven't spent time looking into it.\nHas anyone else got insights to offer?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 23:30:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query " }, { "msg_contents": "Tom Lane wrote:\n\n> The Informix EXPLAIN results that Theo Kramer posted (a few messages\n> back in this thread) are pretty interesting too. If I'm reading that\n> printout right, Informix is not any smarter than we are about choosing\n> the scan types for the outer and inner queries; and yet they have a much\n> faster runtime for the WHERE IN query.\n\nThe informix EXPLAIN for the 'not in' query was when I did not have an\nindex on registrationtype (the explain appends to file sqexplain.out so I\nmissed it :(). Anyway here is the Informix EXPLAIN with the index on\nregistrationtype.\n\n\nQUERY:\n------\nselect accountdetail.domain from accountdetail where\n accountdetail.domain not in (select accountmaster.domain from accountmaster)\n\nEstimated Cost: 4510\nEstimated # of Rows Returned: 58810\n\n1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: (informix.accounts.domain != ALL <subquery> AND\ninformix.accounts.registrationtype != 'N' )\n\n Subquery:\n ---------\n Estimated Cost: 12\n Estimated # of Rows Returned: 10\n\n 1) informix.accounts: INDEX PATH\n\n (1) Index Keys: registrationtype\n Lower Index Filter: informix.accounts.registrationtype = 'N'\n\n\nThe speed difference with or without the subquery index is neglible for\nInformix.\n--------\nRegards\nTheo\n", "msg_date": "Sat, 13 Nov 1999 11:55:38 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" }, { "msg_contents": "hi...\n\nis anyone working on replication services in pgsql?\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Tue, 16 Nov 1999 09:26:50 -0700", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "replication" } ]
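The hashed-subplan idea Tom sketches near the end of this thread comes down to a few lines: scan the inner query once, enter its values into an in-memory hash table, then probe once per outer row instead of rescanning. The sketch below is an illustration only; it uses a throwaway chained hash table, and real NOT IN handling additionally has to get SQL NULL semantics right, which is ignored here.

    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 8192

    typedef struct HashCell
    {
        char            *val;
        struct HashCell *next;
    } HashCell;

    static unsigned
    hash_str(const char *s)
    {
        unsigned    h = 5381;

        while (*s)
            h = h * 33 + (unsigned char) *s++;
        return h % NBUCKETS;
    }

    /* One pass over the inner query's result builds the table... */
    void
    inner_insert(HashCell **tab, const char *val)
    {
        unsigned    h = hash_str(val);
        HashCell   *c = malloc(sizeof(HashCell));

        c->val = strdup(val);
        c->next = tab[h];
        tab[h] = c;
    }

    /* ...after which each outer tuple costs one probe, not a rescan. */
    int
    inner_contains(HashCell **tab, const char *val)
    {
        HashCell   *c;

        for (c = tab[hash_str(val)]; c != NULL; c = c->next)
            if (strcmp(c->val, val) == 0)
                return 1;
        return 0;
    }

With Theo's numbers (roughly 45000 inner rows against 22000 outer rows), rescanning the subplan per outer row costs on the order of a billion row comparisons, while the hashed version does one build pass plus one cheap probe per outer row. The NOT EXISTS rewrite earlier in the thread wins for a related reason: it lets each inner probe use the index on domain instead of a sequential rescan.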
[ { "msg_contents": "Hi,\n\nI was hoping one of the contributors here on the list could point me in\nthe right direction with some of the issues I am currently facing with\npostgres 6.5.0\n\nI apologize if these topics have been thoroughly discussed before, but I\nwasn't able to find anything in the archives.\n\nFor these questions, assume that the slave and master are separate\nmachines each running postmaster, but the postgres data files are\nmounted on the same physical (but on different partitions) raid-5\nserver.\n\n1) real-time syncing\nIs it possible to have a \"slave\" postgres server (a separate machine)\ndoing nothing but syncing to a master? If the master were to go down,\nthe slave would be up-to-date and ready to handle connection requests\nalmost instantly.\n\n2) Replicating\nIf #1 isn't possible, what would be the best way to periodically sync\nthe two databases? Would it be possible to exchange deltas (the slave's\nonly job is to duplicate the master) or would a complete backup and\nrestore frome one machine to another be necessary?\n\n3) Crash-Recovery!!\nThe documentation does not have this section filled in. So I'm not even\nsure whether or not postgres maintains transaction logging or some other\nway to recover from a serious crash. Can someone please give me the\nscoop on the current state of this, or what should be done to recover\nfrom a crash?\n\nAre there any other concerns or technologies I'm not aware of that might\nbe of use?\n\nAll comments appreciated!\n\nThanks,\nBryan Ingram\n\n\n\n\n\n\n -----------== Posted via Newsfeeds.Com, Uncensored Usenet News ==----------\n http://www.newsfeeds.com The Largest Usenet Servers in the World!\n------== Over 73,000 Newsgroups - Including Dedicated Binaries Servers ==-----\n", "msg_date": "Thu, 11 Nov 1999 15:26:35 -0600", "msg_from": "Bryan Ingram <[email protected]>", "msg_from_op": true, "msg_subject": "syncing, replicating & crash recovery Q's" } ]
[ { "msg_contents": "Probably it is of interest for jou, or i did something very stupid:\nI had a little piece of code who works fine up till know, just after\ninstalling \nthe version 6.5.3 i got the message 'parse error near union'\n\n\ncreate table work as\n\tselect * from opdracht\n\tunion\n\tselect * from opdrachtproost\n\nI know i can solve the problem with \n\tinsert into work in stead of the union keyword, but it is enoying that all\nthe programs had to be recompiled\n\nmany thanks, its a good product\n\nFrans\n\n\n", "msg_date": "Fri, 12 Nov 1999 00:50:11 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": true, "msg_subject": "new version 6.5.3" } ]
[ { "msg_contents": "subscribe\n\n\n", "msg_date": "Fri, 12 Nov 1999 00:54:48 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Fri, 12 Nov 1999 00:58:11 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "Probably it is of interest for jou, or i did something very stupid:\nI had a little piece of code who works fine up till know, just after\ninstalling \nthe version 6.5.3 i got the message 'parse error near union'\n\n\ncreate table work as\n\tselect * from opdracht\n\tunion\n\tselect * from opdrachtproost\n\nI know i can solve the problem with \n\tinsert into work in stead of the union keyword, but it is enoying that all\nthe programs had to be recompiled\n\nmany thanks, its a good product\n\nFrans\n\n\n", "msg_date": "Fri, 12 Nov 1999 01:10:06 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": true, "msg_subject": "union problem version 6.5.3" }, { "msg_contents": "Frans Van Elsacker <[email protected]> writes:\n> I had a little piece of code who works fine up till know, just after\n> installing \n> the version 6.5.3 i got the message 'parse error near union'\n\n> create table work as\n> \tselect * from opdracht\n> \tunion\n> \tselect * from opdrachtproost\n\nHmm, the grammar has\n\nCreateAsStmt: CREATE OptTemp TABLE relation_name OptCreateAs AS SubSelect\n\nand SubSelect doesn't allow unions. This is overly restrictive,\nI agree, but I thought it had been that way for a good while.\nWhat version were you using before?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 10:34:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union problem version 6.5.3 " }, { "msg_contents": "\nthanks for your quick answer,\n\nOur earlier version was postgresql 6.4.2\n\nRegards, Frans\n\n\n\n\n\nAt 10:34 12/11/99 -0500, Tom Lane wrote:\n>Frans Van Elsacker <[email protected]> writes:\n>> I had a little piece of code who works fine up till know, just after\n>> installing \n>> the version 6.5.3 i got the message 'parse error near union'\n>\n>> create table work as\n>> \tselect * from opdracht\n>> \tunion\n>> \tselect * from opdrachtproost\n>\n>Hmm, the grammar has\n>\n>CreateAsStmt: CREATE OptTemp TABLE relation_name OptCreateAs AS SubSelect\n>\n>and SubSelect doesn't allow unions. This is overly restrictive,\n>I agree, but I thought it had been that way for a good while.\n>What version were you using before?\n>\n>\t\t\tregards, tom lane\n>\n>************\n>\n>\n\n", "msg_date": "Sat, 13 Nov 1999 00:31:30 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] union problem version 6.5.3 " }, { "msg_contents": "Frans Van Elsacker <[email protected]> writes:\n> Our earlier version was postgresql 6.4.2\n\nOK, I thought the change was older than that. It probably got into 6.5\nas a side-effect of incorporating the INTERSECT/EXCEPT feature. Anyway,\nwe ought to try to restore the old functionality.\n\n\t\t\tregards, tom lane\n\n>> Hmm, the grammar has\n>> \n>> CreateAsStmt: CREATE OptTemp TABLE relation_name OptCreateAs AS SubSelect\n>> \n>> and SubSelect doesn't allow unions. This is overly restrictive,\n>> I agree, but I thought it had been that way for a good while.\n>> What version were you using before?\n", "msg_date": "Fri, 12 Nov 1999 18:58:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union problem version 6.5.3 " }, { "msg_contents": ">>> Hmm, the grammar has\n>>> \n>>> CreateAsStmt: CREATE OptTemp TABLE relation_name OptCreateAs AS SubSelect\n>>> \n>>> and SubSelect doesn't allow unions. 
This is overly restrictive,\n\nAs far as I can tell, it should work to just change the above line in\nsrc/backend/parser/gram.y to\n\nCreateAsStmt: CREATE OptTemp TABLE relation_name OptCreateAs AS SelectStmt\n\nI am doing this in current sources right now. I have not tried it in\nREL6_5, but if the problem is getting in your way then give it a try...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Nov 1999 13:47:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union problem version 6.5.3 " } ]
[ { "msg_contents": "> People may have problems with the NULL statements with some versions\n> of PostgreSQL. I have information about editing the applix macro\n> on that creates the tables my web site:\n> http://www.radix.net/~cobrien/applix/applix.txt\n\nJust in case someone cares ;)\n\nThe \"NULL\" constraint for a column definition is not defined in SQL92,\nand is not necessary and could be dropped from Applix's definition of\nthe table. The default behavior of any column defined in SQL is to\nallow NULL values. \n\nPostgres does not implement this redundant syntax extension because\nyacc-style parsers such as the one used in Postgres find the use of\nthe bare NULL an ambiguous context. Presumably that is why SQL92 does\nnot define it.\n\nHowever, I see that in a limited context, such as a bare NULL with no\nother qualifiers, yacc can handle its use. I'll add it to Postgres'\nnext release...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 12 Nov 1999 06:54:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AWL: Re: tm1" }, { "msg_contents": "> > People may have problems with the NULL statements with some versions\n> > of PostgreSQL. I have information about editing the applix macro\n> > on that creates the tables my web site:\n> > http://www.radix.net/~cobrien/applix/applix.txt\n> \n> Just in case someone cares ;)\n> \n> The \"NULL\" constraint for a column definition is not defined in SQL92,\n> and is not necessary and could be dropped from Applix's definition of\n> the table. The default behavior of any column defined in SQL is to\n> allow NULL values. \n> \n> Postgres does not implement this redundant syntax extension because\n> yacc-style parsers such as the one used in Postgres find the use of\n> the bare NULL an ambiguous context. Presumably that is why SQL92 does\n> not define it.\n> \n> However, I see that in a limited context, such as a bare NULL with no\n> other qualifiers, yacc can handle its use. I'll add it to Postgres'\n> next release...\n\nYes, we are hearing people use it. Seems like we could just ignore the\nNULL if possible.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Nov 1999 11:50:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: AWL: Re: tm1" } ]
[ { "msg_contents": "Vadim wrote:\n\n> > I did the same on Informix Online 7 and it took less than two minutes...\n>\n> Could you run the query above in Informix?\n> How long would it take to complete?\n\nI include both explain and timing for the queries for both postgres and\nInformix.\n\nExplain from postgres for the two queries.\n------------------------------------------\n\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\nNOTICE: QUERY PLAN:\n\nSeq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Index Scan using registrationtype_idx on accounts (cost=2444.62 rows=33373 width=12)\n\nEXPLAIN\n\n\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\nNOTICE: QUERY PLAN:\n\nSeq Scan on accounts (cost=3667.89 rows=34958 width=12)\n SubPlan\n -> Index Scan using domain_type_idx on accounts (cost=2.04 rows=1 width=12)\n\nEXPLAIN\n\nExplain from informix online 7 for the two queries\n--------------------------------------------------\n\nQUERY:\n------\nselect accountdetail.domain from accountdetail where\n accountdetail.domain not in (select accountmaster.domain from accountmaster)\n\nEstimated Cost: 8995\nEstimated # of Rows Returned: 47652\n\n1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: (informix.accounts.domain != ALL <subquery> AND informix.accounts.registrationtype != 'N' ) \n\n Subquery:\n ---------\n Estimated Cost: 4497\n Estimated # of Rows Returned: 5883\n\n 1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: informix.accounts.registrationtype = 'N' \n\n\nQUERY:\n------\nselect accountdetail.domain from accountdetail where\n accountdetail.domain not in (select accountmaster.domain from accountmaster)\n\nEstimated Cost: 4510\nEstimated # of Rows Returned: 58810\n\n1) informix.accounts: SEQUENTIAL SCAN\n\n Filters: (informix.accounts.domain != ALL <subquery> AND informix.accounts.registrationtype != 'N' ) \n\n Subquery:\n ---------\n Estimated Cost: 12\n Estimated # of Rows Returned: 10\n\n 1) informix.accounts: INDEX PATH\n\n (1) Index Keys: registrationtype \n Lower Index Filter: informix.accounts.registrationtype = 'N' \n\n\nTiming from postgres 6.5.3 for the two queries\n----------------------------------------------\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n\nGreater than 5 hours and 30 minutes\n\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\n\n0.00user 0.01system 0:04.75elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nTiming from Informix Online 7 for the two queries\n----------------------------------------------\nexplain select accountdetail.domain from accountdetail where\n accountdetail.domain not in\n (select accountmaster.domain from accountmaster);\n\n0.03user 0.01system 0:10.35elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nexplain select accountdetail.domain from accountdetail\n where not exists (\n select accountmaster.domain from accountmaster where\n accountmaster.domain = accountdetail.domain);\n\n0.03user 0.00system 0:03.56elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n\nThe machine is a Pentium II 400 MHz with Fast Wide SCSI and is the same\nfor both Informix and Postgres. 
Informix uses Linux I/O ie. it does not\nuse a raw partition. The datasets are the same.\n\nRegards\nTheo\n", "msg_date": "Fri, 12 Nov 1999 10:14:25 +0200 (SAST)", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow - grindingly slow - query" } ]
[ { "msg_contents": "> This is based on the premise that it would somehow be useful to link Unix\n> and PostgreSQL users. \n\nThis is useful, for a system like postgres, since due to the user types,\nsome user functions will eventually be executed with a setuid to a specific\nunix user.\nThis may be the function owner, (dba procedure) or the user who is\nconnected.\n\n> In that case this would certainly be needed.\n\nYes imho.\n\n> However, this would be a significant step backwards, since \n> database users\n> are in general not equal to system users, most importantly \n> since clients\n> might run on completely different systems than the server.\n\nWell , since I need all users as unix users, I do not want \npostgres users at all. At the very least I do not want to keep\nseparate passwords for db users in the db.\n\nAndreas \n", "msg_date": "Fri, 12 Nov 1999 13:01:03 +0100", "msg_from": "Zeugswetter Andreas SEV <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: [HACKERS] Re: [GENERAL] users in Postgresql" } ]
[ { "msg_contents": "> I send you my last letter again, because I'am not sure if you obtain it\n> (you had problem with connection (?)).\n\nActually yes! I was off the air for a week or so, and didn't remember\nthat I had a mail to answer. Sorry about that...\n\n> > For normal strings, we can implement NATIONAL CHARACTER and CHARACTER\n> > SET features from SQL92 to handle locales and alternative languages.\n> > We'll do this by defining new types for each language or locale. For\n> > date/time, a clear path is to standardize on ISO-8601 formats (already\n> > available as an option) which use numeric fields only. SQL92 offers no\n> > suggestions on how to do date/time types with alphabetic fields.\n> >\n> > I'm going to switch the default date format to ISO-8601 for the next\n> > release, which should help. We could also think about\n> > internationalizing the date/time support as you suggest, with external\n> > language-specific catalogs, but a catalog lookup to do date/time i/o\n> > would seem to be very slow, for a catalog within Postgres or for an\n> > external flat file.\n> \n> Yes, external catalogs is problem (speed, needs glibc..), better resolution is\n> probably (cached) system catalogs. If I good underatand you, your idea is\n> add langs and locales to pg_type (..example), well. After this si not problem\n> make internal catalogs for locales with months, days names...etc. And join\n> this locales table (pg_locale?) with pg_type via oid and we can implement\n> any translator between langs (for datetime strings).\n> \n> (This is better, because in the glibc's catalogs has features which we\n> needn't in PgSQL.)\n> \n> Will feature which allow make possible to set LOCALE type during transaction,\n> SET LOCALE/NATIONAL command ?\n> \n> If you have any exactly ideas I can help you with it (if you want).\n> I have a little time now and I want spend it with PgSQL :-)\n\nMy thought for the *first* implementation of locales and character\nsets is to do it all through pg_type, pg_proc, and pg_operator, with a\nlittle help from built-in features in the parser to recognize the\nSQL92 conventions for character sets.\n\nFor the moment then, we focus on the character set features, not on\nhow to tie this into the date/time features. For SQL92, you can't\nchange character sets on the fly, though you can specify that they be\nconverted on the fly. And the same would hold true of the date/time\ntypes. You can't do a \"SET LOCALE\" and magically find that both\ncharacter types and date/time types change locale too. Or I should say\nI haven't seen how to do that yet.\n\n> I complete to_char/to_date, and I testing it now. Where/Who I must send this\n> routines for moving to contrib (to You or to other major developer)?\n\nSend it to the \"patches\" list, or directly to the \"hackers\" list if it\nis under 40kbytes. After we look at the code, we will decide if it\ngoes into contrib or directly into the backend, and will put it into\nthe tree.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 12 Nov 1999 14:52:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: internationalizing and etc.." } ]
[ { "msg_contents": "Is everyone okay with the following syntax:\n\nCREATE USER username\n[ WITH ID digits ]\n^^^^^^^^^^^^^^^^^^\n[ WITH PASSWORD password ]\n[ CREATEDB | NOCREATEDB ]\n[ CREATEUSER | NOCREATEUSER ]\n[ IN GROUP groupname [, ...] ]\n[ VALID UNTIL 'abstime' ]\n\nALTER USER username\n[ WITH ID digits ]\n^^^^^^^^^^^^^^^^^^\n[ WITH PASSWORD password ]\n[ CREATEDB | NOCREATEDB ]\n[ CREATEUSER | NOCREATEUSER ]\n[ IN GROUP groupname [, ...] ]\n[ VALID UNTIL 'abstime' ]\n\nThe catch is that ID would have to be a new keyword and we'd have to live\nwith that for a long time. Other choices include:\n* UID\n* SYSID\n* USESYSID\netc.\n\nWhat do the standards and pseudo-standards say?\n\nI think I'll take a stab at this and settle the createuser script issue\nthe proper way.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 12 Nov 1999 16:02:45 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "RFC: create/alter user extension" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Is everyone okay with the following syntax:\n> CREATE USER username\n> [ WITH ID digits ]\n> ^^^^^^^^^^^^^^^^^^\n\n> The catch is that ID would have to be a new keyword and we'd have to live\n> with that for a long time. Other choices include:\n> * UID\n> * SYSID\n> * USESYSID\n> etc.\n\nI'd be inclined to go with UID or SYSID. In any case, since the new\nkeyword is used in such a limited context, we could almost certainly\nstill allow it as a ColId and thus not create any real compatibility\nproblem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 1999 23:10:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RFC: create/alter user extension " }, { "msg_contents": "On Fri, 12 Nov 1999, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Is everyone okay with the following syntax:\n> > CREATE USER username\n> > [ WITH ID digits ]\n> > ^^^^^^^^^^^^^^^^^^\n> \n> > The catch is that ID would have to be a new keyword and we'd have to live\n> > with that for a long time. Other choices include:\n> > * UID\n> > * SYSID\n> > * USESYSID\n> > etc.\n> \n> I'd be inclined to go with UID or SYSID. In any case, since the new\n> keyword is used in such a limited context, we could almost certainly\n> still allow it as a ColId and thus not create any real compatibility\n> problem.\n\nI'm not sure about this distinction. Where would that be reflected in the\n(parser) code?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 13 Nov 1999 14:38:03 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RFC: create/alter user extension " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> I'd be inclined to go with UID or SYSID. In any case, since the new\n>> keyword is used in such a limited context, we could almost certainly\n>> still allow it as a ColId and thus not create any real compatibility\n>> problem.\n\n> I'm not sure about this distinction. Where would that be reflected in the\n> (parser) code?\n\nYou should try to add this (or any other) new keyword to the list in the\nColId: production in gram.y. 
If that doesn't provoke any complaints\nfrom yacc (shift/reduce conflicts etc), then you're home free: the\nparser won't get confused if the keyword is used as a column name.\n\nIf it does cause a shift/reduce conflict, which is fairly likely for\nanything that can appear inside an expression, you might still be\nable to add the new keyword to the ColLabel: list. That allows it\nto be used as an identifier in a more restricted set of contexts.\n\nOnly if neither of these will work does the keyword need to be a\ntruly \"reserved\" word.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 09:57:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RFC: create/alter user extension " } ]
[ { "msg_contents": "[[-Fri 12 November-]]\n Added su.c Unixware 7.0 local exploit by K2, posted by demi.(0-day)\n Added xlock.c Unixware 7.0 local exploit by K2, posted by demi.(0-day)\n Added Xsco.c Unixware 7.0 local exploit by K2, posted by demi.(0-day)\n\n[[-Thur 11 November-]]\n Added seyon.sh FreeBSD 3.3 local exploit by Brock Tellier.\n Added nfsd2.c RPC + Debian 2.1 + Redhat 5.2 remote exploit by tmoggie.\n Added ypbreak.c RPC/nis exploit by anonymous. (old)\n Added ypsnarf.c RPC/nis exploit by anonymous. (old)\n Added hijaak.sh Sendmail 8.8.8 local exploit by Michal Zalewski\n Added Win95/98 and WinNT exploit section.(requested numerous times)\n Added wftpdexp.tgz Win95/98/NT remote exploit by Alberto Solino.\n Added msadc2.pl WinNT 4.0 remote exploit by rfp.\n Added ex_cmail.c Win98 remote exploit by UNYUN.\n Added ex_fuse.c Win98 remote exploit by UNYUN.\n Added ex_netsrv.c Win98 remote exploit by UNYUN.\n Added ex_servu.c Win98 remote exploit by UNYUN.\n Added ex_tinyftpd.c Win98 remote exploit by UNYUN.\n Added ex_zommail.c Win98 remote exploit by UNYUN.\n Added ex_almail.c Win98 local exploit by UNYUN.\n Added ex_midiplug.c Win98 local exploit by UNYUN.\n Added ex_ssmail.c WinNT 4.0 remote exploit by UNYUN.\n Added sendexp.c WinNT 4.0 remote exploit by UNYUN.\n\n[[-Tues 9 November-]]\n Added dtappgather.sh Unixware 7.0 local exploit by K2.\n Added dopewarez.c Linux/misc remote exploit by nuuB.\n Added hylafax.c FreeBSD 3.3 local exploit by Brock Tellier.\n\n[[-Wed 3 November-]]\n Added amanda.c FreeBSD 3.3 local exploit by Brock Tellier.\n Added canuum.c TurboLinux 3.5 local exploit by UNYUN.\n Added sendmail-8.9.3.tar.gz by icesk. (0-day)\n Added sperl4.036.c FreeBSD 2.2.8 exploit by OVX. (old)\n Added tcpdump.c Linux/misc exploit BLADI. (old)\n\n(exploits/code to post? [email protected])\n(queries/questions? wwwboard is running)\n\n[[-Disclaimer-]]\n The members at www.hack.co.za can not and will not be held\n responsible for anyone's actions, nothing on this site is meant\n to be used to malicious intent. We will not give out server logs\n of people connecting, or will not supply any information that\n envolves the \"tracking down\" of people. We believe in freedom\n of speech, any code posted to this page will not be removed\n under any circumstances, regardless of copyright or anything\n similar. If you do not agree with any of the above, leave now.\n\n\n\n", "msg_date": "Fri, 12 Nov 1999 19:32:13 +0200", "msg_from": "\"gov-boi\" <[email protected]>", "msg_from_op": true, "msg_subject": "www.hack.co.za - exploit archives updates - 0day" } ]
[ { "msg_contents": "> I don't know if there's a compression library available that\n> fit's our need. First and most important it must have a\n> license that permits us to include it in the distribution\n> under our existing license. Second it's implementation must\n> not cause any problems in the backend like memory leakage or\n> the like.\n\nLZO (Lempel-Ziv-Oberhuemer) sounds like a candidate. \n\nhttp://wildsau.idv.uni-linz.ac.at/mfx/lzo.html\n\nIt is a realtime compressor, designed for real time\ncompression/decompression of data.\nIt is really fast, like >20 Mb/s decompressions are easily possible and has\na pretty good compression ratio,\nalmost as good as gzip, since it favours speed over ratio.\n\nIt seems to have a pretty safe and compatible license.\n\nIt comes with a gzip cmdline compatible program called lzop.\n\nI really love it.\nAndreas\n", "msg_date": "Fri, 12 Nov 1999 20:32:34 +0100", "msg_from": "Zeugswetter Andreas SEV <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] compression in LO and other fields" } ]
[ { "msg_contents": "i tried posting this to other groups, as well as searching the archives,\nand came up with nothing.\n\ni am using 6.5.2/Mandrake(RedHat)linux 6.0/pentium. i am trying to port\nan existing informix 7 web application to postgresql. all of the CGI\nprograms use embedded SQL. they all use constructs of the form:\n...\nEXEC SQL BEGIN DECLARE SECTION;\nstruct user { /* 40 or 50 or 80 fields... */\n int x;\n ...\n};\nEXEC SQL END DECLARE SECTION;\n\nfunc1(struct user *x,...);\nfunc2(struct user *X, ...);\n\nmain()\n{\n EXEC SQL BEGIN DECLARE SECTION;\n struct user user_rec;\n EXEC SQL END DECLARE SECTION;\n\n...\n}\n\necpg doesn't recognize (ie, \"parse error\") struct declarations that\noccurred outside of the current BEGIN/END section. that forces me to\nredeclare the entire struct definition with every variable declaration.\ni have gone through and exploded all the declarations to do that, but\nnow ecpg dies with a segmentation fault, does not report a line number,\nand erases the .c in the process so i can't tell how far it got (i\nexperimented with 2 and 3 field structs, and it worked okay, but some of\nthese structs have 140+ fields).\n\nwhat is the proper solution for defining a database record structure,\nthen declaring variables that use that definition within EXEC SQL\nsections?\n\n", "msg_date": "Fri, 12 Nov 1999 16:09:48 -0500", "msg_from": "bayard kohlhepp <[email protected]>", "msg_from_op": true, "msg_subject": "how to handle struct within EXEC SQL DECLARE SECTION?" } ]
[ { "msg_contents": "\ni tried posting this to other groups, as well as searching the archives,\n\nand came up with nothing.\n\ni am using 6.5.2/Mandrake(RedHat)linux 6.0/pentium. i am trying to port\nan existing informix 7 web application to postgresql. all of the CGI\nprograms use embedded SQL. they all use constructs of the form:\n...\nEXEC SQL BEGIN DECLARE SECTION;\nstruct user { /* 40 or 50 or 80 fields... */\n int x;\n ...\n};\nEXEC SQL END DECLARE SECTION;\n\nfunc1(struct user *x,...);\nfunc2(struct user *X, ...);\n\nmain()\n{\n EXEC SQL BEGIN DECLARE SECTION;\n struct user user_rec;\n EXEC SQL END DECLARE SECTION;\n\n...\n}\n\necpg doesn't recognize (ie, \"parse error\") struct declarations that\noccurred outside of the current BEGIN/END section. that forces me to\nredeclare the entire struct definition with every variable declaration.\ni have gone through and exploded all the declarations to do that, but\nnow ecpg dies with a segmentation fault, does not report a line number,\nand erases the .c in the process so i can't tell how far it got (i\nexperimented with 2 and 3 field structs, and it worked okay, but some of\n\nthese structs have 140+ fields).\n\nwhat is the proper solution for defining a database record structure,\nthen declaring variables that use that definition within EXEC SQL\nsections?\n\n\n\n", "msg_date": "Fri, 12 Nov 1999 16:47:33 -0500", "msg_from": "bayard kohlhepp <[email protected]>", "msg_from_op": true, "msg_subject": "how should you define a struct within EXEC SQL section?" } ]
[ { "msg_contents": "I need to create a cross-process producer/consumer data queue (e.g. singly-linked list). \n\nThat is - Processes A, B, and C add nodes to a controlled list and process D removes them.\nNot sure if the creation of the nodes would be best done by the producers or consumers,\nbut destruction would have to be done by the consumer, as the producers don't wait for\nprocessing. For optimal results, the consumer process should sleep until item(s) are added\nto its queue.\n\nQuery: within the existing backend framework, what's the best way to accomplish this?\n\n Thanks,\n\n Tim Holloway\n", "msg_date": "Sat, 13 Nov 1999 00:04:00 -0500", "msg_from": "Tim Holloway <[email protected]>", "msg_from_op": true, "msg_subject": "Thread-safe queueing?" }, { "msg_contents": "Tim Holloway <[email protected]> writes:\n> I need to create a cross-process producer/consumer data queue\n> (e.g. singly-linked list). That is - Processes A, B, and C add nodes\n> to a controlled list and process D removes them. Not sure if the\n> creation of the nodes would be best done by the producers or\n> consumers, but destruction would have to be done by the consumer, as\n> the producers don't wait for processing. For optimal results, the\n> consumer process should sleep until item(s) are added to its queue.\n\n> Query: within the existing backend framework, what's the best way to\n> accomplish this?\n\nMore context, please. What are you trying to accomplish? Is this\nreally a communication path between backends (and if so, what backend\ncode needs it?), or are you trying to set up a queue between SQL\nclients? How much data might need to be in the queue at one time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 10:14:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Thread-safe queueing? " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Tim Holloway <[email protected]> writes:\n> > I need to create a cross-process producer/consumer data queue\n> > (e.g. singly-linked list). That is - Processes A, B, and C add nodes\n> > to a controlled list and process D removes them. Not sure if the\n> > creation of the nodes would be best done by the producers or\n> > consumers, but destruction would have to be done by the consumer, as\n> > the producers don't wait for processing. For optimal results, the\n> > consumer process should sleep until item(s) are added to its queue.\n> \n> > Query: within the existing backend framework, what's the best way to\n> > accomplish this?\n> \n> More context, please. What are you trying to accomplish? Is this\n> really a communication path between backends (and if so, what backend\n> code needs it?), or are you trying to set up a queue between SQL\n> clients? How much data might need to be in the queue at one time?\n> \n> regards, tom lane\n> \n\nThis is for the logging subsystem I'm developing. The backends call pg_log(),\nwhich is like elog(), except that the message is a resource ID + any parameters\nin order to support locales and custom message formatting. These ID+parameter\npackets are then pipelined down to the logging channels via the log engine to\nbe formatted and output according to rules in the configuration file.\n\nI *think* that the log engine should be a distinct process. 
I'm not sure I can\ntrust the output not to come out sliced and diced if each backend can run the engine\ndirectly -- and for that matter, I see problems if the engine is reconfigured on the\nfly owing to the need for each backend to replicate the configuration process (among\nother things). The basic singly-linked list component is all I need to handle the\nFIFO, but obviously I need guards to preserve its integrity. As to the amount of data\ninvolved, I sincerely hope the queue would stay pretty shallow!\n\nI have the configuration parser and logging engine operational, so the last\nsignificant hurdle is making sure that A) the data to be logged is\naccessable/addressable by the engine, and B) that the process runs in the\nproper sequence. A description of what it all will look like is now online at http://postgres.mousetech.com/index.html\n(with apologies for the ugly formatting).\n\n Thanks,\n\n TIm Holloway\n", "msg_date": "Sat, 13 Nov 1999 19:58:14 -0500", "msg_from": "Tim Holloway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Thread-safe queueing?" }, { "msg_contents": "Tim Holloway <[email protected]> writes:\n> Tom Lane wrote:\n>> More context, please.\n\n> This is for the logging subsystem I'm developing. The backends call\n> pg_log(), which is like elog(), except that the message is a resource\n> ID + any parameters in order to support locales and custom message\n> formatting. These ID+parameter packets are then pipelined down to the\n> logging channels via the log engine to be formatted and output\n> according to rules in the configuration file.\n\nOK. You probably want something roughly comparable to the shared-inval\nmessage queue --- see include/storage/sinvaladt.h and\nbackend/storage/ipc/sinvaladt.c. That's more complex than your problem\nin one way (the sinval queue must be read by all N backends, not just\none process) but simpler in another (we can shoehorn all SI messages\ninto a fixed-size structure; is that practical for log data?). Anyway,\na queue structure in shared memory protected by spinlocks is what you\nwant, and sinval is about the closest thing we have to that at the\nmoment.\n\n> I *think* that the log engine should be a distinct process.\n\nProbably so, if you use a shared-memory queue. Shared memory is a\nfinite resource; AFAIK it's not practical to enlarge it on-the-fly.\nSo you will have to set a maximum queue size --- either a compile-time\nconstant, or at best a value chosen at postmaster start time. This\nimplies that there will be scenarios where backends are waiting for room\nto be made in the log queue. If the queue emptier is a separate process\nthen those waits can't turn into deadlocks. (sinval works around the\nmemory-overflow problem with a special \"reset\" mechanism, but that\ndoesn't seem appropriate for logging.)\n\nAlternatively, you could forget about a queue per se, and just allow\neach backend to execute the sending of its own log messages, using\na spinlock in shared memory to prevent concurrent issuance of log\nmessages on channels where that's a problem. That might be the\nsimplest and most robust approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 21:28:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Thread-safe queueing? 
" }, { "msg_contents": "> Alternatively, you could forget about a queue per se, and just allow\n> each backend to execute the sending of its own log messages, using\n> a spinlock in shared memory to prevent concurrent issuance of log\n> messages on channels where that's a problem. That might be the\n> simplest and most robust approach.\n\nHold on. Unix guarantees all write() calls are atomic, so no one gets\nin between that write. Why not just collect the output into one buffer\nin the backend, and blast the entire buffer in one write() to the log\nfile.\n\nI don't think there is any way another backend could mess that up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Nov 1999 22:11:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Thread-safe queueing?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Alternatively, you could forget about a queue per se, and just allow\n>> each backend to execute the sending of its own log messages, using\n>> a spinlock in shared memory to prevent concurrent issuance of log\n>> messages on channels where that's a problem. That might be the\n>> simplest and most robust approach.\n\n> Hold on. Unix guarantees all write() calls are atomic, so no one gets\n> in between that write.\n\nActually, I didn't say that I *believed* there were any channel types\nwhere such an interlock is essential ;-). I just said that spinlocking\nis a solution if the problem comes up.\n\nTim mentioned on-the-fly reconfiguration of logging as an area that\nmight need interlocking, and I'm more prepared to believe that.\nStill, we seem to be getting on just fine with each backend responding\nindependently to reconfiguration of the pg_options values. So I'd be\ninclined to build it that way, and wait for evidence of a performance\nproblem before spending effort to make it smarter.\n\nWhich I guess is Bruce's point also...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 22:48:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Thread-safe queueing? " } ]
[ { "msg_contents": "> Okay, the build went fine, but the following came up:\n> make -C parser all \n> make[1]: Entering directory `/home/fenix0/eh99/e99re41/postgresql-cur/src/backend/parser'\n> Makefile:39: warning: overriding commands for target `parse.h'\n> Makefile:34: warning: ignoring old commands for target `parse.h'\n\n> With that fixed you could commit it.\n\nI haven't the foggiest how to work around that --- but since my make\n(3.76.1, no spring chicken itself) doesn't generate any such complaint,\nI'd say it's another bug in that old version you have.\n\nMy inclination is to apply the patch anyway, since it's cleaner coding.\n\nIf make 3.74 does the right things despite the warning, then you could\nlive with it. Otherwise, time to upgrade.\n\nIt seems we ought to add a minimum GNU make version number to the list\nof prerequisites for Postgres. Data points so far are that 3.74 has\nproblems and 3.76.1 is OK --- can anyone fill in more observations?\nAnyone using 3.75, for instance?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 1999 10:41:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Backend build fails in current " }, { "msg_contents": "> It seems we ought to add a minimum GNU make version number to the list\n> of prerequisites for Postgres. Data points so far are that 3.74 has\n> problems and 3.76.1 is OK --- can anyone fill in more observations?\n> Anyone using 3.75, for instance?\n\n3.75 ships with BSD/OS and is fine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Nov 1999 11:19:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Backend build fails in current" } ]
[ { "msg_contents": "What happened with this?\n\nI think I saw a response from Bruce but it wasn't fixed and\ndoesn't appear in the current CVS of V7.\n\nKeith.\n\n>From: Edwin Ramirez <[email protected]>\n>Hello all,\n>\n>I was looking at the translate function and I think that it does not\n>behave quite right. I modified the translate function in\n>oracle_compat.c (included below) to make work more like its Oracle\n>counterpart. It seems to work but it returns the following message:\n>\tNOTICE: PortalHeapMemoryFree: 0x8241fcc not in alloc set!\n>\n>Below are the Oracle and Postgres session transcripts. \n>\n>select translate('edwin', 'wi', 'af') from dual;\n>ORACLE:\n>TRANS\n>-----\n>edafn\n>1 row selected.\n>\n>POSTGRES\n>translate\n>---------\n>edain \n>(1 row)\n\n<snip>\n\n", "msg_date": "Sat, 13 Nov 1999 17:15:28 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] translate function (BUG?)" } ]
[ { "msg_contents": "This message appeared from the backend when running vacuum \nanalyze on our live data. I remember seeing a posting \nregarding this sometime back but no longer have this message. \n\nThe version of postgres is 6.5.1 on RH6. There is more than \nsufficient disk space left, and postmaster does not seem\nto be running out of memory.\n\nAny help would be much appreciated.\n \n--------\nRegards\nTheo\n", "msg_date": "Sat, 13 Nov 1999 23:17:21 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "My bits moved right off the end of the world..." }, { "msg_contents": "I recall that this was due to a corrupted B-Tree index. Try dropping and\nrebuilding that, if you have one.\n\nI also recall that Bruce, this being his \"favourite error message\", wanted\nto make it \"show it more often\", so perhaps it now comes up in different\nsituations as well. ;)\n\n\t-Peter\n\nOn Sat, 13 Nov 1999, Theo Kramer wrote:\n\n> This message appeared from the backend when running vacuum \n> analyze on our live data. I remember seeing a posting \n> regarding this sometime back but no longer have this message. \n> \n> The version of postgres is 6.5.1 on RH6. There is more than \n> sufficient disk space left, and postmaster does not seem\n> to be running out of memory.\n> \n> Any help would be much appreciated.\n> \n> --------\n> Regards\n> Theo\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 13 Nov 1999 22:25:55 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." }, { "msg_contents": "> I recall that this was due to a corrupted B-Tree index. Try dropping and\n> rebuilding that, if you have one.\n> \n> I also recall that Bruce, this being his \"favourite error message\", wanted\n> to make it \"show it more often\", so perhaps it now comes up in different\n> situations as well. ;)\n\nThat's still on my TODO list. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Nov 1999 17:19:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> I recall that this was due to a corrupted B-Tree index. Try dropping and\n> rebuilding that, if you have one.\n\nThanks Peter, that did the trick.\n\nAny chance of adding the following to the FAQ\n\n4.24) What is the meaning of 'my bits moved right off the end of the world'\n\nThis message may appear in the backend log and may be due to a possibly \ncorrupt index. \n\nIt may be preceded (by several minutes) with a notice to the client such as \n\nNOTICE: Index my_idx: NUMBER OF INDEX' TUPLES (78933) IS NOT THE SAME AS HEAP\n(78931)\n\nThe message may occur when running a 'vacuum analyze' with the backend\nterminating abnormally.\n\n[Other explanations...]\n\nThe postmaster must be started without the '-S' option for this\nnotice to appear.\n\n--------\nRegards\nTheo\n", "msg_date": "Sun, 14 Nov 1999 08:19:22 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." 
}, { "msg_contents": "> Peter Eisentraut wrote:\n> > \n> > I recall that this was due to a corrupted B-Tree index. Try dropping and\n> > rebuilding that, if you have one.\n> \n> Thanks Peter, that did the trick.\n> \n> Any chance of adding the following to the FAQ\n> \n> 4.24) What is the meaning of 'my bits moved right off the end of the world'\n\nWith specific messages like this one, it is best to put the fix right in\nthe error message.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Nov 1999 10:52:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." }, { "msg_contents": "> > Peter Eisentraut wrote:\n> > > \n> > > I recall that this was due to a corrupted B-Tree index. Try dropping and\n> > > rebuilding that, if you have one.\n> > \n> > Thanks Peter, that did the trick.\n> > \n> > Any chance of adding the following to the FAQ\n> > \n> > 4.24) What is the meaning of 'my bits moved right off the end of the world'\n> \n> With specific messages like this one, it is best to put the fix right in\n> the error message.\n\nWith great pain, I have added an additional sentence to the error\nmessage stating to try and recreate index.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Nov 1999 11:19:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." }, { "msg_contents": "> Peter Eisentraut wrote:\n> > \n> > I recall that this was due to a corrupted B-Tree index. Try dropping and\n> > rebuilding that, if you have one.\n> \n> Thanks Peter, that did the trick.\n> \n> Any chance of adding the following to the FAQ\n> \n> 4.24) What is the meaning of 'my bits moved right off the end of the world'\n> \n> This message may appear in the backend log and may be due to a possibly \n> corrupt index. \n> \n> It may be preceded (by several minutes) with a notice to the client such as \n> \n> NOTICE: Index my_idx: NUMBER OF INDEX' TUPLES (78933) IS NOT THE SAME AS HEAP\n> (78931)\n\nI have added a mention when this message appears to try recreating the\nindex.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Nov 1999 12:24:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My bits moved right off the end of the world..." } ]
[ { "msg_contents": "I see that src/bin/psql/sql_help.h is now generated automatically\nfrom the SGML documentation. This is a Good Thing. But: since\nsql_help.h is now a derived file, shouldn't it be removed from the\nCVS repository, for the same reasons that we don't keep gram.c\nand other derived files in CVS? If we leave it there, it'll generate\na lot of extra update traffic.\n\nThe only reason I can see for leaving it in CVS is that if we remove it,\npeople who pull sources from CVS would need Perl in order to build psql.\n(People who download tarballs would *not*, since release_prep updates\nsql_help.h along with the other derived files.) That's annoying, but\nI think it may not be a fatal objection. Most hackers are probably\nmore likely to already have Perl than to already have bison or flex...\n\nI thought about suggesting that create_help.pl be rewritten in some\n\"more portable\" fashion such as an awk script. But really, if you\nconsider non-Unix platforms, Perl is more portable than awk or any\nother likely alternative. (It might be worthwhile to remove the one\nor two unnecessary Perl-5-isms in the script, so that it will run on\nPerl 4 if that's what's available.)\n\nComments? Anyone feel that we really can't expect users of the CVS\nrepository to have Perl?\n\n\t\t\tregards, tom lane\n\nPS: \"make distclean\" should probably not remove sql_help.h, for the\nsame reasons that we don't remove gram.c --- it *is* a distributed\nfile, and a particular user might not have the tools to rebuild it.\nThis is true whether or not we leave it in CVS.\n", "msg_date": "Sat, 13 Nov 1999 17:21:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Status of sql_help.h" }, { "msg_contents": "On 1999-11-13, Tom Lane mentioned:\n\n> sql_help.h is now a derived file, shouldn't it be removed from the\n> CVS repository, for the same reasons that we don't keep gram.c\n\nYou're the CVS guys. Whatever works.\n\n> I thought about suggesting that create_help.pl be rewritten in some\n> \"more portable\" fashion such as an awk script. But really, if you\n\nIf I'm supposed to maintain this (or do *anything* with it), it can't be\nawk. No reason to make a step backwards to accomodate an undocumented\nportability problem. I would argue that a lot more people are familiar\nwith Perl and can read that script than an awk alternative. We don't write\nstrict (-pedantic) ANSI C code either for the sake of portability.\n\n> other likely alternative. (It might be worthwhile to remove the one\n> or two unnecessary Perl-5-isms in the script, so that it will run on\n> Perl 4 if that's what's available.)\n\nOn a quick look I couldn't find a useful listing of things new in Perl 5\nor some way to test for Perl 4 compatibility. Shortly, I don't know what a\nPerl-5-ism is and I really don't feel like finding out either. However, if\nsomeone is inclined to fix those things if it doesn't make it all ugly, be\nmy guest.\n\n> Comments? Anyone feel that we really can't expect users of the CVS\n> repository to have Perl?\n\nIf you don't have Perl, the question is really: Do you have CVS? Do you\nhave rlogin? Do you have networking support in your kernel? Do you have a\ncomputer?\n\nSeriously, I'd suggest that we wait for a documented problem before taking\nunnecessary steps.\n\nHmm, interesting. 
From the GNU Makefile standards:\n\n\"The `configure' script and the Makefile rules for building and\ninstallation should not use any utilities directly except these:\n \n cat cmp cp diff echo egrep expr false grep install-info\n ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true\"\n\nNo awk there either.\n\n> PS: \"make distclean\" should probably not remove sql_help.h, for the\n> same reasons that we don't remove gram.c --- it *is* a distributed\n> file, and a particular user might not have the tools to rebuild it.\n\nThat was my bad. For some reason I had the idea that \"distclean\" stood for\n\"distinctly clean\" (really clean). :-\\ I'll fix that. Perhaps we ought to\ndecide on some standard targets. \"maintainer-clean\" would be the proper\none to use (in GNU, again). It also contains the note:\n\n\"... Since these files are normally included in the distribution, we don't\ntake care to make them easy to reconstruct. If you find you need to\nunpack the full distribution again, don't blame us.\"\n\nWell said.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 14 Nov 1999 19:34:26 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status of sql_help.h" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> \"The `configure' script and the Makefile rules for building and\n> installation should not use any utilities directly except these:\n> cat cmp cp diff echo egrep expr false grep install-info\n> ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true\"\n> No awk there either.\n\nDo I need to point out that Perl isn't there either? But this GNU rule\nis irrelevant, because it applies to tools needed to build a standard\n*distribution* of a package. Maintainer tools can include other things.\nUsing perl to generate sql_help.h seems perfectly appropriate to me,\nas I said before.\n\nWhat I wanted to find out was whether there were a lot of people using\nthe CVS server who don't have Perl and would object to installing it.\nThat's what will determine whether we can remove sql_help.h from the CVS\narchive (as opposed to distributed tarballs).\n\n>> PS: \"make distclean\" should probably not remove sql_help.h, for the\n>> same reasons that we don't remove gram.c --- it *is* a distributed\n>> file, and a particular user might not have the tools to rebuild it.\n\n> That was my bad. For some reason I had the idea that \"distclean\" stood for\n> \"distinctly clean\" (really clean). :-\\ I'll fix that. Perhaps we ought to\n> decide on some standard targets. \"maintainer-clean\" would be the proper\n> one to use (in GNU, again).\n\nNo, it wouldn't be. We use distclean precisely as specified in the GNU\ncoding standards:\n\n`distclean'\n Delete all files from the current directory that are created by\n configuring or building the program. If you have unpacked the\n source and built the program without creating any other files,\n `make distclean' should leave only the files that were in the\n distribution.\n\nsql_help.h will now be in the distribution, therefore distclean\nshouldn't remove it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 1999 20:02:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Status of sql_help.h " }, { "msg_contents": "> I see that src/bin/psql/sql_help.h is now generated automatically\n> from the SGML documentation. 
This is a Good Thing. But: since\n> sql_help.h is now a derived file, shouldn't it be removed from the\n> CVS repository, for the same reasons that we don't keep gram.c\n> and other derived files in CVS? If we leave it there, it'll generate\n> a lot of extra update traffic.\n> \n> The only reason I can see for leaving it in CVS is that if we remove it,\n> people who pull sources from CVS would need Perl in order to build psql.\n> (People who download tarballs would *not*, since release_prep updates\n> sql_help.h along with the other derived files.) That's annoying, but\n> I think it may not be a fatal objection. Most hackers are probably\n> more likely to already have Perl than to already have bison or flex...\n> \n> I thought about suggesting that create_help.pl be rewritten in some\n> \"more portable\" fashion such as an awk script. But really, if you\n> consider non-Unix platforms, Perl is more portable than awk or any\n> other likely alternative. (It might be worthwhile to remove the one\n> or two unnecessary Perl-5-isms in the script, so that it will run on\n> Perl 4 if that's what's available.)\n> \n> Comments? Anyone feel that we really can't expect users of the CVS\n> repository to have Perl?\n\nBecause we have proper dependency, any change to sgml will force the\nnext committer to commit a new sql_help.h right? If so, seems like it\nwill work fine as is.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 22:11:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status of sql_help.h" } ]
[ { "msg_contents": "\nSome observations about compression (sorry if this is obvious):\n\nCompression won't work unless there is redundancy in the information\nbeing compressed. In a normal [1] database, there may be a little\nredundancy in each row. The real redundancy will be when you group\nrows together. I.E. in it is quite likely that many rows will have\nidentical or similar values for some columns. It may be worth\nconsidering compressing each storage block[2], in the hopes that each\nblock will contain several rows, and that the rows will have some\nsimilar (redundant) information. Then compression will work. Just\ncompressing one column of one row may work in some situations, but\nin general, it won't.\n\nOnce a while ago we implemented a storage system where we grouped\nrecords (about 1k in size) into blocks (about 16k in size) and\ncompressed each block individually. To get at a record you had to\ndecompress the block it was in, and skip to the desired offset. Even\nthough the system had to throw away a lot [3] of data for each\nretrieval, the system turned out to be be both economical of disk[4]\nstorage and fast at retrieval. For this app we found that compression\nrates increased as we got up to about 16 records, and then flattend\noff. I.E. compressing 16 records used less space than compressing 4\ngroups of 4, but compressing 256 records together used as much space\nas compressing 16 groups of 16.\n\nThe point?\n\nI wish I understood the PostgreSQL backend storage algorithms better,\nbut it seems that the combination of a Tuneable block size [5] and\nan option to compress individual storage blocks [6] might be worth\nlooking at.\n\n-- cary\n\t\n\n\n[1] Whatever the heck that is.\n\n[2] My lack of understanding of PostgreSQL storage manager\n internals is showing.\n\n[3] Half a block on average.\n\n[4] Optical disk in this particular app.\n\n[5] This is an option now, right? But is it compile time, or run-time?\n and is it postmaster-wide, per database? I *try* to read the pgsql-hackers-digest\n but there is just too much going on!\n\n[6] Can this work with the existing data file structure?\n\n\n\n\n\n\n\n\n\n", "msg_date": "Sat, 13 Nov 1999 17:23:15 -0500 (EST)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Compression in LO and other fields" } ]
[ { "msg_contents": "Hi,\n\nA few time in the past, my indexes have become corrupted somehow\nand a vacuum analyze will cause the backend to dump core. I've\nseen other people post similiar problems on hackers and admin. All\nof the suggestions seem to be dumping the database and reloading it \nto have the indexes rebuilt or vacuuming the tables one by one until \nthe system crashes and then drop the indexes for that table and \nrecreate them from your DDL scripts. Well, this happened to me again,\nso I searched the list looking for any way to automatically rebuild the\nindexes but didn't manage to find any. It got me thinking, pg_dump \nalready dumps index creates, all we need to do is modify it to dump\nindex drops and prevent all of the other schema and data from being\ndumped.\n\nI got into the code and quickly learned about the -c option (creates \ndrops statements) that wasn't talked about on my outdated manpage. \nDoh! So now the problem is even easier -- I only have to suppress the \ndumping of everying except indexes and turn on the -c flag. I added\nan option called -r (rebuild indexes). It turns on the dropSchema,\nschemaOnly and dataOnly flags which in essence causes pg_dump to \ndo nothing. An extra snippet of code checks to see if indexes should\nbe rebuilt and dumps the index creates. It's a real hack, but it \nsuites my needs. \n\n\nNow, whenever I want to rebuild my indexes I can just type:\n\n\tpg_dump -r mydatabase | psql mydatabase\n\nIf something else already exists -- oops maybe we could add it to the \nfaq. I actually would have liked to implement the code differently, but \nthe current code isn't very condusive to a more elegant solution and I din't\nwant to put much time into something that might be rejected by the source\ncode maintainers. If this is rejected, I only wasted 10 minutes. Ideally,\nwhat I would like is a flag that allows you to specify the types to be \ndumped. This way, it would be flexible enough to allow you to dump any \ncombination of types. If you only wanted to dump trigger schema you could. \nIf you wanted sequences and indexes, no problem. 
Something like:\n\t\n\tpg_dump --dump-types \"type trigger aggregate\"\n\nAnyway, \n\ndiff -u ./pg_dump.c ../../../../postgresql-6.5.3/src/bin/pg_dump/pg_dump.c\n--- ./pg_dump.c\tSun Nov 14 01:41:05 1999\n+++ ../../../../postgresql-6.5.3/src/bin/pg_dump/pg_dump.c\tThu Sep 23 14:13:49 1999\n@@ -112,7 +112,6 @@\n PGconn\t *g_conn;\t\t\t\t/* the database connection */\n \n bool\t\tforce_quotes;\t\t/* User wants to suppress double-quotes */\n-bool\t\trebuildIndexes;\t\t/* dump DDL for index rebuilds */\n bool\t\tdumpData;\t\t\t/* dump data using proper insert strings */\n bool\t\tattrNames;\t\t\t/* put attr names into insert strings */\n bool\t\tschemaOnly;\n@@ -543,7 +542,6 @@\n \n \tg_verbose = false;\n \tforce_quotes = true;\n-\trebuildIndexes = false;\n \tdropSchema = false;\n \n \tstrcpy(g_comment_start, \"-- \");\n@@ -554,9 +552,7 @@\n \n \tprogname = *argv;\n \n- /* Get the arguments for the command line via getopts cycle through\n- each option, setting the appropriate flags as necessary */\n-\twhile ((c = getopt(argc, argv, \"acdDf:h:nNop:rst:uvxz\")) != EOF)\n+\twhile ((c = getopt(argc, argv, \"acdDf:h:nNop:st:uvxz\")) != EOF)\n \t{\n \t\tswitch (c)\n \t\t{\n@@ -594,17 +590,6 @@\n \t\t\tcase 'p':\t\t\t/* server port */\n \t\t\t\tpgport = optarg;\n \t\t\t\tbreak;\n-\t\t\tcase 'r':\t\t\t/* rebuild indexes */\n-\t\t\t\trebuildIndexes = true; /* forces only indexes to be dumped */\n-\t\t\t\tdropSchema = true; /* causes drop statements to be created */\n-\n-\t\t\t\t/* Setting data only to true, causes the dumpSchema() to dump \n-\t\t\t\t schema to a NULL file handle, and setting schemaOnly to true\n-\t\t\t\t prevents dumpClasses() from dumping the data -- it's a HACK */\n-\t\t\t\tschemaOnly = true; \n-\t\t\t\tdataOnly = true; \n-\n-\t\t\t\tbreak;\n \t\t\tcase 's':\t\t\t/* dump schema only */\n \t\t\t\tschemaOnly = true;\n \t\t\t\tbreak;\n@@ -765,13 +750,8 @@\n \tif (!schemaOnly)\n \t\tdumpClasses(tblinfo, numTables, g_fout, tablename, oids);\n \n-\n-\t/* dump indexes and triggers at the end for performance */\n-\tif (rebuildIndexes)\n-\t{\n-\t\tdumpSchemaIdx(g_fout, tablename, tblinfo, numTables);\n-\t}\n-\telse if (!dataOnly)\t\t\t\t\n+\tif (!dataOnly)\t\t\t\t/* dump indexes and triggers at the end\n+\t\t\t\t\t\t\t\t * for performance */\n \t{\n \t\tdumpSchemaIdx(g_fout, tablename, tblinfo, numTables);\n \t\tdumpTriggers(g_fout, tablename, tblinfo, numTables);\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Sun, 14 Nov 1999 02:29:06 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump - rebuilding indexes" } ]
[ { "msg_contents": "Hi all\nI'm very new to SQL server I can do allot of things in access database but I'm moving to SQL. isen that grate?\n\nWhat i m looking for is witch one of the datatype in table design view I can used for an autocount like in access?\n\nPlease reply to \[email protected]\n\nWebmaster\[email protected]\nhttp://www.link2casino.com\nAuto\n\n\n\n\n\n\n\nHi all\nI'm very new to SQL server I can do allot of things in access \ndatabase but I'm moving to SQL. isen that grate?\n \nWhat i m looking for is witch one of the datatype in table \ndesign view I can used for an autocount like in access?\nPlease reply to \[email protected]\[email protected]://www.link2casino.comAuto", "msg_date": "Sun, 14 Nov 1999 13:17:31 -0500", "msg_from": "\"link2\" <[email protected]>", "msg_from_op": true, "msg_subject": "Autocount" } ]
[ { "msg_contents": "Hi,\n\n I need to keep track of changes to a set of tables in a generic way.\nTo do this, I want to keep track of oid's. I'm writing the code using\nthe SPI interface, but I've hit the following problem: after I have\ninserted a tuple into a table from the C routine, I cannot figure out\nhow to get hold of its oid. I had hope that there may be something in\nSPI_tuptable, but there isn't. SPI_processed is 1, and the row is\ninserted successfully. Surely there must be some way to get the oid of\nthe row that was just inserted.\n\nI'd really appreciate any help on this one -- even digging aruond in the\nbackend code hasn't helped up to now.\n\nCheers!\n\nAdriaan\n\n", "msg_date": "Sun, 14 Nov 1999 21:17:33 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "Help with SPI and oids" } ]
[ { "msg_contents": "Hello Tom,\n\nI was hoping you might have some insight on a problem we've encountered\nwith PostgreSQL 6.5.0 (RedHat 5.2) this morning, since you are the\n\"file descriptor\" king, as it were :-) . The database is backing a website\nused by a network of hospitals for materials management and this morning,\nthe postmaster died with the following appearing in the system log:\n\nNov 14 11:50:14 emptoris logger: \nFATAL 1: ReleaseLruFile: No opened files - no one can be closed\n\nThis is the first time this has ever happened. I've had such good luck\nwith PostgreSQL that I didn't have the postmaster started by inittab.\nThe number of backends should have been very light today (Sunday) --\nonly a few ODBC users and an occassional HTTP user, so after the \npostmaster exited, the log (I assume these are forked backend complaints)\nshows:\n\nNov 14 11:55:03 emptoris logger: \npq_recvbuf: unexpected EOF on client connection\nNov 14 11:55:03 emptoris logger: \npq_recvbuf: unexpected EOF on client connection\nNov 14 11:55:04 emptoris logger: \npq_flush: send() failed: Broken pipe\nNov 14 11:55:04 emptoris logger: \nFATAL: pq_endmessage failed: errno=32 \n\n>From previous posts, I know you've done a cleanup with respect to \nfile descriptors, but all I see in the log after 6.5.0 is a 6.5.1 entry:\n\nACL file descriptor leak fix(Atsushi Ogawa)\n\nIs this a rare occurence or something that might have been fixed between\n6.5.0 and 6.5.3? Like I said, this is the first time this has happened and\notherwise has been very robust under much heavier loads -- so much so\nthat I didn't put the postmaster into inittab for respawning. Its \nbeen working pretty much flawlessly in production for about a year. \n\nAnyways, after starting the postmaster again, I vacuum analyzed the \ndatabase, accessed the HTTP application, etc. without problems.\n\nAny info would be greatly appreciated, \n\nMike Mascari\n([email protected])\n\n\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Sun, 14 Nov 1999 11:24:20 -0800 (PST)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Postmaster dies with FATAL 1: ReleaseLruFile: No opened files - no\n\tone can be closed" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> FATAL 1: ReleaseLruFile: No opened files - no one can be closed\n\n> This is the first time this has ever happened.\n\nI've never seen that either. Offhand I do not recall any post-6.5\nchanges that would affect it, so the problem (whatever it is) is\nprobably still there.\n\nAfter eyeballing the code, it seems there are only two ways this\ncould happen:\n\n1. the number of \"allocated\" (non-virtual) file descriptors grew to\nexceed the number of files Postgres thinks it can have open;\n\n2. something else was temporarily exhausting your kernel's file table\nspace, so that ENFILE was returned for many successive attempts to\nopen a file. (After each one, fd.c will close another file and try\nagain.)\n\n#2 seems improbable on an unloaded system, and isn't real probable even\non a loaded one, since you'd have to assume that some other process\nmanaged to suck up each filetable slot that fd.c released before fd.c\ncould re-acquire it. 
Once, yes, but several dozen times in a row?\n\nSo I'm guessing a leak of allocated file descriptors.\n\nAfter grovelling through the calls to AllocateFile, I only see one\nprospect for a leak: it looks to me like verify_password() neglects\nto close the password file if an invalid user name is given. Do you\nuse a plain (non-encrypted) password file? If so, I'll bet you can\nreproduce the crash by trying repeatedly to connect with a username\nthat's not in the password file. If that pans out, it's a simple fix:\nadd \"FreeFile(pw_file);\" near the bottom of verify_password() in\nsrc/backend/libpq/password.c. Let me know if this guess is right...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Nov 1999 18:23:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with FATAL 1: ReleaseLruFile: No opened\n\tfiles - no one can be closed" } ]
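The one-line repair described above, sketched at the tail of verify_password(); everything except the FreeFile() call itself is paraphrased context:

    /* ... no matching user was found in the flat password file ... */
    FreeFile(pw_file);      /* the missing release: without it, every
                             * failed lookup leaks an allocated descriptor */
    return STATUS_ERROR;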
[ { "msg_contents": "Hi,\n\ntoday I found a problem with MB on freebsd-elf running 6.5.3 - \nI have compiled 6.5.3 with --enable-locale --with-mb=KOI8\nand in psql I tried set client_encoding to 'WIN'\nbut result of simple query looks like 8-bit was stripped. \nThe same query works as expected under Linux.\nAlso, when I tried\nselect * from t1 where a ~* 'О©╫';\nI got:\nERROR: Can't find right op '~*' for type 25\nI'm outside of city and has access only to b/w terminal\nwithout cut'n paste support ( old 286 + dialup ) and I can't\nprovide more information. Will do this tomorrow.\n\n\tOleg\n\nPS. The same weird thing happens even if I \nset client_encoding to 'KOI8' - native encoding !\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 14 Nov 1999 23:40:06 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "problem with MB ? 6.5.3 under freebsd-elf" } ]
[ { "msg_contents": "\nhttp://www.pgsql.com/app-index ...\n\nIts a start, at least. I have to add 'update' facilities for ppl to edit\nrecords, and it *looks* like hell, but its a start...\n\nI'm concentrating on apps first, and, eventually, its going to include\n'Usage of' records...\n\nAll input has to be approved before it becomes live...and it will\neventually contain information similar to what FreshMeat provides\n(dependancies, license, etc)...\n\nCheck it out, add content and watch it evolve...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 14 Nov 1999 19:13:27 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Looks like hell, but ..." }, { "msg_contents": "The Hermit Hacker wrote:\n> Check it out, add content and watch it evolve...\n\nA good start, Marc. I added the interesting package called onShore\nTimesheet -- looks like your entry form covers most of the bases.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Nov 1999 12:12:47 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Looks like hell, but ..." }, { "msg_contents": "On Mon, 15 Nov 1999, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > Check it out, add content and watch it evolve...\n> \n> A good start, Marc. I added the interesting package called onShore\n> Timesheet -- looks like your entry form covers most of the bases.\n\nThanks...I've been populating it with what I can readily find (through\nfreshmeat so far), but I definitely don't know everything that is out\nthere :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Nov 1999 13:50:33 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Looks like hell, but ..." }, { "msg_contents": "On Mon, Nov 15, 1999 at 01:50:33PM -0400, The Hermit Hacker wrote:\n> \n> Thanks...I've been populating it with what I can readily find (through\n> freshmeat so far), but I definitely don't know everything that is out\n> there :)\n> \nI'm not sure if you worked on the user gallery also, but I like that as \nwell. I do have one suggestion and a comment. It would be nice if a \ndescription of the project could be included. I'd like to browse through \nsome of the other project but clicking on each one and seeing a description \nof the project would help decide which ones to look at.\n\nMy comment is that there appears to be no maintainer of the gallery. There \nis a project name testing_shit. I counted at least 10 duplicate entries and \nmany that are obvously incomplete. Do you want any help with the gallery?\n\n-brian\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Mon, 15 Nov 1999 12:14:26 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Looks like hell, but ..." 
}, { "msg_contents": "On Mon, 15 Nov 1999, Brian Hirt wrote:\n\n> On Mon, Nov 15, 1999 at 01:50:33PM -0400, The Hermit Hacker wrote:\n> > \n> > Thanks...I've been populating it with what I can readily find (through\n> > freshmeat so far), but I definitely don't know everything that is out\n> > there :)\n> > \n> I'm not sure if you worked on the user gallery also, but I like that as \n> well. I do have one suggestion and a comment. It would be nice if a \n> description of the project could be included. I'd like to browse through \n> some of the other project but clicking on each one and seeing a description \n> of the project would help decide which ones to look at.\n> \n> My comment is that there appears to be no maintainer of the gallery. There \n> is a project name testing_shit. I counted at least 10 duplicate entries and \n> many that are obvously incomplete. Do you want any help with the gallery?\n\nNow that I have what looks like a nice format (technically, not visually)\nfor the apps, I'm going to redo the user gallery itself also...assuming\nnothing comes up later tonight, will hopefully have most of it converted\nthen...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Nov 1999 15:26:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Looks like hell, but ..." } ]
[ { "msg_contents": "While chasing the apparent FD leakage reported by Mike Mascari,\nI realized that there are probably code paths in which the postmaster\nitself will invoke elog(ERROR) or elog(FATAL). This is fairly likely\nanyplace that the postmaster uses routines also used by the backend.\nMascari's example shows that it *will* happen, in the current state of\nthe code, if the postmaster does enough failed password lookups.\nThat's a bug of course, but there will always be bugs. Trying to ensure\nthat the postmaster will never call elog() seems like a losing game;\ninstead we need to ensure that something acceptable will happen.\n\nRight now, what will happen is a postmaster coredump (or worse,\nundefined behavior) due to trying to longjmp through an uninitialized\njmp_buf to return to the never-yet-entered backend main loop. That\ndoesn't rate as acceptable in my book.\n\nelog() should probably check for being in the postmaster and force\na postmaster shutdown if it gets elog(FATAL). Should elog(ERROR)\ndo the same, or do we want to try to add a return-to-main-loop\ncapability in the postmaster? I'm inclined to keep it simple and\njust do an orderly shutdown in both cases. Any such code paths that\nare out there are obviously not heavily exercised, so I doubt that\nit's worth adding new code to try to recover from elog(ERROR).\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Nov 1999 18:37:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "elog() executed by postmaster" }, { "msg_contents": "> elog() should probably check for being in the postmaster and force\n> a postmaster shutdown if it gets elog(FATAL). Should elog(ERROR)\n> do the same, or do we want to try to add a return-to-main-loop\n> capability in the postmaster? I'm inclined to keep it simple and\n> just do an orderly shutdown in both cases. Any such code paths that\n> are out there are obviously not heavily exercised, so I doubt that\n> it's worth adding new code to try to recover from elog(ERROR).\n\nAgreed. Just try simple first.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Nov 1999 19:48:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] elog() executed by postmaster" } ]
[ { "msg_contents": "\n Any clue why Postgresql version 6.5.1 can't handle null values for int8 in\na WHERE clause?\n\nincanta=> create table t(v int8);\nCREATE\nincanta=> insert into t(v) values(0);\nINSERT 101737 1\nincanta=> insert into t(v) values(1);\nINSERT 101738 1\nincanta=> insert into t(v) values(-1);\nINSERT 101739 1\nincanta=> select * from t where v>=0;\nv\n-\n0\n1\n(2 rows)\n\nincanta=> insert into t(v) values (null);\nINSERT 101740 1\nincanta=> select * from t;\n v\n--\n 0\n 1\n-1\n \n(4 rows)\n\nincanta=> select * from t where v>=0;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n", "msg_date": "Mon, 15 Nov 1999 14:10:34 -0500", "msg_from": "Robert Forsman <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.1 -vs- null values for an int8" } ]
[ { "msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Mike Mascari <[email protected]> writes:\n> > FATAL 1: ReleaseLruFile: No opened files - no one can be closed\n> \n> > This is the first time this has ever happened.\n> \n> I've never seen that either. Offhand I do not recall any post-6.5\n> changes that would affect it, so the problem (whatever it is) is\n> probably still there.\n> \n> After eyeballing the code, it seems there are only two ways this\n> could happen:\n> \n> 1. the number of \"allocated\" (non-virtual) file descriptors grew to\n> exceed the number of files Postgres thinks it can have open;\n> \n> 2. something else was temporarily exhausting your kernel's file table\n> space, so that ENFILE was returned for many successive attempts to\n> open a file. (After each one, fd.c will close another file and try\n> again.)\n> \n> #2 seems improbable on an unloaded system, and isn't real probable even\n> on a loaded one, since you'd have to assume that some other process\n> managed to suck up each filetable slot that fd.c released before fd.c\n> could re-acquire it. Once, yes, but several dozen times in a row?\n> \n\nThanks for the response, Tom. When looking at the system log, \nthe kernel was logging messages regarding IPX network name collisions\nwhich apprently can happen when there are autoconfigured Win95 boxes\non the same subnet. These messages were flooding the log at a rate of\none every second or two...Even though #2 seems improbable, and just\nglancing at the IPX kernel code didn't point to how that may have\ncaused a continual consumption of file descriptors, I'm willing to \nblame the kernel on this (and me for using autoprimary and autointerface\noptions).\n\nThanks again, \n\nMike Mascari\n([email protected])\n\n\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Mon, 15 Nov 1999 15:22:21 -0800 (PST)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postmaster dies with FATAL 1: ReleaseLruFile: No opened\n\tfiles - no one can be closed" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> Thanks for the response, Tom. When looking at the system log, \n> the kernel was logging messages regarding IPX network name collisions\n> which apprently can happen when there are autoconfigured Win95 boxes\n> on the same subnet. These messages were flooding the log at a rate of\n> one every second or two...Even though #2 seems improbable, and just\n> glancing at the IPX kernel code didn't point to how that may have\n> caused a continual consumption of file descriptors, I'm willing to \n> blame the kernel on this (and me for using autoprimary and autointerface\n> options).\n\nThat doesn't strike me as a bulletproof explanation. fd.c has a tight\nloop that close()s an FD and then tries to open() the file it wants,\nrepeat until success or an error other than ENFILE/EMFILE. If the\nscenario really is that it got ENFILE every time until it was down to\nzero FDs, there'd have to be something sucking up each freed FD within\nmicroseconds of its being freed. Repeatedly. Forty or fifty (or more)\ntimes in a row. I don't think a once-a-second Win95 lossage will do\nthat. And if you were down to zero free FDs system-wide, Postgres\nwouldn't be the only thing having troubles!\n\nI take it you don't use Postgres password authentication at all? If you\ndo, the other theory looks a lot more viable to me... 
I haven't had time\nto try to reproduce a crash yet, but I'm pretty sure there's one there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 1999 20:26:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with FATAL 1: ReleaseLruFile: No opened\n\tfiles - no one can be closed" } ]
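The close-and-retry pattern Tom describes can be made concrete with a short sketch. The pool below is a toy stand-in for fd.c's virtual file descriptor cache (the real structures differ); the point is the loop shape: every ENFILE/EMFILE releases one least-recently-used descriptor and retries, and the FATAL message corresponds to reaching an empty pool while open() still reports table exhaustion.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define POOLSIZE 8
static int pool[POOLSIZE];
static int nfile = 0;

static void
release_lru_file(void)
{
    int i;

    close(pool[0]);                 /* oldest descriptor goes first */
    for (i = 1; i < nfile; i++)
        pool[i - 1] = pool[i];
    nfile--;
}

static int
open_with_retry(const char *path, int flags)
{
    int fd;

    while ((fd = open(path, flags)) < 0)
    {
        if (errno != ENFILE && errno != EMFILE)
            return -1;              /* a real error, not table exhaustion */
        if (nfile == 0)
        {
            /* nothing left to release: the FATAL case from the report */
            fprintf(stderr, "no opened files - no one can be closed\n");
            return -1;
        }
        release_lru_file();         /* free one slot, then try again */
    }
    if (nfile < POOLSIZE)
        pool[nfile++] = fd;         /* remember it for later release */
    return fd;
}

int
main(void)
{
    int fd = open_with_retry("/etc/hosts", O_RDONLY);

    printf("got fd %d\n", fd);
    if (fd >= 0)
        close(fd);
    return 0;
}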
[ { "msg_contents": "Are there any web hosting services that offer postgresql database\naccess (cgi would be nice too)?\n\n\n\n==============================================================\nHere's some for the killfiles\[email protected] [email protected] [email protected]\[email protected]\n", "msg_date": "Mon, 15 Nov 1999 23:30:38 GMT", "msg_from": "Speedy Fast <[email protected]>", "msg_from_op": true, "msg_subject": "shell accounts that offer postgresql?" } ]
[ { "msg_contents": "> > > I notice that the postgresql docs say that postgresql is a public domain\n> > > program, while they really carry a Berkley copyright. You might want to\n> > > correct this for the next release.\n> > > http://www.bbin.com/pd/\n> > Ooh. I guess I'm not familiar with the fine points here. Our\n> > Berkeley-style license allows use, modification, sale, gift, theft,\n> > etc. of the software with only one provision: that the copyright\n> > notice remain intact. Clearly, this copyright notice is designed to\n> > protect UCB from rabid lawyers once the software is no longer under\n> > UCB's control, and this copyright allows any and all of the above\n> > uses, and any other use also.\n> > So what about this would not be considered public domain software?\n> Something can not be both Copyrighted and in the public domain.\n\nHmm. I've taken this on-list, just in case someone else has a comment.\nBut in the absence of alternate information, I'll just assume that we\nare not public domain software. But I sure still have the feeling that\nwe are getting gypped by the legaleze.\n\nThanks for the heads-up...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 16 Nov 1999 02:52:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql Docs...." }, { "msg_contents": "\nOn 16-Nov-99 Thomas Lockhart wrote:\n>> > > I notice that the postgresql docs say that postgresql is a public domain\n>> > > program, while they really carry a Berkley copyright. You might want to\n>> > > correct this for the next release.\n>> > > http://www.bbin.com/pd/\n>> > Ooh. I guess I'm not familiar with the fine points here. Our\n>> > Berkeley-style license allows use, modification, sale, gift, theft,\n>> > etc. of the software with only one provision: that the copyright\n>> > notice remain intact. Clearly, this copyright notice is designed to\n>> > protect UCB from rabid lawyers once the software is no longer under\n>> > UCB's control, and this copyright allows any and all of the above\n>> > uses, and any other use also.\n>> > So what about this would not be considered public domain software?\n>> Something can not be both Copyrighted and in the public domain.\n> \n> Hmm. I've taken this on-list, just in case someone else has a comment.\n> But in the absence of alternate information, I'll just assume that we\n> are not public domain software. But I sure still have the feeling that\n> we are getting gypped by the legaleze.\n\nIIRC, All copyright notices must be kept intact. Software in the public\ndomain carries no protection whatsoever. PD software can be taken and \nrenamed to whatever by the person that renamed it and claimed to be their \nown property. This happened to the WinVN project at least once (it's a\nPD Windows newsreader). 
At least one commercial project came directly \nfrom the WinVN sources - which are in the public domain.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Mon, 15 Nov 1999 22:08:12 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>>>> I notice that the postgresql docs say that postgresql is a public domain\n>>>>>> program, while they really carry a Berkley copyright. You might want to\n>>>>>> correct this for the next release.\n\n>>>> So what about this would not be considered public domain software?\n\n>> Something can not be both Copyrighted and in the public domain.\n\n> Hmm. I've taken this on-list, just in case someone else has a comment.\n> But in the absence of alternate information, I'll just assume that we\n> are not public domain software. But I sure still have the feeling that\n> we are getting gypped by the legaleze.\n\nIANAL, but I've paid considerable attention to these issues over the\npast ten years. My understanding is that \"public domain\" means\nspecifically that there is *no* copyright or any other intellectual-\nproperty restriction on the software. In particular, anything that\nhas either a BSD- or GPL-style license is most certainly not public\ndomain.\n\nI'd suggest replacing all uses of the phrase \"public domain\" with\n\"open source\" or \"freely available\" or some other term that hasn't\ngot such a clearly-inapplicable legal meaning.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 1999 23:00:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs.... " }, { "msg_contents": "On Tue, 16 Nov 1999, Thomas Lockhart wrote:\n\n> > > So what about this would not be considered public domain software?\n> > Something can not be both Copyrighted and in the public domain.\n> \n> Hmm. I've taken this on-list, just in case someone else has a comment.\n> But in the absence of alternate information, I'll just assume that we\n> are not public domain software. But I sure still have the feeling that\n> we are getting gypped by the legaleze.\n\nHow about \"free software\" or \"freely available\"? As in \"free to do\nwhatever you want\", not Free(tm) as in FSF. IMHO, \"open source\" sounds to\nbuzzword-compliant these days.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 Nov 1999 12:07:45 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> On Tue, 16 Nov 1999, Thomas Lockhart wrote:\n> \n> > > > So what about this would not be considered public domain software?\n> > > Something can not be both Copyrighted and in the public domain.\n> >\n> > Hmm. I've taken this on-list, just in case someone else has a comment.\n> > But in the absence of alternate information, I'll just assume that we\n> > are not public domain software. 
But I sure still have the feeling that\n> > we are getting gypped by the legaleze.\n> \n> How about \"free software\" or \"freely available\"? As in \"free to do\n> whatever you want\", not Free(tm) as in FSF. IMHO, \"open source\" sounds to\n> buzzword-compliant these days.\n\nHow about simply \"BSD licensed?\"\n\n--\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 17 Nov 1999 10:07:21 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "hi...\n\n> > > Hmm. I've taken this on-list, just in case someone else has a comment.\n> > > But in the absence of alternate information, I'll just assume that we\n> > > are not public domain software. But I sure still have the feeling that\n> > > we are getting gypped by the legaleze.\n> > \n> > How about \"free software\" or \"freely available\"? As in \"free to do\n> > whatever you want\", not Free(tm) as in FSF. IMHO, \"open source\" sounds to\n> > buzzword-compliant these days.\n> \n> How about simply \"BSD licensed?\"\n\ntraditional \"BSD Liscences\" have that silly advertising\nclause.. which postgres does as well, unfortunately... quite honestly, i find\nthat irritating and antiquated. *shrug* not like the regents are exactly doing\nanything important w/postgres now, right? and for all the shouting of \"its\nTRULY free\", there are string attatched...\n\nanyways... as long as a lisence protects what needs to be protected, all is\ngood. instead of arguing silly semantics (BSD/XFree/Public\nDomain/GPL/blahblahblah) we should be looking more importantly at which rights\nwe want to secure and which we don't really care about.\n\nBSD/XFree style liscences are good when a permisiveness is desired (like apache\nand how it help keep HTTP on track) and bad when you aren't trying to enforce\ncertain standards but endevouring to keep a software available to others...\nfortunately, postgres isn't a trivial piece of software, which serves as a\nprotection. but it isn't so complex that it couldn't be taken on by another\nentity. in fact, a compay could easily come along and swoop up the core 4\nprogrammers with terrific job offers and that would pretty much be that =) \nlets hope people's scruples and dedications are in the place We would like them\nto be...\n\npublic domain would be horrid. BSD/XFree style is fine, though probably more\npermissive than needed (and perhaps even desired). the GPL is\nprobably a little too demanding for this type of software though...\n\nit would be interesting to see it settle somewhere in between. e.g. if you want\nto extend it, GREAT! if you distribute it gratis, you have to make it available\nto everyone... perhaps require source code be available for the current\nrelease (not distributed, but available)... and if someone wants to _sell_ it\nas a closed package, fine! but require they give something back to the postgres\ndevelopment team.\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Wed, 17 Nov 1999 09:10:12 -0700", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "One half-decent newspeak alternative I've seen to \"free software\" and \"open\nsource\" is \"Wide-Open Source\" (WOS). It seems designed to imply that your licence\nis \"open source, and then some\", such as BSD-style without the advertising clause\nand meaningless reference to Berkeley.\n\n\"Aaron J. 
Seigo\" wrote:\n\n> > > How about \"free software\" or \"freely available\"? As in \"free to do\n> > > whatever you want\", not Free(tm) as in FSF. IMHO, \"open source\" sounds to\n> > > buzzword-compliant these days.\n> >\n> > How about simply \"BSD licensed?\"\n>\n> traditional \"BSD Liscences\" have that silly advertising\n> clause.. which postgres does as well, unfortunately... quite honestly, i find\n> that irritating and antiquated. *shrug* not like the regents are exactly doing\n> anything important w/postgres now, right? and for all the shouting of \"its\n> TRULY free\", there are string attatched...\n\nCheers,\n\nEvan @ 4-am\n\n", "msg_date": "Wed, 17 Nov 1999 12:02:12 -0600", "msg_from": "Evan Simpson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "\"Aaron J. Seigo\" wrote:\n> > How about simply \"BSD licensed?\"\n> \n> traditional \"BSD Liscences\" have that silly advertising\n> clause.. which postgres does as well, unfortunately... quite honestly, i find\n> that irritating and antiquated. *shrug* not like the regents are exactly doing\n> anything important w/postgres now, right? and for all the shouting of \"its\n> TRULY free\", there are string attatched...\n\nThere are always strings attached. The BSD license has the fewest\nstrings short of fully public domain.\n\n> Domain/GPL/blahblahblah) we should be looking more importantly at which rights\n> we want to secure and which we don't really care about.\n\nThat has already been done -- PostgreSQL still has Berkeley code in it,\nand therefore HAS TO BE BSD licensed -- if the license terms are to be\nchanged (which is not likely to happen), Berkeley code will have to be\neradicated -- which is also not likely to happen.\n\n[snip]\n\nThe point is this: the license is not changing (unless ALL contributors\npast and present agree to it). I just stated the fact of what license\nit is. \n\nThere is really no use in discussing what license to put PostgreSQL\nunder, as it is already under one. That means that there is absolutely\nno obligation on anyone who uses the software to give back to the\ncommunity -- in fact, if they want to take PostgreSQL, rename it, and\nsell it, they are free to do so -- and they don't have to give anything\nback. In fact, the original Postgres had this very thing happen -- the\ncommercial database Illustra was the result, and that got swallowed by\nInformix. PostgreSQL lives -- Illustra is dead. Long live PostgreSQL!\n\n--\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 17 Nov 1999 15:22:54 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." }, { "msg_contents": "At 12:22 PM -0800 11/17/99, Lamar Owen wrote:\n>There is really no use in discussing what license to put PostgreSQL\n>under, as it is already under one. That means that there is absolutely\n>no obligation on anyone who uses the software to give back to the\n>community -- in fact, if they want to take PostgreSQL, rename it, and\n>sell it, they are free to do so -- and they don't have to give anything\n>back. In fact, the original Postgres had this very thing happen -- the\n\nWell there is *one* thing they have to give back: credit. 
They have to\nreproduce the copyright notice.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Wed, 17 Nov 1999 16:51:25 -0800", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql Docs...." } ]
[ { "msg_contents": "\nDear Sir\n\nCan you help me with a problem I have:\n\nWhen more than 30 process query my Postgres data base, then my postmaster\ncreate zombies process.\nLike this:\nUSER PID %CPU %MEM SIZE RSS TTY STAT START TIME COMMAND\npostgres 4139 0.1 0.0 0 0 ? Z 18:38 0:01 (postmaster\n<zombie>)\npostgres 4140 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4141 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4142 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4143 0.1 0.0 0 0 ? Z 18:38 0:01 (postmaster\n<zombie>)\npostgres 4146 0.1 0.0 0 0 ? Z 18:38 0:02 (postmaster\n<zombie>)\npostgres 4150 0.1 0.0 0 0 ? Z 18:38 0:02 (postmaster\n<zombie>)\npostgres 4152 0.1 0.0 0 0 ? Z 18:38 0:02 (postmaster\n<zombie>)\npostgres 4159 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4170 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4174 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4175 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4177 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4178 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4179 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4194 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4195 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4197 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4199 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4200 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4201 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4202 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4203 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4204 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4205 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4206 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4209 0.0 0.0 0 0 ? Z 18:38 0:00 (postmaster\n<zombie>)\npostgres 4597 0.0 0.9 1148 608 p4 S 18:52 0:00 bash \npostgres 11356 0.0 1.3 4816 824 ? S Nov 11 0:08\n/usr/local/pgsql/bin/postmaster -B 256 -i -D /usr/local/pgsql/data\nroot 1 0.0 0.3 828 192 ? S Nov 9 0:05 init [3] \nroot 2 0.0 0.0 0 0 ? SW Nov 9 0:00 (kflushd)\nroot 3 0.0 0.0 0 0 ? SW< Nov 9 0:00 (kswapd)\n\nI have a server with 64 MB.\nCan you help me ?\nthanks\n\nsorry for my English\n\nAtentamente e com os melhores cumprimentos\n\nConstantino Martins\n\n\n\n", "msg_date": "Mon, 15 Nov 1999 19:36:04 -0800", "msg_from": "Constantino Martins <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "Hello,\n Not sure is this is the place for this but, I have two object\ncreated a table (customers) and a view cust which joins customers and\ncust runs pretty slow when select count(*) from cust. I think it may be\nan index issue, can anyone help? there are about 1200 records in\ncustomers and it takes about 30 seconds to count(*). Also how to I\ncreate a foreign key constraint?\n\nTyson Oswald\n\nCreate table Customers\n (\n id serial,\n Name varchar(25),\n UID varchar(7),\n Ext char(4),\n fkMailcode int4,\n fkContext int4,\n fkServer int4,\n fkAdmin int4\n );\n\ndrop index idx_Customers_fkAdmin;\ndrop index idx_Customers_fkMailcode;\ndrop index idx_Customers_fkServer;\ndrop index idx_Customers_fkContext;\n\nCreate index idx_Customers_fkAdmin on Customers(fkadmin);\nCreate index idx_Customers_fkMailcode on Customers(fkMailcode);\nCreate index idx_Customers_fkContext on Customers(fkContext);\nCreate index idx_Customers_fkServer on Customers(fkServer);\n\nCreate table Codes\n (\n id serial,\n description varchar(50),\n type varchar(5),\n typecode varchar(5)\n );\ncreate view cust as\n select\n c.name,\n c.ext,\n c.uid,\n tmail.description as mail,\n tcontext.description as context,\n tserver.description as server,\n admins.name as admin,\n c.id\n from\n customers c,\n codes tmail,\n codes tserver,\n codes tcontext,\n admins\n where\n c.fkadmin=admins.id\n and c.fkmailcode = tmail.id\n and c.fkcontext = tcontext.id\n and c.fkserver = tserver.id;\n\n\n\n", "msg_date": "Tue, 16 Nov 1999 18:44:19 -0500", "msg_from": "Tyson Oswald <[email protected]>", "msg_from_op": true, "msg_subject": "Slow access" } ]
[ { "msg_contents": "Hey Hackers - \nI wouldn't normally forward an install problem from general to the\nhackers list, but Chris is the database guy at Digital Creations, the\ncompany behind Zope, the really cool web app. building tool, that Sybase\nrecently endorsed as their offical web frontend. Any of you FreeBSD\ntypes recognize the problem?\n\nRoss\n\nOn Tue, Nov 16, 1999 at 11:49:03PM -0500, Christopher Petrilli wrote:\n> On 11/16/99 11:35 PM, Ross J. Reedstrom at [email protected]\n> wrote:\n> \n> > Chris - \n> > Did I see you post to postgresql-general, looking for help with an install\n> > on one of the BSDs? I seem to have had a snafu with my email clients, and\n> > lost a few emails today (Mutt doesn't lock files properly...) so I can't\n> > find the exact mail. If you haven't resolved the build/install problem,\n> > let me know: a number of the core developers run various BSD flavors\n> > (www.postgresql.org is FreeBSD, for example) so this should be easily\n> > resolvable, although I run linux, myself.\n> \n> Yeah, basically it looks like:\n> \n> ./configure\n> make\n> make install\n> instaldb\n\nI seem to remember from your other post that you did get this last command\nright: initdb\n\nHowever, it is critical that initdb be run as the postgres user, rather\nthan as root. Other than that, I don't know. I'll forward your message\nto the hackers list.\n\nAh, one last thought: how do you set up access to shared libraries\non FreeBSD? the make install will have dropped several libs in\n/usr/local/pgsql/lib that the executables need access to.\n\n> \n> Dosn't create the PG_VERSION files, nor the pg_user tables correctly... I\n> tried it on 3 different FreeBSD3.3 machines each downloaded seperately...\n> bizarre.\n\nAnd as I recall, this is the pgsql-6.5.3 tar ball.\n\n> \n> Chris\n> -- \n> | Christopher Petrilli Python Powered Digital Creations, Inc.\n> | [email protected] http://www.digicool.com\n> \n", "msg_date": "Tue, 16 Nov 1999 23:13:14 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL install" }, { "msg_contents": "On 11/17/99 12:13 AM, Ross J. Reedstrom at [email protected]\nwrote:\n\n> Hey Hackers - \n> I wouldn't normally forward an install problem from general to the\n> hackers list, but Chris is the database guy at Digital Creations, the\n> company behind Zope, the really cool web app. building tool, that Sybase\n> recently endorsed as their offical web frontend. Any of you FreeBSD\n> types recognize the problem?\n\nFlattry flattery.... we're jus evaluating options for support in the\nfuture, no commitments, of course, but we've had some people ask, and so we\nfeel the need to understand it better. I've only used Postgres (under\nStonebraker) and Illustra.\n\n> On Tue, Nov 16, 1999 at 11:49:03PM -0500, Christopher Petrilli wrote:\n>> On 11/16/99 11:35 PM, Ross J. Reedstrom at [email protected]\n>> wrote:\n>> \n>>> Chris - \n>>> Did I see you post to postgresql-general, looking for help with an install\n>>> on one of the BSDs? I seem to have had a snafu with my email clients, and\n>>> lost a few emails today (Mutt doesn't lock files properly...) so I can't\n>>> find the exact mail. 
If you haven't resolved the build/install problem,\n>>> let me know: a number of the core developers run various BSD flavors\n>>> (www.postgresql.org is FreeBSD, for example) so this should be easily\n>>> resolvable, although I run linux, myself.\n>> \n>> Yeah, basically it looks like:\n>> \n>> ./configure\n>> make\n>> make install\n>> instaldb\n> \n> I seem to remember from your other post that you did get this last command\n> right: initdb\n\nYeah, that part was from memory :-)\n\n> However, it is critical that initdb be run as the postgres user, rather\n> than as root. Other than that, I don't know. I'll forward your message\n> to the hackers list.\n\nYup, run as uid 'postgres'.\n\n> Ah, one last thought: how do you set up access to shared libraries\n> on FreeBSD? the make install will have dropped several libs in\n> /usr/local/pgsql/lib that the executables need access to.\n\nI put LD_LIBRARY_PATH to be correct. This is in the startup for the user,\nso it should always be correct.\n\n>> \n>> Dosn't create the PG_VERSION files, nor the pg_user tables correctly... I\n>> tried it on 3 different FreeBSD3.3 machines each downloaded seperately...\n>> bizarre.\n> \n> And as I recall, this is the pgsql-6.5.3 tar ball.\n\nYup, off the main FTP site... just ./configure and go with it...\n\nChris\n-- \n| Christopher Petrilli Python Powered Digital Creations, Inc.\n| [email protected] http://www.digicool.com\n\n", "msg_date": "Wed, 17 Nov 1999 00:27:03 -0500", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL install" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n>> Yeah, basically it looks like:\n>> ./configure\n>> make\n>> make install\n>> instaldb\n\n> I seem to remember from your other post that you did get this last command\n> right: initdb\n\nTwo other thoughts: (1) is initdb in your path, and is the *right\nversion* the first one in your path (I've been burnt by that on\nupgrades). (2) I think that initdb requires USER, PGLIB, PGDATA\nenv variables to be set properly for reliable operation; also PATH\nhad better find the new version of postgres, psql, etc before any\nolder versions.\n\nIf that doesn't strike gold, a copy of the failing initdb session's\nprintout would be useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 1999 00:39:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: PostgreSQL install " } ]
[ { "msg_contents": "Hi,\n\n I know that it's the new psql output format why all the\n regression tests currently fail. But I think we are in this\n state for a little too long now. With the latest CVS I got\n this near the end of the suite after the plpgsql test:\n\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\nServer process (pid 12207) exited with status 139 at Wed Nov 17 10:57:36 1999\nTerminating any active server processes...\nServer processes were terminated at Wed Nov 17 10:57:36 1999\nReinitializing shared memory and semaphores\nDEBUG: Data Base System is starting up at Wed Nov 17 10:57:36 1999\n\n This indicates that someone made changes that really broke\n something and since he wasn't able to get any useful results\n from a regression run, he just didn't do it.\n\n I see a little problem with checking if the output is still\n O.K. too. It seems that psql now buffers all the query\n result messages until a SELECT is done. So if the regression\n input contains only INSERT/UPDATE/DELETE statements, all the\n responses are at the end, not after each statement.\n\n It's really a mess. How should someone check if a system\n catalog change is O.K. in this situation? I intend to do so\n soon!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 11:07:48 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "regression tests" }, { "msg_contents": "On Wed, 17 Nov 1999, Jan Wieck wrote:\n\n> I know that it's the new psql output format why all the\n> regression tests currently fail. But I think we are in this\n> state for a little too long now. With the latest CVS I got\n\nAs I mentioned before: use the old psql for regression tests and/or just\nrun the regression tests once with the old one and once with the new one\nand make those results the new reference. (Maybe I'm oversimplifying here,\nthough.)\n\nOnce again this thought also: How about running the regression tests on a\nsingle user postgres backend directly? That way you don't rely on some\nobscure frontend and some client library which might change soon, too.\nAlso you have more control over internals. Finally, you could easily run\nthe regression tests on an uninstalled build. Think ./configure; make;\nmake check; make install. Or am I way out there now?\n\n> I see a little problem with checking if the output is still\n> O.K. too. It seems that psql now buffers all the query\n> result messages until a SELECT is done. So if the regression\n> input contains only INSERT/UPDATE/DELETE statements, all the\n> responses are at the end, not after each statement.\n\nHuh? psql doesn't buffer anything. Could you please elaborate on this\nand/or give me an example? I never heard of that one and I thought Bruce\nwas a really thorough tester . . 
.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 17 Nov 1999 12:22:11 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "Peter Eisentraut wrote:\n>\n> Once again this thought also: How about running the regression tests on a\n> single user postgres backend directly? That way you don't rely on some\n> obscure frontend and some client library which might change soon, too.\n> Also you have more control over internals. Finally, you could easily run\n> the regression tests on an uninstalled build. Think ./configure; make;\n> make check; make install. Or am I way out there now?\n\n That should have been done BEFORE messing up anything. This\n hasn't been done, so I think it's the job of those who change\n output formats, to provide new expected regression results\n too. This hasn't been done too, and that's bad.\n\n>\n> > I see a little problem with checking if the output is still\n> > O.K. too. It seems that psql now buffers all the query\n> > result messages until a SELECT is done. So if the regression\n> > input contains only INSERT/UPDATE/DELETE statements, all the\n> > responses are at the end, not after each statement.\n>\n> Huh? psql doesn't buffer anything. Could you please elaborate on this\n> and/or give me an example? I never heard of that one and I thought Bruce\n> was a really thorough tester . . .\n\n As I see, the result messages aren't in the (old) expected\n outputs at all. But they are now. From the boolean test:\n\n CREATE TABLE BOOLTBL1 (f1 bool);\n\n INSERT INTO BOOLTBL1 (f1) VALUES ('t'::bool);\n\n INSERT INTO BOOLTBL1 (f1) VALUES ('True'::bool);\n\n INSERT INTO BOOLTBL1 (f1) VALUES ('true'::bool);\n\n -- BOOLTBL1 should be full of true's at this point\n SELECT '' AS t_3, BOOLTBL1.*;\n CREATE\n INSERT 18633 1\n INSERT 18634 1\n INSERT 18635 1\n t_3 | f1\n -----+----\n | t\n | t\n | t\n (3 rows)\n\n As you can see, the CREATE and INSERT responses are printed\n after the SELECT statement, just before it's own output.\n\n Again, if someone changes things that change output, he has\n to provide new expected results for the regression suite. If\n the changes to psql are still a work in progress, it should\n have been done on separated sources until at least the output\n format is stable.\n\n What actually happened isn't good practice (IMHO). Ask all\n other developers to work around some temporary misbehaviour\n that makes the entire backend development a blind flight. And\n the fatal abnormal backend termination at the end of the\n regression show's what this lazyness can end in. ISTM someone\n has broken something and didn't notice. Thus, at least that\n other one didn't do it with your mentioned workaround.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 12:46:29 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "I wrote:\n\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> Server process (pid 12207) exited with status 139 at Wed Nov 17 10:57:36 1999\n> Terminating any active server processes...\n> Server processes were terminated at Wed Nov 17 10:57:36 1999\n> Reinitializing shared memory and semaphores\n> DEBUG: Data Base System is starting up at Wed Nov 17 10:57:36 1999\n>\n\n I took Peter Eisentraut's advice and did it with the old pslq\n (thanks for the hint).\n\n This problem (as expected) remains and happens in the temp\n test. The two notices occur on creating the temp table and\n the index on it. After that, the database connection get's\n lost on the attempt to drop the temp table.\n\n Since the postmaster is doing recovery then, the numeric test\n hasn't been run. All other tests are still O.K.\n\n The question is, who did something that could cause this\n error?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 13:48:39 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "On Wed, 17 Nov 1999, Jan Wieck wrote:\n\n> That should have been done BEFORE messing up anything. This\n> hasn't been done, so I think it's the job of those who change\n> output formats, to provide new expected regression results\n> too. This hasn't been done too, and that's bad.\n\nI explicitly informed everyone that this would happen a long time before I\nfinalized psql. Nobody seemed to care a lot. Now, weeks after the fact\nsome people start wondering that perhaps they want to run regression tests\nonce in a while. I'm not the regression test maintainer, nor do I have\nknowledge of how to remake them, so all I can do is inform everyone and\ncooperate on anything that's necessary. But silence is implicit approval.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 Nov 1999 14:33:37 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "> I took Peter Eisentraut's advice and did it with the old pslq\n> (thanks for the hint).\n> \n> This problem (as expected) remains and happens in the temp\n> test. The two notices occur on creating the temp table and\n> the index on it. After that, the database connection get's\n> lost on the attempt to drop the temp table.\n> \n> Since the postmaster is doing recovery then, the numeric test\n> hasn't been run. All other tests are still O.K.\n> \n> The question is, who did something that could cause this\n> error?\n\nI am sure it was me changing the temp behavior. I will look at it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Nov 1999 10:52:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > The question is, who did something that could cause this\n> > error?\n>\n> I am sure it was me changing the temp behavior. I will look at it.\n\n Running the queries in question in gdb shows:\n\nbackend> CREATE TABLE temptest(col int);\nbackend> CREATE INDEX i_temptest ON temptest(col);\nbackend> CREATE TEMP TABLE temptest(col int);\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\nbackend> CREATE INDEX i_temptest ON temptest(col);\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\nbackend> DROP INDEX i_temptest;\nbackend> DROP TABLE temptest;\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x806b47d in heap_openr (relationName=0x81c4e90 \"temptest\", lockmode=7)\n at heapam.c:569\n569 if (RelationIsValid(r) && r->rd_rel->relkind == RELKIND_INDEX)\n\n(gdb) print *r\n$2 = {rd_fd = 65536, rd_nblocks = 184, rd_refcnt = 38017,\n rd_myxactonly = 16 '\\020', rd_isnailed = 8 '\\b', rd_isnoname = 0 '\\000',\n rd_unlinked = 0 '\\000', rd_am = 0xb8, rd_rel = 0x2, rd_id = 2,\n rd_lockInfo = {lockRelId = {relId = 403, dbId = 131072}}, rd_att = 0xb8,\n rd_rules = 0x8109480, rd_istrat = 0x0, rd_support = 0xb8, trigdesc = 0x2}\n\n The problem at this point is that r->rd_rel is 0x2, causing\n the SIGSEGV. But I assume the real problem occured earlier\n where the notice's came from. The relation descriptor must\n have gotten messed up somehow during the CREATE TEMP TABLE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 20:37:32 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > The question is, who did something that could cause this\n> > > error?\n> >\n> > I am sure it was me changing the temp behavior. I will look at it.\n> \n> Running the queries in question in gdb shows:\n> \n> backend> CREATE TABLE temptest(col int);\n> backend> CREATE INDEX i_temptest ON temptest(col);\n> backend> CREATE TEMP TABLE temptest(col int);\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> backend> CREATE INDEX i_temptest ON temptest(col);\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> backend> DROP INDEX i_temptest;\n> backend> DROP TABLE temptest;\n> \n\nLet me try some tests now. I have an idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Nov 1999 15:56:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "> > Bruce Momjian wrote:\n> > \n> > > > The question is, who did something that could cause this\n> > > > error?\n> > >\n> > > I am sure it was me changing the temp behavior. I will look at it.\n> > \n> > Running the queries in question in gdb shows:\n> > \n> > backend> CREATE TABLE temptest(col int);\n> > backend> CREATE INDEX i_temptest ON temptest(col);\n> > backend> CREATE TEMP TABLE temptest(col int);\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > backend> CREATE INDEX i_temptest ON temptest(col);\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > backend> DROP INDEX i_temptest;\n> > backend> DROP TABLE temptest;\n> \n> Sorry. I see it now. Let me fix it.\n\nOK, fixed. Seems there was some confusion in the cache over whether\ntables were indexed by logical or physical names in the hash table used to\nclear out the cache. Fixed now. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Nov 1999 18:50:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" } ]
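A toy illustration of the bug class Bruce describes: entries inserted into a cache under one key form but deleted under another, so the delete misses (hence the NOTICEs) and a stale descriptor survives to be chased later, much like the heap_openr() crash in the gdb trace. All names here are invented; the real relcache code is far more involved.

#include <stdio.h>
#include <string.h>

/* One-slot "cache" keyed by name; stands in for a hash table where a
 * temp table has both a user-visible (logical) and an on-disk
 * (physical) name.  Both names used below are invented examples. */
static char cached_key[64];
static int  occupied = 0;

static void
cache_insert(const char *key)
{
    strncpy(cached_key, key, sizeof(cached_key) - 1);
    occupied = 1;
}

static void
cache_delete(const char *key)
{
    if (!occupied || strcmp(cached_key, key) != 0)
    {
        /* the delete misses; the stale entry survives */
        printf("NOTICE: trying to delete an entry that does not exist (%s)\n",
               key);
        return;
    }
    occupied = 0;
}

int
main(void)
{
    cache_insert("pg_temp_12345_0");    /* stored under the physical name */
    cache_delete("temptest");           /* looked up by the logical name */
    /* Later code that trusts the cache then reads a descriptor that was
     * never properly removed or rebuilt -- the kind of stale state that
     * can end in a crash like the heap_openr() one above. */
    return 0;
}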
[ { "msg_contents": "Hi All,\n\nI think I had read this question with count(*) before.....\n\nPgSql returns one row with a null field when we use select MIN or MAX on a\ntable, but the result should be \"no rows selected\".\n\nI didn't try with C function, but I got the same result with psql.\n\nSo, I'm sending a Java example to Peter.\n\ntry {\n rs=stmt.executeQuery(\"select min(field1) from tab where\nfield1>maxValueOfField1\");\n if (rs.next()) {\n System.out.println(\"I found a row !!!!\");\n theResult=rs.getString(1);\n if (theResult==null)\n System.out.println(\"Min of field1 is NULL !!!!\");\n }\n} catch (.............\n\nmaxValueofField1 = select max(field1) from tab;\n\nIs it correct ?\n\nI'm using PgSql 6.5.2, RHLinux/Intel 6.0\n\nThanks,\n\nRicardo Coelho.\n\n\n\n", "msg_date": "Wed, 17 Nov 1999 08:08:51 -0200", "msg_from": "\"Ricardo Coelho\" <[email protected]>", "msg_from_op": true, "msg_subject": "select MIN/MAX when no row selected" } ]
[ { "msg_contents": "> > I took Peter Eisentraut's advice and did it with the old pslq\n> > (thanks for the hint).\n> > \n> > This problem (as expected) remains and happens in the temp\n> > test. The two notices occur on creating the temp table and\n> > the index on it. After that, the database connection get's\n> > lost on the attempt to drop the temp table.\n> > \n> > Since the postmaster is doing recovery then, the numeric test\n> > hasn't been run. All other tests are still O.K.\n> > \n> > The question is, who did something that could cause this\n> > error?\n> \n> I am sure it was me changing the temp behavior. I will look at it.\n\nJan, I can't reproduce the temp regression failure here. I did make\nchanges yesterday morning to this. I assume you have an updated cvs,\nright?\n\nAnyway to make the default numeric regression test run faster. It seems\nto take quite a while compared to the others.\n\nWe certainly need someone to update the regression tests to match the\nnew format so people can continue running regression tests before\napplying patches.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Nov 1999 11:04:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression tests" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We certainly need someone to update the regression tests to match the\n> new format so people can continue running regression tests before\n> applying patches.\n\nI've been using old psql (per Peter's suggestion) to run regression\ntests. I have not pulled CVS since Saturday but things seemed OK then.\nIf there is a breakage, it's recent.\n\nI thought we were putting off committing a new set of regress test\nexpected outputs until the dust settles in the new psql. Isn't Peter\nstill tweaking the output format? Not much point in generating new\nexpected files until everyone agrees the format is frozen.\n\nOf course there's a bit of a catch-22 situation here: since I am not\nusing the new psql, I'm not contributing any feedback on it. The same\nis probably true of some other developers... when we do finally adopt\nnew psql (after regress test update) there may be a bunch of belated\nrequests for changes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 1999 11:51:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We certainly need someone to update the regression tests to match the\n> > new format so people can continue running regression tests before\n> > applying patches.\n> \n> I've been using old psql (per Peter's suggestion) to run regression\n> tests. I have not pulled CVS since Saturday but things seemed OK then.\n> If there is a breakage, it's recent.\n> \n> I thought we were putting off committing a new set of regress test\n> expected outputs until the dust settles in the new psql. Isn't Peter\n> still tweaking the output format? Not much point in generating new\n> expected files until everyone agrees the format is frozen.\n> \n> Of course there's a bit of a catch-22 situation here: since I am not\n> using the new psql, I'm not contributing any feedback on it. The same\n> is probably true of some other developers... 
when we do finally adopt\n> new psql (after regress test update) there may be a bunch of belated\n> requests for changes...\n\nYes, I am waiting to see if anyone changes the format before updating\nall the queries in my book.\n\nIs everyone OK with the new format? Do you like it more or less than\nthe old one? Please someone weight in on one side or the other so we\ncan conclude this. People have been very quiet on this issue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Nov 1999 12:54:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression tests" } ]
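As an aside on the "grouped responses" effect Jan reported earlier in this thread: output can appear reordered without any deliberate buffering when two differently buffered streams share a destination -- with output redirected to a file, stdout is fully buffered while stderr is not. Whether this is what the new psql actually does is unestablished (Peter states it buffers nothing); the fragment below only demonstrates the generic mechanism.

#include <stdio.h>

int
main(void)
{
    /* Run as:  ./a.out > out.txt 2>&1   and inspect out.txt.
     * stdout is fully buffered when redirected, stderr is not, so the
     * stderr line lands in the file before the earlier printf output. */
    printf("INSERT 18633 1\n");         /* buffered: flushed late */
    fprintf(stderr, " t_3 | f1\n");     /* unbuffered: written at once */
    printf("INSERT 18634 1\n");
    fflush(stdout);                     /* an explicit flush here restores
                                         * ordering for later output */
    return 0;
}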
[ { "msg_contents": "\n\n> -----Original Message-----\n> From:\tNarayanan, Kannan \n> Sent:\tWednesday, November 17, 1999 10:47 AM\n> To:\t'[email protected]'\n> Subject:\tHow to select the millisecond/microsecond parts of the\n> datetime column\n> Importance:\tHigh\n> \n> Hello,\n> \n> Product version: Postgres 6.5.3\n> \n> SELECT DATETIME('MILLISECOND', 'NOW'::DATETIME) always returns 0(ZERO) and\n> so does 'MICROSECOND'. However a reading of the manuals indicate that the\n> database supports much higher precision values. How do I retrieve the\n> additional precision values (beyond seconds) when I use the datetime\n> field? This is a requirement for a conversion project that I am working\n> on. Could someone help please. \n> \n> Thanks\n> Kannan\n", "msg_date": "Wed, 17 Nov 1999 10:51:12 -0800", "msg_from": "\"Narayanan, Kannan\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: How to select the millisecond/microsecond parts of the dateti\n\tme column" } ]
[ { "msg_contents": "\nThis was sent to [email protected]. Anyone have an answer\nto it? I'm not familiar with oracle so I don't know what he's\ntalking about.\n\nVince.\n\n\n\n-----FW: <[email protected]>-----\nFrom: =?iso-8859-1?q?nitin=20thakkar?= <[email protected]>\nTo: [email protected]\nSubject: Query\n\nHi,\n\nI am PostGRESql user and have a query :\n\nDOES POSTGRESQL SUPPORT DATABASE LINK AS IN ORACLE 8.0\n\nRegards\nNitin\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n\n--------------End of forwarded message-------------------------\n\n\n", "msg_date": "Wed, 17 Nov 1999 14:07:21 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Query" }, { "msg_contents": ">\n>\n> This was sent to [email protected]. Anyone have an answer\n> to it? I'm not familiar with oracle so I don't know what he's\n> talking about.\n>\n> Vince.\n>\n>\n> I am PostGRESql user and have a query :\n>\n> DOES POSTGRESQL SUPPORT DATABASE LINK AS IN ORACLE 8.0\n>\n\n AFAIK it means to show up a virtual relation, that is a\n relation in another database. So if you query your local\n relation, your backend connects to the other database and\n virtually shares this relation.\n\n No, PostgreSQL does not support this.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 20:28:11 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FW: Query" } ]
[ { "msg_contents": "Hi,\n\n I just committed some changes that require an initdb.\n\n New are the discussed, simple LZ compressor, placed into\n /utils/adt/pg_compress.c, and a new lztext data type based on\n it. You'll find a fairly detailed description of the\n compression algorithm in the comments at the top of\n pg_lzcompress.c.\n\n Not very surprisingly to me it turns out, that the compressor\n does a very good job on rule action strings. I used the 48\n rules that can be found in pg_rewrite after the regression\n test. The original string sizes range from 820 to 4615 and\n the compression rates from 35-76% with an average of 60%. The\n 4615 size rule action has been coded into a 1126\n octet_length.\n\n For the lztext type, there are conversion functions to/from\n text and the length() and octet_length() functions available.\n Length() returns the same as length on text would. While\n octet_length returns the compressed size without VARHDRSZ.\n\n The type does not support MULTIBYTE or CYR_ENCODE up to now.\n It shouldn't be too hard to add it and after that, we might\n add another lzbpchar type too. The latter is really\n interesting, because an empty char(200) (thus containing 200\n spaces) could result in an octet_length of 12 instead of 204\n - that's a compression rate of 94.1%! It actually wouldn't,\n because the compressors default is to start only if the input\n is at least 256 bytes, but there is a mechanism so a lzbpchar\n type could force this behaviour.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 17 Nov 1999 23:38:36 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "LZ compressing data type" }, { "msg_contents": "> \n> Hi,\n> \n> I just committed some changes that require an initdb.\n> \n> New are the discussed, simple LZ compressor, placed into\n> /utils/adt/pg_compress.c, and a new lztext data type based on\n> it. You'll find a fairly detailed description of the\n> compression algorithm in the comments at the top of\n> pg_lzcompress.c.\n\nOne question.\n\nYou say this is an LZ algorythm. Is this the same as LZW, as in the\nsame kind that gifs use. If so, are we sure that this algorythm is not\ncovered by the Unisys patent? Their patent is on the algorythm for\ncompression not on gifs themselves. \n\nIf it is, you are liable for a $5,000 fee to use it throughout your\nsite, or a per-licence fee if you are distributing (thay are worked out\non a per case basis but typical licences are $5 per unit sold from what\nI am told).\n\nI came across this problem with a gif manipulation program that *I\nWROTE FROM SCRATCH* and had to switch to using the libungif\ncompression routines.\n\nJust thought Id mention this, in case it has been overlooked.\n\n\t\t\t\t\t\t~Michael\n", "msg_date": "Wed, 17 Nov 1999 23:54:07 +0000 (GMT)", "msg_from": "Michael Simms <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LZ compressing data type" }, { "msg_contents": ">\n> >\n> > Hi,\n> >\n> > I just committed some changes that require an initdb.\n> >\n> > New are the discussed, simple LZ compressor, placed into\n> > /utils/adt/pg_compress.c, and a new lztext data type based on\n> > it. 
You'll find a fairly detailed description of the\n> > compression algorithm in the comments at the top of\n> > pg_lzcompress.c.\n>\n> One question.\n>\n> You say this is an LZ algorythm. Is this the same as LZW, as in the\n> same kind that gifs use. If so, are we sure that this algorythm is not\n> covered by the Unisys patent? Their patent is on the algorythm for\n> compression not on gifs themselves.\n>\n> If it is, you are liable for a $5,000 fee to use it throughout your\n> site, or a per-licence fee if you are distributing (thay are worked out\n> on a per case basis but typical licences are $5 per unit sold from what\n> I am told).\n\n It't an SLZ algorithm, not LZW. There are FLZ and LZ77 out\n too (and I don't know how many other subtypes). At least, LZ\n is a family of compression algorithms where LZW is just a\n member of them.\n\n I've written the entire code from scatch, inspired by an\n article from Adisak Pochanayon dated back in 1993. If they\n have a license on this algorithm, they have one on code that\n can be coded from scatch in 20 hours after reading\n information that's claimed to be free on the internet,\n congrats.\n\n This is M$ practice, I never thought that there's more than\n one company out doing this kind of business.\n\n But thanks for the info.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Nov 1999 01:20:45 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LZ compressing data type" }, { "msg_contents": "> > If it is, you are liable for a $5,000 fee to use it throughout your\n> > site, or a per-licence fee if you are distributing (thay are worked out\n> > on a per case basis but typical licences are $5 per unit sold from what\n> > I am told).\n>\n> It't an SLZ algorithm, not LZW. There are FLZ and LZ77 out\n> too (and I don't know how many other subtypes). At least, LZ\n> is a family of compression algorithms where LZW is just a\n> member of them.\n>\n> [...]\n>\n> But thanks for the info.\n\n From the very beginning of US patent No. 4,558,302 (the thing\n behind Unisys's LZW):\n\n The compressor searches the input stream to determine the\n longest match to a stored string. Each stored string\n comprises a prefix string and an extension character\n where the extension character is the last character in\n the string and the prefix string comprises all but the\n extension character. Each string has a code signal\n associated therewith and a string is stored in the string\n table by, at least implicitly, storing the code signal\n for the string, the code signal for the string prefix and\n the extension character.\n\n The format my code stores is different to this definition.\n It only stores offset/length pairs or literal bytes, signaled\n by a bit in a control unit. So there is no prefix string\n and/or extension character at all.\n\n If someone might state that storing TAG's and LITERAL chars\n is still equivalent to PREFIX and EXTension character, I say\n that my code possibly output's sequences of immediately\n following TAG's, that aren't neccessarily PREFIX'es. 
One TAG\n could code a sequence of characters, not previously occured\n in any other TAG, without any intermediate LITERAL character\n occured.\n\n And my code forgets AUTOMATICALLY about too far behind\n occurences of matching strings for speed AND compression rate\n improvement.\n\n Thus I think my implementation is not related to this patent.\n\n When reading patent #4,558,302 I really remembered the\n question, if anyone could ever have a patent on rectangle\n front lights for cars. The definition is so general, that I\n cannot think that gzip, bzip or any other existing\n compression tool doesn't use it's METHOD. If such a GENERAL\n patent is possible at all, car manufacturers all over the\n world would have to pay millions of dollars license fee's\n just to sell cars with rectangular lights.\n\n I don't see any relationship between my code and what Unisys\n has a patent for. If they see, I might be blind - or they see\n a horizon from inside a black hole. If they need to, I'll\n enlighten them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Nov 1999 03:23:41 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LZ compressing data type" }, { "msg_contents": "At 03:23 18/11/99 +0100, Jan Wieck wrote:\n>\n> From the very beginning of US patent No. 4,558,302 (the thing\n> behind Unisys's LZW):\n>\n\nThe other thing to remember is that there are a very large number of\ncountries in which this patent does not apply because it was not even\napplied for until after the alogorithm was made public. In the first\npublication of the LZW algorithm, there was no patent notice, but I think\nthe US patent had been applied for.\n\nFor US-based distribution sites, however, you may find the threat of legal\naction from a large company is enough to make it less desirable to\ndistribute. \n\nFWIW, I think Unisys patented the LZW algorithm only.\n\nIt's probably a bit late to ask, but how difficult would it be to\ngeneralize your code to use the compressor/encryption library of choice?\nzlib and pgp spring to mind. This would kill three birds with one stone.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 18 Nov 1999 14:26:05 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LZ compressing data type" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> FWIW, I think Unisys patented the LZW algorithm only.\n\nMore specifically, Unisys patented Terry Welch's variant of the LZ78\nclass of algorithms. Jan's code falls in the substantially different\nLZ77 class. There could be problematic patents out there, but Unisys'\nis certainly not one.\n\n(If you don't know the difference between LZ77 and LZ78, see the\ncomp.compression FAQ ... 
but don't raise alarms about compression\npatents if you don't know even that much about the field, eh?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 1999 23:28:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LZ compressing data type " }, { "msg_contents": "At 23:28 17/11/99 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> FWIW, I think Unisys patented the LZW algorithm only.\n>\n>More specifically, Unisys patented Terry Welch's variant of the LZ78\n>class of algorithms. \n\nHence the 'W' in LZW, no?\n\n>Jan's code falls in the substantially different\n>LZ77 class. There could be problematic patents out there, but Unisys'\n>is certainly not one.\n>\n\nIs this a legal opinion, or a personal one?\n\n\n>(If you don't know the difference between LZ77 and LZ78, see the\n>comp.compression FAQ ... but don't raise alarms about compression\n>patents if you don't know even that much about the field, eh?)\n\n1. I did not raise the alarm, I just responded to it.\n\n2. Since Unisys have moved to block code that does not even use LZW\ncompression, but happens to be compatible with most GIF decompressors, I\nthink your comment is misinformed, eh? (See the GD graphics library)\n\nThe fear is not whether one wins court cases, but whether you can afford to\nfight them.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 18 Nov 1999 15:50:10 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LZ compressing data type " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 23:28 17/11/99 -0500, Tom Lane wrote:\n>> Jan's code falls in the substantially different\n>> LZ77 class. There could be problematic patents out there, but Unisys'\n>> is certainly not one.\n\n> Is this a legal opinion, or a personal one?\n\nI'm not a lawyer, but I believe I qualify as an expert witness when\nit comes to compression questions. It is not a matter of opinion\nwhether Jan's code is LZ77 or LZ78, nor is it a matter of opinion\nwhich class Unisys has claims on a small piece of.\n\n> The fear is not whether one wins court cases, but whether you can afford to\n> fight them.\n\nThere are patents related to databases. Shall we therefore shut down\nPostgres development and run screaming for the hills? If we let\nourselves be intimidated by irrelevant patents, the Microsofts and\nUnisyses will win without a fight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 00:40:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LZ compressing data type " }, { "msg_contents": "Tom Lane wrote:\n\n> There are patents related to databases. Shall we therefore shut down\n> Postgres development and run screaming for the hills? If we let\n> ourselves be intimidated by irrelevant patents, the Microsofts and\n> Unisyses will win without a fight.\n\n It's definitely a LZ77 coder, and this technique is used in\n many other compressors too (lha, arj, zlib, gzip, ...). 
So I\n don't think we have to fear anything.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Nov 1999 13:13:38 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LZ compressing data type" } ]
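Jan's description of his format in this thread (literal bytes and offset/length tags selected by bits in a control unit) is the essence of the LZ77 family. For readers who want to see the distinction from LZW concretely, here is a minimal decompressor for such a stream. The concrete tag layout (two bytes holding a 12-bit offset and a 4-bit length, minimum match of 3) is invented for the illustration and is NOT the actual pg_lzcompress.c format; the input is assumed well-formed.

    #include <stddef.h>

    /*
     * Toy decompressor for an LZ77-style stream: a control byte whose
     * eight bits flag, item by item, whether the next thing is a
     * literal byte or an offset/length tag pointing back into the
     * output produced so far.  Tag layout here is made up for the
     * example and is NOT the pg_lzcompress.c on-disk format.
     */
    size_t
    toy_lz77_decompress(const unsigned char *src, size_t srclen,
                        unsigned char *dst, size_t dstcap)
    {
        size_t  si = 0, di = 0;

        while (si < srclen)
        {
            unsigned char ctrl = src[si++];
            int           bit;

            for (bit = 0; bit < 8 && si < srclen; bit++, ctrl >>= 1)
            {
                if (ctrl & 1)
                {
                    /* Tag: copy "len" bytes starting "off" bytes back. */
                    unsigned    off, len;

                    if (si + 2 > srclen)
                        return di;          /* truncated input */
                    off = (src[si] << 4) | (src[si + 1] >> 4);
                    len = (src[si + 1] & 0x0f) + 3;
                    si += 2;
                    while (len-- > 0)
                    {
                        if (di >= dstcap)
                            return di;      /* output buffer full */
                        /* byte-by-byte, so overlapping matches     */
                        /* (run-length-like repeats) come out right */
                        dst[di] = dst[di - off];
                        di++;
                    }
                }
                else
                {
                    /* Literal: one input byte copied through. */
                    if (di >= dstcap)
                        return di;
                    dst[di++] = src[si++];
                }
            }
        }
        return di;                          /* bytes produced */
    }

There is no string table anywhere in this scheme, which is exactly the point made above: the prefix-plus-extension-character machinery the patent claims has no counterpart in a pure offset/length coder.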
[ { "msg_contents": "No, we don't support anything like this. This is the ability to create a\nlink in one database that points to a table in another database, so that it\ncan be accessed as if local. The other database can be a separate instance\n(a concept which PG doesn't really have), or even on another machine.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Vince Vielhaber [mailto:[email protected]]\n>> Sent: Wednesday, November 17, 1999 9:07 PM\n>> To: [email protected]\n>> Subject: [HACKERS] FW: Query\n>> \n>> \n>> \n>> This was sent to [email protected]. Anyone have an answer\n>> to it? I'm not familiar with oracle so I don't know what he's\n>> talking about.\n>> \n>> Vince.\n>> \n>> \n>> \n>> -----FW: \n>> <[email protected]>-----\n>> From: =?iso-8859-1?q?nitin=20thakkar?= <[email protected]>\n>> To: [email protected]\n>> Subject: Query\n>> \n>> Hi,\n>> \n>> I am PostGRESql user and have a query :\n>> \n>> DOES POSTGRESQL SUPPORT DATABASE LINK AS IN ORACLE 8.0\n>> \n>> Regards\n>> Nitin\n>> \n>> \n>> =====\n>> \n>> __________________________________________________\n>> Do You Yahoo!?\n>> Bid and sell for free at http://auctions.yahoo.com\n>> \n>> --------------End of forwarded message-------------------------\n>> \n>> \n>> \n>> ************\n>> \n", "msg_date": "Thu, 18 Nov 1999 11:56:25 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] FW: Query" } ]
[ { "msg_contents": "Hi,\n\n I just committed a little change to pg_rewrite.h and related\n sources. Needs a new clean compile and initdb. Attributes\n ev_qual and ev_action are now of type lztext (formerly text).\n\n This one is impressive:\n\n create table t1 (\n typname name ,\n typowner int4 ,\n typlen int2 ,\n typprtlen int2 ,\n typbyval bool ,\n typtype char ,\n .\n . (totally 54 attributes of various types)\n .\n atthasdef bool\n );\n CREATE\n\n create view v1 as select * from t1;\n CREATE 148540 1\n\n select rulename, length(ev_action), octet_length(ev_action),\n (100 - octet_length(ev_action) * 100 / length(ev_action))::text || '%'\n as ratio\n from pg_rewrite;\n rulename |length|octet_length|ratio\n --------------+------+------------+-----\n _RETpg_user | 2683| 794|71%\n _RETpg_rules | 2562| 934|64%\n _RETpg_views | 3740| 1043|73%\n _RETpg_tables | 4615| 1126|76%\n _RETpg_indexes| 2639| 854|68%\n _RETv1 | 14121| 1910|87%\n (6 rows)\n\n That means, the rule action string of the view v1 has an\n original length of 14121 bytes and is stored in pg_rewrite in\n 1910 bytes only.\n\n This should give us some room for complicated views/rules.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Nov 1999 14:57:36 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "rules use lztext - initdb required" }, { "msg_contents": " That means, the rule action string of the view v1 has an\n original length of 14121 bytes and is stored in pg_rewrite in\n 1910 bytes only.\n\n This should give us some room for complicated views/rules.\n\nThis is great! One of my major frustrations has been designing a\ngreat set of tables/views and then running out of space for the\nrules.\n\nThanks for putting this together.\n\nCheers,\nBrook\n", "msg_date": "Thu, 18 Nov 1999 08:20:19 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] rules use lztext - initdb required" } ]
[ { "msg_contents": "Hi,\n\nDoes anybody know how a user could change your own password ? (without\npg_shadow access, of course).\n\nI would like to transfer this task to each one.\n\nI don't remember. Trusted languages overlaps security restrictions, isn't it\n?\n\nIf I build a C function chgpwd(username,oldpassword,newpassword), it will\nwork ?\n\nThanks,\n\nRicardo Coelho.\n\n\n", "msg_date": "Thu, 18 Nov 1999 12:21:24 -0200", "msg_from": "\"Ricardo Coelho\" <[email protected]>", "msg_from_op": true, "msg_subject": "Change your own password" } ]
[ { "msg_contents": "The attached report is still correct as of 6.5.3; but will having a\ndatabase name that has to be quoted cause problems anywhere that SQL\nqueries are constructed to do internal operations?\n\nThe SQL command `CREATE DATABASE \"www-data\"' works correctly.\n\n------- Forwarded Message\n\nDate: Thu, 18 Nov 1999 20:39:51 +0100\nFrom: Eric Gentilini <[email protected]>\nTo: [email protected]\nSubject: [POSTGRESQL] Does createdb belong to postgresql ?\n\nhi !\n\nI fixed a _very_ little bug in createdb and destroydb that prevents the\ncreation of postgresql users whose name contains special characters, and\nespecially '-', useful when accessing a database through a CGI script, whose\nuser is 'www-data' by default.\n\nBut I don't know if this prog is debian specific or if it belongs to\npostrgesql.\n\nCan you help me ?\nthx !\n\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name : Eric Gentilini\nYour email address : [email protected]\n\n\nSystem Configuration\n- ---------------------\n Architecture (example: Intel Pentium) : Intel Pentium&PentiumII\n\n Operating System (example: Linux 2.0.26 ELF) : Linux 2.2.13 ELF\n\n PostgreSQL version (example: PostgreSQL-6.5.2): PostgreSQL-6.5.2 \n\n Compiler used (example: gcc 2.8.0) : not compiled by me\n\n\nPlease enter a FULL description of your problem:\n- ------------------------------------------------\nI don't know if it is really a bug or if it is intentionnal, but\nI found out that createdb and destroydb prevented the\ncreation/destruction of postgresql users whose\nname contained special characters, and\nespecially '-', useful when accessing a database through a CGI script, whose\nuser is 'www-data' by default.\n(In this case, createdb is executed by createuser)\nThe query aborts with the message \"ERROR: parser: parse error at or near \"-\"\"\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n- ----------------------------------------------------------------------\nfor instance : createdb www-data\n\n\nIf you know how this problem might be fixed, list the solution below:\n- ---------------------------------------------------------------------\nThe simplest solution I found is to modify the scripts createdb and destroydb.\ncreatedb : replace line 114 with :\npsql $PASSWDOPT -tq $AUTHOPT $PGHOSTOPT $PGPORTOPT -c \"create\ndatabase \\\"$dbname\\\" $location $encoding\" template1\n ^^^ ^^^\ndestroydb : replace line 78 with :\npsql -tq $AUTHOPT $PGHOSTOPT $PGPORTOPT -c \"drop database \\\"$dbname\\\"\" template\n1\n ^^^ ^^^\n\nEric (Yam) Gentilini\n\nLinux � Nantes sur http://www.linux-nantes.fr.eu.org\n\n\n------- End of Forwarded Message\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"A Song for the sabbath day. It is a good thing to \n give thanks unto the LORD, and to sing praises unto \n thy name, O most High.\" Psalms 92:1 \n\n\n", "msg_date": "Thu, 18 Nov 1999 21:19:33 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Createdb problem report" }, { "msg_contents": "\nApplied. 
Will appear in 7.0\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> The attached report is still correct as of 6.5.3; but will having a\n> database name that has to be quoted cause problems anywhere that SQL\n> queries are constructed to do internal operations?\n> \n> The SQL command `CREATE DATABASE \"www-data\"' works correctly.\n> \n> ------- Forwarded Message\n> \n> Date: Thu, 18 Nov 1999 20:39:51 +0100\n> From: Eric Gentilini <[email protected]>\n> To: [email protected]\n> Subject: [POSTGRESQL] Does createdb belong to postgresql ?\n> \n> hi !\n> \n> I fixed a _very_ little bug in createdb and destroydb that prevents the\n> creation of postgresql users whose name contains special characters, and\n> especially '-', useful when accessing a database through a CGI script, whose\n> user is 'www-data' by default.\n> \n> But I don't know if this prog is debian specific or if it belongs to\n> postrgesql.\n> \n> Can you help me ?\n> thx !\n> \n> \n> ============================================================================\n> POSTGRESQL BUG REPORT TEMPLATE\n> ============================================================================\n> \n> \n> Your name : Eric Gentilini\n> Your email address : [email protected]\n> \n> \n> System Configuration\n> - ---------------------\n> Architecture (example: Intel Pentium) : Intel Pentium&PentiumII\n> \n> Operating System (example: Linux 2.0.26 ELF) : Linux 2.2.13 ELF\n> \n> PostgreSQL version (example: PostgreSQL-6.5.2): PostgreSQL-6.5.2 \n> \n> Compiler used (example: gcc 2.8.0) : not compiled by me\n> \n> \n> Please enter a FULL description of your problem:\n> - ------------------------------------------------\n> I don't know if it is really a bug or if it is intentionnal, but\n> I found out that createdb and destroydb prevented the\n> creation/destruction of postgresql users whose\n> name contained special characters, and\n> especially '-', useful when accessing a database through a CGI script, whose\n> user is 'www-data' by default.\n> (In this case, createdb is executed by createuser)\n> The query aborts with the message \"ERROR: parser: parse error at or near \"-\"\"\n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> - ----------------------------------------------------------------------\n> for instance : createdb www-data\n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> - ---------------------------------------------------------------------\n> The simplest solution I found is to modify the scripts createdb and destroydb.\n> createdb : replace line 114 with :\n> psql $PASSWDOPT -tq $AUTHOPT $PGHOSTOPT $PGPORTOPT -c \"create\n> database \\\"$dbname\\\" $location $encoding\" template1\n> ^^^ ^^^\n> destroydb : replace line 78 with :\n> psql -tq $AUTHOPT $PGHOSTOPT $PGPORTOPT -c \"drop database \\\"$dbname\\\"\" template\n> 1\n> ^^^ ^^^\n> \n> Eric (Yam) Gentilini\n> \n> Linux _ Nantes sur http://www.linux-nantes.fr.eu.org\n> \n> \n> ------- End of Forwarded Message\n> \n> \n> -- \n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"A Song for the sabbath day. 
It is a good thing to \n> give thanks unto the LORD, and to sing praises unto \n> thy name, O most High.\" Psalms 92:1 \n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 16:46:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Createdb problem report" }, { "msg_contents": "The cleanup on the scripts I recently did contained that, too. I think\nit's on hold now because I'm going to fix up the create user SQL statement\nto allow picking your user id first. Or was there any other reason?\nAnyway, just letting you know that this problem has been recognized.\n\n\t-Peter\n\nOn 1999-11-18, Bruce Momjian mentioned:\n\n> \n> Applied. Will appear in 7.0\n> \n> \n> > I fixed a _very_ little bug in createdb and destroydb that prevents the\n> > creation of postgresql users whose name contains special characters, and\n> > especially '-', useful when accessing a database through a CGI script, whose\n> > user is 'www-data' by default.\n\n\n\n> \n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 Nov 1999 00:08:05 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Createdb problem report" } ]
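The patched scripts work by double-quoting the database name. For anyone generating such SQL from C instead of from a shell script, the same rule is easy to apply; here is a small hypothetical helper (the output buffer must hold at least 2 * strlen(name) + 3 bytes):

    /*
     * Wrap an SQL identifier in double quotes, doubling any embedded
     * double quote -- the same trick the patched createdb/destroydb
     * scripts use so that names like "www-data" survive the parser.
     */
    void
    quote_sql_identifier(const char *name, char *out)
    {
        char       *o = out;

        *o++ = '"';
        for (; *name != '\0'; name++)
        {
            if (*name == '"')
                *o++ = '"';         /* "" is a literal " inside quotes */
            *o++ = *name;
        }
        *o++ = '"';
        *o = '\0';
    }

Fed the name www-data, it produces the quoted form that CREATE DATABASE accepts, exactly as the report's fixed psql invocation does.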
[ { "msg_contents": "Okay, here is my semiofficial take on the situation:\n\n* Use the psql from the 6.5.* distro to do regression tests until further\nnotice.\n\n\n* Although the output format in the current psql is not under intensive\ndevelopment anymore, I am not sure if I can guarantee a \"freeze\" soon.\nOnce in a while I find some sort of flaw in really strange query results;\nthe aim of the output is to be visually pleasing, not to provide an exact\nmatch so something. Having said that, I do not expect any major changes to\ntake place anymore though.\n\n\n* The other problem is *what* is actually printed, as opposed to the\npeculiarities of the table format. This is still somewhat confusing even\nto me and I am still fixing things so they make sense at the end.\n\nExample:\n***OLD\nQUERY: CREATE TABLE BOOLTBL1 (f1 bool);\nQUERY: INSERT INTO BOOLTBL1 (f1) VALUES ('t'::bool);\nQUERY: INSERT INTO BOOLTBL1 (f1) VALUES ('True'::bool);\nQUERY: INSERT INTO BOOLTBL1 (f1) VALUES ('true'::bool);\nQUERY: SELECT '' AS t_3, BOOLTBL1.*;\nt_3|f1\n---+--\n |t\n |t\n |t\n(3 rows)\n\n***CURRENT\nCREATE TABLE BOOLTBL1 (f1 bool);\n \nINSERT INTO BOOLTBL1 (f1) VALUES ('t'::bool);\n \nINSERT INTO BOOLTBL1 (f1) VALUES ('True'::bool);\n \nINSERT INTO BOOLTBL1 (f1) VALUES ('true'::bool);\n \n \n-- BOOLTBL1 should be full of true's at this point\nSELECT '' AS t_3, BOOLTBL1.*;\n t_3 | f1\n-----+----\n | t\n | t\n | t\n(3 rows)\n\n(In fact, it's so current, it's not even in CVS yet, thanks to some\nproblems pointed out by Jan.)\n\nYes, there actually is a reasoning behind all of this, I'm just not sure\nright now what it was ;). If someone is interested, I can bore you with\nthe details.\n\n\n* Since no one has picked up on my idea to run the tests directly on the\nbackend, I will keep reiterating this idea until someone shuts me up\n:*) The idea would be to have a target \"check\" in the top level makefile\nlike this (conceptually):\n\ncheck: all\n\tmkdir ./regress\n\tinitdb -l . -d ./regress\n\tfor i in test1 test2 test3 ...; do\n\t\tpostgres -D ./regress -E template1 \\\n\t\t < $(srcdir)/test/regress/sql/$i.sql \\\n\t\t >& output-$i.out\n\tdone\n\tfor i in test1 test2 test3 ...; do\n\t\tcmp output-$i.out expected-$i.out\n\t\tif [ $? == 1]; then\n\t\t\techo \"Test $i failed.\"\n\t\telse\n\t\t\techo \"Test $i passed.\"\n\t\t\trm -f output-$i.out\n\t\tfi\n\tdone\n\trm -rf ./regress\n\nThen you can do\n./configure\nmake\nmake check\nmake install\n\nOr am I missing something here? Of course this change would require some\nwork, but I'm just getting at the concept here.\n\n\nFinally, I'd like to apologize for the extra trouble some must have had. 
I\ncan only offer to cooperate on anything that needs to be done.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Thu, 18 Nov 1999 23:29:58 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "psql & regress tests" }, { "msg_contents": "> * Since no one has picked up on my idea to run the tests directly on the\n> backend, I will keep reiterating this idea until someone shuts me up\n> :*) The idea would be to have a target \"check\" in the top level makefile\n> like this (conceptually):\n\nRunning the backend standalone does not use locking with other backends,\nso it is dangerous.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 17:41:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "On Thu, Nov 18, 1999 at 05:41:36PM -0500, Bruce Momjian wrote:\n> > * Since no one has picked up on my idea to run the tests directly on the\n> > backend, I will keep reiterating this idea until someone shuts me up\n> > :*) The idea would be to have a target \"check\" in the top level makefile\n> > like this (conceptually):\n> \n> Running the backend standalone does not use locking with other backends,\n> so it is dangerous.\n\nBruce, how does this apply to Peter's suggestion? We're talking about\n_regression_ tests. Things to do after changing the code. Do you often\nrecompile, and run regression tests against a db with a (now out of date)\npostmaster running against it? Do other developers? I'd have thought a\ncomplete shutdown/restart is part of the cycle anyway. It has to be if an\ninitdb is in there, of course. Checking to make sure a postmaster isn't\nrunning could be added to Peter's script, just to be sure.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 18 Nov 1999 17:13:33 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "> On Thu, Nov 18, 1999 at 05:41:36PM -0500, Bruce Momjian wrote:\n> > > * Since no one has picked up on my idea to run the tests directly on the\n> > > backend, I will keep reiterating this idea until someone shuts me up\n> > > :*) The idea would be to have a target \"check\" in the top level makefile\n> > > like this (conceptually):\n> > \n> > Running the backend standalone does not use locking with other backends,\n> > so it is dangerous.\n> \n> Bruce, how does this apply to Peter's suggestion? We're talking about\n> _regression_ tests. Things to do after changing the code. Do you often\n> recompile, and run regression tests against a db with a (now out of date)\n> postmaster running against it? Do other developers? I'd have thought a\n> complete shutdown/restart is part of the cycle anyway. It has to be if an\n> initdb is in there, of course. Checking to make sure a postmaster isn't\n> running could be added to Peter's script, just to be sure.\n\nRegression tests are often run by end-users as part of the install. 
I \ncan imagine someone seeing a problem and running the regression tests\nwhile backends are running, to see if everything is still working.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 18:35:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": ">\n> On Thu, Nov 18, 1999 at 05:41:36PM -0500, Bruce Momjian wrote:\n> > > * Since no one has picked up on my idea to run the tests directly on the\n> > > backend, I will keep reiterating this idea until someone shuts me up\n> > > :*) The idea would be to have a target \"check\" in the top level makefile\n> > > like this (conceptually):\n> >\n> > Running the backend standalone does not use locking with other backends,\n> > so it is dangerous.\n>\n> Bruce, how does this apply to Peter's suggestion? We're talking about\n> _regression_ tests. Things to do after changing the code. Do you often\n> recompile, and run regression tests against a db with a (now out of date)\n> postmaster running against it? Do other developers? I'd have thought a\n> complete shutdown/restart is part of the cycle anyway. It has to be if an\n> initdb is in there, of course. Checking to make sure a postmaster isn't\n> running could be added to Peter's script, just to be sure.\n\n I'm actually doing some tests to see if the 'make check' would be\n possible. I.e. doing a complete install with POSTGRESDIR\n below the regress dir, running initdb with the libdir and\n datadir pointing into there too, then starting a new postmaster\n on a different port in background etc., etc., etc..\n\n That would make it possible to do a complete check without\n doing a regular install at all.\n\n Will give some results soon.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 00:49:51 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> * Since no one has picked up on my idea to run the tests directly on the\n>> backend, I will keep reiterating this idea until someone shuts me up\n\n> Running the backend standalone does not use locking with other backends,\n> so it is dangerous.\n\nIt wouldn't be particularly \"dangerous\" if we assume that no one else is\naccessing the regression database. What it *would* be is less useful at\ncatching problems. Standalone mode wouldn't test the cross-backend\ninterlocking code at all.\n\nAdmittedly, running a bunch of tests serially isn't a strong stress test\nof cross-backend behavior, but it's not as content-free a check as you\nmight think. 
On my machine, at least, the old backend is still around\ndoing shutdown for the first half-second or so while the next one is\nrunning.\n\nWhat I'd really like to see is some deliberate parallelism in some of\nthe regress tests...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 19:17:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests " }, { "msg_contents": "Tom Lane wrote:\n\n> Admittedly, running a bunch of tests serially isn't a strong stress test\n> of cross-backend behavior, but it's not as content-free a check as you\n> might think. On my machine, at least, the old backend is still around\n> doing shutdown for the first half-second or so while the next one is\n> running.\n>\n> What I'd really like to see is some deliberate parallelism in some of\n> the regress tests...\n\n It's amusing how often we two have the same wishes or ideas.\n\n The run_check.sh script I'm actually hacking on would be a\n replacement for regress.sh, started off from the 'make\n check'. Right from the first try I added syntax to the\n sql/tests file to run groups of tests in parallel,\n intermixed with single tests run sequentially.\n\n The new syntax will look like this:\n\n parallel group1\n test boolean\n test char\n test name\n endparallel\n\n test varchar\n test text\n test strings\n\n parallel group2\n test int2\n test int4\n test int8\n endparallel\n\n .\n .\n .\n\n The above will run boolean, char and name in parallel. After\n all three have terminated, it will check their output and continue to\n run varchar, text and strings sequentially, followed by the\n next parallel group.\n\n To test real concurrency we might need to split up some or\n create new tests, where the same tables are accessed\n concurrently. But that wouldn't be hard to do, I think.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 01:57:30 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> * Since no one has picked up on my idea to run the tests directly on the\n> >> backend, I will keep reiterating this idea until someone shuts me up\n> \n> > Running the backend standalone does not use locking with other backends,\n> > so it is dangerous.\n> \n> It wouldn't be particularly \"dangerous\" if we assume that no one else is\n> accessing the regression database. What it *would* be is less useful at\n> catching problems. Standalone mode wouldn't test the cross-backend\n> interlocking code at all.\n\nAny modifications to shared pg_ tables would be a problem. Also, pg_log\nand pg_variable locking is not happening in there either, is it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 19:58:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It wouldn't be particularly \"dangerous\" if we assume that no one else is\n>> accessing the regression database. What it *would* be is less useful at\n>> catching problems. Standalone mode wouldn't test the cross-backend\n>> interlocking code at all.\n\n> Any modifications to shared pg_ tables would be a problem. Also, pg_log\n> and pg_variable locking is not happening in there either, is it?\n\nGood point --- it wouldn't be just that database, but the whole\ninstallation (data directory) that would have to be unused. You really\nwouldn't dare even have a postmaster running, at least not in the same\ndata directory. But that could be made safe by using a nonstandard\nlocation for the data directory for regression.\n\nHowever, this is all beside the main point: we want the regress tests\nto run in an environment as close as possible to the way Postgres is\nnormally used. The more we hack up a special regress-test environment,\nthe less the tests reflect reality.\n\nAside from the cross-backend synchronization issue, I forgot to point\nout a really obvious benefit: doing it the current way allows the regress\ntests to help check the backend's frontend communication code, and\nlibpq, and psql itself. We'd need some other way of testing all that\ncode if we switched to a standalone-backend regression test set.\n\nWhat I *would* like to see is more support for running regress tests on\na not-yet-installed build, so people can test a fresh build before they\nblow away their working installation. This requires doing an initdb\ninto a temporary directory, starting a postmaster therein (using a\nnonstandard port number), and running the tests there. This is doable\nby hand, of course, but it's tedious and error-prone even for an expert;\nI think it's out of the question for a novice installer. We ought to\noffer a canned script to do it that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 20:57:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n>\n> > Any modifications to shared pg_ tables would be a problem. Also, pg_log\n> > and pg_variable locking is not happening in there either, is it?\n>\n> Good point --- it wouldn't be just that database, but the whole\n> installation (data directory) that would have to be unused. You really\n> wouldn't dare even have a postmaster running, at least not in the same\n> data directory. But that could be made safe by using a nonstandard\n> location for the data directory for regression.\n>\n> However, this is all beside the main point: we want the regress tests\n> to run in an environment as close as possible to the way Postgres is\n> normally used. 
The more we hack up a special regress-test environment,\n> the less the tests reflect reality.\n\n My new script actually does a\n\n make POSTGRESDIR=somewhere_else install\n PATH=\"somewhere_else/bin:$PATH\"\n\n Then it initializes a database below there and starts a\n postmaster with the resulting data directory, listening on\n port 65432.\n\n So I think it's very close to a real live setup, while\n another running \"production\" installation isn't affected at\n all.\n\n> Aside from the cross-backend synchronization issue, I forgot to point\n> out a really obvious benefit: doing it the current way allows the regress\n> tests to help check the backend's frontend communication code, and\n> libpq, and psql itself. We'd need some other way of testing all that\n> code if we switched to a standalone-backend regression test set.\n>\n> What I *would* like to see is more support for running regress tests on\n> a not-yet-installed build, so people can test a fresh build before they\n> blow away their working installation. This requires doing an initdb\n> into a temporary directory, starting a postmaster therein (using a\n> nonstandard port number), and running the tests there. This is doable\n> by hand, of course, but it's tedious and error-prone even for an expert;\n> I think it's out of the question for a novice installer. We ought to\n> offer a canned script to do it that way.\n\n Right, right, right - I'm on it.\n\n The ugly detail I'm currently running into is that there\n already seems to be a concurrency problem I discovered with\n my testing. Occasionally I get this in the postmaster log\n for parallel-executing tests:\n\nERROR: Bad boolean external representation 'XXX'\nFATAL 1: SearchSysCache: recursive use of cache 10\nFATAL 2: elog: error in postmaster or backend startup, giving up!\npq_flush: send() failed: Broken pipe\nServer process (pid 9791) exited with status 512 at Fri Nov 19 03:17:09 1999\nTerminating any active server processes...\n\n It happens during the first parallel group of 11 tests. Not\n always, so it's timing-critical. Ouch.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 03:24:43 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "> ERROR: Bad boolean external representation 'XXX'\n> FATAL 1: SearchSysCache: recursive use of cache 10\n> FATAL 2: elog: error in postmaster or backend startup, giving up!\n> pq_flush: send() failed: Broken pipe\n> Server process (pid 9791) exited with status 512 at Fri Nov 19 03:17:09 1999\n> Terminating any active server processes...\n> \n> It happens during the first parallel group of 11 tests. Not\n> always, so it's timing-critical. Ouch.\n> \n\nNow that we know numeric is working, can we make the test run faster in\nthe default mode?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 21:42:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I'm actually doing some tests to see if the 'make check' would be\n> possible. I.e. doing a complete install with POSTGRESDIR\n> below the regress dir, running initdb with the libdir and\n> datadir pointing into there too, then starting a new postmaster\n> on a different port in background etc., etc., etc..\n> That would make it possible to do a complete check without\n> doing a regular install at all.\n\nSounds great! I was just griping elsewhere in this thread that we\nneeded that.\n\n> It's amusing how often we two have the same wishes or ideas.\n\nWell, they're so obviously the Right Thing ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 22:28:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests " }, { "msg_contents": "Bruce Momjian wrote:\n\n> > ERROR: Bad boolean external representation 'XXX'\n> > FATAL 1: SearchSysCache: recursive use of cache 10\n> > FATAL 2: elog: error in postmaster or backend startup, giving up!\n> > pq_flush: send() failed: Broken pipe\n> > Server process (pid 9791) exited with status 512 at Fri Nov 19 03:17:09 1999\n> > Terminating any active server processes...\n> >\n> > It happens during the first parallel group of 11 tests. Not\n> > always, so it's timing-critical. Ouch.\n\nHmmm,\n\n the first FATAL is emitted from catcache.c at line 988. I\n think that the cache->busy lives in shared memory and isn't\n protected against concurrent usage as it should be. Cache\n #10 is RELNAME. That really makes sense, because most of the\n tests I'm running in parallel now issue CREATE TABLE commands\n first.\n\n\n> Now that we know numeric is working, can we make the test run faster in\n> the default mode?\n\n It is already down to 100 digits after the decimal point. I\n don't want to lower it too much, but maybe 30 or 50 are\n enough too - no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 05:27:07 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "> > Now that we know numeric is working, can we make the test run faster in\n> > the default mode?\n> \n> It is already down to 100 digits after the decimal point. I\n> don't want to lower it too much, but maybe 30 or 50 are\n> enough too - no?\n\nIt is just taking longer than most other tests. Seems 30 would be fine.\nThey can always run the bigtest.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 23:36:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> The ugly detail I'm currently running into is that there\n> already seems to be a concurrency problem I discovered with\n> my testing. Occasionally I get this in the postmaster log\n> for parallel-executing tests:\n\n> ERROR: Bad boolean external representation 'XXX'\n> FATAL 1: SearchSysCache: recursive use of cache 10\n> FATAL 2: elog: error in postmaster or backend startup, giving up!\n> pq_flush: send() failed: Broken pipe\n> Server process (pid 9791) exited with status 512 at Fri Nov 19 03:17:09 1999\n> Terminating any active server processes...\n\n> It happens during the first parallel group of 11 tests. Not\n> always, so it's timing-critical. Ouch.\n\nIn other words, you've already exposed a bug! Right on!\n\nCommit the thing, so more eyes can look for the problem. I expect\nyou have found a pre-existing backend bug, not a problem in your\nnew regress test scaffold.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 23:48:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests " }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Bruce Momjian wrote:\n>> Now that we know numeric is working, can we make the test run faster in\n>> the default mode?\n\n> It is already down to 100 digits after the decimal point. I\n> don't want to lower it too much, but maybe 30 or 50 are\n> enough too - no?\n\nSince multiply and so on are presumably O(N^2), cutting the precision\nto 30 might cut the runtime by almost a factor of 10.\n\nJan probably has a better idea than the rest of us whether a test of\n100, or 30, or 10 digits is likely to expose bugs that would not be\nexposed by a test with less precision --- that depends on whether the\ncode has any internal behavioral dependence on the length of numeric\nvalues. The numeric test certainly is a lot slower than the others, so\nI think it would be a good idea to trim the precision as much as we can.\nAnyone who's actually touching the numeric code could and should run\nthe \"bigtest\", but the rest of us just want to know whether we've got\nporting problems. Seems like it shouldn't take 100-digit tests to\nexpose porting problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 00:50:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests " }, { "msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Bruce Momjian wrote:\n> >> Now that we know numeric is working, can we make the test run faster in\n> >> the default mode?\n>\n> > It is already down to 100 digits after the decimal point. I\n> > don't want to lower it too much, but maybe 30 or 50 are\n> > enough too - no?\n>\n> Since multiply and so on are presumably O(N^2), cutting the precision\n> to 30 might cut the runtime by almost a factor of 10.\n>\n> Jan probably has a better idea than the rest of us whether a test of\n> 100, or 30, or 10 digits is likely to expose bugs that would not be\n> exposed by a test with less precision --- that depends on whether the\n\n I created a new default numeric test using numbers of range\n 10,10 as input only. 
It doesn't save as much time as you\n expect, but you're right anyway.\n\n Will commit it later, together with the new parallel test\n suite.\n\n BTW: the parallel problems I encountered turned out to be\n nothing. Starting the postmaster with -D... isn't the same as\n setting the PGDATA environment variable - as it IMHO should\n be. It happened that I killed the test-install postmaster,\n started with -D pointing into my temp dirs, with SIGKILL. It\n corrupted the pg_control file of my default installation :-}\n\n And if you do not have an initialized data directory at the\n compiled-in default location, postmaster doesn't start up with\n -D at all.\n\n Vadim, could you take a look at it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 15:48:35 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql & regress tests" }, { "msg_contents": "On 1999-11-19, Jan Wieck mentioned:\n\n> I'm actually doing some tests to see if the 'make check' would be\n> possible. I.e. doing a complete install with POSTGRESDIR\n> below the regress dir, running initdb with the libdir and\n> datadir pointing into there too, then starting a new postmaster\n> on a different port in background etc., etc., etc..\n> \n> That would make it possible to do a complete check without\n> doing a regular install at all.\n\nWasn't that exactly what my conceptual script intended to do? Great to see\nsome other people thinking in the same direction. The main point is to\nhave the tests run before any installation is done. Sounds like we're on\ntrack . . .\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 Nov 1999 03:22:50 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql & regress tests" } ]
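For a sense of what the 'make check' setup in this thread buys: once the throwaway postmaster is listening on its nonstandard port, even a trivial libpq client exercises the whole frontend/backend path that Tom wants covered. A minimal sketch (port 65432 is taken from Jan's message; the database name and everything else are illustrative):

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn     *conn;
        PGresult   *res;
        int         ok;

        /* Talk to the temporary "make check" postmaster on its */
        /* nonstandard port, not to any regular installation.   */
        conn = PQsetdbLogin("localhost", "65432", NULL, NULL,
                            "regression", NULL, NULL);
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        res = PQexec(conn, "SELECT 1");
        ok = (PQresultStatus(res) == PGRES_TUPLES_OK);
        printf("test postmaster is %s\n", ok ? "alive" : "broken");

        PQclear(res);
        PQfinish(conn);
        return ok ? 0 : 1;
    }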
[ { "msg_contents": "Where is that file that makes an initdb required? We are supposed to\nchange that file when an initdb is needed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 19:22:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "pg version date file" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Where is that file that makes an initdb required? We are supposed to\n> change that file when an initdb is needed.\n\nsrc/include/catalog/catversion.h.\n\nI put it there on the theory that include/catalog/*.h changes would\nbe the most common reason for wanting to bump the serial number.\nBut maybe it belongs somewhere else...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 22:24:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg version date file " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Where is that file that makes an initdb required? We are supposed to\n> > change that file when an initdb is needed.\n> \n> src/include/catalog/catversion.h.\n> \n> I put it there on the theory that include/catalog/*.h changes would\n> be the most common reason for wanting to bump the serial number.\n> But maybe it belongs somewhere else...\n\nI just couldn't find it. I looked in include only because version.h.in\nwas there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 22:31:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg version date file" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> src/include/catalog/catversion.h.\n>> \n>> I put it there on the theory that include/catalog/*.h changes would\n>> be the most common reason for wanting to bump the serial number.\n>> But maybe it belongs somewhere else...\n\n> I just couldn't find it. I looked in include only because version.h.in\n> was there.\n\nYou could certainly make a case for moving it to src/include. If\nyou want to do that, I won't object.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 23:06:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg version date file " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> src/include/catalog/catversion.h.\n> >> \n> >> I put it there on the theory that include/catalog/*.h changes would\n> >> be the most common reason for wanting to bump the serial number.\n> >> But maybe it belongs somewhere else...\n> \n> > I just couldn't find it. I looked in include only because version.h.in\n> > was there.\n> \n> You could certainly make a case for moving it to src/include. If\n> you want to do that, I won't object.\n\nNo, seems fine there.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 23:16:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg version date file" } ]
[ { "msg_contents": "In writing the book, I see the serious limitation that there is no way\nin psql to access the most recently inserted oid. Without it, there\nseems to be no way to use the oid value as a foreign key in another\ntable.\n\nShould I add a function to return the most recently assigned oid to the\nbackend, or is there a better way?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 20:12:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Getting OID in psql of recent insert" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> In writing the book, I see the serious limitation that there is no way\n> in psql to access the most recently inserted oid. Without it, there\n> seems to be no way to use the oid value as a foreign key in another\n> table.\n\n> Should I add a function to return the most recently assigned oid to the\n> backend, or is there a better way?\n\nI'm not sure why, but a backend-side function seems like the wrong way\nto approach it. I guess I'm worried that the state would be too\nvolatile on the backend side. (Example: if you use the hypothetical\nlastoid() function in an SQL query that causes triggers or rules to\nbe invoked behind your back, those triggers/rules could do new inserts.\nWill lastoid() still return the \"right\" value by the time it gets\nexecuted?)\n\nIt'd certainly be easy enough for psql to save off the OID anytime it\ngets an \"INSERT nnn\" command response. The missing link is to invent\na way for a psql script to access that value and insert it into\nsubsequent SQL commands.\n\nIf you want to attack this, I'd suggest thinking a little larger than\njust the last-OID problem. I'd like to be able to save off both\ninsertion OIDs and values extracted by SELECTs into named variables\nof some sort, and then insert those values into as many later commands\nas I want. Right now there's no way to do any such thing in a psql\nscript; you have to move up a level of difficulty into ecpg or pgtcl\nor even C code if your application needs this. Plain psql scripts\nwould become substantially more powerful if psql had a capability\nlike this.\n\nOTOH: we shouldn't ask psql to do everything under the sun. I'd\ncertainly think that it'd be unreasonable to try to do conditional\nevaluation or looping in psql scripts, for instance. Maybe the right\nanswer is to teach people a little bit about using honest-to-goodness\nscripting languages when their applications reach this level of\ncomplexity. How much daylight is there between needing script\nvariables and needing control flow, do you think?\n\n\t\t\tregards, tom lane\n\nPS: not relevant to your main point, but to your example: I think it's\na real bad idea to teach people to use OIDs as foreign keys. That'll\ncreate all kinds of trouble when it comes time to dump/reload their\ndatabase. Better to tell them to use SERIAL columns as keys. Not so\nincidentally, we have currval() already...\n", "msg_date": "Thu, 18 Nov 1999 22:45:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert " }, { "msg_contents": "> If you want to attack this, I'd suggest thinking a little larger than\n> just the last-OID problem. 
I'd like to be able to save off both\n> insertion OIDs and values extracted by SELECTs into named variables\n> of some sort, and then insert those values into as many later commands\n> as I want. Right now there's no way to do any such thing in a psql\n> script; you have to move up a level of difficulty into ecpg or pgtcl\n> or even C code if your application needs this. Plain psql scripts\n> would become substantially more powerful if psql had a capability\n> like this.\n\nYes, I understand. The new psql has the ability to have variables, so\nthis seems like a natural use for this:\n\n\ttestdb=> \\set foo bar\n \nMaybe we could have:\n\n\ttestdb=> \\set foo lastoid\n\ntestdb=> \\echo \"foo is now ${foo}.\"\n\n\nSeems those variables are not available in queries, though.\n\n> OTOH: we shouldn't ask psql to do everything under the sun. I'd\n> certainly think that it'd be unreasonable to try to do conditional\n> evaluation or looping in psql scripts, for instance. Maybe the right\n> answer is to teach people a little bit about using honest-to-goodness\n> scripting languages when their applications reach this level of\n> complexity. How much daylight is there between needing script\n> variables and needing control flow, do you think?\n\nI think I agree, but a powerful psql interface is very important for any\ndatabase.\n\n\n> PS: not relevant to your main point, but to your example: I think it's\n> a real bad idea to teach people to use OIDs as foreign keys. That'll\n> create all kinds of trouble when it comes time to dump/reload their\n> database. Better to tell them to use SERIAL columns as keys. Not so\n> incidentally, we have currval() already...\n\nOK, I am dealing with this in the book. What are oids good for then?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 23:16:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert" }, { "msg_contents": "On Thu, 18 Nov 1999, Bruce Momjian wrote:\n> > If you want to attack this, I'd suggest thinking a little larger than\n> > just the last-OID problem. I'd like to be able to save off both\n> > insertion OIDs and values extracted by SELECTs into named variables\n> > of some sort, and then insert those values into as many later commands\n> > as I want. Right now there's no way to do any such thing in a psql\n> > script; you have to move up a level of difficulty into ecpg or pgtcl\n> > or even C code if your application needs this. Plain psql scripts\n> > would become substantially more powerful if psql had a capability\n> > like this.\n> \n\nwe talked about this a few weeks ago as users... even those of us using C or\nhigher level scripting languages agreed it would be nice to be able to have\narbitrary values that are the result of an insert/update/delete able to be\nreturned, without a subsequent select. if this made it into postgres, i think\nyou'd have many happy users =)\n\n> OK, I am dealing with this in the book. What are oids good for then?\n> \n\ni can tell you what i use them for as someone who works with postgres daily...\ni'm not sure if this was what they were intended for.. but =)\n\nonce inserted, a row keeps its oid. so, when performing complex selects, i'll\noften grab the oid too... 
do some tests on the returned values, and if an action\nis appropriate on that row, i reference it by its oid. the only chance of this\nfailing is if the database is dumped then restored between the select and the\nupdate (not gonna happen, as the program requires the database available for\nexecution)... using the oid this way, it's often simpler and faster to update a\nknown row, especially when the initial select involved many fields.\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Thu, 18 Nov 1999 21:48:54 -0700", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert" }, { "msg_contents": "\"Aaron J. Seigo\" <[email protected]> writes:\n> On Thu, 18 Nov 1999, Bruce Momjian wrote:\n>> OK, I am dealing with this in the book. What are oids good for then?\n\n> once inserted, a row keeps its oid. so, when performing complex\n> selects, i'll often grab the oid too... do some tests on the returned\n> values, and if an action is appropriate on that row, i reference it by\n> its oid. the only chance of this failing is if the database is dumped\n> then restored between the select and the update (not gonna happen, as\n> the program requires the database available for execution)... using\n> the oid this way, it's often simpler and faster to update a known row,\n> especially when the initial select involved many fields.\n\nYes, I use 'em the same way. I think an OID is kind of like a pointer\nin a C program: good for fast, unique access to an object within the\ncontext of the execution of a particular application (and maybe not\neven that long). You don't write pointers into files to be used again\nby other programs, though, and in the same way an OID isn't a good\ncandidate for a long-lasting reference from one table to another.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 00:40:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert " }, { "msg_contents": "> \"Aaron J. Seigo\" <[email protected]> writes:\n> > On Thu, 18 Nov 1999, Bruce Momjian wrote:\n> >> OK, I am dealing with this in the book. What are oids good for then?\n> \n> > once inserted, a row keeps its oid. so, when performing complex\n> > selects, i'll often grab the oid too... do some tests on the returned\n> > values, and if an action is appropriate on that row, i reference it by\n> > its oid. the only chance of this failing is if the database is dumped\n> > then restored between the select and the update (not gonna happen, as\n> > the program requires the database available for execution)... using\n> > the oid this way, it's often simpler and faster to update a known row,\n> > especially when the initial select involved many fields.\n> \n> Yes, I use 'em the same way. I think an OID is kind of like a pointer\n> in a C program: good for fast, unique access to an object within the\n> context of the execution of a particular application (and maybe not\n> even that long). You don't write pointers into files to be used again\n> by other programs, though, and in the same way an OID isn't a good\n> candidate for a long-lasting reference from one table to another.\n\nMy feeling was that oid's are fine for joins in cases where the number\nis not visible to the user, because they are not sequential. 
Does that\nmake sense, or is that too broad a usage?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 07:36:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert" }, { "msg_contents": "On 1999-11-18, Tom Lane mentioned:\n\n> It'd certainly be easy enough for psql to save off the OID anytime it\n> gets an \"INSERT nnn\" command response. The missing link is to invent\n> a way for a psql script to access that value and insert it into\n> subsequent SQL commands.\n\nOkay, I guess I'm way ahead of everyone here. It is in fact only a matter\nof adding a few lines to save the oid in a variable, and all the\ninfrastructure for doing this is already present. In fact, I was going to\ndo this in the next few days.\n\ntestdb=> \\set singlestep on\ntestdb=> \\set sql_interpol '#'\ntestdb=> \\set foo 'pg_class'\ntestdb=> select * from #foo#;\n***(Single step mode: Verify query)**************\nQUERY: select * from pg_class\n***(press return to proceed or enter x and return to\ncancel)********************\nx\ntestdb=>\n\n> If you want to attack this, I'd suggest thinking a little larger than\n> just the last-OID problem. I'd like to be able to save off both\n> insertion OIDs and values extracted by SELECTs into named variables\n> of some sort, and then insert those values into as many later commands\n> as I want. Right now there's no way to do any such thing in a psql\n> script; you have to move up a level of difficulty into ecpg or pgtcl\n> or even C code if your application needs this. Plain psql scripts\n> would become substantially more powerful if psql had a capability\n> like this.\n\nHmm, saving the SELECT results in a variable sounds like a great\nidea. I'll work on that. But in general, all the framework for this sort\nof thing is already there as you see.\n\n> OTOH: we shouldn't ask psql to do everything under the sun. I'd\n> certainly think that it'd be unreasonable to try to do conditional\n> evaluation or looping in psql scripts, for instance. Maybe the right\n\nI actually had (simple) conditional expressions on my list, but loops are\nnot possible in the current design. Since I just redesigned it, I am quite\nhesitant to changing the design again.\n\n> answer is to teach people a little bit about using honest-to-goodness\n> scripting languages when their applications reach this level of\n> complexity. How much daylight is there between needing script\n> variables and needing control flow, do you think?\n\nGood question. It has been bothering me all along. The best answer to this\nis probably an interactive interpreter of some procedural language we\noffer. (I recall Oracle has their frontend that way.) Adding any more\ncomplex functionality to psql will probably cripple it beyond recognition.\nYou can only go so far with hand-written parsers acting on poorly\nspecified rules consisting of tons of backslashes. 
:)\n\nAnyway, good to see that all this \"thinking big\" might have had a point\nafter all.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 Nov 1999 03:38:35 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert " }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 1999-11-18, Tom Lane mentioned:\n> \n> > It'd certainly be easy enough for psql to save off the OID anytime it\n> > gets an \"INSERT nnn\" command response. The missing link is to invent\n> > a way for a psql script to access that value and insert it into\n> > subsequent SQL commands.\n> \n> Okay, I guess I'm way ahead of everyone here. It is in fact only a matter\n> of adding a few lines to save the oid in a variable, and all the\n> infrastructure for doing this is already present. In fact, I was going to\n> do this in the next few days.\n> \n> testdb=> \\set singlestep on\n> testdb=> \\set sql_interpol '#'\n> testdb=> \\set foo 'pg_class'\n> testdb=> select * from #foo#;\n> ***(Single step mode: Verify query)**************\n> QUERY: select * from pg_class\n> ***(press return to proceed or enter x and return to\n> cancel)********************\n> x\n> testdb=>\n\nThis is exactly what I was hoping you would say.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Nov 1999 19:55:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Getting OID in psql of recent insert" } ]
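(A sketch tying the thread together: Peter's '#'-interpolation plus the currval() route Tom recommends covers the use case without waiting for direct OID capture. The customer/orders tables here are invented, and the sequence name just follows the usual SERIAL convention of table_column_seq; saving the INSERT's OID itself into a variable was still only a proposal at this point.)

	testdb=> \set sql_interpol '#'
	testdb=> \set tab 'customer'
	testdb=> INSERT INTO #tab# (name) VALUES ('fleet street');
	testdb=> -- reuse the generated key in a later command:
	testdb=> INSERT INTO orders (customer_id) VALUES (currval('customer_id_seq'));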
[ { "msg_contents": "Hi all,\n\nWe are converting Oracle system to PostgreSQL.\nBut some queries takes very looong time.\n\nI have tried variously and made a typical example\nin PostgreSQL 6.5.2 .\nIt's already known ?\n\nselect count(*) from a,b where a.id1=b.id1;\n\nreturns immeidaitely and EXPLAIN shows\n\nAggregate (cost=3380.24 rows=24929 width=8)\n -> Hash Join (cost=3380.24 rows=24929 width=8)\n -> Seq Scan on a (cost=1551.41 rows=27558 width=4)\n -> Hash (cost=604.21 rows=8885 width=4)\n -> Seq Scan on b (cost=604.21 rows=8885 width=4)\n\nBut\n\nselect count(*) from a,b where a.id1=b.id1 and a.id2=b.id2;\n\ntakes very looong time.\nEXPLAIN shows\n\nAggregate (cost=3382.24 rows=8195 width=12)\n -> Hash Join (cost=3382.24 rows=8195 width=12)\n -> Seq Scan on a (cost=1551.41 rows=27558 width=6)\n -> Hash (cost=604.21 rows=8885 width=6)\n -> Seq Scan on b (cost=604.21 rows=8885 width=6)\n\nQuery plans are almost same.\nWhy is there a such difference ?\n\nI examined an output by EXPLAIN VERBOSE and found that\nthe 1st query uses id1 as its hashkey and 2nd query uses id2\nas its hashkey.\n\nSo I tried the following query.\n\nselect count(*) from a,b where a.id2=b.id2 and a.id1=b.id1;\n\nThis returns immediately as I expected.\n\nid1-s are type int4 and their disbursions are 0.00010181/ \n0.000203409.\nid2-s are type int2 and their disbursions are 0.523526/\n0.328712 .\nIs id2 suitable for a hashkey ?\n\nI can't try it in current now,sorry.\nBut seems the following code in createplan.c is unchanged.\n\n /* Now the righthand op of the sole hashclause is the inner hash key. */\n innerhashkey = get_rightop(lfirst(hashclauses)); \n\nComments ?\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 19 Nov 1999 10:14:56 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hash Join is very slooow in some cases" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> select count(*) from a,b where a.id1=b.id1;\n> returns immeidaitely ...\n> But\n> select count(*) from a,b where a.id1=b.id1 and a.id2=b.id2;\n> takes very looong time.\n> I examined an output by EXPLAIN VERBOSE and found that\n> the 1st query uses id1 as its hashkey and 2nd query uses id2\n> as its hashkey.\n\nYes, and since id2 has terrible disbursion, most of the hashtable\nentries end up in a small number of hash buckets, resulting in\nan unexpectedly large number of comparisons done for each outer\ntuple. I've seen this effect before.\n\nI have a TODO item to make the optimizer pay attention to disbursion\nwhen estimating the cost of a hashjoin. That would cause it to make\nthe right choice of key in this example. Not done yet though :-(.\nFeel free to jump in if you need it today...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 22:51:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hash Join is very slooow in some cases " } ]
[ { "msg_contents": "We currently only allow the words PRIMARY KEY on a SERIAL column. Is\nthere a reason we don't allow PRIMARY KEY on an integer field? Seems it\nshould be allowed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 22:26:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Primary key requires SERIAL" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We currently only allow the words PRIMARY KEY on a SERIAL column.\n\nSay what? There are ColConstraintElem and ConstraintElem productions\nfor PRIMARY KEY ... are they broken?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 1999 23:14:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We currently only allow the words PRIMARY KEY on a SERIAL column.\n> \n> Say what? There are ColConstraintElem and ConstraintElem productions\n> for PRIMARY KEY ... are they broken?\n> \n> \t\t\tregards, tom lane\n> \n\nOh, I see it now. The grammer seems to only support it in SERIAL, but I\nsee it works now. I guess i am surprised SERIAL PRIMARY creates the\nindex and sequence, while INTEGER PRIMARY only creates the index.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 23:19:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" }, { "msg_contents": "At 10:26 PM 11/18/99 -0500, Bruce Momjian wrote:\n>We currently only allow the words PRIMARY KEY on a SERIAL column. Is\n>there a reason we don't allow PRIMARY KEY on an integer field? Seems it\n>should be allowed.\n\nPresumably the only reason to disallow this is to make life difficult\nfor those of us who want to port Oracle-based applications.\n\nGiven that Oracle represents a huge slice of the established market,\nand given that in the past Postgres has been an \"Oracle-friendly\" db in\nterms of dialectical support (nextval and currval on sequences being\ngermane to the subject at hand) one can only presume that the Postgres\ndevelopment group wants to make porting of Oracle-ish systems difficult.\n\nWhy?\n\n\"Currently\" must mean the 7.0-in-work because 6.5.1 supports primary\nkey on integer just fine.\n\nWhy support \"serial\" and not support \"primary key on integer\" when\nOracle rules the roost, not Sybase? \n\nIf your statement's true, this is a horrible shift in direction for\nthe PostgreSQL project.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 18 Nov 1999 20:34:20 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" }, { "msg_contents": "At 11:19 PM 11/18/99 -0500, Bruce Momjian wrote:\n>> Bruce Momjian <[email protected]> writes:\n>> > We currently only allow the words PRIMARY KEY on a SERIAL column.\n>> \n>> Say what? 
There are ColConstraintElem and ConstraintElem productions\n>> for PRIMARY KEY ... are they broken?\n>> \n>> \t\t\tregards, tom lane\n>> \n>\n>Oh, I see it now. The grammar seems to only support it in SERIAL, but I\n>see it works now. I guess I am surprised SERIAL PRIMARY creates the\n>index and sequence, while INTEGER PRIMARY only creates the index.\n\nOops, I guess I blew it by responding to a post by Bruce assuming he\nwas right.\n\nPostgres supports a quasi-serial type by creating an index and\nsequence (while Sybase supports it more transparently)\n\nPostgres REALLY supports sequences much like Oracle (and others?\nI don't know, my DB knowledge is very sketchy). In Oracle, if\nyou define a primary key of type integer and want to sequence\nit, you define a sequence and use \"sequence_name.nextval\" and\n\"sequence_name.currval\". This is very much like \"nextval\" and\n\"currval\" in Postgres, and I presume no accident.\n\nAnd in Oracle you create the sequence by hand - just like you do\nin Postgres.\n\nPersonally, I think maintaining an \"Oracle-ish\" framework is wise,\nfor the simple selfish reason that I'm interested in porting \nOracle-dependent SQL to Postgres.\n\nIf being \"Oracle-ish\" is still a goal (it was once a goal of at\nleast some of the implementors, it's obvious) then generating\nthe sequence just makes porting more difficult.\n\nActually, I think the inclusion of \"serial\" as a more integrated\ntype and leaving primary key stuff alone for existing types is\nwhat makes sense. You could provide a higher level of Sybase\nportability without messing up us Oracle-derived folk.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n  Nature photos, on-line guides, Pacific Northwest\n  Rare Bird Alert Service and other goodies at\n  http://donb.photo.net.\n", "msg_date": "Thu, 18 Nov 1999 20:43:53 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" }, { "msg_contents": "> At 11:19 PM 11/18/99 -0500, Bruce Momjian wrote:\n> >> Bruce Momjian <[email protected]> writes:\n> >> > We currently only allow the words PRIMARY KEY on a SERIAL column.\n> >> \n> >> Say what? There are ColConstraintElem and ConstraintElem productions\n> >> for PRIMARY KEY ... are they broken?\n> >> \n> >> \t\t\tregards, tom lane\n> >> \n> >\n> >Oh, I see it now. The grammar seems to only support it in SERIAL, but I\n> >see it works now. I guess I am surprised SERIAL PRIMARY creates the\n> >index and sequence, while INTEGER PRIMARY only creates the index.\n> \n> Oops, I guess I blew it by responding to a post by Bruce assuming he\n> was right.\n\nSeems I can't rely on understanding what we support by just looking at\ngram.y.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 23:58:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" }, { "msg_contents": "> > >> > We currently only allow the words PRIMARY KEY on a SERIAL column.\n> > Oops, I guess I blew it by responding to a post by Bruce assuming he\n> > was right.\n> Seems I can't rely on understanding what we support by just looking at\n> gram.y.\n\nYou can, if you read carefully :)\n\nThe grammar allows *only* PRIMARY KEY on the SERIAL column\ndeclaration, since the other keywords or clauses are either redundant\nor nonsensical in the context of a serial column. As others have\npointed out, PRIMARY KEY is also allowed elsewhere.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 Nov 1999 06:32:14 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The grammar allows *only* PRIMARY KEY on the SERIAL column\n> declaration, since the other keywords or clauses are either redundant\n> or nonsensical in the context of a serial column.\n\nJust to put another item on your todo list ;-) ...\n\nI think it's poor practice to try to enforce such a restriction via\nthe grammar, because that way you cannot generate an error more\nspecific than \"parse error near FOO\". It'd be better to allow the\nsame ColQualifier for SERIAL as for any other column type, and then to\nput sanity checks in analyze.c that would complain about conflicting\nspecifications. We have, or should have, most of those checks in\nplace already to catch conflicting ColQualifier entries for a plain\ncolumn type (eg, \"foo int4 NULL NOT NULL\"). Also, I do not like\ngenerating hard errors for specifications that are merely redundant\n(\"foo SERIAL NOT NULL\"); is there any basis in the SQL spec for\nrefusing such constructs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 01:48:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL " }, { "msg_contents": "> I think it's poor practice to try to enforce such a restriction via\n> the grammar, because that way you cannot generate an error more\n> specific than \"parse error near FOO\". It'd be better to allow the\n> same ColQualifier for SERIAL as for any other column type, and then to\n> put sanity checks in analyze.c that would complain about conflicting\n> specifications. We have, or should have, most of those checks in\n> place already to catch conflicting ColQualifier entries for a plain\n> column type (eg, \"foo int4 NULL NOT NULL\"). Also, I do not like\n> generating hard errors for specifications that are merely redundant\n> (\"foo SERIAL NOT NULL\"); is there any basis in the SQL spec for\n> refusing such constructs?\n\nBasis? Basis?? Since SERIAL is an extension, there is not anything\ndefined explicitly. And SQL tends to be a context-sensitive language\n(hmm, what's the term for that?) so it does things in different ways\nall over the place; it's not very self-consistent. What *should*\nhappen with a declaration like \"foo int NOT NULL NOT NULL\"? 
One could\nargue that the backend should just do it, or perhaps should reject\nthis as a possibly corrupted declaration.\n\nWhen I first implemented SERIAL, I'm pretty sure I would have had\ntrouble checking for conflicting qualifiers. But maybe now all the\npieces are there to do it right. Will look at it...\n\nAnyway, I agree with your points, and will put this on my ToDo. btw, I\nstill have an item about the parser swallowing multiple SERIAL or\nPRIMARY KEY declarations (don't remember which right now); will get to\nthat also.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 Nov 1999 15:32:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Primary key requires SERIAL" } ]
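(The behavior this thread settles on, condensed into one sketch; the table names are invented, and the sequence name simply follows the convention SERIAL itself uses:)

	-- SERIAL shorthand: creates the unique index AND the sequence
	CREATE TABLE t1 (id SERIAL PRIMARY KEY);

	-- Oracle-style porting: PRIMARY KEY on integer creates only the index;
	-- the sequence is created and driven by hand via nextval/currval
	CREATE TABLE t2 (id INTEGER PRIMARY KEY);
	CREATE SEQUENCE t2_id_seq;
	INSERT INTO t2 (id) VALUES (nextval('t2_id_seq'));
	SELECT currval('t2_id_seq');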
[ { "msg_contents": "Here are the major open issues for 7.0 that I remember:\n\n\tForeign Keys - Jan\n\tWAL - Vadim\n\tFunction args - Tom\n\tSystem indcxes - Bruce\n\t\nOuter joins and new multi-query parse tree are questionable items for\n7.0.\n\nIs this currect?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Nov 1999 22:43:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 status request" }, { "msg_contents": "> Here are the major open issues for 7.0 that I remember:\n> Foreign Keys - Jan\n> WAL - Vadim\n> Function args - Tom\n> System indcxes - Bruce\n> Outer joins and new multi-query parse tree are questionable items for\n> 7.0.\n\nYou might include \"join syntax\", which will be ready even if outer\njoins are not.\n\nAlso, didn't some folks express concern that indices on system tables\nwould make the backend more fragile? Did we resolve that issue?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 Nov 1999 06:35:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here are the major open issues for 7.0 that I remember:\n> \tForeign Keys - Jan\n> \tWAL - Vadim\n> \tFunction args - Tom\n> \tSystem indcxes - Bruce\n> Outer joins and new multi-query parse tree are questionable items for\n> 7.0.\n\nI have a bunch of optimizer tweaking that I'd like to finish before 7.0,\nbut perhaps that doesn't qualify as a major open issue. There's no\nsingle item there that I would rank as \"must fix or don't ship\"; yet\nI feel we need to make some progress in that area.\n\nI think there are also a lot of unresolved questions about interlocking\nand updating of the catalog caches and relcache. These might be\nmust-fix items. IIRC, Hiroshi is pretty concerned about that area...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 02:02:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request " }, { "msg_contents": "> > Here are the major open issues for 7.0 that I remember:\n> > Foreign Keys - Jan\n> > WAL - Vadim\n> > Function args - Tom\n> > System indcxes - Bruce\n> > Outer joins and new multi-query parse tree are questionable items for\n> > 7.0.\n> \n> You might include \"join syntax\", which will be ready even if outer\n> joins are not.\n> \n> Also, didn't some folks express concern that indices on system tables\n> would make the backend more fragile? Did we resolve that issue?\n\nWe have indexes on most system tables and it isn't a problem currently.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 07:28:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Here are the major open issues for 7.0 that I remember:\n[snip] \n> Is this currect?\n\nIs large tuple support not open at this time? I know it is one of the\nthings that has been wanted for ages. 
I know a discussion about it\nhappened not long ago, but no one stepped up to the plate on it.\n\n--\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 19 Nov 1999 12:24:51 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > > Here are the major open issues for 7.0 that I remember:\n> > > Foreign Keys - Jan\n> > > WAL - Vadim\n> > > Function args - Tom\n> > > System indcxes - Bruce\n> > > Outer joins and new multi-query parse tree are questionable items for\n> > > 7.0.\n> >\n> > You might include \"join syntax\", which will be ready even if outer\n> > joins are not.\n> >\n> > Also, didn't some folks express concern that indices on system tables\n> > would make the backend more fragile? Did we resolve that issue?\n>\n> We have indexes on most system tables and it isn't a problem currently.\n\n It is, because a corrupted index on a system table cannot be\n corrected by drop/create, as a user defined index could be.\n\n I don't know why and when reindexdb disappeared, but that\n script was a last resort for exactly the situation of a\n corrupted system index. Let me take a look if this\n functionality could easily be recreated.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 20:55:14 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "On Fri, Nov 19, 1999 at 08:55:14PM +0100, Jan Wieck wrote:\n> \n> It is, because a corrupted index on a system table cannot be\n> corrected by drop/create, as a user defined index could be.\n> \n> I don't know why and when reindexdb disappeared, but that\n> script was a last resort for exactly the situation of a\n> corrupted system index. Let me take a look if this\n> functionality could easily be recreated.\n> \n\nJan,\n\nI submitted a very small patch to dumpdb that creates SQL that will\nreindex the database. It's then trivial to then redirect that output to\npgsql on UNIX. I run into this problem frequently, so I wanted\nto automate the process. I never saw a reply to my post on the list,\nso I wonder if it made it. \n\nI'm not sure how reindexdb worked, but if it just generated SQL based of \nthe indexes in the database it would make sense to only have the SQL \ngeneration in one common place instead of having it in dumpdb and reindexdb.\nTwo branches of nearly identical code would be a pain to maintain. \n\n-brian\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Fri, 19 Nov 1999 15:41:49 -0600", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Here are the major open issues for 7.0 that I remember:\n> [snip] \n> > Is this currect?\n> \n> Is large tuple support not open at this time? I know it is one of the\n> things that has been wanted for ages. I know a discussion about it\n> happened not long ago, but no one stepped up to the plate on it.\n\nNo one has claimed it. 
It may be too hard.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 16:49:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "\nOn 19-Nov-99 Bruce Momjian wrote:\n> Here are the major open issues for 7.0 that I remember:\n> \n> Foreign Keys - Jan\n> WAL - Vadim\n> Function args - Tom\n> System indexes - Bruce\n> \n> Outer joins and new multi-query parse tree are questionable items for\n> 7.0.\n> \n> Is this correct?\n\nThere was talk about purging functions from libpq. Is there any consensus\non which functions may be going away?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Fri, 19 Nov 1999 16:56:59 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 7.0 status request" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Bruce Momjian wrote:\n>> Here are the major open issues for 7.0 that I remember:\n\n> Is large tuple support not open at this time?\n\nI think Bruce was trying to list the work items that are both large\nand fairly likely to be done before 7.0. (I suspect his motivation\nis to figure out what changes he should allow for while writing his\nbook...)\n\nNo one has spoken up and said \"I will work on large tuples for 7.0\",\nso it's not on his list. It could still happen though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 17:14:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request " }, { "msg_contents": "> I submitted a very small patch to dumpdb that creates SQL that will\n> reindex the database. It's then trivial to then redirect that output to\n> pgsql on UNIX. I run into this problem frequently, so I wanted\n> to automate the process. I never saw a reply to my post on the list,\n> so I wonder if it made it.\n>\n> I'm not sure how reindexdb worked, but if it just generated SQL based of\n> the indexes in the database it would make sense to only have the SQL\n> generation in one common place instead of having it in dumpdb and reindexdb.\n> Two branches of nearly identical code would be a pain to maintain.\n\n It's a different approach. And recreating system catalog\n indices cannot work through the regular psql interface. So\n your pgdump enhancement will never be able to do that.\n\n You need to be in bootstrap processing mode (the one the\n system is running in while initdb) to drop or create indices\n on the system catalog tables. Therefore the postmaster must\n NOT be running and you have a (very limited) interface to the\n bootstrapping postgres process. Thus you'll have to talk the\n *.bki.source dialect to issue commands.\n\n I already made some tests, but they all corrupted more than\n they fixed :-). Seems the semantics of the Postgres 4.2\n reindexdb have been hit by changes in index handling since\n1994. 
Not really surprising to me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 23:17:39 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> Lamar Owen <[email protected]> writes:\n> > Bruce Momjian wrote:\n> >> Here are the major open issues for 7.0 that I remember:\n> \n> > Is large tuple support not open at this time?\n> \n> I think Bruce was trying to list the work items that are both large\n> and fairly likely to be done before 7.0. (I suspect his motivation\n> is to figure out what changes he should allow for while writing his\n> book...)\n\nActually, the motivation was to just get a heads-up from everyone that\nwe are on track for 7.0. If someone wanted to cancel their offer of\nadding a feature, that is usually the way we hear about it. That way,\nwe don't find out just before beta that someone has decided to abandon a\nfeature addition.\n\nThis doesn't mean they will actually complete the addition, but it does\nmean that at this time they are going to attempt to complete it for 7.0.\n\nAs we get closer, the dreaded Open Items list appears to marshall forces\nto get as many bugs fixed/features as possible, though Tom, your\npresence is making that unnecessary because you seem to fix the bugs\nas soon as they come up. In the old days, the stuff stayed in my\nmailbox, and nearing beta, I would comb through for open bug reports,\nmake a list, and try and get as many fixed as possible.\n\nBTW, I am writing the book assuming no additions will be made for 7.0.\nIf they happen, I will rewrite. If I put a feature on paper, there is\nextra pressure to complete it, and I don't want to do that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 17:39:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "On 1999-11-19, Vince Vielhaber mentioned:\n\n> There was talk about purging functions from libpq. Is there any concensus\n> on which functions may be going away?\n\nAs far as I'm concerned, the printing functions only have a few more weeks\nto live ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 Nov 1999 03:19:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 7.0 status request" }, { "msg_contents": "On 1999-11-18, Bruce Momjian mentioned:\n\n> Here are the major open issues for 7.0 that I remember:\n\nDoes anyone ever plan on dropping any columns? I think this is a must-have\nfor the latest and greatest database system. I'm sure an experienced\nbackend hacker can put something together in a few days. If need be we\ncould just use a radical solution similar to cluster where you have to\nshut down the whole database and rebuild all indices afterwards, but\npleeeeease.\n\nOr this forgotten? Impossible? A feature?\n\nJust wondering . . 
.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 Nov 1999 03:56:19 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> >> Here are the major open issues for 7.0 that I remember:\n> I think Bruce was trying to list the work items that are both large\n> and fairly likely to be done before 7.0. (I suspect his motivation\n> is to figure out what changes he should allow for while writing his\n> book...)\n\nAh, that reminds me...\n\nI will do a \"type reunification\" for the date/time types, hopefully by\nthe end of December. I consider this fairly important and have been\nholding off for a major release to do it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 20 Nov 1999 03:13:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Friday, November 19, 1999 4:03 PM\n> To: Bruce Momjian\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] 7.0 status request \n> \n> I think there are also a lot of unresolved questions about interlocking\n> and updating of the catalog caches and relcache. These might be\n> must-fix items. IIRC, Hiroshi is pretty concerned about that area...\n>\n\nUnfortunately I don't have a reasonable solution for interlocking yet.\nFirst,row level locking for system tuples not only exclusive but\nalso shared will be needed. I couldn't find the way to implement\nshared row level locking now. \nMoreover I'm suspicious that this row level locking could be used\nfor parser/planner. Row level locking(at least in current implemen\ntation) is held till end of transaction.\n\nAs for cache invalidation(rollback),I may be able to do something.\nHowever new save point feature would need some change around\nit. I don't know how Vadim would change it. \n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n", "msg_date": "Tue, 23 Nov 1999 07:04:27 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 7.0 status request " }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> As for cache invalidation(rollback),I may be able to do something.\n> However new save point feature would need some change around\n> it. I don't know how Vadim would change it.\n\nI have no plans to implement it for 7.0...\n\nVadim\n", "msg_date": "Tue, 23 Nov 1999 09:50:24 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Tuesday, November 23, 1999 11:50 AM\n> To: Hiroshi Inoue\n> Cc: Tom Lane; Bruce Momjian; PostgreSQL-development\n> Subject: Re: [HACKERS] 7.0 status request\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > As for cache invalidation(rollback),I may be able to do something.\n> > However new save point feature would need some change around\n> > it. 
I don't know how Vadim would change it.\n> \n> I have no plans to implement it for 7.0...\n>\n\nDoesn't ROLLBACK to a save point rollback catalog cache ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Tue, 23 Nov 1999 12:20:57 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 7.0 status request" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > > As for cache invalidation(rollback),I may be able to do something.\n> > > However new save point feature would need some change around\n> > > it. I don't know how Vadim would change it.\n> >\n> > I have no plans to implement it for 7.0...\n> >\n> \n> Doesn't ROLLBACK to a save point rollback catalog cache ?\n\nThere will be no savepoints in 7.0...\nBut I'm going to rollback all changes made by xaction\non abort...\n\nVadim\n", "msg_date": "Tue, 23 Nov 1999 10:20:59 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> > Hiroshi Inoue wrote:\n> > > \n> > > As for cache invalidation(rollback),I may be able to do something.\n> > > However new save point feature would need some change around\n> > > it. I don't know how Vadim would change it.\n> > \n> > I have no plans to implement it for 7.0...\n> >\n> \n> Doesn't ROLLBACK to a save point rollback catalog cache ?\n\nI thought we flush cache on abort, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Nov 1999 22:38:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, November 23, 1999 12:38 PM\n> To: Hiroshi Inoue\n> Cc: Vadim Mikheev; Tom Lane; PostgreSQL-development\n> Subject: Re: [HACKERS] 7.0 status request\n> \n> \n> > > Hiroshi Inoue wrote:\n> > > > \n> > > > As for cache invalidation(rollback),I may be able to do something.\n> > > > However new save point feature would need some change around\n> > > > it. I don't know how Vadim would change it.\n> > > \n> > > I have no plans to implement it for 7.0...\n> > >\n> > \n> > Doesn't ROLLBACK to a save point rollback catalog cache ?\n> \n> I thought we flush cache on abort, no?\n>\n\nSorry,I don't remember well now.\nBut at least catalog cache is not rollbacked for the transaction\nitself in case of abort.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 23 Nov 1999 13:17:29 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 7.0 status request" } ]
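(A footnote to the reindexdb subthread above: the drop/create repair Jan mentions is only available for user-defined indexes, roughly as below with placeholder names; system catalog indices would need bootstrap mode, which is exactly why a reindexdb-style tool is missed.)

	DROP INDEX mytable_col_idx;
	CREATE INDEX mytable_col_idx ON mytable (col);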
[ { "msg_contents": "With PostgreSQL compiled with support for locales and multi-byte encoding:\n\ninitdb -e BIG5\n\n[start postmaster]\npsql template1\n\\dS causes a segmentation fault in the backend\n\n>From the log:\n\nStartTransactionCommand\nquery: SELECT usename, relname, relkind, relhasrules FROM pg_class, pg_user \nWHERE usesysid = relowner and ( relkind = 'r' OR relkind = 'i' OR relkind = \n'S') and relname ~ '^pg_' and (relkind != 'i' OR relname !~ '^xinx') ORDER BY \nrelname\nProcessQuery\n/usr/lib/postgresql/bin/postmaster: reaping dead processes...\n/usr/lib/postgresql/bin/postmaster: CleanupProc: pid 294 exited with status 11\n\n\nThis can be isolated to the pattern-matching operator:\n\ntemplate1=> select * from pg_class where relname ~ '^pg_' ;\npqReadData() -- backend closed the channel unexpectedly.\n\n------- Forwarded Message\n\nDate: Thu, 18 Nov 1999 19:48:39 +0800\nFrom: Chuan-kai Lin <[email protected]>\nTo: Oliver Elphick <[email protected]>\nSubject: Re: Bug#50388: Backend close client-server channel unexpectedly\n\nOn Thu, Nov 18, 1999 at 09:18:34AM +0000, Oliver Elphick wrote:\n> Something must have happened to the database as it was being created.\n> At the moment, I would put it down to cosmic rays or something.\n\nThrough a friend who specializes in metaphysics and astronomy, I\nhave traced down the exact cause of our problem: PostgreSQL does\nnot like BIG5 encoding. If you supply \"-e BIG5\" to initdb, the\nresulting database will be hosed. Plain and simple.\n\nThis looks like a tough one... somebody better notify the upstream\ndevelopers about this.\n\n- -- Chuan-kai Lin\n\n\n------- End of Forwarded Message\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"To show forth thy lovingkindness in the morning, and \n thy faithfulness every night.\" Psalms 92:2 \n\n\n", "msg_date": "Fri, 19 Nov 1999 10:19:49 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problems in 6.5.3 with Multi-Byte encoding" }, { "msg_contents": "I have another kind of problem related with MB and 6.5.3.\n\nOn FreeBSD 3.1:\n\nnature=> select u_address from users;\nu_address\n------------------------------\nО©╫О©╫О©╫О©╫О©╫О©╫, О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫О©╫ О©╫О©╫. 13\n(1 row)\n\nnature=> set client_encoding to 'WIN';\nSET VARIABLE\nnature=> select u_address from users;\nu_address\n------------------------------\nmOSKWA, uNIWERSITETSKIJ PR. 13\n(1 row)\n\nIt seems that 8-bit was stripped !\nI've checked locale on this machine - it works.\n\nMoreover, 8-bit stripped even if I do silly setting of client_encoding\nto native encoding:\n\nature=> set client_encoding to 'KOI8';\nSET VARIABLE\nnature=> select u_address from users;\nu_address\n------------------------------\nmOSKWA, uNIWERSITETSKIJ PR. 
13\n(1 row)\n\nIt's interesting that on Linux I have no problem.\nI have no time right now to test 6.5.2 but I recall I had no\nsuch problem (not 100% sure).\n\n\tRegards,\n\t\tOleg\n\n\nOn Fri, 19 Nov 1999, Oliver Elphick wrote:\n\n> Date: Fri, 19 Nov 1999 10:19:49 +0000\n> From: Oliver Elphick <[email protected]>\n> To: [email protected], [email protected]\n> Cc: [email protected]\n> Subject: [BUGS] Problems in 6.5.3 with Multi-Byte encoding\n> \n> With PostgreSQL compiled with support for locales and multi-byte encoding:\n> \n> initdb -e BIG5\n> \n> [start postmaster]\n> psql template1\n> \\dS causes a segmentation fault in the backend\n> \n> >From the log:\n> \n> StartTransactionCommand\n> query: SELECT usename, relname, relkind, relhasrules FROM pg_class, pg_user \n> WHERE usesysid = relowner and ( relkind = 'r' OR relkind = 'i' OR relkind = \n> 'S') and relname ~ '^pg_' and (relkind != 'i' OR relname !~ '^xinx') ORDER BY \n> relname\n> ProcessQuery\n> /usr/lib/postgresql/bin/postmaster: reaping dead processes...\n> /usr/lib/postgresql/bin/postmaster: CleanupProc: pid 294 exited with status 11\n> \n> \n> This can be isolated to the pattern-matching operator:\n> \n> template1=> select * from pg_class where relname ~ '^pg_' ;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \n> ------- Forwarded Message\n> \n> Date: Thu, 18 Nov 1999 19:48:39 +0800\n> From: Chuan-kai Lin <[email protected]>\n> To: Oliver Elphick <[email protected]>\n> Subject: Re: Bug#50388: Backend close client-server channel unexpectedly\n> \n> On Thu, Nov 18, 1999 at 09:18:34AM +0000, Oliver Elphick wrote:\n> > Something must have happened to the database as it was being created.\n> > At the moment, I would put it down to cosmic rays or something.\n> \n> Through a friend who specializes in metaphysics and astronomy, I\n> have traced down the exact cause of our problem: PostgreSQL does\n> not like BIG5 encoding. If you supply \"-e BIG5\" to initdb, the\n> resulting database will be hosed. Plain and simple.\n> \n> This looks like a tough one... somebody better notify the upstream\n> developers about this.\n> \n> - -- Chuan-kai Lin\n> \n> \n> ------- End of Forwarded Message\n> \n> \n> -- \n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"To show forth thy lovingkindness in the morning, and \n> thy faithfulness every night.\" Psalms 92:2 \n> \n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 19 Nov 1999 14:19:00 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Problems in 6.5.3 with Multi-Byte encoding" }, { "msg_contents": "> I have another kind of problem related with MB and 6.5.3.\n> \n> On FreeBSD 3.1:\n[snip]\n> It's interesting that on Linux I have no problem.\n> I have no time right now to test 6.5.2 but I recall I had no\n> such problem (not 100% sure).\n\nI think no change has been made between 6.5.2 and 6.5.3 regarding MB.\nCan you send me the KOI8 data and WIN data that is supposed to be\ncorrect? 
I will check it out on a FreeBSD 3.2 machine.\n\n> > With PostgreSQL compiled with support for locales and multi-byte encoding:\n> > \n> > initdb -e BIG5\n\nNo, you cannot do this. If you want to use traditional Chinese, you\nhave to make a database with EUC_TW (initdb -e EUC_TW) then set the\nPGCLIENTENCODING environment variable to BIG5 on the client side. See\ndoc/README.mb for more details.\n\nMaybe initdb should reject BIG5. I will do it in for the next release.\n---\nTatsuo Ishii\n", "msg_date": "Sat, 20 Nov 1999 21:05:26 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] Problems in 6.5.3 with Multi-Byte encoding " }, { "msg_contents": "\nAnyone want to comment on this? BIG5 anyone?\n\n> With PostgreSQL compiled with support for locales and multi-byte encoding:\n> \n> initdb -e BIG5\n> \n> [start postmaster]\n> psql template1\n> \\dS causes a segmentation fault in the backend\n> \n> >From the log:\n> \n> StartTransactionCommand\n> query: SELECT usename, relname, relkind, relhasrules FROM pg_class, pg_user \n> WHERE usesysid = relowner and ( relkind = 'r' OR relkind = 'i' OR relkind = \n> 'S') and relname ~ '^pg_' and (relkind != 'i' OR relname !~ '^xinx') ORDER BY \n> relname\n> ProcessQuery\n> /usr/lib/postgresql/bin/postmaster: reaping dead processes...\n> /usr/lib/postgresql/bin/postmaster: CleanupProc: pid 294 exited with status 11\n> \n> \n> This can be isolated to the pattern-matching operator:\n> \n> template1=> select * from pg_class where relname ~ '^pg_' ;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \n> ------- Forwarded Message\n> \n> Date: Thu, 18 Nov 1999 19:48:39 +0800\n> From: Chuan-kai Lin <[email protected]>\n> To: Oliver Elphick <[email protected]>\n> Subject: Re: Bug#50388: Backend close client-server channel unexpectedly\n> \n> On Thu, Nov 18, 1999 at 09:18:34AM +0000, Oliver Elphick wrote:\n> > Something must have happened to the database as it was being created.\n> > At the moment, I would put it down to cosmic rays or something.\n> \n> Through a friend who specializes in metaphysics and astronomy, I\n> have traced down the exact cause of our problem: PostgreSQL does\n> not like BIG5 encoding. If you supply \"-e BIG5\" to initdb, the\n> resulting database will be hosed. Plain and simple.\n> \n> This looks like a tough one... somebody better notify the upstream\n> developers about this.\n> \n> - -- Chuan-kai Lin\n> \n> \n> ------- End of Forwarded Message\n> \n> \n> -- \n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"To show forth thy lovingkindness in the morning, and \n> thy faithfulness every night.\" Psalms 92:2 \n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 22:17:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems in 6.5.3 with Multi-Byte encoding" }, { "msg_contents": "> Anyone want to comment on this? 
BIG5 anyone?\n> \n> > With PostgreSQL compiled with support for locales and multi-byte encoding:\n> > \n> > initdb -e BIG5\n> > \n> > [start postmaster]\n> > psql template1\n> > \\dS causes a segmentation fault in the backend\n\nThe answer is:\n\nYou should not do initdb with BIG5, instead you could do:\n\ninitdb -e EUC_TW\n\nBIG5 and EUC_TW are both for traditional Chinese; only EUC_TW can be\nused as the preferred encoding for the backend, however. In the\nsetting above, you could use EUC_TW for the frontend side and BIG5 as\nwell. To use BIG5 in the frontend, you set the environment variable\nPGCLIENTENCODING to \"BIG5\" if you use psql or applications that are\nusing libpq. In this case, an automatic code conversion between BIG5\nand EUC_TW will be performed in the backend.\n\nI'll add the code to prevent BIG5 for initdb -e in the next release.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 30 Nov 1999 13:43:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems in 6.5.3 with Multi-Byte encoding" } ]
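(Tatsuo's answer as one recipe, with the database name a placeholder: initialize the backend with EUC_TW and select BIG5 per client, either through the environment as below or, once connected, with "set client_encoding to 'BIG5';" as in Oleg's session earlier.)

	initdb -e EUC_TW
	createdb mydb
	PGCLIENTENCODING=BIG5 psql mydb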
[ { "msg_contents": "Hi,\n\nI think I found a bug in pg_dump:\nI created a table like:\n\nCREATE TABLE ut (\n Azienda CHAR(16) NOT NULL,\n ragione_sociale VARCHAR(45) NOT NULL,\n indirizzo CHAR(40),\n inizio_attivita DATE DEFAULT CURRENT_DATE,\n fine_attivita DATE\n );\n\nand pg_dump modify the structure table as:\n\n\\connect - postgres\nCREATE TABLE \"ut\" (\n \"azienda\" character(16) NOT NULL,\n \"ragione_sociale\" character varying(45) NOT NULL,\n \"indirizzo\" character(40),\n \"inizio_attivita\" date DEFAULT date( 'current'::datetime + '0\nsec') NOT NULL,\n \"fine_attivita\" date);\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIf I try to recreate the table I have this:\nERROR: parser: parse error at or near \"'\"\n\nAny ideas ?\n\nJos�\n\n\n", "msg_date": "Fri, 19 Nov 1999 12:32:04 +0100", "msg_from": "jose soares <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump bug" }, { "msg_contents": "I confirm this bug for 6.5.3, Linux\n\n\tOleg\n\nOn Fri, 19 Nov 1999, jose soares wrote:\n\n> Date: Fri, 19 Nov 1999 12:32:04 +0100\n> From: jose soares <[email protected]>\n> To: hackers <[email protected]>\n> Subject: [HACKERS] pg_dump bug\n> \n> Hi,\n> \n> I think I found a bug in pg_dump:\n> I created a table like:\n> \n> CREATE TABLE ut (\n> Azienda CHAR(16) NOT NULL,\n> ragione_sociale VARCHAR(45) NOT NULL,\n> indirizzo CHAR(40),\n> inizio_attivita DATE DEFAULT CURRENT_DATE,\n> fine_attivita DATE\n> );\n> \n> and pg_dump modify the structure table as:\n> \n> \\connect - postgres\n> CREATE TABLE \"ut\" (\n> \"azienda\" character(16) NOT NULL,\n> \"ragione_sociale\" character varying(45) NOT NULL,\n> \"indirizzo\" character(40),\n> \"inizio_attivita\" date DEFAULT date( 'current'::datetime + '0\n> sec') NOT NULL,\n> \"fine_attivita\" date);\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> If I try to recreate the table I have this:\n> ERROR: parser: parse error at or near \"'\"\n> \n> Any ideas ?\n> \n> JosО©╫\n> \n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 19 Nov 1999 15:30:14 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I think I found a bug in pg_dump:\n> I created a table like:\n> \n> CREATE TABLE ut (\n> Azienda CHAR(16) NOT NULL,\n> ragione_sociale VARCHAR(45) NOT NULL,\n> indirizzo CHAR(40),\n> inizio_attivita DATE DEFAULT CURRENT_DATE,\n> fine_attivita DATE\n> );\n> \n> and pg_dump modify the structure table as:\n> \n> \\connect - postgres\n> CREATE TABLE \"ut\" (\n> \"azienda\" character(16) NOT NULL,\n> \"ragione_sociale\" character varying(45) NOT NULL,\n> \"indirizzo\" character(40),\n> \"inizio_attivita\" date DEFAULT date( 'current'::datetime + '0\n> sec') NOT NULL,\n> \"fine_attivita\" date);\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nStrange, but the query looks fine, and creates fine here in the current\nsources. We had a quoting bug in defaults at some point. What version\nare you using?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 07:35:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bugu" }, { "msg_contents": "jose soares <[email protected]> writes:\n> I think I found a bug in pg_dump:\n\nIt's not pg_dump's fault; it's just putting out what's in the system\ntables, and \"date( 'current'::datetime + '0 sec')\" is how the 6.5.*\nparser translates DEFAULT CURRENT_DATE. (Which is inconsistent with\nhow it translates CURRENT_DATE in other contexts, but nevermind.)\n\nThe failure actually comes up because the 6.5.* parser can't cope with\n\"x::y\"-style typecasts in default expressions; it translates them to\na syntactically invalid string. CAST ... AS doesn't work either, BTW.\n\nI have ripped out and rewritten all of that cruft for 7.0, which is why\nit works now (more or less). I dunno if it's worth trying to patch\naround this particular bug in the default-handling code in 6.5.*.\nIt's got so many others :-(\n\nCurrent sources still have a problem with this example, which is that\nthe default expression gets prematurely constant-folded:\n\tCREATE TABLE ut (d1 DATE DEFAULT CURRENT_DATE);\npg_dumps as\n\tCREATE TABLE \"ut\" (\n\t \"d1\" date DEFAULT '11-19-1999'::date);\nDrat. I thought I'd taken care of that class of problems...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 10:07:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug " }, { "msg_contents": "> I confirm this bug for 6.5.3, Linux\n\nHmm. I'm running a more-or-less current development tree, and don't\nsee a problem (on Linux also). Does someone want to track it down??\n\nAlthough Jose may have marked the line causing a problem, perhaps\nsomeone can more explicitly identify the offending syntax?\n\n - Thomas\n\n> > I think I found a bug in pg_dump:\n> > CREATE TABLE \"ut\" (\n> > \"azienda\" character(16) NOT NULL,\n> > \"ragione_sociale\" character varying(45) NOT NULL,\n> > \"indirizzo\" character(40),\n> > \"inizio_attivita\" date DEFAULT date( 'current'::datetime + '0\n> > sec') NOT NULL,\n> > \"fine_attivita\" date);\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > If I try to recreate the table I have this:\n> > ERROR: parser: parse error at or near \"'\"\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 Nov 1999 15:24:52 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug" }, { "msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> I confirm this bug for 6.5.3, Linux\n\nOK, seems like it is only 6.5.* tree and not development tree, which\nmeans development has a fix that was too risky for 6.5.*. 
Seems user\nwill have to wait for 7.0.\n\n> \n> \tOleg\n> \n> On Fri, 19 Nov 1999, jose soares wrote:\n> \n> > Date: Fri, 19 Nov 1999 12:32:04 +0100\n> > From: jose soares <[email protected]>\n> > To: hackers <[email protected]>\n> > Subject: [HACKERS] pg_dump bug\n> > \n> > Hi,\n> > \n> > I think I found a bug in pg_dump:\n> > I created a table like:\n> > \n> > CREATE TABLE ut (\n> > Azienda CHAR(16) NOT NULL,\n> > ragione_sociale VARCHAR(45) NOT NULL,\n> > indirizzo CHAR(40),\n> > inizio_attivita DATE DEFAULT CURRENT_DATE,\n> > fine_attivita DATE\n> > );\n> > \n> > and pg_dump modify the structure table as:\n> > \n> > \\connect - postgres\n> > CREATE TABLE \"ut\" (\n> > \"azienda\" character(16) NOT NULL,\n> > \"ragione_sociale\" character varying(45) NOT NULL,\n> > \"indirizzo\" character(40),\n> > \"inizio_attivita\" date DEFAULT date( 'current'::datetime + '0\n> > sec') NOT NULL,\n> > \"fine_attivita\" date);\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > \n> > If I try to recreate the table I have this:\n> > ERROR: parser: parse error at or near \"'\"\n> > \n> > Any ideas ?\n> > \n> > Jos_\n> > \n> > \n> > \n> > ************\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 10:55:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug" }, { "msg_contents": "I wrote:\n> Current sources still have a problem with this example, which is that\n> the default expression gets prematurely constant-folded:\n> \tCREATE TABLE ut (d1 DATE DEFAULT CURRENT_DATE);\n> pg_dumps as\n> \tCREATE TABLE \"ut\" (\n> \t \"d1\" date DEFAULT '11-19-1999'::date);\n> Drat. I thought I'd taken care of that class of problems...\n\nFixed in CVS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 1999 16:44:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug " } ]
[ { "msg_contents": "Welcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.6.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n\n(a side note - wouldn't it be helpful to have a little more info about the\nbuild, namely its time stamp and/or the CVS time stamp)\n\ntest=> \\d ord\nTable = ord\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| id | int4 |\n4 |\n| pos | int4 |\n4 |\n| tp | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\ntest=> select * from ord;\nid|pos|tp\n--+---+--\n 1| 1| 1\n 2| 2| 1\n 3| 3| 2\n 4| 1| 2\n 5| 3| 1\n(5 rows)\n\nThis query is fine:\n\ntest=> select o1.id from ord as o1, ord as o2 where o1.pos>2 and o2.pos<2\ntest-> and o1.tp=o2.tp;\nid\n--\n 5\n 3\n(2 rows)\n\nAnd this one is invalid:\n\ntest=> select o1.id from ord as o1, ord as o2 where o1.pos>2 and o2.pos<2\ntest-> and o1.tp=o2.tp and ord.id>3;\nid\n--\n 5\n 5\n 3\n 3\n(4 rows)\n\nThis query should probably fail instead of returning an invalid result. MS\nSQL 6.5 does just that:\n\nMsg 107, Level 16, State 3\nThe column prefix 'ord' does not match with a table name or alias name used\nin the query.\n\nGene Sokolov\n\n\n", "msg_date": "Fri, 19 Nov 1999 18:16:26 +0300", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Curiously confused query parser." }, { "msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> This query is fine:\n\n> test=> select o1.id from ord as o1, ord as o2 where o1.pos>2 and o2.pos<2\ntest-> and o1.tp=o2.tp;\n> id\n> --\n> 5\n> 3\n> (2 rows)\n\n> And this one is invalid:\n\n> test=> select o1.id from ord as o1, ord as o2 where o1.pos>2 and o2.pos<2\ntest-> and o1.tp=o2.tp and ord.id>3;\n> id\n> --\n> 5\n> 5\n> 3\n> 3\n> (4 rows)\n\nIt's not invalid, at least not according to Postgres' view of the world;\nyour reference to ord.id adds an implicit \"FROM ord AS ord\" to the FROM\nclause, turning the query into a 3-way join. The output is correct for\nthat interpretation.\n\nImplicit FROM clauses are a POSTQUEL leftover that is not to be found\nin the SQL92 spec. There's been some talk of emitting a warning message\nwhen one is added, because we do regularly see questions from confused\nusers. But if we took the feature out entirely, we'd doubtless break\nsome existing applications :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 10:36:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Curiously confused query parser. " }, { "msg_contents": "Gene Sokolov wrote:\n\n> And this one is invalid:\n>\n> test=> select o1.id from ord as o1, ord as o2 where o1.pos>2 and o2.pos<2\n> test-> and o1.tp=o2.tp and ord.id>3;\n> id\n> --\n> 5\n> 5\n> 3\n> 3\n> (4 rows)\n>\n> This query should probably fail instead of returning an invalid result. MS\n> SQL 6.5 does just that:\n>\n> Msg 107, Level 16, State 3\n> The column prefix 'ord' does not match with a table name or alias name used\n> in the query.\n\n Seems PostgreSQL tries to be a little too smart. 
It\n automatically adds another rangetable entry for ORD, so the\n query is executed as\n\n test=> select o1.id from ord as o1, ord as o2, ord as auto_rte\n test-> where o1.pos>2 and o2.pos<2\n test-> and o1.tp=o2.tp and auto_rte.id>3;\n\n For this query, the result is O.K.\n\n I don't know if this is according to the SQL specs and MS is\n wrong, or if PostgreSQL is violating the specs. Thomas?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 21:05:39 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Curiously confused query parser." } ]
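To make the contrast concrete: the query Jan shows is what PostgreSQL silently executed. What Gene presumably meant is the version below, where the extra condition is qualified with an alias that is already in the FROM list, so no implicit range table entry gets added:

    SELECT o1.id
      FROM ord AS o1, ord AS o2
     WHERE o1.pos > 2 AND o2.pos < 2
       AND o1.tp = o2.tp AND o1.id > 3;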
[ { "msg_contents": "I've installed Mandrake 6.1 on a new laptop (an early Christmas\npresent :) and notice a little trouble with the RPMs. Somehow, they\ninclude the old postgresql-clients-6.4.2 package as well as all of the\n-6.5.1 packages. I assume that RedHat 6.1 does not show the same\nproblem?\n\nDoes anyone already talk to the Mandrake folks, or run Mandrake and\nwould like to pursue this?\n\nbtw, the RPM installation was *really nice*!!!!! For some reason the\nserver packages were not installed when I built the system, and I\ninstalled later so got to see it happen. The RPM automatically\nunpacked everything, did the initdb, and a\n\"/etc/rc.d/init.d/postgresql start\" got me a server.\n\nOne detail: Mandrake defines a Postgres user, but disables the\npassword. I added the password and was then able to log in as the\nPostgres user and add other users. Is that the preferred way to do\nit??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 Nov 1999 15:40:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Mandrake Postgres RPMs" }, { "msg_contents": "> One detail: Mandrake defines a Postgres user, but disables the\n> password. I added the password and was then able to log in as the\n> Postgres user and add other users. Is that the preferred way to do\n> it??\n\nI usually 'su' to the postgres - user from root, and then create a\nsuper-user-account\nfor myself, and then do all the other stuff from there.\nI don't know how other people think about this, but I have found that\nthe less passwords there are into a system, the harder it is for people to\nbreak in.\n\nJoost Roeleveld\n\n", "msg_date": "Fri, 19 Nov 1999 16:56:48 +0100", "msg_from": "\"J. Roeleveld\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mandrake Postgres RPMs" }, { "msg_contents": "Thomas Lockhart wrote:\n> I've installed Mandrake 6.1 on a new laptop (an early Christmas\n> present :) and notice a little trouble with the RPMs. Somehow, they\n> include the old postgresql-clients-6.4.2 package as well as all of the\n> -6.5.1 packages. I assume that RedHat 6.1 does not show the same\n> problem?\n\nThat is a Mandrake problem. They haven't really followed the\ndevelopment of the new RPM's like RedHat did for RedHat 6.1. So, they\nshipped mutually exclusive RPMs for PostgreSQL because they didn't pay\nclose enough attention. Also, the RPM's shipped with Mandrake 6.1 are\nsomewhat older than what shipped with RedHat 6.1. And RedHat 6.1 does\nnot include any older RPMs of PostgreSQL. (http://www.cheapbytes.com\nfor real cheap RedHat CD's..... :-)) HOWEVER, f you have a laptop, you\nmay want to wait until RedHat 6.2 is released, as there are some issues\nwith some laptops and RedHat 6.1's kernel -- although I think that most\nof the troubles are with Toshiba's.\n\n> Does anyone already talk to the Mandrake folks, or run Mandrake and\n> would like to pursue this?\n\nI have e-mailed the Mandrake folks on two occasions -- haven't received\na reply. HOWEVER, they will pull whatever is the current RPM set from\nftp.postgresql.org when they get to another release point (AFAIK).\n\n> btw, the RPM installation was *really nice*!!!!! For some reason the\n> server packages were not installed when I built the system, and I\n> installed later so got to see it happen. 
The RPM automatically\n> unpacked everything, did the initdb, and a\n> \"/etc/rc.d/init.d/postgresql start\" got me a server.\n\nWell, thank you. It is virtually impossible to force the installation\nof the server package for the OS install without forcing it for all\ninstalls -- which I believe we didn't want -- the purpose of splitting\nit out, IIRC, was to allow a client-only installation for those who\nmight want such a beast.\n\nAs I said, that was a slightly older RPM set than RedHat 6.1 shipped --\nMandrake 6.1 shipped before RH 6.1 by a couple of weeks. The Mandrake\npeople pulled the 6.5.1-0.8 RPM's off of RawHide -- apparently about two\ndays before I finalized the upgrading stuff -- but after I put in the\nautomatic initdb, I believe.\n\n> One detail: Mandrake defines a Postgres user, but disables the\n> password. I added the password and was then able to log in as the\n> Postgres user and add other users. Is that the preferred way to do\n> it??\n\nRedHat 6.1 also does this. This way, passwordless accounts are not left\nafter an installation as a security hole. BTW, this is the way I\nnormally run -- if I want to do stuff as postgres, I su to root then su\nto postgres, as I don't want to allow direct logins to postgres. Of\ncourse, to each his own. The RPM post install script could feed a\ndefault password in, but that would be just as bad as no password at\nall.\n\nGlad you're enjoying it....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 19 Nov 1999 12:14:20 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mandrake Postgres RPMs" } ]
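For concreteness, the "add other users" step would look something like this at the SQL level once logged in as the postgres superuser; the user names and privilege choices below are invented for illustration:

    CREATE USER thomas CREATEDB CREATEUSER;     -- a full superuser
    CREATE USER webuser NOCREATEDB NOCREATEUSER;  -- an unprivileged account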
[ { "msg_contents": "Check out http://www.linuxplanet.com/linuxplanet/tutorials/1251/1/\n\nSome interesting comments in there on us. This is a tutorial/comparison,\nso don't expect extreme accuracy -- but th author does have some\ncomments on features and documentation for PostgreSQL.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 19 Nov 1999 13:18:05 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "LinuxPlanet RDBMS comparison article series." } ]
[ { "msg_contents": "I't committed.\n\n There is a new shell script run_check.sh in the regression\n test directory. It is driven by the conficuration file\n ./sql/run_check.tests and runs most of our tests parallel. It\n is invoked with the new GNUmakefile target 'runcheck'.\n\n\n The regress.sh is using the new tests file too by extracting\n the tests to run via awk, so ./sql/tests is obsolete now and\n subject to be removed soon.\n\nBruce Momjian wrote:\n\n> Any modifications to shared pg_ tables would be a problem. Also, pg_log\n> and pg_variable locking is not happening in there either, is it?\n\n Thus, it does a complete independant database installation\n below the regression test, starting it's own postmaster (and\n terminating it at the end, of course). The entire test suite\n can be run without even shutting down the currently installed\n database.\n\n So a\n\n ...src > ./configure\n ...src > make\n ...src > cd test/regression\n ...src/test/regression > make clean all runcheck\n\n sequence will compile and temporarily install the new build\n under the regression path, and then run all the tests against\n it.\n\n I think if my new test driver has settled, we should change\n the GNUmakefile to just print some messages if 'make runtest'\n is typed. The current runtest target should IMHO still be\n availabe under another name, to test the real life\n installation created by 'make install'.\n\n Alternatively (IMHO better) some parameter to run_check.sh\n could tell if it should create it's own, temporary\n installation, or if it should use the existing installed\n database system and the already running postmaster.\n\nTom Lane wrote:\n\n> In other words, you've already exposed a bug! Right on!\n\n Absolutely right and I've commented out that code for now.\n It is in utils/cache/catcache.c line 996. The comments say\n that the code should prevent the backend from entering\n infinite recursion while loading new cache entries. But the\n flag used for it seems to live in shared memory, thus it is\n affected by other backends too. If the flag is true doesn't\n tell if a backend set it itself, or if another one did. If we\n really need this check, it must be implemented in another\n way.\n\n Another bug I discoverd is that Vadims WAL code allways looks\n for the pg_control file in the PGDATA or compiled in default\n directory, ignoring the -D switch. Haven't fixed it up to\n now, but run_check.sh uses PGDATA, so it's safe at the\n moment.\n\n I ran the old regress.sh using the default installation\n parallel to the run_check.sh using it's own installation and\n postmaster.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 19 Nov 1999 20:30:14 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "New regression driver" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> There is a new shell script run_check.sh in the regression\n> test directory. It is driven by the conficuration file\n> ./sql/run_check.tests and runs most of our tests parallel. It\n> is invoked with the new GNUmakefile target 'runcheck'.\n\nThis is way cool. 
I had to fix a couple of silly little portability\nproblems, but I like it.\n\n> I think if my new test driver has settled, we should change\n> the GNUmakefile to just print some messages if 'make runtest'\n> is typed. The current runtest target should IMHO still be\n> availabe under another name, to test the real life\n> installation created by 'make install'.\n\n> Alternatively (IMHO better) some parameter to run_check.sh\n> could tell if it should create it's own, temporary\n> installation, or if it should use the existing installed\n> database system and the already running postmaster.\n\nWe should leave the old driver available, so that if an unexpected\nproblem arises one can easily check to see if it's being triggered by\nconcurrent execution or not. Or, run_check could have a parameter to\nforce serialized execution, if you would rather have just one script.\nIn that case we could toss the old runtest and rename run_check to\nruntest. (If we do keep both scripts, can we pick more helpful names\nthan \"runtest\" and \"run_check\"? The difference is not immediately\nobvious...)\n\nI agree that run_check needs to be able to test a normal installation\nas well as a temporary one.\n\n> Absolutely right and I've commented out that code for now.\n> It is in utils/cache/catcache.c line 996. The comments say\n> that the code should prevent the backend from entering\n> infinite recursion while loading new cache entries. But the\n> flag used for it seems to live in shared memory, thus it is\n> affected by other backends too. If the flag is true doesn't\n> tell if a backend set it itself, or if another one did. If we\n> really need this check, it must be implemented in another\n> way.\n\nI will look at this. I don't think that the catcaches live in\nshared memory, so the problem is probably not what you suggest.\nThe fact that the behavior is different under load may point to a\nreal problem, not just an insufficiently clever debugging check.\n\n> I ran the old regress.sh using the default installation\n> parallel to the run_check.sh using it's own installation and\n> postmaster.\n\nThey both give the same results on my platform, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 1999 16:58:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New regression driver " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> [email protected] (Jan Wieck) writes:\n>> It is in utils/cache/catcache.c line 996. The comments say\n>> that the code should prevent the backend from entering\n>> infinite recursion while loading new cache entries.\n\n> I will look at this. I don't think that the catcaches live in\n> shared memory, so the problem is probably not what you suggest.\n> The fact that the behavior is different under load may point to a\n> real problem, not just an insufficiently clever debugging check.\n\nIndeed, this is a real bug, and commenting out the code that caught\nit is not the right fix!\n\nWhat is happening is that utils/inval.c is trying to initialize some\nvariables that contain OIDs of system relations. This means calling\nthe catcache routines in order to look up relation names in pg_class.\nHowever, if a shared cache inval message arrives from another backend\nwhile that's happening, we recursively invoke inval.c to deal with the\nmessage. And inval.c sees that its OID variables aren't initialized\nyet, so it recursively calls the catcache routines to try to get them\ninitialized. 
Or, if just the first one's been initialized so far,\nValidateHacks() assumes they're all valid, and you can end up at the\nelog(FATAL) panic at the bottom of CacheIdInvalidate(). I've got a core\ndump which contains a ten-deep recursion between inval.c and syscache.c,\nculminating in elog(FATAL) because the eleventh incoming sinval message\nwas just slow enough to let inval.c's first OID variable get filled in\nbefore it arrived.\n\nIn short: we don't deal very robustly with cache invals happening\nduring backend startup. Send invals at a new backend with just the\nright timing, and it'll choke.\n\nI am not sure if this bug is of long standing or if we introduced it\nsince 6.5. It's possible I created it while messing with the relcache\nstuff a month or two ago. But I can easily believe that it's been\nthere a long time and we never had a way of reproducing the problem\nwith any reliability before.\n\nI think the fix is to rip out inval.c's attempt to look up system\nrelation names, and just give it hardwired knowledge of their OIDs.\nEven though it sort-of works to do the lookups, it's bad practice for\nroutines that are potentially called during catcache initialization\nto depend on the catcache to be already working. And there are other\nplaces that already have hardwired knowledge of the system relation\nOIDs, so...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 1999 19:10:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New regression driver " }, { "msg_contents": "> I think the fix is to rip out inval.c's attempt to look up system\n> relation names, and just give it hardwired knowledge of their OIDs.\n> Even though it sort-of works to do the lookups, it's bad practice for\n> routines that are potentially called during catcache initialization\n> to depend on the catcache to be already working. And there are other\n> places that already have hardwired knowledge of the system relation\n> OIDs, so...\n\nFYI, I am in the process of coding all cache miss lookups to use new\nsystem indexes. I have also added code to SearchSelfReferences()\nbecause pg_operator has some fancy dependency on its lookup using an index,\nand has to have certain lookups happen with a sequential and not an\nindex scan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Nov 1999 19:58:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New regression driver" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> ... I have also added code to SearchSelfReferences()\n> because pg_operator has some fancy dependency on its lookup using an index,\n> and has to have certain lookups happen with a sequential and not an\n> index scan.\n\nSay what? That's got to be a symptom of a bug somewhere. Maybe\npg_operator needs some CommandCounterIncrement calls so that the\ntuples it inserts become visible earlier? What are you seeing exactly?\n\nFor that matter, SearchSelfReferences looks like one giant kluge to me.\nWho added this, and why, and what's the logic? 
(Undocumented kluges\nare very high on my hate list.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 1999 21:11:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New regression driver " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Sunday, November 21, 1999 11:12 AM\n> To: Bruce Momjian\n> Cc: PostgreSQL HACKERS\n> Subject: Re: [HACKERS] New regression driver\n>\n>\n> Bruce Momjian <[email protected]> writes:\n> > ... I have also added code to SearchSelfReferences()\n> > because pg_operator has some fancy dependency on its lookup using an index,\n> > and has to have certain lookups happen with a sequential and not an\n> > index scan.\n>\n> Say what? That's got to be a symptom of a bug somewhere. Maybe\n> pg_operator needs some CommandCounterIncrement calls so that the\n> tuples it inserts become visible earlier? What are you seeing exactly?\n>\n> For that matter, SearchSelfReferences looks like one giant kluge to me.\n> Who added this, and why, and what's the logic? (Undocumented kluges\n> are very high on my hate list.)\n>\n\nIt's me who added the function.\nI left it undocumented, sorry.\nBruce, could you add some documentation for it?\n\nBruce added a new index to pg_index.\nAn index scan needs information from pg_index.\nIf we use the new index, we need the information about the index\nin pg_index.\nDoesn't this cause a real cycle?\n\nI added the function in order to hold one tuple which causes a real\ncycle. The tuple in pg_index should be scanned sequentially.\n\nI don't think it's the best solution.\nPlease change it if there's a better way.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Mon, 22 Nov 1999 11:40:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] New regression driver " }, { "msg_contents": "> It's me who added the function.\n> I left it undocumented, sorry.\n> Bruce, could you add some documentation for it?\n\nDone. Will commit soon.\n\n> \n> Bruce added a new index to pg_index.\n> An index scan needs information from pg_index.\n> If we use the new index, we need the information about the index\n> in pg_index.\n> Doesn't this cause a real cycle?\n\nYes. I am using it for a pg_operator index too.\n\n> \n> I added the function in order to hold one tuple which causes a real\n> cycle. The tuple in pg_index should be scanned sequentially.\n> \n> I don't think it's the best solution.\n> Please change it if there's a better way.\n\nI talked to Tom, and we think it is a good solution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Nov 1999 21:46:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New regression driver" } ]
[ { "msg_contents": "I need some Unix guidance.\n\nFoolishly or not, I designed the new PostgreSQL logging subsystem\nto run as a process. It's forked off a function called by the\nPostmaster main program right before the if(...)pmdaemonize\nstatements -- meaning that the shared memory enviroment has been\nestablished, but the signals have not yet been attached.\n\nWhen I issue the fork() call, it successfully creates a child process,\nbut the child is DOA. Investigation reveals a signal 5 trace/breakpoint\ntrap at the fork.\n\nHow do I prevent this? I presume you can mask it, but is that really\nwhat I want to do?\n\n TIA,\n\n Tim Holloway\n", "msg_date": "Fri, 19 Nov 1999 14:47:13 -0500", "msg_from": "Tim Holloway <[email protected]>", "msg_from_op": true, "msg_subject": "All forked up" }, { "msg_contents": "Tim Holloway <[email protected]> writes:\n> Foolishly or not, I designed the new PostgreSQL logging subsystem\n> to run as a process. It's forked off a function called by the\n> Postmaster main program right before the if(...)pmdaemonize\n> statements -- meaning that the shared memory enviroment has been\n> established, but the signals have not yet been attached.\n\nI'd be inclined to think you should fork off after pmdaemonize rather\nthan before, but of course that makes no difference unless you're\nusing -S.\n\n> When I issue the fork() call, it successfully creates a child process,\n> but the child is DOA. Investigation reveals a signal 5 trace/breakpoint\n> trap at the fork.\n\nWhat did your investigation consist of? That could just be the result\nof your debugger trying to step through the fork. Many Unix debuggers\ndon't cope very well with multi-process programs.\n\nI'm still wondering why you are bothering with an extra process, though.\nBy the time you get done writing the required communication support,\nyou'll be adding quite a large amount of code, and at least one new\nfailure mode for Postgres, in return for what exactly? This design\nisn't making sense to me, compared to just letting the backends issue\ntheir own logging messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 1999 17:33:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] All forked up " } ]
[ { "msg_contents": "--- Jan Wieck <[email protected]> wrote:\n> Bruce Momjian wrote:\n> \n> > > > Here are the major open issues for 7.0 that I remember:\n> > > > Foreign Keys - Jan\n> > > > WAL - Vadim\n> > > > Function args - Tom\n> > > > System indcxes - Bruce\n> > > > Outer joins and new multi-query parse tree are questionable items\n> for\n> > > > 7.0.\n> > >\n> > > You might include \"join syntax\", which will be ready even if outer\n> > > joins are not.\n> > >\n> > > Also, didn't some folks express concern that indices on system\n> tables\n> > > would make the backend more fragile? Did we resolve that issue?\n> >\n> > We have indexes on most system tables and it isn't a problem\n> currently.\n> \n> It is, because a corrupted index on a system table cannot be\n> corrected by drop/create, as a user defined index could be.\n> \n> I don't know why and when reindexdb disappeared, but that\n> script was a last resort for exactly the situation of a\n> corrupted system index. Let me take a look if this\n> functionality could easily be recreated.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n\nFor what its worth, TRUNCATE TABLE already rebuilds indexes on TRUNCATE'd\ntables, so it shouldn't be that much of a leap, one would think.\n\nMike Mascari\n([email protected])\n\n\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Fri, 19 Nov 1999 14:48:21 -0800 (PST)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 7.0 status request" }, { "msg_contents": "Mike Mascari wrote:\n\n> > It is, because a corrupted index on a system table cannot be\n> > corrected by drop/create, as a user defined index could be.\n> >\n> > I don't know why and when reindexdb disappeared, but that\n> > script was a last resort for exactly the situation of a\n> > corrupted system index. Let me take a look if this\n> > functionality could easily be recreated.\n>\n> For what its worth, TRUNCATE TABLE already rebuilds indexes on TRUNCATE'd\n> tables, so it shouldn't be that much of a leap, one would think.\n\n Imagine a corrupted pg_attribute_attrelid_index. What would\n it be good for to do a TRUNCATE TABLE pg_attribute? For god's\n sake, it's not permitted.\n\n I'm talking about corrupted system catalog indices! There's a\n substantial difference for them. If only one is missing, you\n might not be able to even connect to that database because\n the backend wouldn't start up. The problem here is, that the\n system catalog's partially reside in a shared memory cache\n during the lifetime of a postmaster! And remember, one\n postmaster can server many databases. There are REALLY\n different semantics!\n\n Please folks, it's nice if you succeded in recovering from a\n corrupted index. But as long as it's name didn't start with\n pg_, it's not what I'm talking about.\n\n Maybe a directly issued index_build() from the bootstrap\n interface might help. Will create another bootparser command\n and give it a try.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sat, 20 Nov 1999 01:13:54 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 7.0 status request" } ]
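Jan's distinction, spelled out in SQL (the index and table names below are illustrative, not from the thread): a corrupted user-defined index can simply be dropped and rebuilt, but that recovery path is closed for a system catalog index.

    -- Recovery for an ordinary user index:
    DROP INDEX ut_azienda_idx;
    CREATE INDEX ut_azienda_idx ON ut (azienda);

    -- Not an option for a system index, which is Jan's point:
    DROP INDEX pg_attribute_attrelid_index;  -- rejected for pg_* relations,
                                             -- and the backend needs this
                                             -- index just to start up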
[ { "msg_contents": "I tried the following:\n\n{CREATE|ALTER} USER username\n[ WITH ID/UID/<whatever> number ]\n[ WITH PASSWORD password ]\n[ etc. as usual ]\n\nwhich gives shift/reduce conflicts, even if I make PASSWORD and\nID/whatever a pure keyword (WITH is already one). So that won't work.\n\nI am currently basing my \"experiments\" on CREATE USER name [ SYSID nr ] [\nWITH PASSWORD ... ] ... which allows SYSID to be a ColId. Any better (and\nworking) syntax suggestions are welcome.\n\n(Also, the idea would be to reuse \"SYSID\" for a CREATE GROUP (any day now\n;) statement, so no UID.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sat, 20 Nov 1999 02:34:38 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "create/alter user extension syntax" }, { "msg_contents": "> I tried the following:\n> {CREATE|ALTER} USER username\n> [ WITH ID/UID/<whatever> number ]\n> [ WITH PASSWORD password ]\n> [ etc. as usual ]\n> which gives shift/reduce conflicts, even if I make PASSWORD and\n> ID/whatever a pure keyword (WITH is already one). So that won't work.\n\nSure it will (well, probably ;)\n\nIt depends how you set up the syntax. If you just try to have\nsomething like (pseudocode, I'm rushing to leave for the weekend)\n\ncreateuser: CREATE USER ColId Qual {};\n\nQual: WITH ID number {}\n | WITH PASSWORD password {};\n\nthen the single-token lookahead of yacc will get in trouble. But if\nyou break it up some more then yacc can start maintaining multiple\ntoken pointers to keep going, and the shift/reduce conflicts will go\naway. Something like\n\ncu: CREATE USER ColId QualClause {};\n\nQualClause: QualClause WITH QualExpr {};\n | QualClause {}\n | /*EMPTY*/ {};\n\nmight do the trick, though I might be omitting one level. Check gram.y\nfor similar syntax examples such as the column qualifiers for CREATE\nTABLE (though those are a bit more involved than this probably needs).\n\nGood luck, and I'll be happy to help in a few days if you want.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 20 Nov 1999 03:08:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create/alter user extension syntax" } ]
[ { "msg_contents": "Current list is:\n\n Foreign Keys - Jan\n WAL - Vadim\n Function args - Tom\n System indexes - Bruce\n\tDate/Time types - Thomas\n\tOptimizer\t- Tom\n\n\tOuter Joins - Thomas?\n\tLong Tuples - ?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Nov 1999 23:23:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 status request" } ]
[ { "msg_contents": "Hello all,\n\nDue to the large objects in our database, it's not easy to\ndump/reload. So if the 'pg_log' file becomes very big, can I\nsimply stop the server & delete it, and restart the server?\n(w/o dump/reloading.) to save the disk space.\n\nBest,\nC.S.Park\n\n", "msg_date": "Sat, 20 Nov 1999 22:26:16 +0900", "msg_from": "\"C.S.Park\" <[email protected]>", "msg_from_op": true, "msg_subject": "[q] can I simply remove pg_log file?" } ]
[ { "msg_contents": "I have two tables of roughly 200,000,000 records and want\nto update one column in one of the tables according to\nvalues in the second table using a unique key.\n\nFor example:\n\n\tupdate table1 set x=1 from table2 where \n\t exists (select * from table2 table1.key=table2.key);\n\n(or using an IN clause or using a straight join but EXPLAIN tells me\nthat the latter is much slower).\n\nThis does work but appends the updates (until the next vacuum). \nFor a 100GB database, this is too large of a storage overhead. \nIs there another good way? I've searched the newsgroups, docs and \nbooks without a clue . . .\n\nThanks much,\n\n--Martin\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n", "msg_date": "Sat, 20 Nov 1999 11:04:23 -0500", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Bulk update of large database" }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> This does work but appends the updates (until the next vacuum). \n> For a 100GB database, this is too large of a storage overhead. \n> Is there another good way?\n\nThere is no alternative; any sort of update operation will write a\nnew tuple value without first deleting the old. This must be so\nto preserve transaction semantics: if an error occurs later on\nduring the update (eg, violation of a unique-index constraint) the\nold tuple value must still be there.\n\nThe only answer I can see is to update however many tuples you can\nspare the space for, commit the transaction, vacuum, repeat.\n\nThe need for repeated vacuums in this scenario is pretty annoying.\nIt'd be nice if we could recycle dead tuples without a full vacuum.\nOffhand I don't see any way to do it without introducing performance\npenalties elsewhere...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 1999 12:32:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bulk update of large database " } ]
[ { "msg_contents": "Hi,\nI'm trying to compile SPI function written on C++.\nCompile fail on using C++ keywords (typeid, typename) in header files. \nWrapping #include in extern \"C\" {} don't help.\nHere is output of the compiler:\n\n\n++ -I/home/akorud/develop/postgresql-6.5.3/src/include\n-I/usr/local/pgsql/include -traditional -o dialup.o -c dialup.cpp\nIn file included from\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/relation.h:16,\n from\n/home/akorud/develop/postgresql-6.5.3/src/include/executor/spi.h:14,\n from dialup.cpp:4:\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/parsenodes.h:698:\nparse error before `typename'\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/parsenodes.h:738:\nparse error before `typename'\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/parsenodes.h:770:\nparse error before `typename'\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/parsenodes.h:874:\nparse error before `;'\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/parsenodes.h:875:\nparse error before `typename'\nIn file included from\n/home/akorud/develop/postgresql-6.5.3/src/include/utils/rel.h:17,\n from\n/home/akorud/develop/postgresql-6.5.3/src/include/access/relscan.h:17,\n from\n/home/akorud/develop/postgresql-6.5.3/src/include/nodes/execnodes.h:19,\n from\n/home/akorud/develop/postgresql-6.5.3/src/include/executor/spi.h:15,\n from dialup.cpp:4:\n/home/akorud/develop/postgresql-6.5.3/src/include/access/tupdesc.h:74:\nparse error before `typeid'\n\n\nAny suggestions?\n\nThanks in advance, \nAndriy Korud, Lviv, Ukraine.\n\n\n", "msg_date": "20 Nov 1999 19:25:42 +0200", "msg_from": "\"Andrij Korud\" <[email protected]>", "msg_from_op": true, "msg_subject": "C++ and SPI" } ]