[ { "msg_contents": "> I looked into your XLOG stuff a little.\n> It seems that XLogFileOpen() isn't implemented yet.\n> Would/should XLogFIleOpen() guarantee to open a Relation\n> properly at any time ?\n\nIf each relation will have unique file name then there will be no\nproblem. If a relation was dropped then after crash redo will try\nto open probably unexisted file. XLogFileOpen will return NULL in this case\n(redo will do nothing) and remember this fact (ie - \"file deletion is\nexpected\").\n\nVadim\n\n\n", "msg_date": "Thu, 14 Sep 2000 16:57:17 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: strange behaviour (bug) " } ]
[ { "msg_contents": "> Philip Warner mentioned about the advantage of random number.\n> It's exactly what I've wanted to say.\n> \n> >> it removes the temptation to write utilities that rely on\n> >> the internal representation of our data.\n> \n> It is preferable that file naming rule is encapsulated so that we\n> can change it without notice.\n\nSo, I assume that you vote YES on this subject? -:)\n(As far as I remember, it was your idea).\n\nVadim\n", "msg_date": "Thu, 14 Sep 2000 17:02:30 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Status of new relation file naming" }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > Philip Warner mentioned about the advantage of random number.\n> > It's exactly what I've wanted to say.\n> > \n> > >> it removes the temptation to write utilities that rely on\n> > >> the internal representation of our data.\n> > \n> > It is preferable that file naming rule is encapsulated so that we\n> > can change it without notice.\n> \n> So, I assume that you vote YES on this subject? -:)\n> (As far as I remember, it was your idea).\n>\n\nYes.\n\nHiroshi Inoue \n", "msg_date": "Fri, 15 Sep 2000 09:19:34 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Status of new relation file naming" } ]
[ { "msg_contents": "> > So, I assume that you vote YES on this subject? -:)\n> > (As far as I remember, it was your idea).\n> >\n> \n> Yes.\n\nUNIQUE_ID file names: Hiroshi, Marc, Vadim\n\nWe can use oids as unique ids, but these were another oids -:)\n\nVadim\n", "msg_date": "Thu, 14 Sep 2000 17:16:00 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Status of new relation file naming" } ]
[ { "msg_contents": "It seems that foreign key does not work in current, if specified with\nprimary key definition. Take a look at following example(works in\n7.0.2.):\n\ntest=# CREATE TABLE PKTABLE ( ptest1 int, ptest2 int, ptest3 int, ptest4 text, PRIMARY KEY(ptest1, ptest2, ptest3) );\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nCREATE\ntest=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, primary key (ftest1,ftest2,ftest3,ftest4), CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: columns referenced in foreign key constraint not found.\n\nHowever, if primary key definition is not used with fkey, it works.\n\ntest=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\n\nAny thoughts?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 15 Sep 2000 20:51:32 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "fkey + primary key does not work in current" }, { "msg_contents": "\nOn Fri, 15 Sep 2000, Tatsuo Ishii wrote:\n\n> It seems that foreign key does not work in current, if specified with\n> primary key definition. 
Take a look at following example(works in\n> 7.0.2.):\n> \n> test=# CREATE TABLE PKTABLE ( ptest1 int, ptest2 int, ptest3 int, ptest4 text, PRIMARY KEY(ptest1, ptest2, ptest3) );\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n> CREATE\n> test=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, primary key (ftest1,ftest2,ftest3,ftest4), CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> ERROR: columns referenced in foreign key constraint not found.\n\nHmm, that's very strange. I wonder which columns it think didn't exist.\nIt shouldn't be checking the pktable in that case, which would imply\nit doesn't believe the existance of ftest1,ftest2,ftest3. Probably\na stupid mistake on my part. As soon as I clear off space to compile\ncurrent, I'll look.\n\n", "msg_date": "Fri, 15 Sep 2000 10:02:59 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fkey + primary key does not work in current" }, { "msg_contents": "Hello Stephan,\n\nI think this listserver hates me. none of my messages seem to go through.\nAnyone read this ??\n\nI need to get access to records that is marked expired in the database any\nidea how or if it's possible ?\n\nPlease respond even if it's just to say that you don't know so I KNOW that\nmy messages get through!!!\n\n\nBest regards,\n Eje mailto:[email protected]\nThe Family Entertainment Network http://www.fament.com\nPhone : 316-231-7777 Fax : 316-231-4066\n - Your Internet Solution Provider & PC Computer Solutions Provider -\n\n\n\n", "msg_date": "Fri, 15 Sep 2000 12:11:05 -0500", "msg_from": "Eje Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Expired records ?" 
}, { "msg_contents": "Your message got though. I don't know the answer to your question, but\nI'll bet that it is NO.\n\nOn Fri, 15 Sep 2000, Eje Gustafsson wrote:\n\n> Hello Stephan,\n> \n> I think this listserver hates me. none of my messages seem to go through.\n> Anyone read this ??\n> \n> I need to get access to records that is marked expired in the database any\n> idea how or if it's possible ?\n> \n> Please respond even if it's just to say that you don't know so I KNOW that\n> my messages get through!!!\n> \n> \n> Best regards,\n> Eje mailto:[email protected]\n> The Family Entertainment Network http://www.fament.com\n> Phone : 316-231-7777 Fax : 316-231-4066\n> - Your Internet Solution Provider & PC Computer Solutions Provider -\n> \n> \n> \n\n", "msg_date": "Fri, 15 Sep 2000 13:29:16 -0500 (CDT)", "msg_from": "John McKown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expired records ?" }, { "msg_contents": "Has this been resolved?\n\n\n> \n> On Fri, 15 Sep 2000, Tatsuo Ishii wrote:\n> \n> > It seems that foreign key does not work in current, if specified with\n> > primary key definition. Take a look at following example(works in\n> > 7.0.2.):\n> > \n> > test=# CREATE TABLE PKTABLE ( ptest1 int, ptest2 int, ptest3 int, ptest4 text, PRIMARY KEY(ptest1, ptest2, ptest3) );\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n> > CREATE\n> > test=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, primary key (ftest1,ftest2,ftest3,ftest4), CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\n> > NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> > ERROR: columns referenced in foreign key constraint not found.\n> \n> Hmm, that's very strange. 
I wonder which columns it think didn't exist.\n> It shouldn't be checking the pktable in that case, which would imply\n> it doesn't believe the existance of ftest1,ftest2,ftest3. Probably\n> a stupid mistake on my part. As soon as I clear off space to compile\n> current, I'll look.\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Oct 2000 20:51:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fkey + primary key does not work in current" }, { "msg_contents": "\nI believe that I sent a patch on Sep 17 for this to -patches although\nI don't know if anyone saw it (it's in the archives, so I know it\nwent through).\n\nStephan Szabo\[email protected]\n\nOn Mon, 16 Oct 2000, Bruce Momjian wrote:\n\n> Has this been resolved?\n>\n> > On Fri, 15 Sep 2000, Tatsuo Ishii wrote:\n> > \n> > > It seems that foreign key does not work in current, if specified with\n> > > primary key definition. Take a look at following example(works in\n> > > 7.0.2.):\n> > > \n> > > test=# CREATE TABLE PKTABLE ( ptest1 int, ptest2 int, ptest3 int, ptest4 text, PRIMARY KEY(ptest1, ptest2, ptest3) );\n> > > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n> > > CREATE\n> > > test=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, primary key (ftest1,ftest2,ftest3,ftest4), CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\n> > > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\n> > > NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> > > ERROR: columns referenced in foreign key constraint not found.\n> > \n> > Hmm, that's very strange. 
I wonder which columns it think didn't exist.\n> > It shouldn't be checking the pktable in that case, which would imply\n> > it doesn't believe the existance of ftest1,ftest2,ftest3. Probably\n> > a stupid mistake on my part. As soon as I clear off space to compile\n> > current, I'll look.\n\n", "msg_date": "Mon, 16 Oct 2000 18:32:55 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fkey + primary key does not work in current" }, { "msg_contents": "That's strange. I didn't see it. Can you send it over. The archives\ndon't seem to be working again.\n\n> \n> I believe that I sent a patch on Sep 17 for this to -patches although\n> I don't know if anyone saw it (it's in the archives, so I know it\n> went through).\n> \n> Stephan Szabo\n> [email protected]\n> \n> On Mon, 16 Oct 2000, Bruce Momjian wrote:\n> \n> > Has this been resolved?\n> >\n> > > On Fri, 15 Sep 2000, Tatsuo Ishii wrote:\n> > > \n> > > > It seems that foreign key does not work in current, if specified with\n> > > > primary key definition. Take a look at following example(works in\n> > > > 7.0.2.):\n> > > > \n> > > > test=# CREATE TABLE PKTABLE ( ptest1 int, ptest2 int, ptest3 int, ptest4 text, PRIMARY KEY(ptest1, ptest2, ptest3) );\n> > > > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\n> > > > CREATE\n> > > > test=# CREATE TABLE FKTABLE ( ftest1 int, ftest2 int, ftest3 int, ftest4 int, primary key (ftest1,ftest2,ftest3,ftest4), CONSTRAINT constrname3 FOREIGN KEY(ftest1, ftest2, ftest3) REFERENCES PKTABLE);\n> > > > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\n> > > > NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> > > > ERROR: columns referenced in foreign key constraint not found.\n> > > \n> > > Hmm, that's very strange. 
I wonder which columns it think didn't exist.\n> > > It shouldn't be checking the pktable in that case, which would imply\n> > > it doesn't believe the existance of ftest1,ftest2,ftest3. Probably\n> > > a stupid mistake on my part. As soon as I clear off space to compile\n> > > current, I'll look.\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Oct 2000 21:42:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fkey + primary key does not work in current" } ]
[ { "msg_contents": "At 04:41 15/09/00 -0500, Jan Wieck wrote:\n>\n> So if you find an ON SELECT rule on a relation, it is a VIEW.\n>\n\nThanks for this, but I'm using 'pg_views' now since it means pg_dump does\nnot have to interpret the meanings of the various columns. With time, I\nwould like to remove as much internal knowledge as I can from pg_dump.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 15 Sep 2000 22:07:11 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: current is broken" } ]
[ { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Can I safely assume that the OID of the standard data types remain the same\n> for future releases? And of course that they are the same for every\n> installation?\n\nThey are fixed in any one version, and really are not very likely to\nchange across versions either. But I suppose it could happen.\n\n> I've been send a patch to speed up ecpg significantly by not looking up\n> datatypes everytime. As it is written right now it works by har coding some\n> types. I wonder if this will create problems.\n\nExactly how \"hard coded\" do you mean? If you #include \"catalog/pg_types.h\"\nand use the OID #defines therein, you're not doing any worse than a lot\nof places in the backend. At worst you'd create a cross-major-version\nincompatibility for ecpg.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Sep 2000 10:23:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type OIDs " }, { "msg_contents": "Can I safely assume that the OID of the standard data types remain the same\nfor future releases? And of course that they are the same for every\ninstallation?\n\nI've been send a patch to speed up ecpg significantly by not looking up\ndatatypes everytime. As it is written right now it works by har coding some\ntypes. I wonder if this will create problems.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 15 Sep 2000 12:57:20 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "type OIDs" }, { "msg_contents": "On Fri, Sep 15, 2000 at 10:23:01AM -0400, Tom Lane wrote:\n> They are fixed in any one version, and really are not very likely to\n> change across versions either. But I suppose it could happen.\n\nThat's good enough for my usage I think.\n\n> Exactly how \"hard coded\" do you mean? 
If you #include \"catalog/pg_types.h\"\n> and use the OID #defines therein, you're not doing any worse than a lot\n> of places in the backend. At worst you'd create a cross-major-version\n> incompatibility for ecpg.\n\nSounds good. I will do that.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 17 Sep 2000 18:59:57 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type OIDs" } ]
[ { "msg_contents": "Michael Meskes <[email protected]> writes:\n> What's going on?\n\nI'd suggest a full \"make distclean\" and reconfigure. Looks like you\nmissed some build steps, which is maybe not too surprising considering\nthat Peter has extensively revised the Makefile tree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Sep 2000 10:24:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot compile " }, { "msg_contents": "I just did a cvsup to get up-to-date again after I hadn't found time to work\non ecpg for some months and found out that I cannot even compile anymore:\n\nmake[3]: Entering directory /home/postgres/pgsql/src/backend/parser'\ngcc -MM -I../../../src/include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error *.c >depend\nanalyze.c:22: parser/parse.h: No such file or directory\nanalyze.c:30: utils/fmgroids.h: No such file or directory\nkeywords.c:22: parser/parse.h: No such file or directory\nparse_clause.c:22: parser/parse.h: No such file or directory\nparse_expr.c:24: parser/parse.h: No such file or directory\nparse_func.c:32: utils/fmgroids.h: No such file or directory\nparse_oper.c:25: utils/fmgroids.h: No such file or directory\nparser.c:22: parser/parse.h: No such file or directory\nscan.l:30: parser/parse.h: No such file or directory\nmake[3]: *** [depend] Error 1\nmake[3]: Leaving directory /home/postgres/pgsql/src/backend/parser'\nmake[2]: *** [parser/parse.h] Error 2\nmake[2]: Leaving directory /home/postgres/pgsql/src/backend'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory /home/postgres/pgsql/src'\nmake: *** [all] Error 2 \n\nAnd so on. \n\nWhat's going on?\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Fri, 15 Sep 2000 14:08:15 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Cannot compile" }, { "msg_contents": "It might be that `make depend' is broken in one way or another. What you\nwant to do is rm `find . -name depend` and then ./configure\n--enable-depend. Or you could try to rerun make depend then. I have a\nfeeling what is causing this but it's too weird to explain but I'll try to\nlook at it. ;-)\n\nMichael Meskes writes:\n\n> I just did a cvsup to get up-to-date again after I hadn't found time to work\n> on ecpg for some months and found out that I cannot even compile anymore:\n> \n> make[3]: Entering directory /home/postgres/pgsql/src/backend/parser'\n> gcc -MM -I../../../src/include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error *.c >depend\n> analyze.c:22: parser/parse.h: No such file or directory\n> analyze.c:30: utils/fmgroids.h: No such file or directory\n> keywords.c:22: parser/parse.h: No such file or directory\n> parse_clause.c:22: parser/parse.h: No such file or directory\n> parse_expr.c:24: parser/parse.h: No such file or directory\n> parse_func.c:32: utils/fmgroids.h: No such file or directory\n> parse_oper.c:25: utils/fmgroids.h: No such file or directory\n> parser.c:22: parser/parse.h: No such file or directory\n> scan.l:30: parser/parse.h: No such file or directory\n> make[3]: *** [depend] Error 1\n> make[3]: Leaving directory /home/postgres/pgsql/src/backend/parser'\n> make[2]: *** [parser/parse.h] Error 2\n> make[2]: Leaving directory /home/postgres/pgsql/src/backend'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory /home/postgres/pgsql/src'\n> make: *** [all] Error 2 \n> \n> And so on. 
\n> \n> What's going on?\n> \n> Michael\n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 17 Sep 2000 12:29:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot compile" }, { "msg_contents": "On Fri, Sep 15, 2000 at 10:24:30AM -0400, Tom Lane wrote:\n> I'd suggest a full \"make distclean\" and reconfigure. Looks like you\n\nI did. And I did it more than just once but to no avail.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 17 Sep 2000 19:00:27 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot compile" }, { "msg_contents": "\n\tWhen you use 'make distclean' make is under control of the\nMakefile and does exactly what the code writer wants, which usually is to\nrm all .o files.\n\n\nOn Mon, 18 Sep 2000, Michael Meskes wrote:\n\n> On Sun, Sep 17, 2000 at 12:29:16PM +0200, Peter Eisentraut wrote:\n> > It might be that `make depend' is broken in one way or another. What you\n> > want to do is rm `find . -name depend` and then ./configure\n> > --enable-depend. Or you could try to rerun make depend then. I have a\n> > feeling what is causing this but it's too weird to explain but I'll try to\n> > look at it. ;-)\n> \n> That was the solution. It's compiling right now. I never tried this because\n> I thought make distclean would remove the depend files as well.\n> \n> Anyway thanks a lot.\n> \n> Michael\n> -- \n> Michael Meskes\n> [email protected]\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n> \n\nYours Truly,\n\n \t - Karl F. Larsen, [email protected] (505) 524-3303 -\n\n", "msg_date": "Mon, 18 Sep 2000 05:32:19 -0600 (MDT)", "msg_from": "\"Karl F. 
Larsen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot compile" }, { "msg_contents": "On Sun, Sep 17, 2000 at 12:29:16PM +0200, Peter Eisentraut wrote:\n> It might be that `make depend' is broken in one way or another. What you\n> want to do is rm `find . -name depend` and then ./configure\n> --enable-depend. Or you could try to rerun make depend then. I have a\n> feeling what is causing this but it's too weird to explain but I'll try to\n> look at it. ;-)\n\nThat was the solution. It's compiling right now. I never tried this because\nI thought make distclean would remove the depend files as well.\n\nAnyway thanks a lot.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 18 Sep 2000 13:09:47 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot compile" } ]
[ { "msg_contents": "\nDoes anyone have the Sept issue of Linux Magazine? According to a\nnotification we just received, PostgreSQL got 4th place in the editor's\nchoice awards. I wanna know what was in first, second and third! \nAnyone know?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 15 Sep 2000 14:27:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Winner Notification - Linux Magazine Editor's Choice Awards (fwd)" } ]
[ { "msg_contents": "1st - MySQL\n2nd - Oracle 8i\n3rd - Informix Dynamic Server.2000\n\n\n-----Original Message-----\nFrom: Vince Vielhaber <[email protected]>\nTo: [email protected] <[email protected]>\nDate: Friday, September 15, 2000 1:31 PM\nSubject: [HACKERS] Winner Notification - Linux Magazine Editor's Choice\nAwards (fwd)\n\n\n>\n>Does anyone have the Sept issue of Linux Magazine? According to a\n>notification we just received, PostgreSQL got 4th place in the editor's\n>choice awards. I wanna know what was in first, second and third!\n>Anyone know?\n>\n>Vince.\n>--\n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>==========================================================================\n>\n>\n>\n>\n\n", "msg_date": "Fri, 15 Sep 2000 13:30:56 -0500", "msg_from": "\"Len Morgan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Winner Notification - Linux Magazine Editor's Choice Awards (fwd)" }, { "msg_contents": "\n*rofl* now that wasn't rigged or nothing ... I could see us second to\nMySQL, or third to Oracle/Informix, depending on how it was evaluated, but\nfourth to all three?\n\nOn Fri, 15 Sep 2000, Len Morgan wrote:\n\n> 1st - MySQL\n> 2nd - Oracle 8i\n> 3rd - Informix Dynamic Server.2000\n> \n> \n> -----Original Message-----\n> From: Vince Vielhaber <[email protected]>\n> To: [email protected] <[email protected]>\n> Date: Friday, September 15, 2000 1:31 PM\n> Subject: [HACKERS] Winner Notification - Linux Magazine Editor's Choice\n> Awards (fwd)\n> \n> \n> >\n> >Does anyone have the Sept issue of Linux Magazine? According to a\n> >notification we just received, PostgreSQL got 4th place in the editor's\n> >choice awards. 
I wanna know what was in first, second and third!\n> >Anyone know?\n> >\n> >Vince.\n> >--\n> >==========================================================================\n> >Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> >==========================================================================\n> >\n> >\n> >\n> >\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 15 Sep 2000 16:20:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Winner Notification - Linux Magazine Editor's Choice Awards (fwd)" }, { "msg_contents": "The Hermit Hacker wrote:\n>\n> *rofl* now that wasn't rigged or nothing ... I could see us second to\n> MySQL, or third to Oracle/Informix, depending on how it was evaluated, but\n> fourth to all three?\n\n Hey, look at the headline:\n\n Linux Magazine's Editor's Choice Awards\n by The Editors of Linux Magazine\n\n The Linux market is exploding with all kinds of\n great new (and old) products. We decided it was\n time to round up our editors and pick our\n favorites. Here are the results.\n\n (From www.linux-mag.com)\n\n Does that sound like any comparision was done? Not to me. 
To\n me it sounds more that The Editors of Linux Magazine just\n told what tools they prefer for themself.\n\n\nJan\n\n>\n> On Fri, 15 Sep 2000, Len Morgan wrote:\n>\n> > 1st - MySQL\n> > 2nd - Oracle 8i\n> > 3rd - Informix Dynamic Server.2000\n> >\n> >\n> > -----Original Message-----\n> > From: Vince Vielhaber <[email protected]>\n> > To: [email protected] <[email protected]>\n> > Date: Friday, September 15, 2000 1:31 PM\n> > Subject: [HACKERS] Winner Notification - Linux Magazine Editor's Choice\n> > Awards (fwd)\n> >\n> >\n> > >\n> > >Does anyone have the Sept issue of Linux Magazine? According to a\n> > >notification we just received, PostgreSQL got 4th place in the editor's\n> > >choice awards. I wanna know what was in first, second and third!\n> > >Anyone know?\n> > >\n> > >Vince.\n> > >--\n> > >==========================================================================\n> > >Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> > > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> > > Online Campground Directory http://www.camping-usa.com\n> > > Online Giftshop Superstore http://www.cloudninegifts.com\n> > >==========================================================================\n> > >\n> > >\n> > >\n> > >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 15 Sep 2000 17:57:28 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Winner Notification - Linux Magazine Editor's Choice Awards\n (fwd))" }, { "msg_contents": "On Fri, 15 Sep 2000, Jan Wieck wrote:\n\n> The Hermit Hacker wrote:\n> >\n> > *rofl* now that wasn't rigged or nothing ... I could see us second to\n> > MySQL, or third to Oracle/Informix, depending on how it was evaluated, but\n> > fourth to all three?\n> \n> Hey, look at the headline:\n> \n> Linux Magazine's Editor's Choice Awards\n> by The Editors of Linux Magazine\n> \n> The Linux market is exploding with all kinds of\n> great new (and old) products. We decided it was\n> time to round up our editors and pick our\n> favorites. Here are the results.\n> \n> (From www.linux-mag.com)\n> \n> Does that sound like any comparision was done? Not to me. To\n> me it sounds more that The Editors of Linux Magazine just\n> told what tools they prefer for themself.\n\nTrue ... seems kind of a stupid way to award something though ... like,\nbased on what merits? *shrug* (<- rhetorical question there, eh? *grin*)\n\n\n\n> >\n> > > 1st - MySQL\n> > > 2nd - Oracle 8i\n> > > 3rd - Informix Dynamic Server.2000\n> > >\n> > >\n> > > -----Original Message-----\n> > > From: Vince Vielhaber <[email protected]>\n> > > To: [email protected] <[email protected]>\n> > > Date: Friday, September 15, 2000 1:31 PM\n> > > Subject: [HACKERS] Winner Notification - Linux Magazine Editor's Choice\n> > > Awards (fwd)\n> > >\n> > >\n> > > >\n> > > >Does anyone have the Sept issue of Linux Magazine? According to a\n> > > >notification we just received, PostgreSQL got 4th place in the editor's\n> > > >choice awards. 
I wanna know what was in first, second and third!\n> > > >Anyone know?\n> > > >\n> > > >Vince.\n> > > >--\n> > > >==========================================================================\n> > > >Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> > > > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> > > > Online Campground Directory http://www.camping-usa.com\n> > > > Online Giftshop Superstore http://www.cloudninegifts.com\n> > > >==========================================================================\n> > > >\n> > > >\n> > > >\n> > > >\n> > >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> >\n> \n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 15 Sep 2000 21:29:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Winner Notification - Linux Magazine Editor's Choice Awards\n (fwd))" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Hey, look at the headline:\n> \n> Linux Magazine's Editor's Choice Awards\n> by The Editors of Linux Magazine\n> \n> The Linux market is exploding with all kinds of\n> great new (and old) products. We decided it was\n> time to round up our editors and pick our\n> favorites. Here are the results.\n> \n> (From www.linux-mag.com)\n> \n> Does that sound like any comparision was done? Not to me. 
To\n> me it sounds more that The Editors of Linux Magazine just\n> told what tools they prefer for themself.\n\nTo me, the most incredible thing is that they put Oracle\nafter MySQL! The choice they made is tough on us, but it\nmust have brought hell to Oracle evangelists' lives!\n", "msg_date": "Sat, 16 Sep 2000 14:16:42 +0200", "msg_from": "Fabrice Scemama <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Winner Notification - Linux Magazine Editor's Choice Awards(fwd))" } ]
[ { "msg_contents": "I finished revising the LIKE operators back into an index-optimizable\nform. But I notice there is some non-multibyte-aware code that needs\nto be fixed, specifically the pattern analysis routines in\nsrc/backend/utils/adt/selfuncs.c:\n\tlike_fixed_prefix\n\tregex_fixed_prefix\n\tlike_selectivity\n\tregex_selectivity_sub\n\tregex_selectivity\nI don't have time to work on this now, but perhaps someone else would\nlike to fix these...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Sep 2000 16:06:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE is still short some MULTIBYTE code" }, { "msg_contents": "Hasn't the [HACKERS] / [GENERAL] crossover thing been resolved yet? I'm\nnot subscribed to hackers.\n\nTom Lane wrote:\n> \n> I finished revising the LIKE operators back into an index-optimizable\n> form. But I notice there is some non-multibyte-aware code that needs\n> to be fixed, specifically the pattern analysis routines in\n> src/backend/utils/adt/selfuncs.c:\n> like_fixed_prefix\n> regex_fixed_prefix\n> like_selectivity\n> regex_selectivity_sub\n> regex_selectivity\n> I don't have time to work on this now, but perhaps someone else would\n> like to fix these...\n> \n> regards, tom lane\n", "msg_date": "Fri, 15 Sep 2000 16:26:38 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE is still short some MULTIBYTE code" }, { "msg_contents": "> I finished revising the LIKE operators back into an index-optimizable\n> form. But I notice there is some non-multibyte-aware code that needs\n> to be fixed, specifically the pattern analysis routines in\n> src/backend/utils/adt/selfuncs.c:\n> \tlike_fixed_prefix\n> \tregex_fixed_prefix\n> \tlike_selectivity\n> \tregex_selectivity_sub\n> \tregex_selectivity\n> I don't have time to work on this now, but perhaps someone else would\n> like to fix these...\n\nI have taken a glance on them. 
Seems no special multibyte hack is\nnecessary for them. Also, though limited to small scale data, SELECT\nusing index on multibyte data seems to be working...\n--\nTatsuo Ishii\n", "msg_date": "Sat, 16 Sep 2000 10:36:55 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE is still short some MULTIBYTE code" } ]
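For reference, the prefix-extraction step in routines like like_fixed_prefix() boils down to something like the sketch below (a simplified illustration with an invented name and signature, not the backend's actual code). Byte-wise scanning of this kind stays correct for "ASCII-safe" multibyte encodings such as EUC or UTF-8, where the bytes for `%`, `_` and `\` never occur inside a multibyte sequence — consistent with Tatsuo's observation that no special multibyte hack seems necessary. An encoding whose trailing bytes overlap ASCII (e.g. SJIS, where 0x5C can be a trailing byte) would still need true multibyte-aware scanning:

```c
/* Simplified sketch (invented name/signature, not the backend code):
 * copy the leading literal part of a LIKE pattern into buf, stopping
 * at the first unescaped wildcard.  A backslash escapes the next byte.
 * Returns the length of the fixed prefix. */
static int like_prefix(const char *pattern, char *buf, int buflen)
{
    int n = 0;

    while (*pattern && n < buflen - 1)
    {
        if (*pattern == '%' || *pattern == '_')
            break;              /* wildcard ends the fixed prefix */
        if (*pattern == '\\')
        {
            pattern++;          /* escaped byte is taken literally */
            if (*pattern == '\0')
                break;
        }
        buf[n++] = *pattern++;
    }
    buf[n] = '\0';
    return n;
}
```

For a pattern like `abc%def` this yields the prefix `abc`, which the planner can turn into an indexable range condition.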
[ { "msg_contents": "Hi,\n\nwhile I'm doing more accurate test I just want to ask if\nsomebody test locale in 7.0.2 under FreeBSD ?\nthe point is that I usually compile postgres with \n--enable-locale --enable-multibyte and never had a problem \nwith locale. Today I decided to use only --enable-locale\nand found that LC_CTYPE support seems broken in 7.0.2 \nunder FreeBSD 4.01. release. \nI used folowing select: select c_name from city where c_name ~* 'О©╫О©╫';\ninteresting that there are no problem under Linux !\nI used the same compiler gcc version 2.95.2 19991024 (release)\non both systems. One hypothesis is that gcc 2.95.2 under Linux\ntreats 'char' as 'unsigned char' while on FreeBSD there is no such \ndefault. This could be demonstrated using test-ctype.c from\nsrc/test/locale directory. In current version there is\n\n.......\nvoid\ndescribe_char(int c)\n{\n char cp = c,\n up = toupper(c),\n lo = tolower(c);\n...........\n\nwhich works as expected on Linux and broken under FreeBSD (gcc 2.95.2)\nIt's clear that we must use 'unsigned char' instead of 'char'\nand corrected version runs ok on both systems. That's why I suspect\nthat gcc 2.95.2 has different default under FreeBSD which could\ncause problem with LC_CTYPE in 7.0.2 \nI didn't test current CVS under FreeBSD but probably will check it.\n\n\n\tRegards,\n\n\t\tOleg\nPS.\n\n\tforget to mention that collation works fine \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 16 Sep 2000 13:54:49 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" 
}, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> It's clear that we must use 'unsigned char' instead of 'char'\n> and corrected version runs ok on both systems. That's why I suspect\n> that gcc 2.95.2 has different default under FreeBSD which could\n> cause problem with LC_CTYPE in 7.0.2 \n> I didn't test current CVS under FreeBSD but probably will check it.\n\nI think Peter recently went around and inserted explicit casts to\nfix this problem. Please do see if it's fixed in CVS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Sep 2000 11:23:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "On Sat, 16 Sep 2000, Tom Lane wrote:\n\n> Date: Sat, 16 Sep 2000 11:23:33 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? \n> \n> Oleg Bartunov <[email protected]> writes:\n> > It's clear that we must use 'unsigned char' instead of 'char'\n> > and corrected version runs ok on both systems. That's why I suspect\n> > that gcc 2.95.2 has different default under FreeBSD which could\n> > cause problem with LC_CTYPE in 7.0.2 \n> > I didn't test current CVS under FreeBSD but probably will check it.\n> \n> I think Peter recently went around and inserted explicit casts to\n> fix this problem. Please do see if it's fixed in CVS.\n\nok. will check this. I've recompile 7.0.2 on freebsd with -funsigned-char\nand the problem has gone away. This prove my suggestion. I also \nchecked 6.5 and it has the same probelm on FreeBSD. Also,\nthis makes clear many complains about broken locale under FreeBSD\nI got from people. 
\nHmm, current cvs has the same problem :-(\n\n\tRegards,\n\t\tOleg\n\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sat, 16 Sep 2000 18:43:43 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n>>>> It's clear that we must use 'unsigned char' instead of 'char'\n>>>> and corrected version runs ok on both systems. That's why I suspect\n>>>> that gcc 2.95.2 has different default under FreeBSD which could\n>>>> cause problem with LC_CTYPE in 7.0.2 \n>> \n>> I think Peter recently went around and inserted explicit casts to\n>> fix this problem. Please do see if it's fixed in CVS.\n\n> Hmm, current cvs has the same problem :-(\n\nNow that I look at it, what Peter was doing was just trying to eliminate\ncompiler warnings on some platform or other, and he made changes like\nthese (this example is interfaces/ecpg/preproc/pgc.l):\n\n@@ -491,7 +491,7 @@\n /* this should leave the last byte set to '\\0' */\n strncpy(lower_text, yytext, NAMEDATALEN-1);\n for(i = 0; lower_text[i]; i++)\n- if (isascii((unsigned char)lower_text[i]) && isupper(lower_text[i]))\n+ if (isascii((int)lower_text[i]) && isupper((int) lower_text[i]))\n lower_text[i] = tolower(lower_text[i]);\n \n if (i >= NAMEDATALEN)\n@@ -682,7 +682,7 @@\n \n /* skip the \";\" and trailing whitespace. 
Note that yytext contains\n at least one non-space character plus the \";\" */\n- for ( i = strlen(yytext)-2; i > 0 && isspace(yytext[i]); i-- ) {}\n+ for ( i = strlen(yytext)-2; i > 0 && isspace((int) yytext[i]); i-- ) {}\n yytext[i+1] = '\\0';\n \n for ( defptr = defines; defptr != NULL &&\n@@ -754,7 +754,7 @@\n \n /* skip the \";\" and trailing whitespace. Note that yytext contains\n at least one non-space character plus the \";\" */\n- for ( i = strlen(yytext)-2; i > 0 && isspace(yytext[i]); i-- ) {}\n+ for ( i = strlen(yytext)-2; i > 0 && isspace((int) yytext[i]); i-- ) {}\n yytext[i+1] = '\\0';\n \n yyin = NULL;\n\nPeter, I suppose what you were trying to clean up is a \"char used as\narray subscript\" kind of warning? These changes wouldn't help Oleg's\nproblem, in fact the first change in this file would have broken code\nthat was previously not broken for him.\n\nI think that the correct coding convention is to be careful to call the\n<ctype.h> macros only with values that are either declared as or casted\nto \"unsigned char\". I would like to think that your compiler will not\ncomplain about\n if (isascii((unsigned char)lower_text[i]) ...\nIf it does we'd have to write something as ugly as\n if (isascii((int)(unsigned char)lower_text[i]) ...\nwhich I can see no value in from a portability standpoint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Sep 2000 13:21:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "INteresting, \nthat I tried to find out what cause the problem just compiling\nbackend/utils/adt/ with -funsigned-char option but this won't help.\nI thought (as in earlier time) locale-aware code are in this \ndirectory. 
The problem already exists in 6.5 release,\nso I'm not sure recent Peter's changes could cause the problem \n\n\tRegards,\n\t\tOleg\n\nOn Sat, 16 Sep 2000, Tom Lane wrote:\n\n> Date: Sat, 16 Sep 2000 13:21:20 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>, Peter Eisentraut <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? \n> \n> Oleg Bartunov <[email protected]> writes:\n> >>>> It's clear that we must use 'unsigned char' instead of 'char'\n> >>>> and corrected version runs ok on both systems. That's why I suspect\n> >>>> that gcc 2.95.2 has different default under FreeBSD which could\n> >>>> cause problem with LC_CTYPE in 7.0.2 \n> >> \n> >> I think Peter recently went around and inserted explicit casts to\n> >> fix this problem. Please do see if it's fixed in CVS.\n> \n> > Hmm, current cvs has the same problem :-(\n> \n> Now that I look at it, what Peter was doing was just trying to eliminate\n> compiler warnings on some platform or other, and he made changes like\n> these (this example is interfaces/ecpg/preproc/pgc.l):\n> \n> @@ -491,7 +491,7 @@\n> /* this should leave the last byte set to '\\0' */\n> strncpy(lower_text, yytext, NAMEDATALEN-1);\n> for(i = 0; lower_text[i]; i++)\n> - if (isascii((unsigned char)lower_text[i]) && isupper(lower_text[i]))\n> + if (isascii((int)lower_text[i]) && isupper((int) lower_text[i]))\n> lower_text[i] = tolower(lower_text[i]);\n> \n> if (i >= NAMEDATALEN)\n> @@ -682,7 +682,7 @@\n> \n> /* skip the \";\" and trailing whitespace. Note that yytext contains\n> at least one non-space character plus the \";\" */\n> - for ( i = strlen(yytext)-2; i > 0 && isspace(yytext[i]); i-- ) {}\n> + for ( i = strlen(yytext)-2; i > 0 && isspace((int) yytext[i]); i-- ) {}\n> yytext[i+1] = '\\0';\n> \n> for ( defptr = defines; defptr != NULL &&\n> @@ -754,7 +754,7 @@\n> \n> /* skip the \";\" and trailing whitespace. 
Note that yytext contains\n> at least one non-space character plus the \";\" */\n> - for ( i = strlen(yytext)-2; i > 0 && isspace(yytext[i]); i-- ) {}\n> + for ( i = strlen(yytext)-2; i > 0 && isspace((int) yytext[i]); i-- ) {}\n> yytext[i+1] = '\\0';\n> \n> yyin = NULL;\n> \n> Peter, I suppose what you were trying to clean up is a \"char used as\n> array subscript\" kind of warning? These changes wouldn't help Oleg's\n> problem, in fact the first change in this file would have broken code\n> that was previously not broken for him.\n> \n> I think that the correct coding convention is to be careful to call the\n> <ctype.h> macros only with values that are either declared as or casted\n> to \"unsigned char\". I would like to think that your compiler will not\n> complain about\n> if (isascii((unsigned char)lower_text[i]) ...\n> If it does we'd have to write something as ugly as\n> if (isascii((int)(unsigned char)lower_text[i]) ...\n> which I can see no value in from a portability standpoint.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 16 Sep 2000 20:44:51 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "> INteresting, \n> that I tried to find out what cause the problem just compiling\n> backend/utils/adt/ with -funsigned-char option but this won't help.\n> I thought (as in earlier time) locale-aware code are in this \n> directory. 
The problem already exists in 6.5 release,\n> so I'm not sure recent Peter's changes could cause the problem \n\nYou might want to compile backend/regex also with -funsigned-char\noption.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 17 Sep 2000 11:05:42 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support\n\t(FreeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Tom Lane writes:\n\n> - if (isascii((unsigned char)lower_text[i]) && isupper(lower_text[i]))\n> + if (isascii((int)lower_text[i]) && isupper((int) lower_text[i]))\n\n> Peter, I suppose what you were trying to clean up is a \"char used as\n> array subscript\" kind of warning?\n\nYep.\n\n> I would like to think that your compiler will not complain about\n> if (isascii((unsigned char)lower_text[i]) ...\n> If it does we'd have to write something as ugly as\n> if (isascii((int)(unsigned char)lower_text[i]) ...\n> which I can see no value in from a portability standpoint.\n\nI think that the problem might rather be that lower_text (and various\nother arrays) are not declared as unsigned char in the first place. That\nwould also explain why -funsigned-chars fixes it. Because calling\ntoupper() etc. with a signed char argument is in violation of the spec.\n\n(Hmm, template/aix contains this: CFLAGS='-qchars=signed ...'. That can't\nbe good.)\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 17 Sep 2000 15:05:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support\n\t(FreeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I think that the problem might rather be that lower_text (and various\n> other arrays) are not declared as unsigned char in the first place. That\n> would also explain why -funsigned-chars fixes it. Because calling\n> toupper() etc. 
with a signed char argument is in violation of the spec.\n\nWell, we could fix it either by propagating use of \"unsigned char\" all\nover the place, or by casting the arguments given to ctype macros.\nThe former would be a lot more invasive because it would propagate to\nroutines that don't actually call any ctype macros (since they'd have\nto conform to prototypes, struct definitions, etc). So I'm inclined to\ngo with the latter. A quick search-and-replace scan ought to do it.\n\nAlso, on machines where the ctype macros actually are implemented as\narray lookups, gcc -Wall should warn about any calls we miss, so as\nlong as someone is paying attention on such a platform, we don't have\nto worry about the problem sneaking back in.\n\n> (Hmm, template/aix contains this: CFLAGS='-qchars=signed ...'. That can't\n> be good.)\n\nProbably Andreas put that in --- maybe he still remembers why. But it\nshouldn't matter. We need to be able to run on platforms where char is\nsigned and there's no handy \"-funsigned-chars\" compiler option.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Sep 2000 13:48:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "Well,\n\nwith the help of Tatsuo I found the problem is in backend/regex/regcomp.c\nI'll look for more details and probably could make a fix.\nquick question: is there in sources locale-aware strncmp function\nor I need to write myself ?\nAs for the compiler option I think we should'nt use any \n\"-funsigned-chars\" like options !\n\n\tRegards,\n\n\t\tOleg\nOn Sun, 17 Sep 2000, Tom Lane wrote:\n\n> Date: Sun, 17 Sep 2000 13:48:37 -0400\n> From: Tom Lane <[email protected]>\n> To: Peter Eisentraut <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? 
\n> \n> Peter Eisentraut <[email protected]> writes:\n> > I think that the problem might rather be that lower_text (and various\n> > other arrays) are not declared as unsigned char in the first place. That\n> > would also explain why -funsigned-chars fixes it. Because calling\n> > toupper() etc. with a signed char argument is in violation of the spec.\n> \n> Well, we could fix it either by propagating use of \"unsigned char\" all\n> over the place, or by casting the arguments given to ctype macros.\n> The former would be a lot more invasive because it would propagate to\n> routines that don't actually call any ctype macros (since they'd have\n> to conform to prototypes, struct definitions, etc). So I'm inclined to\n> go with the latter. A quick search-and-replace scan ought to do it.\n> \n> Also, on machines where the ctype macros actually are implemented as\n> array lookups, gcc -Wall should warn about any calls we miss, so as\n> long as someone is paying attention on such a platform, we don't have\n> to worry about the problem sneaking back in.\n> \n> > (Hmm, template/aix contains this: CFLAGS='-qchars=signed ...'. That can't\n> > be good.)\n> \n> Probably Andreas put that in --- maybe he still remembers why. But it\n> shouldn't matter. We need to be able to run on platforms where char is\n> signed and there's no handy \"-funsigned-chars\" compiler option.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sun, 17 Sep 2000 21:56:09 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" 
}, { "msg_contents": "On Sun, 17 Sep 2000, Tatsuo Ishii wrote:\n\n> Date: Sun, 17 Sep 2000 11:05:42 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: [email protected]\n> Cc: [email protected], [email protected], [email protected]\n> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? \n> \n> > INteresting, \n> > that I tried to find out what cause the problem just compiling\n> > backend/utils/adt/ with -funsigned-char option but this won't help.\n> > I thought (as in earlier time) locale-aware code are in this \n> > directory. The problem already exists in 6.5 release,\n> > so I'm not sure recent Peter's changes could cause the problem \n> \n> You might want to compile backend/regex also with -funsigned-char\n> option.\n\nThanks, \n\nbackend/regex/regcomp.c cause the problem. I compiled only this file\nwith -funsigned-char option and the problem gone away !\nAlso, I know that in case of --enable-multibyte I dont' have any problem,\nso in principle it's enough to look into \n#ifdef MULTIBYTE sections in backend/regex/regcomp.c\nand made according\n#ifdef USE_LOCALE sections\n\nTatsuo, am I right and what critical sections in backend/regex/regcomp.c ?\n\n\tRegards,\n\n\t\tOleg\n\n\n\n\n> --\n> Tatsuo Ishii\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 17 Sep 2000 22:52:12 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" }, { "msg_contents": "> backend/regex/regcomp.c cause the problem. 
I compiled only this file\n> with -funsigned-char option and the problem gone away !\n> Also, I know that in case of --enable-multibyte I dont' have any problem,\n> so in principle it's enough to look into \n> #ifdef MULTIBYTE sections in backend/regex/regcomp.c\n> and made according\n> #ifdef USE_LOCALE sections\n> \n> Tatsuo, am I right and what critical sections in backend/regex/regcomp.c ?\n\nBesides the toupper etc. vs. signed char issues, there is an upper\nlimit of char values defined in include/regex/regex2.h. For none MB\ninstalltions, it is defined as:\n\n#define OUT\t\t (CHAR_MAX+1)\t/* a non-character value */\n\nwhere CHAR_MAX gives 255 for \"char = unsigned char\" platforms. This is\ngood. However it gives 127 for \"char = signed char\" platforms. So if\nyou have some none ascii letters greater than 128 on \"char = unsigned\nchar\" platforms, you will lose.\n\nChanging above to:\n\n#define OUT\t\t (UCHAR_MAX+1)\t/* a non-character value */\n\nmight help...\n--\nTatsuo Ishii\n", "msg_date": "Mon, 18 Sep 2000 10:00:57 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support\n\t(FreeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Tom Lane writes:\n\n> Well, we could fix it either by propagating use of \"unsigned char\" all\n> over the place, or by casting the arguments given to ctype macros.\n> The former would be a lot more invasive because it would propagate to\n> routines that don't actually call any ctype macros (since they'd have\n> to conform to prototypes, struct definitions, etc).\n\nI'm not married to either solution, I just opine that it is cleaner to use\n\"signed char\" and \"unsigned char\" explicitly when you depend on the\nsigned-ness. Otherwise you might just end up moving the problem elsewhere,\nnamely those structs and prototypes, etc.\n\n> > (Hmm, template/aix contains this: CFLAGS='-qchars=signed ...'. 
That can't\n> > be good.)\n> \n> Probably Andreas put that in --- maybe he still remembers why. But it\n> shouldn't matter. We need to be able to run on platforms where char is\n> signed and there's no handy \"-funsigned-chars\" compiler option.\n\nWhat I meant was that\n\n(a) according to Oleg's report, the source depends on char being unsigned\nin some places, so those places break on AIX, and\n\n(b) according to the above, the source apparently requires char to be\nsigned in some places, so it breaks when char is made unsigned.\n\n*That* can't be good.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 18 Sep 2000 10:56:13 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support\n\t(FreeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Oleg Bartunov <[email protected]> wrote a couple months ago:\n>>>> It's clear that we must use 'unsigned char' instead of 'char'\n>>>> and corrected version runs ok on both systems. That's why I suspect\n>>>> that gcc 2.95.2 has different default under FreeBSD which could\n>>>> cause problem with LC_CTYPE in 7.0.2 \n\n> ok. will check this. I've recompile 7.0.2 on freebsd with -funsigned-char\n> and the problem has gone away. This prove my suggestion. I also \n> checked 6.5 and it has the same probelm on FreeBSD. Also,\n> this makes clear many complains about broken locale under FreeBSD\n> I got from people. \n> Hmm, current cvs has the same problem :-(\n\nToday I inserted (unsigned char) casts into all the <ctype.h> function\ncalls I could find. This issue should be fixed as of current cvs.\nPlease try it again when you have time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Dec 2000 18:13:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" 
}, { "msg_contents": "On Sun, 3 Dec 2000, Tom Lane wrote:\n\n> Date: Sun, 03 Dec 2000 18:13:47 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? \n> \n> Oleg Bartunov <[email protected]> wrote a couple months ago:\n> >>>> It's clear that we must use 'unsigned char' instead of 'char'\n> >>>> and corrected version runs ok on both systems. That's why I suspect\n> >>>> that gcc 2.95.2 has different default under FreeBSD which could\n> >>>> cause problem with LC_CTYPE in 7.0.2 \n> \n> > ok. will check this. I've recompile 7.0.2 on freebsd with -funsigned-char\n> > and the problem has gone away. This prove my suggestion. I also \n> > checked 6.5 and it has the same probelm on FreeBSD. Also,\n> > this makes clear many complains about broken locale under FreeBSD\n> > I got from people. \n> > Hmm, current cvs has the same problem :-(\n> \n> Today I inserted (unsigned char) casts into all the <ctype.h> function\n> calls I could find. 
This issue should be fixed as of current cvs.\n> Please try it again when you have time.\n> \n\nJust tried on FreeBSD 3.4-STABLE, current cvs, gcc version 2.95.2 \n19991024 (release), ru-RU.KOI8-R locale, postgresql configured with\n--enable-locale, no gcc option like --unisgned-chars\nLooks like your changes did right job !\n\n\tregards,\n\n\t\tOleg\n\n\n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 4 Dec 2000 23:36:32 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken locale in 7.0.2 without multibyte support (FreeBSD\n\t4.1-RELEASE) ?" } ]
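The coding convention settled on in this thread — calling the <ctype.h> macros only with values cast to unsigned char — can be boiled down to a tiny example (hypothetical wrapper names, not the actual backend code). The ctype macros are defined only for EOF and for values representable as unsigned char, so on platforms where plain char is signed (gcc on FreeBSD/x86, for instance) a high-bit byte such as a KOI8-R letter becomes a negative int and the table lookup misbehaves:

```c
#include <ctype.h>

/* Hypothetical wrappers illustrating the convention: cast through
 * unsigned char so the argument stays in 0..255 regardless of whether
 * plain char is signed on this platform. */
static int safe_isupper(char c)
{
    return isupper((unsigned char) c);
}

static char safe_tolower(char c)
{
    return (char) tolower((unsigned char) c);
}
```

As Tom notes, on platforms where the ctype macros are implemented as array lookups, gcc -Wall will warn about any uncast call, which helps keep the problem from sneaking back in.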
[ { "msg_contents": "Hi,\n\nI am trying to use the qnx version of postgresql 7.0.0\n\nI have qnx 4.25 and TCP/IP\n\nI have compiled postgres using gcc\nI have installed it.\n\nthen I have started postgres with -D and -i options.\n\nThe only command that I can execute is initdb.\n\nWhen I execute any other command I have a SIGSEGV error.\n\nI don't understand why.\n\nCould someone help me?\nHave I to change the configuration of QNX kernel, TCP/IP or postgresql ?\n\nAttached is a log file with the error.\n\nThanks.\n\nRegards\n\nDREAMTECH\nMaurizio Cauci", "msg_date": "Sat, 16 Sep 2000 15:56:17 +0200", "msg_from": "\"Maurizio\" <[email protected]>", "msg_from_op": true, "msg_subject": "SIGSEGV in postgres 7.0.0 for QNX" }, { "msg_contents": "Maurizio writes:\n\n> The only command that I can execute is initdb.\n> \n> When I execute any other command I have a SIGSEGV error.\n\n> Attached is a log file with the error.\n\nI see no attachments to your post. You should look at the file\ndoc/FAQ_QNX4 if you haven't already. It seems that choosing a working\ncompiler is non-trivial on QNX. 
Also, configure with --enable-debug and\ngenerate a stack trace from the failing program, e.g.,\n\n$ /usr/local/pgsql/bin/psql\nSegmentation fault (core dumped)\n$ gdb /usr/local/pgsql/bin/psql core\n...\n(gdb) bt\n[ output here ]\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 18 Sep 2000 21:57:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV in postgres 7.0.0 for QNX" }, { "msg_contents": "Sorry for the delay but I can't find gdb.\n\nI already looked at the doc/FAQ_QNX4 file but I didn't find any help.\n\nAt the moment I'm trying to fix the problem, also with the QNX Italian\nreseller, but we can't understand if the problem is the QNX configuration or\nthe PGSQL QNX version.\n\nToday I also tried POSTGRES 7.0.2 but I obtained the same error.\n\nPlease could you give me any other ideas to fix the problem ?\n\nThanks.\nMaurizio Cauci\nwww.dreamtech-it.com\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <[email protected]>\nTo: \"Maurizio\" <[email protected]>\nCc: \"PostgreSQL Development\" <[email protected]>\nSent: Monday, September 18, 2000 9:57 PM\nSubject: Re: [HACKERS] SIGSEGV in postgres 7.0.0 for QNX\n\n\n> Maurizio writes:\n>\n> > The only command that I can execute is initdb.\n> >\n> > When I execute any other command I have a SIGSEGV error.\n>\n> > Attached is a log file with the error.\n>\n> I see no attachments to your post. You should look at the file\n> doc/FAQ_QNX4 if you haven't already. It seems that choosing a working\n> compiler is non-trivial on QNX. 
Also, configure with --enable-debug and\n> generate a stack trace from the failing program, e.g.,\n>\n> $ /usr/local/pgsql/bin/psql\n> Segmentation fault (core dumped)\n> $ gdb /usr/local/pgsql/bin/psql core\n> ...\n> (gdb) bt\n> [ output here ]\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n\n", "msg_date": "Sat, 30 Sep 2000 11:02:34 +0200", "msg_from": "\"Maurizio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SIGSEGV in postgres 7.0.0 for QNX" } ]
[ { "msg_contents": "The named directory seems to be the fossil of a prehistoric regression\ntest, last updated 1996. It seems to be completely outdated; is there any\nneed to keep this?\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 16 Sep 2000 19:30:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "src/test/suite dead?" } ]
[ { "msg_contents": "Postgres has an 'ascii' function that converts\ncharacters to ascii values, but it appears to be a\none way street. I can't find a way to convert ascii\nvalues to characters, like 'chr' in Oracle. Anyone\nknow how to do this? \n\n-Alex\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Mail - Free email you can access from anywhere!\nhttp://mail.yahoo.com/\n", "msg_date": "Sat, 16 Sep 2000 19:11:17 -0700 (PDT)", "msg_from": "Alex Sokoloff <[email protected]>", "msg_from_op": true, "msg_subject": "ascii to character conversion in postgres" }, { "msg_contents": "\nOn Sat, 16 Sep 2000, Alex Sokoloff wrote:\n\n> Postgres has an 'ascii' function that converts\n> characters to ascii values, but it appears to be a\n> one way street. I can't find a way to convert ascii\n> values to characters, like 'chr' in Oracle. Anyone\n> know how to do this? \n\n Interesting for me, I'll try to explore (or write) it next week :-)\n\n By the way, Oracle has more interesting functions that can \nseason DB users' lives, and I mean that for porting from rich\nOracle to great PostgreSQL we need them....\n\n\t\t\t\tKarel \n\n", "msg_date": "Sun, 17 Sep 2000 14:27:38 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres" }, { "msg_contents": ">> Postgres has an 'ascii' function that converts\n>> characters to ascii values, but it appears to be a\n>> one way street. I can't find a way to convert ascii\n>> values to characters, like 'chr' in Oracle. Anyone\n>> know how to do this? \n\nichar(). 
Since that's part of the \"oracle_compatibility\" file,\nI'd assumed the function name spelling was the same as Oracle's.\nNot so?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Sep 2000 14:18:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "\nOn Sun, 17 Sep 2000, Tom Lane wrote:\n\n> >> Postgres has an 'ascii' function that converts\n> >> characters to ascii values, but it appears to be a\n> >> one way street. I can't find a way to convert ascii\n> >> values to characters, like 'chr' in Oracle. Anyone\n> >> know how to do this? \n> \n> ichar(). Since that's part of the \"oracle_compatibility\" file,\n> I'd assumed the function name spelling was the same as Oracle's.\n> Not so?\n\n Not documented (from oracle_compat.c) in PG documentation:\n\n\tbtrim()\n\tascii()\n\tichar()\n\trepeat()\n\n and about ichar() there is nothing in the Oracle documentation; it knows chr() \nonly...\n\nDirectly rename it, or add an \"alias\" entry to pg_proc? 
\n\nThe alias would only be useful to people who had been using it as\n\"ichar()\" --- which is not many people, since it's undocumented ;-)\nFurthermore, now that I look, it looks like ichar() was new in\ncontrib/odbc in 7.0 and has only recently been moved into the main\ncode.\n\nI vote for just renaming it to chr(). Any objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 11:21:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "On Mon, 18 Sep 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > Not documented (from oracle_compat.c) in PG documentation:\n> > \tbtrim()\n> > \tascii()\n> > \tichar()\n> > \trepeat()\n> > and about ichar() is nothing in Oracle documentation, it's knows chr() \n> > only...\n> \n> Sounds to me like calling it ichar() was an error, then. Should be chr().\n> \n> > Directly rename it, or add \"alias\" entry to the pg_proc? \n> \n> The alias would only be useful to people who had been using it as\n> \"ichar()\" --- which is not many people, since it's undocumented ;-)\n> Furthermore, now that I look, it looks like ichar() was new in\n> contrib/odbc in 7.0 and has only recently been moved into the main\n> code.\n> \n> I vote for just renaming it to chr(). Any objections?\n\nfirst thing off the top of my head ... was there a reason why it was added\nto contrib/odbc? ignoring the \"oracle documentation\", is it something\nthat is/was needed for ODBC?\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 18 Sep 2000 12:41:06 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> I vote for just renaming it to chr(). Any objections?\n\n> first thing off the top of my head ... was there a reason why it was added\n> to contrib/odbc? ignoring the \"oracle documentation\", is it something\n> that is/was needed for ODBC?\n\nNow that I look, it seems ODBC specifies the function as \"char()\",\nwhich means contrib/odbc is wrong on that score too :-(\n\nNew proposal: forget ichar(), give the function two entries chr() and\nchar().\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 12:20:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "On Mon, 18 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> I vote for just renaming it to chr(). Any objections?\n> \n> > first thing off the top of my head ... was there a reason why it was added\n> > to contrib/odbc? ignoring the \"oracle documentation\", is it something\n> > that is/was needed for ODBC?\n> \n> Now that I look, it seems ODBC specifies the function as \"char()\",\n> which means contrib/odbc is wrong on that score too :-(\n> \n> New proposal: forget ichar(), give the function two entries chr() and\n> char().\n\nsounds good to me ... chr() == char(), I take it? 
is there a reason for\nhaving both vs just changing char() to chr() in the odbc stuff?\n\n\n", "msg_date": "Mon, 18 Sep 2000 13:36:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> sounds good to me ... chr() == char(), I take it? is there a reason for\n> having both vs just changing char() to chr() in the odbc stuff?\n\nWe don't control the ODBC specification ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 13:38:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "On Mon, 18 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > sounds good to me ... chr() == char(), I take it? is there a reason for\n> > having both vs just changing char() to chr() in the odbc stuff?\n> \n> We don't control the ODBC specification ...\n\nokay, granted, but, other than ODBC, do we need the char() type? could it\nnot be an internal translation in the ODBC driver itself?\n\n\n", "msg_date": "Mon, 18 Sep 2000 14:48:24 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "Tom Lane writes:\n\n> ichar(). Since that's part of the \"oracle_compatibility\" file,\n> I'd assumed the function name spelling was the same as Oracle's.\n> Not so?\n\nichar() is for ODBC compliance. chr() could probably be an alias to it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 18 Sep 2000 21:57:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres" }, { "msg_contents": "I wrote:\n\n> ichar() is for ODBC compliance. 
chr() could probably be an alias to it.\n\nIgnore that. Probably \"char()\" had some parsing or overloading problems.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 18 Sep 2000 22:05:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres" }, { "msg_contents": "\n> New proposal: forget ichar(), give the function two entries chr() and\n> char().\n\n OK, I will send a patch for this and send documentation for all \noracle_compat.c routines...\n\n I don't want to make changes to contrib/odbc, because it's outside\nmy area... but I have a question: why in contrib/odbc/odbc.c are there totally the\nsame functions as in oracle_compat.c (like ascii(), ichar(), repeat())? \n\nIs there anything specific for ODBC?\n\n\t\t\t\t\tKarel \n\n\n", "msg_date": "Tue, 19 Sep 2000 09:29:31 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "odbc (was: Re: ascii to character conversion in postgres)" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> I don't want to make changes to contrib/odbc, because it's outside\n> my area... but I have a question: why in contrib/odbc/odbc.c are there totally the\n> same functions as in oracle_compat.c (like ascii(), ichar(), repeat())? \n\ncontrib/odbc was a quick hack just before 7.0 release; we had already\nfrozen the system catalogs for 7.0, and didn't want to force another\ninitdb for beta testers. 
It's not supposed to survive into 7.1 --- most\nor all of what's in there should be, or perhaps already has been, merged\ninto the main code.\n\nThomas did the work on that originally, and might remember more about\nwhether any of the ODBC compatibility functions ought *not* go into\nthe main tree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 11:26:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odbc (was: Re: ascii to character conversion in postgres) " }, { "msg_contents": "Karel Zak writes:\n\n> I don't want make some changes to contrib/odbc, because it's out of\n> me... but I have a question, Why in the contrib/odbc/odbc.c are total\n> same function as in oracle_compat.c (like ascii(), ichar(), repeat())? \n\nThe odbc.c file is for installing the set of ODBC compatibility functions\ninto a pre-7.0 server.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 19 Sep 2000 20:37:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odbc (was: Re: ascii to character conversion in postgres)" } ]
[ { "msg_contents": "> Dear Sir/Madam,\n> I'm a PhD student at Essex University in UK, doing a research in Distributed\n> Database Systems. I would like to ask you some questions:\n> I'm using Postgres ver.7, installed in Linux lab which consists of 25\n> terminals.\n> My actual question is if Postgres provides the functionality to access\n> tables which are stored on different hosts using a single Postgres DBMS. For\n> instance, let's assume the follwing query:\n> \n> select * from A, B where A.num = B.num\n> \n> 1- Is it possible to execute this query if the tables A and B are stored\n> on two different machines?\n\nWe don't do that. Sorry.\n\n> 2- Do I need to add something in the Data Catalog, to be able to direct\n> the query into the right host (the one which has the Data).\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Sep 2000 22:58:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: your mail" } ]
[ { "msg_contents": "I was experimenting today with pg_dump's reaction to missing\ndependencies, such as a rule that refers to a no-longer-existing\ntable. It's pretty bad. For example:\n\ncreate table test (f1 int);\ncreate view v_test as select f1+1 as f11 from test;\ndrop table test;\n\nthen run pg_dump:\n\ngetTables(): SELECT failed. Explanation from backend: 'ERROR: cache lookup of attribute 1 in relation 400384 failed\n'.\n\nThis is just about entirely useless as an error message, wouldn't you\nsay?\n\nThe immediate cause of this behavior is the initial data fetch in\ngetTables():\n\n appendPQExpBuffer(query,\n \"SELECT pg_class.oid, relname, relkind, relacl, usename, \"\n \"relchecks, reltriggers, relhasindex, pg_get_viewdef(relname) as viewdef \"\n \"from pg_class, pg_user \"\n \"where relowner = usesysid and relname !~ '^pg_' \"\n \"and relkind in ('%c', '%c', '%c') \"\n \"order by oid\",\n RELKIND_RELATION, RELKIND_SEQUENCE, RELKIND_VIEW);\n\n res = PQexec(g_conn, query->data);\n if (!res ||\n PQresultStatus(res) != PGRES_TUPLES_OK)\n {\n fprintf(stderr, \"getTables(): SELECT failed. Explanation from backend: '%s'.\\n\", \n PQerrorMessage(g_conn));\n exit_nicely(g_conn);\n }\n\nThis can be criticized on a couple of points:\n\n1. It invokes pg_get_viewdef() on every table and sequence, which is a\nbig waste of time even when it doesn't fail outright. When it does fail\noutright, as above, you have no way to identify which view it failed\nfor. pg_get_viewdef() should be invoked retail, for one view at a time,\nand only for things you have determined are indeed views.\n\n2. As somebody pointed out a few days ago, pg_dump silently loses tables\nwhose owners can't be identified. The cause is the inner join being\ndone here against pg_user --- pg_dump will never even notice that a\ntable exists if there's not a matching pg_user row for it. 
This is not\nrobust.\n\nYou should be able to fix the latter problem by doing an outer join,\nthough it doesn't quite work yet in current sources. pg_get_userbyid()\noffers a different solution, although it won't return NULL for unknown\nIDs, which might be an easier failure case to check for.\n\nMore generally I think there are comparable problems elsewhere in\npg_dump, caused by trying to do too much per query and not thinking\nabout what will happen if there's a failure. It looks like the join-\nagainst-pg_user problem exists for all object types, not just tables.\nIt'd be worth examining all the queries closely with an eye to failure\nmodes and whether you can give a usefully specific error message when\nsomething is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Sep 2000 16:29:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump tries to do too much per query" }, { "msg_contents": "At 16:29 17/09/00 -0400, Tom Lane wrote:\n>\n>getTables(): SELECT failed. Explanation from backend: 'ERROR: cache\nlookup of attribute 1 in relation 400384 failed\n>'.\n>\n>This is just about entirely useless as an error message, wouldn't you\n>say?\n\nI agree, but this is representative of the error handling throughout pg_dump\n(eg. the notorious 'you are hosed' error message). Over time I will try to\nclean it up where possible.\n\nThere are a number of different kinds of errors to deal with; ones\nresulting from a corrupt database seem to be low on the list. In fact I would\nargue that 'DROP TABLE' should not work on a view relation. Secondly, your\ncomments probably highlight the need for a database verification utility.\n\nThat said, I will address your points below.\n\n\n>\n>1. It invokes pg_get_viewdef() on every table and sequence, which is a\n>big waste of time even when it doesn't fail outright. \n\nEither pg_get_viewdef is a lot less efficient than I expected, or this is\nan exaggeration. 
If it helps, I can replace it with a case statement:\n\n case \n when relkind='v' then pg_get_viewdef() \n else ''\n end\n\nbut this seems a little pointless and won't prevent errors when the db is\ncorrupt.\n\nBeing forced to break up SQL statements because the backend produces\nunclear errors from a function seems to be a case of the tail wagging the\ndog: perhaps pg_get_viewdef should at least identify itself as the source\nof the error, if that is what is happening.\n\n\n> When it does fail\n>outright, as above, you have no way to identify which view it failed\n>for.\n\nGood point. This is going to affect anybody who calls get_viewdef. Maybe it\ncan be modified to indicate (a) that the error occurred in get_viewdef, and\n(b) which view is corrupt.\n\nTry:\n\n select * from pg_views;\n\nSame error.\n\n\n>pg_get_viewdef() should be invoked retail, for one view at a time,\n>and only for things you have determined are indeed views.\n\nDo you truly, ruly believe the first part?\n\n\n>2. As somebody pointed out a few days ago, pg_dump silently loses tables\n>whose owners can't be identified. The cause is the inner join being\n>done here against pg_user --- pg_dump will never even notice that a\n>table exists if there's not a matching pg_user row for it. This is not\n>robust.\n>\n>You should be able to fix the latter problem by doing an outer join,\n>though it doesn't quite work yet in current sources. pg_get_userbyid()\n>offers a different solution, although it won't return NULL for unknown\n>IDs, which might be an easier failure case to check for.\n\nThis sounds sensible; and I think you are right - pg_dump crosses with user\ninfo relations all the time. I'll look at using pg_get_userbyid, LOJ and/or\ncolumn selects now that they are available.\n\nBased on this suggestion, maybe pg_get_viewdef should return NULL if the\nview table does not exist. 
But I would still prefer a meaningful error\nmessage, since it really does reflect DB corruption.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 18 Sep 2000 12:48:56 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump tries to do too much per query" }, { "msg_contents": "At 12:48 18/09/00 +1000, Philip Warner wrote:\n>>\n>>You should be able to fix the latter problem by doing an outer join,\n>>though it doesn't quite work yet in current sources. pg_get_userbyid()\n>>offers a different solution, although it won't return NULL for unknown\n>>IDs, which might be an easier failure case to check for.\n>\n>This sounds sensible; and I think you are right - pg_dump crosses with user\n>info relations all the time. I'll look at using pg_get_userbyid, LOJ and/or\n>column selects now that they are available.\n>\n\nI've just made these changes, and will commit them once I put in some\nwarnings about NULL usernames - my guess is that this should not stop\npg_dump, just warn the user.\n\nIt's a real pity about pg_get_userbyid - the output for non-existent users\nis pretty near useless. I presume it's there for the convenience of a specific\npiece of code.\n\nWould you see any value in creating a pg_get_usernamebyid() or similar that\nreturns NULL when there is no match?\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 18 Sep 2000 13:21:07 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump tries to do too much per query" }, { "msg_contents": "At 16:29 17/09/00 -0400, Tom Lane wrote:\n>As somebody pointed out a few days ago, pg_dump silently loses tables\n>whose owners can't be identified. \n\nThis is now fixed in CVS. The owners of all objects are now retrieved by\nusing column select expressions.\n\nIf you can recall where it was, I'd be interested in seeing the report\nsince I have no recollection of it, and I try to keep abreast of pg_dump\nissues...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 18 Sep 2000 15:25:38 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump tries to do too much per query" }, { "msg_contents": "At 16:29 17/09/00 -0400, Tom Lane wrote:\n>\n>create table test (f1 int);\n>create view v_test as select f1+1 as f11 from test;\n>drop table test;\n>\n>then run pg_dump:\n>\n>getTables(): SELECT failed. 
Explanation from backend: 'ERROR: cache\nlookup of attribute 1 in relation 400384 failed\n>'.\n>\n\nFWIW, the patch below causes get_viewdef to produce:\n\n ERROR: pg_get_viewdef: cache lookup of attribute 1 in relation 19136\nfailed for rule v_test\n\nwhen a table has been deleted.\n\n----------------------------------------------------\n--- ruleutils.c.orig\tWed Sep 13 22:08:04 2000\n+++ ruleutils.c\tMon Sep 18 20:59:25 2000\n@@ -72,6 +72,7 @@\n * ----------\n */\n static char *rulename = NULL;\n+static char *toproutinename = NULL;\n static void *plan_getrule = NULL;\n static char *query_getrule = \"SELECT * FROM pg_rewrite WHERE rulename = $1\";\n static void *plan_getview = NULL;\n@@ -134,6 +135,12 @@\n \tint\t\t\tlen;\n \n \t/* ----------\n+\t * We use this in reporting errors.\n+\t * ----------\n+\t */\n+\ttoproutinename = \"pg_get_ruledef\";\n+\n+\t/* ----------\n \t * We need the rules name somewhere deep down: rulename is global\n \t * ----------\n \t */\n@@ -234,6 +241,12 @@\n \tchar\t *name;\n \n \t/* ----------\n+\t * We use this in reporting errors.\n+\t * ----------\n+\t */\n+\ttoproutinename = \"pg_get_viewdef\";\n+\n+\t/* ----------\n \t * We need the view name somewhere deep down\n \t * ----------\n \t */\n@@ -337,6 +350,13 @@\n \tchar\t *sep;\n \n \t/* ----------\n+\t * We use this in reporting errors.\n+\t * ----------\n+\t */\n+\ttoproutinename = \"pg_get_indexdef\";\n+\trulename = NULL;\n+\n+\t/* ----------\n \t * Connect to SPI manager\n \t * ----------\n \t */\n@@ -554,6 +574,13 @@\n \tForm_pg_shadow user_rec;\n \n \t/* ----------\n+\t * We use this in reporting errors.\n+\t * ----------\n+\t */\n+\ttoproutinename = \"pg_get_userbyid\";\n+\trulename = NULL;\n+\n+\t/* ----------\n \t * Allocate space for the result\n \t * ----------\n \t */\n@@ -2014,8 +2041,16 @@\n \t\t\t\t\t\t\t\t ObjectIdGetDatum(relid), (Datum) attnum,\n \t\t\t\t\t\t\t\t 0, 0);\n \tif (!HeapTupleIsValid(atttup))\n-\t\telog(ERROR, \"cache lookup of attribute %d in relation 
%u failed\",\n-\t\t\t attnum, relid);\n+\t{\n+\t\tif (rulename != NULL)\n+\t\t{\n+\t\t\telog(ERROR, \"%s: cache lookup of attribute %d in relation %u failed for\nrule %s\",\n+\t\t\t\ttoproutinename, attnum, relid, rulename);\n+\t\t} else {\n+\t\t\telog(ERROR, \"%s: cache lookup of attribute %d in relation %u failed\",\n+\t\t\t\ttoproutinename, attnum, relid);\n+\t\t}\n+\t}\n \n \tattStruct = (Form_pg_attribute) GETSTRUCT(atttup);\n \treturn pstrdup(NameStr(attStruct->attname));\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 18 Sep 2000 22:18:19 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump tries to do too much per query" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> FWIW, the patch below causes get_viewdef to produce:\n> ERROR: pg_get_viewdef: cache lookup of attribute 1 in relation 19136\n> failed for rule v_test\n> when a table has been deleted.\n\nNot much of a solution --- or do you propose to go through and hack up\nevery elog in every routine that could potentially be called during\npg_get_ruledef?\n\nThe reason that changing pg_dump is a superior solution for this problem\nis that there's only one place to change, not umpteen dozen ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 11:37:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "At 11:37 18/09/00 -0400, Tom Lane wrote:\n>\n>The reason that changing pg_dump is a superior solution for this problem\n>is that there's only one place to change, 
not umpteen dozen ...\n>\n\nWell at least two, unless you like the following:\n\n zzz=# select * from pg_views;\n ERROR: cache lookup of attribute 1 in relation 3450464 failed\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 19 Sep 2000 10:42:12 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "At 11:37 18/09/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> FWIW, the patch below causes get_viewdef to produce:\n>> ERROR: pg_get_viewdef: cache lookup of attribute 1 in relation 19136\n>> failed for rule v_test\n>> when a table has been deleted.\n>\n>Not much of a solution --- or do you propose to go through and hack up\n>every elog in every routine that could potentially be called during\n>pg_get_ruledef?\n\nWell, I had assumed that most routines/subsystems would identify themselves\nas sources of errors, and that the behaviour of pg_get_viewdef was unusual.\nEven the existing code for get_viewdef tries to output its name in most
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 19 Sep 2000 10:45:16 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "At 10:42 19/09/00 +1000, Philip Warner wrote:\n>At 11:37 18/09/00 -0400, Tom Lane wrote:\n>>\n>>The reason that changing pg_dump is a superior solution for this problem\n>>is that there's only one place to change, not umpteen dozen ...\n>>\n>\n>Well at least two, unless you like the following:\n>\n> zzz=# select * from pg_views;\n> ERROR: cache lookup of attribute 1 in relation 3450464 failed\n>\n\nApologies - I just noticed you fixed this in CVS, so it now manages\n(somehow!) to output a valid view definition even without the underlying\ntable. A little scary, though.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 19 Sep 2000 11:33:00 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "Doing the following:\n\n create table test (f1 int);\n create view v_test as select f1+1 as f11 from test;\n drop table test;\n\nthen selecting from the view results in:\n\n ERROR: Relation 'test' does not exist\n\nwhich is fine.\n\nThen I try:\n\n create table test (f1 int);\n select * from v_test;\n\nand I get:\n\n ERROR: has_subclass: Relation 19417 not found\n\nwhich is not very helpful for the person who does not know the history, and\nleads me to believe that there may be a few issues here. \n\nShould a 'DROP TABLE' drop the views, fail, or be recoverable from by\nrecreating the table?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 19 Sep 2000 13:59:50 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Cascade delete views?" 
}, { "msg_contents": "Philip Warner <[email protected]> writes:\n> create table test (f1 int);\n> create view v_test as select f1+1 as f11 from test;\n> drop table test;\n> create table test (f1 int);\n> select * from v_test;\n> ERROR: has_subclass: Relation 19417 not found\n\n> which not very helpful for the person who does not know the history, and\n> leads me to believe that there may be a few issues here. \n\nYes, this mistake needs to be detected earlier. The stored view\ncontains both the name and the OID of the referenced table. It should\n*not* accept a new table with same name and different OID, since there's\nno guarantee that the new table has anything like the same column set.\n(ALTER TABLE has some issues here too...)\n\n> Should a 'DROP TABLE' drop the views, fail, or be recoverable from by\n> recreating the table?\n\nYes ;-).\n\nAny of those behaviors would be better than what we have now. However,\nnone of them is going to be easy to implement. There will need to be\nmore info stored about views than there is now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 11:55:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cascade delete views? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>>> The reason that changing pg_dump is a superior solution for this problem\n>>> is that there's only one place to change, not umpteen dozen ...\n>> \n>> Well at least two, unless you like the following:\n>> \n>> zzz=# select * from pg_views;\n>> ERROR: cache lookup of attribute 1 in relation 3450464 failed\n\n> Apologies - I just noticed you fixed this in CVS, so it now manages\n> (somehow!) to output a valid view definition even without the underlying\n> table. A little scary, though.\n\nSay what? (... tries it ...) Fascinating. I wouldn't rely on this\nbehavior however; the fact that it works today is a totally unintended\nconsequence of a change I made for column alias support. 
Next week\nruleutils.c might try to access the underlying tables again.\n\nThe general issue still remains: if a database contains an inconsistency\nor error, introduced by whatever means (and there'll always be bugs),\na pg_dump failure is likely to be the first notice a dbadmin has about it.\nSo it behooves us to make sure that pg_dump issues error messages that\nare as specific as possible. In particular, if there is a specific\nobject such as a view or rule that's broken, pg_dump should take care\nthat it can finger that particular object, not have to report a generic\n\"SELECT failed\" error message.\n\nThis problem has been around for a long time, of course, but now that\nwe have someone who's taking an active interest in fixing pg_dump ;-)\nI'm hoping something will get done about it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 12:13:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "\nOn Tue, 19 Sep 2000, Tom Lane wrote:\n\n> Yes, this mistake needs to be detected earlier. The stored view\n> contains both the name and the OID of the referenced table. It should\n> *not* accept a new table with same name and different OID, since there's\n> no guarantee that the new table has anything like the same column set.\n> (ALTER TABLE has some issues here too...)\n> \n> > Should a 'DROP TABLE' drop the views, fail, or be recoverable from by\n> > recreating the table?\n> \n> Yes ;-).\n> \n> Any of those behaviors would be better than what we have now. However,\n> none of them is going to be easy to implement. There will need to be\n> more info stored about views than there is now.\n\nThis is an example of a place where the dependencies chart will come\nin handy. 
:) I do actually hope to get to it (if no one else does it)\nafter my work job has their official release and I get a chance to take\ntime off and after I've figured out match partial for the referential\nintegrity stuff (which is more of a pain than I could have ever imagined).\n\n", "msg_date": "Tue, 19 Sep 2000 10:16:18 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Cascade delete views? " }, { "msg_contents": "At 12:13 19/09/00 -0400, Tom Lane wrote:\n>\n>Say what? (... tries it ...) Fascinating. I wouldn't rely on this\n>behavior however; the fact that it works today is a totally unintended\n>consequence of a change I made for column alias support. Next week\n>ruleutils.c might try to access the underlying tables again.\n\nIn this particular instance, I'd still be much happier to see the code that\nfails identify itself properly, identify the cause, and explain the error.\nIn the case in point, all of this information is available. I always prefer\nto fix two bugs with one code change where possible (ie. 'select * from\npg_views' as well as pg_dump).\n\nTo carry this to extremes, at one point pg_dump crosses pg_index, pg_class\nand pg_am; somewhere else it crosses pg_rewrite, pg_class, and pg_rules etc.\n\nIf we want to allow for database corruption, pg_dump should really be doing\nthis sort of cross internally to verify that each part is present (or use\nLOJ, and test for NULLs - but you hinted that there was a problem with this).\n\n\n>This problem has been around for a long time, of course, but now that\n>we have someone who's taking an active interest in fixing pg_dump ;-)\n>I'm hoping something will get done about it...\n\nPart of my interest in pg_dump is in trying to make it know less about the\ndatabase structure, and so make it a lot more independent of versions. If I\ncould use one pg_* view or function for each entity type, I'd be very\nhappy. 
The not-so-recent discussions about definition schemas should\nindicate where pg_dump wants to go, and the growing use of functions like\n'format_type' makes it increasingly hard to know why an error occurred - I\nhaven't looked to see what format_type does when it encounters an internal\ninconsistency, but I hope it returns NULL.\n\n[Thought: What would be the impact of pg_get_ruledef etc returning NULL\nwhen it can't find the relevant data? This sort of approach seems to me\nmore consistent with DB-ish things, and pg_dump could easily test for a\nNULL defn of a view relation]\n\nI'd prefer to see a 'pg_verify' (or similar). The idea being that it would\nknow about and be totally anal about interrelationships - and successfully\nreport a missing view table - unlike the current (or proposed) situation.\nIn some ways, like a vacuum, but on metadata.\n\nI'm not totally opposed to your suggestion, and I certainly agree that\nmaking pg_dump more careful about the data it retrieves is a good idea.\nBut I still need more convincing about the calls to get_viewdef. \n\nIn the current situation, I think ruleutils.c might need to be looked at\nmore closely: eg. several error messages prepend the main routine name,\nsome do not. Some display the problem rule name, others do not. There are\nonly three or four external routines defined, so by using (more) globals it\nis easily possible for all routines to indicate the primary routine\nresponsible. Similarly, the rule/view name is already a global, and is\navailable to all routines in ruleutils.c.\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 20 Sep 2000 11:05:52 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump tries to do too much per query " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> It's a real pity about pg_get_userbyid - the output for non-existant users\n> is pretty near useless. I presume it's there for convenience of a specific\n> piece of code.\n\nNo, I imagine it's just that way because it seemed like a good idea at\nthe time. Returning NULL for an unknown userid seems at least as\nplausible as what it does now. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Sep 2000 00:41:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump tries to do too much per query " }, { "msg_contents": "Philip Warner writes:\n\n> Doing the following:\n> \n> create table test (f1 int);\n> create view v_test as select f1+1 as f11 from test;\n> drop table test;\n> \n> then selecting from the view results in:\n> \n> ERROR: Relation 'test' does not exist\n> \n> which is fine.\n\nIf you peak into the standard, all DROP commands have a trailing\nRESTRICT/CASCADE (mandatory, btw.), which will tell what to do. But it's\nextremely hard to implement and keep up to date.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 20 Sep 2000 19:38:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cascade delete views?" } ]
[ { "msg_contents": "I get the following on untuned Linux (Redhat 6.2) using stock 7.0.2\nrpm-s\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\nActually I get many of them ;(\n\nI'm running a script that does a bunch of mixed INSERTS, UPDATES,\nDELETES and SELECTS.\n\nafter getting that I'm unable to vacuum database until I reset the OS\n\nWhere/how should I start looking (or is it a known problem)\n\nAre there any simple workarounds to stop it happening.\n\n-----------\nHannu\n", "msg_date": "Mon, 18 Sep 2000 01:21:25 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Notice and share memory corruption" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> I get the following on untuned Linux (Redhat 6.2) using stock 7.0.2\n> rpm-s\n\n> NOTICE: RegisterSharedInvalid: SI buffer overflow\n> NOTICE: InvalidateSharedInvalid: cache state reset\n\n> Actually I get many of them ;(\n\nAFAIK, these are just noise in 7.0. The only reason you see them is\nwe haven't got round to removing the messages or downgrading them to\nelog(DEBUG).\n\n> I'm running a script that does a bunch of mixed INSERTS, UPDATES,\n> DELETES and SELECTS.\n\nI'll bet you also have some backends sitting idle with open\ntransactions? The combination of idle and active backends is what\nusually provokes SI overruns.\n\n> after getting that I'm unable to vacuum database until I reset the OS\n\nDefine your terms more carefully, please. What do you mean by\n\"unable to vacuum\" --- what happens *exactly*? In any case,\nsurely it doesn't take an OS reboot to recover. 
I might believe\nyou need to restart the postmaster...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 01:30:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Notice and share memory corruption " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > I get the following on untuned Linux (Redhat 6.2) using stock 7.0.2\n> > rpm-s\n> \n> > NOTICE: RegisterSharedInvalid: SI buffer overflow\n> > NOTICE: InvalidateSharedInvalid: cache state reset\n> \n> > Actually I get many of them ;(\n> \n> AFAIK, these are just noise in 7.0. The only reason you see them is\n> we haven't got round to removing the messages or downgrading them to\n> elog(DEBUG).\n> \n> > I'm running a script that does a bunch of mixed INSERTS, UPDATES,\n> > DELETES and SELECTS.\n> \n> I'll bet you also have some backends sitting idle with open\n> transactions? The combination of idle and active backends is what\n> usually provokes SI overruns.\n> \n> > after getting that I'm unable to vacuum database until I reset the OS\n> \n> Define your terms more carefully, please. What do you mean by\n> \"unable to vacuum\" --- what happens *exactly*? \n\nNOTICE: FlushRelationBuffers(access_right, 2009): block 1944 is\nreferenced (private 0, global 2)\nFATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n> In any case,\n> surely it doesn't take an OS reboot to recover. 
I might believe\n> you need to restart the postmaster...\n\nOn one machine a simple restart worked.\n\nMaybe I have to really restart it (instead of doing\n/etc/rc.d/init.d/postgresql restart)\nby running killall -9 /usr/bin/postgres\n\nI was quite sure that just restarting it did not help, but maybe \nit really did not restart, just claimed to.\n\n\n\nOn the other machine I still get \n\namphora2=# vacuum;\nNOTICE: FlushRelationBuffers(item, 30): block 2 is referenced (private\n0, global 1)\nFATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\npqReadData() -- backend closed the channel unexpectedly.\n        This probably means the backend terminated abnormally\n        before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\nafter stopping postmaster (and checking it is stopped)\n\nI could do a vacuum after restarting the whole machine...\n\nOTOH it _may_ be that someone started another backend right after\nrestart and did something, \nbut must this be a FATAL error ?\n\n-----------\nHannu\n", "msg_date": "Mon, 18 Sep 2000 11:17:57 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Notice and share memory corruption" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n>> Define your terms more carefully, please. What do you mean by\n>> \"unable to vacuum\" --- what happens *exactly*? \n\n> NOTICE: FlushRelationBuffers(access_right, 2009): block 1944 is\n> referenced (private 0, global 2)\n> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\n\nOh, that's interesting. This error indicates that some prior\ntransaction neglected to release a reference count on a shared buffer.\nWe have seen sporadic reports of this problem in 7.0, but so far no\none has come up with a reproducible example. 
If you can boil down\nyour script to something that reproducibly causes the problem then\nthat'd be a great help in tracking it down.\n\nIf you have clients that sometimes disconnect in the middle of a\ntransaction, it might help to apply the attached patch.\n\n> Maybe i have to really restart it (instead of doing\n> /etc/rc.d/init.d/postgresql restart)\n> by running killall -9 /usr/bin/postgres\n\nRestarting the postmaster should clear the problem (by releasing and\nreinitializing shared memory). I dunno where you got the idea that\nkill -9 was a recommended way of shutting down the system, but I sure\nwouldn't recommend it. A plain kill on the postmaster ought to do it\n(see the pg_ctl script in release 7.0.*).\n\n\t\t\tregards, tom lane\n\n*** src/backend/tcop/postgres.c.orig\tSat May 20 22:23:30 2000\n--- src/backend/tcop/postgres.c\tWed Aug 30 16:47:51 2000\n***************\n*** 1459,1465 ****\n \t * Initialize the deferred trigger manager\n \t */\n \tif (DeferredTriggerInit() != 0)\n! \t\tproc_exit(0);\n \n \tSetProcessingMode(NormalProcessing);\n \n--- 1459,1465 ----\n \t * Initialize the deferred trigger manager\n \t */\n \tif (DeferredTriggerInit() != 0)\n! \t\tgoto normalexit;\n \n \tSetProcessingMode(NormalProcessing);\n \n***************\n*** 1479,1490 ****\n \t\t\tTPRINTF(TRACE_VERBOSE, \"AbortCurrentTransaction\");\n \n \t\tAbortCurrentTransaction();\n! \t\tInError = false;\n \t\tif (ExitAfterAbort)\n! \t\t{\n! \t\t\tProcReleaseLocks(); /* Just to be sure... */\n! \t\t\tproc_exit(0);\n! \t\t}\n \t}\n \n \tWarn_restart_ready = true;\t/* we can now handle elog(ERROR) */\n--- 1479,1489 ----\n \t\t\tTPRINTF(TRACE_VERBOSE, \"AbortCurrentTransaction\");\n \n \t\tAbortCurrentTransaction();\n! \n \t\tif (ExitAfterAbort)\n! \t\t\tgoto errorexit;\n! \n! 
\t\tInError = false;\n \t}\n \n \tWarn_restart_ready = true;\t/* we can now handle elog(ERROR) */\n***************\n*** 1553,1560 ****\n \t\t\t\tif (HandleFunctionRequest() == EOF)\n \t\t\t\t{\n \t\t\t\t\t/* lost frontend connection during F message input */\n! \t\t\t\t\tpq_close();\n! \t\t\t\t\tproc_exit(0);\n \t\t\t\t}\n \t\t\t\tbreak;\n \n--- 1552,1558 ----\n \t\t\t\tif (HandleFunctionRequest() == EOF)\n \t\t\t\t{\n \t\t\t\t\t/* lost frontend connection during F message input */\n! \t\t\t\t\tgoto normalexit;\n \t\t\t\t}\n \t\t\t\tbreak;\n \n***************\n*** 1608,1618 ****\n \t\t\t\t */\n \t\t\tcase 'X':\n \t\t\tcase EOF:\n! \t\t\t\tif (!IsUnderPostmaster)\n! \t\t\t\t\tShutdownXLOG();\n! \t\t\t\tpq_close();\n! \t\t\t\tproc_exit(0);\n! \t\t\t\tbreak;\n \n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"unknown frontend message was received\");\n--- 1606,1612 ----\n \t\t\t\t */\n \t\t\tcase 'X':\n \t\t\tcase EOF:\n! \t\t\t\tgoto normalexit;\n \n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"unknown frontend message was received\");\n***************\n*** 1642,1651 ****\n \t\t\tif (IsUnderPostmaster)\n \t\t\t\tNullCommand(Remote);\n \t\t}\n! \t}\t\t\t\t\t\t\t/* infinite for-loop */\n \n! \tproc_exit(0);\t\t\t\t/* shouldn't get here... */\n! \treturn 1;\n }\n \n #ifndef HAVE_GETRUSAGE\n--- 1636,1655 ----\n \t\t\tif (IsUnderPostmaster)\n \t\t\t\tNullCommand(Remote);\n \t\t}\n! \t}\t\t\t\t\t\t\t/* end of main loop */\n! \n! normalexit:\n! \tExitAfterAbort = true;\t\t/* ensure we will exit if elog during abort */\n! \tAbortOutOfAnyTransaction();\n! \tif (!IsUnderPostmaster)\n! \t\tShutdownXLOG();\n! \n! errorexit:\n! \tpq_close();\n! \tProcReleaseLocks();\t\t\t/* Just to be sure... */\n! \tproc_exit(0);\n \n! \treturn 1;\t\t\t\t\t/* keep compiler quiet */\n }\n \n #ifndef HAVE_GETRUSAGE\n", "msg_date": "Mon, 18 Sep 2000 13:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Notice and share memory corruption " } ]
[ { "msg_contents": "Guys,\n\nI have some odd behaviour with VB6 & postgresql that may be a bug - I would appreciate someone else replicating this; or any other suggestions anyone might have.\n\nVersions: VB6 sp5 on W2K pro sp2 running postgresql 7.1.2 via cygwin. Insight ODBC driver 7.01.00.06 with ODBC 3.520.6526.0 (obtained from control panel/admin tools/ODBC). 7.01.00.05 behaves identically.\n\nPostgreSQL code:\n\nCREATE TABLE tb_search (\nsession_id int,\nemp_id int,\nrank int\n);\n\nand some data:\n\ninsert into tb_search (session_id , emp_id, rank) values (1,101, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,101, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,101, 10);\ninsert into tb_search (session_id , emp_id, rank) values (1,101, 10);\ninsert into tb_search (session_id , emp_id, rank) values (1,101, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,102, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,102, 10);\ninsert into tb_search (session_id , emp_id, rank) values (1,102, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,103, 10);\ninsert into tb_search (session_id , emp_id, rank) values (1,103, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,104, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,104, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,105, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,106, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,107, 5);\ninsert into tb_search (session_id , emp_id, rank) values (1,108, 5);\n\nVB Code:\n\ndim lSesh as long\ndim rsEmps as ADODB.Recordset\n\n'set up your DBConn here or use implicit connection\n\nlSesh = 1\n\nsSQL = \"SELECT emp_id, sum(rank) \" \nsSQL = sSQL & \"FROM tb_search \" \n'sSQL = sSQL & \"ON e.emp_id = s.emp_id \"\nsSQL = sSQL & \"WHERE session_id = \" & lSesh\nsSQL = sSQL & \" GROUP BY emp_id \" \nsSQL = sSQL & \" ORDER BY 
sum(rank) DESC\"\n\n\nfrmEmpSearch.Caption = sOrigCapt & \" - retrieving results\"\nSet rsEmps = New ADODB.Recordset\nrsEmps.CursorLocation = adUseClient 'adUseServer\nrsEmps.Open sSQL, DBConn, adOpenForwardOnly, adLockReadOnly\n\nif rsEmps.BOF and rsEmps.EOF then\n msgbox \"No records returnes\" 'adUseClient returns no records\nelse \n msgbox \"We got records!\" 'adUseSever returns records\nend if\n\nThe select statement returns records when run from psql, yet the location of cursor affects whether or not rows are returned when the select is run from within VB. The fact that location of cursor determines success makes me think there *could* be an issue with the ODBC driver.\n\nOut of interest, replacing \nsSQL = \"SELECT emp_id, sum(rank) \" \nwith\nsSQL = \"SELECT emp_id, max(rank) \" \ncauses the query to work wherever the cursor is!?!?!\n\nAll help/suggestions appreciated.\n\nJonathan Stanford, UK\n", "msg_date": "Mon, 18 Sep 2000 01:36:09 +0100", "msg_from": "\"Jonathan Stanford\" <[email protected]>", "msg_from_op": true, "msg_subject": "Odd behaviour - *possible* ODBC bug?" }, { "msg_contents": "Hiroshi\n\nYour ::int works a treat.\n\nThanks\n\nJon\n\n----- Original Message -----\nFrom: \"Hiroshi Inoue\" <[email protected]>\nTo: \"Jonathan Stanford\" <[email protected]>\nCc: \"pgsql-hackers\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, September 18, 2001 4:12 AM\nSubject: Re: [ODBC] Odd behaviour - *possible* ODBC bug?\n\n\n> -----Original Message-----\n> From: Jonathan Stanford\n>\n> > Guys,\n>\n> > I have some odd behaviour with VB6 & postgresql that may be a bug - I\n> would appreciate someone else > > replicating this; or any other\nsuggestions\n> anyone might have.\n>\n> [snip]\n>\n> > PostgreSQL code:\n>\n> > CREATE TABLE tb_search (\n> > session_id int,\n> > emp_id int,\n> > rank int\n> > );\n>\n> > and some data:\n>\n> > insert into tb_search (session_id , emp_id, rank) values (1,101, 5);\n>\n> [snip several insert commands]\n>\n> > VB Code:\n>\n> [snip]\n>\n> > sSQL = \"SELECT emp_id, sum(rank) \"\n> > sSQL = sSQL & \"FROM tb_search \"\n> > sSQL = sSQL & \"ON e.emp_id = s.emp_id \"\n> > sSQL = sSQL & \"WHERE session_id = \" & lSesh\n> > sSQL = sSQL & \" GROUP BY emp_id \"\n> > sSQL = sSQL & \" ORDER BY sum(rank) DESC\"\n>\n> 
> frmEmpSearch.Caption = sOrigCapt & \" - retrieving results\"\n> > Set rsEmps = New ADODB.Recordset\n> > rsEmps.CursorLocation = adUseClient 'adUseServer\n> > rsEmps.Open sSQL, DBConn, adOpenForwardOnly, adLockReadOnly\n>\n> I don't think it's an ODBC driver's bug.\n> The cause is that PostgreSQL returns NUMERIC type as sum(int).\n>\n> adUseClient for CursorLocation property indicates ADO to use\n> Microsoft Cursor Service for OLE DB. Microsoft Cursor service\n> seems to think that sum(rank) is of type int but PostgreSQL\n> returns NUMERIC type. I don't know what should be done here.\n> Please change sum(rank) -> sum(rank)::int and try.\n>\n> regards,\n> Hiroshi Inoue\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Tue, 19 Sep 2000 02:36:32 +0100", "msg_from": "\"Jonathan Stanford\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd behaviour - *possible* ODBC bug?" 
}, { "msg_contents": "-----Original Message-----\nFrom: Jonathan Stanford\n\n> Guys,\n\n> I have some odd behaviour with VB6 & postgresql that may be a bug - I\nwould appreciate someone else > > replicating this; or any other suggestions\nanyone might have.\n\n[snip]\n\n> PostgreSQL code:\n\n> CREATE TABLE tb_search (\n> session_id int,\n> emp_id int,\n> rank int\n> );\n\n> and some data:\n\n> insert into tb_search (session_id , emp_id, rank) values (1,101, 5);\n\n[snip several insert commands]\n\n> VB Code:\n\n[snip]\n\n> sSQL = \"SELECT emp_id, sum(rank) \"\n> sSQL = sSQL & \"FROM tb_search \"\n> sSQL = sSQL & \"ON e.emp_id = s.emp_id \"\n> sSQL = sSQL & \"WHERE session_id = \" & lSesh\n> sSQL = sSQL & \" GROUP BY emp_id \"\n> sSQL = sSQL & \" ORDER BY sum(rank) DESC\"\n\n> frmEmpSearch.Caption = sOrigCapt & \" - retrieving results\"\n> Set rsEmps = New ADODB.Recordset\n> rsEmps.CursorLocation = adUseClient 'adUseServer\n> rsEmps.Open sSQL, DBConn, adOpenForwardOnly, adLockReadOnly\n\nI don't think it's an ODBC driver's bug.\nThe cause is that PostgreSQL returns NUMERIC type as sum(int).\n\nadUseClient for CursorLocation property indicates ADO to use\nMicrosoft Cursor Service for OLE DB. Microsoft Cursor service\nseems to think that sum(rank) is of type int but PostgreSQL\nreturns NUMERIC type. I don't know what should be done here.\nPlease change sum(rank) -> sum(rank)::int and try.\n\nregards,\nHiroshi Inoue\n\n\n", "msg_date": "Tue, 18 Sep 2001 12:12:43 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ODBC] Odd behaviour - *possible* ODBC bug?" } ]
[ { "msg_contents": "\n> > \tadd the functionality for \"with check option\" clause of \n> create view\n> >\n> \n> I'm not familiar with this. What does it do?\n\nIt checks on view insert or update, that the resulting tuple will still \nbe seen through this view (it satisfies the view's where restriction).\nIf not, the insert or update is not allowed.\n\nAndreas\n", "msg_date": "Mon, 18 Sep 2000 10:07:32 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: new relkind for view" } ]
[ { "msg_contents": "\n> But the pg_shadow authentication is based on credentials \n> provided by the\n> client whereas what you propose here would run on the server, so this\n> doesn't make sense. \n\nSince you can write extensions to PostgreSQL that reach far into the OS,\nit does make sense to execute those extensions under a \"non privileged\"\nuser, and not postgres. This OS user would somehow be tied to the username\nthat the client passes as his credentials (and that we trust to be\nauthenticated).\n\nThis is actually not my idea, it is implemented in Informix, DB2 and I think\nOracle.\n\nAndreas\n", "msg_date": "Mon, 18 Sep 2000 10:16:59 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: \"setuid\" functions, a solution to the RI privilege problem" }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Since you can write extensions to PostgreSQL that reach far into the OS,\n> it does make sense to execute those extensions under a \"non privileged\"\n> user, and not postgres.\n\nAgreed.\n\n> This OS user would somehow be tied to the username that the client\n> passes as his credentials (and that we trust to be authenticated).\n\nNot agreed. It's a feature, not an accident, that client user names,\nserver OS user names, and database user names are independent. The mapping\nof database user names to server OS user names needs to have a separate\nmapping and authentication system, which could probably be similar to the\nexisting client authentication, but still separate.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 18 Sep 2000 21:56:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: \"setuid\" functions, a solution to the RI privilege problem" } ]
[ { "msg_contents": "Hi,\nI'm just curious how MVCC will work with WAL ? Will\nit work in the same fashion as now only tuples written\nusing WAL ?\nOr will it search for old tuple's versions in log ?\n\nthanks devik\n", "msg_date": "Mon, 18 Sep 2000 11:31:30 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "WAL & MVCC" } ]
[ { "msg_contents": "Hi all,\nI've just downloaded the latest CVS snapshot, to play with\nTOASTed text field. I've tried to compile contrib extensions\nbut I've some problem with soundex.\n\nI take\n\ngcc -c -I../../src/include -O2 -Wall -Wmissing-prototypes\n-Wmissing-declarations -I. -fpic soundex.c -o soundex.o\nsoundex.c: In function `text_soundex':\nsoundex.c:37: invalid lvalue in assignment\nmake: *** [soundex.o] Error 1\n\n\ntext_soundex() is declared as\n\ntext *\ntext_soundex(text *t)\n\n\nperhaps the interface is changed with TOAST.\n\nAny FAQ pointer to help me in this case ???\nTIA\n\n/gp\n\n\n-- \nDiscussion: How do you feel about Open Source firms making \nmillions through public offerings?\n\n\"I wish these companies were making the same millions without\ndistributing any non-free, user-subjugating software.\" --\n Richard Stallman \n\n\"We're not here to let the monkey dance.\"\n\t\t\t\t\t\tg.p.\t\t\t\t\n\n Gian Paolo Ciceri Via B.Diotti 45 - 20153 Milano MI ITALY\n CTO @ Louise mobile : ++39 348 3658272 \n eMail : [email protected],\[email protected] \n webSite: http://www.louise.it\n", "msg_date": "Mon, 18 Sep 2000 16:23:29 +0200", "msg_from": "\"g.p.ciceri\" <[email protected]>", "msg_from_op": true, "msg_subject": "contrib module soundex in CVS snapshot (function returning text and\n\tTOAST ???)" }, { "msg_contents": "\"g.p.ciceri\" <[email protected]> writes:\n> gcc -c -I../../src/include -O2 -Wall -Wmissing-prototypes\n> -Wmissing-declarations -I. -fpic soundex.c -o soundex.o\n> soundex.c: In function `text_soundex':\n> soundex.c:37: invalid lvalue in assignment\n> make: *** [soundex.o] Error 1\n\n> perhaps the interface is changed with TOAST.\n\nYup --- you can't assign to VARSIZE(ptr) anymore. 
Assign to\nVARATT_SIZEP(ptr) instead.\n\nSomeone needs to go through all the contrib modules and clean up the\nones that don't compile anymore ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Sep 2000 12:27:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: contrib module soundex in CVS snapshot (function returning text\n\tand TOAST ???)" } ]
[ { "msg_contents": "> Hi,\n> I'm just curious how MVCC will work with WAL ? Will\n> it work in the same fashion as now only tuples written\n> using WAL ?\n\nYes.\n\n> Or will it search for old tuple's versions in log ?\n\nSMGR is still a non-overwriting one.\n\nVadim\n", "msg_date": "Mon, 18 Sep 2000 08:57:27 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL & MVCC" } ]
[ { "msg_contents": "I'm developing a db-driven web site for a client.\nSo far the solution happens to use a lot of open sources software (best tool\nfor the job).\n\nBut when looking at areas of high-availability and performance in relation\nto our database back-end, I'm trying to find a solution that will fit the\nclients need (say, 4 \"nines\" of reliability or so). The application the db\nserver is running is mostly SELECTs, but a fair share of inserts\n(interchange e-commerce is the application). The open source\nperformance/reliability solution I came up with:\n\n\t- master database server (high end box) is read/write.\n\t- primary slave database server (high end box) is read-only, and gets it's\ndata by means of replication from master database server. This box is\nspecially marked to take over for the master in the event that the master\nfails (hot failover).\n\t- many slave database servers (low end boxes) are read-only. These get\ntheir data from the primary slave database server, instead of the master\ndatabase server, so that the master only has to replicate once (and then,\nonly to one machine: the primary slave db server).\n\nWhat do you guys think of my solution? It's more complicated than Oracle's\nparallel clustering, etc. But Oracle costs $30,000 (for our install,\nanyway). So I would like to implement the above on open source software.\n\nBut, I've read that postgresql replication code is not yet in \"usable\"\nstatus. MySQL on the other hand claims their replication has \"alpha\" code\nquality, but that many customers use it successfully on a day-to-day basis\n(that was the feeling I got, anyway). And neither pgsql or mysql have\nclaimed any hot failover abilities. So my questions are twofold:\n\n1)\tWhat is the status of the features I described? (replication, seamless\nfailover).\n\n2)\tMy client is able to \"donate\" several thousand dollars to the development\nof said features (heck, I might kick in a few bucks). 
What are our options\nfor this? Anyone willing to step up to the plate and say, \"yes, I'll do it\non a contract for 10k!\". Or is there already an established way of getting\nX feature for Y dollars?\n\n3)\tOr, should I just bite the bullet and use Mysql? (minus foreign keys,\nminus transactions, minus ....)\n\nThanks,\n\nDan Browning\nNetwork & Database Administrator\nCyclone Computer Systems\n\n", "msg_date": "Mon, 18 Sep 2000 09:31:32 -0700", "msg_from": "\"Dan Browning\" <[email protected]>", "msg_from_op": true, "msg_subject": "Feature request: client would like to donate X thousand dollars for\n\tdevelopment of features Y and Z." }, { "msg_contents": "\nPgSQL, Inc just recently announced that they were working on this ... I\nhaven't heard of anyone else, but that doesn't mean nobody else is ... \n\n\nOn Mon, 18 Sep 2000, Dan Browning wrote:\n\n> I'm developing a db-driven web site for a client.\n> So far the solution happens to use a lot of open sources software (best tool\n> for the job).\n> \n> But when looking at areas of high-availability and performance in relation\n> to our database back-end, I'm trying to find a solution that will fit the\n> clients need (say, 4 \"nines\" of reliability or so). The application the db\n> server is running is mostly SELECTs, but a fair share of inserts\n> (interchange e-commerce is the application). The open source\n> performance/reliability solution I came up with:\n> \n> \t- master database server (high end box) is read/write.\n> \t- primary slave database server (high end box) is read-only, and gets it's\n> data by means of replication from master database server. This box is\n> specially marked to take over for the master in the event that the master\n> fails (hot failover).\n> \t- many slave database servers (low end boxes) are read-only. 
These get\n> their data from the primary slave database server, instead of the master\n> database server, so that the master only has to replicate once (and then,\n> only to one machine: the primary slave db server).\n> \n> What do you guys think of my solution? It's more complicated than Oracle's\n> parallel clustering, etc. But Oracle costs $30,000 (for our install,\n> anyway). So I would like to implement the above on open source software.\n> \n> But, I've read that postgresql replication code is not yet in \"usable\"\n> status. MySQL on the other hand claims their replication has \"alpha\" code\n> quality, but that many customers use it successfully on a day-to-day basis\n> (that was the feeling I got, anyway). And neither pgsql or mysql have\n> claimed any hot failover abilities. So my questions are twofold:\n> \n> 1)\tWhat is the status of the features I described? (replication, seamless\n> failover).\n> \n> 2)\tMy client is able to \"donate\" several thousand dollars to the development\n> of said features (heck, I might kick in a few bucks). What are our options\n> for this? Anyone willing to step up to the plate and say, \"yes, I'll do it\n> on a contract for 10k!\". Or is there already an established way of getting\n> X feature for Y dollars?\n> \n> 3)\tOr, should I just bite the bullet and use Mysql? (minus foreign keys,\n> minus transactions, minus ....)\n> \n> Thanks,\n> \n> Dan Browning\n> Network & Database Administrator\n> Cyclone Computer Systems\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 18 Sep 2000 13:38:33 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: client would like to donate X thousand\n\tdollars for development of features Y and Z." } ]
[ { "msg_contents": "* Michael Meskes <[email protected]> [000918 05:03] wrote:\n> If I change some stuff in a library that forces the user to recompile all\n> programs because it's not binary compatible I have to change the major\n> number right? But just changing the order in an enum datatype does not\n> exactly look like such a big change. But the only way to avoid this problem\n> is adding the new entries at the end of the enum. Or do I miss something\n> obvious here?\n\nIf you re-order an exported emum you are going to complete break binary\ncompatibility, either bump the major or add the entries at the end.\n\nHowever there's another problem even if you add at the end, namely\nthat if previously the 'default' doesn't describe the new values\nthen you also break compatibility, for instance:\n\nChapter 17. libpq - C Library\n\n..\n\n At any time during connection, the status of the connection\n may be checked, by calling PQstatus. If this is CONNECTION_BAD,\n then the connection procedure has failed; if this is CONNECTION_OK,\n then the connection is ready. Either of these states should be\n equally detectable from the return value of PQconnectPoll, as\n above. Other states may be shown during (and only during) an\n asynchronous connection procedure. These indicate the current\n stage of the connection procedure, and may be useful to provide\n feedback to the user for example. These statuses may include:\n\n CONNECTION_STARTED: Waiting for connection to be made. \n\n CONNECTION_MADE: Connection OK; waiting to send. \n\n CONNECTION_AWAITING_RESPONSE: Waiting for a response from the postmaster. \n\n CONNECTION_AUTH_OK: Received authentication; waiting for backend startup. \n\n CONNECTION_SETENV: Negotiating environment. \n\n Note that, although these constants will remain (in order to maintain compatibility) an application should\n never rely upon these appearing in a particular order, or at all, or on the status always being one of these\n documented values. 
An application may do something like this: \n\n switch(PQstatus(conn))\n {\n case CONNECTION_STARTED:\n feedback = \"Connecting...\";\n break;\n\n case CONNECTION_MADE:\n feedback = \"Connected to server...\";\n break;\n .\n .\n .\n default:\n feedback = \"Connecting...\";\n }\n\n\n\nIf you happened to add another error state or something that indicated\nsome other action was required you'd also be breaking compatibility.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 18 Sep 2000 11:17:19 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Library versioning" }, { "msg_contents": "If I change some stuff in a library that forces the user to recompile all\nprograms because it's not binary compatible I have to change the major\nnumber right? But just changing the order in an enum datatype does not\nexactly look like such a big change. But the only way to avoid this problem\nis adding the new entries at the end of the enum. Or do I miss something\nobvious here?\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 18 Sep 2000 14:03:01 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Library versioning" }, { "msg_contents": "On Mon, Sep 18, 2000 at 11:17:19AM -0700, Alfred Perlstein wrote:\n> If you re-order an exported emum you are going to complete break binary\n> compatibility, either bump the major or add the entries at the end.\n\nThe later is what I did so far.\n\n> However there's another problem even if you add at the end, namely\n> that if previously the 'default' doesn't describe the new values\n> then you also break compatibility, for instance:\n> ... 
\n> If you happened to add another error state or something that indicated\n> some other action was required you'd also be breaking compatibility.\n\nIf the library returns the value, yes. But in my case the library just\naccepts two more values. So the only problem I can think of is that someone\nwould try to run a new binary against the old library.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 19 Sep 2000 14:04:53 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Library versioning" } ]
[ { "msg_contents": "Of course, if an alias for ichar is carried forward I\ncan write code for the current postgres that won't\nbreak with future releases. I realize that I might end\nup being the only person on the planet who ends up\nusing ichar, and that may not be sufficient\njustification for an alias....\n\nBest,\n\nAlex\n\n--- Tom Lane <[email protected]> wrote:\n>\n> New proposal: forget ichar(), give the function two\n> entries chr() and\n> char().\n> \n> \t\t\tregards, tom lane\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Mail - Free email you can access from anywhere!\nhttp://mail.yahoo.com/\n", "msg_date": "Mon, 18 Sep 2000 15:25:48 -0700 (PDT)", "msg_from": "Alex Sokoloff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ascii to character conversion in postgres " }, { "msg_contents": "Alex Sokoloff <[email protected]> writes:\n> Of course, if an alias for ichar is carried forward I\n> can write code for the current postgres that won't\n> break with future releases. I realize that I might end\n> up being the only person on the planet who ends up\n> using ichar, and that may not be sufficient\n> justification for an alias....\n\nWell, we will certainly have chr(), so why not save yourself the\ntrouble of converting later and make that alias today?\n\ncreate function chr(int4) returns text as 'ichar'\nlanguage 'internal' with (iscachable);\n\nought to do it in 7.0.*.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 11:17:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ascii to character conversion in postgres " } ]
[ { "msg_contents": "greetings,\n\ni planning on making heavy use of the postgresql\ninheritance features for a large scale application\n and i was wondering if anyone has run into any interesting \nerrata regarding this that i should be aware of.\n\nbugs, features, caveats, side notes etc..\n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 19 Sep 2000 01:35:42 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "ordbms - postgresql errata" } ]
[ { "msg_contents": "i just found a (few) caveat already..\n\n1:\n\nbignose=# create table people(\nbignose(# name varchar(64),\nbignose(# age int8, \nbignose(# sin int4, \nbignose(# id serial);\n\nbignose=# create table soldier(\nbignose(# rank varchar(32),\nbignose(# post varchar(32)) inherits (people);\n\nbignose=# alter table people add column gender int2; \nALTER\nbignose=# \\d soldier\n Table \"soldier\"\n Attribute | Type | Modifier \n-----------+-------------+-------------------------------------------------\n name | varchar(64) | \n age | bigint | \n sin | integer | \n id | integer | not null default nextval('people_id_seq'::text)\n rank | varchar(32) | \n post | varchar(32) | \n\nnow you can see that the attribute was added to the super class\nbut the sub class didn't inhereit.. \n\n----------- NEXT -------------\n\nbignose=# insert into soldier (name,age,sin) values\nbignose-# ('fred',19,12321);\n\nbignose=# select p.* from people* p;\n name | age | sin | id | gender \n------+-----+-------+----+--------\n fred | 19 | 12321 | 1 | -16968\n\ni didn't specify a gender, but it put in a \"randomish\" value..\nshouldn't it have just left this untouched ?\n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 19 Sep 2000 01:45:30 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": "ordbms - postgresql errata" }, { "msg_contents": "it would appear my first mail didn't go thru\n\nthe basic gist was, \n\ncan anyone point out any caveats/pitfalls to the\npostgresql inheritance functions..\n\n\nOn Tue, 19 Sep 2000, Jeff MacDonald wrote:\n\n> i just found a (few) caveat already..\n> \n> 1:\n> \n> bignose=# create table 
people(\n> bignose(# name varchar(64),\n> bignose(# age int8, \n> bignose(# sin int4, \n> bignose(# id serial);\n> \n> bignose=# create table soldier(\n> bignose(# rank varchar(32),\n> bignose(# post varchar(32)) inherits (people);\n> \n> bignose=# alter table people add column gender int2; \n> ALTER\n> bignose=# \\d soldier\n> Table \"soldier\"\n> Attribute | Type | Modifier \n> -----------+-------------+-------------------------------------------------\n> name | varchar(64) | \n> age | bigint | \n> sin | integer | \n> id | integer | not null default nextval('people_id_seq'::text)\n> rank | varchar(32) | \n> post | varchar(32) | \n> \n> now you can see that the attribute was added to the super class\n> but the sub class didn't inhereit.. \n> \n> ----------- NEXT -------------\n> \n> bignose=# insert into soldier (name,age,sin) values\n> bignose-# ('fred',19,12321);\n> \n> bignose=# select p.* from people* p;\n> name | age | sin | id | gender \n> ------+-----+-------+----+--------\n> fred | 19 | 12321 | 1 | -16968\n> \n> i didn't specify a gender, but it put in a \"randomish\" value..\n> shouldn't it have just left this untouched ?\n> \n> Jeff MacDonald,\n> \n> -----------------------------------------------------\n> PostgreSQL Inc\t\t| Hub.Org Networking Services\n> [email protected]\t\t| [email protected]\n> www.pgsql.com\t\t| www.hub.org\n> 1-902-542-0713\t\t| 1-902-542-3657\n> -----------------------------------------------------\n> Fascimile : 1 902 542 5386\n> IRC Nick : bignose\n> \n\nJeff MacDonald,\n\n-----------------------------------------------------\nPostgreSQL Inc\t\t| Hub.Org Networking Services\[email protected]\t\t| [email protected]\nwww.pgsql.com\t\t| www.hub.org\n1-902-542-0713\t\t| 1-902-542-3657\n-----------------------------------------------------\nFascimile : 1 902 542 5386\nIRC Nick : bignose\n\n", "msg_date": "Tue, 19 Sep 2000 02:07:50 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": true, "msg_subject": 
"Re: ordbms - postgresql errata" } ]
[ { "msg_contents": "Hi,\n\nHow can I convert char* to Datum to pass a string to the SPI_modifytuple function?\n\nregards,\nAlex\n\n\n", "msg_date": "Tue, 19 Sep 2000 11:30:32 +0400", "msg_from": "Alex Guryanow <[email protected]>", "msg_from_op": true, "msg_subject": "char* to Datum conversion" }, { "msg_contents": "\nOn Tue, 19 Sep 2000, Alex Guryanow wrote:\n\n> Hi,\n> \n> How can I convert char* to Datum to pass a string to the SPI_modifytuple function?\n\n Take a look at src/include/utils/builtins.h\n(charin(), textin(), etc.)\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 19 Sep 2000 10:11:56 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] char* to Datum conversion" } ]
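For readers landing on this thread: a minimal, untested sketch of what Karel is pointing at, written against the 7.0-era old-style fmgr API. It assumes a backend/SPI context (`rel` and `oldtup` are hypothetical), that the column being replaced is of type text, and that attribute number 2 is the one to change; other column types need their own input function from builtins.h. This is a sketch, not compilable standalone code:

```c
/* Needs executor/spi.h and utils/builtins.h in a backend context. */
Datum     values[1];
int       attnum[1] = { 2 };     /* 1-based number of the column to change */
char      nulls[1]  = { ' ' };   /* ' ' = not null, 'n' = SQL NULL */
HeapTuple newtup;

/* textin() parses a C string into a palloc'd text value;
 * PointerGetDatum() then turns that pointer into a Datum. */
values[0] = PointerGetDatum(textin("new value"));

newtup = SPI_modifytuple(rel, oldtup, 1, attnum, values, nulls);
```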
[ { "msg_contents": "\n> > Since you can write extensions to PostgreSQL that reach far into the OS,\n> > it does make sense to execute those extensions under a \"non priviledged\"\n> > user, and not postgres.\n> \n> Agreed.\n> \n> > This OS user would somehow be tied to the username that the client\n> > passes as his credentials (and that we trust to be authenticated).\n> \n> Not agreed. It's a feature, not an accident, that client user names,\n> server OS user names, and database user names are independent. The mapping\n> of database user names to server OS user names needs to have a separate\n> mapping\n\nYes, a mapping is useful (as I said).\n\n> and authentication system, which could probably be similar to the\n> existing client authentication, but still separate.\n\nI do not think that a separate authentication is necessary, it might be a\nfeature,\nbut imho the standard would be a trusted mapping from db to OS user.\nRemember that execution rights are granted to db users for functions.\nIf a user does not have a desirable mapping you don't grant him any rights.\n\nA special flag \"suid\" for functions would imply that the function always\nruns under \nthe same OS user of the function owner, regardless of client credentials. \n\nAndreas\n", "msg_date": "Tue, 19 Sep 2000 09:50:55 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: AW: \"setuid\" functions, a solution to the RI privilege problem" } ]
[ { "msg_contents": "OK, I hope ya'll don't mind a thought from a newbie. And I hope this is\nthe right forum to ask about this. I was wondering if it would be possible\n(no I don't have the expertise!) to extend one of the system tables. What\nI was hoping for was somewhere to store the \"options\" used to create\ncolumns. In particular, I create a column similiar to:\n\nname text not null references other_table(name) on delete cascade\non update cascade deferrable initially deferred\n\nWell this works wonderfully. However, unless I'm really good about\ndocumentation (I'm not really), then I may forget about one of those\nattributes. I know that they are implicitly there if I carefully look at\nthe TRIGGERs created. But I think that it might be nice to have a column\nin one of the system tables (whichever one describes the rows) which would\ncontain this information.\n\nI hope that I'm making some sense to you.\n\nThank you for a very nice ORDBMS! I do appreciate you're efforts.\n\nJohn\n\n", "msg_date": "Tue, 19 Sep 2000 05:44:00 -0500 (CDT)", "msg_from": "John McKown <[email protected]>", "msg_from_op": true, "msg_subject": "Possible \"enhancement\"?" }, { "msg_contents": "John McKown <[email protected]> writes:\n> OK, I hope ya'll don't mind a thought from a newbie. And I hope this is\n> the right forum to ask about this. I was wondering if it would be possible\n> (no I don't have the expertise!) to extend one of the system tables. What\n> I was hoping for was somewhere to store the \"options\" used to create\n> columns. In particular, I create a column similiar to:\n\npg_dump -s (schema only) is a convenient way of producing such info ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 12:27:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible \"enhancement\"? " } ]
[ { "msg_contents": "Hi,\n\nIs there a problem? I haven't received anything today...\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Tue, 19 Sep 2000 21:36:44 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "No digests today" } ]
[ { "msg_contents": "\t......\n\n> The general issue still remains: if a database contains an inconsistency\n> or error, introduced by whatever means (and there'll always be bugs),\n> a pg_dump failure is likely to be the first notice a dbadmin has about it.\n> So it behooves us to make sure that pg_dump issues error messages that\n> are as specific as possible. In particular, if there is a specific\n> object such as a view or rule that's broken, pg_dump should take care\n> that it can finger that particular object, not have to report a generic\n> \"SELECT failed\" error message.\n> \n> This problem has been around for a long time, of course, but now that\n> we have someone who's taking an active interest in fixing pg_dump ;-)\n> I'm hoping something will get done about it...\n> \n> \t\t\tregards, tom lane\n\t`-----Original Message-----\n\tFrom:\tTom Lane [SMTP:[email protected]]\n\tSent:\tTuesday, September 19, 2000 11:13 AM\n\tTo:\tPhilip Warner\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] Re: pg_dump tries to do too much per\nquery \n\n\tI can't agree with this more... On several occasions I have had\ndatabases that apparently were working fine with no problems, and the only\nindication of the problem was that pg_dump would fail. while this is nice\nin that I now know there is a problem, I had very small info on where to\nstart looking.\n", "msg_date": "Tue, 19 Sep 2000 15:23:28 -0500", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: pg_dump tries to do too much per query " } ]
[ { "msg_contents": "I have just noticed that VACUUM doesn't always call\nFlushRelationBuffers(); it does so only if it wants to truncate\nthe relation (ie, shrink the physical file).\n\nThis is OK for normal purposes but it's bad for pg_upgrade, which\nis invoking VACUUM just to ensure that on-row transaction status bits\nare set correctly. Without FlushRelationBuffers(), pages that only\nhad status bit updates may never get written back to disk...\n\nI have fixed VACUUM so that FlushRelationBuffers() will be called in\nall execution paths, and back-patched the fix into REL7_0 branch.\nThis looks like another good reason for a 7.0.3 release :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Sep 2000 17:13:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Another hole detected in pg_upgrade" } ]
[ { "msg_contents": "It seems -S option for postmaster (detaching ttys) does not exist in\npostgresql.conf. Is there any reason for this?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 20 Sep 2000 09:21:02 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "-S is missing in postgresql.conf?" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> It seems -S option for postmaster (detaching ttys) does not exist in\n> postgresql.conf. Is there any reason for this?\n\nIt has been declared evil, because it loses the log output. Using shell\nredirection, or better yet pg_ctl is the way to go.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 20 Sep 2000 19:41:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: -S is missing in postgresql.conf?" }, { "msg_contents": "> > It seems -S option for postmaster (detaching ttys) does not exist in\n> > postgresql.conf. Is there any reason for this?\n> \n> It has been declared evil, because it loses the log output. Using shell\n> redirection, or better yet pg_ctl is the way to go.\n\nReally? I thought -S was evil because we had no working logging\nfacilities other than logging to stdout/stderr. Now we have the fixed\nsyslog functionality, thus (at least part of) the problem has gone.\n\nMoreover if -S is still evil, why don't we completely remove -S? It\nlooks slightly inconsistent anyway.\n\nIMHO we should make -S be configurable in postgresql.conf and let\nusers choose what they want.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 21 Sep 2000 10:23:58 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: -S is missing in postgresql.conf?" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> IMHO we should make -S be configurable in postgresql.conf and let\n> users choose what they want.\n\nI agree. Mind you, I think we should discourage use of -S, because it\nmakes troubleshooting so much more difficult. But we shouldn't remove\nthe option; people who are running a stable application mix and not\nhaving problems may well think they don't need to expend disk space on\na postmaster log. And if we have the option then it ought to be\nsupported uniformly via GUC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Sep 2000 22:34:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: -S is missing in postgresql.conf? " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > IMHO we should make -S be configurable in postgresql.conf and let\n> > users choose what they want.\n> \n> I agree. Mind you, I think we should discourage use of -S, because it\n> makes troubleshooting so much more difficult. But we shouldn't remove\n> the option; people who are running a stable application mix and not\n> having problems may well think they don't need to expend disk space on\n> a postmaster log. And if we have the option then it ought to be\n> supported uniformly via GUC.\n\nOk. I have committed the changes. The name for the option in\npostgresql.conf is \"silent_mode\" taking a value true or false (default\nto false). If silent_mode is set to true, it would have the same effect\nas -S of postmaster.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 08 Oct 2000 18:31:36 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: -S is missing in postgresql.conf? " } ]
[ { "msg_contents": "I've seen mention of this on the list, but I can't see it mentioned in TODO\nfrom current CVS.\n\n------- Forwarded Message\n\nDate: Wed, 20 Sep 2000 11:17:52 +0200\nFrom: Martijn van de Streek <[email protected]>\nTo: [email protected]\nSubject: Bug#72084: Broken permissions required with foreign keys\n\nPackage: postgresql\nVersion: 7.0.2-2\nSeverity: important\n\nIf I create a table with a foreign key, inserts into that table won't work\nunless I give the user/group UPDATE permission on the table the foreign key\nrefers to.\n\nThis behaviour doesn't seem logical and/or safe (I give 'SELECT only' access\nfor a reason). \n\nThe same thing happens in 7.0.2-5\n\nMartijn\n\nExample:\n- -------- \nblurgh=# CREATE TABLE A(ID SERIAL, \n\t\tPRIMARY KEY(ID));\nblurgh=# CREATE TABLE B(ID SERIAL, B INT, \n\t\tPRIMARY KEY(ID), FOREIGN KEY(B) REFERENCES A ON DELETE RESTRICT\n);\n\nblurgh=# CREATE GROUP A;\nblurgh=# CREATE GROUP B;\n\nblurgh=# GRANT ALL ON B TO GROUP A;\nblurgh=# GRANT SELECT ON A TO GROUP A;\n\nblurgh=# CREATE USER 'test' IN GROUP A;\n\nblurgh=# INSERT INTO A(ID) VALUES(1);\nblurgh=# INSERT INTO A(ID) VALUES(2);\nblurgh=# INSERT INTO A(ID) VALUES(3);\n\nblurgh=# \\c blurgh test\n\nblurgh=> INSERT INTO B(B) VALUES(1);\nERROR: a: Permission denied.\n\nblurgh=# \\c blurgh postgres\nblurgh=# GRANT SELECT,UPDATE ON A TO GROUP A;\nblurgh=# \\c blurgh test\n\nblurgh=> INSERT INTO B(B) VALUES(1);\nINSERT 6178592 1\n\n- -- System Information\nDebian Release: 2.2\nArchitecture: i386\nKernel: Linux beeblebrox 2.2.17pre13 #1 SMP Fri Jul 21 05:48:45 CEST 2000 i686\n\nVersions of packages postgresql depends on:\nii debianutils 1.13.3 Miscellaneous utilities specific t\nii libc6 2.1.3-13 GNU C Library: Shared libraries an\nii libncurses5 5.0-6 Shared libraries for terminal hand\nii libpgsql2 7.0.2-2 Shared library libpq.so.2 for Post\nii libreadline4 4.1-1 GNU readline and history libraries\nii postgresql-client 7.0.2-2 Front-end programs for PostgreSQL 
\nii procps 1:2.0.6-5 The /proc file system utilities. \n\n- -- Configuration Files:\n/etc/cron.d/postgresql changed [not included]\n/etc/postgresql/pg_hba.conf changed [not included]\n/etc/postgresql/postmaster.init changed [not included]\n- -- \nDon't die on the motorway. The moon would freeze, the plants would die.\nI couldn't cope if you crashed today. All the things I forgot to say.\n\t- Radiohead, Killer Cars\n\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"But my God shall supply all your need according to his\n riches in glory by Christ Jesus.\" Philippians 4:19\n\n\n", "msg_date": "Wed, 20 Sep 2000 14:16:57 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Debian Bug#72084: Broken permissions required with foreign keys (fwd)" } ]
[ { "msg_contents": "There's two tracebacks of crashed 7.0.2 backends at the end of this\nemail.\n\nI posted earlier this week about a table of ours getting corrupted\nafter some time. The table looks like this:\n\ndetails:\n id | integer | \n attr_type | varchar(32) | \n attr_name | varchar(32) | \n attr_vers | varchar(32) | \n attr_hits | bigint | default 0\n\nWe're using a perl script to update this table from a \"raw\" table:\n\ndetails_raw:\n id | integer | \n stat_date | timestamp | \n attr_type | varchar(32) | \n attr_name | varchar(32) | \n attr_vers | varchar(32) | \n attr_hits | bigint | default 0\n\n\nSELECT\n id, attr_type, attr_name, attr_vers, sum(attr_hits) as attr_hits\nFROM\n details_raw\nWHERE\n stat_date < ? \nGROUP BY\n counter_id, attr_type, attr_name, attr_vers \n;\n\nthen we loop over this UPDATE query calling an INSERT query if UPDATE\nreturns 0 rows. We are vacuuming the table after every update (which\ncan be several hundred rows) which is why i need the vacuum.\n\nINSERT INTO\n details\n (id, attr_type, attr_name, attr_vers, attr_hits)\n VALUES ( ?, ?, ?, ?, ? )\n;\n\nUPDATE\n details\nSET\n attr_hits = attr_hits + ? \nWHERE\n id = ?\n AND attr_type = ?\n AND attr_name = ?\n AND attr_vers = ?\n;\n\nAfter a while we get this crash apparently, followed by crashes of any\nbackend that scans this table, not updating this specific table makes\nthe crashes go away, the problem seems pretty isolated to this data.\n\n% gdb /usr/local/pgsql/bin/postgres postgres.54738.core \n#0 0x4829f77a in memmove () from /usr/lib/libc.so.4\n(gdb) bt\n#0 0x4829f77a in memmove () from /usr/lib/libc.so.4\n#1 0x53260f14 in ?? 
()\n#2 0x8093c93 in vc_attrstats (onerel=0x84a2788, vacrelstats=0x8496290, \n tuple=0xbfbfe93c) at vacuum.c:2354\n#3 0x8091609 in vc_scanheap (vacrelstats=0x8496290, onerel=0x84a2788, \n vacuum_pages=0xbfbfe9d0, fraged_pages=0xbfbfe9c0) at vacuum.c:980\n#4 0x8090ccb in vc_vacone (relid=2238241037, analyze=1, va_cols=0x0)\n at vacuum.c:599\n#5 0x8090454 in vc_vacuum (VacRelP=0xbfbfea60, analyze=1 '\\001', va_cols=0x0)\n at vacuum.c:299\n#6 0x80903dc in vacuum (vacrel=0x84960e8 \"\\230`I\\b \", verbose=1, \n analyze=1 '\\001', va_spec=0x0) at vacuum.c:223\n#7 0x80fa444 in ProcessUtility (parsetree=0x8496110, dest=Remote)\n at utility.c:694\n#8 0x80f7e5e in pg_exec_query_dest (\n query_string=0x81a9370 \"VACUUM verbose analyze webhit_details_formatted;\", \n dest=Remote, aclOverride=0) at postgres.c:617\n#9 0x80f7db9 in pg_exec_query (\n query_string=0x81a9370 \"VACUUM verbose analyze webhit_details_formatted;\")\n at postgres.c:562\n#10 0x80f8d1a in PostgresMain (argc=9, argv=0xbfbff0e0, real_argc=10, \n real_argv=0xbfbffb40) at postgres.c:1590\n#11 0x80e1d06 in DoBackend (port=0x843f000) at postmaster.c:2009\n#12 0x80e1899 in BackendStartup (port=0x843f000) at postmaster.c:1776\n#13 0x80e0abd in ServerLoop () at postmaster.c:1037\n#14 0x80e04be in PostmasterMain (argc=10, argv=0xbfbffb40) at postmaster.c:725\n#15 0x80aee43 in main (argc=10, argv=0xbfbffb40) at main.c:93\n#16 0x80633c5 in _start ()\n(gdb) list\n2349 stats->guess1_hits = 1;\n2350 stats->guess2_hits = 1;\n2351 }\n2352 if (!value_hit)\n2353 {\n2354 vc_bucketcpy(stats->attr, value, &stats->guess2, &stats->guess2_len);\n2355 stats->guess1_hits = 1;\n2356 stats->guess2_hits = 1;\n2357 }\n2358 }\n(gdb) print stats\nNo symbol \"stats\" in current context.\n(gdb) print stats->attr\nNo symbol \"stats\" in current context.\n(gdb) print value_hit\n$1 = 0 '\\000'\n(gdb) print value\n$2 = 1395003156\n(gdb) print stats->guess2_len\nNo symbol \"stats\" in current context.\n(gdb) print i\n$1 = 3\n(gdb) print 
attr_cnt\n$2 = 167920861\n(gdb) print *vacattrstats\nCannot access memory at address 0xb5a9e104.\n(gdb) print tupDesc\n$3 = 0x84a6368\n(gdb) print *tupDesc\n$4 = {natts = 5, attrs = 0x8492748, constr = 0x84a6380}\n(gdb) print *onerel \n$5 = {rd_fd = 138985496, rd_nblocks = 892691556, rd_refcnt = 25698, \n rd_myxactonly = 53 '5', rd_isnailed = 53 '5', rd_isnoname = 100 'd', \n rd_unlinked = 102 'f', rd_am = 0xac70000, rd_rel = 0x40580044, rd_id = 77, \n rd_lockInfo = {lockRelId = {relId = 1937204590, dbId = 1848586042}}, \n rd_att = 0x31737765, rd_rules = 0x2e63712e, rd_istrat = 0x706d7973, \n rd_support = 0x63697461, trigdesc = 0x61632e6f}\n(gdb) print value;\nInvalid character ';' in expression.\n(gdb) print value \n$6 = 1395003156\n(gdb) print *value\n$7 = 892691554\n(gdb) print isnull\n$8 = 0 '\\000'\n(gdb) \n\n * IDENTIFICATION\n * $Header: /home/pgcvs/pgsql/src/backend/commands/vacuum.c,v 1.148 2000/\n05/19 03:22:29 tgl Exp $\n\n\nhere's another that happened to occur during what seems to be a 'COPY OUT':\n#0 0x482a7d95 in ?? ()\n(gdb) bt\n#0 0x482a7d95 in ?? 
()\n#1 0x808c393 in CopyTo (rel=0x8777890, binary=0 '\\000', oids=0 '\\000', \n fp=0x0, delim=0x8159fa9 \"\\t\", null_print=0x8159fab \"\\\\N\") at copy.c:508\n#2 0x808bf99 in DoCopy (relname=0x87230e8 \"~+\", binary=0 '\\000', \n oids=0 '\\000', from=0 '\\000', pipe=1 '\\001', filename=0x0, \n delim=0x8159fa9 \"\\t\", null_print=0x8159fab \"\\\\N\") at copy.c:374\n#3 0x80f98a3 in ProcessUtility (parsetree=0x8723110, dest=Remote)\n at utility.c:262\n#4 0x80f7e5e in pg_exec_query_dest (query_string=0x81a9388 \"\", dest=Remote, \n aclOverride=0) at postgres.c:617\n#5 0x80f7db9 in pg_exec_query (query_string=0x81a9388 \"\") at postgres.c:562\n#6 0x80f8d1a in PostgresMain (argc=9, argv=0xbfbff0e0, real_argc=10, \n real_argv=0xbfbffb40) at postgres.c:1590\n#7 0x80e1d06 in DoBackend (port=0x843f000) at postmaster.c:2009\n#8 0x80e1899 in BackendStartup (port=0x843f000) at postmaster.c:1776\n#9 0x80e0abd in ServerLoop () at postmaster.c:1037\n#10 0x80e04be in PostmasterMain (argc=10, argv=0xbfbffb40) at postmaster.c:725\n#11 0x80aee43 in main (argc=10, argv=0xbfbffb40) at main.c:93\n#12 0x80633c5 in _start ()\n(gdb) up\n#1 0x808c393 in CopyTo (rel=0x8777890, binary=0 '\\000', oids=0 '\\000', \n fp=0x0, delim=0x8159fa9 \"\\t\", null_print=0x8159fab \"\\\\N\") at copy.c:508\n508 string = (char *) (*fmgr_faddr(&out_functions[i]))\n(gdb) list\n503 continue;\n504 }\n505 #endif /* _DROP_COLUMN_HACK__ */\n506 if (!isnull)\n507 {\n508 string = (char *) (*fmgr_faddr(&out_functions[i]))\n509 (value, elements[i], typmod[i]);\n510 CopyAttributeOut(fp, string, delim);\n511 pfree(string);\n512 }\n(gdb) print isnull\n$1 = 0 '\\000'\n(gdb) print string\n$2 = 0xfffffffc <Address 0xfffffffc out of bounds>\n(gdb) print value\n$3 = 1072255572\n(gdb) print elements[i]\n$4 = 11134\n(gdb) print typmod[i] \n$5 = 1742544\n(gdb) print out_functions[i]\n$6 = {fn_addr = 0x4005a, fn_plhandler = 0x208900, fn_oid = 11134, fn_nargs = 0}\n(gdb) \n\n * IDENTIFICATION\n * $Header: 
/home/pgcvs/pgsql/src/backend/commands/copy.c,v 1.106.2.2 2000/06/28 06:13:01 tgl Exp $\n\nI know there's been a couple of updates to the source since this date\n(compiled on Aug 3), any idea if:\na) an upgrade is a good idea\nb) an upgrade is a safe idea\n\nIf there's anything that I can do to provide clearer information?\n\nthanks very much for your time,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Wed, 20 Sep 2000 06:18:08 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "7.0.2 crash, backtrace with debug available" }, { "msg_contents": "* Alfred Perlstein <[email protected]> [000920 06:19] wrote:\n> There's two tracebacks of crashed 7.0.2 backends at the end of this\n> email.\n\nAlso, after this the database is extremely unstable, queries lock up\nand the postgresql server (not OS) requires a complete restart and\nvacuum analyze before it's stable.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Wed, 20 Sep 2000 07:01:42 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.2 crash, backtrace with debug available" }, { "msg_contents": "* Alfred Perlstein <[email protected]> [000920 06:19] wrote:\n> There's two tracebacks of crashed 7.0.2 backends at the end of this\n> email.\n\nSorry, a couple more things I should have included:\n\nI'm running with a ~256 shared memory segment:\n\nDAEMON=/usr/local/pgsql/bin/postmaster\nDATA=/vol/amrd0/database/data\nLOGFILE=/vol/amrd0/database/data/postgres.log\n$DAEMON -N 32 -i -B 32768 -D$DATA -S -o \"-F -S 65534\" > $LOGFILE\n\n-rw------- 1 pgsql pgsql 1179570176 Sep 20 05:35 postgres.54738.core\n vacuuming process, why is it so huge?\n \n-rw------- 1 pgsql pgsql 280502272 Sep 20 05:48 postgres.54896.core\n COPY OUT 
process.\n\n-Alfred\n\n", "msg_date": "Wed, 20 Sep 2000 07:06:01 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0.2 crash, backtrace with debug available" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I posted earlier this week about a table of ours getting corrupted\n> after some time. The table looks like this:\n> ...\n> After a while we get this crash apparently, followed by crashes of any\n> backend that scans this table, not updating this specific table makes\n> the crashes go away, the problem seems pretty isolated to this data.\n\nHmm. It looks like one of the variable-length fields is getting\nclobbered, but I have no idea why. It will probably take some digging\nin the corefile and clobbered table to learn much. The good thing is\nthat you seem to have a fairly easily reproducible bug, so tracking it\ndown should be possible.\n\nIs the data+program needed to trigger the bug self-contained enough that\nyou could wrap it up and send it to me? The most convenient thing from\nmy end would be to reproduce and study it here, if possible.\n\n> I know there's been a couple of updates to the source since this date\n> (compiled on Aug 3), any idea if:\n> a) an upgrade is a good idea\n> b) an upgrade is a safe idea\n\nI see no reason that you shouldn't update to the tip of the\nREL7_0_PATCHES branch, but I don't have a lot of hope that it will cure\nyour problem. I doubt this is related to anything we've fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Sep 2000 10:40:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 crash, backtrace with debug available " } ]
[ { "msg_contents": "This (invalid) query crashes the 7.0.2 backend:\n\nauction=# SELECT (select max(b.lot)) as last_lot,auction_status(a.id) > 0 AS current, a.lot, a.person_id, next_price(a.id), seller.mail AS seller_mail, buyer.mail AS buyer_mail, seller.locale AS seller_locale, buyer.login AS buyer_login, num_bid(a.id), seller.login AS seller_login, t.name AS auction_type FROM auction* a, person seller, person buyer, auction_type t,bid b WHERE a.id = 84 AND seller.id = a.person_id AND COALESCE(a.type,1) = t.id AND buyer.id = 2 AND b.person_id = buyer.id ;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nLet me know if you need the DB schema for debugging.\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nLord, protect me from your followers.\n", "msg_date": "Wed, 20 Sep 2000 15:59:51 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "PG crashes on query" } ]
[ { "msg_contents": "Current ecpg sources will not build on a compiler that doesn't accept\n\"long long int\". They are also overly optimistic about the prospects\nof having strtoull() in libc. I think some autoconf work is needed\nhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Sep 2000 11:20:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "loss of portability in ecpg" }, { "msg_contents": "Tom Lane writes:\n\n> Current ecpg sources will not build on a compiler that doesn't accept\n> \"long long int\". They are also overly optimistic about the prospects\n> of having strtoull() in libc. I think some autoconf work is needed\n> here.\n\nWill look.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 20 Sep 2000 19:46:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: loss of portability in ecpg" }, { "msg_contents": "On Wed, Sep 20, 2000 at 11:20:05AM -0400, Tom Lane wrote:\n> Current ecpg sources will not build on a compiler that doesn't accept\n> \"long long int\". They are also overly optimistic about the prospects\n> of having strtoull() in libc. I think some autoconf work is needed\n> here.\n\nSorry about that. I intended to tell you about these changes since I do not\nknow autoconf well enough to make the changes myself, but real life\ninterfered after I committed them. And then I simply forgot about it. My\nfault.\n\nPlease accept my apologies.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 21 Sep 2000 10:12:02 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: loss of portability in ecpg" } ]
[ { "msg_contents": "COPY tr FROM 'file' USING DELIMITERS '/';\n\nRun-time exception error; current exception: RWBoundsErr\n No handler for exception.\n\nCan anyone explain this to me?\nThanks.\n", "msg_date": "Wed, 20 Sep 2000 17:37:31 +0200", "msg_from": "Jerome Raupach <[email protected]>", "msg_from_op": true, "msg_subject": "error with COPY" } ]
[ { "msg_contents": "Here's what I've come up with to avoid \"permission denied\" errors when a\nRI trigger has to lock a PK table. Whenever the SELECT FOR UPDATE is\nexecuted I temporarily switch the current user id to the owner of the PK\ntable. It's not the grand unified solution via setuid functions that was\nenvisioned now and then, but it does the same conceptually. For a\nterminally elegant solution I can only suggest not using the SPI\ninterface.\n\nI recommend this patch to be checked out by someone knowledgeable in the\nRI area.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/", "msg_date": "Wed, 20 Sep 2000 19:38:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Solution for RI permission problem" }, { "msg_contents": "\nAs a question, since I don't have a source tree available here at work, \nwill there be an issue if an elog occurs between the two user id\nsets? Just wondering, because most of those statements either do some\nSPI thing or elog.\n\nStephan Szabo\[email protected]\n\nOn Wed, 20 Sep 2000, Peter Eisentraut wrote:\n\n> Here's what I've come up with to avoid \"permission denied\" errors when a\n> RI trigger has to lock a PK table. Whenever the SELECT FOR UPDATE is\n> executed I temporarily switch the current user id to the owner of the PK\n> table. It's not the grand unified solution via setuid functions that was\n> envisioned now and then, but it does the same conceptually. 
For a\n> terminally elegant solution I can only suggest not using the SPI\n> interface.\n> \n> I recommend this patch to be checked out by someone knowledgeable in the\n> RI area.\n\n", "msg_date": "Wed, 20 Sep 2000 11:06:16 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solution for RI permission problem" }, { "msg_contents": "\nOn Wed, 20 Sep 2000, Peter Eisentraut wrote:\n\n> Here's what I've come up with to avoid \"permission denied\" errors when a\n> RI trigger has to lock a PK table. Whenever the SELECT FOR UPDATE is\n> executed I temporarily switch the current user id to the owner of the PK\n> table. It's not the grand unified solution via setuid functions that was\n> envisioned now and then, but it does the same conceptually. For a\n> terminally elegant solution I can only suggest not using the SPI\n> interface.\n> \n> I recommend this patch to be checked out by someone knowledgeable in the\n> RI area.\n\nIt seems to be working on my system (and you don't need to give any access\nto the pk table to the user).\n\nWith that, I do have a general question though. Are referential actions\nsupposed to be limited by the permissions of the user executing the query?\nSo, if you for example have write access on the pk table, but not to the\nfk table, and there is a on cascade delete relationship, should that user\nnot be able to delete from the pk table?\n\n", "msg_date": "Sun, 1 Oct 2000 11:28:55 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solution for RI permission problem" }, { "msg_contents": "Stephan Szabo writes:\n\n> With that, I do have a general question though. 
Are referential actions\n> supposed to be limited by the permissions of the user executing the query?\n> So, if you for example have write access on the pk table, but not to the\n> fk table, and there is a on cascade delete relationship, should that user\n> not be able to delete from the pk table?\n\nThen you could delete records that are not in relation to the foreign keys\nin your table. So I suppose not. Of course there does seem to be a very\nlimited range of usefulness of such a setup, but we shouldn't extrapolate\nsomething potentially more useful from that.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 1 Oct 2000 23:05:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solution for RI permission problem" }, { "msg_contents": "\nOn Sun, 1 Oct 2000, Peter Eisentraut wrote:\n\n> Stephan Szabo writes:\n> \n> > With that, I do have a general question though. Are referential actions\n> > supposed to be limited by the permissions of the user executing the query?\n> > So, if you for example have write access on the pk table, but not to the\n> > fk table, and there is a on cascade delete relationship, should that user\n> > not be able to delete from the pk table?\n> \n> Then you could delete records that are not in relation to the foreign keys\n> in your table. So I suppose not. Of course there does seem to be a very\n> limited range of usefulness of such a setup, but we shouldn't extrapolate\n> something potentially more useful from that.\n\nActually, I'm mostly confused about what the spec wants done. 
The section\non the referential actions says things like \"the rows are marked for\ndeletion\" without and I can't find something there that says whether or\nnot you are actually supposed to pay attention to the associated privs.\n\n\n", "msg_date": "Mon, 2 Oct 2000 09:13:41 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solution for RI permission problem" }, { "msg_contents": "Stephan Szabo wrote:\n>\n> On Sun, 1 Oct 2000, Peter Eisentraut wrote:\n>\n> > Stephan Szabo writes:\n> >\n> > > With that, I do have a general question though. Are referential actions\n> > > supposed to be limited by the permissions of the user executing the query?\n> > > So, if you for example have write access on the pk table, but not to the\n> > > fk table, and there is a on cascade delete relationship, should that user\n> > > not be able to delete from the pk table?\n> >\n> > Then you could delete records that are not in relation to the foreign keys\n> > in your table. So I suppose not. Of course there does seem to be a very\n> > limited range of usefulness of such a setup, but we shouldn't extrapolate\n> > something potentially more useful from that.\n>\n> Actually, I'm mostly confused about what the spec wants done. The section\n> on the referential actions says things like \"the rows are marked for\n> deletion\" without and I can't find something there that says whether or\n> not you are actually supposed to pay attention to the associated privs.\n\n I think the user deleting (or updating) the PK table must not\n have DELETE or UPDATE permissions on the FK table. Another\n user, who had ALTER permission for the FK table implicitly\n granted that right due to the CASCADE definition.\n\n The point is IMHO, that the user with the ALTER permission\n for the FK table must have REFERENCE permission to the PK\n table at the time he sets up the constraint. 
Otherwise, he\n could insert references to all PK items without specifying\n CASCADE and thus, deny operations on the PK table.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 4 Oct 2000 05:33:46 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solution for RI permission problem" }, { "msg_contents": "On Wed, 4 Oct 2000, Jan Wieck wrote:\n\n> Stephan Szabo wrote:\n> >\n> > On Sun, 1 Oct 2000, Peter Eisentraut wrote:\n> >\n> > > Stephan Szabo writes:\n> > >\n> > > > With that, I do have a general question though. Are referential actions\n> > > > supposed to be limited by the permissions of the user executing the query?\n> > > > So, if you for example have write access on the pk table, but not to the\n> > > > fk table, and there is a on cascade delete relationship, should that user\n> > > > not be able to delete from the pk table?\n> > >\n> > > Then you could delete records that are not in relation to the foreign keys\n> > > in your table. So I suppose not. Of course there does seem to be a very\n> > > limited range of usefulness of such a setup, but we shouldn't extrapolate\n> > > something potentially more useful from that.\n> >\n> > Actually, I'm mostly confused about what the spec wants done. The section\n> > on the referential actions says things like \"the rows are marked for\n> > deletion\" without and I can't find something there that says whether or\n> > not you are actually supposed to pay attention to the associated privs.\n> \n> I think the user deleting (or updating) the PK table must not\n> have DELETE or UPDATE permissions on the FK table. 
Another\n> user, who had ALTER permission for the FK table implicitly\n> granted that right due to the CASCADE definition.\n>\n> The point is IMHO, that the user with the ALTER permission\n> for the FK table must have REFERENCE permission to the PK\n> table at the time he sets up the constraint. Otherwise, he\n> could insert references to all PK items without specifying\n> CASCADE and thus, deny operations on the PK table.\n\nActually, right now it may be denying non-owners the right to make\nconstraint at all. You have to be a super user or owner of each \nside. I just noticed this yesterday on my CVS copy that it wouldn't\nlet me log in as a different user and create a table that references\nanother table my other user created. I haven't looked, but my guess\nfrom the notices is that it won't let the other user place triggers\non the PK table.\n\nI assume that you're voting on the side of if you set up a cascade you're\nimplicitly giving permission to modify the table through the cascade\nrelationship. I figure I can make it do either thing easily, it's like\nfour lines of code in each of the action triggers to do the change\nownership now, so I want to get an idea of what people think is the right\nbehavior.\n\n", "msg_date": "Wed, 4 Oct 2000 10:18:56 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solution for RI permission problem" } ]
[ { "msg_contents": "I wanted to add a few features and fix a few bugs in the regression test\ndriver, and I ended up re-writing most of it. I'd like to offer it for\ntesting.\n\nFeatures/fixes:\n\n* Use one driver script for both standalone test and test against running\n  installation.\n\n* Also use only one script for both parallel and serial test schedule,\n  allow adding of other schedules (e.g., running a lot of simple tests all\n  in parallel).\n\n* Return useful test summary (x of y tests passed) and exit status, to be\n  run via `make check' (GNU makefile standards)\n\n* Add option to ignore some failed tests (e.g., \"random\") for purposes of\n  the exit status\n\n* Add flag for debug mode\n\n* Avoid use of `make install prefix=xxx', which can (a) render the test\n  suite unusable if you overrode specific installation directories,\n  (b) corrupt your files with wrong hard-coded paths.\n\n* Feeble support for running the test suite outside of the source tree,\n  for binary packages. (incomplete)\n\n\nWhat I would like to do is to commit it in parallel with the existing\ninfrastructure, and if it works out well we can remove the other stuff\nbefore release.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 20 Sep 2000 19:38:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Improved regression test driver" } ]
[ { "msg_contents": "Well, I was interested in binary operators on integers\nand as Peter suggested that I should look into it\nmyself, so I did it.\n\nChoice of operators:\n\n ~ - not\n & - and\n # - xor - I like it :)\n | - or\n\nThings I am unsure of:\n\n1) Precedence. I quite nonscientifically hacked in gram.y,\n and could not still make it understand expression '5 # ~1'\n nor the precedence between '&' and '|#'...\n\n At the moment all the gram.y changes could be dropped and\n it works ok, but without operator precedence. Any hints?\n\n2) Choice of oids. I took 1890 - 1913. Should I have taken\n directly from 1874 upwards, or somewhere else?\n\n3) Choice of operators. As I understand the '^' is taken,\n I wont get it. Now, in gram.y I found that the '|' is\n used in weird situations and with weird precedence so\n maybe I should use something else for OR too?\n\n4) Is anybody else interested? ;)\n\n\nI would like to get comments/further hints on this...\n\n\n\n-- \nmarko", "msg_date": "Wed, 20 Sep 2000 21:11:59 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "[patch,rfc] binary operators on integers" }, { "msg_contents": "Marko Kreen <[email protected]> writes:\n> 1) Precedence. I quite nonscientifically hacked in gram.y,\n> and could not still make it understand expression '5 # ~1'\n> nor the precedence between '&' and '|#'...\n> At the moment all the gram.y changes could be dropped and\n> it works ok, but without operator precedence. Any hints?\n\nWhat you missed is that there's a close coupling between gram.y and\nscan.l. There are certain single-character operators that are returned\nas individual-character tokens by scan.l, and these are exactly the ones\nthat gram.y wants to treat specially. All else are folded into the\ngeneric token \"Op\". 
You'd need to twiddle the character type lists in\nscan.l if you want to treat '~' '&' or '#' specially in gram.y.\n\nHowever, I'm pretty dubious of the idea of changing the precedence\nassigned to these operator names, because there's a real strong risk\nof breaking existing applications if you do that --- worse, of breaking\nthem in a subtle, hard-to-detect way. Even though I think '|' is\nclearly given a bogus precedence, I doubt it's a good idea to change it.\n\n> 3) Choice of operators. As I understand the '^' is taken,\n> I wont get it. Now, in gram.y I found that the '|' is\n> used in weird situations and with weird precedence so\n> maybe I should use something else for OR too?\n\nWell, you *could* use '^' since there's no definition of it for integer\noperands. But that would mean that something like '4^2', which was\nformerly implicitly coerced to float and interpreted as floating\npower function, would suddenly mean something different. Again a\nserious risk of silently breaking applications. This doesn't apply to\n'|' though, since it has no numeric interpretation at all right now.\n\n> 4) Is anybody else interested? ;)\n\nDunno. I think the bitstring datatype is probably a better choice,\nsince it's standard and this feature is not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Sep 2000 12:26:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch,rfc] binary operators on integers " }, { "msg_contents": "On Fri, Sep 22, 2000 at 12:26:45PM -0400, Tom Lane wrote:\n> Marko Kreen <[email protected]> writes:\n> > 1) Precedence. I quite nonscientifically hacked in gram.y,\n> > and could not still make it understand expression '5 # ~1'\n> > nor the precedence between '&' and '|#'...\n> > At the moment all the gram.y changes could be dropped and\n> > it works ok, but without operator precedence. Any hints?\n> \n> What you missed is that there's a close coupling between gram.y and\n> scan.l. 
There are certain single-character operators that are returned\n> as individual-character tokens by scan.l, and these are exactly the ones\n> that gram.y wants to treat specially. All else are folded into the\n> generic token \"Op\". You'd need to twiddle the character type lists in\n> scan.l if you want to treat '~' '&' or '#' specially in gram.y.\n> \n> However, I'm pretty dubious of the idea of changing the precedence\n> assigned to these operator names, because there's a real strong risk\n> of breaking existing applications if you do that --- worse, of breaking\n> them in a subtle, hard-to-detect way. Even though I think '|' is\n> clearly given a bogus precedence, I doubt it's a good idea to change it.\n> \nI guess I'd better drop it then...\n\nOne idea I had while looking at gram.y is that the precedence\nshould somehow be based on context, e.g. depending on what datatypes\nthe operator is used with. Especially because one symbol has different\nmeanings based on the data. Heh, but this would be complex...\n\n> > 3) Choice of operators. As I understand the '^' is taken,\n> >    I wont get it. Now, in gram.y I found that the '|' is\n> >    used in weird situations and with weird precedence so\n> >    maybe I should use something else for OR too?\n> \n> Well, you *could* use '^' since there's no definition of it for integer\n> operands. But that would mean that something like '4^2', which was\n> formerly implicitly coerced to float and interpreted as floating\n> power function, would suddenly mean something different. Again a\n> serious risk of silently breaking applications. This doesn't apply to\n> '|' though, since it has no numeric interpretation at all right now.\n> \nI am afraid of '^'. Also, as the 'power' precedence would be used,\nit would be very un-intuitive; no precedence is better. OTOH the\nbit-string stuff uses '^' (with its precedence) so it would be nice\nto be similar?\n\n> > 4) Is anybody else interested? 
I think the bitstring datatype is probably a better choice,\n> since it's standard and this feature is not.\n> \nI looked at it and did not liked it. If it is from some standard\nthen its nice to PostgreSQL to support it, but somehow I guess\nthat the binary ops on int's would be used more than the bit-string\nstuff. Mostly because its something familiar from other languages.\nBut maybe its just me...\n\nI'll send a revised diff shortly\n\n-- \nmarko\n\n", "msg_date": "Sat, 23 Sep 2000 15:59:06 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch,rfc] binary operators on integers" }, { "msg_contents": "Well, what are we going to do with this? I think we should take it. \nSince I encouraged him to write it, I'd volunteer to take care of it.\n\nWe might want to change the bitxor operator to # (or at least something\ndistinct from ^) as well, for consistency.\n\nMarko Kreen writes:\n\n> \n> Well, I was interested in binary operators on integers\n> and as Peter suggested that I should look into it\n> myself, so I did it.\n> \n> Choice of operators:\n> \n> ~ - not\n> & - and\n> # - xor - I like it :)\n> | - or\n> \n> Things I am unsure of:\n> \n> 1) Precedence. I quite nonscientifically hacked in gram.y,\n> and could not still make it understand expression '5 # ~1'\n> nor the precedence between '&' and '|#'...\n> \n> At the moment all the gram.y changes could be dropped and\n> it works ok, but without operator precedence. Any hints?\n> \n> 2) Choice of oids. I took 1890 - 1913. Should I have taken\n> directly from 1874 upwards, or somewhere else?\n> \n> 3) Choice of operators. As I understand the '^' is taken,\n> I wont get it. Now, in gram.y I found that the '|' is\n> used in weird situations and with weird precedence so\n> maybe I should use something else for OR too?\n> \n> 4) Is anybody else interested? 
;)\n> \n> \n> I would like to get comments/further hints on this...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 12 Oct 2000 21:34:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch,rfc] binary operators on integers" }, { "msg_contents": "Tom Lane writes:\n\n> Even though I think '|' is clearly given a bogus precedence, I doubt\n> it's a good idea to change it.\n\nThe only builtin '|' operator, besides the not-there-yet bitor, is some\narcane prefix operator for the \"tinterval\" type, which returns the start\nof the interval. This is all long dead so that would perhaps give us a\nchance to change this before we add \"or\" operators. That might weigh more\nthan the possibility of a few users having highly specialized '|'\noperators that rely on this precedence.\n\nThe tinterval type has pretty interesting parsing rules, btw.:\n\npeter=# select 'whatever you say'::tinterval;\n ?column?\n-----------------------------------------------------\n [\"1935-12-23 09:42:00+01\" \"1974-04-16 17:52:52+01\"]\n(1 row)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 12 Oct 2000 21:46:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Precedence of '|' operator (was Re: [patch,rfc] binary\n\toperators on integers)" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Even though I think '|' is clearly given a bogus precedence, I doubt\n>> it's a good idea to change it.\n\n> The only builtin '|' operator, besides the not-there-yet bitor, is some\n> arcane prefix operator for the \"tinterval\" type, which returns the start\n> of the interval. This is all long dead so that would perhaps give us a\n> chance to change this before we add \"or\" operators. 
That might weigh more\n> than the possibility of a few users having highly specialized '|'\n> operators that rely on this precedence.\n\nWell, that's a good point --- it isn't going to get any less painful to\nfix it later. Do we want to just remove the special treatment of '|'\nand let it become one with the undifferentiated mass of Op, or do we\nwant to try to set up reasonable precedence for all the bitwise\noperators (and if so, what should that be)? The second choice has a\ngreater chance of breaking existing apps because it's changing more\noperators ...\n\nThomas, any opinions here?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Oct 2000 16:18:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precedence of '|' operator (was Re: [patch,\n\trfc] binary operators on integers)" }, { "msg_contents": "On Thu, Oct 12, 2000 at 09:34:05PM +0200, Peter Eisentraut wrote:\n> Well, what are we going to do with this? I think we should take it. \n> Since I encouraged him to write it, I'd volunteer to take care of it.\n\nNice :)\n\n> We might want to change the bitxor operator to # (or at least something\n> distinct from ^) as well, for consistency.\n\nNote that I sent an updated patch to pgsql-patches, which had\nadded <<, >> operators and the gram.y stuff removed. But there\nI changed the xor operator to '^'. So I can send an updated patch\nwhere xor='#', in case that one was lost? 
pg_operator.h was there more\ncleaner too.\n\n-- \nmarko\n\n", "msg_date": "Thu, 12 Oct 2000 22:30:48 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch,rfc] binary operators on integers" }, { "msg_contents": "On Thu, Oct 12, 2000 at 04:18:05PM -0400, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> Even though I think '|' is clearly given a bogus precedence, I doubt\n> >> it's a good idea to change it.\n> \n> > The only builtin '|' operator, besides the not-there-yet bitor, is some\n> > arcane prefix operator for the \"tinterval\" type, which returns the start\n> > of the interval. This is all long dead so that would perhaps give us a\n> > chance to change this before we add \"or\" operators. That might weigh more\n> > than the possibility of a few users having highly specialized '|'\n> > operators that rely on this precedence.\n> \n> Well, that's a good point --- it isn't going to get any less painful to\n> fix it later. Do we want to just remove the special treatment of '|'\n> and let it become one with the undifferentiated mass of Op, or do we\n> want to try to set up reasonable precedence for all the bitwise\n> operators (and if so, what should that be)? 
The second choice has a\n> greater chance of breaking existing apps because it's changing more\n> operators ...\n> \nFor bitops it would be nice if '~' had a precedence equal to other\nbuiltin unary operators, and '&' had higher precedence than '#' and '|'.\n(C also has XOR higher than OR.)\n\nAbout breaking existing apps - all those operators [~|#&] are\nnot actually in use (well, in the PostgreSQL mainstream). Only\nbitstring in 7.1 will start using them and I guess it hopefully has the\nsame precedence needs :) But yes, some outside add-on may use\nthem, or maybe when in future those ops will be used for something\nelse then it will be messy...\n\nWell, it is not for me to decide, but a Nice Thing would be:\n(Looking at 'Lexical precedence' in docs)\n\n[- unary minus]\t\t'~' unary BITNOT\n\n...\n\n[+ - add sub]\n\t\t\t& BITAND\n[ IS ]\n\n...\n\n[(all other) ]\t\t'#', '|'\n\n\nAlso note that bitstring uses '^' for xor, so it has slightly\nweird rules and is inconsistent with this.\n\n-- \nmarko\n\n", "msg_date": "Thu, 12 Oct 2000 23:11:32 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Precedence of '|' operator (was Re: [patch,\n\trfc] binary operators on integers)" }, { "msg_contents": "> Well, that's a good point --- it isn't going to get any less painful to\n> fix it later. Do we want to just remove the special treatment of '|'\n> and let it become one with the undifferentiated mass of Op, or do we\n> want to try to set up reasonable precedence for all the bitwise\n> operators (and if so, what should that be)? The second choice has a\n> greater chance of breaking existing apps because it's changing more\n> operators ...\n> Thomas, any opinions here?\n\nI'd like to see closer adherence to the \"usual\" operator precedence. But\nI really *hate* having to explicitly call out each rule in the a_expr,\nb_expr, and/or c_expr productions. 
Any way around this?\n\n - Thomas\n", "msg_date": "Mon, 16 Oct 2000 15:13:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precedence of '|' operator (was Re: [patch,rfc] binary\n\toperators on integers)" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'd like to see closer adherence to the \"usual\" operator precedence. But\n> I really *hate* having to explicitly call out each rule in the a_expr,\n> b_expr, and/or c_expr productions. Any way around this?\n\nIt's not easy in yacc/bison, I don't believe. Precedence of an operator\nis converted to precedence of associated productions, so there's no way\nto make it work without an explicit production for each operator token\nthat needs a particular precedence.\n\nIn any case, the only way to make things really significantly better\nwould be if the precedence of an operator could be specified in its\npg_operator entry. That would be way cool, but (a) yacc can't do it,\n(b) there's a fundamental circularity in the idea: you can't identify\nan operator's pg_operator entry until you know its input data types,\nwhich means you have to have already decided which subexpressions are\nits inputs, and (c) the grammar phase of parsing cannot look at database\nentries anyway because of transaction-abort issues.\n\nBecause of point (b) there is no chance of driving precedence lookup\nfrom pg_operator anyway. You can only drive precedence lookup from\nthe operator *name*, not the input datatypes. This being so, I don't\nsee any huge advantage to having the precedence be specified in a\ndatabase table as opposed to hard-coding it in the grammar files.\n\nOne thing that might reduce the rule bloat a little bit is to have\njust one symbolic token (like the existing Op) for each operator\nprecedence level, thus only one production per precedence level in\na_expr and friends. 
Then the lexer would have to have a table to\nlook up operator names to see which symbolic token to return them\nas. Still don't get to go to the database, but at least setting a\nparticular operator name's precedence is a one-liner affair instead\nof a matter of multiple rules.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Oct 2000 11:35:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Precedence of '|' operator (was Re: [patch,\n\trfc] binary operators on integers)" } ]
[ { "msg_contents": "I tried to find information in the lists but got no luck.\n\nI want to make a client application that performs a query to show the\nresults, but the client application stays open and the database gets\nupdated.\n\nI want that the updates of the database reflects on the open client\napplication, I think this can be done with triggers but I'm not sure how\n\nto do this if there are 3 (or more) client applications open at the same\n\ntime, how the trigger can send a refresh to the 3 (or more) of them.\n\nCan anybody help me????\n\n", "msg_date": "Wed, 20 Sep 2000 14:35:54 -0600", "msg_from": "Jesus Sandoval <[email protected]>", "msg_from_op": true, "msg_subject": "Dynamic application data refreshing" }, { "msg_contents": "\n\n> I want to make a client application that performs a query to show the\n> results, but the client application stays open and the database gets\n> updated.\n> \n> I want that the updates of the database reflects on the open client\n> application, I think this can be done with triggers but I'm not sure how\n> to do this if there are 3 (or more) client applications open at the same\n> time, how the trigger can send a refresh to the 3 (or more) of them.\n\nI'm not familiar with the following SQL statements, but I think these\ncould be useful:\n\nLISTEN and NOTIFY.\n \nPapp Gyozo\n\[email protected], [email protected]\n\n\n\n\n", "msg_date": "Wed, 27 Sep 2000 13:20:05 +0200 (MET DST)", "msg_from": "Papp Gyozo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamic application data refreshing" } ]
[ { "msg_contents": "I have a large list ot parameters and I'd like to have just one main\nfunction for checking them all. Like so:\n\ncheck_param(some_identifier_for_the_param, param_value)\n...\n rec RECORD;\n...\n select into rec check_param_function\n from param_check_table\n where id == identifier;\n\n select rec.check_param_function(param_value) as result;\n...\n\n\nThat's the general idea. Is there a way to dynamically call a plsql or sql\nfunction? Is there a way to do what I have above (evaluating a string\nbefore a SQL statement gets sent to postgres)?\n\nThanks,\n\nR.\n\n\n", "msg_date": "Wed, 20 Sep 2000 15:39:37 -0700 (PDT)", "msg_from": "Richard Harvey Chapman <[email protected]>", "msg_from_op": true, "msg_subject": "dynamic SQL/plsql functions" } ]
[ { "msg_contents": "As things stand, if you use the --with-tcl configure option and a\nsufficient Tcl installation could not be found, it will print a message\nand continue without it. I have already on several occasions explained\nwhy I consider that behaviour is undesirable, and it also seems quite\nabsurd, considering that the user explicitly asked for Tcl by specifying\nthe option in the first place.\n\nThat seems easy to fix, but what should we do about the Tk part? \nCurrently, --with-tcl implies Tk, except that it will be disabled if\ntkConfig.sh or X Windows could not be found. I propose that we instead\nmake that a failure and advise the user to use the --without-x option to\ndisable Tk. (That option already exists, but it is not evaluated.) That\nwould imply that Tk = Tcl + X, and consequently X = Tk - Tcl, which would\nfail if we ever add another X program that is unrelated to Tcl/Tk. If you\nare concerned about that, maybe a --without-tk option would be better. \nThe general assumption here is that the majority of users that want to use\nTcl is also equipped with Tk and X, so that only a few users would have to\nspecifically disable Tk.\n\nComments?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 21 Sep 2000 14:57:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Weird Tcl/Tk configuration" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> That seems easy to fix, but what should we do about the Tk part? \n> Currently, --with-tcl implies Tk, except that it will be disabled if\n> tkConfig.sh or X Windows could not be found. I propose that we instead\n> make that a failure and advise the user to use the --without-x option to\n> disable Tk. (That option already exists, but it is not evaluated.) 
That\n> would imply that Tk = Tcl + X, and consequently X = Tk - Tcl, which would\n> fail if we ever add another X program that is unrelated to Tcl/Tk. If you\n> are concerned about that, maybe a --without-tk option would be better. \n\nI think I prefer \"--without-tk\", since that says directly what you mean.\n\n> The general assumption here is that the majority of users that want to use\n> Tcl is also equipped with Tk and X, so that only a few users would have to\n> specifically disable Tk.\n\nThat seems a safe assumption, but there does need to be some way to\ndisable the tk support.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Sep 2000 12:55:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird Tcl/Tk configuration " } ]
[ { "msg_contents": "Hi,\n\nI encountered the following problem:\n\n./configure --enable-debug --prefix=/opt/postgres\nEdit config.h: BLCKSZ 32768\n\npostgres=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.2\n(1 row)\n\nI have the following table:\n\nCREATE TABLE \"folders\" (\n \"nr\" int4 NOT NULL,\n \"parent\" int4,\n \"name\" character varying(100) NOT NULL,\n \"lang\" character varying(2) NOT NULL,\n \"sort_order\" int2 DEFAULT 0 NOT NULL,\n \"stylesheet\" character varying(100),\n \"introduction\" character varying(1000),\n \"template\" character varying(100) NOT NULL,\n \"img_normal\" character varying(50),\n \"img_over\" character varying(50),\n \"img_active\" character varying(50),\n PRIMARY KEY (\"nr\")\n);\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER INSERT OR UPDATE ON \n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \nPROCEDURE \"RI_FKey_check_ins\" ('fk_folders__parent', 'folders', 'folders', \n'UNSPECIFIED', 'parent', 'nr');\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER DELETE ON \n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \nPROCEDURE \"RI_FKey_noaction_del\" ('fk_folders__parent', 'folders', \n'folders', 'UNSPECIFIED', 'parent', 'nr');\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER UPDATE ON \n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \nPROCEDURE \"RI_FKey_noaction_upd\" ('fk_folders__parent', 'folders', \n'folders', 'UNSPECIFIED', 'parent', 'nr');\n\n\nIf I do the following query:\n\nupdate folders set title='Sitemap' where nr=43;\n\nI get the following error in the log:\n\nServer process (pid 31566) exited with status 139 at Thu Sep 21 17:24:39 2000\nTerminating any active server processes...\nServer processes were terminated at Thu Sep 21 17:24:39 2000\nReinitializing shared memory and semaphores\nThe Data Base System is starting 
up\nDEBUG: Data Base System is starting up at Thu Sep 21 17:24:39 2000\nDEBUG: Data Base System was interrupted being in production at Thu Sep 21 \n17:24:25 2000\nDEBUG: Data Base System is in production state at Thu Sep 21 17:24:39 2000\n\nand the following error in psql:\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\nA backtrace says:\n\n#0 ri_BuildQueryKeyFull (key=0xbfffe4c8, constr_id=21463, constr_queryno=0,\n fk_rel=0x0, pk_rel=0x8217c20, argc=6, argv=0x821a9e0) at \nri_triggers.c:2951\n2951 key->fk_relid = fk_rel->rd_id;\n\n(gdb) bt\n#0 ri_BuildQueryKeyFull (key=0xbfffe4c8, constr_id=21463, constr_queryno=0,\n fk_rel=0x0, pk_rel=0x8216780, argc=6, argv=0x8219540) at \nri_triggers.c:2951\n#1 0x813292e in RI_FKey_keyequal_upd () at ri_triggers.c:2853\n#2 0x809cfe2 in DeferredTriggerSaveEvent (rel=0x8216780, event=2,\n oldtup=0x8227bb0, newtup=0x8227ac8) at trigger.c:1904\n#3 0x809c0ed in ExecARUpdateTriggers (estate=0x8225dc8, tupleid=0xbfffe668,\n newtuple=0x8227ac8) at trigger.c:915\n#4 0x80a36a6 in ExecReplace (slot=0x82261e8, tupleid=0xbfffe668,\n estate=0x8225dc8) at execMain.c:1591\n#5 0x80a3261 in ExecutePlan (estate=0x8225dc8, plan=0x8225cb8,\n operation=CMD_UPDATE, offsetTuples=0, numberTuples=0,\n direction=ForwardScanDirection, destfunc=0x8227a60) at execMain.c:1213\n#6 0x80a27be in ExecutorRun (queryDesc=0x8226048, estate=0x8225dc8,\n feature=3, limoffset=0x0, limcount=0x0) at execMain.c:327\n#7 0x8101f84 in ProcessQueryDesc (queryDesc=0x8226048, limoffset=0x0,\n limcount=0x0) at pquery.c:310\n#8 0x8102017 in ProcessQuery (parsetree=0x820a840, plan=0x8225cb8,\n dest=Remote) at pquery.c:353\n#9 0x8100839 in pg_exec_query_dest (\n query_string=0x81bae28 \"update folders set name='Sitemap' where nr=43;\",\n dest=Remote, aclOverride=0) at postgres.c:663\n#10 0x81006fa in pg_exec_query (\n query_string=0x81bae28 \"update folders set 
name='Sitemap' where nr=43;\")\n at postgres.c:562\n#11 0x81018c3 in PostgresMain (argc=4, argv=0xbfffed80, real_argc=5,\n real_argv=0xbffff734) at postgres.c:1590\n#12 0x80e9727 in DoBackend (port=0x81c00d8) at postmaster.c:2009\n#13 0x80e92da in BackendStartup (port=0x81c00d8) at postmaster.c:1776\n#14 0x80e8499 in ServerLoop () at postmaster.c:1037\n#15 0x80e7e5e in PostmasterMain (argc=5, argv=0xbffff734) at postmaster.c:725\n#16 0x80b485b in main (argc=5, argv=0xbffff734) at main.c:93\n\n\nAny ideas? If you need any additional info, please let me know.\n\n\n\nJeroen\n\n", "msg_date": "Thu, 21 Sep 2000 17:41:10 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in RI" }, { "msg_contents": "\nOdd, it looks like it had trouble doing the heap_openr \non the relation, although I don't immediately see why...\n\nWhat does \n select * from pg_trigger where \n tgconstrname='fk_folders__parent' \ngive you?\n\nI wasn't able to duplicate with the table statements below\nand dummy values. 
Do you have a subset of your data that\nwill cause the problem that you can send?\n\nStephan Szabo\[email protected]\n\nOn Thu, 21 Sep 2000, Jeroen van Vianen wrote:\n\n> Hi,\n> \n> I encountered the following problem:\n> \n> ./configure --enable-debug --prefix=/opt/postgres\n> Edit config.h: BLCKSZ 32768\n> \n> postgres=# select version();\n> version\n> ---------------------------------------------------------------\n> PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.2\n> (1 row)\n> \n> I have the following table:\n> \n> CREATE TABLE \"folders\" (\n> \"nr\" int4 NOT NULL,\n> \"parent\" int4,\n> \"name\" character varying(100) NOT NULL,\n> \"lang\" character varying(2) NOT NULL,\n> \"sort_order\" int2 DEFAULT 0 NOT NULL,\n> \"stylesheet\" character varying(100),\n> \"introduction\" character varying(1000),\n> \"template\" character varying(100) NOT NULL,\n> \"img_normal\" character varying(50),\n> \"img_over\" character varying(50),\n> \"img_active\" character varying(50),\n> PRIMARY KEY (\"nr\")\n> );\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER INSERT OR UPDATE ON \n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \n> PROCEDURE \"RI_FKey_check_ins\" ('fk_folders__parent', 'folders', 'folders', \n> 'UNSPECIFIED', 'parent', 'nr');\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER DELETE ON \n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \n> PROCEDURE \"RI_FKey_noaction_del\" ('fk_folders__parent', 'folders', \n> 'folders', 'UNSPECIFIED', 'parent', 'nr');\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER UPDATE ON \n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \n> PROCEDURE \"RI_FKey_noaction_upd\" ('fk_folders__parent', 'folders', \n> 'folders', 'UNSPECIFIED', 'parent', 'nr');\n\n", "msg_date": "Thu, 21 Sep 2000 10:18:27 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in RI" }, {
"msg_contents": "At 10:18 21-9-00 -0700, Stephan Szabo wrote:\n>Odd, it looks like it had trouble doing the heap_openr\n>on the relation, although I don't immediately see why...\n>\n>What does\n> select * from pg_trigger where\n> tgconstrname='fk_folders__parent'\n>give you?\n\nFirst it didn't give me anything (0 rows). After I recreated the constraint \ntriggers:\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER INSERT OR UPDATE ON\n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_check_ins\" ('fk_folders__parent', 'folders', 'folders',\n'UNSPECIFIED', 'parent', 'nr');\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER DELETE ON\n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_noaction_del\" ('fk_folders__parent', 'folders',\n'folders', 'UNSPECIFIED', 'parent', 'nr');\n\nCREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER UPDATE ON\n\"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_noaction_upd\" ('fk_folders__parent', 'folders',\n'folders', 'UNSPECIFIED', 'parent', 'nr');\n\nthe above query returned three rows:\n\njeroenv=> select * from pg_trigger where tgconstrname='fk_folders__parent' ;\n tgrelid | tgname | tgfoid | tgtype | tgenabled | \ntgisconstr\naint | tgconstrname | tgconstrrelid | tgdeferrable | tginitdeferred | \ntgna\nrgs | tgattr | tgargs\n\n---------+----------------------------+--------+--------+-----------+-----------\n-----+--------------------+---------------+--------------+----------------+-----\n----+--------+------------------------------------------------------------------\n-----------\n 20152 | RI_ConstraintTrigger_21856 | 1644 | 21 | t | t\n | fk_folders__parent | 0 | f | f |\n 6 | | \nfk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n\\000nr\\000\n 20152 | RI_ConstraintTrigger_21858 | 1654 | 9 | t | t\n | fk_folders__parent | 0 | f | f |\n 6 | | 
\nfk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n\\000nr\\000\n 20152 | RI_ConstraintTrigger_21860 | 1655 | 17 | t | t\n | fk_folders__parent | 0 | f | f |\n 6 | | \nfk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n\\000nr\\000\n(3 rows)\n\nBut the same query (update folders set title='Sitemap' where nr=43) still \ncrashes the backend at exactly the same spot.\n\nSo, still no clue.\n\nThanks,\n\n\nJeroen\n\n", "msg_date": "Fri, 22 Sep 2000 00:39:45 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in RI" }, { "msg_contents": "\nDid you compile from sources or install from a binaries package?\nI think it would be handy to get a notice from where I think it's\nfailing to open the relation to make sure it's getting the correct\nparameter there. (I don't have source in front of me to give you\na patch - I'll send one tonight)\n\nStephan Szabo\[email protected]\n\nOn Fri, 22 Sep 2000, Jeroen van Vianen wrote:\n\n> At 10:18 21-9-00 -0700, Stephan Szabo wrote:\n> >Odd, it looks like it had trouble doing the heap_openr\n> >on the relation, although I don't immediately see why...\n> >\n> >What does\n> > select * from pg_trigger where\n> > tgconstrname='fk_folders__parent'\n> >give you?\n> \n> First it didn't give me anything (0 rows). 
After I recreated the constraint \n> triggers:\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER INSERT OR UPDATE ON\n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\n> PROCEDURE \"RI_FKey_check_ins\" ('fk_folders__parent', 'folders', 'folders',\n> 'UNSPECIFIED', 'parent', 'nr');\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER DELETE ON\n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\n> PROCEDURE \"RI_FKey_noaction_del\" ('fk_folders__parent', 'folders',\n> 'folders', 'UNSPECIFIED', 'parent', 'nr');\n> \n> CREATE CONSTRAINT TRIGGER \"fk_folders__parent\" AFTER UPDATE ON\n> \"folders\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\n> PROCEDURE \"RI_FKey_noaction_upd\" ('fk_folders__parent', 'folders',\n> 'folders', 'UNSPECIFIED', 'parent', 'nr');\n> \n> the above query returned three rows:\n> \n> jeroenv=> select * from pg_trigger where tgconstrname='fk_folders__parent' ;\n> tgrelid | tgname | tgfoid | tgtype | tgenabled | \n> tgisconstr\n> aint | tgconstrname | tgconstrrelid | tgdeferrable | tginitdeferred | \n> tgna\n> rgs | tgattr | tgargs\n> \n> ---------+----------------------------+--------+--------+-----------+-----------\n> -----+--------------------+---------------+--------------+----------------+-----\n> ----+--------+------------------------------------------------------------------\n> -----------\n> 20152 | RI_ConstraintTrigger_21856 | 1644 | 21 | t | t\n> | fk_folders__parent | 0 | f | f |\n> 6 | | \n> fk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n> \\000nr\\000\n> 20152 | RI_ConstraintTrigger_21858 | 1654 | 9 | t | t\n> | fk_folders__parent | 0 | f | f |\n> 6 | | \n> fk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n> \\000nr\\000\n> 20152 | RI_ConstraintTrigger_21860 | 1655 | 17 | t | t\n> | fk_folders__parent | 0 | f | f |\n> 6 | | \n> fk_folders__parent\\000folders\\000folders\\000UNSPECIFIED\\000parent\n> \\000nr\\000\n> (3 
rows)\n> \n> But the same query (update folders set title='Sitemap' where nr=43) still \n> crashes the backend at exactly the same spot.\n\n", "msg_date": "Thu, 21 Sep 2000 16:36:38 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in RI" }, { "msg_contents": "At 21:13 21-9-00 -0700, Stephan Szabo wrote:\n>This is a one line patch that will throw a notice with\n>what relation name it's trying to open and what it\n>got back in RI_FKey_keyequal_upd. It should say\n>the name of your table and a number, but I expect\n>the number will be 0.\n\nYes, it is. So I also found the error: I did a rename table and the \nconstraint triggers were not updated with the new table name.\n\nMaybe a little check should be built in to check for fkey == 0, like this \n(from the top of my head, no actual checking):\n\n fk_rel = heap_openr(tgargs[RI_FK_RELNAME_ARGNO], NoLock);\n+ if (fk_rel == NULL) {\n+ elog(ERROR, \"In foreign key constraint, cannot open relname: %s\",\n+ tgargs[RI_FK_RELNAME_ARGNO]);\n+ }\n pk_rel = trigdata->tg_relation;\n new_row = trigdata->tg_newtuple;\n old_row = trigdata->tg_trigtuple;\n\n\nThanks for your help,\n\n\nJeroen\n\n\n", "msg_date": "Fri, 22 Sep 2000 10:08:12 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in RI" }, { "msg_contents": "\nActually, current sources already work better (well, elog rather\nthan crash).\n\nEventually, the triggers will reference things by OID rather\nthan name so renames will work. I'd also like to make the \ndependencies known so we can make it work properly when\ndrop column gets implemented. No known eta at this point\nthough.\n\nStephan Szabo\[email protected]\n\nOn Fri, 22 Sep 2000, Jeroen van Vianen wrote:\n\n> At 21:13 21-9-00 -0700, Stephan Szabo wrote:\n> >This is a one line patch that will throw a notice with\n> >what relation name it's trying to open and what it\n> >got back in RI_FKey_keyequal_upd. 
It should say\n> >the name of your table and a number, but I expect\n> >the number will be 0.\n> \n> Yes, it is. So I also found the error: I did a rename table and the \n> constraint triggers were not updated with the new table name.\n> \n> Maybe a little check should be built in to check for fkey == 0, like this \n> (from the top of my head, no actual checking):\n> \n> fk_rel = heap_openr(tgargs[RI_FK_RELNAME_ARGNO], NoLock);\n> + if (fk_rel == NULL) {\n> + elog(ERROR, \"In foreign key constraint, cannot open relname: %s\",\n> + tgargs[RI_FK_RELNAME_ARGNO]);\n> + }\n> pk_rel = trigdata->tg_relation;\n> new_row = trigdata->tg_newtuple;\n> old_row = trigdata->tg_trigtuple;\n\n\n\n", "msg_date": "Fri, 22 Sep 2000 10:03:16 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in RI" } ]
[ { "msg_contents": "On Wed, 20 Sep 2000, you wrote:\n> hi mark,\n>\n> i had posted this to the General Postgres List without knowing your\n> email address. not intending to overstep any toes, i provide it to\n> you. if you have any comments or criticisms or whatever, i'd be\n> glad to rewrite as necessary.\n>\n\nThis seems to be fine. Much better than what was there.\n\nThe 'Notes on Usage' stuff probably ought also appear in the sgml docs\nas well.\n\n-- \nMark Hollomon\n", "msg_date": "Thu, 21 Sep 2000 15:43:03 -0400", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for new PL/Perl README" } ]
[ { "msg_contents": "Michael Meskes writes:\n\n> I just read in fe-connect.c that the use of PQsetdbLogin is not recommended\n> anymore. Shall I replace the call in libecpg?\n\nYou need to if you want to provide SSL functionality.\n\n> But then psql also uses this function I think.\n\nI was lazy. :)\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 21 Sep 2000 21:59:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PQsetdbLogin" }, { "msg_contents": "I just read in fe-connect.c that the use of PQsetdbLogin is not recommended\nanymore. Shall I replace the call in libecpg? But then psql also uses this\nfunction I think.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 21 Sep 2000 14:43:54 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "PQsetdbLogin" }, { "msg_contents": "On Thu, Sep 21, 2000 at 09:59:03PM +0200, Peter Eisentraut wrote:\n> You need to if you want to provide SSL functionality.\n\nOkay, I see. Is there another interface that already uses the new functions\nso I can simply copy that stuff?\n\nMichael\n\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 22 Sep 2000 13:21:38 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PQsetdbLogin" } ]
[ { "msg_contents": "I've been thinking about this for quite some time now but I'm still not\nsure. The question is, is there a way for insert/delete/update to affect 0\nrows other than the where clause giving a condition that is not satisfiable.\n\nOf course this can happen via constraints but then the backend will return\nan error message so I know the difference. \n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 21 Sep 2000 17:08:52 -0700", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT/UPDATE/DELETE" } ]
[ { "msg_contents": "gcc -o ecpg preproc.o pgc.o type.o ecpg.o ecpg_keywords.o output.o keywords.o c_keywords.o ../lib/typename.o descriptor.o variable.o -lz -lcrypt -lnsl -ldl -lm -lbsd -lreadline -ltermcap -lncurses -export-dynamic\npgc.o: In function `yylex':\npgc.o(.text+0x582): undefined reference to `pg_mbcliplen'\npgc.o(.text+0x953): undefined reference to `pg_mbcliplen'\ncollect2: ld returned 1 exit status\n\npg_mbcliplen cannot be used in the frontend. Remove them, please.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 22 Sep 2000 15:31:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "ecpg is broken in current" } ]
[ { "msg_contents": "Hi,\n\nFollowing my bug report yesterday about a bug in RI, I'll first show a \nreproducible example:\n\ncreate table t1 ( a int4 primary key, b varchar(5) );\ncreate table t2 ( b varchar(5) primary key );\nalter table t1 add constraint fk_t1__b foreign key (b) references t2 (b);\ninsert into t2 values ( 'abc' );\ninsert into t2 values ( 'def' );\ninsert into t1 values ( 1, 'abc' );\n-- This statement fails, which is correct\ninsert into t1 values ( 2, 'xyz' );\ninsert into t1 values ( 3, 'def' );\n\n-- Now, do the rename table\nalter table t2 rename to t3;\n-- This statement crashes the backend\ninsert into t1 values ( 4, 'abc' );\n\nWith the attached patch for src/backend/utils/adt/ri_triggers.c, you'll get \nthe following error message instead:\n\nERROR: RI constraint fk_t1__b cannot find table t2\n\nOf course, the long-time solution would be to update the pg_triggers table \non alter table X rename to Y. However, I do not feel qualified to implement \nthis.\n\nI have not executed all different elog()'s that I've added, but I feel \nconfident they'll work.\n\nPlease review my patch,\n\n\nJeroen", "msg_date": "Fri, 22 Sep 2000 13:08:17 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for Bug in RI" }, { "msg_contents": "This oversight (lack of check for heap_open failure) was already fixed\nin another way for 7.1 --- heap_open itself always checks now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Sep 2000 10:58:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch for Bug in RI " }, { "msg_contents": "At 10:58 22-9-00 -0400, Tom Lane wrote:\n>This oversight (lack of check for heap_open failure) was already fixed\n>in another way for 7.1 --- heap_open itself always checks now.\n\nCool, I'll wait eagerly for 7.1 ;-)\n\n\nJeroen\n\n\n", "msg_date": "Fri, 22 Sep 2000 17:34:53 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", 
"msg_from_op": true, "msg_subject": "Re: Patch for Bug in RI " } ]
[ { "msg_contents": "\n In the current CVS:\n\ntest=# SET DateStyle TO DEFAULT;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n\nin log:\n\n[..cut..]\nDEBUG: ProcessUtility: SET DateStyle TO DEFAULT;\n/usr/lib/postgresql/bin/postmaster: reaping dead processes...\n/usr/lib/postgresql/bin/postmaster: CleanupProc: pid 30063 exited with\nstatus 1\nServer process (pid 30063) exited with status 11 at Fri Sep 22 14:35:17 2000\nTerminating any active server processes...\n\n \t\t\t\t\tKarel\n\nPS. Sorry of this brief info, but I haven't time for detail \n exploration now :-(\n\n", "msg_date": "Fri, 22 Sep 2000 14:38:42 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "crash: SET DateStyle TO DEFAULT" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> In the current CVS:\n> test=# SET DateStyle TO DEFAULT;\n> pqReadData() -- backend closed the channel unexpectedly.\n\nConfirmed here. It looks like SetPGVariable() has failed to account\nfor the possibility that its \"value\" argument will be NULL.\n\nThis bug has evidently been in there for a while, which indicates\nthat we don't have any regression tests that exercise SET var TO\nDEFAULT. Probably time to add one --- I see from the CVS log that\nI was burnt on this same point back in February, and now Peter has\nre-introduced the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Sep 2000 11:11:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crash: SET DateStyle TO DEFAULT " } ]
[ { "msg_contents": "I am working on designing some new datatypes and could use some\nguidance.\n\nAlong with each data item, I must keep additional information about\nthe scale of measurement. Further, the relevant scales of measurement\nfall into a few major families of related scales, so at least a\ndifferent type will be required for each of these major families.\nAdditionally, I wish to be able to convert data measured according to\none scale into other scales (both within the same family and between\ndifferent families), and these interconversions require relatively\nlarge sets of parameters.\n\nIt seems that there are several alternative approaches, and I am\nseeking some guidance from the wizards here who have some\nunderstanding of the backend internals, performance tradeoffs, and\nsuch issues.\n\nPossible solutions:\n\n1. Store the data and all the scale parameters within the type.\n\n Advantages: All information contained within each type. Can be\n implemented with no backend changes. No access to ancillary tables\n required, so processing might be fast.\n\n Disadvantages: Duplicate information on the scales recorded in\n each field of the types; i.e., waste of space. I/O is either\n cumbersome (if all parameters are required) or they type-handling\n code has built-in tables for supplying missing parameters, in\n which case the available types and families cannot be extended by\n users without recompiling the code.\n\n2. Store only the data and a reference to a compiled-in data table\n holding the scale parameters.\n\n Advantages: No duplicate information stored in the fields.\n Access to scale data compiled into backend, so processing might be\n fast.\n\n Disadvantages: Tables of scale data fixed at compile time, so\n users cannot add additional scales or families of scales.\n Requires backend changes to implement, but these changes are\n relatively minor since all the scale parameters are compiled into\n the code handling the type.\n\n3. 
Store only the data and a reference to a new system table (or\n tables) holding the scale parameters.\n\n Advantages: No duplicate information stored in the fields.\n Access to scale data _not_ compiled into backend, so users could\n add scales or families of scales by modifying the system tables.\n\n Disadvantages: Requires access to system tables to perform\n conversions, so processing might be slow. Requires more complex\n backend changes to implement, including the ability to retrieve\n information from system tables.\n\nClearly, option 3 is optimal (more flexible, no data duplication)\nunless the access to system tables by the backend presents too much\noverhead. (Other suggestions are welcome, especially if I have\nmisjudged the relative merits of these ideas or missed one\naltogether.) The advice I need is the following:\n\n- How much of an overhead is introduced by requiring the backend to\n query system tables during tuple processing? Is this unacceptable\n from the outset or is it reasonable to consider this option further?\n Note that the size of these new tables will not be large (probably\n less than 100 tuples) if that matters.\n\n- How does one access system tables from the backend code? I seem to\n recall that issuing straight queries via SPI is not necessarily the\n right way to go about this, but I'm not sure where to look for\n alternatives.\n\nThanks for your help.\n\nCheers,\nBrook\n\n", "msg_date": "Fri, 22 Sep 2000 17:05:24 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "type design guidance needed" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> Along with each data item, I must keep additional information about\n> the scale of measurement. 
Further, the relevant scales of measurement\n> fall into a few major families of related scales, so at least a\n> different type will be required for each of these major families.\n> Additionally, I wish to be able to convert data measured according to\n> one scale into other scales (both within the same family and between\n> different families), and these interconversions require relatively\n> large sets of parameters.\n\nIt'd be useful to know more about your measurement scales. Evgeni\nremarks that for his applications, units can be broken down into\nsimple linear combinations of fundamental units --- but if you're\ndoing something like converting between different device-dependent\ncolor spaces, I can well believe that that model wouldn't work...\n\n> - How much of an overhead is introduced by requiring the backend to\n> query system tables during tuple processing? Is this unacceptable\n> from the outset or is it reasonable to consider this option further?\n\nAssuming that the scale tables are not too large and not frequently\nchanged, the ideal access mechanism seems to be the \"system cache\"\nmechanism (cf src/backend/utils/cache/syscache.c,\nsrc/backend/utils/cache/lsyscache.c). The cache support allows each\nbackend to keep copies in memory of recently-used rows of a cached\ntable. Updating a cached table requires rather expensive cross-\nbackend signaling, but as long as that doesn't happen often compared\nto accesses, you win. 
The only real restriction is that you have to\nlook up cached rows by a unique key that corresponds to an index, but\nthat seems not to be a problem for your application.\n\nAdding a new system cache is a tad more invasive than the usual sort of\nuser-defined-type addition, but it's certainly not out of the question.\nBruce Momjian has done it several times and documented the process,\nIIRC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Sep 2000 01:41:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type design guidance needed " }, { "msg_contents": " It'd be useful to know more about your measurement scales. Evgeni\n remarks that for his applications, units can be broken down into\n simple linear combinations of fundamental units --- but if you're\n doing something like converting between different device-dependent\n color spaces, I can well believe that that model wouldn't work...\n\nThose ideas about linear combinations are great, but I think too\nsimplistic for what I have in mind. I'll give it more thought,\nthough, as I further define the structure of all the interconversions.\n\n > - How much of an overhead is introduced by requiring the backend to\n > query system tables during tuple processing? Is this unacceptable\n > from the outset or is it reasonable to consider this option further?\n\n Assuming that the scale tables are not too large and not frequently\n changed, the ideal access mechanism seems to be the \"system cache\"\n mechanism (cf src/backend/utils/cache/syscache.c,\n src/backend/utils/cache/lsyscache.c). The cache support allows each\n backend to keep copies in memory of recently-used rows of a cached\n table. Updating a cached table requires rather expensive cross-\n backend signaling, but as long as that doesn't happen often compared\n to accesses, you win. 
The only real restriction is that you have to\n look up cached rows by a unique key that corresponds to an index, but\n that seems not to be a problem for your application.\n\nI have in mind cases in which the system tables will almost never be\nupdated. That is, the table installed initially will serve the vast\nmajority of purposes, but I'd still like the flexibility of updating\nit when needed. Caches may very well be perfectly appropriate, here;\nthanks for the pointer.\n\n Adding a new system cache is a tad more invasive than the usual sort of\n user-defined-type addition, but it's certainly not out of the question.\n Bruce Momjian has done it several times and documented the process,\n IIRC.\n\nBruce, is that the case? Do you really have it documented? If so,\nwhere?\n\nCheers,\nBrook\n", "msg_date": "Sat, 23 Sep 2000 09:49:50 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type design guidance needed" }, { "msg_contents": "Hi Brook,\n\tIt seems to me that the answer depends on how much effort you want to\nput in to it.\n\nBrook Milligan wrote:\n> \n> I am working on designing some new datatypes and could use some\n> guidance.\n\n> It seems that there are several alternative approaches, and I am\n> seeking some guidance from the wizards here who have some\n> understanding of the backend internals, performance tradeoffs, and\n> such issues.\n> \n> Possible solutions:\n> \n> 1. Store the data and all the scale parameters within the type.\n> \n\nProbably the easiest solution, but it might leave a furry taste in your\nmouth.\n\n> \n> 2. Store only the data and a reference to a compiled-in data table\n> holding the scale parameters.\n> \n\nIf you can fix all the parameters at compile time this is a good\nsolution. Don't forget that the code for the type is going to\ndynamically linked into the backend, so \"compile time\" can happen on the\nfly. 
You can write an external script to update your function's source\ncode with the new data, compile the updated source and relink the new\nexecutable. This is of course a hack. If you really want to do this on\nthe fly you would need to be sure that simultaneously executing\nbackends, which might be linked to different versions of the library,\nalways do consistent things with the datatypes. Also, you might want to\ncheck out what exactly happens when a backend links new symbols over old\nones.\n\nI have actually done this in a situation where I wanted to load a bunch\nof values in to a database, do some analysis, change parameters in\nbackend functions, and repeat. I was only using a single backend at a\ntime, and I closed the backend between relinks. It is easier than it\nsounds, the only part of the backend that you need to understand is the\nfunction manager and dynamic loader.\n\nThe major advantage of this approach is that you are not hacking the\nbackend, and your code might actually continue to work across release\nversions.\n\n> \n> 3. Store only the data and a reference to a new system table (or\n> tables) holding the scale parameters.\n> \n> \n\nThis solution is probably the neatest, and in the long term the most\nrobust. However it might also involve the most effort. Depending on how\nyou are using the datatypes, you will probably want to cache the\ninformation that you are storing in \"system tables\" locally in each\nbackend. Basically this means allocating an array to a statically\nallocated pointer in your code, and populating the array with the data\nfrom the system table the first time that you use it. You also need to\nwrite a trigger that will invalidate the caches in all backends in the\nevent that the system table is updated. There is already a lot of\ncaching code in the backend, and a system for cache invalidation. 
I\nexpect that you would end up modifying or copying that.\n\nAnother point about writing internal backend code is that you end up\nwriting to changing interfaces. You can expect your custom\nmodifications to break with every release. The new function manager,\nwhich is a much needed and neatly executed improvement, broke all of my\ncode. This would be a major consideration if you had to support the code\nacross several releases. Of course if the code is going to be generally\nuseful, and the person who pays you is amenable, you can always submit\nthe modifications as patches.\n\n> \n> Cheers,\n> Brook\n", "msg_date": "Sat, 23 Sep 2000 11:59:01 -0400", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type design guidance needed" } ]
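The per-backend caching that both replies recommend — look rows up by a unique key, populate a local cache on first use, and clear it on the rare invalidation — can be sketched outside the backend. The sketch below is illustrative Python only; `ScaleCache` and `fetch_row` are invented names standing in for the syscache pattern, not backend APIs.

```python
class ScaleCache:
    """Per-process cache for rarely-updated scale rows, in the spirit of
    the backend's syscache mechanism (illustrative; names are invented)."""

    def __init__(self, fetch_row):
        self._fetch = fetch_row   # e.g. a function that queries the scale table
        self._rows = {}

    def lookup(self, key):
        # Only the first lookup per key hits the underlying table;
        # later lookups are served from local memory.
        if key not in self._rows:
            self._rows[key] = self._fetch(key)
        return self._rows[key]

    def invalidate(self):
        # Analogous to the (expensive, rare) cross-backend invalidation
        # signal sent when the cached table is updated.
        self._rows.clear()
```

As in the syscache case, this wins as long as updates (invalidations) are rare compared to lookups.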
[ { "msg_contents": "Brook,\n\nI have been contemplating such data type for years. I believe I have\nassembled the most important parts, but I did not have time to\ncomplete the whole thing.\n\nThe idea is that hte units of measurement can be treated as arithmetic\nexpressions. One can assign each of the few existing base units a\nfixed position in a bit vector, parse the expression, then evaluate it\nto obtain three things: scale factor, numerator and quotient, the\nlatter two being bit vectors.\n\nSo, if you assign the base units as\n\n 'm' => 1,\n 'kg' => 2,\n 's' => 4,\n 'K' => 8,\n 'mol' => 16,\n 'A' => 32,\n 'cd' => 64,\n\nthe unit, umol/min/mg, will be represented as \n\n(0.01667, 00010000,00000110). \n\nSuch structure is compact enough to be stashed into an atomic type.\nIn fact, one needs more than just a plain bit vector to represent\nexponents:\n\numol/min/ml => (0.01667, '00010000', '00000103') (because ml is a m^3)\n\nHere I use the whole charater per bit for clarity, but one does not\nneed more than two or three bits -- you normally don't have kg^4 or\nm^7 in your units.\n\nI considered other alternatives, but none seemed as good as an atomic\ntype. I can bet you will see performance problems and indexing\nnightmare with non-atomic solutions well before you hit the space\nconstraints with the atomic type. You are even likely to see the space\nproblems with the non-atomic storage: pointers can easily cost more\nthan compacted units.\n\nThere are numerous benefits to the atomic type. The units can be\nre-assembled on the output, the operators can be written to work on\nnon-normalized units and discard the incompatible ones, and the\nchances that you screw up the unit integrity are none.\n\nSo, if that makes sense, I will be willing to funnel more energy into\nthis project, and I would aprreciate any co-operation.\n\nIn the meanwhile, you might want to check out what I have done so far.\n\n1. 
A perl parser for the units of measurement that computes units as\n algebraic expressions. I have done it in perl for the ease of\n prototyping, but it is flex- and bison-generated and can be ported\n to c and included into the data type.\n\n Get it from\n http://wit.mcs.anl.gov/~selkovjr/Unit.tgz\n\n This is a regular perl extension; do a \n\n\tperl Makefile.PL; make; make install\n\n type of thing, but first you need to build and install my version of\n bison, http://wit.mcs.anl.gov/~selkovjr/camel-1.24.tar.gz\n\n There is a demo script that you can run as follows\n\n perl browse.pl units\n\n2. The postgres extension, seg, to which I was planning to add the\n units of measurement. It has its own use already, and it\n exemplifies the use of the yacc parser in an extension.\n\n Please see the README in \n\n\thttp://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n\n as well as a brief description in \n\n\thttp://wit.mcs.anl.gov/EMP/seg-type.html\n\n and a running demo in \n\n\thttp://wit.mcs.anl.gov/EMP/indexing.html (search for seg)\n\nFood for thought.\n\n--Gene\n", "msg_date": "Fri, 22 Sep 2000 23:41:41 -0500", "msg_from": "\"Evgeni E. Selkov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type design guidance needed " } ]
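The compact representation Evgeni describes — a scale factor plus exponents over the seven SI base units — can be sketched as follows. This is illustrative Python, not his parser: it uses one signed exponent vector instead of his separate numerator/quotient bit vectors, and the small `DERIVED` table is invented for the example. It reproduces his umol/min/mg case: scale 1/60 ≈ 0.01667, with mol in the numerator and s, kg in the denominator.

```python
from fractions import Fraction

# Base units in the order Evgeni assigns them positions:
BASES = ("m", "kg", "s", "K", "mol", "A", "cd")

# A few derived units as (scale factor, exponents over BASES) --
# a tiny illustrative table, not the parser's actual one.
DERIVED = {
    "umol": (Fraction(1, 10**6), {"mol": 1}),
    "mg":   (Fraction(1, 10**6), {"kg": 1}),
    "min":  (Fraction(60, 1),    {"s": 1}),
}

def unit(name, power=1):
    """Return (scale, exponent vector) for name**power."""
    factor, exps = DERIVED.get(name, (Fraction(1), {name: 1}))
    vec = tuple(exps.get(b, 0) * power for b in BASES)
    return factor ** power, vec

def mul(a, b):
    """Combine two units: scale factors multiply, exponents add."""
    return a[0] * b[0], tuple(x + y for x, y in zip(a[1], b[1]))

# umol/min/mg == umol * min**-1 * mg**-1
scale, vec = mul(mul(unit("umol"), unit("min", -1)), unit("mg", -1))
```

With two or three bits per exponent, such a (scale, vector) pair packs into an atomic value, which is the point of the proposal.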
[ { "msg_contents": ">> Bruce, is that the case? Do you really have it documented? If so,\n>> where?\n\n> src/backend/utils/cache/syscache.c\n\nBTW, it occurs to me that the real reason adding a syscache is invasive\nis that the syscache routines accept parameters that are integer indexes\ninto syscache.c's cacheinfo[] array. So there's no way to add a\nsyscache without changing this file. But suppose that the routines\ninstead accepted pointers to cachedesc structs. Then an add-on module\ncould define its own syscache without ever touching syscache.c. This\nwouldn't even take any widespread code change, just change what the\nmacros AGGNAME &etc expand to...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Sep 2000 12:37:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type design guidance needed " } ]
[ { "msg_contents": "psql has some problems with views in current CVS: \\d doesn't show views,\nand if you do \\d on a specific view, it doesn't identify it as a view\nand doesn't show the view definition rule.\n\nI assume this breakage is from the recent RELKIND_VIEW change;\nprobably psql didn't get updated to know about the new relkind.\n\nAnyone care to work on this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Sep 2000 20:03:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "psql's \\d functions broken for views in current sources" }, { "msg_contents": "\n\nTom Lane wrote:\n\n> psql has some problems with views in current CVS: \\d doesn't show views,\n> and if you do \\d on a specific view, it doesn't identify it as a view\n> and doesn't show the view definition rule.\n>\n> I assume this breakage is from the recent RELKIND_VIEW change;\n> probably psql didn't get updated to know about the new relkind.\n>\n\nProbably psql uses pg_views though I don't remember correctly.\nIt seemd that pg_views(initdb) should be changed first.\n\nRegards.\n\nHiroshi Inoue\n\n\n", "msg_date": "Mon, 25 Sep 2000 10:26:02 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql's \\d functions broken for views in current sources" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Tom Lane wrote:\n>> I assume this breakage is from the recent RELKIND_VIEW change;\n>> probably psql didn't get updated to know about the new relkind.\n\n> Probably psql uses pg_views though I don't remember correctly.\n> It seemd that pg_views(initdb) should be changed first.\n\nNo, pg_views still works --- although it could be made far more\nefficient (don't need the WHERE EXISTS(...) test anymore, just look\nat relkind). 
So I don't think that explains why psql is misbehaving.\n\nYou are right that we ought to change the definition of pg_views,\nanyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Sep 2000 21:37:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql's \\d functions broken for views in current sources " }, { "msg_contents": "Was this addressed?\n\n\n> Hiroshi Inoue <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I assume this breakage is from the recent RELKIND_VIEW change;\n> >> probably psql didn't get updated to know about the new relkind.\n> \n> > Probably psql uses pg_views though I don't remember correctly.\n> > It seemd that pg_views(initdb) should be changed first.\n> \n> No, pg_views still works --- although it could be made far more\n> efficient (don't need the WHERE EXISTS(...) test anymore, just look\n> at relkind). So I don't think that explains why psql is misbehaving.\n> \n> You are right that we ought to change the definition of pg_views,\n> anyway.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Oct 2000 23:41:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql's \\d functions broken for views in current sources" } ]
[ { "msg_contents": "\nTwo routines do eccentric things when they can't find required supporting\ndata:\n\npg_get_userbyid\n\n returns 'unknown (UID=<uid-number>)' when the UID does not exist.\n\npg_get_viewdef\n\n returns 'Not a view' when passed a non-existant or non-view table\n it also signals errors when the underlying metadata can not be found.\n\nThe proposal is to return NULL in the above cases - in the final case,\nprobably also generate a NOTICE.\n\nDoes anybody have a problem with this? Think it's a bad idea etc?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 24 Sep 2000 13:15:55 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RFC - change of behaviour of pg_get_userbyid & pg_get_viewdef?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> pg_get_viewdef\n\n> returns 'Not a view' when passed a non-existant or non-view table\n> it also signals errors when the underlying metadata can not be found.\n\n> The proposal is to return NULL in the above cases - in the final case,\n> probably also generate a NOTICE.\n\nI don't believe it's practical to trap errors and return a NULL for\nbroken views. Moreover, I do not think it's a good idea to respond\nto client errors (invalid view name) the same as database problems\n(broken views). 
So, I agree with the part of the proposal that says\nto return NULL instead of 'Not a view' when there is no view by the\ngiven name, but I do not agree with trying to suppress errors due to\nmetadata problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Sep 2000 19:02:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RFC - change of behaviour of pg_get_userbyid &\n\tpg_get_viewdef?" }, { "msg_contents": "On Sun, 24 Sep 2000, Philip Warner wrote:\n\n> Two routines do eccentric things when they can't find required supporting\n> data:\n> \n> pg_get_userbyid\n> \n> returns 'unknown (UID=<uid-number>)' when the UID does not exist.\n\n[Snip]\n\n> The proposal is to return NULL in the above cases - in the final case,\n> probably also generate a NOTICE.\n\nIn these cases, is NULL = 0? - What if it returns the UID for \"root\"\n(typically UID 0)... I think an error message should/would be better in\nthis case.\n\nJust my $.02.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Sun, 24 Sep 2000 19:22:29 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RFC - change of behaviour of pg_get_userbyid &\n\tpg_get_viewdef?" }, { "msg_contents": "At 19:22 24/09/00 -0500, Dominic J. Eidson wrote:\n>\n>In these cases, is NULL = 0? - What if it returns the UID for \"root\"\n>(typically UID 0)... I think an error message should/would be better in\n>this case.\n>\n\nNo NULL is NULL, a special value that usually means 'nothing found'.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 25 Sep 2000 11:48:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RFC - change of behaviour of pg_get_userbyid &\n\tpg_get_viewdef?" }, { "msg_contents": "-Hello \n- If you want the default value to be \"0\" the code for that is\ndefault = '0'\n\nLike so\n\nCREATE TABLE Customer (Customer_ID INT Default = '0',\n\nI believe this should work.\n\ndannyh\n\[email protected]\n\nOn Mon, 25 Sep 2000, Philip Warner wrote:\n> At 19:22 24/09/00 -0500, Dominic J. Eidson wrote:\n> >\n> >In these cases, is NULL = 0? - What if it returns the UID for \"root\"\n> >(typically UID 0)... I think an error message should/would be better in\n> >this case.\n> >\n> \n> No NULL is NULL, a special value that usually means 'nothing found'.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 26 Sep 2000 23:06:52 +1100", "msg_from": "Danny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] RFC - change of behaviour of pg_get_userbyid &\n\tpg_get_viewdef?" 
}, { "msg_contents": "\nBased on the lack of reaction on GENERAL and SQL, I am inclined to go ahead\nwith the \nchanges below at least as far as returning NULL instead of 'Not a View' or\n'unknown (UID=<uid-number>)' (as per Tom's request), if noone objects...\n\n\nAt 13:15 24/09/00 +1000, Philip Warner wrote:\n>\n>Two routines do eccentric things when they can't find required supporting\n>data:\n>\n>pg_get_userbyid\n>\n> returns 'unknown (UID=<uid-number>)' when the UID does not exist.\n>\n>pg_get_viewdef\n>\n> returns 'Not a view' when passed a non-existant or non-view table\n> it also signals errors when the underlying metadata can not be found.\n>\n>The proposal is to return NULL in the above cases - in the final case,\n>probably also generate a NOTICE.\n>\n>Does anybody have a problem with this? Think it's a bad idea etc?\n>\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 27 Sep 2000 21:10:12 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Change of behaviour of pg_get_userbyid & pg_get_viewdef - do\n it?" } ]
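The design point behind the proposal — return NULL rather than an in-band sentinel string — can be sketched as follows. This is illustrative Python in which `None` stands in for SQL NULL; the `users` table and function names are invented for the example, not the backend's code.

```python
users = {26: "pgsql"}   # hypothetical usesysid -> name mapping

def get_userbyid_old(uid):
    # Sentinel string: in-band, and indistinguishable from a user
    # who is genuinely named "unknown (UID=42)".
    return users.get(uid, "unknown (UID=%d)" % uid)

def get_userbyid_new(uid):
    # None plays the role of SQL NULL: an unambiguous "nothing found"
    # that callers can test for directly.
    return users.get(uid)
```

The same argument applies to pg_get_viewdef returning NULL instead of 'Not a view' for a nonexistent view.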
[ { "msg_contents": "\tOn the documentation\nhttp://www.postgresql.org/users-lounge/docs/7.0/postgres/mvcc4646.htm what\ndoes mean \"...Postgres doesn't remember any information about modified rows\nin memory and so has no limit to the number of rows locked without lock\nescalation.\"?\n\n\tFollowing you find \"However, take into account that SELECT FOR UPDATE will\nmodify selected rows to mark them and so will results in disk writes.\" Why\ndoes it result in disk writes?\n\nPaulo Siqueira\n\n", "msg_date": "Sun, 24 Sep 2000 14:26:30 -0300", "msg_from": "\"Paulo Roberto Siqueira\" <[email protected]>", "msg_from_op": true, "msg_subject": "Row level locks" } ]
[ { "msg_contents": "The comments for bufmgr.c's BufferSync routine point out that it's a\nbad thing for some other backend to be modifying a page while it is\nwritten out. The following text has gone unchanged since Postgres95:\n\n * Also, we need to be sure that no other transaction is\n * modifying the page as we flush it. This is only a problem for objects\n * that use a non-two-phase locking protocol, like btree indices. For\n * those objects, we would like to set a write lock for the duration of\n * our IO. Another possibility is to code updates to btree pages\n * carefully, so that writing them out out of order cannot cause\n * any unrecoverable errors.\n *\n * I don't want to think hard about this right now, so I will try\n * to come back to it later.\n\nUnfortunately, the comment is wrong about this being a problem only for\nindexes. It's possible for an invalid state of a heap page to be\nwritten out, as well. PageAddItem() sets the item pointer for an added\ntuple before it copies the tuple onto the page, so if it is recycling an\nexisting item pointer slot, there is a state where a valid item pointer\nis pointing at a tuple that's wholly or partly not valid. This doesn't\nmatter as far as active backends are concerned because we should be\nholding BUFFER_LOCK_EXCLUSIVE on the page while modifying it. But some\nother backend could be in process of writing out the page (if it had\npreviously dirtied the page), and so it's possible for this invalid\nstate to reach disk. If the database is shut down before the new\nupdate of the page can be written out, then we have a problem.\n\nNormally, the new page state will be written out at transaction commit,\nbut what happens if the current transaction aborts? In that case, the\ndirty page just sits in shared memory. It will get written the next\ntime a transaction modifies the page (and commits), or when some backend\ndecides to recycle the buffer to hold another page. 
But if the\npostmaster gets shut down before that happens, we lose; the dirty page\nis never written at all, and when it's re-read after database restart,\nthe corrupted page state becomes visible.\n\nThe window of vulnerability is considerably wider in 7.0 than in prior\nreleases, because in prior releases *any* transaction commit will write\nall dirty pages. In 7.0 the dirtied page will not get written out until\nwe commit a transaction that modified that particular page (or decide to\nrecycle the buffer). The odds of seeing a problem are still pretty\nsmall, but the risk is definitely there.\n\nI believe the correct fix for this problem is for bufmgr.c to grab\na read lock (BUFFER_LOCK_SHARED) on any page that it is writing out.\nA read lock is sufficient since there's no need to prevent other\nbackends from reading the page, we just need to prevent them from\nchanging it during the I/O.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Sep 2000 19:48:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Concurrent-update problem in bufmgr.c" }, { "msg_contents": "\n\nTom Lane wrote:\n\n[snip]\n\n>\n> The window of vulnerability is considerably wider in 7.0 than in prior\n> releases, because in prior releases *any* transaction commit will write\n> all dirty pages. In 7.0 the dirtied page will not get written out until\n> we commit a transaction that modified that particular page (or decide to\n> recycle the buffer). 
The odds of seeing a problem are still pretty\n> small, but the risk is definitely there.\n>\n> I believe the correct fix for this problem is for bufmgr.c to grab\n> a read lock (BUFFER_LOCK_SHARED) on any page that it is writing out.\n> A read lock is sufficient since there's no need to prevent other\n> backends from reading the page, we just need to prevent them from\n> changing it during the I/O.\n>\n> Comments anyone?\n\nThis seems to be almost same as I posted 4 months ago(smgrwrite()\nwithout LockBuffer(was RE: ...).\nMaybe Vadim would take care of it in the inplementation of WAL.\nThe following was Vadim's reply to you and me.\n\n>\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > As far as I see,PostgreSQL doesn't call LockBuffer() before\n> > > calling smgrwrite(). This seems to mean that smgrwrite()\n> > > could write buffers to disk which are being changed by\n> > > another backend. If the(another) backend was aborted by\n> > > some reason the buffer page would remain half-changed.\n> >\n> > Hmm ... looks fishy to me too. Seems like we ought to hold\n> > BUFFER_LOCK_SHARE on the buffer while dumping it out. It\n> > wouldn't matter under normal circumstances, but as you say\n> > there could be trouble if the other backend crashed before\n> > it could mark the buffer dirty again, or if we had a system\n> > crash before the dirtied page got written again.\n>\n> Well, known issue. Buffer latches were implemented in 6.5 only\n> and there was no time to think about subj hard -:)\n> Yes, we have to shlock buffer before writing and this is what\n> bufmgr will must do for WAL anyway (to ensure that last buffer\n> changes already logged)... 
but please check that buffer is not\n> exc-locked by backend itself somewhere before smgrwrite()...\n>\n> Vadim\n>\n>\n\nRegards.\n\nHiroshi Inoue\n\n\n", "msg_date": "Mon, 25 Sep 2000 09:26:38 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrent-update problem in bufmgr.c" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> This seems to be almost same as I posted 4 months ago(smgrwrite()\n> without LockBuffer(was RE: ...).\n\nYou are right, this was already a known issue (and I had it buried in\nmy to-do list, in fact). I rediscovered it while puzzling over some\nof the corrupted-data reports we've seen lately.\n\nI'll go ahead and make the change in current sources. Does anyone\nhave a strong feeling about whether or not to back-patch it into\nREL7_0 branch for the upcoming 7.0.3 release?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Sep 2000 20:30:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Concurrent-update problem in bufmgr.c " } ]
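The agreed fix — hold BUFFER_LOCK_SHARED for the duration of the write, so a backend that modifies the page under BUFFER_LOCK_EXCLUSIVE can never be caught half-done on disk — can be sketched with a minimal shared/exclusive lock. This is illustrative Python, not the backend's buffer-manager code; `BufferContentLock` and `flush_buffer` are invented names.

```python
import threading

class BufferContentLock:
    """Minimal shared/exclusive lock sketching BUFFER_LOCK_SHARE vs
    BUFFER_LOCK_EXCLUSIVE semantics (illustrative, not the real code)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0
        self._exclusive = False

    def lock_share(self):
        with self._cond:
            while self._exclusive:
                self._cond.wait()
            self._shared += 1

    def unlock_share(self):
        with self._cond:
            self._shared -= 1
            self._cond.notify_all()

    def lock_exclusive(self):
        with self._cond:
            while self._exclusive or self._shared:
                self._cond.wait()
            self._exclusive = True

    def unlock_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()

def flush_buffer(lock, write_page):
    # The proposed BufferSync fix: a share lock suffices, since readers
    # may proceed; only page modification must be excluded during the I/O.
    lock.lock_share()
    try:
        write_page()
    finally:
        lock.unlock_share()
```

While any flusher holds the share lock, a PageAddItem-style modifier waiting for the exclusive lock blocks, so no invalid intermediate page state can reach disk.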
[ { "msg_contents": "> I believe the correct fix for this problem is for bufmgr.c to grab\n> a read lock (BUFFER_LOCK_SHARED) on any page that it is writing out.\n> A read lock is sufficient since there's no need to prevent other\n> backends from reading the page, we just need to prevent them from\n> changing it during the I/O.\n> \n> Comments anyone?\n\nDo it.\n\nVadim\n \n", "msg_date": "Sun, 24 Sep 2000 17:39:26 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Concurrent-update problem in bufmgr.c" } ]
[ { "msg_contents": "Here is my next take on binary operators for integers.\nIt implements the following operators for int2/int4/int8:\n\n ~ - not\n & - and\n ^ - xor\n | - or\n << - shift left\n >> - shift right\n\nNotes:\n\n* My original choice for xor was '#' because the '^' operator conflicts\n with power operator on floats but Tom Lane said:\n\n> Well, you *could* use '^' since there's no definition of it for integer\n> operands. But that would mean that something like '4^2', which was\n> formerly implicitly coerced to float and interpreted as floating\n> power function, would suddenly mean something different. Again a\n> serious risk of silently breaking applications. This doesn't apply to\n> '|' though, since it has no numeric interpretation at all right now.\n\n As the bit-string uses '^' too for xor-ing it would be nice to be\n consistent. I am quite unsure on this matter. The patch now seems\n otherwise sane to me, this is the only issue left.\n\n* On << and >> the second argument is always int32 as this seems\n to be the 'default' int type in PostgreSQL.\n\n* Oids used are 1874 - 1909.\n\nComments?\n\nPatch is against current CVS.\n\n-- \nmarko", "msg_date": "Mon, 25 Sep 2000 09:21:46 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "binary operators on integers" }, { "msg_contents": "Can someone comment on this?\n\n\n\n> Here is my next take on binary operators for integers.\n> It implements the following operators for int2/int4/int8:\n> \n> ~ - not\n> & - and\n> ^ - xor\n> | - or\n> << - shift left\n> >> - shift right\n> \n> Notes:\n> \n> * My original choice for xor was '#' because the '^' operator conflicts\n> with power operator on floats but Tom Lane said:\n> \n> > Well, you *could* use '^' since there's no definition of it for integer\n> > operands. 
But that would mean that something like '4^2', which was\n> > formerly implicitly coerced to float and interpreted as floating\n> > power function, would suddenly mean something different. Again a\n> > serious risk of silently breaking applications. This doesn't apply to\n> > '|' though, since it has no numeric interpretation at all right now.\n> \n> As the bit-string uses '^' too for xor-ing it would be nice to be\n> consistent. I am quite unsure on this matter. The patch now seems\n> otherwise sane to me, this is the only issue left.\n> \n> * On << and >> the second argument is always int32 as this seems\n> to be the 'default' int type in PostgreSQL.\n> \n> * Oids used are 1874 - 1909.\n> \n> Comments?\n> \n> Patch is against current CVS.\n> \n> -- \n> marko\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 00:00:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: binary operators on integers" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this?\n\nWe were debating what to do about the precedence issues; see\nfollowup messages. I have no problem with adding functions\nlike this, just gotta pick the operator names and precedences...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Oct 2000 00:07:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: binary operators on integers " }, { "msg_contents": "This patch was installed, with xor as \"#\". The parser still needs work. 
\nBesides the known issue of \"|\", this also parses funny:\n\n=> select 5 & ~ 6;\nERROR: Unable to identify a right operator '&' for type 'int4'\n\n\nMarko Kreen writes:\n\n> Here is my next take on binary operators for integers.\n> It implements the following operators for int2/int4/int8:\n> \n> ~ - not\n> & - and\n> ^ - xor\n> | - or\n> << - shift left\n> >> - shift right\n> \n> Notes:\n> \n> * My original choice for xor was '#' because the '^' operator conflicts\n> with power operator on floats but Tom Lane said:\n> \n> > Well, you *could* use '^' since there's no definition of it for integer\n> > operands. But that would mean that something like '4^2', which was\n> > formerly implicitly coerced to float and interpreted as floating\n> > power function, would suddenly mean something different. Again a\n> > serious risk of silently breaking applications. This doesn't apply to\n> > '|' though, since it has no numeric interpretation at all right now.\n> \n> As the bit-string uses '^' too for xor-ing it would be nice to be\n> consistent. I am quite unsure on this matter. The patch now seems\n> otherwise sane to me, this is the only issue left.\n> \n> * On << and >> the second argument is always int32 as this seems\n> to be the 'default' int type in PostgreSQL.\n> \n> * Oids used are 1874 - 1909.\n> \n> Comments?\n> \n> Patch is against current CVS.\n> \n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 24 Oct 2000 22:23:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: binary operators on integers" }, { "msg_contents": "On Tue, Oct 24, 2000 at 10:23:55PM +0200, Peter Eisentraut wrote:\n> This patch was installed, with xor as \"#\". The parser still needs work. \n> Besides the known issue of \"|\", this also parses funny:\n> \n> => select 5 & ~ 6;\n> ERROR: Unable to identify a right operator '&' for type 'int4'\n\nI have known that from the beginning. 
In the first patch I did not get\nit to work correctly, so in the second patch I disabled the gram.y hack\naltogether. So this patch does not change anything in parser/.\nAt the moment it has to be written as: select 5 & (~ 6);\n\nI can hack the gram.y and scan.l to get those operators to work,\nbut as far as I saw, no consensus has been reached in -hackers on whether\nand how it should be solved globally?\n\n\n-- \nmarko\n\n", "msg_date": "Tue, 24 Oct 2000 22:38:44 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: binary operators on integers" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This patch was installed, with xor as \"#\". The parser still needs work. \n> Besides the known issue of \"|\", this also parses funny:\n\n> => select 5 & ~ 6;\n> ERROR: Unable to identify a right operator '&' for type 'int4'\n\nI think we're kind of stuck on that, at least in terms of a solution\nspecifically for ~ --- I don't think we should be wiring knowledge of\nwhether specific operators are prefix/suffix/infix into the grammar.\n\nIt might perhaps be possible to tweak the grammar so that\n\n\toperand operator operator operand\n\nis generically resolved as\n\n\toperand infix-op (prefix-op operand)\n\nand not\n\n\t(operand postfix-op) infix-op operand\n\nthe way it is now. Given that postfix operators are relatively seldom\nused, this seems a more sensible default --- but I suppose somewhere out\nthere is an application that will break. 
(At least it probably won't\nbreak silently.)\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Oct 2000 17:30:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] binary operators on integers " }, { "msg_contents": "Looks like this is fixed:\n\t\n\ttest=> select 5 & ~ 6;\n\tERROR: Unable to identify a right operator '&' for type 'int4'\n\t You may need to add parentheses or an explicit cast\n\ttest=> select 5 & (~ 6);\n\t ?column? \n\t----------\n\t 1\n\t(1 row)\n\n> This patch was installed, with xor as \"#\". The parser still needs work. \n> Besides the known issue of \"|\", this also parses funny:\n> \n> => select 5 & ~ 6;\n> ERROR: Unable to identify a right operator '&' for type 'int4'\n> \n> \n> Marko Kreen writes:\n> \n> > Here is my next take on binary operators for integers.\n> > It implements the following operators for int2/int4/int8:\n> > \n> > ~ - not\n> > & - and\n> > ^ - xor\n> > | - or\n> > << - shift left\n> > >> - shift right\n> > \n> > Notes:\n> > \n> > * My original choice for xor was '#' because the '^' operator conflicts\n> > with power operator on floats but Tom Lane said:\n> > \n> > > Well, you *could* use '^' since there's no definition of it for integer\n> > > operands. But that would mean that something like '4^2', which was\n> > > formerly implicitly coerced to float and interpreted as floating\n> > > power function, would suddenly mean something different. Again a\n> > > serious risk of silently breaking applications. This doesn't apply to\n> > > '|' though, since it has no numeric interpretation at all right now.\n> > \n> > As the bit-string uses '^' too for xor-ing it would be nice to be\n> > consistent. I am quite unsure on this matter. 
The patch now seems\n> > otherwise sane to me, this is the only issue left.\n> > \n> > * On << and >> the second argument is always int32 as this seems\n> > to be the 'default' int type in PostgreSQL.\n> > \n> > * Oids used are 1874 - 1909.\n> > \n> > Comments?\n> > \n> > Patch is against current CVS.\n> > \n> > \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 16:30:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: binary operators on integers" }, { "msg_contents": "On Fri, Jan 19, 2001 at 04:30:09PM -0500, Bruce Momjian wrote:\n> Looks like this is fixed:\n> \t\n> \ttest=> select 5 & ~ 6;\n> \tERROR: Unable to identify a right operator '&' for type 'int4'\n> \t You may need to add parentheses or an explicit cast\n> \ttest=> select 5 & (~ 6);\n> \t ?column? \n> \t----------\n> \t 1\n> \t(1 row)\n\nI can still reproduce it:\n\nmarko=# SELECT 5 & ~6;\nERROR: Unable to identify a right operator '&' for type 'int4'\n You may need to add parentheses or an explicit cast\n\n\nOr did you mean it can be fixed with parenthesis? That was the\ncase from the beginning.\n\n\n> \n> > This patch was installed, with xor as \"#\". The parser still needs work. 
\n> > Besides the known issue of \"|\", this also parses funny:\n> > \n> > => select 5 & ~ 6;\n> > ERROR: Unable to identify a right operator '&' for type 'int4'\n> > \n\n-- \nmarko\n\n", "msg_date": "Sat, 20 Jan 2001 17:31:28 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: binary operators on integers" }, { "msg_contents": "Marko Kreen <[email protected]> writes:\n> I can still reproduce it:\n> marko=# SELECT 5 & ~6;\n> ERROR: Unable to identify a right operator '&' for type 'int4'\n> You may need to add parentheses or an explicit cast\n\nCorrect, we did not rejigger the operator precedence.\n\nI played around with this a little bit, and find that the attached patch\nmakes the above case work as desired --- essentially, it changes things\nso that\n\ta_expr Op Op a_expr\nwill be parsed as\n\ta_expr Op (Op a_expr)\nnot\n\t(a_expr Op) Op a_expr\nwhich is what you get now because Op is marked left-associative.\n\nNow, this is a situation where we can't fix one case without breaking\nanother, namely the case where you really DID want the first Op to be\nparsed as a postfix operator. Thus the problem moves over to here:\n\nregression=# select 4! ~ 10;\nERROR: Unable to identify an operator '!' for types 'int4' and 'int4'\n You will have to retype this query using an explicit cast\nregression=# select (4!) ~ 10;\n ?column?\n----------\n f\n(1 row)\n\nwhereas this worked without parens in 7.0.\n\nGiven the infrequency of use of postfix operators compared to prefix,\nI am inclined to think that we should change the grammar to make the\nlatter easier to use at the expense of the former. 
On the other hand,\nit seems there's a pretty large risk of backwards-incompatibility here.\nComments?\n\nBTW, the regress tests do not break, so they contain no examples where\nit makes a difference.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/parser/gram.y.orig\tSat Jan 20 12:37:52 2001\n--- src/backend/parser/gram.y\tSat Jan 20 13:03:17 2001\n***************\n*** 383,388 ****\n--- 383,389 ----\n %nonassoc\tOVERLAPS\n %nonassoc\tBETWEEN\n %nonassoc\tIN\n+ %left\t\tPOSTFIXOP\t\t/* dummy for postfix Op rules */\n %left\t\tOp\t\t\t\t/* multi-character ops and user-defined operators */\n %nonassoc\tNOTNULL\n %nonassoc\tISNULL\n***************\n*** 4312,4320 ****\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", NULL, $2); }\n \t\t| '-' a_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = doNegate($2); }\n! \t\t| '%' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n! \t\t| '^' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| a_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n--- 4313,4321 ----\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", NULL, $2); }\n \t\t| '-' a_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = doNegate($2); }\n! \t\t| '%' a_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n! \t\t| '^' a_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| a_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n***************\n*** 4353,4361 ****\n \n \t\t| a_expr Op a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, $3); }\n! \t\t| Op a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $1, NULL, $2); }\n! \t\t| a_expr Op\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, NULL); }\n \n \t\t| a_expr AND a_expr\n--- 4354,4362 ----\n \n \t\t| a_expr Op a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, $3); }\n! \t\t| Op a_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, $1, NULL, $2); }\n! 
\t\t| a_expr Op\t\t\t\t\t%prec POSTFIXOP\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, NULL); }\n \n \t\t| a_expr AND a_expr\n***************\n*** 4560,4568 ****\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", NULL, $2); }\n \t\t| '-' b_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = doNegate($2); }\n! \t\t| '%' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n! \t\t| '^' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| b_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n--- 4561,4569 ----\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", NULL, $2); }\n \t\t| '-' b_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = doNegate($2); }\n! \t\t| '%' b_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n! \t\t| '^' b_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| b_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n***************\n*** 4589,4597 ****\n \n \t\t| b_expr Op b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, $3); }\n! \t\t| Op b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $1, NULL, $2); }\n! \t\t| b_expr Op\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, NULL); }\n \t\t;\n \n--- 4590,4598 ----\n \n \t\t| b_expr Op b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, $3); }\n! \t\t| Op b_expr\t\t\t\t\t%prec UMINUS\n \t\t\t\t{\t$$ = makeA_Expr(OP, $1, NULL, $2); }\n! \t\t| b_expr Op\t\t\t\t\t%prec POSTFIXOP\n \t\t\t\t{\t$$ = makeA_Expr(OP, $2, $1, NULL); }\n \t\t;\n \n", "msg_date": "Sat, 20 Jan 2001 13:31:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] binary operators on integers " }, { "msg_contents": "On Sat, Jan 20, 2001 at 01:31:49PM -0500, Tom Lane wrote:\n> Given the infrequency of use of postfix operators compared to prefix,\n> I am inclined to think that we should change the grammar to make the\n> latter easier to use at the expense of the former. 
On the other hand,\n> it seems there's a pretty large risk of backwards-incompatibility here.\n> Comments?\n\nI say, go for it! :) if it matters anything :]\n\nAnd the backwards incompatibility should be simply mentioned in\nrelease notes. Only problem is, that this is such a obscure\nincompatibility that I am not sure e.g. distro packagers bother\nto mention it in new 7.1 install splash-screens. \"If you have\nused factorial '!' or start of interval '|' operator in\nexpressions, note that ...\" ?\n\n-- \nmarko\n\n", "msg_date": "Sun, 21 Jan 2001 20:52:30 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] binary operators on integers" }, { "msg_contents": "I wrote:\n> Given the infrequency of use of postfix operators compared to prefix,\n> I am inclined to think that we should change the grammar to make the\n> latter easier to use at the expense of the former. On the other hand,\n> it seems there's a pretty large risk of backwards-incompatibility here.\n> Comments?\n\nI backed away from part of the proposed patch --- changing the\nprecedence of all the prefix-operator productions to UMINUS would\nprobably break people's queries. But I've applied the part that\nchanges the behavior of a_expr Op Op a_expr. This will now be\nparsed as an infix operator followed by a prefix operator.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 17:43:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] binary operators on integers " } ]
[ { "msg_contents": "\nFor a patch to fix the AIX port I would like to differentiate\nVersions below 4.3 and above, or rather I would like to \ndifferentiate whether -ldl has dlopen().\n\nA compiler define would be _AIX43, but I guess we are supposed to \nuse a define from configure.\n\nThanks\nAndreas\n", "msg_date": "Mon, 25 Sep 2000 11:44:01 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "Q: How to #ifdef for dlopen() or a specific OS Version" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> For a patch to fix the AIX port I would like to differentiate\n> Versions below 4.3 and above, or rather I would like to \n> differentiate whether -ldl has dlopen().\n\nChecking the latter condition seems more robust.\n\nIt's not hard. After configure.in's line\n\nAC_CHECK_LIB(dl, main)\n\nyou could add something like\n\nAC_CHECK_LIB(dl, dlopen, [AC_DEFINE(HAVE_DLOPEN_IN_LIBDL)])\n\nand make the corresponding addition to config.h.in and/or\nMakefile.global.in, depending on whether you need access to this symbol\nfrom C code, Makefiles, or both.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Sep 2000 10:55:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: How to #ifdef for dlopen() or a specific OS Version " } ]
[ { "msg_contents": " Date: Monday, September 25, 2000 @ 08:58:47\nAuthor: momjian\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n from hub.org:/home/projects/pgsql/tmp/cvs-serv33900/pgsql/src/backend/utils/adt\n\nModified Files:\n\tformatting.c oracle_compat.c \n\n----------------------------- Log Message -----------------------------\n\n the patch includes:\n\n - rename ichar() to chr() (discussed with Tom)\n\n - add docs for oracle compatible routines:\n\n btrim()\n ascii()\n chr()\n repeat()\n\n - fix bug with timezone in to_char()\n\n - all to_char() variants return NULL instead of textin(\"\")\n if it's needful.\n\n The contrib/odbc is unchanged and contains the same routines as the main\ntree ... because I'm not sure what Thomas's plans are for this :-)\n\n Karel\n---------------------------------------------------------------------------\n\nThis effectively one line patch should fix the fact that\nforeign key definitions in create table were erroring if\na primary key was defined. I was using the columns \nlist to get the columns of the table for comparison, but\nit got reused as a temporary list inside the primary key\nstuff.\n\nStephan Szabo\n\n", "msg_date": "Mon, 25 Sep 2000 08:58:47 -0400 (EDT)", "msg_from": "Bruce Momjian - CVS <momjian>", "msg_from_op": true, "msg_subject": "pgsql/src/backend/utils/adt (formatting.c oracle_compat.c)" }, { "msg_contents": "> \n> This effectively one line patch should fix the fact that\n> foreign key definitions in create table were erroring if\n> a primary key was defined. I was using the columns \n> list to get the columns of the table for comparison, but\n> it got reused as a temporary list inside the primary key\n> stuff.\n> \n> Stephan Szabo\n> \n> \n\nI think this was the fix Stephan was talking about. 
I grabbed all the\npatches a few weeks ago.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 00:02:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/adt (formatting.c\n\toracle_compat.c)" }, { "msg_contents": "\nYep, that's the one. I was going to resend, but it had \ndisappeared from my sent mail for some reason.\n\nOn Tue, 17 Oct 2000, Bruce Momjian wrote:\n\n> > \n> > This effectively one line patch should fix the fact that\n> > foreign key definitions in create table were erroring if\n> > a primary key was defined. I was using the columns \n> > list to get the columns of the table for comparison, but\n> > it got reused as a temporary list inside the primary key\n> > stuff.\n> \n> I think this was the fix Stephan was talking about. I grabbed all the\n> patches a few weeks ago.\n\n", "msg_date": "Tue, 17 Oct 2000 08:44:51 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/src/backend/utils/adt (formatting.c\n\toracle_compat.c)" } ]
[ { "msg_contents": "\nJust going through Peter's new 'mk-snapshot' script, and found a problem:\n\ngmake[4]: Entering directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\nbyacc -d gram.y\nbyacc: f - maximum table size exceeded\ngmake[4]: *** [gram.c] Error 2\ngmake[4]: Leaving directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 25 Sep 2000 10:45:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "byacc problem with FreeBSD ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just going through Peter's new 'mk-snapshot' script, and found a problem:\n\n> gmake[4]: Entering directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n> byacc -d gram.y\n> byacc: f - maximum table size exceeded\n\nbyacc? Why isn't it using bison?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Sep 2000 11:00:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: byacc problem with FreeBSD ... " }, { "msg_contents": "* Tom Lane <[email protected]> [000925 08:11] wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > Just going through Peter's new 'mk-snapshot' script, and found a problem:\n> \n> > gmake[4]: Entering directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n> > byacc -d gram.y\n> > byacc: f - maximum table size exceeded\n> \n> byacc? Why isn't it using bison?\n\nBecause we no longer ship the GPL encumbered bison in the base\nsystem. 
I've mentioned this before.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 25 Sep 2000 09:58:59 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: byacc problem with FreeBSD ..." }, { "msg_contents": "On Mon, 25 Sep 2000, Alfred Perlstein wrote:\n\n> * Tom Lane <[email protected]> [000925 08:11] wrote:\n> > The Hermit Hacker <[email protected]> writes:\n> > > Just going through Peter's new 'mk-snapshot' script, and found a problem:\n> > \n> > > gmake[4]: Entering directory \`/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n> > > byacc -d gram.y\n> > > byacc: f - maximum table size exceeded\n> > \n> > byacc? Why isn't it using bison?\n> \n> Because we no longer ship the GPL encumbered bison in the base\n> system. I've mentioned this before.\n\nD'oh, that would explain why I didn't have it in /usr/local *nod*\n\nthanks alfred ...\n\n\n", "msg_date": "Mon, 25 Sep 2000 19:03:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: byacc problem with FreeBSD ..." } ]
[ { "msg_contents": "Subj, says it all,\n\nPlease don't use C++ style comments in C source files. \nIt does not work for all ports.\n\nCurrently in connect.c.\n\nThanks\nAndreas\n", "msg_date": "Mon, 25 Sep 2000 15:57:24 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "Please no // comments in C source (ecpg)" }, { "msg_contents": "Fixed.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Subj, says it all,\n> \n> Please don't use C++ style comments in C source files. \n> It does not work for all ports.\n> \n> Currently in connect.c.\n> \n> Thanks\n> Andreas\n> \n\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Sep 2000 10:40:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please no // comments in C source (ecpg)" }, { "msg_contents": "On Mon, 25 Sep 2000, Zeugswetter Andreas SB wrote:\n\n> Subj, says it all,\n> \n> Please don't use C++ style comments in C source files. \n> It does not work for all ports.\n\nAFAIK, only GCC supports // as comments.\n\n> Currently in connect.c.\n\nAnd in the java areas, but that doesn't count here ;-)\n\n-- \nPeter T Mount [email protected] http://www.retep.org.uk\nPostgreSQL JDBC Driver http://www.retep.org.uk/postgres/\nJava PDF Generator http://www.retep.org.uk/pdf/\n\n\n", "msg_date": "Mon, 25 Sep 2000 16:30:06 +0100 (BST)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please no // comments in C source (ecpg)" } ]
[ { "msg_contents": "\n> > > Please don't use C++ style comments in C source files. \n> > > It does not work for all ports.\n> > \n> > AFAIK, only GCC supports // as comments.\n> \n> // comments are legal as of Standard C 1999, so expect more \n> compilers to\n> accept them silently. (That still doesn't mean we get to use them, of\n> course.)\n\nWell, xlc (aix compiler) does have a flag to allow // comments,\nso if we think that all other compilers support them we can use the flag\nin the AIX port, and forget about the issue.\n\nBut I guess we better not, and that was why we don't use the flag.\nThat makes me the dummy that complains from time to time,\nsince nobody else who compiles snapshots seems to notice :-)\n\nSuch is life :-)\nAndreas\n", "msg_date": "Mon, 25 Sep 2000 17:50:57 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Please no // comments in C source (ecpg)" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> That makes me the dummy that complains from time to time,\n> since nobody else who compiles snapshots seems to notice :-)\n\nI think most of the regular developers use gcc. It's good to have\npeople testing with other compilers --- keep it up!\n\nEven though C99 does allow // comments, I agree that we have to keep\nthem out of portable code for the foreseeable future.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Sep 2000 12:03:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Please no // comments in C source (ecpg) " } ]
[ { "msg_contents": "\ndamn ... I thought that our configure refused anything *but* bison? how\ncome its allowying me to use byacc? :)\n\n\nOn Mon, 25 Sep 2000, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n> \n> > Just going through Peter's new 'mk-snapshot' script, and found a problem:\n> > \n> > gmake[4]: Entering directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n> > byacc -d gram.y\n> > byacc: f - maximum table size exceeded\n> > gmake[4]: *** [gram.c] Error 2\n> > gmake[4]: Leaving directory `/home/projects/pgsql/snapshot/pgsql/postgresql-snapshot/src/backend/parser'\n> \n> You don't have a bison binary installed on hub.org.\n> \n> hub:~$ which bison\n> hub:~$ locate bison\n> /home/share/info/bison.info.gz\n> /home/share/man/cat1/bison.1.gz\n> /home/share/man/man1/bison.1.gz\n> /home/share/misc/bison.hairy\n> /home/share/misc/bison.simple\n> /home/vhosts/kde.org/cvsroot/kdelibs/jscript/Attic/bison2cpp.h,v\n> /home/vhosts/kde.org/cvsroot/kdelibs/jscript/Attic/cpp2bison.cpp,v\n> /usr/ports/devel/bison\n> /usr/ports/devel/bison/Makefile\n> /usr/ports/devel/bison/files\n> /usr/ports/devel/bison/files/md5\n> /usr/ports/devel/bison/patches\n> /usr/ports/devel/bison/patches/patch-getargs.c\n> /usr/ports/devel/bison/patches/patch-reader.c\n> /usr/ports/devel/bison/pkg\n> /usr/ports/devel/bison/pkg/COMMENT\n> /usr/ports/devel/bison/pkg/DESCR\n> /usr/ports/devel/bison/pkg/PLIST\n> \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 25 Sep 2000 13:22:37 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: byacc problem with FreeBSD ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> damn ... 
I thought that our configure refused anything *but* bison? how\n> come its allowying me to use byacc? :)\n\nI think it should try to use the system yacc if it can't find bison.\nIt is possible to build our grammar with non-bison yaccs, since we\naren't using any bison-only features (not true for lex/flex,\nunfortunately).\n\nPersuading the local yacc to enlarge its tables enough to accept our\ngrammar is an exercise for the user ;-). I have some notes about\nmaking HPUX's yacc work in FAQ_HPUX.\n\nBut our distro should certainly use bison to build the derived files.\nYou used to have bison on hub.org, what happened to it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Sep 2000 12:32:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: byacc problem with FreeBSD ... " }, { "msg_contents": "On Mon, 25 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > damn ... I thought that our configure refused anything *but* bison? how\n> > come its allowying me to use byacc? :)\n> \n> I think it should try to use the system yacc if it can't find bison.\n> It is possible to build our grammar with non-bison yaccs, since we\n> aren't using any bison-only features (not true for lex/flex,\n> unfortunately).\n> \n> Persuading the local yacc to enlarge its tables enough to accept our\n> grammar is an exercise for the user ;-). I have some notes about\n> making HPUX's yacc work in FAQ_HPUX.\n> \n> But our distro should certainly use bison to build the derived files.\n> You used to have bison on hub.org, what happened to it?\n\nNot sure ... 
it's in ports, and it isn't one that I can think of having\never removed *scratch head* And we haven't done any major hardware\nupgrades recently that would have caused me to rebuild /usr/local :(\n\nMaybe up until recently it actually squeaked by on byacc *shrug*\n\n\n", "msg_date": "Mon, 25 Sep 2000 19:02:44 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: byacc problem with FreeBSD ... " } ]
[ { "msg_contents": "No response to this one on -general, so here goes...\n\nThe documentation for initdb says that the \"-t\" (== \"--template\") option\nrecreates the template1 database but doesn't touch anything else. But it\nseems that if it detects a failure it will abort and remove anything it\n*might* have created:\n\n-- begin cut-and-paste\n\n[barbra rp]/usr/lib/postgresql/bin/initdb -t -D /home/rp/tmp/pgtest\nUpdating template1 database only.\nThis database system will be initialized with username \"rp\".\nThis user will own all the data files and must also own the server process.\n\nCreating template database in /home/rp/tmp/pgtest/base/template1\n000925.16:49:28.545 [5432] FATAL 2: BootStrapXLOG failed to create control file (/home/rp/tmp/pgtest/pg_control): 17\n000925.16:49:28.545 [5432] FATAL 2: BootStrapXLOG failed to create control file (/home/rp/tmp/pgtest/pg_control): 17\n\ninitdb failed.\nRemoving /home/rp/tmp/pgtest.\nRemoving temp file /tmp/initdb.5412.\n\n-- end cut-and-paste\n\nIt seems that initdb starts a single-user backend but gives it the \"-x\"\noption, which makes it call BootStrapXLOG, which fails because it\nexpects to be called only on absolutely first-time system startup (?).\ninitdb sees the failure and removes everything under the data directory,\nwhich is the wrong behaviour here. Everything seems to be OK if I fix\ninitdb not to pass \"-x\" to postgres if it's been given \"-t\", but I don't\nknow enough to know that this is really the right thing. 
If it is, I'll\nsubmit a patch; any opinions?\n\nRichard\n", "msg_date": "Mon, 25 Sep 2000 18:32:46 +0100", "msg_from": "Richard Poole <[email protected]>", "msg_from_op": true, "msg_subject": "\"initdb -t\" destroys all databases" }, { "msg_contents": "Richard Poole <[email protected]> writes:\n> It seems that initdb starts a single-user backend but gives it the \"-x\"\n> option, which makes it call BootStrapXLOG, which fails because it\n> expects to be called only on absolutely first-time system startup (?).\n> initdb sees the failure and removes everything under the data directory,\n> which is the wrong behaviour here.\n\nSounds like a bug to me too. Peter E. has been hacking initdb to be\nmore robust; Peter, have you fixed this already in current sources?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Sep 2000 16:31:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases " }, { "msg_contents": "Peter, comments?\n\n\n> Richard Poole <[email protected]> writes:\n> > It seems that initdb starts a single-user backend but gives it the \"-x\"\n> > option, which makes it call BootStrapXLOG, which fails because it\n> > expects to be called only on absolutely first-time system startup (?).\n> > initdb sees the failure and removes everything under the data directory,\n> > which is the wrong behaviour here.\n> \n> Sounds like a bug to me too. Peter E. has been hacking initdb to be\n> more robust; Peter, have you fixed this already in current sources?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 00:04:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases" }, { "msg_contents": "Bruce Momjian writes:\n\n> Peter, comments?\n\nIt doesn't destroy all databases anymore, although I can't make any\nstatements about what it actually does do. I suppose it's still broken.\n\n> > Richard Poole <[email protected]> writes:\n> > > It seems that initdb starts a single-user backend but gives it the \"-x\"\n> > > option, which makes it call BootStrapXLOG, which fails because it\n> > > expects to be called only on absolutely first-time system startup (?).\n> > > initdb sees the failure and removes everything under the data directory,\n> > > which is the wrong behaviour here.\n> > \n> > Sounds like a bug to me too. Peter E. has been hacking initdb to be\n> > more robust; Peter, have you fixed this already in current sources?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> \n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 17 Oct 2000 17:07:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases" }, { "msg_contents": "\nAny idea if this is fixed?\n\n> Bruce Momjian writes:\n> \n> > Peter, comments?\n> \n> It doesn't destroy all databases anymore, although I can't make any\n> statements about what it actually does do. 
I suppose it's still broken.\n> \n> > > Richard Poole <[email protected]> writes:\n> > > > It seems that initdb starts a single-user backend but gives it the \"-x\"\n> > > > option, which makes it call BootStrapXLOG, which fails because it\n> > > > expects to be called only on absolutely first-time system startup (?).\n> > > > initdb sees the failure and removes everything under the data directory,\n> > > > which is the wrong behaviour here.\n> > > \n> > > Sounds like a bug to me too. Peter E. has been hacking initdb to be\n> > > more robust; Peter, have you fixed this already in current sources?\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > \n> > \n> > \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jan 2001 16:43:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any idea if this is fixed?\n\n> Peter, comments?\n>> \n>> It doesn't destroy all databases anymore, although I can't make any\n>> statements about what it actually does do. I suppose it's still broken.\n\nPeter did put in a hack to make sure it wouldn't do \"rm -rf $PGDATA\"\nupon failure, but it still doesn't appear to me to offer any non-broken\nfunctionality. Note my comment in initdb.sh:\n\n# XXX --- I do not believe the \"template_only\" option can actually work.\n# With this coding, it'll fail to make entries for pg_shadow etc. in\n# template1 ... 
tgl 11/2000\n\nIt occurs to me that the only likely use for initdb -t is now served by\n\tDROP DATABASE template1;\n\tCREATE DATABASE template1 WITH TEMPLATE = template0;\nie, we have a *real* way to reconstruct a virgin template1 rather than\nan initdb kluge.\n\nAccordingly, I suggest that initdb -t should be flushed entirely.\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jan 2001 19:53:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases " }, { "msg_contents": "Tom Lane writes:\n\n> It occurs to me that the only likely use for initdb -t is now served by\n> \tDROP DATABASE template1;\n> \tCREATE DATABASE template1 WITH TEMPLATE = template0;\n> ie, we have a *real* way to reconstruct a virgin template1 rather than\n> an initdb kluge.\n\nI agree.\n\n> Accordingly, I suggest that initdb -t should be flushed entirely.\n\nKill it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 20 Jan 2001 19:15:22 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases " }, { "msg_contents": "Tom Lane writes:\n\n> Accordingly, I suggest that initdb -t should be flushed entirely.\n\nI guess we won't need two separate files global.bki and template1.bki\nanymore. That would simplify some things, but maybe it's still a\nstylistic thing.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 23 Jan 2001 21:44:30 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I guess we won't need two separate files global.bki and template1.bki\n> anymore. 
That would simplify some things, but maybe it's still a\n> stylistic thing.\n\nIt's probably not absolutely necessary to have two, but why change it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jan 2001 16:17:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > I guess we won't need two separate files global.bki and template1.bki\n> > anymore. That would simplify some things, but maybe it's still a\n> > stylistic thing.\n> \n> It's probably not absolutely necessary to have two, but why change it?\n\nOne less *bki file certainly would be cleaner. I never understood the\ndifference between them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 07:55:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"initdb -t\" destroys all databases" } ]
[ { "msg_contents": "Hi,\n\nIs there a way to make postgre insensitive about field name cases?\n\nLike \"initdb --fields-are-case-insensitive --compares-are-case-insensitive\"\n\nYes I know about \"CaseIsKept\" and CaseIsNotKept (note the quotes). But that\ngives me more trouble than it solves. And what about \"case insensitive field\nname with spaces\". I believe that space is legal in field names.\n\nAre there any real reason why postgre is sensitive about field names (except\nSQL92 states that this is how it must be)?\n\nI suppose somewhere along the way I have all field names separated from the\nquery, and in which file(s) does that happen? (So I can do my own hack, add\n\"tolower(fieldName)\").\n\nI've tried to locate the right files in the source for 7.0.2, but there is\nmore than one file.\n\n// Jarmo\n\n", "msg_date": "Tue, 26 Sep 2000 09:39:59 +0200", "msg_from": "\"Jarmo Paavilainen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Case sensitive field names" }, { "msg_contents": "Jarmo Paavilainen wrote:\n> \n> Hi,\n> \n> Is there a way to make postgre insensitive about field name cases?\n> \n> Like \"initdb --fields-are-case-insensitive --compares-are-case-insensitive\"\n> \n> Yes I know about \"CaseIsKept\" and CaseIsNotKept (note the quotes). But that\n> gives me more trouble than it solves. And what about \"case insensitive field\n> name with spaces\". 
I believe that space is legal in field names.\n\nThe main problem I see with case-insensitivity is the fact that there\nare always \nmore than one way to do it, as it depends on charset _and_ locale ;(\n\nFor example '�'=='�' in my locale but not in US, not to mention that in\nsome \nlocales even the character count may change when going from upper to\nlower case.\n\nSo I suspect that the only valid reason for case-insensitivity is\ncompatibility with \narbitrarily-case-altering OS-es, like the ones Microsoft produces.\n\nFor any other use WYSIWYG field names should be preferred.\n\n> Are there any real reason why postgre is sensitive about field names (except\n> SQL92 states that this is how it must be)?\n> \n> I suppose somewhere along the way I have all field names separated from the\n> query, and in which file(s) does that happen? (So I can do my own hack, add\n> \"tolower(fieldName)\").\n> \n> I've tried to locate the right files in the source for 7.0.2, but there is\n> more than one file.\n\nI guess the best place would be somewhere very near the lexer.\n\nYou could also try just uppercasing anything outside ''/\"\" even before\nit is \npassed to backend.\n\n---------\nHannu\n", "msg_date": "Tue, 26 Sep 2000 13:35:24 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Case sensitive field names" }, { "msg_contents": "\n...\n> > Is there a way to make postgre insensitive about field name cases?\n> >\n> > Like\n\"initdb --fields-are-case-insensitive --compares-are-case-insensitive\"\n...\n> The main problem I see with case-insensitivity is the fact that there\n> are always more than one way to do it, as it depends on charset _and_\nlocale ;(\n>\n> For example '�'=='�' in my locale but not in US, not to mention that in\n> some locales even the character count may change when going from upper to\n> lower case.\n\nThat's not really a problem with field names. 
*I think* you should always use\nASCII chars in fieldnames (and only those between 32 (space) and 'z').\n\nAnd PostgreSQL should cope with case insensitive search. If not, then I can\nnot use it.\n\nCan PostgreSQL do a case insensitive search?\n\n...\n> arbitrarily-case-altering OS-es, like the ones Microsoft produces.\n\nYeah and microsoft SQL server can do a case insensitive search, so can\nSybase (at least the Win versions).\n\n...\n> > query, and in which file(s) does that happen? (So I can do my own hack,\nadd\n> > \"tolower(fieldName)\").\n...\n> I guess the best place would be somewhere very near the lexer.\n\nI'll look for a good spot.\n\n> You could also try just uppercasing anything outside ''/\"\" even before\n> it is passed to backend.\n\nNo good, because field values should keep case (even if you search on them\ncase insensitive). But then again to use \" as a field value delimiter is\nillegal, isn't it?\n\n// Jarmo\n\n", "msg_date": "Tue, 26 Sep 2000 19:21:46 +0200", "msg_from": "\"Jarmo Paavilainen\" <[email protected]>", "msg_from_op": true, "msg_subject": "SV: Case sensitive field names" }, { "msg_contents": "Jarmo Paavilainen wrote:\n> \n> ...\n> > > Is there a way to make postgre insensitive about field name cases?\n> > >\n> > > Like\n> \"initdb --fields-are-case-insensitive --compares-are-case-insensitive\"\n> ...\n> > The main problem I see with case-insensitivity is the fact that there\n> > are always more than one way to do it, as it depends on charset _and_\n> locale ;(\n> >\n> > For example '�'=='�' in my locale but not in US, not to mention that in\n> > some locales even the character count may change when going from upper to\n> > lower case.\n> \n> That's not really a problem with field names. 
*I think* you should always use\n\nWhat do you mean by \"should\" ;)\n\n> ASCII chars in fieldnames (and only those between 32 (space) and 'z').\n\nhannu=> create table \"b�v\"(\"gbz�h\" int);\nCREATE\nhannu=> \\d b�v\nTable = b�v\n+----------------------------------+----------------------------------+-------+\n|              Field               |               Type               | Length|\n+----------------------------------+----------------------------------+-------+\n| gbz�h                            | int4                             |      4|\n+----------------------------------+----------------------------------+-------+\n\n \n> And PostgreSQL should cope with case insensitive search. If not, then I can\n> not use it.\n> \n> Can PostgreSQL do a case insensitive search?\n\nPostgres can do CI regular expressions:\n\nselect * from books where title ~* '.*Tom.*';\n\n\ncase insensitive LIKE is not directly supported, but you can do\nsomething like\n\nselect * from books where upper(title) LIKE upper('%Tom%');\n\n> \n> ...\n> > arbitrarily-case-altering OS-es, like the ones Microsoft produces.\n> \n> Yeah and microsoft SQL server can do a case insensitive search, so can\n> Sybase (at least the Win versions).\n\nIIRC, MSSQL == Sybase (at least older versions of MSSQL)\n\n> \n> > You could also try just uppercasing anything outside ''/\"\" even before\n> > it is passed to backend.\n> \n> No good, because field values should keep case (even if you search on them\n> case insensitive). But then again to use \" as a field value delimiter is\n> illegal, isn't it?\n\nI understood that you wanted field _names_ to be case-insensitive, not\nfield values.\n\nField names are delimited by \"\", values of type string by ''\n\n---------------\nHannu\n", "msg_date": "Wed, 27 Sep 2000 00:52:24 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SV: Case sensitive field names" } ]
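The two case-insensitive search techniques Hannu shows in this thread can be tried together in one sketch; the books table, its sample row, and the index name below are hypothetical, added only for illustration:

```sql
-- Hypothetical table and data, for illustration only
CREATE TABLE books (title varchar(80));
INSERT INTO books VALUES ('Tom Sawyer');

-- Case-insensitive match with the ~* regular-expression operator
SELECT * FROM books WHERE title ~* 'tom';

-- Case-insensitive LIKE, folding both sides to one case
SELECT * FROM books WHERE upper(title) LIKE upper('%tom%');

-- A functional index can back comparisons on the folded value
-- (whether the planner uses it for LIKE varies by version)
CREATE INDEX books_title_upper ON books (upper(title));
```

Field *names*, by contrast, stay case-sensitive unless written unquoted, in which case they are folded to a canonical case by the parser.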
[ { "msg_contents": "Hello,\nI recently spoke about extending index scan to be able\nto take data directly from index pages. I wanted to know\nwhether I should spend my time and implement it.\nSo that I hacked last pgsql a bit to use proposed scan\nmode and did some measurements (see below). Measurements\nwere done on (id int,txt varchar(20)) table with 1 000 000 rows\nwith btree index on both attrs. Query involved was:\nselect id,count(txt) from big group by id;\nDuplicates distribution on id column was 1:1000. I ran the\nquery twice after linux restart to ensure proper cache \nutilization (on disk heap & index was 90MB in total).\nSo I think that by implementing this scan mode we can expect\nto gain huge speedup in all queries which use indices and\ncan find all data in their pages.\n\nProblems:\nmy changes implemented only indexscan and new cost function.\nit doesn't work when index pages contain tuples which don't\nbelong to our transaction. test was done after vacuum and\nonly one tx running.\n\nTODO:\n- add HeapTupleHeaderData into each IndexTupleData\n- change code to reflect above\n- when deleting-updating heap then also update tuples'\n HeapTupleHeaderData in indices\n\nThe last step could be done in two ways. First by limiting\nnumber of indices for one table we can store corresponding \nindices' TIDs in each heap tuple. The update is then simple\ntaking one disk write.\nOr do it in the standard way - lookup appropriate index tuple\nby traversing index. It will cost us more disk accesses.\n\nIs someone interested in this ??\nregards devik\n\nWith current indexscan:\n! system usage stats:\n! 1812.534505 elapsed 93.060547 user 149.447266 system sec\n! [93.118164 user 149.474609 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 130978/32 [131603/297] page faults/reclaims, 132 [132] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! 
Shared blocks: 555587 read, 551155 written, buffer hit rate = 44.68%\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n! Direct blocks: 0 read, 0 written\n\nWith improved indexscan:\n! system usage stats:\n! 23.686788 elapsed 22.157227 user 0.372071 system sec\n! [22.193359 user 0.385742 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 1186/42 [1467/266] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 4385 read, 0 written, buffer hit rate = 4.32%\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n! Direct blocks: 0 read, 0 written\n\n\n", "msg_date": "Tue, 26 Sep 2000 11:15:28 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql is 75 times faster with my new index scan" }, { "msg_contents": "[email protected] wrote:\n> \n> Hello,\n> I recently spoke about extending index scan to be able\n> to take data directly from index pages. I wanted to know\n> whether I should spend my time and implement it.\n> So that I hacked last pgsql a bit to use proposed scan\n> mode and did some measurements (see below). Measurements\n> were done on (id int,txt varchar(20)) table with 1 000 000 rows\n> with btree index on both attrs. Query involved was:\n> select id,count(txt) from big group by id;\n> Duplicates distribution on id column was 1:1000. I ran the\n> query twice after linux restart to ensure proper cache\n> utilization (on disk heap & index was 90MB in total).\n> So I think that by implementing this scan mode we can expect\n> to gain huge speedup in all queries which use indices and\n> can find all data in their pages.\n> \n> Problems:\n> my changes implemented only indexscan and new cost function.\n> it doesn't work when index pages contain tuples which don't\n> belong to our transaction. 
test was done after vacuum and\n> only one tx running.\n> \n> TODO:\n> - add HeapTupleHeaderData into each IndexTupleData\n> - change code to reflect above\n> - when deleting-updating heap then also update tuples'\n> HeapTupleHeaderData in indices\n\nI doubt everyone would like trading query speed for insert/update \nspeed plus index size\n\nPerhaps this could be implemented as a special type of index which \nyou must explicitly specify at create time ?\n\n> The last step could be done in two ways. First by limiting\n> number of indices for one table we can store corresponding\n> indices' TIDs in each heap tuple. The update is then simple\n> taking one disk write.\n\nWhy limit it ? One could just save a tid array in each tuple .\n\n-----------------\nHannu\n", "msg_date": "Tue, 26 Sep 2000 12:28:07 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "* [email protected] <[email protected]> [000926 02:33] wrote:\n> Hello,\n> I recently spoke about extending index scan to be able\n> to take data directly from index pages.\n[snip]\n> \n> Is someone interested in this ??\n\nConsidering the speedup, I sure as hell am interested. 
:)\n\nWhen are we going to have this?\n\n-Alfred\n", "msg_date": "Tue, 26 Sep 2000 03:22:19 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "\n\nHannu Krosing wrote:\n\n>\n> >\n> > TODO:\n> > - add HeapTupleHeaderData into each IndexTupleData\n> > - change code to reflect above\n> > - when deleting-updating heap then also update tuples'\n> > HeapTupleHeaderData in indices\n>\n> I doubt everyone would like trading query speed for insert/update\n> speed plus index size\n>\n> Perhaps this could be implemented as a special type of index which\n> you must explicitly specify at create time ?\n>\n> > The last step could be done in two ways. First by limiting\n> > number of indices for one table we can store corresponding\n> > indices' TIDs in each heap tuple. The update is then simple\n> > taking one disk write.\n>\n> Why limit it ? One could just save a tid array in each tuple .\n>\n\nIndices' TIDs are transient.\nIsn't it useless to store indices' TIDs ?\n\nRegards.\n\nHiroshi Inoue\n\n", "msg_date": "Tue, 26 Sep 2000 19:59:46 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > > The last step could be done in two ways. First by limiting\n> > > number of indices for one table we can store corresponding\n> > > indices' TIDs in each heap tuple. The update is then simple\n> > > taking one disk write.\n> >\n> > Why limit it ? One could just save a tid array in each tuple .\n\nbecause when you add a new index you would have to rescan the whole\nheap and grow the tid array ..\n\n> Indices' TIDs are transient.\n> Isn't it useless to store indices' TIDs ?\n\nbut yes Hiroshi is right. Index TID is transient. 
I first looked\ninto pg sources two weeks ago so I still have holes in my knowledge.\nSo the only solution is to traverse it ..\ndevik\n\n", "msg_date": "Tue, 26 Sep 2000 13:13:40 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "At 12:28 26/09/00 +0300, Hannu Krosing wrote:\n>> TODO:\n>> - add HeapTupleHeaderData into each IndexTupleData\n>> - change code to reflect above\n>> - when deleting-updating heap then also update tuples'\n>> HeapTupleHeaderData in indices\n>\n>I doubt everyone would like trading query speed for insert/update \n>speed plus index size\n\nAnybody who has a high enquiry:update ratio would, I think. And that will\nbe most people, especially web people.\n\n\n>Perhaps this could be implemented as a special type of index which \n>you must explicitly specify at create time ?\n\nThat would be great, or alternatively, an attribute on the index?\n\n\nDec RDB offers this kind of feature, and it suffers from deadlocking\nproblems because of index node vs. row locking as well as high lock\ncontention when updating indexed data, so it's not just the cost of doing\nthe updates. I definitely think it's a good feature to implement, if\npossible, but making it optional would be wise.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 26 Sep 2000 21:46:16 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> That would be great, or alternatively, an attribute on the index?\n\nyes attribute would be nice.\n\n> \n> Dec RDB offers this kind of feature, and it suffers from deadlocking\n> problems because of index node vs. row locking as well as high lock\n> contention when updating indexed data, so it's not just the cost of doing\n> the updates. I definitely think it's a good feature to implement, if\n\nwhere is the reason of contention/deadlock ? We (or you?) only need to\nupdate index tuple's header data (like TX min/max) and key is\nuntouched. So that all index scans should be able to go without\ndisturbing. Index page should be locked in memory only for few\nticks during actual memcpy to page.\n\nBTW: IMHO when WAL becomes functional, do we still need multi versioned\ntuples in heap ? Why not just version tuples on WAL log and add\nthem during scans ?\n\ndevik\n\n", "msg_date": "Tue, 26 Sep 2000 13:49:41 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > TODO:\n> > - add HeapTupleHeaderData into each IndexTupleData\n> > - change code to reflect above\n> > - when deleting-updating heap then also update tuples'\n> > HeapTupleHeaderData in indices\n> \n> I doubt everyone would like trading query speed for insert/update \n> speed plus index size\n\nIf he is scanning through the entire index, he could do a sequential\nscan of the table, grab all the tid transaction status values, and use\nthose when viewing the index. 
No need to store/update the transaction\nstatus in the index that way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 00:14:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > I doubt everyone would like trading query speed for insert/update \n> > speed plus index size\n> \n> If he is scanning through the entire index, he could do a sequential\n> scan of the table, grab all the tid transaction status values, and use\n> those when viewing the index. No need to store/update the transaction\n> status in the index that way.\n\nHuh ? How ? It is how you do it now. Do you expect\nload several milion transaction statuses into memory,\nthen scan index and lookup these values ?\nMissed I something ?\ndevik\n\n", "msg_date": "Tue, 17 Oct 2000 09:37:27 +0200 (CEST)", "msg_from": "Devik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > > I doubt everyone would like trading query speed for insert/update\n> > > > speed plus index size\n> > >\n> > > If he is scanning through the entire index, he could do a sequential\n> > > scan of the table, grab all the tid transaction status values, and use\n> > > those when viewing the index. No need to store/update the transaction\n> > > status in the index that way.\n> >\n> > Huh ? How ? It is how you do it now. Do you expect\n> > load several milion transaction statuses into memory,\n> > then scan index and lookup these values ?\n> > Missed I something ?\n> > devik\n> >\n> >\n> \n> Not sure. 
I figured they were pretty small values.\n\nIIRC the whole point was to avoid scanning the table ?\n\n-------------\nHannu\n", "msg_date": "Tue, 17 Oct 2000 14:16:24 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": " Hi guys,\n\nHaving some trouble.\n\nOne of my databases appeared to be empty suddenly after having a large\namount of data in it. I contacted our server company and they gave me the\npostgres dir.\n\nI have put back the folder of the newsdatabase from the base dir into the\nbase dir of Postgres and recreated the database.\n\nThat's fine.\n\nHowever now the database is empty. When I do a cat on the file of the same\nname as one of the tables - it has loads of data in it. However when I go\nin to Postgres and try to list the table it comes back with 0 rows.\n\nAny ideas? I am desperate.\n\nI am using linux redhat and Postgres\n\nThanks\nAbe\n\n", "msg_date": "Tue, 17 Oct 2000 12:54:21 +0100", "msg_from": "\"Abe Asghar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Deep Trouble" }, { "msg_contents": "> > > I doubt everyone would like trading query speed for insert/update \n> > > speed plus index size\n> > \n> > If he is scanning through the entire index, he could do a sequential\n> > scan of the table, grab all the tid transaction status values, and use\n> > those when viewing the index. No need to store/update the transaction\n> > status in the index that way.\n> \n> Huh ? How ? It is how you do it now. Do you expect\n> load several milion transaction statuses into memory,\n> then scan index and lookup these values ?\n> Missed I something ?\n> devik\n> \n> \n\nNot sure. I figured they were pretty small values.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 08:05:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > > > those when viewing the index. No need to store/update the transaction\n> > > > status in the index that way.\n> > >\n> > > Huh ? How ? It is how you do it now. Do you expect\n> > > load several milion transaction statuses into memory,\n> > > then scan index and lookup these values ?\n> > > Missed I something ?\n> > > devik\n> > Not sure. I figured they were pretty small values.\n> IIRC the whole point was to avoid scanning the table ?\n\nYes. This was the main point ! For small number of records the\ncurrent method is fast enough. The direct index scan is useful\nfor big tables and doing scan over large parts of them (like\nin aggregates).\ndevik\n\n", "msg_date": "Tue, 17 Oct 2000 15:23:55 +0200 (CEST)", "msg_from": "Devik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > > > I doubt everyone would like trading query speed for insert/update\n> > > > > speed plus index size\n> > > >\n> > > > If he is scanning through the entire index, he could do a sequential\n> > > > scan of the table, grab all the tid transaction status values, and use\n> > > > those when viewing the index. No need to store/update the transaction\n> > > > status in the index that way.\n> > >\n> > > Huh ? How ? It is how you do it now. Do you expect\n> > > load several milion transaction statuses into memory,\n> > > then scan index and lookup these values ?\n> > > Missed I something ?\n> > > devik\n> > >\n> > >\n> > \n> > Not sure. 
I figured they were pretty small values.\n> \n> IIRC the whole point was to avoid scanning the table ?\n\nYes, sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Oct 2000 10:28:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" } ]
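For reference, the benchmark Devik describes at the top of this thread can be reconstructed roughly as follows; the index name is invented here, and the 1,000,000-row data load (duplicates 1:1000 on id) is omitted:

```sql
-- Table as described: (id int, txt varchar(20)), 1,000,000 rows,
-- with a btree index over both attributes
CREATE TABLE big (id int, txt varchar(20));
CREATE INDEX big_id_txt_idx ON big (id, txt);

-- The measured query: with the proposed scan mode, every column it
-- needs is available in the index pages, so the heap need not be read
SELECT id, count(txt) FROM big GROUP BY id;
```

The two timing runs quoted above (1812 s elapsed vs. 23 s elapsed) are what motivate the "75 times faster" claim in the subject line; the open problem the thread settles on is where to keep tuple visibility information so index-only reads stay correct.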
[ { "msg_contents": "I'm thinking about changing the way that access permission checks are\nhandled for rules. The rule mechanism provides that accesses to tables\nthat are mentioned within rules are done with the permissions of the\nrule owner, not the invoking user. The way this is implemented is that\nwhen a rule is substituted into a query, the rule rewriter\n (a) does its own permission checking on the newly-added rangetable\n entries, and\n (b) sets a \"skipAcl\" flag in each such RTE to prevent the executor\n from doing normal permissions checking on that RTE.\n\nThis is pretty ugly. For one thing, it means near-duplicate code that\nhas to be kept in sync between the executor and the rewriter. For\nanother, it's not good that rule-related permissions checks happen at\nrewrite time instead of execution time. That means that a cached\nexecution plan will not respond to later changes in table permissions,\nif the access comes via a rule rather than a direct reference.\n\nWhat I'm thinking about doing is eliminating the \"skipAcl\" RTE field\nand instead adding an Oid field named something like \"checkAclAs\".\nThe semantics of this field would be \"if zero, check access permissions\nfor this table using the current effective userID; but if not zero,\ncheck access permissions as if you are this userID\". Then the rule\nrewriter would do no access permission checks of its own, but would\nset this field appropriately in RTEs that it adds to queries. All the\nactual permissions checking would happen in one place in the executor.\n\nComments? Is this a general enough mechanism, and does it fit well\nwith the various setUID tricks that people are thinking about?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Sep 2000 10:54:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Reimplementing permission checks for rules" }, { "msg_contents": "At 10:54 26/09/00 -0400, Tom Lane wrote:\n>\n>Comments? 
Is this a general enough mechanism, and does it fit well\n>with the various setUID tricks that people are thinking about?\n>\n\nDidn't Peter & Jan have a rewrite of the permissions system in the pipeline\n- or has that disappeared? What Jan was proposing was rather more\nsubstantial than just the setuid stuff, I *think*.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 27 Sep 2000 02:13:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reimplementing permission checks for rules" }, { "msg_contents": "Tom Lane writes:\n\n> What I'm thinking about doing is eliminating the \"skipAcl\" RTE field\n> and instead adding an Oid field named something like \"checkAclAs\".\n> The semantics of this field would be \"if zero, check access permissions\n> for this table using the current effective userID; but if not zero,\n> check access permissions as if you are this userID\". Then the rule\n> rewriter would do no access permission checks of its own, but would\n> set this field appropriately in RTEs that it adds to queries. All the\n> actual permissions checking would happen in one place in the executor.\n\nI like it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 27 Sep 2000 12:41:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reimplementing permission checks for rules" }, { "msg_contents": "Philip Warner writes:\n\n> Didn't Peter & Jan have a rewrite of the permissions system in the pipeline\n> - or has that disappeared? 
What Jan was proposing was rather more\n> substantial than just the setuid stuff, I *think*.\n\nIf I had known that we wouldn't beta until October I probably would have\nstarted on it. But there's a technical incompatibility issue at the core\nof my idea that would have forced a postpone until 7.2 for some parts\nanyway.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 27 Sep 2000 12:42:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reimplementing permission checks for rules" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> What I'm thinking about doing is eliminating the \"skipAcl\" RTE field\n>> and instead adding an Oid field named something like \"checkAclAs\".\n>> The semantics of this field would be \"if zero, check access permissions\n>> for this table using the current effective userID; but if not zero,\n>> check access permissions as if you are this userID\". Then the rule\n>> rewriter would do no access permission checks of its own, but would\n>> set this field appropriately in RTEs that it adds to queries. All the\n>> actual permissions checking would happen in one place in the executor.\n\n> I like it.\n\nOK. BTW, what is the status of the changeover you proposed re using\nOIDs instead of int4 userids as the unique identifiers for users?\nIn other words, should my field be type Oid or type int4?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Sep 2000 10:41:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reimplementing permission checks for rules " }, { "msg_contents": "Tom Lane writes:\n\n> OK. 
BTW, what is the status of the changeover you proposed re using\n> OIDs instead of int4 userids as the unique identifiers for users?\n\nBecause of the pg_dumpall thing that had to be postponed for another\nrelease, otherwise the users would be associated to the wrong groups on\nrestore.\n\n> In other words, should my field be type Oid or type int4?\n\nInteresting question, actually, because the master uid global variable has\nalways been a Oid type but it was mostly referenced as int4. Considering\nthat we have a whole oid/int4 mess and that you can't have negative uid's\nanyway, you might as well go for the Oid now if you don't mind.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 28 Sep 2000 10:49:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reimplementing permission checks for rules " } ]
[ { "msg_contents": "I've posted replacement RPMs for php3 for use with PostgreSQL-7.0.2. The\nRPMs as shipped with Mandrake-7.1 were for PostgreSQL-6.5.3, and\ncontained a dependency on libpq.so.2.\n\nThey are at\n\n ftp://ftp.postgresql.org/pub/binary/v7.0.2/Mandrake-7.1/RPMS/\n\nThese are simply rebuilt versions of the latest security-patch release\nof the Mandrake RPMs, with a release identifier of 2mdkPG7 rather than\n2mdk. Nothing was changed in the build.\n\n - Thomas\n", "msg_date": "Tue, 26 Sep 2000 15:00:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "New mod_php3 RPMs for Mandrake" } ]
[ { "msg_contents": "Olof Nyqvist ([email protected]) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\n7.0.2 source rpm failed to compile\n\nLong Description\nI'm running RedHat 6.2 SPARC, two processors, 128 MB RAM\nI downloaded postgresql-7.0.2-2.src.rpm from your main FTP site and tried 'rpm --rebuild postgresql-7.0.2-2.src.rpm', it ran fine for a long time but then exited abnormally.\n\nBasically nothing has been upgraded or changed in the RedHat installation.\n\nThese postgres-packages where installed with the RedHat distribution: postgresql-6.5.3-6\npostgresql-devel-6.5.3-6\npostgresql-jdbc-6.5.3-6\npostgresql-odbc-6.5.3-6\npostgresql-perl-6.5.3-6\npostgresql-server-6.5.3-6\n\nI want to upgrade to 7.0 to benefit from the FOREIGN KEY implementation.\n\nThis is the output from when it exited:\n***********************************************************\nmake[2]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl/tcl'\nmake[1]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl'\nAll of PostgreSQL is successfully made. Ready to install.\n+ pushd interfaces/python\n/usr/src/redhat/BUILD/postgresql-7.0.2/src/interfaces/python /usr/src/redhat/BUILD/postgresql-7.0.2/src /usr/src/redhat/BUILD/postgresql-7.0.2\n+ cp /usr/lib/python1.5/config/Makefile.pre.in .\ncp: /usr/lib/python1.5/config/Makefile.pre.in: No such file or directory\nBad exit status from /var/tmp/rpm-tmp.46557 (%build)\n****************************************************************\n\nA check in the rpm database showed that these python modules are installed:\npython-1.5.2-13\npythonlib-1.23-1\nrpm-python-3.0.4-0.48\n\nSo, what gives? 
\n\nSample Code\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Tue, 26 Sep 2000 12:29:58 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "7.0.2 source rpm failed to compile" }, { "msg_contents": "> Short Description\n> 7.0.2 source rpm failed to compile\n> Long Description\n> I'm running RedHat 6.2 SPARC, two processors, 128 MB RAM\n> I downloaded postgresql-7.0.2-2.src.rpm from your main FTP site and tried 'rpm --rebuild postgresql-7.0.2-2.src.rpm', it ran fine for a long time but then exited abnormally.\n> This is the output from when it exited:\n> ***********************************************************\n> make[2]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl/tcl'\n> make[1]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl'\n> All of PostgreSQL is successfully made. Ready to install.\n> + pushd interfaces/python\n> /usr/src/redhat/BUILD/postgresql-7.0.2/src/interfaces/python /usr/src/redhat/BUILD/postgresql-7.0.2/src /usr/src/redhat/BUILD/postgresql-7.0.2\n> + cp /usr/lib/python1.5/config/Makefile.pre.in .\n> cp: /usr/lib/python1.5/config/Makefile.pre.in: No such file or directory\n> Bad exit status from /var/tmp/rpm-tmp.46557 (%build)\n> ****************************************************************\n> A check in the rpm database showed that these python modules are installed:\n> python-1.5.2-13\n> pythonlib-1.23-1\n> rpm-python-3.0.4-0.48\n> So, what gives?\n\nY'all are missing the python-devel RPM. Install it and try again. It may\nbe an RPM issue, with python-devel a required package for the source\nRPM. Not sure how that works; it may be that things failed just as they\nshould, since afaik \"required packages\" are specified for the binary\nRPMs, but not for the src RPMs. 
Lamar?\n\n - Thomas\n", "msg_date": "Tue, 26 Sep 2000 17:01:31 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 source rpm failed to compile" }, { "msg_contents": "[email protected] wrote:\n> A check in the rpm database showed that these python modules are installed:\n> python-1.5.2-13\n> pythonlib-1.23-1\n> rpm-python-3.0.4-0.48\n \n> So, what gives?\n\nYou need to have python-devel installed.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 26 Sep 2000 14:05:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.2 source rpm failed to compile" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > Short Description\n> > 7.0.2 source rpm failed to compile\n> > Long Description\n> > I'm running RedHat 6.2 SPARC, two processors, 128 MB RAM\n> > I downloaded postgresql-7.0.2-2.src.rpm from your main FTP site and tried 'rpm --rebuild postgresql-7.0.2-2.src.rpm', it ran fine for a long time but then exited abnormally.\n> > This is the output from when it exited:\n> > ***********************************************************\n> > make[2]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl/tcl'\n> > make[1]: Leaving directory `/usr/src/redhat/BUILD/postgresql-7.0.2/src/pl'\n> > All of PostgreSQL is successfully made. 
Ready to install.\n> > + pushd interfaces/python\n> > /usr/src/redhat/BUILD/postgresql-7.0.2/src/interfaces/python /usr/src/redhat/BUILD/postgresql-7.0.2/src /usr/src/redhat/BUILD/postgresql-7.0.2\n> > + cp /usr/lib/python1.5/config/Makefile.pre.in .\n> > cp: /usr/lib/python1.5/config/Makefile.pre.in: No such file or directory\n> > Bad exit status from /var/tmp/rpm-tmp.46557 (%build)\n> > ****************************************************************\n> > A check in the rpm database showed that these python modules are installed:\n> > python-1.5.2-13\n> > pythonlib-1.23-1\n> > rpm-python-3.0.4-0.48\n> > So, what gives?\n> \n> Y'all are missing the python-devel RPM. Install it and try again. It may\n> be an RPM issue, with python-devel a required package for the source\n> RPM. Not sure how that works; it may be that things failed just as they\n> should, since afaik \"required packages\" are specified for the binary\n> RPMs, but not for the src RPMs. Lamar?\n\nFixed in the latest stuff from RedHat in RH 7. Will shortly be fixed on\nour server, as the RH 7 RPM's won't rebuild smoothly on 6.2 as yet. \nThere will be an error issued complaining about the lack of python-devel\nas part of the build if python-devel isn't there.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 26 Sep 2000 14:08:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] 7.0.2 source rpm failed to compile" } ]
[ { "msg_contents": "I remember a post about 2 weeks back concerning a new patch that was to\nbe introduced as 7.0.3. I haven't seen any reference to this since then.\nIs this still happening, or will the patch be part of 7.1?\n\n-Tony Reina\n\n\n", "msg_date": "Tue, 26 Sep 2000 09:52:45 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.0.3?" }, { "msg_contents": "\ntom is looking into a bug right now that he wants to try and fix before we\nrelease it ... hopefully this week we'll release it ...\n\n\nOn Tue, 26 Sep 2000, G. Anthony Reina wrote:\n\n> I remember a post about 2 weeks back concerning a new patch that was to\n> be introduced as 7.0.3. I haven't seen any reference to this since then.\n> Is this still happening, or will the patch be part of 7.1?\n> \n> -Tony Reina\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Sep 2000 14:47:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.0.3?" } ]
[ { "msg_contents": "\nCan someone add something to the docs that gives an example of what should\nbe used from the command line to reindex a database's system tables?\n\nAll the man page says is use th e-O an d-P options :(\n\nI'm getting:\n\npsql -h pgsql horde\nERROR: cannot read block 6 of pg_attribute_relid_attnam_index\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nhorde=> \\d\nERROR: SearchSysCache: recursive use of cache 4\nhorde=> \\q\n\nI've tried:\n\nbin/postgres -O -P -D `pwd`/data horde\n\nPOSTGRES backend interactive interface\n$Revision: 1.155.2.1 $ $Date: 2000/08/30 21:19:32 $\n\nbackend> reindex database horde;\nbackend> \n\nstill get it ...\n\nI'm either doing something wrong with REINDEXng the system tables, or this\nisn't what hte problem is :(\n\nv7.0.2+ database being run ...\n\nHelp? :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Sep 2000 15:06:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Damn, pg_proc index corrupted, can't find anythign on REINDEX ..." }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> The Hermit Hacker\n>\n> Can someone add something to the docs that gives an example of what should\n> be used from the command line to reindex a database's system tables?\n>\n> All the man page says is use th e-O an d-P options :(\n>\n> I'm getting:\n>\n> psql -h pgsql horde\n> ERROR: cannot read block 6 of pg_attribute_relid_attnam_index\n> Welcome to psql, the PostgreSQL interactive terminal.\n>\n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? 
for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n>\n> horde=> \\d\n> ERROR: SearchSysCache: recursive use of cache 4\n> horde=> \\q\n>\n> I've tried:\n>\n> bin/postgres -O -P -D `pwd`/data horde\n>\n> POSTGRES backend interactive interface\n> $Revision: 1.155.2.1 $ $Date: 2000/08/30 21:19:32 $\n>\n> backend> reindex database horde;\n\nMaybe you have to add FORCE option i.e.\n\treindex database horde force;\n\nRegards.\n\nHiroshi Inoue\n\n", "msg_date": "Wed, 27 Sep 2000 05:13:48 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Damn, pg_proc index corrupted, can't find anythign on REINDEX ..." }, { "msg_contents": "\nTom is looking around the server right now, as he wants to try and see\nwhat caused it before we go any further at trying to fix it, but I hadn't\nthought to try FORCE ... thanks :)\n\n\n\nOn Wed, 27 Sep 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: [email protected]\n> > The Hermit Hacker\n> >\n> > Can someone add something to the docs that gives an example of what should\n> > be used from the command line to reindex a database's system tables?\n> >\n> > All the man page says is use th e-O an d-P options :(\n> >\n> > I'm getting:\n> >\n> > psql -h pgsql horde\n> > ERROR: cannot read block 6 of pg_attribute_relid_attnam_index\n> > Welcome to psql, the PostgreSQL interactive terminal.\n> >\n> > Type: \\copyright for distribution terms\n> > \\h for help with SQL commands\n> > \\? 
for help on internal slash commands\n> > \\g or terminate with semicolon to execute query\n> > \\q to quit\n> >\n> > horde=> \\d\n> > ERROR: SearchSysCache: recursive use of cache 4\n> > horde=> \\q\n> >\n> > I've tried:\n> >\n> > bin/postgres -O -P -D `pwd`/data horde\n> >\n> > POSTGRES backend interactive interface\n> > $Revision: 1.155.2.1 $ $Date: 2000/08/30 21:19:32 $\n> >\n> > backend> reindex database horde;\n> \n> Maybe you have to add FORCE option i.e.\n> \treindex database horde force;\n> \n> Regards.\n> \n> Hiroshi Inoue\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Sep 2000 17:38:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Damn, pg_proc index corrupted, can't find anythign on REINDEX ..." }, { "msg_contents": "It looks like you are suffering from actual hardware failures:\n\n%cd /pgsql/data/base/horde\n%ls -l pg_attribute_relid_attnam_index\n-rw------- 1 pgsql pgsql 65536 Aug 21 12:27 pg_attribute_relid_attnam_index\n%wc pg_attribute_relid_attnam_index\nwc: pg_attribute_relid_attnam_index: read: Input/output error\n\n%wc *\n 1 1 4 PG_VERSION\nwc: active_sessions: read: Input/output error\n 13 50 16384 active_sessions_pkey\n 0 0 0 auth_user\n 0 0 0 auth_user_md5\n 0 3 16384 auth_user_md5_pkey\n 0 3 16384 auth_user_pkey\n 51 806 32768 imp_addr\n 0 5 8192 imp_logs\n 97 468 16384 imp_pref\n 0 3 16384 k_username\n 0 3 16384 k_username_md5\n 1 101 8192 pg_aggregate\n 1 11 16384 pg_aggregate_name_type_index\n 1 12 8192 pg_am\nwc: pg_am_name_index: read: Input/output error\n 3 220 16384 pg_amop\n 3 31 16384 pg_amop_opid_index\n 2 26 16384 pg_amop_strategy_index\n 2 70 8192 pg_amproc\n 0 136 8192 pg_attrdef\n 0 10 16384 pg_attrdef_adrelid_index\n 35 736 57344 pg_attribute\nwc: pg_attribute_relid_attnam_index: read: Input/output error\n 16 664 32768 
pg_attribute_relid_attnum_index\n 23 144 16384 pg_class\n 2 112 16384 pg_class_oid_index\n 2 17 16384 pg_class_relname_index\n 58 3096 73728 pg_description\nwc: pg_description_objoid_index: read: Input/output error\n 2 84 8192 pg_index\n 1 47 16384 pg_index_indexrelid_index\n 0 0 0 pg_indexes\n 0 0 0 pg_inheritproc\n 0 0 0 pg_inherits\n 0 3 16384 pg_inherits_relid_seqno_index\n 0 4 1752 pg_internal.init\n 0 0 0 pg_ipl\n 0 10 8192 pg_language\n 0 3 16384 pg_language_name_index\n 0 8 16384 pg_language_oid_index\n 0 0 0 pg_listener\nwc: pg_listener_relname_pid_index: read: Input/output error\n 1 39 8192 pg_opclass\n 1 39 16384 pg_opclass_deftype_index\n 1 9 16384 pg_opclass_name_index\nwc: pg_operator: read: Input/output error\n 11 652 32768 pg_operator_oid_index\n 14 95 65536 pg_operator_oprname_l_r_k_index\n 176 3305 212992 pg_proc\n 71 1520 49152 pg_proc_oid_index\nwc: pg_proc_proname_narg_type_index: read: Input/output error\n 0 0 0 pg_relcheck\n 0 3 16384 pg_relcheck_rcrelid_index\n 28 351 8192 pg_rewrite\nwc: pg_rewrite_oid_index: read: Input/output error\n 0 3 16384 pg_rewrite_rulename_index\n 0 0 0 pg_rules\n 15 327 16384 pg_statistic\n 10 232 16384 pg_statistic_relid_att_index\n 0 0 0 pg_tables\n 0 6 8192 pg_trigger\n 0 4 16384 pg_trigger_tgconstrname_index\n 0 5 16384 pg_trigger_tgconstrrelid_index\n 0 5 16384 pg_trigger_tgrelid_index\n 8 170 16384 pg_type\n 3 150 16384 pg_type_oid_index\n 2 20 16384 pg_type_typname_index\n 0 0 0 pg_user\n 0 0 0 pg_views\n 655 13822 1132252 total\n\n\nDo you know if there's a way to determine where these files are\nphysically stored? I'm wondering if all the damaged indexes live\non the same disk track/cylinder/whatever ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Sep 2000 16:53:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Damn, pg_proc index corrupted,\n can't find anythign on REINDEX ... 
" }, { "msg_contents": "\n@#%@#$@#$@!$@ and checking /var/log/messages confirms that :(\n\nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): READ(10). CDB: 28 0 0 93 d6 9f 0 0 80 0 \nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): HARDWARE FAILURE info:93d6dd asc:32,0\nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): No defect spare location available field replaceable unit: 4\nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): READ(10). CDB: 28 0 0 93 d6 af 0 0 70 0 \nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): HARDWARE FAILURE info:93d6f1 asc:32,0\nSep 26 17:01:04 pgsql /kernel: (da1:ahc0:0:1:0): No defect spare location available field replaceable unit: 4\nSep 26 17:01:06 pgsql /kernel: (da1:ahc0:0:1:0): READ(10). CDB: 28 0 0 72 96 2f 0 0 10 0 \nSep 26 17:01:06 pgsql /kernel: (da1:ahc0:0:1:0): HARDWARE FAILURE info:729637 asc:32,0\nSep 26 17:01:06 pgsql /kernel: (da1:ahc0:0:1:0): No defect spare location available field replaceable unit: 4\n\nshit shit shit :(\n\nthanks tom ... never even thought to check that :(\n\n\nOn Tue, 26 Sep 2000, Tom Lane wrote:\n\n> It looks like you are suffering from actual hardware failures:\n> \n> %cd /pgsql/data/base/horde\n> %ls -l pg_attribute_relid_attnam_index\n> -rw------- 1 pgsql pgsql 65536 Aug 21 12:27 pg_attribute_relid_attnam_index\n> %wc pg_attribute_relid_attnam_index\n> wc: pg_attribute_relid_attnam_index: read: Input/output error\n> \n> %wc *\n> 1 1 4 PG_VERSION\n> wc: active_sessions: read: Input/output error\n> 13 50 16384 active_sessions_pkey\n> 0 0 0 auth_user\n> 0 0 0 auth_user_md5\n> 0 3 16384 auth_user_md5_pkey\n> 0 3 16384 auth_user_pkey\n> 51 806 32768 imp_addr\n> 0 5 8192 imp_logs\n> 97 468 16384 imp_pref\n> 0 3 16384 k_username\n> 0 3 16384 k_username_md5\n> 1 101 8192 pg_aggregate\n> 1 11 16384 pg_aggregate_name_type_index\n> 1 12 8192 pg_am\n> wc: pg_am_name_index: read: Input/output error\n> 3 220 16384 pg_amop\n> 3 31 16384 pg_amop_opid_index\n> 2 26 16384 pg_amop_strategy_index\n> 2 70 8192 
pg_amproc\n> 0 136 8192 pg_attrdef\n> 0 10 16384 pg_attrdef_adrelid_index\n> 35 736 57344 pg_attribute\n> wc: pg_attribute_relid_attnam_index: read: Input/output error\n> 16 664 32768 pg_attribute_relid_attnum_index\n> 23 144 16384 pg_class\n> 2 112 16384 pg_class_oid_index\n> 2 17 16384 pg_class_relname_index\n> 58 3096 73728 pg_description\n> wc: pg_description_objoid_index: read: Input/output error\n> 2 84 8192 pg_index\n> 1 47 16384 pg_index_indexrelid_index\n> 0 0 0 pg_indexes\n> 0 0 0 pg_inheritproc\n> 0 0 0 pg_inherits\n> 0 3 16384 pg_inherits_relid_seqno_index\n> 0 4 1752 pg_internal.init\n> 0 0 0 pg_ipl\n> 0 10 8192 pg_language\n> 0 3 16384 pg_language_name_index\n> 0 8 16384 pg_language_oid_index\n> 0 0 0 pg_listener\n> wc: pg_listener_relname_pid_index: read: Input/output error\n> 1 39 8192 pg_opclass\n> 1 39 16384 pg_opclass_deftype_index\n> 1 9 16384 pg_opclass_name_index\n> wc: pg_operator: read: Input/output error\n> 11 652 32768 pg_operator_oid_index\n> 14 95 65536 pg_operator_oprname_l_r_k_index\n> 176 3305 212992 pg_proc\n> 71 1520 49152 pg_proc_oid_index\n> wc: pg_proc_proname_narg_type_index: read: Input/output error\n> 0 0 0 pg_relcheck\n> 0 3 16384 pg_relcheck_rcrelid_index\n> 28 351 8192 pg_rewrite\n> wc: pg_rewrite_oid_index: read: Input/output error\n> 0 3 16384 pg_rewrite_rulename_index\n> 0 0 0 pg_rules\n> 15 327 16384 pg_statistic\n> 10 232 16384 pg_statistic_relid_att_index\n> 0 0 0 pg_tables\n> 0 6 8192 pg_trigger\n> 0 4 16384 pg_trigger_tgconstrname_index\n> 0 5 16384 pg_trigger_tgconstrrelid_index\n> 0 5 16384 pg_trigger_tgrelid_index\n> 8 170 16384 pg_type\n> 3 150 16384 pg_type_oid_index\n> 2 20 16384 pg_type_typname_index\n> 0 0 0 pg_user\n> 0 0 0 pg_views\n> 655 13822 1132252 total\n> \n> \n> Do you know if there's a way to determine where these files are\n> physically stored? I'm wondering if all the damaged indexes live\n> on the same disk track/cylinder/whatever ...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Sep 2000 18:07:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Damn, pg_proc index corrupted,\n can't find anythign on REINDEX ... " } ]
[ { "msg_contents": "> > The last step could be done in two ways. First by limiting\n> > number of indices for one table we can store coresponding\n> > indices' TIDs in each heap tuple. The update is then simple\n> > taking one disk write.\n> \n> Why limit it ? One could just save an tid array in each tuple .\n\nAnd update *entire* heap after addition new index?!\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 12:07:41 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pgsql is 75 times faster with my new index scan" } ]
[ { "msg_contents": "Why not implement *true* CLUSTER?\nWith cluster, all heap tuples will be in cluster index.\n\nVadim\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Tuesday, September 26, 2000 2:15 AM\n> To: [email protected]\n> Subject: [HACKERS] pgsql is 75 times faster with my new index scan\n> \n> \n> Hello,\n> I recently spoke about extending index scan to be able\n> to take data directly from index pages. I wanted to know\n> whether should I spend my time and implement it.\n> So that I hacked last pgsql a bit to use proposed scan\n> mode and did some measurements (see bellow). Measurements\n> was done on (id int,txt varchar(20)) table with 1 000 000 rows\n> with btree index on both attrs. Query involved was:\n> select id,count(txt) from big group by id;\n> Duplicates distribution on id column was 1:1000. I was run\n> query twice after linux restart to ensure proper cache \n> utilization (on disk heap & index was 90MB in total).\n> So I think that by implementing this scan mode we can expect\n> to gain huge speedup in all queries which uses indices and\n> can found all data in their pages.\n> \n> Problems:\n> my changes implemented only indexscan and new cost function.\n> it doesn't work when index pages contains tuples which doesn't\n> belong to our transaction. test was done after vacuum and\n> only one tx running.\n> \n> TODO:\n> - add HeapTupleHeaderData into each IndexTupleData\n> - change code to reflect above\n> - when deleting-updating heap then also update tuples'\n> HeapTupleHeaderData in indices\n> \n> The last step could be done in two ways. First by limiting\n> number of indices for one table we can store coresponding \n> indices' TIDs in each heap tuple. The update is then simple\n> taking one disk write.\n> Or do it in standart way - lookup appropriate index tuple\n> by traversing index. 
It will cost us more disk accesses.\n> \n> Is someone interested in this ??\n> regards devik\n> \n> With current indexscan:\n> ! system usage stats:\n> ! 1812.534505 elapsed 93.060547 user 149.447266 system sec\n> ! [93.118164 user 149.474609 sys total]\n> ! 0/0 [0/0] filesystem blocks in/out\n> ! 130978/32 [131603/297] page faults/reclaims, 132 [132] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [0/0] voluntary/involuntary context switches\n> ! postgres usage stats:\n> ! Shared blocks: 555587 read, 551155 written, buffer hit\n> rate = 44.68%\n> ! Local blocks: 0 read, 0 written, buffer hit\n> rate = 0.00%\n> ! Direct blocks: 0 read, 0 written\n> \n> With improved indexscan:\n> ! system usage stats:\n> ! 23.686788 elapsed 22.157227 user 0.372071 system sec\n> ! [22.193359 user 0.385742 sys total]\n> ! 0/0 [0/0] filesystem blocks in/out\n> ! 1186/42 [1467/266] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [0/0] voluntary/involuntary context switches\n> ! postgres usage stats:\n> ! Shared blocks: 4385 read, 0 written, buffer hit\n> rate = 4.32%\n> ! Local blocks: 0 read, 0 written, buffer hit\n> rate = 0.00%\n> ! Direct blocks: 0 read, 0 written\n> \n> \n", "msg_date": "Tue, 26 Sep 2000 12:09:29 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> Why not implement *true* CLUSTER?\n> With cluster, all heap tuples will be in cluster index.\n> \n\nWhat is *true* CLUSTER ?\n\n'grep CLUSTER' over the latest SQL standards gives back nothing.\n\n\n> > > The last step could be done in two ways. First by limiting\n> > > number of indices for one table we can store coresponding\n> > > indices' TIDs in each heap tuple. The update is then simple\n> > > taking one disk write.\n> > \n> > Why limit it ? 
One could just save an tid array in each tuple .\n> \n> And update *entire* heap after addition new index?!\n\nI guess that this should be done even for limited number of \nindices' TIDs in a heap tuple ?\n\n--------------\nHannu\n", "msg_date": "Wed, 27 Sep 2000 09:41:18 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> Why not implement *true* CLUSTER?\n> With cluster, all heap tuples will be in cluster index.\n\nIt would be nice. It's pity that pg AMs are not general.\nThere is no simple way to use btree instead of heap. But\nit would help.\nBut using values from index is good idea too because you\ncan have table with many columns and aggregate query which\nneeds only two columns.\nThe it will be MUCH faster to create secondary index which\nis much smaller than heap and use values from it.\n\nVadim where can I found some code from upcoming WAL ?\nI'm thinking about implementing special ranked b-tree\nwhich will store precomputed aggregate values (like\ncnt,min,max,sum) in btree node keys. It can be then\nused for extremely fast evaluation of aggregates. 
But\nin case of MVCC it is more complicated and I'd like\nto see how it would be affected by WAL.\n\ndevik\n\n\n", "msg_date": "Wed, 27 Sep 2000 10:12:10 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> What is *true* CLUSTER ?\n> \n> 'grep CLUSTER' over the latest SQL standards gives back nothing.\n\nstoring data in b-tree instead of heap for example.\n\n> > And update *entire* heap after addition new index?!\n> \n> I guess that this should be done even for limited number of\n> indices' TIDs in a heap tuple ?\n\nyep - the idea was throwed away already.\n", "msg_date": "Wed, 27 Sep 2000 11:33:16 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" } ]
[ { "msg_contents": "> > Indice's TIDs are transient.\n> > Isn't it useless to store indice's TIDs ?\n> \n> but yes Hiroshi is right. Index TID is transient. I first looked\n> into pg sources two weeks ago so I have still holes in my knowledge.\n> So that only solution is to traverse it ..\n\nIt was discussed several times for btree - add heap tid to index key and\nyou'll\nscan index for particulare tuple much faster.\nNot sure what could be done for hash indices... order hash items with the\nsame hash key?\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 12:14:54 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pgsql is 75 times faster with my new index scan" } ]
[ { "msg_contents": "> I've tried:\n> \n> bin/postgres -O -P -D `pwd`/data horde\n> \n> POSTGRES backend interactive interface\n> $Revision: 1.155.2.1 $ $Date: 2000/08/30 21:19:32 $\n> \n> backend> reindex database horde;\n> backend> \n> \n> still get it ...\n> \n> I'm either doing something wrong with REINDEXng the system \n> tables, or this isn't what hte problem is :(\n\nI'm not sure how REINDEX works... to restore after some crashes\nREINDEX should 1. drop indices; 2. vacuum table(s); 3. create indices\n(note - create index *after* table itself is vacuumed).\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 12:21:53 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Damn, pg_proc index corrupted, can't find anythign on REINDEX ..." } ]
[ { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > I get an error (which is good). But, if I do\n> >\n> > #BEGIN;\n> > #SELECT * FROM name_and_ip WHERE name = 'foo' OR name = 'bar' FOR\n> > UPDATE;\n> > #UPDATE name_and_ip SET ip = '192.168.186.249' where name = 'foo';\n> > UPDATE 1\n> > #COMMIT;\n> > COMMIT\n> \n> Btree doesn't take into account that tuple was just marked for update\n> but still alive. Seems it was handled properly in 6.5.X ?\n\nNope. It has been broken a long time...\n\nhannu=> select version();\nversion \n-------------------------------------------------------------------\nPostgreSQL 6.5.3 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\n\nhannu=> create table T(i int);\nCREATE\nhannu=> create unique index TUI on T(I);\nCREATE\nhannu=> insert into T values(1);\nINSERT 109150 1\nhannu=> insert into T values(2);\nINSERT 109151 1\nhannu=> begin;\nBEGIN\nhannu=> select * from T where I in (1,2)for update;\ni\n-\n1\n2\n(2 rows)\n\nhannu=> update T set I=1 where I=2;\nUPDATE 1\nhannu=> commit;\nEND\nhannu=> select * from T;\ni\n-\n1\n1\n(2 rows)\n\n\n> I'll take a look...\n> \n> Vadim\n", "msg_date": "Wed, 27 Sep 2000 00:56:05 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: [GENERAL] update inside transaction violates unique\n\tconstraint?" }, { "msg_contents": "> I get an error (which is good). But, if I do\n> \n> #BEGIN;\n> #SELECT * FROM name_and_ip WHERE name = 'foo' OR name = 'bar' FOR\n> \tUPDATE;\n> #UPDATE name_and_ip SET ip = '192.168.186.249' where name = 'foo';\n> UPDATE 1\n> #COMMIT;\n> COMMIT\n\nBtree doesn't take into account that tuple was just marked for update\nbut still alive. Seems it was handled properly in 6.5.X ?\nI'll take a look...\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 15:06:48 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [GENERAL] update inside transaction violates unique constrain t?" } ]
[ { "msg_contents": "\nCan anyone explain why I must make / a character class \nin case-insensitive query in order to match / ?\n\nand then why does it work in plain ~ ?\n\nhannu=> select * from item where path ~* '^/a';\npath \n------\n/a/b/c\n/a/b/d\n/a/d/d\n/aa/d \n/a/b \n/a/c \n/a/d \n(7 rows)\n\nhannu=> select * from item where path ~ '^/a';\npath \n------\n/a/b/c\n/a/b/d\n/a/d/d\n/aa/d \n/a/b \n/a/c \n/a/d \n(7 rows)\n\nhannu=> select * from item where path ~* '^/A';\npath\n----\n(0 rows)\n\nhannu=> select * from item where path ~* '^[/]A';\npath \n------\n/a/b/c\n/a/b/d\n/a/d/d\n/aa/d \n/a/b \n/a/c \n/a/d \n(7 rows)\n\n------------\nHannu\n", "msg_date": "Wed, 27 Sep 2000 01:00:30 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "use of / in ~ vs. ~*" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Can anyone explain why I must make / a character class \n> in case-insensitive query in order to match / ?\n\nWhat LOCALE are you using? There was a thread about strange ordering\nrules confusing the LIKE/regexp optimizer recently ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Sep 2000 20:06:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use of / in ~ vs. ~* " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Can anyone explain why I must make / a character class\n> > in case-insensitive query in order to match / ?\n> \n> What LOCALE are you using? 
There was a thread about strange ordering\n> rules confusing the LIKE/regexp optimizer recently ...\n\n\nI think I'm using the default locale (this is just straight install on\nLinux from RPM-s)\n\nIs there any way to find out the locale used from within the running\nsystem ?\n\nThe most obvious way ( select locale(); ) does not work .\n\n-----------\nHannu\n", "msg_date": "Wed, 27 Sep 2000 09:31:43 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use of / in ~ vs. ~*" }, { "msg_contents": "Hannu Krosing writes:\n\n> I think I'm using the default locale (this is just straight install on\n> Linux from RPM-s)\n> \n> Is there any way to find out the locale used from within the running\n> system ?\n\nThe locale the postmaster uses is whatever was set in its environment,\ni.e., LC_ALL, etc.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 27 Sep 2000 12:51:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use of / in ~ vs. ~*" } ]
[ { "msg_contents": "\n> > Btree doesn't take into account that tuple was just marked \n> > for update but still alive. Seems it was handled properly in 6.5.X ?\n> \n> Nope. It has been broken a long time...\n\nOps. Ok...\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 16:05:49 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: [GENERAL] update inside transaction violates unique constraint?" } ]
[ { "msg_contents": "> > Btree doesn't take into account that tuple was just marked \n> > for update but still alive. Seems it was handled properly in 6.5.X ?\n> \n> Nope. It has been broken a long time...\n\nHmm, as I remember, Hiroshi fixed something in this area for 7.0.X.\nHiroshi?\nProbably, his fix somehow disappeared from CVS?\nDiff against 7.0.2 sources attached.\n\nVadim\n2 Marc - please add this to upcoming 7.0.3", "msg_date": "Tue, 26 Sep 2000 17:23:57 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: [GENERAL] update inside transaction violates unique constraint?" }, { "msg_contents": "\n\n\"Mikheev, Vadim\" wrote:\n\n> > > Btree doesn't take into account that tuple was just marked\n> > > for update but still alive. Seems it was handled properly in 6.5.X ?\n> >\n> > Nope. It has been broken a long time...\n>\n> Hmm, as I remember, Hiroshi fixed something in this area for 7.0.X.\n> Hiroshi?\n> Probably, his fix somehow disappeared from CVS?\n> Diff against 7.0.2 sources attached.\n>\n> Vadim\n> 2 Marc - please add this to upcoming 7.0.3\n>\n\nHmm, it seems that both current and REL7_0_PATCHES\nhave already been changed.\nI committed the change to current tree and\nasked Tatsuo to commit it to REL7_0_PATCHES tree.\n\nRegards.\n\nHiroshi Inoue\n\n\n", "msg_date": "Wed, 27 Sep 2000 14:34:43 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [GENERAL] update inside transaction violates unique constraint?" } ]
[ { "msg_contents": "Hello again.\n\nI'm in the process of making a COM wrapper (enabling VB to connect to\nPostGres) for the libpq library using Visual Studio 6.0 Pro, but have\na few problems. I can make use of the libpq.dll library from the COM\nwrapper, but I thought that it might be a bit better if the actual\nlibrary routines were included in the COM wrapper, and thus making the\nlibpq.dll unneeded.\n\nBut if I include all the files that make up the library, then I can't\nget a connection. I even tried to make a small test file which works\nok when using the dll, but doesn't when I include the actual .c files\nwhich make up the dll and static link library (.lib). Linking with the\nstatic link library that comes with the dll file doesn't work either.\nThis little program is what I've used. Has anybody succeeded in linking\nan application with the static link library (libpq.lib) and actually\ngetting a working application??? If so then I'm very interested in\ngetting any help.\n\n-------- test file ---------- test file -----------\n\n#include <iostream.h>\n#include \"libpq-fe.h\"\n\nint main (int argc, char* argv[])\n{\n PGconn *conn;\n\n cout << \"Hello pgsql world!!!\" << endl;\n cout << \"Connecting to the database.\" << endl;\n conn = PQconnectdb(\"host=host.name user=root\");\n if (PQstatus(conn) == CONNECTION_BAD) {\n cout << \"Boohoo, the connection failed!!!\" << endl;\n cout << \"Value is \" << (long) conn << endl;\n } else {\n cout << \"Hurray, we got connected.\" << endl;\n PQfinish(conn);\n }\n return 0;\n}\n\n-------- test file ---------- test file -----------\n\nYours faithfully.\nFinn Kettner.\nPS. I have also created a project file for the psql application, if\nthis has any interest it might as well be included in the future\ndistributions of postgresql. The dll and static link library is also\nmore or less finished, but as I can't link with the static link\nlibrary I don't think that it has much interest until the problem is\nsolved.\n", "msg_date": "Wed, 27 Sep 2000 01:32:01 +0100", "msg_from": "\"Finn Kettner\" <[email protected]>", "msg_from_op": true, "msg_subject": "libpq static link library doesn't work (M$ VS6)" }, { "msg_contents": "\n\nFinn Kettner wrote:\n\n> Hello again.\n>\n> I'm in the process of making a COM wrapper (enabling VB to connect to\n> PostGres) for the libpq library using Visual Studio 6.0 Pro, but have\n> a few problems. I can make use of the libpq.dll library from the COM\n> wrapper, but I thought that it might be a bit better if the actual\n> library routines were included in the COM wrapper, and thus making the\n> libpq.dll unneeded.\n>\n> But if I include all the files that make up the library, then I can't\n> get a connection. I even tried to make a small test file which works\n> ok when using the dll, but doesn't when I include the actual .c files\n> which make up the dll and static link library (.lib). Linking with the\n> static link library that comes with the dll file doesn't work either.\n> This little program is what I've used. Has anybody succeeded in linking\n> an application with the static link library (libpq.lib) and actually\n> getting a working application??? If so then I'm very interested in\n> getting any help.\n>\n\nlibpq.dll calls WSAStartup() in dllmain() which is never\ncalled from static library. Probably you had better\ncall WSAStartup() from upper level application.\n\nRegards.\n\nHiroshi Inoue\n\n", "msg_date": "Wed, 27 Sep 2000 16:23:19 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq static link library doesn't work (M$ VS6)" } ]
[ { "msg_contents": "\nWow, has this been just one of those days ... \n\nTrying to clean up a few of the database, I'm wondering how to fix some of\nthese things, if its even possible, without having to rebuild the whole\ndatabase:\n\n%~/bin/postgres -O -P -D/pgsql/special/sales.org swissre\nDEBUG: Data Base System is starting up at Tue Sep 26 20:59:31 2000\nDEBUG: Data Base System was shut down at Tue Sep 26 20:59:24 2000\nDEBUG: Data Base System is in production state at Tue Sep 26 20:59:31 2000\nFATAL 1: RelationBuildTriggers: 1 record(s) not found for rel pg_shadow\nFATAL 1: RelationBuildTriggers: 1 record(s) not found for rel pg_shadow\n\n-----------------\n\n%~/bin/postgres -O -P -D/pgsql/special/sales.org tcg\nDEBUG: Data Base System is starting up at Tue Sep 26 21:03:55 2000\nDEBUG: Data Base System was interrupted being in production at Tue Sep 26 20:59:31 2000\nDEBUG: Data Base System is in production state at Tue Sep 26 21:03:55 2000\nTRAP: Too Large Allocation Request(\"!(0 < (size) && (size) <= (0xfffffff)):size=0 [0x0]\", File: \"mcxt.c\", Line: 222)\n\n!(0 < (size) && (size) <= (0xfffffff)) (0) [No such file or directory]\nAbort(core dumped)\n\n----------------\n\n%~/bin/postgres -O -P -D/pgsql/special/sales.org vancity\nDEBUG: Data Base System is starting up at Tue Sep 26 21:04:40 2000\nDEBUG: Data Base System was shut down at Tue Sep 26 21:04:32 2000\nDEBUG: Data Base System is in production state at Tue Sep 26 21:04:40 2000\n\nPOSTGRES backend interactive interface \n$Revision: 1.155.2.1 $ $Date: 2000/08/30 21:19:32 $\n\nbackend> reindex database vancity force;\nbackend> \n\n----------------\n\n%~/bin/postgres -O -P -D/pgsql/special/sales.org bellsouth\nDEBUG: Data Base System is starting up at Tue Sep 26 21:05:39 2000\nDEBUG: Data Base System was shut down at Tue Sep 26 21:05:30 2000\nDEBUG: Data Base System is in production state at Tue Sep 26 21:05:39 2000\nFATAL 1: catalog is missing 8 attributes for relid 1260\nFATAL 1: catalog is missing 8 
attributes for relid 1260\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Tue, 26 Sep 2000 22:07:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "recovery after massive system corruption ..." } ]
[ { "msg_contents": "> I installed the postgresql-7.0.2-2 RPM downloaded from postgres.org, but \\l+\n> always dumps core:\n\nIt's a bug in 7.0.2. Obtain a patch from:\n\nftp://ftp.sra.co.jp/pub/cmd/postgres/7.0.2/patches/psql.patch.gz\n\nand rebuild the RPM.\n\nNote that this was already fixed in CVS (both stable and current).\nThis seems to be yet another reason for 7.0.3...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 27 Sep 2000 10:26:44 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \\l+ dumps core" }, { "msg_contents": "I installed the postgresql-7.0.2-2 RPM downloaded from postgres.org, but \\l+\nalways dumps core:\n\n\n% psql\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\njamesc=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\n\njamesc=# \\l+\n List of databases\n Database | Owner | Encoding | Description\n-----------+----------+-----------+-------------\n jamesc | jamesc | SQL_ASCII |\n template1 | postgres | SQL_ASCII |\n(2 rows)\n\nzsh: segmentation fault (core dumped) psql\n%\n", "msg_date": "Wed, 27 Sep 2000 12:10:23 +1000", "msg_from": "James Cribb <[email protected]>", "msg_from_op": false, "msg_subject": "\\l+ dumps core" } ]
[ { "msg_contents": "> On Fri, Sep 22, 2000 at 03:31:59PM +0900, Tatsuo Ishii wrote:\n> > pgc.o(.text+0x582): undefined reference to `pg_mbcliplen'\n> > pgc.o(.text+0x953): undefined reference to `pg_mbcliplen'\n> > ...\n> > pg_mbcliplen cannot be used in the frontend. Remove them, please.\n> \n> Is there any way to use a similar functionality in ecpg? I don't like to run\n> truncate the text too early.\n\nTo truncate multibyte characters properly, you need to know what\nencoding is used for a .pgc file that ecpg is about to process. Currently\nthere is no mechanism in ecpg (and all other frontends) to know what\nencoding is used for the .pgc files. This is a tough problem...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 27 Sep 2000 10:44:01 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ecpg is broken in current" }, { "msg_contents": "Tatsuo Ishii wrote:\n\n> > On Fri, Sep 22, 2000 at 03:31:59PM +0900, Tatsuo Ishii wrote:\n> > > pgc.o(.text+0x582): undefined reference to `pg_mbcliplen'\n> > > pgc.o(.text+0x953): undefined reference to `pg_mbcliplen'\n> > > ...\n> > > pg_mbcliplen cannot be used in the frontend. Remove them, please.\n> >\n> > Is there any way to use a similar functionality in ecpg? I don't like to run\n> > truncate the text too early.\n>\n> To truncate multibyte characters properly, you need to know what\n> encoding is used for a .pgc file that ecpg is about to process. Currently\n> there is no mechanism in ecpg (and all other frontends) to know what\n> encoding is used for the .pgc files. This is a tough problem...\n> --\n> Tatsuo Ishii\n\nI would recommend a command line option overriding an environment variable\n(fallback). 
Isn't there some LC_* indicating the default encoding.\nBut on the other hand, compiling would most likely take place in 'C'.\n\nChristof\n\n\n\n", "msg_date": "Fri, 29 Sep 2000 19:04:06 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecpg is broken in current" } ]
[ { "msg_contents": "\nfiguring I'd try out getting into the backend using postgres, to see if I\ncan 'bypass' some of the errors on those corrupted database, I'm wondering\nif there is any way of taking what a 'select * from <table>' outputs:\n\n 1: userid = \"cibc001154\" (typeid = 1043, len = -1, typmod = 36, byval = f)\n 2: passwd = \"INVALID\" (typeid = 1043, len = -1, typmod = 36, byval = f)\n 3: acct_type = \"3\" (typeid = 23, len = 4, typmod = -1, byval = t)\n ----\n\nand making it useful? some way of using a simple postgres command to dump\nthe corrupted tables one by one?\n\nthoughts?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Sep 2000 23:25:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Recovery Procedures in 'single user mode' ..." } ]
[ { "msg_contents": "> Hmm, it seems that both current and REL7_0_PATCHES\n> have already been changed.\n> I committed the change to current tree and\n> asked Tatsuo to commit it to REL7_0_PATCHES tree.\n\nI also committed current -:))\n\nVadim\n", "msg_date": "Tue, 26 Sep 2000 22:34:46 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: [GENERAL] update inside transaction violates unique constraint?" } ]
[ { "msg_contents": "Hello,\n\nI am writing a SPI function to run maintenance tasks on my auction\nsystem but it keeps crashing the backend after running only one loop.\nNow, I am not a C programmer, nor do I have any formal training in CS. I\nthought I might run this function by you guys so that a cursory look\nmight reveal some obvious coding mistake? \n\nThanks in advance for your insight.\n\nint4 auction_maintenance(void) {\n\n\tchar * query, * default_locale = getenv(\"LC_ALL\");\n\tbool current, isnull;\n\tint i;\n\n\t/* Connect to SPI manager\n\t */\n\tif (SPI_connect() != SPI_OK_CONNECT)\n\t\telog(ERROR, \"bid_control.c: SPI_connect failed\");\n\n/*\tasprintf(&query, \"BEGIN\");\n\tSPI_exec(query, 0);\n\tfree(query);*/\n\n\t/* check if last modification time of special user id 0 is less than 15\n\t * minutes ago\n\t */\n\tasprintf(&query, \"SELECT ((now()::abstime::int4 - modified::abstime::int4) \\\n\t\t\t/ 60) < 15 AS current FROM person WHERE id = 0 FOR UPDATE\");\n\tSPI_exec(query, 0);\n\tfree(query);\n\n\tcurrent = DatumGetChar(SPI_getbinval(\n\t\t\tSPI_tuptable->vals[0], SPI_tuptable->tupdesc,\n\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"current\"), &isnull));\n\n\tif (current) {\n\t\t/* maintenance script ran less that 15 minutes ago, do nothing\n\t\t */\n/*\t\tasprintf(&query, \"COMMIT\");\n\t\tSPI_exec(query, 0);\n\t\tfree(query);*/\n\n\t\telog(NOTICE, \"auction system still current\");\n\n\t\tSPI_finish();\n\t\treturn current;\n\t}\n\n\t/* update modification time now, locking other daemons out\n\t */\n\tasprintf(&query, \"UPDATE person SET modified = now() WHERE id = 0\");\n\tSPI_exec(query, 0);\n\tfree(query);\n\n/*\tasprintf(&query, \"COMMIT\");\n\tSPI_exec(query, 0);\n\tfree(query);*/\n\n\t/* start real mainenance work here\n\t */\n\n/*\tasprintf(&query, \"BEGIN\");\n\tSPI_exec(query, 0);\n\tfree(query);*/\n\n\t/* select all auctions that have expired and have not been notified\n\t */\n\tasprintf(&query, \"SELECT *,auction_status(a.id), 
\\\n\t\tseller.mail AS seller_mail, seller.locale AS seller_locale, \\\n\t\tseller.login AS seller_login \\\n\t\tFROM auction a, person seller WHERE auction_status(a.id) <= 0 \\\n\t\tAND a.person_id = seller.id \\\n\t\tAND (notified IS FALSE OR notified IS NULL) FOR UPDATE\");\n\tSPI_exec(query, 0);\n\tfree(query);\n\n\tfor (i = SPI_processed - 1; i >= 0; i--) {\n\n\t\tint type = DatumGetInt32(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"type\"), &isnull));\n\n\t\tchar * title = SPI_getvalue(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"title\"));\n\n\t\tchar * seller_mail = SPI_getvalue(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"seller_mail\"));\n\n\t\tchar * seller_locale = SPI_getvalue(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"seller_locale\"));\n\n\t\tchar * seller_login = SPI_getvalue(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"seller_login\"));\n\n\t\tchar * stopdate = SPI_getvalue(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"stopdate\"));\n\n\t\tint auction_id = DatumGetInt32(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"id\"), &isnull));\n\n\t\tint lot = DatumGetInt32(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"lot\"), &isnull));\n\n\t\tint auction_status = DatumGetInt32(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"auction_status\"), &isnull));\n\n\t\tint renew_count = DatumGetInt32(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, 
\"renew_count\"), &isnull));\n\n/*\t\tbool auto_renew = DatumGetChar(SPI_getbinval(\n\t\t\t\tSPI_tuptable->vals[i], SPI_tuptable->tupdesc,\n\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"auto_renew\"), &isnull));*/\n\n\t\telog(NOTICE, \"Processing auction #%d of %d (\\n\"\n\t\t\t\t\"type: %d\\n\"\n\t\t\t\t\"title: %s\\n\"\n\t\t\t\t\"seller_mail: %s\\n\"\n\t\t\t\t\"seller_locale: %s\\n\"\n\t\t\t\t\"seller_login: %s\\n\"\n\t\t\t\t\"stopdate: %s\\n\"\n\t\t\t\t\"id: %d\\n\"\n\t\t\t\t\"lot: %d\\n\"\n\t\t\t\t\"status: %d\\n\"\n\t\t\t\t\"renew_count: %d\\n\"\n\t\t\t\t\")\",\n\t\t\t\tSPI_processed - i, SPI_processed, \n\t\t\t\ttype,\n\t\t\t\ttitle, \n\t\t\t\tseller_mail,\n\t\t\t\tseller_locale,\n\t\t\t\tseller_login,\n\t\t\t\tstopdate,\n\t\t\t\tauction_id, \n\t\t\t\tlot,\n\t\t\t\tauction_status,\n\t\t\t\trenew_count\n\t\t\t\t);\n\n\t\t/* FIRST, store a copy of this auction in the archive, before eventually\n\t\t * running UPDATE or DELETE on it\n\t\t */\n\t\tasprintf(&query, \"INSERT INTO auction_archive SELECT * FROM auction \\\n\t\t\tWHERE id = %d\", auction_id);\n\t\tSPI_exec(query, 0);\n\t\tfree(query);\n\n\t\t/* store a copy of all bids into archive\n\t\t */\n\t\tasprintf(&query, \"INSERT INTO bid_archive SELECT * FROM bid \\\n\t\t\tWHERE auction_id = %d\", auction_id);\n\t\tSPI_exec(query, 0);\n\t\tfree(query);\n/*#if 0*/\n\t\t/* winner/seller notification\n\t\t */\n\t\tif (auction_status != -lot) { /* something was sold */\n\t\t\tchar * mess;\n\t\t\tchar **bidder_login, **bidder_mail, **bidder_locale;\n\t\t\tint *bid_lot, *bidder_id, j, l;\n\t\t\tdouble *bid_price;\n\n\t\t\t/* get high bidders\n\t\t\t */\n\t\t\tasprintf(&query, \"SELECT max(b.lot) AS bid_lot, \\\n\t\t\t\tmax(b.price) AS bid_price,p.login AS bidder_login, \\\n\t\t\t\tp.id AS bidder_id, p.mail AS bidder_mail, \\\n\t\t\t\tp.locale AS bidder_locale \\\n\t\t\t\tFROM bid b, person p \\\n\t\t\t\tWHERE b.auction_id = %d AND p.id = b.person_id \\\n\t\t\t\tGROUP BY p.login, p.id, p.mail,p.locale 
\\\n\t\t\t\tORDER BY max(price)\", auction_id);\n\t\t\tSPI_exec(query, 0);\n\t\t\tfree(query);\n\n\t\t\tbid_price = alloca(SPI_processed * sizeof(double));\n\t\t\tbid_lot = alloca(SPI_processed * sizeof(int));\n\t\t\tbidder_id = alloca(SPI_processed * sizeof(int));\n\n\t\t\tbidder_login = alloca(SPI_processed * sizeof(char*));\n\t\t\tbidder_mail = alloca(SPI_processed * sizeof(char*));\n\t\t\tbidder_locale = alloca(SPI_processed * sizeof(char*));\n\n/*\t\t\telog(NOTICE, \"starting winner/seller notification on auction #%d\",\n\t\t\t\t\tauction_id);*/\n\n\t\t\t/* get winner list\n\t\t\t */\n\t\t\tfor (j = SPI_processed - 1; j >= 0; j--) {\n\t\t\t\tbid_price[j] = *DatumGetFloat64(SPI_getbinval(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bid_price\"),\n\t\t\t\t\t\t&isnull));\n\n\t\t\t\tbid_lot[j] = DatumGetInt32(SPI_getbinval(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bid_lot\"),\n\t\t\t\t\t\t&isnull));\n\n\t\t\t\tbidder_id[j] = DatumGetInt32(SPI_getbinval(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bidder_id\"),\n\t\t\t\t\t\t&isnull));\n\n\t\t\t\tbidder_login[j] = SPI_getvalue(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bidder_login\"));\n\n\t\t\t\tbidder_mail[j] = SPI_getvalue(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bidder_mail\"));\n\n\t\t\t\tbidder_locale[j] = SPI_getvalue(\n\t\t\t\t\t\tSPI_tuptable->vals[j], SPI_tuptable->tupdesc,\n\t\t\t\t\t\tSPI_fnumber(SPI_tuptable->tupdesc, \"bidder_locale\"));\n\n\t\t\t\telog(NOTICE, \"extracting winner %s for price %f and lot %d\",\n\t\t\t\t\t\tbidder_login[j], bid_price[j], bid_lot[j]);\n\t\t\t\t/* decrease available quantity marker until all is sold, dutch\n\t\t\t\t * auctions only\n\t\t\t\t 
*/\n/*\t\t\t\tl -= bid_lot[i];*/\n\n\t\t\t}\n\n\t\t\tif (type == AUCTION_CLASSIC) {\n\t\t\t\tchar * winner = NULL;\n\t\t\t\tdouble final_price;\n/*\t\t\t\twinner = astrcat();*/\n\t\t\t\t/* determine final_price for dutch auction: the lowest of the\n\t\t\t\t * winning bids\n\t\t\t\t */\n\t\t\t\tfor (j = SPI_processed - 1, l = lot; j >= 0 && l > 0;\n\t\t\t\t\t\tj--, l -= bid_lot[j]) {\n\t\t\t\t\tfinal_price = bid_price[j];\n\t\t\t\t}\n\t\t\t\tfor (j = SPI_processed - 1, l = lot; j >= 0 && l > 0;\n\t\t\t\t\t\tj--, l -= bid_lot[j]) {\n\n\t\t\t\t\t/* start building the string listing winners (for dutch)\n\t\t\t\t\t * or the only winner (for normal)\n\t\t\t\t\t */\n\t\t\t\t\tsetlocale(LC_ALL, seller_locale);\n\t\t\t\t\tsetenv(\"LC_ALL\", seller_locale, 1);\n\n\t\t\t\t\tasprintf(&mess, _(\n\t\t\t\t\t\t\t\"* login: %s, \\t\"\n\t\t\t\t\t\t\t\"e-mail: %s, \\t\"\n\t\t\t\t\t\t\t\"bid price: %.2f, \\t\"\n\t\t\t\t\t\t\t\"bid quantity: %d, \\t\"\n\t\t\t\t\t\t\t\"final price: %.2f,\\t\"\n\t\t\t\t\t\t\t\"alloted quantity: %d,\\t\"\n\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\tbidder_login[j], bidder_mail[j], bid_price[j],\n\t\t\t\t\t\t\tbid_lot[j], final_price,\n\t\t\t\t\t\t\t(bid_lot[j] < l ? 
l : bid_lot[j]) );\n\t\t\t\t\tastrcat(&winner, mess);\n\t\t\t\t\tfree(mess);\n\n\t\t\t\t\telog(NOTICE, \"winner #%d is %s\", j, winner);\n\n\t\t\t\t\tsetlocale(LC_ALL, bidder_locale[j]);\n\t\t\t\t\tsetenv(\"LC_ALL\", bidder_locale[j], 1);\n\t\t\t\t\t/* notify winner directly\n\t\t\t\t\t */\n\t\t\t\t\tasprintf(&mess, _(\n\"\\tDear %s,\\n\"\n\"\\n\"\n\"On the following closed auction:\"\n\"\\n\"\n\"- title: %s\\n\"\n\"- id: %d\\n\"\n\"- end date: %s\\n\"\n\"- seller: %s\\n\"\n\"- seller e-mail: %s\\n\"\n\"\\n\"\n\"You have entered this winning bid:\\n\"\n\"\\n\"\n\"- bid price: %.2f\\n\"\n\"- bid quantity: %d\\n\"\n\"- final price: %.2f\\n\"\n\"- alloted quantity: %d\\n\"\n\"\\n\"\n\"Please contact the seller as soon as possible to close the transaction\\n\"\n\"\\n\"\n\"-- \\n\"\n\"Apartia auction daemon\\n\"\n\t\t\t\t\t), bidder_login[j], title,\n\t\t\t\t\tauction_id, stopdate, seller_login, seller_mail,\n\t\t\t\t\tbid_price[j], bid_lot[j], final_price,\n\t\t\t\t\t(bid_lot[j] < l ? l : bid_lot[j]));\n\t\t\t\t\tsendmail(bidder_mail[j], \"Auction win notification\", mess);\n\t\t\t\t\tfree(mess);\n\n\t\t\t\t\t/* decrease available quantity marker until all is sold,\n\t\t\t\t\t * dutch auctions only\n\t\t\t\t\t */\n\t\t\t\t\tl -= bid_lot[j];\n\t\t\t\t}\n\n\t\t\t\t/* now notify the seller with a list of winning bids\n\t\t\t\t */\n\t\t\t\tasprintf(&mess, _(\n\"\\tDear %s,\\n\"\n\"\\n\"\n\"On your closed auction:\\n\"\n\"\\n\"\n\"- title: %s\\n\"\n\"- id: %d\\n\"\n\"- end date: %s\\n\"\n\"\\n\"\n\"The following winning bid(s) have been placed:\\n\"\n\"%s\\n\"\n\"\\n\"\n\"Please contact the winner(s) as soon as possible to close the transaction\\n\"\n\"-- \\n\"\n\"Apartia auction daemon\\n\"),\n\t\t\t\tseller_login, title, auction_id, stopdate, winner\n\t\t\t\t);\n\t\t\t\tfree(winner);\n\t\t\t\tsendmail(seller_mail,\n\t\t\t\t\t\t_(\"Auction successful close notification\"), mess);\n\t\t\t\tfree(mess);\n\n\n\t\t\t} else if (type == AUCTION_REVERSE || type == AUCTION_FIXED) 
{\n\t\t\t} else if (type == AUCTION_BID) {\n\t\t\t}\n\n\t\t\t/* clean up memory\n\t\t\t */\n/*\t\t\tfree(bid_price);\n\t\t\tfree(bid_lot);\n\t\t\tfree(bidder_mail);\n\t\t\tfree(bidder_login);\n\t\t\tfree(bidder_locale);\n\t\t\tfree(bidder_id);*/\n\t\t}\n\n\t\t/* DELETE all old bids\n\t\t */\n\t\tasprintf(&query, \"DELETE FROM bid WHERE auction_id = %d\",\n\t\t\t\tauction_id);\n\t\tSPI_exec(query, 0);\n\t\tfree(query);\n\n\t\tasprintf(&query, \"DELETE FROM autobid WHERE auction_id = %d\",\n\t\t\t\tauction_id);\n\t\tSPI_exec(query, 0);\n\t\tfree(query);\n\n\t\t/* renew expired auctions with unsold lots\n\t\t */\n\t\tif (auction_status < 0 && renew_count > 0) {\n\t\t\tasprintf(&query, \"UPDATE auction SET startdate = now(), \\\n\t\t\t\tstopdate = now() + (stopdate - startdate), \\\n\t\t\t\trenew_count = renew_count - 1, \\\n\t\t\t\tlot = -auction_status(id), notified = FALSE, \\\n\t\t\t\tWHERE id = %d\", auction_id);\n\t\t\tSPI_exec(query, 0);\n\t\t\tfree(query);\n\n\t\t\t/* localize message, numbers, dates\n\t\t\t */\n\t\t\tsetlocale(LC_ALL, seller_locale);\n\t\t\tsetenv(\"LC_ALL\", seller_locale, 1);\n\n\t\t\t/* notify seller of renewal\n\t\t\t */\n\t\t\tasprintf(&query, _(\n\"\\tDear %s\\n\"\n\"\\n\"\n\"Your expired auction:\\n\"\n\"- title: %s\\n\"\n\"- id: %d\\n\"\n\"- end date: %s\\n\"\n\"\\n\"\n\"has been auto-renewed today with the same duration.\\n\"\n\"\\n\"\n\"Greetings\\n\"\n\"-- \\n\"\n\"The auction daemon\\n\"\n\t\t\t), seller_login, title, auction_id, stopdate);\n\t\t\tsendmail(seller_mail, _(\"Auction renewal notice\"), query);\n\t\t\tfree(query);\n\t\t} else {\n\n\t\t\t/* auction was closed and fully sold OR not auto_renewed,\n\t\t\t */\n\t\t\tasprintf(&query, \"DELETE FROM auction WHERE id = %d\", auction_id);\n\t\t\tSPI_exec(query, 0);\n\t\t\tfree(query);\n\n\t\t\t/* only notify if nothing was sold; when something has been sold\n\t\t\t * normal winner/seller notification has already taken place\n\t\t\t * higher in this code\n\t\t\t */\n\t\t\tif 
(auction_status == -lot) {\n\t\t\t\tsetlocale(LC_ALL, seller_locale);\n\t\t\t\tsetenv(\"LC_ALL\", seller_locale, 1);\n\t\t\t\t/* notify seller of auction end\n\t\t\t\t */\n\t\t\t\tasprintf(&query, _(\n\"\\tDear %s\\n\"\n\"\\n\"\n\"Your expired auction:\\n\"\n\"- title: %s\\n\"\n\"- id: %d\\n\"\n\"- end date: %s\\n\"\n\"\\n\"\n\"Has been removed from the system.\\n\"\n\"\\n\"\n\"-- \\n\"\n\"The auction daemon\\n\"\n\t\t\t\t), seller_login, title, auction_id, stopdate);\n\t\t\t\tsendmail(seller_mail, _(\"Auction expiration notice\"), query);\n\t\t\t\tfree(query);\n\t\t\t}\n\t\t}\n/*#endif*/\n\t\telog(NOTICE, \"End of loop %d\", i);\n\t}\n\n\t/* restore default locale\n\t */\n\tsetlocale(LC_ALL, default_locale);\n\tsetenv(\"LC_ALL\", default_locale, 1);\n\n/*\tasprintf(&query, \"COMMIT\");\n\tSPI_exec(query, 0);\n\tfree(query);*/\n\n\tSPI_finish();\n\treturn current;\n}\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\nRadioactive cats have 18 half-lives.\n", "msg_date": "Wed, 27 Sep 2000 08:53:58 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "function crashes backend" }, { "msg_contents": "On Wed, Sep 27, 2000 at 08:53:58AM +0200, Louis-David Mitterrand wrote:\n> Hello,\n> \n> I am writing a SPI function to run maintenance tasks on my auction\n> system but it keeps crashing the backend after running only one loop.\n> Now, I am not a C programmer, nor do I have any formal training in CS. I\n> thought I might run this function by you guys so that a cursory look\n> might reveal some obvious coding mistake? \n> \n> Thanks in advance for your insight.\n\nFollowing up to myself, I finally understood my problem: I was trying to\nre-use SPI_tuptable->vals[i] after calling SPI_exec() on another,\nunrelated query. So the backend crash makes perfect sense now.\n\nWhat is the best strategy: \n- store the result of a SELECT returning multiple tuples into a local\n SPITupleTable? 
How do I allocate memory for that?\n- iterate over all values contained in the tuples and store _them_ into\n char**, int*, arrays, before re-running SPI_exec() on the second query?\n\nTIA\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.org\n\n> Any suggestions for setting up WinCVS client + (server) on NT4?\nRun away screaming in terror.\n\t\t\t\t\t--Toby.\n", "msg_date": "Wed, 27 Sep 2000 10:13:15 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function crashes backend" } ]
[ { "msg_contents": "> Hello again.\n> \n> I'm in the process of making a COM wrapper (enabling VB to connect to\n> PostGres) for the libpq library using Visual Studio 6.0 Pro, but have\n> a few problems. I can make use of the libpq.dll library from the COM\n> wrapper, but I thought that it might be a bit better if the actual\n> library routines were included in the COM wrapper, and thus making the\n> libpq.dll unneeded.\nSounds great. I've been thinking about doing this myself, but never got\naround to it... Linking with the static library is definitely a good idea -\nit makes it possible to deploy libpq functionality using ActiveX over a\nnormal webpage, without requiring libpq.dll to be installed on every\nmachine.\n\n\n> But if I include all the files that make up the library, then I can't\n> get a connection. I even tried to make a small test file which works\n> ok when using the dll, but doesn't when I include the actual .c files\n> which make up the dll and static link library (.lib). Linking with the\n> static link library that comes with the dll file doesn't work either.\nWhen you use the static library, you need to initialize the Winsock library\nyourself. You need code like:\nWSADATA wsaData;\nif (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n cout << \"Failed to initialize winsock: \" << GetLastError() << endl;\n exit(1);\n}\n\nThe code that handles this in the DLL is located in\nsrc/interfaces/libpq/libpqdll.c - for reference.\n\n\n> PS. I have also created a project file for the psql application, if\n> this has any interest it might as well be included in the future\n> distributions of postgresql. The dll and static link library is also\n> more or less finished, but as I can't link with the static link\n> library I don't think that it has much interest until the problem is\n> solved.\nI originally chose not to include the project file as it can change format\nbetween versions of Visual Studio. If you create it in VS6, it cannot be used\nin VS5 IIRC. The Makefile format works in both.\nYou can create a project file while you work and then export that to a\nMakefile once things compile (Project -> Export Makefile).\n\n\n//Magnus\n", "msg_date": "Wed, 27 Sep 2000 09:31:00 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: libpq static link library doesn't work (M$ VS6)" } ]
[ { "msg_contents": "If you've had the feeling lately that you are getting a little less email\nfrom me than expected in the process of due information to the other\ndevelopers, let me assure you that I've had the feeling lately that I'm\ngetting a little less than hoped for answers to my countless posts.\n\nThe reason, however, is not that you or I am lazy but that the mailing\nlist filters have varyingly interesting objections to the mail headers I\nam generating. At least I think that they are interesting because of\ncourse they don't tell me about it.\n\nThat said, please accept my apologies for this situation.\n\n(Technical note: My latest guess is that my IP's are on a dial-up pool\nblocking list. That's \"interesting\" because I'm using a DSL connection.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 27 Sep 2000 12:26:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "There's the rub... (a meta note)" } ]
[ { "msg_contents": "\nWell all, I just spent a bit of time trying to figure out how to recover a\ndatabase where the tables appear to be intact with postgres in 'single\nuser mode', and came up with a quick and dirty that might not be totally\ncomplete, but might help someone else in a similar situation ...\n\n----------------------\n#!/usr/bin/perl\n\n$table = $ARGV[0];\n\nwhile(<STDIN>) {\n if(length($fields) > 0 && /\\s+----/) {\n print \"INSERT INTO $table ( $fields ) VALUES ( $values );\\n\";\n $fields = \"\";\n $values = \"\";\n }\n if(/\\s+\\d: (\\w+) = \"(.+)\"/) {\n if(length($fields) > 0) { $fields .= \",\"; }\n $fields .= $1;\n if(length($values) > 0) { $values .= \",\"; }\n if(/typeid = 23/) {\n $values .= $2;\n } else {\n $values .= \"'\" . $2 . \"'\";\n }\n }\n}\n----------------------\n\nTo run it, use:\n\necho \"SELECT * FROM <table>;\" | \\\n~/bin/postgres -O -P -D/home/staff/scrappy/recovery/sales.org <database> | \\\n../convert.pl <table>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 27 Sep 2000 16:56:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Recovery from hard drive failure ... the hard way ..." } ]
[ { "msg_contents": "> Following up to myself, I finally understood my problem: I \n> was trying to\n> re-use SPI_tuptable->vals[i] after calling SPI_exec() on another,\n> unrelated query. So the backend crash makes perfect sense now.\n> \n> What is the best strategy: \n> - store the result of a SELECT returning multiple tuples into a local\n> SPITupleTable? How do I allocate memory for that?\n\nYou can just save SPITupleTable pointer somewhere before running another\nquery. SPI doesn't free tuple table between queries but creates new one\nfor each select query.\n\nVadim\n", "msg_date": "Wed, 27 Sep 2000 13:14:53 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: function crashes backend" } ]
[ { "msg_contents": "> > Why not implement *true* CLUSTER?\n> > With cluster, all heap tuples will be in cluster index.\n> \n> It would be nice. It's pity that pg AMs are not general.\n> There is no simple way to use btree instead of heap. But\n> it would help.\n> But using values from index is good idea too because you\n> can have table with many columns and aggregate query which\n> needs only two columns.\n> The it will be MUCH faster to create secondary index which\n> is much smaller than heap and use values from it.\n\nAgreed. But this will add 16 bytes (2 xid + 2 cid) to size of btitems.\nCurrently, total size of btitem for 2-int4 key index is 16 bytes =>\nnew feature will double size of index and increase # of levels\n(obviously bad for mostly static tables).\n\nSo, this feature should be *optional*...\n\n> Vadim where can I found some code from upcoming WAL ?\n> I'm thinking about implementing special ranked b-tree\n> which will store precomputed aggregate values (like\n> cnt,min,max,sum) in btree node keys. It can be then\n> used for extremely fast evaluation of aggregates. But\n> in case of MVCC it is more complicated and I'd like\n> to see how it would be affected by WAL.\n\nMVCC will not be affected by WAL currently. It's issue\nof storage manager, not WAL.\n\nVadim\n", "msg_date": "Wed, 27 Sep 2000 14:46:33 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > The it will be MUCH faster to create secondary index which\n> > is much smaller than heap and use values from it.\n> \n> Agreed. But this will add 16 bytes (2 xid + 2 cid) to size of btitems.\n> Currently, total size of btitem for 2-int4 key index is 16 bytes =>\n> new feature will double size of index and increase # of levels\n> (obviously bad for mostly static tables).\n> \n> So, this feature should be *optional*...\n\nyes. 
it definitely should.\n\n> MVCC will not be affected by WAL currently. It's issue\n> of storage manager, not WAL.\n\nand where will the WAL sit ? can you explain it a bit ?\n\nthanks devik\n\n", "msg_date": "Thu, 28 Sep 2000 14:02:35 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" } ]
[ { "msg_contents": "> > It was discussed several times for btree - add heap tid to \n> > index key and you'll scan index for particulare tuple much faster.\n> \n> good idea :) Why don't just to use tid ALWAYS as last part of key ?\n> When btree code sees equal keys then it will compare tids ?\n> Would not be better to use oids ? The don't change during vacuum\n> and with tupleheader in index we will know it.\n\nIn some future I would like to make OIDs optional - they are not\nalways used, so why waste space?\n\n+ using TID would make keys unique and this would simplify\nhandling of duplicates.\n\n+ heap TID is already part of btitems in leaf nodes - OIDs would just\nincrease btiem size.\n\n> > Not sure what could be done for hash indices... order hash \n> > items with the same hash key?\n> \n> question is whether we need it for hash indices. it is definitely\n> good for btree as they support range retrieval. hash ind. doesn't\n> it so I wouldn't implement it for them.\n\nWe need in fast heap tuple --> index tuple lookup for overwriting\nstorage manager anyway...\n\nVadim\n", "msg_date": "Wed, 27 Sep 2000 15:03:30 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pgsql is 75 times faster with my new index scan" }, { "msg_contents": "> > question is whether we need it for hash indices. it is definitely\n> > good for btree as they support range retrieval. hash ind. doesn't\n> > it so I wouldn't implement it for them.\n> \n> We need in fast heap tuple --> index tuple lookup for overwriting\n> storage manager anyway...\n \noh .. there will be such one ?\n\n\n", "msg_date": "Thu, 28 Sep 2000 14:01:14 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: pgsql is 75 times faster with my new index scan" } ]
[ { "msg_contents": "\nPointers to what this is? Do we have it documented anywhere? Search\nengine, of course, is down, so can't search there ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 27 Sep 2000 22:15:40 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "The Data Base System is in recovery mode " }, { "msg_contents": "The Hermit Hacker writes:\n\n> Pointers to what this is? Do we have it documented anywhere? Search\n> engine, of course, is down, so can't search there ...\n\nFrom experience, not from code knowledge, this happens when some backend\ncrashed and took the others with it and the postmaster is _recovering_\nfrom that event (i.e., reinitializing). Should go away in a few seconds,\nnormally.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 28 Sep 2000 10:52:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The Data Base System is in recovery mode " } ]
[ { "msg_contents": "Yeah: ST is designed for network apps, and it's for network bound apps that\nyou\ngain the most performance - but by using it to allow\na child process to hold multiple connections and accept/return data to\nthose connections simultaneously, I foresaw a potential performance\nimprovement...\n*shrug* Most connections remain idle most of their life... yes?\n\nThe SGI folks developed this library as part of a project to make apache\nfaster (http://aap.sourceforge.net/) - multiple child\nprocesses as normal, but allowed multiple connections per child.\n\nAnd although the performance improvements they got were greatest on irix,\nperformance was improved up to 70% on linux. Some of this was from QSC\n(http://aap.sourceforge.net/mod_qsc.html), however...\n\njust some food for thought.\n\n\n----- Original Message -----\nFrom: \"Neil Conway\" <[email protected]>\nTo: \"Jon Franz\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, June 05, 2002 8:05 PM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> On Wed, 5 Jun 2002 18:50:46 -0400\n> \"Jon Franz\" <[email protected]> wrote:\n> > One note: SGI developers discovered they could get amazing performance\nusing\n> > as hybrid threaded and forked-process model with apache - we might want\nto\n> > look into this. They even have a library for network-communication\n> > utilizing thier 'state threads' model.\n>\n> I think ST is designed for network I/O-bound apps -- last I checked,\n> disk I/O will still block an entire ST process. While you can get around\n> that by using another process to do disk I/O, it sounds like ST won't be\n> that useful.\n>\n> However, Chris KL. (I believe) raised the idea of using POSIX AIO for\n> PostgreSQL. Without having looked into it extensively, this technique\n> sounds promising. Perhaps someone who has looked into this further\n> (e.g. 
someone from Redhat) can comment?\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <[email protected]>\n> PGP Key ID: DB3C29FC\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Wed, 27 Sep 2000 23:53:44 -0400", "msg_from": "\"Jon Franz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Hi all,\n\nI followed the various threads regarding this for some time now. My current \nsituation is:\n\nI'm working at a company which does industrial automation, and does it's own \ncustom products. We try to be cross-platform, but it's a windoze world, as \nfar as most measurement devices or PLCs are concerned. We also employ \ndatabases for various tasks (including simple ones as holding configuration \ndata, but also hammering production data into it at a rate of several hundred \nrecords/sec.)\nWell, we would *love* to use PostgreSQL in most our projects and products, \n(and we do already use it in some), because it has proven to be very reliable \nand quite fast.\n\nSo, I'm faced with using PostgreSQL on windows also (you can't always put a \nLinux box besides). We do this using cygwin, but it's a bit painful ;-) \n(although it works!).\n\nThinking about the hreads I read, it seems there are 2 obstacles to native PG \non W:\n\n1.) no fork,\n2.) no SYSV IPC\n\nOk, 1.) is an issue, but there's a fork() in MinGW, so it's 'just' going to \nbe a bit slow on new connections to the DB, right?? But this could be sorted \nout once we *have* a native WIN32 build.\n\nThe second one's a bit harder, but... I'm currently trying to find time to do \na minimal implementation of SYSV IPC on WIN32 calls, just enough to get PG up \n(doesn't need msg*() for example, right?). \nAs far as I understand it, we would not need to have IPC items around *after* \nall backends and postmaster have gone away, or? 
Then there's no need for a \n'daemon' process like in cygwin.\n\nSo, my route would be to get it to run *somehow* without paying attention to \nspeed and not to change much of the existing code, THEN see how we could get \nrid of fork() on windows.\n\nWhat do you guys think? Anyone up to join efforts? (I'll start the IPC thingy \nanyway, as an exercise, and see where I'll end).\n\nGreetings,\n Joerg\n\nP.s.: thanks for a great database system!!\n-- \nLeading SW developer - S.E.A GmbH\nMail: [email protected]\nWWW: http://www.sea-gmbh.com\n", "msg_date": "Thu, 16 May 2002 13:47:45 +0200", "msg_from": "Joerg Hessdoerfer <[email protected]>", "msg_from_op": false, "msg_subject": "WIN32 native ... lets start?!?" }, { "msg_contents": "\nActually, take a look at the thread starting at:\n\n\thttp://archives.postgresql.org/pgsql-hackers/2002-05/msg00665.php\n\nRight now, IMHO, the big show stopper is passing global variables to the\nchild processes in Windows ... the above thread talks about a method of\npulling together the global variables *cleanly* that Tom seems to feel\nwouldn't add much in the way of long term maintenance headaches ... *and*,\nas I understand it, would provide us with a means to use threading in\nfuture developments if deemed appropriate ...\n\n From what I read by those 'in the know' about Windows programming, if we\ncould centralize the global variables somewhat, using CreateProcess in\nWindows shouldn't be a big deal, eliminiating the whole fork() headache\n...\n\nOn Thu, 16 May 2002, Joerg Hessdoerfer wrote:\n\n> Hi all,\n>\n> I followed the various threads regarding this for some time now. My current\n> situation is:\n>\n> I'm working at a company which does industrial automation, and does it's own\n> custom products. We try to be cross-platform, but it's a windoze world, as\n> far as most measurement devices or PLCs are concerned. 
We also employ\n> databases for various tasks (including simple ones as holding configuration\n> data, but also hammering production data into it at a rate of several hundred\n> records/sec.)\n> Well, we would *love* to use PostgreSQL in most our projects and products,\n> (and we do already use it in some), because it has proven to be very reliable\n> and quite fast.\n>\n> So, I'm faced with using PostgreSQL on windows also (you can't always put a\n> Linux box besides). We do this using cygwin, but it's a bit painful ;-)\n> (although it works!).\n>\n> Thinking about the hreads I read, it seems there are 2 obstacles to native PG\n> on W:\n>\n> 1.) no fork,\n> 2.) no SYSV IPC\n>\n> Ok, 1.) is an issue, but there's a fork() in MinGW, so it's 'just' going to\n> be a bit slow on new connections to the DB, right?? But this could be sorted\n> out once we *have* a native WIN32 build.\n>\n> The second one's a bit harder, but... I'm currently trying to find time to do\n> a minimal implementation of SYSV IPC on WIN32 calls, just enough to get PG up\n> (doesn't need msg*() for example, right?).\n> As far as I understand it, we would not need to have IPC items around *after*\n> all backends and postmaster have gone away, or? Then there's no need for a\n> 'daemon' process like in cygwin.\n>\n> So, my route would be to get it to run *somehow* without paying attention to\n> speed and not to change much of the existing code, THEN see how we could get\n> rid of fork() on windows.\n>\n> What do you guys think? Anyone up to join efforts? 
(I'll start the IPC thingy\n> anyway, as an exercise, and see where I'll end).\n>\n> Greetings,\n> Joerg\n>\n> P.s.: thanks for a great database system!!\n> --\n> Leading SW developer - S.E.A GmbH\n> Mail: [email protected]\n> WWW: http://www.sea-gmbh.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Thu, 16 May 2002 10:38:25 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" }, { "msg_contents": "On Thu, 2002-05-16 at 13:47, Joerg Hessdoerfer wrote:\n> So, my route would be to get it to run *somehow* without paying attention to \n> speed and not to change much of the existing code, THEN see how we could get \n> rid of fork() on windows.\n\nGetting it to compile and then \"somehow\" run on MinGW seems a good first\nstep on road to full native Win32 PG.\n\n----------\nHannu\n\n\n", "msg_date": "16 May 2002 16:57:14 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" 
}, { "msg_contents": "\r\n> On Thu, 2002-05-16 at 13:47, Joerg Hessdoerfer wrote:\r\n> > So, my route would be to get it to run *somehow* without paying\r\n> > attention to speed and not to change much of the existing code,\r\n> > THEN see how we could get rid of fork() on windows.\r\n> \r\n\r\nWhat is the biggest problem here?\r\nThe Shmem/IPC stuff, or the fork() stuff?\r\nI'm think that we could do a fork() implementation in usermode by copying the memory allocations.\r\nHow fast that would be regarding the context switches, i don't know, but i'm willing to experiment some to see how feesible this is...\r\n\r\nAnyone tried this before?\r\n\r\nMagnus\r\n\r\n", "msg_date": "Thu, 16 May 2002 22:10:03 +0200", "msg_from": "\"Magnus Naeslund(f)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" }, { "msg_contents": "On Thursday 16 May 2002 22:10, you wrote:\n[...]\n>\n> What is the biggest problem here?\n> The Shmem/IPC stuff, or the fork() stuff?\n> I'm think that we could do a fork() implementation in usermode by copying\n> the memory allocations. How fast that would be regarding the context\n> switches, i don't know, but i'm willing to experiment some to see how\n> feesible this is...\n>\n> Anyone tried this before?\n>\n> Magnus\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nThe problem is not the fork() call itself, this has been done (MinGW and \ncygwin I know of, possibly others) but the speed of fork() on windows, it's \ncreepingly slow (due to usermode copy, I assume ;-).\n\nIPC needs to be done, I'm just about to start...\n\nGreetings,\n\tJoerg\n-- \nLeading SW developer - S.E.A GmbH\nMail: [email protected]\nWWW: http://www.sea-gmbh.com\n", "msg_date": "Thu, 16 May 2002 22:35:58 +0200", "msg_from": "Joerg Hessdoerfer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" 
}, { "msg_contents": "Joerg Hessdoerfer <[email protected]> wrote:\r\n[snip]\r\n> The problem is not the fork() call itself, this has been done (MinGW\r\n> and cygwin I know of, possibly others) but the speed of fork() on\r\n> windows, it's creepingly slow (due to usermode copy, I assume ;-).\r\n> \r\n> IPC needs to be done, I'm just about to start...\r\n> \r\n\r\nI'm not so familiar with the win32 kernel mode stuff.\r\nBut i've seen programs using .vxd (kernelmode, ring X ?) helpers for getting more privileges, maybe cross process ones.\r\nWell, i'll look into this sometime if it's possible to reduce the context switches by going vxd.\r\nThere must be some way to read protection of the pages and map them as COW or RO in the new process to get rid of much of the copy, but then again, we're talking microsoft here :)\r\nI once did a .exe loader that used the MapViewOfFile (win32 mmap) of the .exe itself to accomplish shared loadable modules that worked on x86 linux and win32 without recompile (might be something like the XFree86 binary gfx card drivers).\r\n\r\nGood luck on the IPC work!\r\n\r\nMagnus\r\n\r\n-- \r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n Programmer/Networker [|] Magnus Naeslund\r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n\r\n", "msg_date": "Fri, 17 May 2002 02:16:02 +0200", "msg_from": "\"Magnus Naeslund(f)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" }, { "msg_contents": "Maybe Vince could set up a Win32 porting project page, and since we now seem\nto have a few interested parties willing to code on a native Win32 version,\nthey should have their own project page. 
This could make communication\neasier for them and make sure the project doesn't die...\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Joerg\n> Hessdoerfer\n> Sent: Friday, 17 May 2002 4:36 AM\n> To: Magnus Naeslund(f)\n> Cc: [email protected]\n> Subject: Re: [HACKERS] WIN32 native ... lets start?!?\n>\n>\n> On Thursday 16 May 2002 22:10, you wrote:\n> [...]\n> >\n> > What is the biggest problem here?\n> > The Shmem/IPC stuff, or the fork() stuff?\n> > I'm think that we could do a fork() implementation in usermode\n> by copying\n> > the memory allocations. How fast that would be regarding the context\n> > switches, i don't know, but i'm willing to experiment some to see how\n> > feesible this is...\n> >\n> > Anyone tried this before?\n> >\n> > Magnus\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n>\n> The problem is not the fork() call itself, this has been done (MinGW and\n> cygwin I know of, possibly others) but the speed of fork() on\n> windows, it's\n> creepingly slow (due to usermode copy, I assume ;-).\n>\n> IPC needs to be done, I'm just about to start...\n>\n> Greetings,\n> \tJoerg\n> --\n> Leading SW developer - S.E.A GmbH\n> Mail: [email protected]\n> WWW: http://www.sea-gmbh.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n", "msg_date": "Fri, 17 May 2002 10:25:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" }, { "msg_contents": "On Fri, 17 May 2002, Christopher Kings-Lynne wrote:\n\n> Maybe Vince could set up a Win32 porting project page, and since we now seem\n> to have a few interested parties willing to code on a native Win32 version,\n> they should have their own project page. 
This could make communication\n> easier for them and make sure the project doesn't die...\n\nMight be an idea to create a pgsql-hackers-win32 list also? Or just\npgsql-win32?\n\n\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Joerg\n> > Hessdoerfer\n> > Sent: Friday, 17 May 2002 4:36 AM\n> > To: Magnus Naeslund(f)\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] WIN32 native ... lets start?!?\n> >\n> >\n> > On Thursday 16 May 2002 22:10, you wrote:\n> > [...]\n> > >\n> > > What is the biggest problem here?\n> > > The Shmem/IPC stuff, or the fork() stuff?\n> > > I'm think that we could do a fork() implementation in usermode\n> > by copying\n> > > the memory allocations. How fast that would be regarding the context\n> > > switches, i don't know, but i'm willing to experiment some to see how\n> > > feesible this is...\n> > >\n> > > Anyone tried this before?\n> > >\n> > > Magnus\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> >\n> > The problem is not the fork() call itself, this has been done (MinGW and\n> > cygwin I know of, possibly others) but the speed of fork() on\n> > windows, it's\n> > creepingly slow (due to usermode copy, I assume ;-).\n> >\n> > IPC needs to be done, I'm just about to start...\n> >\n> > Greetings,\n> > \tJoerg\n> > --\n> > Leading SW developer - S.E.A GmbH\n> > Mail: [email protected]\n> > WWW: http://www.sea-gmbh.com\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 17 May 2002 00:17:49 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... 
lets start?!?" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> Might be an idea to create a pgsql-hackers-win32 list also? Or just\n> pgsql-win32?\n\nActually, I think that'd be a bad idea. The very last thing we need is\nfor these discussions to get fragmented. The issues affect the whole\nbackend AFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 May 2002 16:16:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!? " }, { "msg_contents": "On Friday 17 May 2002 22:16, you wrote:\n> \"Marc G. Fournier\" <[email protected]> writes:\n> > Might be an idea to create a pgsql-hackers-win32 list also? Or just\n> > pgsql-win32?\n>\n> Actually, I think that'd be a bad idea. The very last thing we need is\n> for these discussions to get fragmented. The issues affect the whole\n> backend AFAICS.\n>\n> \t\t\tregards, tom lane\n\nYes, indeed. I would also like to discuss matters on this list, as one get's \na 'heads up' from people in the know much easier.\n\nBTW, I'm in the process of doing the 'really only what is necessary for pg' \nipc-stuff, and was wondering if anybody already did some configuration of the \nsource tree towards MinGW?? How should we go about that? I would rather like \nnot using cygwin's sh for that ;-), and we have no 'ln' !!\n\nGreetings,\n\tJoerg\n-- \nLeading SW developer - S.E.A GmbH\nMail: [email protected]\nWWW: http://www.sea-gmbh.com\n", "msg_date": "Sat, 18 May 2002 15:57:42 +0200", "msg_from": "Joerg Hessdoerfer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIN32 native ... lets start?!?" }, { "msg_contents": "OK, I think I am now caught up on the Win32/cygwin discussion, and would\nlike to make some remarks.\n\nFirst, are we doing enough to support the Win32 platform? I think the\nanswer is clearly \"no\". There are 3-5 groups/companies working on Win32\nports of PostgreSQL. 
We always said there would not be PostgreSQL forks\nif we were doing our job to meet user needs. Well, obviously, a number\nof groups see a need for a better Win32 port and we aren't meeting that\nneed, so they are. I believe this is one of the few cases where groups\nare going out on their own because we are falling behind.\n\nSo, there is no question in my mind we need to do more to encourage\nWin32 ports. Now, on to the details.\n\nINSTALLER\n---------\n\nWe clearly need an installer that is zero-hassle for users. We need to\ndecide on a direction for this.\n\nGUI\n---\n\nWe need a slick GUI. pgadmin2 seems to be everyone's favorite, with\npgaccess on Win32 also an option. What else do we need here?\n\nBINARY\n------\n\nThis is the big daddy. It is broken down into several sections:\n\nFORK()\n\nHow do we handle fork()? Do we use the cygwin method that copies the\nwhole data segment, or put the global data in shared memory and copy\nthat small part manually after we create a new process?\n\nTHREADING\n\nRelated to fork(), do we implement an optionally threaded postmaster,\nwhich eliminates CreateProcess() entirely? I don't think we will have\nsuperior performance on Win32 without it. (This would greatly help\nSolaris as well.)\n\nIPC\n\nWe can use Cygwin, MinGW, Apache, or our own code for this. Are there\nother options?\n\nENVIRONMENT\n\nLots of our code requires a unix shell and utilities. Will we continue\nusing cygwin for this?\n\n---------------------------------------------------------------------------\n\nAs a roadmap, it would be good to get consensus on as many of these\nitems as possible so people can start working in these areas. We can\nkeep a web page of decisions we have made to help rally developers to\nthe project.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jun 2002 00:33:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Roadmap for a Win32 port" }, { "msg_contents": "On Wed, 2002-06-05 at 16:33, Bruce Momjian wrote:\n> OK, I think I am now caught up on the Win32/cygwin discussion, and would\n> like to make some remarks.\n> \n> First, are we doing enough to support the Win32 platform? I think the\n> answer is clearly \"no\". There are 3-5 groups/companies working on Win32\n> ports of PostgreSQL. We always said there would not be PostgreSQL forks\n> if we were doing our job to meet user needs. Well, obviously, a number\n> of groups see a need for a better Win32 port and we aren't meeting that\n> need, so they are. I believe this is one of the few cases where groups\n> are going out on their own because we are falling behind.\n> \n> So, there is no question in my mind we need to do more to encourage\n> Win32 ports. Now, on to the details.\n> \n> INSTALLER\n> ---------\n> \n> We clearly need an installer that is zero-hassle for users. We need to\n> decide on a direction for this.\n> \n> GUI\n> ---\n> \n> We need a slick GUI. pgadmin2 seems to be everyone's favorite, with\n> pgaccess on Win32 also an option. What else do we need here?\n> \n> BINARY\n> ------\n> \n> This is the big daddy. It is broken down into several sections:\n> \n> FORK()\n> \n> How do we handle fork()? Do we use the cygwin method that copies the\n> whole data segment, or put the global data in shared memory and copy\n> that small part manually after we create a new process?\n> \n> THREADING\n> \n> Related to fork(), do we implement an optionally threaded postmaster,\n> which eliminates CreateProcess() entirely? I don't think we will have\n> superior performance on Win32 without it. (This would greatly help\n> Solaris as well.)\n> \n> IPC\n> \n> We can use Cygwin, MinGW, Apache, or our own code for this. 
Are there\n> other options?\n> \n> ENVIRONMENT\n> \n> Lots of our code requires a unix shell and utilities. Will we continue\n> using cygwin for this?\n> \n> ---------------------------------------------------------------------------\n> \n> As a roadmap, it would be good to get consensus on as many of these\n> items as possible so people can start working in these areas. We can\n> keep a web page of decisions we have made to help rally developers to\n> the project.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \nIs it worth looking at how the mysql crowd did their win32 port -\n(or is that intrinsically a _bad_thing_ to do..) ?\n\n(I am guessing that is why their sources requires c++ ....)\n\nregards\n\nMark\n\n", "msg_date": "05 Jun 2002 19:38:52 +1200", "msg_from": "Mark kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Mark kirkwood wrote:\n> Is it worth looking at how the mysql crowd did their win32 port -\n> (or is that intrinsically a _bad_thing_ to do..) ?\n> \n> (I am guessing that is why their sources requires c++ ....)\n\nAbsolutely worth seeing how MySQL does it. They use cygwin, and I\nassume they aren't seeing the fork() issue because they are threaded,\nand perhaps they don't use SysV IPC like we do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jun 2002 11:18:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "I might be naive here, but would not proper threading model remove the need\nfor fork() altogether? On both Unix and Win32? Should not be too hard to\ncome up with abstraction which encapsulates POSIX, BeOS and Win32 threads...\nI am not sure how universal POSIX threads are by now. Any important Unix\nplatforms which don't support them yet?\n\nThis has downside of letting any bug to kill the whole thing. On the bright\nside, performance should be better on some platforms (note however, Apache\ngroup still can't come up with implementation of threaded model which would\nprovide better performance than forked or other models). The need to deal\nwith possibility of 'alien' postmaster running along with orphaned backends\nwould also be removed since there would be only one process.\n\nIssue of thread safety of code will come up undoubtedly and some things will\nprobably have to be revamped. But in long term this is probably best way if\nyou want to have efficient and uniform Unix AND Win32 implementations.\n\nI am not too familiar with Win32. Speaking about POSIX threads, it would be\nsomething like a thread pool with low & high watermarks. Main thread would\nhandle thread pool and hand over requests to worker threads (blocked on\ncondvar). How does that sound?\n\n-- igor\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <[email protected]>\nTo: \"PostgreSQL-development\" <[email protected]>\nSent: Tuesday, June 04, 2002 11:33 PM\nSubject: [HACKERS] Roadmap for a Win32 port\n\n\n> OK, I think I am now caught up on the Win32/cygwin discussion, and would\n> like to make some remarks.\n>\n> First, are we doing enough to support the Win32 platform? I think the\n> answer is clearly \"no\". 
There are 3-5 groups/companies working on Win32\n> ports of PostgreSQL. We always said there would not be PostgreSQL forks\n> if we were doing our job to meet user needs. Well, obviously, a number\n> of groups see a need for a better Win32 port and we aren't meeting that\n> need, so they are. I believe this is one of the few cases where groups\n> are going out on their own because we are falling behind.\n>\n> So, there is no question in my mind we need to do more to encourage\n> Win32 ports. Now, on to the details.\n>\n> INSTALLER\n> ---------\n>\n> We clearly need an installer that is zero-hassle for users. We need to\n> decide on a direction for this.\n>\n> GUI\n> ---\n>\n> We need a slick GUI. pgadmin2 seems to be everyone's favorite, with\n> pgaccess on Win32 also an option. What else do we need here?\n>\n> BINARY\n> ------\n>\n> This is the big daddy. It is broken down into several sections:\n>\n> FORK()\n>\n> How do we handle fork()? Do we use the cygwin method that copies the\n> whole data segment, or put the global data in shared memory and copy\n> that small part manually after we create a new process?\n>\n> THREADING\n>\n> Related to fork(), do we implement an optionally threaded postmaster,\n> which eliminates CreateProcess() entirely? I don't think we will have\n> superior performance on Win32 without it. (This would greatly help\n> Solaris as well.)\n>\n> IPC\n>\n> We can use Cygwin, MinGW, Apache, or our own code for this. Are there\n> other options?\n>\n> ENVIRONMENT\n>\n> Lots of our code requires a unix shell and utilities. Will we continue\n> using cygwin for this?\n>\n> --------------------------------------------------------------------------\n-\n>\n> As a roadmap, it would be good to get consensus on as many of these\n> items as possible so people can start working in these areas. 
We can\n> keep a web page of decisions we have made to help rally developers to\n> the project.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Wed, 5 Jun 2002 14:32:13 -0500", "msg_from": "\"Igor Kovalenko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Igor Kovalenko wrote:\n> I might be naive here, but would not proper threading model remove the need\n> for fork() altogether? On both Unix and Win32? Should not be too hard to\n> come up with abstraction which encapsulates POSIX, BeOS and Win32 threads...\n> I am not sure how universal POSIX threads are by now. Any important Unix\n> platforms which don't support them yet?\n> \n> This has downside of letting any bug to kill the whole thing. On the bright\n> side, performance should be better on some platforms (note however, Apache\n> group still can't come up with implementation of threaded model which would\n> provide better performance than forked or other models). The need to deal\n> with possibility of 'alien' postmaster running along with orphaned backends\n> would also be removed since there would be only one process.\n> \n> Issue of thread safety of code will come up undoubtedly and some things will\n> probably have to be revamped. But in long term this is probably best way if\n> you want to have efficient and uniform Unix AND Win32 implementations.\n> \n> I am not too familiar with Win32. Speaking about POSIX threads, it would be\n> something like a thread pool with low & high watermarks. 
Main thread would\n> handle thread pool and hand over requests to worker threads (blocked on\n> condvar). How does that sound?\n\nGood summary. I think we would support both threaded and fork()\noperation, and users can control which they prefer. For a web backend\nwhere many sessions are a single query, people may want to give up the\nstability of fork() and go with threads, even on Unix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jun 2002 16:05:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "...\n> Good summary. I think we would support both threaded and fork()\n> operation, and users can control which they prefer. For a web backend\n> where many sessions are a single query, people may want to give up the\n> stability of fork() and go with threads, even on Unix.\n\nI would think that we would build on our strengths of having a fork/exec\nmodel for separate clients. A threaded model *could* benefit individual\nclients who are doing queries on multiprocessor servers, and I would be\nsupportive of efforts to enable that.\n\nBut the requirements for that may be less severe than for managing\nmultiple clients within the same process, and imho there is not strong\nrequirement to enable the latter for our current crop of well supported\ntargets. If it came for free then great, but if it came with a high cost\nthen the choice is not as obvious. 
It is also not a *requirement* if we\nwere instead able to do the multiple threads for a single client\nscenario first.\n\n - Thomas\n", "msg_date": "Wed, 05 Jun 2002 15:02:33 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "One note: SGI developers discovered they could get amazing performance using\na hybrid threaded and forked-process model with apache - we might want to\nlook into this. They even have a library for network-communication\nutilizing their 'state threads' model. Please see:\n\nhttp://state-threads.sourceforge.net/docs/st.html\n\nThus, on platforms where it can be supported, we should keep in mind that a\nhybrid multiprocess/multithreaded postgresql might be the fastest\nsolution...\n\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <[email protected]>\nTo: \"Igor Kovalenko\" <[email protected]>\nCc: \"PostgreSQL-development\" <[email protected]>\nSent: Wednesday, June 05, 2002 4:05 PM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> Igor Kovalenko wrote:\n> > I might be naive here, but would not proper threading model remove the\nneed\n> > for fork() altogether? On both Unix and Win32? Should not be too hard to\n> > come up with abstraction which encapsulates POSIX, BeOS and Win32\nthreads...\n> > I am not sure how universal POSIX threads are by now. Any important Unix\n> > platforms which don't support them yet?\n> >\n> > This has downside of letting any bug to kill the whole thing. On the\nbright\n> > side, performance should be better on some platforms (note however,\nApache\n> > group still can't come up with implementation of threaded model which\nwould\n> > provide better performance than forked or other models). 
The need to\ndeal\n> > with possibility of 'alien' postmaster running along with orphaned\nbackends\n> > would also be removed since there would be only one process.\n> >\n> > Issue of thread safety of code will come up undoubtedly and some things\nwill\n> > probably have to be revamped. But in long term this is probably best way\nif\n> > you want to have efficient and uniform Unix AND Win32 implementations.\n> >\n> > I am not too familiar with Win32. Speaking about POSIX threads, it would\nbe\n> > something like a thread pool with low & high watermarks. Main thread\nwould\n> > handle thread pool and hand over requests to worker threads (blocked on\n> > condvar). How does that sound?\n>\n> Good summary. I think we would support both threaded and fork()\n> operation, and users can control which they prefer. For a web backend\n> where many sessions are a single query, people may want to give up the\n> stability of fork() and go with threads, even on Unix.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Wed, 5 Jun 2002 18:50:46 -0400", "msg_from": "\"Jon Franz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "On Wed, 5 Jun 2002 18:50:46 -0400\n\"Jon Franz\" <[email protected]> wrote:\n> One note: SGI developers discovered they could get amazing performance using\n> as hybrid threaded and forked-process model with apache - we might want to\n> look into this. They even have a library for network-communication\n> utilizing thier 'state threads' model.\n\nI think ST is designed for network I/O-bound apps -- last I checked,\ndisk I/O will still block an entire ST process. 
While you can get around\nthat by using another process to do disk I/O, it sounds like ST won't be\nthat useful.\n\nHowever, Chris KL. (I believe) raised the idea of using POSIX AIO for\nPostgreSQL. Without having looked into it extensively, this technique\nsounds promising. Perhaps someone who has looked into this further\n(e.g. someone from Redhat) can comment?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 5 Jun 2002 20:05:44 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Yes I proposed to use the GNU Pth library instead. It's an event\ndemultiplexer just like the sgi library, but has a posix thread interface.\nThis architecture is actually the more robust and also the more scalable. On\na single processor server, you don't have the multi-thread synchronization\nand context switching overhead and you also take full advantage of\nmulti-processor servers when you create several processes. Plus you have\nmuch less concern about global variables.\n\nAlso for those concerned about the licence of this library here is an\nabstract of it:\n\"The author places this library under the LGPL to make sure that it\ncan be used both commercially and non-commercially provided that\nmodifications to the code base are always donated back to the official\ncode base under the same license conditions. Please keep in mind that\nespecially using this library in code not staying under the GPL or\nthe LGPL _is_ allowed and that any taint or license creap into code\nthat uses the library is not the authors intention. It is just the\ncase that _including_ this library into the source tree of other\napplications is a little bit more inconvinient because of the LGPL.\nBut it has to be this way for good reasons. 
And keep in mind that\ninconvinient doesn't mean not allowed or even impossible.\"\n\nSo it can be used in both commercial and non commercial project.\n\n\n----- Original Message -----\nFrom: \"Jon Franz\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, June 06, 2002 8:50 AM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> One note: SGI developers discovered they could get amazing performance\nusing\n> as hybrid threaded and forked-process model with apache - we might want to\n> look into this. They even have a library for network-communication\n> utilizing thier 'state threads' model. Please see:\n>\n> http://state-threads.sourceforge.net/docs/st.html\n>\n> Thus, on platforms where it can be supported, we should keep in mind that\na\n> hybrid multiprocess/multithreaded postgresql might be the fastest\n> solution...\n>\n>\n> ----- Original Message -----\n> From: \"Bruce Momjian\" <[email protected]>\n> To: \"Igor Kovalenko\" <[email protected]>\n> Cc: \"PostgreSQL-development\" <[email protected]>\n> Sent: Wednesday, June 05, 2002 4:05 PM\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n>\n>\n> > Igor Kovalenko wrote:\n> > > I might be naive here, but would not proper threading model remove the\n> need\n> > > for fork() altogether? On both Unix and Win32? Should not be too hard\nto\n> > > come up with abstraction which encapsulates POSIX, BeOS and Win32\n> threads...\n> > > I am not sure how universal POSIX threads are by now. Any important\nUnix\n> > > platforms which don't support them yet?\n> > >\n> > > This has downside of letting any bug to kill the whole thing. On the\n> bright\n> > > side, performance should be better on some platforms (note however,\n> Apache\n> > > group still can't come up with implementation of threaded model which\n> would\n> > > provide better performance than forked or other models). 
The need to\n> deal\n> > > with possibility of 'alien' postmaster running along with orphaned\n> backends\n> > > would also be removed since there would be only one process.\n> > >\n> > > Issue of thread safety of code will come up undoubtedly and some\nthings\n> will\n> > > probably have to be revamped. But in long term this is probably best\nway\n> if\n> > > you want to have efficient and uniform Unix AND Win32 implementations.\n> > >\n> > > I am not too familiar with Win32. Speaking about POSIX threads, it\nwould\n> be\n> > > something like a thread pool with low & high watermarks. Main thread\n> would\n> > > handle thread pool and hand over requests to worker threads (blocked\non\n> > > condvar). How does that sound?\n> >\n> > Good summary. I think we would support both threaded and fork()\n> > operation, and users can control which they prefer. For a web backend\n> > where many sessions are a single query, people may want to give up the\n> > stability of fork() and go with threads, even on Unix.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n\n\n", "msg_date": "Thu, 6 Jun 2002 10:50:09 +1000", "msg_from": "\"Nicolas Bazin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Neil Conway wrote:\n> On Wed, 5 Jun 2002 18:50:46 -0400\n> \"Jon Franz\" <[email protected]> wrote:\n> > One note: SGI developers discovered they could get amazing performance using\n> > as hybrid threaded and forked-process model with apache - we might want to\n> > look into this. They even have a library for network-communication\n> > utilizing thier 'state threads' model.\n> \n> I think ST is designed for network I/O-bound apps -- last I checked,\n> disk I/O will still block an entire ST process. While you can get around\n> that by using another process to do disk I/O, it sounds like ST won't be\n> that useful.\n> \n> However, Chris KL. (I believe) raised the idea of using POSIX AIO for\n> PostgreSQL. Without having looked into it extensively, this technique\n> sounds promising. Perhaps someone who has looked into this further\n> (e.g. someone from Redhat) can comment?\n\nI know Red Hat is interested in AIO. Only a few OS's support it so it\nwas hard to get excited about it at the time, but with threading, an\nAIO-specific module could be attempted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jun 2002 20:53:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Hello Thomas,\n\nWednesday, June 5, 2002, 7:02:33 PM, you wrote:\n\nTL> ...\n>> Good summary. I think we would support both threaded and fork()\n>> operation, and users can control which they prefer. For a web backend\n>> where many sessions are a single query, people may want to give up the\n>> stability of fork() and go with threads, even on Unix.\n\nTL> I would think that we would build on our strengths of having a fork/exec\nTL> model for separate clients. A threaded model *could* benefit individual\nTL> clients who are doing queries on multiprocessor servers, and I would be\nTL> supportive of efforts to enable that.\nJust a note - this is also the solution adopted by Interbase/Firebird\nand it seems interesting. They already faced the same problems\nPostgreSQL is facing today.\nThose interested in reading about Interbase's architecture, please refer\nto http://community.borland.com/article/0,1410,23217,00.html.\n\"Classic\" is the fork() model, and the \"SuperServer\" is the threaded\nmodel.\n-------------\nBest regards,\n Steve Howe mailto:[email protected]\n\n", "msg_date": "Wed, 5 Jun 2002 22:05:11 -0300", "msg_from": "Steve Howe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Gnu Pth also supports AIO\n----- Original Message -----\nFrom: \"Nicolas Bazin\" <[email protected]>\nTo: \"Jon Franz\" <[email protected]>; <[email protected]>\nSent: Thursday, June 06, 2002 10:50 AM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> Yes I proposed to use the GNU Pth library instead. 
It's an event\n> demultiplexer just like the sgi library, but has a posix thread interface.\n> This architecture is actually the more robust and also the more scalable.\nOn\n> a single processor server, you don't have the multi-thread synchronization\n> and context switching overhead and you also take full advantage of\n> multi-processor servers when you create several processes. Plus you have\n> much less concern about global variables.\n>\n> Also for those concerned about the licence of this library here is an\n> abstract of it:\n> \"The author places this library under the LGPL to make sure that it\n> can be used both commercially and non-commercially provided that\n> modifications to the code base are always donated back to the official\n> code base under the same license conditions. Please keep in mind that\n> especially using this library in code not staying under the GPL or\n> the LGPL _is_ allowed and that any taint or license creap into code\n> that uses the library is not the authors intention. It is just the\n> case that _including_ this library into the source tree of other\n> applications is a little bit more inconvinient because of the LGPL.\n> But it has to be this way for good reasons. And keep in mind that\n> inconvinient doesn't mean not allowed or even impossible.\"\n>\n> So it can be used in both commercial and non commercial project.\n>\n>\n> ----- Original Message -----\n> From: \"Jon Franz\" <[email protected]>\n> To: <[email protected]>\n> Sent: Thursday, June 06, 2002 8:50 AM\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n>\n>\n> > One note: SGI developers discovered they could get amazing performance\n> using\n> > as hybrid threaded and forked-process model with apache - we might want\nto\n> > look into this. They even have a library for network-communication\n> > utilizing thier 'state threads' model. 
Please see:\n> >\n> > http://state-threads.sourceforge.net/docs/st.html\n> >\n> > Thus, on platforms where it can be supported, we should keep in mind\nthat\n> a\n> > hybrid multiprocess/multithreaded postgresql might be the fastest\n> > solution...\n> >\n> >\n> > ----- Original Message -----\n> > From: \"Bruce Momjian\" <[email protected]>\n> > To: \"Igor Kovalenko\" <[email protected]>\n> > Cc: \"PostgreSQL-development\" <[email protected]>\n> > Sent: Wednesday, June 05, 2002 4:05 PM\n> > Subject: Re: [HACKERS] Roadmap for a Win32 port\n> >\n> >\n> > > Igor Kovalenko wrote:\n> > > > I might be naive here, but would not proper threading model remove\nthe\n> > need\n> > > > for fork() altogether? On both Unix and Win32? Should not be too\nhard\n> to\n> > > > come up with abstraction which encapsulates POSIX, BeOS and Win32\n> > threads...\n> > > > I am not sure how universal POSIX threads are by now. Any important\n> Unix\n> > > > platforms which don't support them yet?\n> > > >\n> > > > This has downside of letting any bug to kill the whole thing. On the\n> > bright\n> > > > side, performance should be better on some platforms (note however,\n> > Apache\n> > > > group still can't come up with implementation of threaded model\nwhich\n> > would\n> > > > provide better performance than forked or other models). The need to\n> > deal\n> > > > with possibility of 'alien' postmaster running along with orphaned\n> > backends\n> > > > would also be removed since there would be only one process.\n> > > >\n> > > > Issue of thread safety of code will come up undoubtedly and some\n> things\n> > will\n> > > > probably have to be revamped. But in long term this is probably best\n> way\n> > if\n> > > > you want to have efficient and uniform Unix AND Win32\nimplementations.\n> > > >\n> > > > I am not too familiar with Win32. Speaking about POSIX threads, it\n> would\n> > be\n> > > > something like a thread pool with low & high watermarks. 
Main thread\n> > would\n> > > > handle thread pool and hand over requests to worker threads (blocked\n> on\n> > > > condvar). How does that sound?\n> > >\n> > > Good summary. I think we would support both threaded and fork()\n> > > operation, and users can control which they prefer. For a web backend\n> > > where many sessions are a single query, people may want to give up the\n> > > stability of fork() and go with threads, even on Unix.\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > [email protected] | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to\[email protected]\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n> >\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\n\n", "msg_date": "Thu, 6 Jun 2002 11:22:40 +1000", "msg_from": "\"Nicolas Bazin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Hello Bruce,\n\nWednesday, June 5, 2002, 1:33:44 AM, you wrote:\n\nBM> INSTALLER\nBM> ---------\n\nBM> We clearly need an installer that is zero-hassle for users. We need to\nBM> decide on a direction for this.\nI suggest Nullsoft install system\n(http://www.nullsoft.com/free/nsis/). It's real good and very simple\nto use. I can help on this if you want.\n\nBM> ENVIRONMENT\n\nBM> Lots of our code requires a unix shell and utilities. 
Will we continue\nBM> using cygwin for this?\nThere are other ports ( http://unxutils.sourceforge.net/ ) that won't\nrequire Cygwin but they won't provide as complete an environment as\nCygwin does.\n\nI also would like to emphasize that probably a small GUI for\ncontrolling the PostgreSQL service/application would be nice. I think\nabout something sitting in the system tray like MSSQL, Oracle,\nInterbase, etc. does.\nI could code this in Delphi if you like. I don't have experience in\nwriting GUI apps in C. There are open source versions of Delphi so\nit won't be a problem compiling it.\nAlso coming with this, code for starting PostgreSQL as a service\nwould be really nice. For those from the UNIX world that don't know what a\nservice is, think about it as a daemon for Windows. A service can be\nautomatically started when the machine boots up.\n\n-------------\nBest regards,\n Steve Howe mailto:[email protected]\n\n", "msg_date": "Wed, 5 Jun 2002 22:37:17 -0300", "msg_from": "Steve Howe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "I think SGI gets amazing performance because they have very good (efficient)\nsynchronisation primitives on SGI. Some proprietary light-weight mutexes.\nUsing threaded or mixed model just by itself is not going to do a miracle.\nThreads will save you some context switch time, but that will probably\ntranslate into lower CPU usage rather than performance boost. And if your\nmutexes are not fast or awkwardly implemented (say Linux), it might be even\nworse. Apache is not all that fast on Linux as on SGI, whatever model you\nchose. I also doubt that purely threaded model would be slower than mixed\none.\n\nNow about the AIO model. It is useful when you need to do something else\nwhile I/O requests are being processed as long as platform does it in some\nuseful way. 
If all you can do is to submit requests and keep sitting in\nselect/poll then AIO does not buy you anything you can't get by just using\nthreaded model. However, if you can tag the requests and set up\nnotifications, then few I/O threads could handle relatively large number of\nrequests from different clients. Note, this means you don't have any\nassociation between clients and servers at all, there is pool of generic I/O\nthreads which serve requests from whoever they come. It saves system\nresources and scales very well. It also provides interesting possibilities\nfor fault recovery - since handlers are generic all the state information\nwould have to be kept in some kind of global context area. That area can be\nsaved into persistent memory or dumped onto disk and *recovered* after a\nforced restart. Server and library could be designed in such a way that\nclients may continue where they left with a recoverable error.\n\nIn POSIX AIO model you can tag requests and set up notifications via\nsynchronous signals. You wait for them *synchronously* in 'waiter' thread\nvia sigwaitinfo() and avoid the headache of asynchronous signals hitting you\nany time... Unfortunately on some platforms (Solaris) the depth of\nsynchronous signal queue is fixed at magic value 32 (and not adjustable).\nThis may not be a problem if you're sure that waiting thread will be able to\ndrain the queue faster than it gets filled with notifications... but I'm not\nsure there is a portable way to guarantee that, so you need to check for\noverloads and handle them... that complicates things. On Solaris you also\nneed a mile of compiler/linker switches to even get this scheme to work and\nI am afraid other platforms may not support it at all (but then again, they\nmay not support AIO to begin with).\n\nAnd speaking about getting best of all worlds. Note how Apache spent nearly\n3 years developing their portable Multi-Processing Modules scheme. 
What they\ngot for that is handful of models neither of which perform noticeably better\nthan original pre-fork() model. Trying to swallow all possible ways to\nhandle things on all possible platforms usually does not produce very fast\ncode. It tends to produce very complex code with mediocre performance and\nintroduces extra complexity into configuration process. If you consider all\nthat was done mostly to support Win32, one might doubt if it was worth the\nwhile.\n\nWhat I am trying to say is, extra complexity in model to squeeze few percent\nof performance is not a wise investment of time and efforts. On Win32 you\ndon't really compete in terms of performance. You compete in terms of\neasyness and features. Spend 3 years trying to support Windows and Unix in\nmost optimal way including all subvariants of Unix ... meanwhile MSFT will\ncome up with some bundled SQL server. It probably will have more features\nsince they will spend time doing features rather than inventing a model to\nsupport gazillion of platforms. Chances are, it will be faster too - due to\nbetter integration with OS and better compiler.\n\nI am not in position to tell you what to do guys. But if I was asked, I'd\nsay supporting Win32 is only worth it if it comes as a natural result of a\nsimple, coherent and uniform model applied to Unix. Threaded model may not\nhave as much inherent stability as forked/mixed, but it has inherent\nsimplicity and better Unix/Windows/BeOS portability. It can be done faster\nand simpler code will make work on features easier.\n\nRegards,\n- Igor\n\n\"There are 2 ways to design an efficient system - first is to design it so\ncomplex that there are no obvious deficiencies, second is to design it so\nsimple that there are obviously no deficiencies. 
Second way is much harder\"\n(author unknown to me)\n\n\n----- Original Message -----\nFrom: \"Neil Conway\" <[email protected]>\nTo: \"Jon Franz\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, June 05, 2002 7:05 PM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> On Wed, 5 Jun 2002 18:50:46 -0400\n> \"Jon Franz\" <[email protected]> wrote:\n> > One note: SGI developers discovered they could get amazing performance\nusing\n> > as hybrid threaded and forked-process model with apache - we might want\nto\n> > look into this. They even have a library for network-communication\n> > utilizing thier 'state threads' model.\n>\n> I think ST is designed for network I/O-bound apps -- last I checked,\n> disk I/O will still block an entire ST process. While you can get around\n> that by using another process to do disk I/O, it sounds like ST won't be\n> that useful.\n>\n> However, Chris KL. (I believe) raised the idea of using POSIX AIO for\n> PostgreSQL. Without having looked into it extensively, this technique\n> sounds promising. Perhaps someone who has looked into this further\n> (e.g. someone from Redhat) can comment?\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <[email protected]>\n> PGP Key ID: DB3C29FC\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Wed, 5 Jun 2002 21:54:29 -0500", "msg_from": "\"Igor Kovalenko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "\nHere is a summary of the responses to my Win32 roadmap. 
I hope this\nwill allow further discussion.\n\n---------------------------------------------------------------------------\n\nINSTALLER\n---------\nCygwin Setup.exe http://cygwin.com\nNullsoft http://www.nullsoft.com/free/nsis/\n\nGUI\n---\npgAdmin2 http://pgadmin.postgresql.org/pgadmin2.php?ContentID=1\npgaccess http://pgaccess.org/\nJava admin (to be written)\nDev-C++ admin (to be written) http://sourceforge.net/projects/dev-cpp/\n\nBINARY\n------\n\n\nFORK()\n\ncygwin fork() http://cygwin.com\nCreateProcess() and copy global area\n\nTHREADING\n\nPosix threads\nGnu pth http://www.gnu.org/software/pth/\nST http://state-threads.sourceforge.net/docs/st.html\n(single-session multi-threading possible)\n(Posix AIO is possible)\n\nIPC\n\nCygwin http://cygwin.com\nMinGW http://www.mingw.org/\nACE http://www.cs.wustl.edu/~schmidt/ACE.html\nAPR http://apr.apache.org/\nOur own\n\nENVIRONMENT\n\nCygwin http://cygwin.com\nUnxUtils http://unxutils.sourceforge.net/\nWrite own initdb\n\n\nIMPLEMENTATIONS\n---------------\nPostgreSQLe http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\nDbexperts http://www.dbexperts.net/postgresql\nConnx http://www.connx.com/\ngborg http://gborg.postgresql.org/project/winpackage/projdisplay.php\nInterbase http://community.borland.com/article/0,1410,23217,00.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jun 2002 22:57:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Igor Kovalenko wrote:\n> I think SGI gets amazing performance because they have very good (efficient)\n> synchronization primitives on SGI. 
Some proprietary light-weight mutexes.\n> Using threaded or mixed model just by itself is not going to do a miracle.\n> Threads will save you some context switch time, but that will probably\n> translate into lower CPU usage rather than performance boost. And if your\n> mutexes are not fast or awkwardly implemented (say Linux), it might be even\n> worse. Apache is not all that fast on Linux as on SGI, whatever model you\n> chose. I also doubt that purely threaded model would be slower than mixed\n> one.\n\nLet me throw out an idea. I have been mentioning full fork, light\nfork(copy globals only), and threading as possible solutions.\n\nAnother idea uses neither threading nor copying. It is the old system\nwe used before I removed exec() from our code. We used to pass the\ndatabase name as an argument to an exec'ed postgres binary that\ncontinued with the database connection.\n\nWe removed the exec, then started moving what we could into the\npostmaster so each backend didn't need to do the initialization.\n\nOne solution is to return to that for Win32 only, so instead of doing:\n\n\tinitialization()\n\twait for connection()\n\tfork backend()\n\nwe do for Win32:\n\n\twait for connection()\n\texec backend()\n\tinitialization()\n\nIt wouldn't be hard to do. We would still do CreateProcess rather than\nCreateThread, but it eliminates the fork/threading issues. We don't\nknow the database before the connection arrives, so we don't do a whole\nlot of initialization.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jun 2002 00:59:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Hi Bruce,\n\nYou obviously missed my recent posting advertising the homepage \nof Konstantin Knizhnik?\n\nMake sure to have a look: http://www.garret.ru/~knizhnik/\n\nYou find there -everything- concerning multiplatform IPC,\nthreading and even some extraordinary, complete database \nbackends that are superior to the database backends \npreviously available as open source (including PostgreSQL, \nI'm afraid...). The licensing of all of this stuff is -public domain-.\nI think this should really be worth a look/discussion/mentioning.\n\nHere is an excerpt of my last email, describing the furious list\nof features available in GOODS:\n\nSome core features of the GOODS backend (as they come to my mind):\n-> full ACID transaction support, incl. distributed transactions\n-> Multiple storage servers distributed over a TCP/IP network\n-> multiple reader/single writer (MVCC)\n-> dual client side object cache\n-> online backup (snapshot backup AND permanent backup)\n-> nested transactions on object level\n-> transaction isolation levels on object level\n-> object level shared and exclusive locks\n-> excellent C++ programming interface\n-> WAL\n-> garbage collection for no longer referenced database objects (online VACUUM)\n-> fully thread safe client interface\n-> JAVA client API\n-> very high performance as a result of a lot of fine tuning (better\n performance than berkeley db in my benchmarks!!!)\n-> asynchronous event notification on object instance modification\n-> extremely high code quality\n-> a one person effort, hence a very clean design\n-> the most relevant platforms are supported out of the box\n-> complete build is done in less than a minute on my machine\n-> it's documented\n-> it's tested and found to be working for a while now\n...\n\nkind 
regards,\nRobert Schrem\n\n\nOn Thursday 06 June 2002 04:57, you wrote:\n> Here is a summary of the responses to my Win32 roadmap. I hope this\n> will allow further discussion.\n>\n> ---------------------------------------------------------------------------\n>\n> INSTALLER\n> ---------\n> Cygwin Setup.exe http://cygwin.com\n> Nullsoft http://www.nullsoft.com/free/nsis/\n>\n> GUI\n> ---\n> pgAdmin2 \n> http://pgadmin.postgresql.org/pgadmin2.php?ContentID=1 pgaccess \n> http://pgaccess.org/\n> Java admin (to be written)\n> Dev-C++ admin (to be written) \n> http://sourceforge.net/projects/dev-cpp/\n>\n> BINARY\n> ------\n>\n>\n> FORK()\n>\n> cygwin fork() http://cygwin.com\n> CreateProcess() and copy global area\n>\n> THREADING\n>\n> Posix threads\n> Gnu pth http://www.gnu.org/software/pth/\n> ST \n> http://state-threads.sourceforge.net/docs/st.html (single-session\n> multi-threading possible)\n> (Posix AIO is possible)\n>\n> IPC\n>\n> Cygwin http://cygwin.com\n> MinGW http://www.mingw.org/\n> ACE \n> http://www.cs.wustl.edu/~schmidt/ACE.html APR \n> http://apr.apache.org/\n> Our own\n>\n> ENVIRONMENT\n>\n> Cygwin http://cygwin.com\n> UnxUtils http://unxutils.sourceforge.net/\n> Write own initdb\n>\n>\n> IMPLEMENTATIONS\n> ---------------\n> PostgreSQLe \n> http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html Dbexperts \n> http://www.dbexperts.net/postgresql Connx \n> http://www.connx.com/\n> gborg \n> http://gborg.postgresql.org/project/winpackage/projdisplay.php Interbase \n> \n> http://community.borland.com/article/0,1410,23217,00.html\n\n", "msg_date": "Thu, 6 Jun 2002 11:54:36 +0200", "msg_from": "Robert Schrem <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian wrote:\n>\n> Let me throw out an idea. I have been mentioning full fork, light\n> fork(copy globals only), and threading as possible solutions.\n>\n> Another idea uses neither threading nor copying. 
It is the old system\n> we used before I removed exec() from our code. We used to pass the\n> database name as an argument to an exec'ed postgres binary that\n> continued with the database connection.\n>\n> We removed the exec, then started moving what we could into the\n> postmaster so each backend didn't need to do the initialization.\n>\n> One solution is to return to that for Win32 only, so instead of doing:\n>\n> initialization()\n> want for connection()\n> fork backend()\n>\n> we do for Win32:\n>\n> want for connection()\n> exec backend()\n> initialization()\n\n Summarizes pretty much what we discussed Monday on the phone.\n Except that the postmaster still has to initialize the shared\n memory and other stuff. It's just that the backends and\n helper processes need to reinitialize themselves (attach).\n\n> It wouldn't be hard to do. We would still do CreateProcess rather than\n> CreateThread, but it eliminates the fork/threading issues. We don't\n> know the database before the connection arrives, so we don't do a whole\n> lot of initialization.\n\n All I see so far is the reading of the postgresql.conf, the\n pg_hba.conf and the password files. Nothing fancy and the\n postmaster could easily write out a binary content only file\n that the backends then read, eliminating the parsing\n overhead.\n\n The bad news is that Tom is right. We did a terrible job in\n using the new side effect, that the shared memory segment is\n at the same address in all forked processes, after removing\n the need to reattach.\n\n In detail the XLog code, the FreeSpaceMap code and the\n \"shared memory\" hashtable code now use pointers, located in\n shared memory. For the XLog and FreeSpace code this is\n understandable, because they were developed under the fork()\n only model. But the dynahash code used offsets only until\n v7.1!\n\n All three (no claim that that's all) make it impossible to\n ever have someone attaching to the shared memory from the\n outside. 
So with these moves we made the shared memory a\n \"Postmaster and children\" only thing. Raises the question,\n why we need an IPC key at all any more.\n\n Anyhow, looks as if I can get that fork() vs. fork()+exec()\n feature done pretty soon. It'll be controlled by another\n Postmaster commandline switch. After cleaning up the mess I\n did to get it working quick, I'll provide a patch for\n discussion.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jun 2002 09:35:14 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "\nAdded to the list. Thanks.\n\n---------------------------------------------------------------------------\n\nRobert Schrem wrote:\n> Hi Bruce,\n> \n> You obviosuly missed my recent posting advertising the homepage \n> of Konstantin Knizhnik?\n> \n> Make sure to have a look: http://www.garret.ru/~knizhnik/\n> \n> You find there -everything- concerning multiplatform IPC,\n> threading and even some extraordinary, complete database \n> backends that are superior to the database backends \n> previously available as open source (including PostgreSQL, \n> I'm afraid...). The licensing of all of this stuff is -public domain-.\n> I think this should really be worth a look/discussion/mentioning.\n> \n> Here an excerpt of my last email, describing the furios list\n> of features abailable in GOODS:\n> \n> Some core features of the GOODS backend (as they come to my mind):\n> -> full ACID transaction support, incl. 
distributed transactions\n> -> Multiple stoarge servers distributed over a TCP/ID network\n> -> multible reader/single writer (MVCC)\n> -> dual client side object cache\n> -> online backup (snapshot backup AND permanent backup)\n> -> nested transactions on object level\n> -> transaction isolation levels on object level\n> -> object level shared and exclusive locks\n> -> excellent C++ programming interface\n> -> WAL\n> -> garbage collection for no longer reference database objects (online VACUUM)\n> -> fully thread safe client interface\n> -> JAVA client API\n> -> very high performance as a result of a lot of fine tuning (better\n> perfomance than berkeley db in my benchmarks!!!)\n> -> asyncrous event notification on object instance modification\n> -> extremly high code quality\n> -> a one person effort, hence a very clean design\n> -> the most relevant platforms are supported out of the box\n> -> complete build is done in less than a minute on my machine\n> -> it's documented\n> -> it's tested and found to be working for a while now\n> ...\n> \n> kind regards,\n> Robert Schrem\n> \n> \n> On Thursday 06 June 2002 04:57, you wrote:\n> > Here is a summary of the responses to my Win32 roadmap. 
I hope this\n> > will allow further discussion.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > INSTALLER\n> > ---------\n> > Cygwin Setup.exe http://cygwin.com\n> > Nullsoft http://www.nullsoft.com/free/nsis/\n> >\n> > GUI\n> > ---\n> > pgAdmin2 \n> > http://pgadmin.postgresql.org/pgadmin2.php?ContentID=1 pgaccess \n> > http://pgaccess.org/\n> > Java admin (to be written)\n> > Dev-C++ admin (to be written) \n> > http://sourceforge.net/projects/dev-cpp/\n> >\n> > BINARY\n> > ------\n> >\n> >\n> > FORK()\n> >\n> > cygwin fork() http://cygwin.com\n> > CreateProcess() and copy global area\n> >\n> > THREADING\n> >\n> > Posix threads\n> > Gnu pth http://www.gnu.org/software/pth/\n> > ST \n> > http://state-threads.sourceforge.net/docs/st.html (single-session\n> > multi-threading possible)\n> > (Posix AIO is possible)\n> >\n> > IPC\n> >\n> > Cygwin http://cygwin.com\n> > MinGW http://www.mingw.org/\n> > ACE \n> > http://www.cs.wustl.edu/~schmidt/ACE.html APR \n> > http://apr.apache.org/\n> > Our own\n> >\n> > ENVIRONMENT\n> >\n> > Cygwin http://cygwin.com\n> > UnxUtils http://unxutils.sourceforge.net/\n> > Write own initdb\n> >\n> >\n> > IMPLEMENTATIONS\n> > ---------------\n> > PostgreSQLe \n> > http://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html Dbexperts \n> > http://www.dbexperts.net/postgresql Connx \n> > http://www.connx.com/\n> > gborg \n> > http://gborg.postgresql.org/project/winpackage/projdisplay.php Interbase \n> > \n> > http://community.borland.com/article/0,1410,23217,00.html\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jun 2002 10:26:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Jan Wieck wrote:\n> > One solution is to return to that for Win32 only, so instead of doing:\n> >\n> > initialization()\n> > want for connection()\n> > fork backend()\n> >\n> > we do for Win32:\n> >\n> > want for connection()\n> > exec backend()\n> > initialization()\n> \n> Summarizes pretty much what we discussed Monday on the phone.\n> Except that the postmaster still has to initialize the shared\n> memory and other stuff. It's just that the backends and\n> helper processes need to reinitialize themself (attach).\n\nYes, obviously I simplified, and I do believe our optimizations are\nhelping on Unix. It is just that I think for Win32 the fork is more\nharmful than removing those optimizations.\n\nOne thing that may not have been clear is that we don't need to play\nwith globals at all. We just pass whatever info we want to the child\nvia command-line arguments, rather than shared memory.\n\n> > It wouldn't be hard to do. We would still do CreateProcess rather than\n> > CreateThread, but it eliminates the fork/threading issues. We don't\n> > know the database before the connection arrives, so we don't do a whole\n> > lot of initialization.\n> \n> All I see so far is the reading of the postgresql.conf, the\n> pg_hba.conf and the password files. Nothing fancy and the\n> postmaster could easily write out a binary content only file\n> that the backends then read, eliminating the parsing\n> overhead.\n\nYes, that is clearly possible. Another option is to just write out a\nno-comments, no-whitespace version of each file and just have the\nbackends read those. The advantage is that we can use the same code to\nread them, and I don't think it would be any slower than a binary file.\n\n> The bad news is that Tom is right. 
We did a terrible job in\n> using the new side effect, that the shared memory segment is\n> at the same address in all forked processes, after removing\n> the need to reattach.\n> \n> In detail the XLog code, the FreeSpaceMap code and the\n> \"shared memory\" hashtable code now use pointers, located in\n> shared memory. For the XLog and FreeSpace code this is\n> understandable, because they where developed under the fork()\n> only model. But the dynahash code used offsets only until\n> v7.1!\n> \n> All three (no claim that that's all) make it impossible to\n> ever have someone attaching to the shared memory from the\n> outside. So with these moves we made the shared memory a\n> \"Postmaster and children\" only thing. Raises the question,\n> why we need an IPC key at all any more.\n\nWell, we could force shmat() to bind to the same address, but I suspect\nthat might fail in some cases.\n\n> Anyhow, looks as if I can get that fork() vs. fork()+exec()\n> feature done pretty soon. It'll be controlled by another\n> Postmaster commandline switch. After cleaning up the mess I\n> did to get it working quick, I'll provide a patch for\n> discussion.\n\nYes, very little impact. We then need someone to do some Win32 timings\nto see if things have improved. As Tom mentioned, we need some hard\nnumbers for these things. In fact, I would like a Win32 test that takes\nour code and compares fork(), then exit(), with CreateProcess(), exit().\nIt doesn't have to create a db session, but I would like to see some\ntimings to know what we are gaining. Heck, time CreateThread too and\nlet's see what that shows.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jun 2002 11:06:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian writes:\n\n> Lots of our code requires a unix shell and utilities. Will we continue\n> using cygwin for this?\n\nWe should probably get rid of using shell scripts for application programs\naltogether, for a number of reasons besides this one, such as the\ninability to properly handle input values with spaces, commas, etc. (we\nprobably don't handle very long values either on some platforms), the\ninability to maintain open database connections so that createlang needs\nto prompt for the same password thrice, general portable scripting\nheadaches, and the lack of internationalization facilities.\n\nI'd even volunteer to do this. Comments?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 6 Jun 2002 19:11:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Lots of our code requires a unix shell and utilities. Will we continue\n> > using cygwin for this?\n> \n> We should probably get rid of using shell scripts for application programs\n> altogether, for a number of reasons besides this one, such as the\n> inability to properly handle input values with spaces, commas, etc. (we\n> probably don't handle very long values either on some platforms), the\n> inability to maintain open database connections so that createlang needs\n> to prompt for the same password thrice, general portable scripting\n> headaches, and the lack of internationalization facilities.\n> \n> I'd even volunteer to do this. Comments?\n\nI know I have discouraged it because I think shell script language has a\ngood toolset for those applications. 
I have fixed all the spacing\nissues.\n\nWhat language were you thinking of using? C?\n\nAlso, it seems Win32 doesn't need these scripts, except initdb. \nPostgreSQLe didn't use them; it just did initdb, and the rest were done\nusing a GUI. However, initdb would remain a problem. PostgreSQLe wrote\nits own.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jun 2002 13:57:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian writes:\n\n> GUI\n> ---\n> pgAdmin2 http://pgadmin.postgresql.org/pgadmin2.php?ContentID=1\n> pgaccess http://pgaccess.org/\n> Java admin (to be written)\n> Dev-C++ admin (to be written) http://sourceforge.net/projects/dev-cpp/\n\nSurely Unix folks would like a GUI as well?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 7 Jun 2002 19:42:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "How about a SOAP interface and a web-based front end that provides the cross\nplatform support? My company's TIBET framework would provide a solid\nfoundation for this kind of admin suite. 
In fact, we're already in the\nplanning stages on doing just that.\n\nss\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <[email protected]>\nTo: \"Bruce Momjian\" <[email protected]>\nCc: \"PostgreSQL-development\" <[email protected]>\nSent: Friday, June 07, 2002 11:42 AM\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\n\n> Bruce Momjian writes:\n>\n> > GUI\n> > ---\n> > pgAdmin2\nhttp://pgadmin.postgresql.org/pgadmin2.php?ContentID=1\n> > pgaccess http://pgaccess.org/\n> > Java admin (to be written)\n> > Dev-C++ admin (to be written)\nhttp://sourceforge.net/projects/dev-cpp/\n>\n> Surely Unix folks would like a GUI as well?\n>\n> --\n> Peter Eisentraut [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Fri, 7 Jun 2002 16:05:58 -0600", "msg_from": "\"Scott Shattuck\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian writes:\n\n> I know I have discouraged it because I think shell script language has a\n> good toolset for those applications. I have fixed all the spacing\n> issues.\n\nMy point is that it is not, for the reasons that I listed. Handling\nspaces is a small part of one of the several problems; there are problems\nwith newlines, tabs, commas, slashes, quotes -- every time you call sed or\nread you lose one character.\n\n> What language where you thinking of using? C?\n\nYes, that way we can share code (pg_dumpall<->pg_dump, initdb<->postgres),\nuse the established internationalization facilities, and use libpq\ndirectly in create* and drop*.\n\n> Also, it seems Win32 doesn't need these scripts, except initdb.\n\nThe utility of these programs is independent of the platform. 
If we think\npg_dumpall is not useful, then let's remove it.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 8 Jun 2002 00:27:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Also, it seems Win32 doesn't need these scripts, except initdb.\n\n> The utility of these programs is independent of the platform. If we think\n> pg_dumpall is not useful, then let's remove it.\n\nI have been seriously considering converting pg_dumpall to C anyway,\nbecause it's already *very* messy, and I don't see any reasonable\nway to make it support dumping per-database and per-user config\nsettings. (Do you really want to try to parse array values in a\nshell script?)\n\n(I'd actually consider making pg_dumpall a part of the pg_dump\nexecutable; then it could invoke pg_dump as a subroutine call...)\n\nIf Peter's got the time/energy to convert 'em all, I'm for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 11:48:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> >> Also, it seems Win32 doesn't need these scripts, except initdb.\n> \n> > The utility of these programs is independent of the platform. If we think\n> > pg_dumpall is not useful, then let's remove it.\n> \n> I have been seriously considering converting pg_dumpall to C anyway,\n> because it's already *very* messy, and I don't see any reasonable\n> way to make it support dumping per-database and per-user config\n> settings. 
(Do you really want to try to parse array values in a\n> shell script?)\n> \n> (I'd actually consider making pg_dumpall a part of the pg_dump\n> executable; then it could invoke pg_dump as a subroutine call...)\n> \n> If Peter's got the time/energy to convert 'em all, I'm for it.\n\nYea, shame it will now take 15 lines of C code to do what we could do in\n1 line of shell script but I don't see any other option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 11:57:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yea, shame it will now take 15 lines of C code to do what we could do in\n> 1 line of shell script but I don't see any other option.\n\nIn places we are using 15 lines of shell to do what would take 1 line\nin C ;-). Yes, it'll probably be bigger overall, but I think you are\noverstating the penalty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 12:05:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port " }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I know I have discouraged it because I think shell script language has a\n> > good toolset for those applications. I have fixed all the spacing\n> > issues.\n> \n> My point is that it is not, for the reasons that I listed. Handling\n> spaces is a small part of one of the several problems, there are problems\n> with newlines, tabs, commas, slashes, quotes -- everytime you call sed or\n> read you lose one character.\n> \n> > What language where you thinking of using? 
C?\n> \n> Yes, that way we can share code (pg_dumpall<->pg_dump, initdb<->postgres),\n> use the established internationalization facilities, and use libpq\n> directly in create* and drop*.\n> \n> > Also, it seems Win32 doesn't need these scripts, except initdb.\n> \n> The utility of these programs is independent of the platform. If we think\n> pg_dumpall is not useful, then let's remove it.\n\nI think the first two targets for C-ification would be pg_dumpall and\ninitdb. The others have SQL equivalents. Maybe pg_ctl too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 17:48:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Scott,\n\nI just started a java admin tool project on sf called\nwww.sf.net/projects/jpgadmin, which should be able to handle web based\ninterfaces, the idea being to seperate the model and view so that we can\nsupport a swing or web interface.\n\nDave\n\n\nOn Fri, 2002-06-07 at 18:05, Scott Shattuck wrote:\n> How about a SOAP interface and a web-based front end that provides the cross\n> platform support? My company's TIBET framework would provide a solid\n> foundation for this kind of admin suite. 
In fact, we're already in the\n> planning stages on doing just that.\n> \n> ss\n> \n> Scott Shattuck\n> Technical Pursuit Inc.\n> \n> \n> ----- Original Message -----\n> From: \"Peter Eisentraut\" <[email protected]>\n> To: \"Bruce Momjian\" <[email protected]>\n> Cc: \"PostgreSQL-development\" <[email protected]>\n> Sent: Friday, June 07, 2002 11:42 AM\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n> \n> \n> > Bruce Momjian writes:\n> >\n> > > GUI\n> > > ---\n> > > pgAdmin2\n> http://pgadmin.postgresql.org/pgadmin2.php?ContentID=1\n> > > pgaccess http://pgaccess.org/\n> > > Java admin (to be written)\n> > > Dev-C++ admin (to be written)\n> http://sourceforge.net/projects/dev-cpp/\n> >\n> > Surely Unix folks would like a GUI as well?\n> >\n> > --\n> > Peter Eisentraut [email protected]\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> \n\n\n\n", "msg_date": "10 Jun 2002 12:21:42 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port cross platform admin tool" }, { "msg_contents": "Bruce Momjian writes:\n\n> I think the first two targets for C-ification would be pg_dumpall and\n> initdb. The others have SQL equivalents. 
Maybe pg_ctl too.\n\nI think eventually pg_ctl should be folded into the postmaster executable.\nThis would remove a great amount of possible misunderstandings between the\ntwo programs.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 17 Jun 2002 23:19:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I think the first two targets for C-ification would be pg_dumpall and\n> > initdb. The others have SQL equivalents. Maybe pg_ctl too.\n> \n> I think eventually pg_ctl should be folded into the postmaster executable.\n> This would remove a great amount of possible misunderstandings between the\n> two programs.\n\nAnd pg_ctl will be run with a symlink to postmaster like postgres,\nright? Makes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 17 Jun 2002 17:31:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Jan Wieck wrote:\n> > And pg_ctl will be run with a symlink to postmaster like postgres,\n> > right? Makes sense.\n> \n> No symlink. Windows doesn't have symlinks, the \"link\" stuff you\n> see is just some file with a special meaning for the Windows\n> explorer. There is absolutely no support built into the OS. 
They\n> really haven't learned alot since the DOS times, when they added\n> \".\" and \"..\" entries to directories to \"look\" similar to UNIX.\n> Actually, they never really understood what a hardlink is in the\n> first place, so why do you expect them to know how to implement\n> symbolic ones?\n> \n> It will be at least another copy of the postmaster (dot exe).\n\nYea, I just liked the idea of the postmaster binary somehow reporting\nthe postmaster status. Seems it is in a better position to do that than\na shell script.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 17 Jun 2002 21:00:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> >\n> > > I think the first two targets for C-ification would be pg_dumpall and\n> > > initdb. The others have SQL equivalents. Maybe pg_ctl too.\n> >\n> > I think eventually pg_ctl should be folded into the postmaster executable.\n> > This would remove a great amount of possible misunderstandings between the\n> > two programs.\n> \n> And pg_ctl will be run with a symlink to postmaster like postgres,\n> right? Makes sense.\n\nNo symlink. Windows doesn't have symlinks, the \"link\" stuff you\nsee is just some file with a special meaning for the Windows\nexplorer. There is absolutely no support built into the OS. 
They\nreally haven't learned a lot since the DOS times, when they added\n\".\" and \"..\" entries to directories to \"look\" similar to UNIX.\nActually, they never really understood what a hardlink is in the\nfirst place, so why do you expect them to know how to implement\nsymbolic ones?\n\nIt will be at least another copy of the postmaster (dot exe).\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\[email protected] #\n", "msg_date": "Mon, 17 Jun 2002 21:05:34 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I think eventually pg_ctl should be folded into the postmaster executable.\n> This would remove a great amount of possible misunderstandings between the\n> two programs.\n\nLike what?\n\nThe thing pg_ctl needs to know is where PGDATA is, and that\nunfortunately isn't going to be known any better just by sharing\nexecutables.\n\nI don't object to C-ifying pg_ctl, but I don't see that it will\nautomatically improve pg_ctl's robustness materially.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Jun 2002 10:52:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > I think eventually pg_ctl should be folded into the postmaster executable.\n> > This would remove a great amount of possible misunderstandings between the\n> > two programs.\n>\n> Like what?\n\nThe biggie is that pg_ctl reports the postmaster to have started\nsuccessfully without ever checking. 
And the \"wait\" option is broken and\nnot trivial to fix.\n\nOther problems are the matching of the port numbers and the requirement\nthat admins should be able to enter a password when the server starts (for\nSSL).\n\nThe luring prerequisite here is that the postmaster would have to be able\nto log directly to a file, which now that all communication is guaranteed\nto go through elog() should be less complicated, at least compared to\nfixing the \"wait\" option. In fact I'm hoping that the Windows porters\nwill run into this same requirement just about pretty soon.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Tue, 18 Jun 2002 23:55:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Like what?\n\n> The biggie is that pg_ctl reports the postmaster to have started\n> successfully without ever checking. And the \"wait\" option is broken and\n> not trivial to fix.\n\nIndeed, but how will it help to merge the two executables into one?\nI don't think you can simply postpone the fork() until all setup\nis complete --- that would mean you don't know the final postmaster\nPID until much too late.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jun 2002 08:45:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Roadmap for a Win32 port " } ]
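As an aside on the startup-verification gap discussed in this thread (pg_ctl reporting success "without ever checking", and the broken "wait" option): a more robust wait would poll until the server actually accepts connections. The sketch below is illustrative only — pg_ctl was a shell script at the time, and the function name and parameters here are invented, not pg_ctl's real interface:

```python
import socket
import time

def wait_for_server(host, port, timeout=10.0, interval=0.5):
    """Poll until something accepts TCP connections on (host, port),
    or give up after `timeout` seconds.  Returns True on success.

    This is the polling idea in isolation; a real pg_ctl -w would also
    need to find the port (and PGDATA) from the configuration."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Connection accepted: the server is up and listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)  # not up yet (refused/timed out); retry
    return False
```

A C implementation would do the same with connect(2) against the postmaster's port, rather than assuming the child started cleanly.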
[ { "msg_contents": "\nIt recently came to my attention that pg_dump dumps 'CREATE SEQUENCE' and\n'SELECT NEXTVAL' commands for both data-only and schema-only output. This\nresults in problems for users who do the two in separate steps, and seems a\nlittle odd.\n\nAlso, I'd be interested to know what the purpose of 'SELECT NEXTVAL' is?\n\nMy inclination is to do the following:\n\n- Issue 'CREATE SEQUENCE...Initial Value 1...' in OID order\n- Issue 'SELECT SETVAL...' at end of data load.\n\nThis means that a schema-only restore will have all sequences set up with\ninitial value = 1, and a data-only restore will have sequences set\n'correctly'.\n\nDoes this sound reasonable?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 28 Sep 2000 19:09:45 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump and sequences - RFC" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> My inclination is to do the following:\n\n> - Issue 'CREATE SEQUENCE...Initial Value 1...' in OID order\n> - Issue 'SELECT SETVAL...' at end of data load.\n\n> This means that a schema-only restore will have all sequences set up with\n> initial value = 1, and a data-only restore will have sequences set\n> 'correctly'.\n\nSeems reasonable, except you should not necessarily use 1; that could\nbe outside the defined range of the sequence object. Use its min_value\ninstead.\n\nIt's too bad the sequence object doesn't save the original starting\nvalue, which is what the schema-only restore REALLY should restore.\nThe min_value is probably close enough for practical purposes ... 
not\nsure that it's worth adding an original_value column just for this.\n(It'd be a simple enough change in terms of the code, but I wonder if\nit might create compatibility problems for applications that look at\nthe contents of sequences.)\n\n\n> Also, I'd be interested to know what the purpose of 'SELECT NEXTVAL' is?\n\nIIRC the point of the nextval() is to ensure that the internal state of\nthe sequence is correct. There's a bool \"is_called\" in the sequence\nthat means something like \"I've been nextval()'d at least once\", and the\nonly clean way to make that become set is to issue a nextval. You can\nwatch the behavior by doing \"select * from sequenceobject\" between\nsequence commands --- it looks like the first nextval() simply sets\nis_called without changing last_value, and then subsequent nextval()s\nincrement last_value. (This peculiar arrangement makes it possible\nto have a starting value equal to MININT, should you want to do so.)\nSo pg_dump needs to make sure it restores the correct setting of both\nfields.\n\nThis is pretty grotty because it looks like there's no way to clear\nis_called again, short of dropping and recreating the sequence.\nSo unless you want to do that always, a data-only restore couldn't\nguarantee to restore the state of a virgin sequence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Sep 2000 10:36:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> OK. Given the discussion of 'select nextval', do you know if 'select\n> setval' will set the is_called flag?\n\nLooks like it does, both by experiment and by reading the code.\nSo if you issue a setval() you don't need a nextval() as well.\n\nHowever you still have the problem that you can't recreate the\nstate of a virgin (never-nextval'd) sequence this way. 
The\nexisting pg_dump code is correct, in that it will reproduce the\nstate of a sequence whether virgin or not. A data-only reload\nwould fail to make that guarantee unless you drop and recreate\nthe sequence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Sep 2000 11:01:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 11:01 28/09/00 -0400, Tom Lane wrote:\n>> A data-only reload\n>> would fail to make that guarantee unless you drop and recreate\n>> the sequence.\n\n> Will this cause problems in an existing database because the sequence OID\n> changes?\n\nHmm, good point. There isn't any real easy way to refer to a sequence\nby OID --- the sequence functions only accept names --- but I suppose\nsomeone out there might be doing something with sequence OIDs.\n\nPerhaps the real answer is to extend the set of sequence functions so\nthat it's possible to set/clear is_called directly. Perhaps a variant\nsetval() with an additional, boolean argument?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Sep 2000 11:17:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> This would be something I'd like to do as a learning exercise. However,\n> aren't we 2 days from beta? Is this enough time to learn how to add a\n> function to the backend?\n\nIn practice, you've probably got a week. 
I believe Marc is planning to\nbe out of town for a week starting tomorrow, and he's not going to be\npushing out a beta till he gets back.\n\n(Besides, I'm not quite done with subselect-in-FROM ;-))\n\nI'd recommend going for the function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Sep 2000 11:29:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "At 10:36 28/09/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> My inclinations is do do the following:\n>\n>> - Issue 'CREATE SEQUENCE...Initial Value 1...' in OID order\n>> - Issue 'SELECT SETVAL...' at end of data load.\n>\n>Seems reasonable, except you should not necessarily use 1; that could\n>be outside the defined range of the sequence object. Use its min_value\n>instead.\n\nOK. Given the discussion of 'select nextval', do you know if 'select\nsetval' will set the is_called flag? If not should I:\n\n\nIssue 'CREATE SEQUENCE...Initial Value <MINVAL>...' in OID order\n\nif (is_called was set AND we've loaded any data) then\n\n Issue 'SELECT NEXTVAL...' at end of data load, and *before* setval.\n Issue 'SELECT SETVAL...' at end of data load.\n\nendif\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 29 Sep 2000 01:52:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "At 11:01 28/09/00 -0400, Tom Lane wrote:\n>A data-only reload\n>would fail to make that guarantee unless you drop and recreate\n>the sequence.\n\nWill this cause problems in an existing database because the sequence OID\nchanges?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 29 Sep 2000 02:06:50 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and sequences - RFC " }, { "msg_contents": "At 11:17 28/09/00 -0400, Tom Lane wrote:\n>\n>Hmm, good point. There isn't any real easy way to refer to a sequence\n>by OID --- the sequence functions only accept names --- but I suppose\n>someone out there might be doing something with sequence OIDs.\n\nSo long as the backend & metadata don't rely on the OID, then it's 99.9%\nsafe, I'd guess. I'd be happy to go with this, and do a function later\nif/when necessary (see below).\n\n\n>Perhaps the real answer is to extend the set of sequence functions so\n>that it's possible to set/clear is_called directly. Perhaps a variant\n>setval() with an additional, boolean argument?\n\nThis would be something I'd like to do as a learning exercise. However,\naren't we 2 days from beta? 
Is this enough time to learn how to add a\nfunction to the backend?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 29 Sep 2000 02:25:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and sequences - RFC " } ]
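The last_value/is_called mechanics Tom describes in this thread — the first nextval() sets is_called without changing last_value, later calls increment it, and setval() marks is_called — along with his proposed setval() variant taking a boolean argument, can be modeled in a short self-contained sketch. This is a toy simulation for illustration only, not PostgreSQL's actual C implementation; the class and parameter names are made up:

```python
class ToySequence:
    """Toy model of the sequence-state behavior described above."""

    def __init__(self, min_value=1):
        self.min_value = min_value
        self.last_value = min_value
        self.is_called = False  # True once nextval() has run at least once

    def nextval(self):
        # First call returns last_value unchanged and only sets is_called;
        # subsequent calls increment last_value.  This is what makes a
        # starting value equal to MININT possible.
        if not self.is_called:
            self.is_called = True
        else:
            self.last_value += 1
        return self.last_value

    def setval(self, value, is_called=True):
        # Models SELECT setval(), including the proposed variant with a
        # boolean argument: with is_called=False the next nextval() returns
        # `value` itself, reproducing a virgin sequence's state.
        self.last_value = value
        self.is_called = is_called
        return value
```

With this model, a data-only restore that issues setval(value, False) would reproduce even a never-nextval'd sequence without dropping and recreating it — the gap identified above.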