[ { "msg_contents": "On Thu, 30 Mar 2000, Thomas Lockhart wrote:\n\n> I feel a strong contributing factor to this lack of readiness for\n> release is the continuing problem with mailing list turnaround (at\n> least I'm still seeing it; there is some possibility it is a local\n> problem of mine, but...).\n\nMailing list turnaround? You mean where it takes seemingly forever\nto get anything you posted to finally return? Nope, not local to \nyou.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 30 Mar 2000 15:21:49 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Release schedule (was Re: Improvement in SET\n commandsyntax)" } ]
[ { "msg_contents": "On Thu, 30 Mar 2000, The Hermit Hacker wrote:\n\n> I think it is ... I'm noticing pretty instant messages coming to me\n> ... and being posted to the newsgroups ...\n\nEdit the mailing list file and put your address at the bottom, unless \nit's presorting and delivering to hub first, then use an external \naddress. The sendmail/majordomo combination is one of the contributing\nfactors to me dumping both altogether a number of years ago.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 30 Mar 2000 15:28:47 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Release schedule (was Re: Improvement in SET\n commandsyntax)" } ]
[ { "msg_contents": "I've just committed a bunch o' patches for the docs. Changes include:\n\no a chapter on index cost estimating (indexcost.sgml) from Tom Lane.\no a chapter on the PL/perl language from Mark Hollomon.\no a chapter on queries and EXPLAIN from Tom Lane.\no lots of other bits and pieces.\n\nOne change was to separate the docs for PL/SQL, PL/TCL, and PL/perl\ninto separate chapters, moving them to the User's Guide, and moving\nthe \"How to interface a language\" to the Programmer's Guide. istm that\nthese easy to use programming languages come close to being a\nuser-accessible feature (hence placing them in the UG), much more so\nthan, say, libpq. Comments?\n\n - Thomas\n\n(sent at Thu Mar 30 22:41 UTC 2000)\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Mar 2000 22:54:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Docs refreshed" }, { "msg_contents": "> (sent at Thu Mar 30 22:41 UTC 2000)\nThu Mar 30 23:20 UTC 2000\n\nSo I'm seeing a current round trip of ~40 minutes. My recollection is\nthat I would see round trips of ~5 minutes in the good old days.\n\nbtw, I see fast turnaround for the committer's list.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Mar 2000 23:36:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Mailing list round trip time" } ]
[ { "msg_contents": "> > 2) maintain a running list of porting problems. I've seen patches\n> > submitted, and some patches applied, but *no* complete reports of \"OK,\n> > it works now\" afterwards (I may be overstating this a bit, but you get\n> > my point ;). But that contributes to...\n> \n> Practically all of the currently-outstanding reports have to do with\n> updating regress test expected results. I made some progress on that\n> last night but there are more to do. Our standards seem to be higher\n> now than they used to be --- people are expecting all the tests to\n> pass with no noise. So far, I don't think that anyone has reported\n> regress test failures that actually suggest a porting problem.\n> \n> We do have a major problem with portability of plperl. I have yet\n> to build a working version of it at all; in fact I haven't heard\n> confirmation that it works from *anyone* other than the author.\n> Has anyone else tried to use it?\n\nYes, I am inclined to disable the build for 7.0 final unless it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 30 Mar 2000 18:45:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Release schedule (was Re: Improvement in SET commandsyntax)" } ]
[ { "msg_contents": "At 11:10 AM 3/30/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> This is an example where commercial systems that have indices\n>> synchronized with data such that queries referencing only the\n>> fields in indices can win big vs. PG in SOME (not all) cases.\n>> In particular, when the indices are to a table that has a bunch\n>> of other, perhaps long, columns. PG has to read the table and\n>> drag all that dead weight around to do RI referential checking\n>> and semantic actions.\n>\n>Keep in mind, though, that once we have TOAST the long columns are\n>likely to get pushed out to a secondary table, so that the amount\n>of data you have to read is reduced (as long as you don't touch\n>any of the long columns, of course).\n\nSure...and you can BLOB or CLOB longer data in Oracle, too. TOASTing\nisn't without costs, either...life's a tradeoff!\n\n>\n>The main reason that Postgres indexes can't be used without also\n>consulting the main table is that we do not store transaction status\n>information in index entries, only in real tuples. After finding\n>an index entry we must still consult the referenced tuple to see\n>if it's been deleted, or even committed yet. I believe this is a\n>pretty good tradeoff.\n\nI must wonder, though, given that proper syncing seems to be the\nnorm in commercial systems. Or so I'm lead to believe when Gray's\nbook, for instance. Or a good book on speeding up Oracle queries.\n\nWhatever ... in this particular case - referential integrity \nwith MATCH <unspecified> and MATCH PARTIAL and multi-column\nforeign keys - performance will likely drop spectacularly once the\nleading column is NULL, while (say) with Oracle you'd expect much\nless of a performance hit. \n\nThe point of my note is that this is probably worth documenting.\n\nDon't get me wrong, these semantics and RI and multi-column keys\nappear to be pretty inefficient by nature, I don't think anyone\nis likely to be horrified to read that it might well be even worse\nin PG than in certain commercial systems... \n\n>I suppose that keeping tuple status in index entries could be a win\n>on nearly-read-only tables, but I think that on average it'd be\n>a performance loser.\n\nWell...I've personally not studied the issue in detail, but just\nhave to wonder if the folks at Oracle are really as stupid as the\nabove analysis would make them appear to be. I presume that they\nhave a pretty good idea of the kind of mix large database installations\nmake, and presumably make choices designed to win on average.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 30 Mar 2000 19:00:05 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow join on postgresql6.5 " }, { "msg_contents": "At 12:49 PM 3/30/00 -0500, Bernard Adrian Frankpitt wrote:\n\n>Implementing the change is probably not as bad as it seems, access\n>methods are beautifully self-contained in Postgres. The changes are\n>quite fundamental though, and you sure would want to get it right. You\n>would also want to retain the current behavior for users who do lots of\n>updates.\n\nAgain, if you read my post carefully I was simply pointing out that\nRI semantics will degrade rapidly (for some tables) once the index\nis unusable because a full table scan will be necessary (where at\nworst Oracle, say, would read the entire index to get the necessary\ninformation, which for some tables won't be nearly as bad as going\nto the actual table). I was trying to point out that the degradation\nmay be more spectacular than, say, with Oracle because PG always\nneeds to go to the table rather than simply use the data in the\nindex.\n\nAnd that it is worth documenting. And that after I do MATCH PARTIAL\nsemantics for 7.1 I'll try to find time to do some documenting of\nRI, including information on \"gotchas\" such as this one.\n\n(people are likely to be mystified if setting the first column of\na foreign key to NULL makes PG go away for a long time while \nsetting it to a non-NULL value doesn't, which is exactly what will\nhappen for SOME large tables if MATCH PARTIAL and MATCH <unspecified>\nis used)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 30 Mar 2000 19:08:56 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow join on postgresql6.5" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf\n> Of Don Baccus\n> \n> Whatever ... in this particular case - referential integrity \n> with MATCH <unspecified> and MATCH PARTIAL and multi-column\n> foreign keys - performance will likely drop spectacularly once the\n> leading column is NULL, while (say) with Oracle you'd expect much\n> less of a performance hit. \n>\n\nAs for NULL,it seems possible to look up NULL keys in a btree index\nbecause NULL == NULL for btree indexes.\nI've wondered why PostgreSQL's planner/executor never looks up\nindexes for queries using 'IS NULL'.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Fri, 31 Mar 2000 19:05:49 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: slow join on postgresql6.5 " }, { "msg_contents": "sometimes ago...\ni posted a question.\n\"How to store data in postgresql, query data.. and produce accessible data\nfor gnuplot'in a graph\"\n\nThis is the first hack:\n1) select data into temp-table from table where x.. y.. z..\n2) copy temp-table into file.txt using delimiters ' '\n\nThis way create some problems...\na) to update temp-table with latest data.. i must drop this table before.\n\nWhy don't do directly \"copy\" on view?\nThis could be very usefull... than \"copy\" only on table (or temp-table).. a\n\"copy\" on a view-data.\n\nLet's me know about...\n\nDaniele Medri\n\n", "msg_date": "Fri, 31 Mar 2000 12:34:39 +0200", "msg_from": "\"Daniele Medri\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ...copy hack" }, { "msg_contents": "At 07:05 PM 3/31/00 +0900, Hiroshi Inoue wrote:\n>> -----Original Message-----\n>> From: [email protected] [mailto:[email protected]]On Behalf\n>> Of Don Baccus\n>> \n>> Whatever ... in this particular case - referential integrity \n>> with MATCH <unspecified> and MATCH PARTIAL and multi-column\n>> foreign keys - performance will likely drop spectacularly once the\n>> leading column is NULL, while (say) with Oracle you'd expect much\n>> less of a performance hit. \n>>\n>\n>As for NULL,it seems possible to look up NULL keys in a btree index\n>because NULL == NULL for btree indexes.\n>I've wondered why PostgreSQL's planner/executor never looks up\n>indexes for queries using 'IS NULL'.\n\nUnfortunately for the RI MATCH PARTIAL case, NULL is a \"wildcard\".\n\nThis doesn't affect the validity of your observation in the general\ncase, though.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 31 Mar 2000 06:33:49 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "RE: slow join on postgresql6.5 " }, { "msg_contents": "> -----Original Message-----\n> From: Don Baccus [mailto:[email protected]]\n> Sent: Friday, March 31, 2000 11:34 PM\n>\n> At 07:05 PM 3/31/00 +0900, Hiroshi Inoue wrote:\n> >> -----Original Message-----\n> >> From: [email protected] [mailto:[email protected]]On Behalf\n> >> Of Don Baccus\n> >>\n> >> Whatever ... in this particular case - referential integrity\n> >> with MATCH <unspecified> and MATCH PARTIAL and multi-column\n> >> foreign keys - performance will likely drop spectacularly once the\n> >> leading column is NULL, while (say) with Oracle you'd expect much\n> >> less of a performance hit.\n> >>\n> >\n> >As for NULL,it seems possible to look up NULL keys in a btree index\n> >because NULL == NULL for btree indexes.\n> >I've wondered why PostgreSQL's planner/executor never looks up\n> >indexes for queries using 'IS NULL'.\n>\n> Unfortunately for the RI MATCH PARTIAL case, NULL is a \"wildcard\".\n>\n\nOops I misunderstood NULL.\n\nHmm,is the following TODO worth the work ?\n* Use index to restrict rows returned by multi-key index when used with\n non-consecutive keys or OR clauses, so fewer heap accesses.\n\nProbably this is for the case like\n\tSELECT .. FROM .. WHERE key1 = val1 and key3 = val3;\n,where (key1,key2,key3) is a multi-column index.\nCurrently index scan doesn't take 'key3=val3' into account because\n(key1,key3) isn't consecutive.\nThe TODO may include the case\n\tSELECT .. FROM .. WHERE key2 = val2;\nThough we have to scan the index entirely,access to the main table\nis needed only when key2 = val2. If (key2 = val2) is sufficiently\nrestrictive,\nthe scan would be faster than simple sequential scan.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Mon, 3 Apr 2000 00:16:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: slow join on postgresql6.5 " }, { "msg_contents": "> > >As for NULL,it seems possible to look up NULL keys in a btree index\n> > >because NULL == NULL for btree indexes.\n> > >I've wondered why PostgreSQL's planner/executor never looks up\n> > >indexes for queries using 'IS NULL'.\n> >\n> > Unfortunately for the RI MATCH PARTIAL case, NULL is a \"wildcard\".\n> >\n> \n> Oops I misunderstood NULL.\n> \n> Hmm,is the following TODO worth the work ?\n> * Use index to restrict rows returned by multi-key index when used with\n> non-consecutive keys or OR clauses, so fewer heap accesses.\n\nThis is a Vadim item.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Apr 2000 12:36:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow join on postgresql6.5" } ]
[ { "msg_contents": "I have committed fixes for all the so-far-reported regression test\ndiscrepancies, with the exception that some of the 'geometry' expected\nfiles look to be out of date and I have no proposed updates. Would\nthose folks who reported problems check again, using either a fresh CVS\nupdate or a snapshot dated later than this message?\n\nI also hacked libpq++'s Makefile.in so that it picks up CXXFLAGS\nfrom the template file or configure calculations, rather than\nautomatically absorbing all of CFLAGS into CXXFLAGS. This should\nmake things better on platforms where the C++ compiler doesn't like\nthe same switch set as the C compiler. However, we might now need\nto take steps to add stuff back into CXXFLAGS on some platforms.\nPlease check this if you've had trouble building libpq++ in the past.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Mar 2000 01:01:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Regress test updates: status report" }, { "msg_contents": "Tom Lane wrote:\n> \n> I have committed fixes for all the so-far-reported regression test\n> discrepancies, with the exception that some of the 'geometry' expected\n> files look to be out of date and I have no proposed updates. Would\n> those folks who reported problems check again, using either a fresh CVS\n> update or a snapshot dated later than this message?\n\nBuilt CURRENT (as of 9:00AM EST). No failures on regression at all.\nRedHat 6.1 (linux 2.2.12, glibc 2.1.2), Intel Pentium III/600.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 31 Mar 2000 09:30:44 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regress test updates: status report" } ]
[ { "msg_contents": "On Thu, Mar 30, 2000 at 01:48:27PM -0600, Ross J. Reedstrom wrote:\n> > testtime=> select date_part('day', '3-26-2000'::timestamp-'3-6-2000'::timestamp) as days;\n> > 20\n> > testtime=> select date_part('day', '3-27-2000'::timestamp-'3-6-2000'::timestamp) as days;\n> > 20\n> Hmm, I happen to have a 6.5.0 system sitting here: It works there, so I suspect\n> something with your local operating system config. Are you running LOCALE enabled?\n> Since the same version works on my system, others reports of higher versions working\n> for them probably don't mean much.\n> Ross\n\nnow, this is weird.\n\nno idea if I have LOCALE enabled, I don't use it that's for sure.\n\nanyone?\n\ntinus.\n\n(I'll try upgrading anyhow)\n", "msg_date": "Fri, 31 Mar 2000 10:21:34 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: 6.5.0 datetime bug?" }, { "msg_contents": "> > Hmm, I happen to have a 6.5.0 system sitting here: It works there, so I suspect\n> > something with your local operating system config.\n> now, this is weird.\n\nI should have asked originally: what time zone are you running in?\n>From your mailing address I'll bet that you are on the other side of\nGMT from where I run my tests:\n\npostgres=# set time zone 'Europe/Amsterdam';\nSET VARIABLE\npostgres=# select date_part('day',\n'3-27-2000'::timestamp-'3-6-2000'::timestamp) as days;\n days \n------\n 20\n(1 row)\n\nOK, I see the problem in current sources :(\n\nThanks for pursuing this; I'll take a look at it.\n\nbtw, if we were to add some \"other side of GMT\" time zone testing to\nour regression test, what time zone would be the most likely to be\nuniversally supported? We know that PST8PDT works pretty well, but I'm\nnot sure of the best candidate for the other side...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 31 Mar 2000 14:59:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.0 datetime bug?" }, { "msg_contents": "> > Hmm, I happen to have a 6.5.0 system sitting here: It works there, so I suspect\n> > something with your local operating system config.\n> anyone?\n\nIt turns out to be a problem in the local country config :)\n\nWhy does the Netherlands (or at least my RH5.2 timezone database)\nthink you switch to DST on March 26? The date_part() function was just\nmasking the problem:\n\npostgres=# select '3-27-2000'::timestamp-'3-6-2000'::timestamp;\n ?column? \n----------\n 20 23:00\n(1 row)\n\npostgres=# select '3-26-2000'::timestamp-'3-6-2000'::timestamp;\n ?column? \n----------\n 20 00:00\n(1 row)\n\nWhen you do the date arithmetic, you are automatically calculating an\n*absolute* time difference which can be affected by DST boundaries.\n\nFor some reason, we don't have a date_part() available for the date\ndata type, which would have been my suggested workaround. We'd flame\nthe implementer, but that's me so I'll be nice :(\n\nIt is probably too late to get this added for v7.0, though I might be\nable to add the code to the backend so it could be a (very) small\nCREATE FUNCTION operation to get it usable for 7.0. Will look at it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 31 Mar 2000 15:37:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.0 datetime bug?" }, { "msg_contents": "\n\nThomas Lockhart wrote:\n> \n> Why does the Netherlands (or at least my RH5.2 timezone database)\n> think you switch to DST on March 26? The date_part() function was just\n\nHmmmm, maybe because we actually switched on march 26? In fact, whole of\neurope did\nAFAIK....\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\n\"Sevilla\" Building\nEntrada 308\n1096 ED Amsterdam, The Netherlands\ntel: +31 20 6601000 (direct: +31 20 6601066)\nfax: +31 20 6601005\nhttp://www.tibcofinance.com\n", "msg_date": "Fri, 31 Mar 2000 17:38:03 +0200", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.0 datetime bug?" }, { "msg_contents": "> > Why does the Netherlands (or at least my RH5.2 timezone database)\n> > think you switch to DST on March 26?\n> Hmmmm, maybe because we actually switched on march 26? In fact, whole of\n> europe did AFAIK....\n\nHow quaint ;) The US switches this next weekend, which pushes it into\nApril. So it didn't occur to me that it was a DST issue at first.\n\nAnd, I got off on the wrong track suggesting a solution. Having a\ndate_part() which works on dates explicitly doesn't really address the\nissue, since you are trying to do the date_part() on a time interval,\nnot on an absolute date. And the time interval probably *should* keep\ntrack of hours etc. \n\nHowever, we *do* have an explicit subtraction operator for dates,\nwhich returns a difference in days, which may be what you want:\n\npostgres=# select '3-27-2000'::date-'3-6-2000'::date as days;\n days \n------\n 21\n(1 row)\n\nOr, force the type of the timestamp field to be date:\n\npostgres=# select\ndate('3-27-2000'::timestamp)-date('3-6-2000'::timestamp) as days;\n days \n------\n 21\n(1 row)\n\nAnd, if you still want to do the arithmetic using timestamps, you can\nforce the evaluation of the input into the *same* timezone, as in this\nexample:\n\npostgres=# select date_part('day',\n '3-27-2000 CET'::timestamp-'3-6-2000 CET'::timestamp) as days;\n days \n------\n 21\n(1 row)\n\nI'm no longer thinking that an explicit date_part() for date or time\ntypes will be useful.\n\nHTH\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 31 Mar 2000 16:54:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.0 datetime bug?" } ]
[ { "msg_contents": " Hi all!\n\n Can anybody help me with it?\n\nUsed FreeBSD 3.3-STABLE, PostgreSQL7.0beta2( and 6.5.О©╫ )\ncompiled with options: --enable-locale --enable-recode \n--enable-multibyte=KOI8 --with-odbc --with-CC=gcc --with-CXX=gcc\n--with-perl --with-tcl --with-maxbackends=5 --with-include=/usr/local/include\n--with-tclconfig=/usr/local/lib/tcl8.0 --with-tkconfig=/usr/local/lib/tk8.0\n\nsklad=> create table aaa ( id int4, hmm int4 );\nCREATE\nsklad=> \\d aaa\n Table \"aaa\"\n Attribute | Type | Modifier\n-----------+---------+----------\n id | integer |\n hmm | integer |\n\nsklad=> create table bbb ( id int4, hmm2 int4 );\nCREATE\nsklad=> \\d bbb\n Table \"bbb\"\n Attribute | Type | Modifier\n-----------+---------+----------\n id | integer |\n hmm2 | integer |\n\nsklad=> create function proc_del_aaa ( int4 ) returns opaque as ' begin\ndelete from bbb where id = $1; end; ' language 'plpgsql';\nCREATE\n\nsklad=> create trigger trig_del_aaa after delete on aaa FOR EACH ROW\nEXECUTE PROCEDURE proc_del_aaa ( id );\nERROR: CreateTrigger: function proc_del_aaa() does not exist\n\nsklad=> create trigger trig_del_aaa after delete on aaa FOR EACH ROW\nEXECUTE PROCEDURE proc_del_aaa ( 'id' );\nERROR: CreateTrigger: function proc_del_aaa() does not exist\n\nsklad=> drop function proc_del_aaa( int4 );\nDROP\n\nsklad=>\n\n\n------------------------------------------------------+-----------------------+\n... One child is not enough, but two are far too many.| FreeBSD\t |\n\t\t\t\t\t\t | The power to serve! |\n\tMikheev Sergey <[email protected]>\t |http://www.FreeBSD.org/|\n\t\t\t\t\t\t +=======================+\n\n", "msg_date": "Fri, 31 Mar 2000 13:17:50 +0400 (MSD)", "msg_from": "\"Sergey V. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postresql & triggers" } ]
[ { "msg_contents": "The publisher says it is fine to use the tables from Chapter 9 of my\nbook in the PostgreSQL docs. I recommend the function and operator\ntables specifically.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 06:41:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Function tables from my book" } ]
[ { "msg_contents": "> This is the first hack:\n> 1) select data into temp-table from table where x.. y.. z..\n> 2) copy temp-table into file.txt using delimiters ' '\n\nI have to say once again that it is not COPYs task to provide the data to\nyou nor is it the server's task to format the output for you. For that we\nhave SELECT and client interfaces, respectively. If you insist on using\nCOPY you'll not get far.\n\n---\n\nso...\nfrom your aswer i post another question:\nI have data on a pg db.. i want a text-file with this format for gnuplot...\n\n1 2\n1 3\n1 5\n3 5\n\nfor 2d plotting and..\n\n1 2 3\n3 4 6\n3 6 8\n3 5 7\n\nfor 3d plottting.\n\nI know that with a view i can obtain a right data visualizzation...\nI know that with copy i can put \"data\" from table to file...\nI would use \"copy\" to put view data into a file... a raw copy.. but you've\nyour own opinion.. Ok.\nHow do you do to put data into my text file without outer language or bash\nscripting?\n..i mean using only pg tools?\n\nDaniele\n\n", "msg_date": "Fri, 31 Mar 2000 13:46:57 +0200", "msg_from": "\"Daniele Medri\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ...copy hack" } ]
[ { "msg_contents": "> so...\n> from your aswer i post another question:\n> I have data on a pg db.. i want a text-file with this format \n> for gnuplot...\n> \n> 1 2\n> 1 3\n> 1 5\n> 3 5\n> \n> for 2d plotting and..\n> \n> 1 2 3\n> 3 4 6\n> 3 6 8\n> 3 5 7\n> \n> for 3d plottting.\n> \n> I know that with a view i can obtain a right data visualizzation...\n> I know that with copy i can put \"data\" from table to file...\n> I would use \"copy\" to put view data into a file... a raw \n> copy.. but you've\n> your own opinion.. Ok.\n> How do you do to put data into my text file without outer \n> language or bash\n> scripting?\n> ..i mean using only pg tools?\n\npsql yourdatabase -F\" \" -A -t -c \"SELECT field1,field2 FROM yourview\" -o\nouputfile\n\nShould do what you want, I think. (Assuming that it spaces between the\nfields, and not tabs - you will need to change what's after -F if it's\ndifferent)\n\n-F sets field separator.\n-A sets \"unaligned mode\"\n-t turns off header and footer\n-c sets query to run\n-o sets file to write output to (if not set, you get it on stdout)\n\n\nThere is a *lot* of functionality in the psql frontend :-)\n\n//Magnus\n", "msg_date": "Fri, 31 Mar 2000 14:43:08 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: ...copy hack" } ]
[ { "msg_contents": "Hello,\n\nI have tested the beta3 on WinNT and here are the results:\n- I was unable to compile ecpg due to the \":=\" instead of \"=\" in defining\nLIBPQDIR and some other variables in Makefile.global.in\n- pg_id (and also pg_encoding) executable was not removed during \"make\nclean\" - there was no $(X) appended to the executable name for rm\n- I have added result for int2, int4, float8 and geometry regression tests\n - int2, int2 - yet another message for too large numbers ;-)\n - float8 - it is problably a bug in the newlib C library - it has no\nerror message for numbers with exponent -400\n - geometry - differences in precision of float numbers\n- I have added appropriate lines into resultmap file\n- I have modified the script regress.sh to use \"case\" statement when testing\nthe hostname. For cygwin the script is called with \"i686-pc-cygwin\" (on my\nmachine) as a parameter and this was not catched with the \"if\" statement.\nThe check was done for PORTNAME (win) and not HOSTNAME (i.86-pc-cygwin*).\n\nThe patch for described modifications is included.\n\nAll this modifications can be applied to \"current\" tree too.\n\nThe compilation was done on CygwinB20.1 with gcc 2.95, cygipc library 1.05.\nThe binaries were able to run also on the newest development snapshot\n(2000-03-25).\n\n\t\t\tDan", "msg_date": "Fri, 31 Mar 2000 15:51:14 +0200", "msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Call for platform reports" }, { "msg_contents": "Applied. Thanks.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello,\n> \n> I have tested the beta3 on WinNT and here are the results:\n> - I was unable to compile ecpg due to the \":=\" instead of \"=\" in defining\n> LIBPQDIR and some other variables in Makefile.global.in\n> - pg_id (and also pg_encoding) executable was not removed during \"make\n> clean\" - there was no $(X) appended to the executable name for rm\n> - I have added result for int2, int4, float8 and geometry regression tests\n> - int2, int2 - yet another message for too large numbers ;-)\n> - float8 - it is problably a bug in the newlib C library - it has no\n> error message for numbers with exponent -400\n> - geometry - differences in precision of float numbers\n> - I have added appropriate lines into resultmap file\n> - I have modified the script regress.sh to use \"case\" statement when testing\n> the hostname. For cygwin the script is called with \"i686-pc-cygwin\" (on my\n> machine) as a parameter and this was not catched with the \"if\" statement.\n> The check was done for PORTNAME (win) and not HOSTNAME (i.86-pc-cygwin*).\n> \n> The patch for described modifications is included.\n> \n> All this modifications can be applied to \"current\" tree too.\n> \n> The compilation was done on CygwinB20.1 with gcc 2.95, cygipc library 1.05.\n> The binaries were able to run also on the newest development snapshot\n> (2000-03-25).\n> \n> \t\t\tDan\n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 09:13:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platform reports" } ]
[ { "msg_contents": "Heads up to packagers:\n\nsrc/bin/pgaccess/pgaccess.sh has been changed in CURRENT CVS to use\nhardcoded PATH_TO_WISH and PGACCESS_HOME, rather than using __wish__ and\n__POSTGRESDIR__.\n\nBruce??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 31 Mar 2000 08:55:08 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "pgAccess change" }, { "msg_contents": "> Heads up to packagers:\n> \n> src/bin/pgaccess/pgaccess.sh has been changed in CURRENT CVS to use\n> hardcoded PATH_TO_WISH and PGACCESS_HOME, rather than using __wish__ and\n> __POSTGRESDIR__.\n> \n> Bruce??\n\nHave I mentioned how much I hate installing pgaccess from a tarball\ndirectly into our tree, and not knowing what is new about it. Let me\nmention that again... :-)\n\nFixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 09:05:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgAccess change" }, { "msg_contents": "Bruce Momjian wrote:\n \n> > Heads up to packagers:\n\n> > src/bin/pgaccess/pgaccess.sh has been changed in CURRENT CVS to use\n> > hardcoded PATH_TO_WISH and PGACCESS_HOME, rather than using __wish__ and\n> > __POSTGRESDIR__.\n\n> > Bruce??\n \n> Have I mentioned how much I hate installing pgaccess from a tarball\n> directly into our tree, and not knowing what is new about it. Let me\n> mention that again... :-)\n\nYes, more than once. Good to have the update; not good to have the\nhard-coded stuff.\n \n> Fixed.\n\nThanks. I'm building a test RPM here so I can satisfy Tom's request for\nregress results on CURRENT. My rpm patchset barfed on pgaccess -- so I\ninvestigated. 
I fixed it in my local tree, but wanted to alert folk.\n\nIt's nice to have a machine now that will build the RPM in less than\nfive minutes :-)....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 31 Mar 2000 09:23:01 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgAccess change" }, { "msg_contents": "> Yes, more than once. Good to have the update; not good to have the\n> hard-coded stuff.\n> \n> > Fixed.\n> \n> Thanks. I'm building a test RPM here so I can satisfy Tom's request for\n> regress results on CURRENT. My rpm patchset barfed on pgaccess -- so I\n> investigated. I fixed it in my local tree, but wanted to alert folk.\n> \n> It's nice to have a machine now that will build the RPM in less than\n> five minutes :-)....\n\nGlad you found it. That was the one file I had to modify to get it to\nmatch our old version, and obviously I messed that up.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 09:27:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgAccess change" }, { "msg_contents": "Bruce Momjian wrote:\n> > Thanks. I'm building a test RPM here so I can satisfy Tom's request for\n> > regress results on CURRENT. My rpm patchset barfed on pgaccess -- so I\n> > investigated. I fixed it in my local tree, but wanted to alert folk.\n \n> Glad you found it. 
That was the one file I had to modify to get it to\n> match our old version, and obviously I messed that up.\n\nI'm going to take this opportunity to thank Marc for having a cvsweb\ninterface -- as I was able to see when the difference was introduced\nusing that interface.\n\nGiven the number of things you patch, Bruce, it shouldn't surprise\nanyone that an occasional error is introduced -- that's why it's good\nto have many sets of eyes looking at the code. I can only have\nnightmares about how many errors I could cause if I were applying\npatches at that rate... :-)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 31 Mar 2000 09:36:30 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgAccess change" }, { "msg_contents": "> Bruce Momjian wrote:\n> > > Thanks. I'm building a test RPM here so I can satisfy Tom's request for\n> > > regress results on CURRENT. My rpm patchset barfed on pgaccess -- so I\n> > > investigated. I fixed it in my local tree, but wanted to alert folk.\n> \n> > Glad you found it. That was the one file I had to modify to get it to\n> > match our old version, and obviously I messed that up.\n> \n> I'm going to take this opportunity to thank Marc for having a cvsweb\n> interface -- as I was able to see when the difference was introduced\n> using that interface.\n> \n> Given the number of things you patch, Bruce, it shouldn't surprise\n> anyone that an occasional error is introduced -- that's why it's good\n> to have many sets of eyes looking at the code. I can only have\n> nightmares about how many errors I could cause if I were applying\n> patches at that rate... :-)\n\nThe problem with pgaccess is that it is not a patch I can eyeball. It is a\nstand-alone tar file that writes over our files. 
I am never sure what\nis new or old until I see what files show as new, etc.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 09:44:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgAccess change" }, { "msg_contents": "On Fri, 31 Mar 2000, Lamar Owen wrote:\n\n> Bruce Momjian wrote:\n> > > Thanks. I'm building a test RPM here so I can satisfy Tom's request for\n> > > regress results on CURRENT. My rpm patchset barfed on pgaccess -- so I\n> > > investigated. I fixed it in my local tree, but wanted to alert folk.\n> \n> > Glad you found it. That was the one file I had to modify to get it to\n> > match our old version, and obviously I messed that up.\n> \n> I'm going to take this opportunity to thank Marc for having a cvsweb\n> interface -- as I was able to see when the difference was introduced\n> using that interface.\n\nActually that was Hal that came up with that.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 31 Mar 2000 09:45:47 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgAccess change" } ]
[ { "msg_contents": "While applying the NT regression tests, I remember Tom Lane's comment\nthat people are being much more picky about the regression results. In\nthe old days, we could just say that they will have _expected_ errors,\nbut now they want them to match exactly.\n\nKind of funny, their standards are going up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 31 Mar 2000 09:17:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Regression tests" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> While applying the NT regression tests, I remember Tom Lane's comment\n> that people are being much more picky about the regression results. In\n> the old days, we could just say that they will have _expected_ errors,\n> but now they want them to match exactly.\n\nWell, it is NICE if they match exactly, but not essential for me. I\nhave been advising intel RPM dist users to _expect_ float8 and geometry\nfailures -- and the occasional 'random' failure, of course. And the\nsituation with RedHat 6.1's locales brings yet another mixed bag -- I\nadvise intel RPM users to completely disable RedHat's locale support (by\nrenaming /etc/sysconfig/i18n to something else and rebooting) for\nperforming regression tests -- then they can restore locale. Sort order\nunder RedHat 6.1's locale is _messed_up_.\n\nThomas mentioned RedHat 6.1 Intel being an appropriate reference\nplatform -- if that is the case, then RH 6.1 Intel needs exact matches. \nOf course, ANY platform can be the reference -- as long as _one_ is. 
If\nBSD/os 4 (or whatever rev you're currently running) were to be the\nreference, then the regression tests had better match your machine's\nrun.\n\nMy opinion is to select the reference platform that produces the least\namount of failures for other known good platforms.\n\nAlso, geometry wouldn't fail if the number of digits of precision was\njacked down one.... :-).\n\n> Kind of funny, their standards are going up.\n\nIs it a case of standards going up, or standards going down? ;-)\n\nAs far as PostgreSQL's performance, our standards are definitely going\nup -- there has never been a better PostgreSQL, all around, than the one\nthat is in CURRENT, IMO (and I've used everything since 6.1.1).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 31 Mar 2000 09:46:16 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests" }, { "msg_contents": "At 09:17 AM 3/31/00 -0500, Bruce Momjian wrote:\n>While applying the NT regression tests, I remember Tom Lane's comment\n>that people are being much more picky about the regression results. In\n>the old days, we could just say that they will have _expected_ errors,\n>but now they want them to match exactly.\n>\n>Kind of funny, their standards are going up.\n\nIs this perhaps a result of a growing audience for Postgres?\n\nFor instance, I dealt with one of our web toolkit \"early achievers\",\nnew to AOLserver, new to Postgres, new to the toolkit - that's a lot\nof \"new to's\" for someone to deal with in parallel!\n\nHe had problems with the regression tests - cockpit error, first\ngo-around, later diminished to expected errors. He's hacker enough\nto have run the regression tests in the first place (rather than\nblindly assume his install went OK) and also to figure out that\nthe geometry results were probably due simply to FP imprecision,\nbut wanted to safety-blanket reassurance from myself (and Lamar\nOwen) that all was A-OK. 
Particularly after his first go-around\nof self-inflicted problems (the details of which I don't even\nremember at the moment, he figured them out himself).\n\nAs PG gets more use, I would expect to see more, not fewer, intelligent\nnewcomers who aren't steeped in PG lore (i.e. experience with old\nversions) who will be full of questions about any seeming abnormality.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 31 Mar 2000 15:13:22 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests" }, { "msg_contents": "Lamar Owen writes:\n\n> Thomas mentioned RedHat 6.1 Intel being an appropriate reference\n> platform -- if that is the case, then RH 6.1 Intel needs exact matches. \n> Of course, ANY platform can be the reference -- as long as _one_ is.\n\nWhy do we need a reference platform? Just to show \"See, all the tests pass\non this machine.\", that has exactly zero practical value. If at all I\nthink whatever hub.org is running should be the reference. A platform with\nbroken locale certainly shouldn't.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Apr 2000 13:33:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests" }, { "msg_contents": "> > Thomas mentioned RedHat 6.1 Intel being an appropriate reference\n> > platform -- if that is the case, then RH 6.1 Intel needs exact matches.\n> > Of course, ANY platform can be the reference -- as long as _one_ is.\n> Why do we need a reference platform? Just to show \"See, all the tests pass\n> on this machine.\", that has exactly zero practical value.\n\nSorry, I disagree. 
When we first started maintaining the regression\ntests, they passed on *no* machine, but every machine had different\nfailures (much as happens now). We need to have one machine defined as\nthe standard to simplify the testing process and discussion, and until\nnow that machine has been mine. If this is an issue, then, at least\nfor the 7.0 release, that machine will continue to be the same one, a\nLinux RH5.2 machine at my home.\n\nUntil someone else takes *complete* responsibility for the regression\ntests, does so for some period of time, and makes sure that these\ntests are run regularly, then I can't see the situation changing. But\nit is certainly true that for the last while (6 months, 1 year?) it is\nclear that there are several people with enough interest and\npersistence to take on this responsibility if they would like.\n\nRH6.1 vs some other platform is a minor issue. I'd be happy for the\nreference machine to have Mandrake 7.0, which I'm running on several\nmachines.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 15:50:12 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests" } ]
[ { "msg_contents": "> Something to think about maybe.\n\nYeah, I've thought about it, and it is not at all clear. I understand\nall of your points, but for the hardcopy versions of docs having a\nsingle 600 page doc seems more unwieldy than having several 200 page\ndocs (yes, they *are* that big!!).\n\nJust as you, I assume that people using html read the integrated doc.\n\nbtw, it is possible to mark up the docs so that you can, say, include\ncross references if it is html but include only citation references if\nit is hardcopy. So if we moved to having only the integrated doc in\nhtml, and only the smaller docs in hardcopy, then we could put more\n\"clickable cross references\" into the html.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 31 Mar 2000 14:50:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs refreshed" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > Something to think about maybe.\n> \n> Yeah, I've thought about it, and it is not at all clear. I understand\n> all of your points, but for the hardcopy versions of docs having a\n> single 600 page doc seems more unwieldy than having several 200 page\n> docs (yes, they *are* that big!!).\n> \n> Just as you, I assume that people using html read the integrated doc.\n\nMaybe we need a show of hands of how many people bother with the hardcopy.\nI think for most people anything beyond 20 pages would never get near the\nprinter. At the point you reach 200 pages the extra 400 don't matter, the\npaper is going to be empty beforehand anyway. If you want to print\nsomething for reference you pick out the interesting pages (such as the\nreference pages).\n\n> btw, it is possible to mark up the docs so that you can, say, include\n> cross references if it is html but include only citation references if\n> it is hardcopy. 
So if we moved to having only the integrated doc in\n> html, and only the smaller docs in hardcopy, then we could put more\n> \"clickable cross references\" into the html.\n\nThat's the next question I had for you. :) I'm just happy the stuff builds\nfor me. But I don't think just linking stuff together is the answer. It's\nonly working around an organizational problem. IMHO.\n\n\nHere's a thought: If we'd make it in \"book\" form like I suggested we would\nprobably have about a dozen major chapters. That's 50 pages each which is\nmuch more printer friendly and you get to choose better. The only thing\nyou'd have to do is split up the postscript into separate files at some\nstage.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 1 Apr 2000 00:18:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs refreshed" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Just as you, I assume that people using html read the integrated doc.\n\nBtw., the fact that the print docs are in US Letter format makes them\nslightly beyond useless for the rest of the world. :( I still think that a\n200 page document is not any less unwieldy than a 600 page one. There's\ngotta be an option to only print pages x through y either way.\n\nConsidering that this is pretty much what's holding up releases, would it\nbe possible to consider not putting the postscript docs in the\ndistribution and just put them on the ftp server at your convenience (and\nin A4 as well) for those who choose to get it? Not to break your heart or\nsomething but thinking practically ... :)\n\nAlso don't put them in the CVS tree. They're just wasting space since\nthey're out of date and not really useful for developers.\n\nIn the same spirit I'd suggest not including the html tars in the CVS tree\neither. 
In the distribution I would like to have them *untarred* so users\ncan browse them before/without installing. But for that they can be\ngenerated when making the distribution (make -C doc postgres.html, not too\nhard but we'd need to use a separate build dir), no need to keep out of\ndate copies in CVS.\n\nWhat do other people think? It seems to me that many people just read the\nstuff on postgresql.org.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Apr 2000 23:18:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs refreshed" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Also don't put them in the CVS tree. They're just wasting space since\n> they're out of date and not really useful for developers.\n\n> In the same spirit I'd suggest not including the html tars in the CVS tree\n> either.\n\nIt's really pretty silly to have tar.gz files in the CVS tree. I can\nimagine what the underlying diff looks like every time they are updated\n:-(. 
And, since they are ultimately just derived files, I agree with\n> Peter that they shouldn't be in CVS at all.\n\nWell, it wasn't pretty silly when I first did it, so have a little\nsense of history please ;)\n\nIt was only the last year or so that the docs could get built on\nhub.org (the postgresql.org host). It still breaks occasionally if\nscrappy tries updating his machine, since the tools are only used by\nme so he wouldn't notice if something goes wrong.\n\nPreviously, the docs had to be built on my machine at home, then\ndownloaded (and home is still where all package development and\ndebugging takes place). If they were to be recoverable *on* hub.org,\nthey had to go into cvs. It may be that we could now generate them\nfrom scratch during the release tarball build, (it takes, maybe, 10-15\nminutes to build all variations). But I would think you wouldn't want\nto do that, but would rather pick them up from a known location. cvs\nis where we do that now, but it could be from somewhere else I\nsuppose. Perhaps our build script for the tarball could include a\n\"wget\" from a known location on postgresql.org?\n\nVince is planning on redoing the web site a bit to decouple the\nrelease docs from the development docs. 
I'd like to have a \"get the\ndocs\" page which gives us the release docs for each release and the\ncurrent development docs, and we could have the tarball builder get\nthe release docs for the upcoming release from there.\n\nIs this something for v7.1, or is there something important about this\nnow??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 21:50:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs refreshed" }, { "msg_contents": "> > Just as you, I assume that people using html read the integrated doc.\n> Btw., the fact that the print docs are in US Letter format makes them\n> slightly beyond useless for the rest of the world. :( I still think that a\n> 200 page document is not any less unwieldy than a 600 page one. There's\n> gotta be an option to only print pages x through y either way.\n\n<bad attitude>\nHmm. I'm glad you appreciate the hundreds of hours of work I've put\ninto the docs :/ And I haven't seen anyone else stepping up to\nproviding hardcopy-style docs for \"the rest of the world\". And I\npersonally value hardcopy docs any time I try to learn something (as\nopposed to just refreshing my memory on a detail) and think that it is\nimportant for others too.\n</bad attitude>\n\n> Considering that this is pretty much what's holding up releases, would it\n> be possible to consider not putting the postscript docs in the\n> distribution and just put them on the ftp server at your convenience (and\n> in A4 as well) for those who choose to get it? Not to break your heart or\n> something but thinking practically ... :)\n\nOK, there is a little known secret here: the docs are *not* holding up\nthe release. But, the release will be held up until both docs *and*\nthe release are ready to go, and at the moment neither are. 
In fact, a\nfundamental part of our release cycle is that, for the last couple of\nweeks before the actual release, the project is \"waiting for docs\",\nand during that time, the old adage that \"idle hands do the devil's\nwork\" comes into play and people start poking at the release, trying\nthings, putting it into production, maybe, trying it on all platforms,\netc etc, and we find a few extra bugs and get our platform reports\nfinalized. And all of this is essential for a quality release. So\ndon't believe the docs story too much, but don't try doing away with\nthe sham either ;)\n\nbtw, I hadn't fully realized the above until you started poking at it,\nso thanks :)\n\n> Also don't put them in the CVS tree. They're just wasting space since\n> they're out of date and not really useful for developers.\n\nYou haven't yet suggested enough other mechanisms to adequately\nreplace the current scheme, but I'll do it for you, under separate\ncover sometime soon.\n\n> In the same spirit I'd suggest not including the html tars in the CVS tree\n> either. In the distribution I would like to have them *untarred* so users\n> can browse them before/without installing. But for that they can be\n> generated when making the distribution (make -C doc postgres.html, not too\n> hard but we'd need to use a separate build dir), no need to keep out of\n> date copies in CVS.\n\nWell, the copies *aren't* out of date for the corresponding release\nversion, so that isn't the issue afaict. 
tar vs untar doesn't seem to\nbe a big issue either, though the tarball is pretty much required if\nthe html is in cvs since the names of the html files only occasionally\nreproduce from one rev to the next (note the large number of random\ngeneric names for internal portions of chapters).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 03 Apr 2000 01:21:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs refreshed" } ]
[ { "msg_contents": "\n Hi,\n\nI did a little study on DejaNews of a large discussion (October 1999) about the new \nfmgr and functions which return tuples. (IMHO 'function' is not a good name \nfor this; 'procedure' is better.)\n\nA question: is this feature planned for 7.1, and is anyone working on it?\n\nIMHO it does not need changes to the current fmgr code; it needs only RTE code (parser\nand transformStmt) and executor code changes (the executor calls myfunc(), this \nfunction creates a temp table, and the executor uses this table as a standard table.) \n(..it is really a simplification :-)\n\nOr is there another idea?\n\n\t\t\t\t\t\t\tKarel\n\n/* ----------------\n * Karel Zak * [email protected] * http://home.zf.jcu.cz/~zakkr/\n * C, PostgreSQL, PHP, WWW, http://docs.linux.cz\n * ----------------\n */\n\n", "msg_date": "Fri, 31 Mar 2000 17:33:29 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "procedure returns a tuple" }, { "msg_contents": "Karel Zak wrote:\n\n> I did a little study on DejaNews of a large discussion (October 1999) about the new\n> fmgr and functions which return tuples. (IMHO 'function' is not a good name\n> for this; 'procedure' is better.)\n>\n> A question: is this feature planned for 7.1, and is anyone working on it?\n>\n> IMHO it does not need changes to the current fmgr code; it needs only RTE code (parser\n> and transformStmt) and executor code changes (the executor calls myfunc(), this\n> function creates a temp table, and the executor uses this table as a standard table.)\n> (..it is really a simplification :-)\n>\n> Or is there another idea?\n\n Right - simplified.\n\n Here again, the proposed overhaul of the parse-/querytree is\n the reason why we don't want to tackle this issue now. At\n that time, any relation as well as a 'procedure' (function\n returning a tuple set) will become a \"tuple-source\". 
A tuple\n source is mainly an abstract node, describing the shape of\n tuples it returns, hiding how it produces them to the caller.\n\n This way, there is no fundamental difference between a\n relation, a procedure or an external database link any more.\n\n Up to now we only have a vague idea in mind. And we're not\n sure if we'll do these huge changes in the main CVS trunk or a\n separate branch. Neither have we decided when to start on\n it, because we need a couple of key developers at the same\n time to ensure reasonable progress in that project (none of\n us can do it alone).\n\n After 7.0 is out, I'll try to collect all the design issues,\n break up the entire package into smaller chunks and develop a\n project plan for it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 3 Apr 2000 23:07:53 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: procedure returns a tuple" }, { "msg_contents": "\nOn Mon, 3 Apr 2000, Jan Wieck wrote:\n\n> Karel Zak wrote:\n> \n> > I did a little study on DejaNews of a large discussion (October 1999) about the new\n> > fmgr and functions which return tuples. (IMHO 'function' is not a good name\n> > for this; 'procedure' is better.)\n\nJan, thanks for the answer; I thought that I was again asking about a bad matter or \nsending a stupid question :-) \n\n> Here again, the proposed overhaul of the parse-/querytree is\n> the reason why we don't want to tackle this issue now. At\n> that time, any relation as well as a 'procedure' (function\n> returning a tuple set) will become a \"tuple-source\". 
A tuple\n> source is mainly an abstract node, describing the shape of\n> tuples it returns, hiding how it produces them to the caller.\n> \n> This way, there is no fundamental difference between a\n> relation, a procedure or an external database link any more.\n\nLast weekend I explored the current sources for this a little; the\ncurrent PostgreSQL really \"vegetates\" on relations -- for example, the \ntransform statement code depends heavily on relation structs. Yes,\nBerkeley's code design for this is not modular and abstract. \t\n\n> Up to now we only have a vague idea in mind. And we're not\n> sure if we'll do these huge changes in the main CVS trunk or a\n> separate branch. Neither have we decided when to start on\n> it, because we need a couple of key developers at the same\n> time to ensure reasonable progress in that project (none of\n> us can do it alone).\n> \n> After 7.0 is out, I'll try to collect all the design issues,\n> break up the entire package into smaller chunks and develop a\n> project plan for it.\n\nWell. \n\n\t\t\t\t\t\tKarel \n\n", "msg_date": "Tue, 4 Apr 2000 14:27:16 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: procedure returns a tuple" } ]
[ { "msg_contents": "It appears that some code in backend/utils/adt/int8.c tickles a compiler bug \nin SCO's UDK when it is compiled with optimization turned on. The attached \npatch, which rewrites a for statement as a while statement, corrects the \nproblem.\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Fri, 31 Mar 2000 14:39:14 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "int8.c compile problem on UnixWare 7.x" }, { "msg_contents": "-- Start of PGP signed section.\n> It appears that some code in backend/utils/adt/int8.c tickles a compiler bug \n> in SCO's UDK when it is compiled with optimization turned on. The attached \n> patch, which rewrites a for statement as a while statement, corrects the \n> problem.\nContent-Description: uw7-20000331.patch\n\n\nI am sorry Billy but the new code is much harder to understand than the\noriginal, so I am not inclined to change it based on a compiler bug, for\nwhich SCO is so famous.\n\nIf you wish to submit a patch that removes optimization from the\nMakefile for the file or entire directory under SCO, I will consider it.\nAnd I would document the change so we can remove it later.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 1 Apr 2000 03:10:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int8.c compile problem on UnixWare 7.x" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am sorry Billy but the new code is much harder to understand than the\n> original, so I am not inclined to change it based on a compiler bug, for\n> which SCO is so famous.\n\nI agree that I'm not eager to uglify the code that much to avoid a\nsingle-platform compiler bug. Can it be worked around with less-ugly\nchanges? I'd try changing the --i to i--, for example; and/or swapping\nthe order of the two initialization assignments. Neither of those would\nimpair readability noticeably.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Apr 2000 23:49:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int8.c compile problem on UnixWare 7.x " } ]
[ { "msg_contents": "Kardos, Dr. Andreas writes:\n\n> Unfortunately these return i386-pc-qnx now and not i386-pc-qnx4. Note there\n> is a big difference between QNX2 and QNX4.\n> \n> If these remain checked in, configure and regress.sh have to be patched\n> accordingly.\n\nOkay, done. Apparently they switched from i386-pc-qnx* to i386-qnx-qnx*\nand back and now they mangled the versions as well. In any case I don't\nsee any support for QNX2 in config.guess so we shouldn't need to worry.\n\nLet me know about any more problems regarding this.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 1 Apr 2000 00:09:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New config.{sub|guess}" } ]
[ { "msg_contents": "\nJust doing a vacuum of my database, and found something that I consider to\nbe \"odd\" ... while vacuuming one of the tables, doing a ps shows:\n\n64185 p8 S 0:00.10 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 ipmeter VACUUM\n64202 p8 S 0:00.04 /home/database/v6.5.3/bin/postgres jeff 216.126.84.1 jeff startup\n64203 p8 S 0:00.03 /home/database/v6.5.3/bin/postgres hordemgr 216.126.84.1 horde startup\n64206 p8 S 0:00.04 /home/database/v6.5.3/bin/postgres jeff 216.126.84.1 jeff startup\n64223 p8 S 0:00.02 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 banner_ad startup\n64235 p8 S 0:00.01 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 banner_ad startup\n64236 p8 S 0:00.01 /home/database/v6.5.3/bin/postgres hordemgr 216.126.84.1 horde startup\n64238 p8 S 0:00.01 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 banner_ad startup\n64240 p8 S 0:00.01 /home/database/v6.5.3/bin/postgres jeff 216.126.84.1 jeff startup\n64255 p8 S 0:00.00 /home/database/v6.5.3/bin/postgres jeff 216.126.84.1 jeff startup\n\nipmeter was being vacuum'd, but the rest were \"hanging\"?\n\nEventually, I got a:\n\nFATAL: s_lock(5004c3d4) at bufmgr.c:665, stuck spinlock. Aborting.\n\nmessage on the vacuum and then everything went back to normal ...\n\nI'm tempted to upgrade to v7.0 and see how she goes, but am wondering if\nthere is something that I should be looking at *before* I do that?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 31 Mar 2000 21:43:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "[6.5.3] first spinlock problem I've ever noticed ..." } ]
[ { "msg_contents": "\nJust looking around, and in one of my databases, there is a core file from\ntonight, as well, that a quick gdb shows:\n\n(gdb) where\n#0 0x8060f92 in nocachegetattr ()\n#1 0x809018f in ExecEvalVar ()\n#2 0x8090c48 in ExecEvalExpr ()\n#3 0x80907e6 in ExecEvalFuncArgs ()\n#4 0x8090894 in ExecMakeFunctionResult ()\n#5 0x8090a04 in ExecEvalOper ()\n#6 0x8090cd4 in ExecEvalExpr ()\n#7 0x8090ab2 in ExecEvalOr ()\n#8 0x8090cf0 in ExecEvalExpr ()\n#9 0x8090d62 in ExecQualClause ()\n#10 0x8090d95 in ExecQual ()\n#11 0x80910d6 in ExecScan ()\n#12 0x8095a93 in ExecSeqScan ()\n#13 0x808fb4e in ExecProcNode ()\n#14 0x8107d56 in createfirstrun ()\n#15 0x8107b88 in initialrun ()\n#16 0x8107a6d in psort_begin ()\n#17 0x8095d97 in ExecSort ()\n#18 0x808fb7a in ExecProcNode ()\n#19 0x808ec7d in ExecutePlan ()\n#20 0x808e613 in ExecutorRun ()\n#21 0x80dc5fb in ProcessQueryDesc ()\n#22 0x80dc65c in ProcessQuery ()\n#23 0x80db182 in pg_exec_query_dest ()\n#24 0x80db05f in pg_exec_query ()\n#25 0x80dc078 in PostgresMain ()\n#26 0x80c65de in DoBackend ()\n#27 0x80c6112 in BackendStartup ()\n#28 0x80c5806 in ServerLoop ()\n#29 0x80c535b in PostmasterMain ()\n#30 0x809d5b2 in main ()\n#31 0x806075d in _start ()\n\nAnd whose directory contains alot of:\n\n-rw------- 1 pgsql pgsql 4775936 Mar 31 20:23 postgres.core\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4744\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4743\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4742\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4741\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4740\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4739\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4738\n-rw------- 1 pgsql pgsql 0 Mar 31 11:46 pg_attribute_relid_attnum_index.4737\n\nAgain, if this is somethign that is 
most likely gone in v7.0, we can\nignore, but ... ?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 31 Mar 2000 21:47:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "[6.5.3] further investigation ..." } ]
[ { "msg_contents": "I'm running the 7.0 beta 3. It seems like queries that use german\ncharacters like �,�,� or � don't work with the ~*\nOperator (case insensetive regex). It only works with case sensetive\nqueries. So the configure option\n--enable-locale doesn't have any influence.\n\n", "msg_date": "Sat, 01 Apr 2000 21:31:50 +0200", "msg_from": "werner <[email protected]>", "msg_from_op": true, "msg_subject": "--enable-locale doesn't work" }, { "msg_contents": "werner <[email protected]> writes:\n> I'm running the 7.0 beta 3. It seems like queries that use german\n> characters like �,�,� or � don't work with the ~*\n> Operator (case insensetive regex). It only works with case sensetive\n> queries. So the configure option\n> --enable-locale doesn't have any influence.\n\nThis isn't enough information. What exactly do you mean by \"doesn't\nwork\"? What query did you issue, what result did you get, what did\nyou expect to get? And which locale are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 00:41:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --enable-locale doesn't work " }, { "msg_contents": "On Sun, 2 Apr 2000, Tom Lane wrote:\n> werner <[email protected]> writes:\n> > I'm running the 7.0 beta 3. It seems like queries that use german\n> > characters like О©╫,О©╫,О©╫ or О©╫ don't work with the ~*\n> > Operator (case insensetive regex). It only works with case sensetive\n> > queries. So the configure option\n> > --enable-locale doesn't have any influence.\n> \n> This isn't enough information. What exactly do you mean by \"doesn't\n> work\"? What query did you issue, what result did you get, what did\n> you expect to get? And which locale are you using?\n\n Just tested beta3 - working like a charm, as usual :) Are you sure you\nhave correct locale settings? Look into src/test/locale directory; there\nyou'll find locale test for some locales, including de_DE.ISO-8859-1. 
Run\nthe test (make all test-de_DE.ISO-8859-1). Watch the results - is your\nlocale ok?\n If you are sure your locale is Ok, but still unsatisfied with locale\ntest - send your patches to me, please.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Sun, 2 Apr 2000 12:10:21 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: --enable-locale doesn't work " }, { "msg_contents": "I found a solution for the problem. But first I try to explain a little\nbit better what the problem was. When I\nsearched for a text that contained foreign characters (ü,ö,ä,etc.) the\nquery found no matching records.\n\nfor example:\n\ncreate table mytesttable(myattribute text);\n\ninsert into mytesttable values('FRÜHLING');\n\nselect * from mytesttable where myattribute~*'frühling';\n\nThe query finds no matching records. But it works if the \"special\ncharacters\" are the same case (i.e. \"frühling\",\n\"Frühling\",\"FRÜHLING\",etc.)\n(frühling is the german word for spring :-) btw)\n\nIn postgresql versions before 7.x the parameter --enable-locale was\nnecessary to search for these\ncharacters. Now I found out that in 7.0 the parameter --enable-recode is\nnecessary. The manual says that\nthis parameter is for cyrillic recode support only. But the german\ncharacter set ISO-8859-1 (I use) is not\ncyrillic. So I was a little confused. I'm not sure what the difference\nbetween --enable-locale and\n--enable-recode is. Anyway, it seems like --enable-recode is necessary to\nmake a search on attributes that use german character sets.\n\n\nPS.: I'm afraid that if you don't have a german character set, you can't\nreally read this message, because the \"special characters\" are not\ntranslated. 
The character \"�\" should be shown as a small u with 2 points\nabove.\n\nTom Lane wrote:\n\n> werner <[email protected]> writes:\n> > I'm running the 7.0 beta 3. It seems like queries that use german\n> > characters like �,�,� or � don't work with the ~*\n> > Operator (case insensetive regex). It only works with case sensetive\n> > queries. So the configure option\n> > --enable-locale doesn't have any influence.\n>\n> This isn't enough information. What exactly do you mean by \"doesn't\n> work\"? What query did you issue, what result did you get, what did\n> you expect to get? And which locale are you using?\n>\n> regards, tom lane\n\n", "msg_date": "Sun, 02 Apr 2000 16:07:51 +0200", "msg_from": "werner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --enable-locale doesn't work" }, { "msg_contents": "Works like a charm here. Be sure to set the locale-relevant environment\nvariables (e.g., LC_ALL) in the environment of the postmaster.\n\nwerner writes:\n\n> I'm running the 7.0 beta 3. It seems like queries that use german\n> characters like �,�,� or � don't work with the ~*\n> Operator (case insensetive regex). It only works with case sensetive\n> queries. So the configure option\n> --enable-locale doesn't have any influence.\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Apr 2000 23:17:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --enable-locale doesn't work" } ]
[ { "msg_contents": "I'd like to collect info on the platforms we are supporting for v7.0.\nI've listed below all of the platforms which have been supported in\nthe past and any new platforms such as QNX which have a new \"sponsor\".\n\nSince this is a major release, it is probably the right time to cull\nour list of platforms which just aren't in use anymore, even though\nafaik there have been no changes in the last couple of full releases\nwhich would have broken their support. So, if we don't get a report,\nthe platform will go to the \"maybe supported\" category.\n\nHere is the list of platforms for which I need reports. Note that in\nsome cases you might have already reported on them, but either the\nreport came *with* patches, so I'm not certain that the current tree\nhas them applied and you have tested them, or perhaps I just got\nconfused and missed a good report. In either case, I need confirmation\nagain. Something which says:\n\no tested on the current tree or a recent beta\no installed and ran with no problems, or\no for a very few platforms, installed and ran with minimal or known\npatches\no regress tests pass or are understood with no fundamental problems\n\nHere are the platforms, along with the name of a recent caretaker:\n\nAIX (Andreas Zeugswetter)\nBSDI (Bruce - hey, where is your report! :)\nDGUX (Brian Gallew lost his platform a while ago; this one may be\nunsupported)\nDigital Unix (Pedro et al. Similar patches to Linux-alpha?)\nFreeBSD (Tatsuo and scrappy)\nHPUX (Tom Lane; Stan still here or anyone else want to pitch in?)\nIRIX (Kevin Wheatley recently reported on v6.5.3)\nLinux-alpha (this one still needs the \"Ryan\" or \"Uncle George\"\npatches?)\nLinux-arm41 (Mark Knox recently did v6.5.3)\nLinux-x86 (Covered by Lamar Owens and Thomas for RH; should we report\nother distros too? How about libc vs glibc? 
I've tested libc, but\nshould we bother listing it?)\nLinux-sparc (Tom Szybist, but no report since v6.4 so unsupported?)\nLinux-ppc (Tatsuo already reported. Thanks!)\nMacOS (Still nothing? Does OS-X or whatever have a chance?)\nMkLinux-ppc (Tatsuo, but this merged with Linux-ppc, right? So\nobsolete?)\nNetBSD-arm32 (Andrew McMurry)\nNetBSD-x86 (Patrick Welche already reported. Thanks!)\nNetBSD-m68k (Mr. Mutsuki Nakajima via Tatsuo, but nothing since v6.4)\nNetBSD-ns32k (Jon Buller, but nothing since v6.4)\nNetBSD-sparc (Tom I Helbekkmo, v6.4)\nNetBSD-vax (Tom I Helbekkmo, but nothing since v6.3. Unsupported?)\nQNX (Andreas Kardos. Recent reports, but we have recent changes to\nconfig.guess)\nSCO OpenServer (Andrew Merrill)\nSCO UnixWare 7 (Billy Allie needs one more patch?)\nSolaris-x86 (scrappy doing it this weekend?)\nSolaris-sparc (Tom Szybist and Frank Ridderbusch for v6.4)\nSunOS 4.1.4 (Tatsuo; obsolete platform?)\nSVR4-mips (Frank Ridderbusch, no int8, v6.4. Obsolete?)\nSVR4-m88k (Nothing for two years. Needed spinlock code. Obsolete?)\nUltrix (No reports for either MIPS or VAX for two years. Dead and\ngone?)\nWIN9x (Magnus Hagander, v6.4, client side only. Still good?)\nWINNT (Daniel Horak already reported for v7.0. Thanks!)\n\n\nAny others? I'd like to get finalized within a week or so. If you\nthink you will be able to test but can't right away, send some mail so\nwe know someone might be working on it. 
I always hate to drop a\nplatform from the list, but if no one is using it or testing it we'll\nmove 'em to the unsupported list by release day :(\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 00:25:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Call for porting reports" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'd like to collect info on the platforms we are supporting for v7.0.\n> [snip]\n\n> HPUX (Tom Lane; Stan still here or anyone else want to pitch in?)\n\nI have checked recent sources on HPUX 9.03 and 10.20 using gcc,\nbut am trying to find time to check it with the vendor cc before\nfiling a report. I also have an un-dealt-with report of changes\nneeded for HPUX 11.* ... anyone using that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Apr 2000 21:25:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports " }, { "msg_contents": "On Sun, Apr 02, 2000 at 12:25:15AM +0000, Thomas Lockhart wrote:\n> Linux-x86 (Covered by Lamar Owens and Thomas for RH; should we report\n> other distros too? How about libc vs glibc? I've tested libc, but\n\nYes, of course we should list other distros. Debian shouldn't be a problem\nas there are several on this list.\n\nOr else we should only list Linux without naming a distro at all.\n\n> should we bother listing it?)\n\nI don't think so. No distro is still libc based AFAIK.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sun, 2 Apr 2000 10:07:50 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports" }, { "msg_contents": "> Yes, of course we should list other distros. 
Debian shouldn't be a problem\n> as there are several on this list.\n> Or else we should only list Linux without naming a distro at all.\n\nRight. Sorry, I wasn't clear; at the moment I don't mention which\ndistro was actually tested. I know Oliver has Debian covered (and I'm\nrunning Mandrake on a couple of systems so I can test that -- it\npasses btw ;) and that we will hear from someone when there is\ntrouble.\n\nI'll keep listing it generically, until we get a report of problems\nwith any distro and then we can reopen the issue.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 15:35:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call for porting reports" }, { "msg_contents": "> Or else we should only list Linux without naming a distro at all.\n> \n> > should we bother listing it?)\n> \n> I don't think so. No distro is still libc based AFAIK.\n\ntrue, however not all people are running the most up to date\nslackware (7.0) and afaik versions before 7 were not glibc2 based.\n\nperhaps 4 was, but i think it was short lived compared to the\n3.x series.\n\n\nJeff MacDonald\[email protected]\n\n", "msg_date": "Sun, 2 Apr 2000 14:53:58 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports" }, { "msg_contents": "Thomas Lockhart writes:\n\n> I'd like to collect info on the platforms we are supporting for v7.0.\n\nI am now reproducibly getting this failure in the timestamp test. I have\nnever seen it before today:\n\n\n*** expected/timestamp.out\tSun Apr 2 14:15:29 2000\n--- results/timestamp.out\tSun Apr 2 18:01:21 2000\n***************\n*** 13,25 ****\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n True \n ------\n! 
t\n (1 row)\n \n SELECT (timestamp 'tomorrow' = (timestamp 'yesterday' + interval '2 days')) as \"True\";\n True \n ------\n! t\n (1 row)\n \n SELECT (timestamp 'current' = 'now') as \"True\";\n--- 13,25 ----\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n True \n ------\n! f\n (1 row)\n \n SELECT (timestamp 'tomorrow' = (timestamp 'yesterday' + interval '2 days')) as \"True\";\n True \n ------\n! f\n (1 row)\n \n SELECT (timestamp 'current' = 'now') as \"True\";\n***************\n*** 81,87 ****\n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' + interval '1 day';\n one \n -----\n! 1\n (1 row)\n \n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' - interval '1 day';\n--- 81,87 ----\n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' + interval '1 day';\n one \n -----\n! 0\n (1 row)\n \n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' - interval '1 day';\n\n----------------------\n\n\nThe catch is that this *always* happens in the (parallel) regression tests\nbut not if I run the file through psql by hand. Gives me a warm feeling\n... :(\n\nFurthermore, PostgreSQL doesn't compile with gcc 2.8.1 (never has). I get\na fatal signal if backend/utils/adt/float.c is compiled with -O2 or\nhigher. The offending line is in function\n\nfloat64 dpow(float64 arg1, float64 arg2)\n\n*result = (float64data) pow(tmp1, tmp2);\n\nCertainly a compiler bug, does anyone have a suggestion how this should be\nhandled? Is gcc 2.8.1 in wide-spread use?\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Apr 2000 23:18:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports" }, { "msg_contents": "> I am now reproducibly getting this failure in the timestamp test. 
I have\n> never seen it before today:...\n> The catch is that this *always* happens in the (parallel) regression tests\n> but not if I run the file through psql by hand. Gives me a warm feeling\n> ... :(\n\nAlmost certainly due to daylight savings time in PST8PDT (it happened\ntoday). As the doctor says, you'll feel better in a couple of days :)\n\nThis happens every year (actually, twice a year). But it would be\nwrong to not test the yesterday/today/tomorrow feature at all...\n\n> Furthermore, PostgreSQL doesn't compile with gcc 2.8.1 (never has). I get\n> a fatal signal if backend/utils/adt/float.c is compiled with -O2 or\n> higher. The offending line is in function\n> float64 dpow(float64 arg1, float64 arg2)\n> *result = (float64data) pow(tmp1, tmp2);\n> Certainly a compiler bug, does anyone have a suggestion how this should be\n> handled? Is gcc 2.8.1 in wide-spread use?\n\nI'm guessing that 2.8.x is not in wide-spread use. What platform are\nyou on? You could force -O0 for your platform, even just for that\ndirectory, as long as you don't disable it for other platform/compiler\ncombinations.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 21:37:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call for porting reports" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I am now reproducibly getting this failure in the timestamp test. I have\n> never seen it before today:\n\nIs today DST changeover where you live?\n\nThe time-related tests have always failed near DST boundaries; the\nqueries you mention effectively assume that the difference between\nsuccessive midnights is exactly 24 hours, which is wrong for DST days.\n\n> The catch is that this *always* happens in the (parallel) regression tests\n> but not if I run the file through psql by hand. Gives me a warm feeling\n> ... :(\n\nAh. 
The parallel tests set up the postmaster's timezone to be PST8PDT.\nToday is DST changeover in that zone, even if it isn't where you live.\n\n> Furthermore, PostgreSQL doesn't compile with gcc 2.8.1 (never has). I get\n> a fatal signal if backend/utils/adt/float.c is compiled with -O2 or\n> higher. The offending line is in function\n\n> float64 dpow(float64 arg1, float64 arg2)\n\n> *result = (float64data) pow(tmp1, tmp2);\n\n> Certainly a compiler bug, does anyone have a suggestion how this should be\n> handled? Is gcc 2.8.1 in wide-spread use?\n\nWrite it off as a broken compiler. Compiler segfaults on valid code are\nnot our problem. (As far as I know, the 2.8 series of gcc releases were\nnever robust enough for production use. Try 2.95.2 instead.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 17:51:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports " }, { "msg_contents": "On Sun, 2 Apr 2000, Thomas Lockhart wrote:\n\n> I'd like to collect info on the platforms we are supporting for v7.0.\n> I've listed below all of the platforms which have been supported in\n> the past and any new platforms such as QNX which have a new \"sponsor\".\n....\n> Linux-alpha (this one still needs the \"Ryan\" or \"Uncle George\"\n> patches?)\n\n\tI have the patches nearly up to date. I am doing a final check on\nbeta-3 right now. I should have patches ready and my web page updated\nby Wednesday. Sorry about the delay in response, but I have been busy with\nschool. 
:(\n\tMore details to come.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Sun, 2 Apr 2000 19:35:24 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for porting reports" } ]
[ { "msg_contents": "The man page for initdb says that the -t option will overwrite the template1\ndatabase with the up-to-date structure and suggests that this is the way to\nupgrade to a newer release:\n\n --template\n\n -t Replace the template1 database in an existing\n database system, and don't touch anything else.\n This is useful when you need to upgrade your tem�\n plate1 database using initdb from a newer release\n of PostgreSQL, or when your template1 database has\n become corrupted by some system problem. Normally\n the contents of template1 remain constant through�\n out the life of the database system. You can't\n destroy anything by running initdb with the --tem�\n plate option.\n\nHow does this relate to pg_upgrade? Is it necessary to do a pg_dump? If\nnot, is there a danger of new oids in template1 overwriting old oids in\nuser data? (I am not clear whether oids are unique to a database or\nto the whole PostgreSQL installation.)\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If the Son therefore shall make you free, ye shall be \n free indeed.\" John 8:36 \n\n\n", "msg_date": "Sun, 02 Apr 2000 07:07:46 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about initdb -t" }, { "msg_contents": "Oliver Elphick writes:\n\n> The man page for initdb says that the -t option will overwrite the template1\n> database with the up-to-date structure and suggests that this is the way to\n> upgrade to a newer release\n\nI believe it's wrong. The only thing -t does is fill up your template1\ndatabase with good data, say if you acidentally deleted all records from a\nsystem table. It is not very useful generally, I think.\n\n> How does this relate to pg_upgrade?\n\nIt doesn't. 
I have never used pg_upgrade but from looking at it I believe\nwhat it does is create a new database schema (using pg_dumpall\ninformation) and then instead of using COPY to get your data back in it\nmerely moves over the old on disk files for each table. The only thing\nthis buys you is time, it doesn't work around the various pg_dump\ndeficiencies or having to shut down the database, etc.\n\n> Is it necessary to do a pg_dump?\n\nNot unless you have something interesting in template1 you'd like to\npreserve.\n\n> If not, is there a danger of new oids in template1 overwriting old\n> oids in user data?\n\nNo.\n\n> (I am not clear whether oids are unique to a database or to the whole\n> PostgreSQL installation.)\n\ninstallation\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 2 Apr 2000 23:18:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about initdb -t" } ]
[ { "msg_contents": "I have one bug that I simply cannot track down. Both the attached files are\nmostly identical except that m.pgc calls a function before selecting data.\n\nWhat happens is that both give an identical pointer to ECPGdo but when\ncreate_statement in libecpg reads the data from the stack it only is correct\nfor n.pgc not for m.pgc.\n\nI tried reproducing this bug with small C sample but failed. That one works\nwell. But with these two programs it's completely reproducable.\n\nBefore I dig into it even more could anyone with a different system and a\ndifferent C compiler please try this. Just compile and link with the latest\necpg from CVS. Can be run against 6.5.3 backend. You just have to make sure\nthe database exists.\n\nThanks.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!", "msg_date": "Sun, 2 Apr 2000 10:49:39 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Need help!" } ]
[ { "msg_contents": "> Hello!\n> \n> On Mon, 27 Mar 2000, Bruce Momjian wrote:\n> > > Sorry for bothering you again. The thing is fixed in beta3\n> > > distribution of March 24, but in snapshot of March 26 it is again tar file,\n> > > not a directory. Seems there are two different mechanisms fot betas and\n> > > snapshots. Should I be afraid of snapshots or just use betas?\n> > > \n> > \n> > Seems Tom has found the cause. Now we need Marc to fix it with his\n> > permissions on the directory.\n> \n> While we are at this, could I ask you to do some renaming in the CVS\n> repository? I did some mistakes, my pardons, and now, when people send more\n> and more locale tests, these errors become more important.\n> \n> In the directory src/test/locale two directories should be renamed:\n> ISO8859-7 should be named gr_GR.ISO8859-7\n> de_DE.ISO-8859-1 should be renamed to de_DE.ISO8859-1\n> \n> What about symlinks there? Can you add symlink ru_RU.KOI8-R pointing to\n> the directory koi8-r? I don't want to rename directory koi8-r 'cause there\n> is koi8-to-win1251 that contains the multibyte cyrillic tests.\n\nGee, it is pretty late in beta to be doing this. Comments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Apr 2000 10:18:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: src/test/locale/de_DE.ISO-8859-1" } ]
[ { "msg_contents": "Hello!\n\nWhen running multiple concurrent backends on my benchmark database, I see\nconsistent crashing when running with 8 clients, and sporadic crashes when\nrunning with 4. After the crash, at least one index is broken, so if I run\nit again, it will crash on that. When I rebuild my indexes, I get back to\nthe same crash.\nIt dies on the same query every time (but only when I have many concurrent\nbackends - I can run it \"alone\" from psql without any problem). \n\nMy platform is Linux 2.2.14 running on a Dual Pentium-III 550MHz with 384Mb\nRAM.\n\nAn example gdb backtrace:\n\n#0 ExecEvalVar (variable=0x8239910, econtext=0x8239fe0, isNull=0x823aac9\n\"\")\n at execQual.c:275\n#1 0x80a31ab in ExecEvalExpr (expression=0x8239910, econtext=0x8239fe0,\n isNull=0x823aac9 \"\", isDone=0xbfffe59f \"\\001\\020\\022\") at\nexecQual.c:1203\n#2 0x80a2d07 in ExecEvalFuncArgs (fcache=0x823ab58, econtext=0x8239fe0,\n argList=0x82398f8, argV=0xbfffe5a0, argIsDone=0xbfffe59f \"\\001\\020\\022\")\n at execQual.c:634\n#3 0x80a2dbb in ExecMakeFunctionResult (node=0x82398a8,\narguments=0x82398f8,\n econtext=0x8239fe0, isNull=0xbfffe67e \"\",\n isDone=0xbfffe61f\n\"\\bP���\\\"2\\n\\b\\200\\230#\\b�\\237#\\b~���`\\236\\023\\bP\\231#\\b�\\237#\\b`����\\t\\017\\\nb\") at execQual.c:710\n#4 0x80a2f2d in ExecEvalOper (opClause=0x8239880, econtext=0x8239fe0,\n isNull=0xbfffe67e \"\") at execQual.c:896\n#5 0x80a3222 in ExecEvalExpr (expression=0x8239880, econtext=0x8239fe0,\n isNull=0xbfffe67e \"\", isDone=0xbfffe67f\n\"\\001����no\\n\\bP\\231#\\b�\\237#\\b\")\n at execQual.c:1238\n#6 0x80a32ee in ExecQual (qual=0x8239950, econtext=0x8239fe0,\n resultForNull=0 '\\000') at execQual.c:1365\n#7 0x80a6f6e in IndexNext (node=0x8239320) at nodeIndexscan.c:142\n#8 0x80a3741 in ExecScan (node=0x8239320, accessMtd=0x80a6e80 <IndexNext>)\n at execScan.c:103\n#9 0x80a7123 in ExecIndexScan (node=0x8239320) at nodeIndexscan.c:287\n#10 0x80a1ec9 in ExecProcNode (node=0x8239320, 
parent=0x8238be0)\n at execProcnode.c:272\n#11 0x80a85ab in ExecNestLoop (node=0x8238be0, parent=0x8238be0)\n at nodeNestloop.c:192\n#12 0x80a1eda in ExecProcNode (node=0x8238be0, parent=0x8238be0)\n at execProcnode.c:280\n#13 0x80a1c00 in EvalPlanQualNext (estate=0x8235ed0) at execMain.c:1980\n#14 0x80a1bd3 in EvalPlanQual (estate=0x8235ed0, rti=1, tid=0xbfffe8b8)\n at execMain.c:1966\n#15 0x80a13f9 in ExecReplace (slot=0x82364a0, tupleid=0xbfffe928,\n estate=0x8235ed0) at execMain.c:1535\n#16 0x80a1071 in ExecutePlan (estate=0x8235ed0, plan=0x8235bb8,\n operation=CMD_UPDATE, offsetTuples=0, numberTuples=0,\n direction=ForwardScanDirection, destfunc=0x8237ad8) at execMain.c:1202\n#17 0x80a060e in ExecutorRun (queryDesc=0x8236028, estate=0x8235ed0,\n feature=3, limoffset=0x0, limcount=0x0) at execMain.c:325\n#18 0x80fdbb5 in ProcessQueryDesc (queryDesc=0x8236028, limoffset=0x0,\n limcount=0x0) at pquery.c:310\n#19 0x80fdc41 in ProcessQuery (parsetree=0x82282a8, plan=0x8235bb8,\n dest=Remote) at pquery.c:353\n#20 0x80fc4cc in pg_exec_query_dest (\n query_string=0x81ab248 \"UPDATE items SET\nitemsinstock=itemsinstock-order_items.amount WHERE\nitems.itemid=order_items.itemid AND order_items.orderid=467134\",\ndest=Remote, aclOverride=0) at postgres.c:719\n#21 0x80fc392 in pg_exec_query (\n query_string=0x81ab248 \"UPDATE items SET\nitemsinstock=itemsinstock-order_items.amount WHERE\nitems.itemid=order_items.itemid AND order_items.orderid=467134\") at\npostgres.c:607\n#22 0x80fd533 in PostgresMain (argc=8, argv=0xbffff050, real_argc=8,\n real_argv=0xbffffa04) at postgres.c:1642\n#23 0x80e5f42 in DoBackend (port=0x81d3d80) at postmaster.c:1953\n#24 0x80e5b1a in BackendStartup (port=0x81d3d80) at postmaster.c:1722\n#25 0x80e4ce9 in ServerLoop () at postmaster.c:994\n#26 0x80e46be in PostmasterMain (argc=8, argv=0xbffffa04) at\npostmaster.c:700\n#27 0x80b1dfb in main (argc=8, argv=0xbffffa04) at main.c:93\n\n\n\nThe line of crash is:\n275 heapTuple = slot->val;\n\nAnd 
this is because:\n(gdb) print slot\n$1 = (TupleTableSlot *) 0x0\n\nInput to the switch() statement used to set slot is:\n(gdb) print variable->varno\n$3 = 65001\n(gdb) print econtext->ecxt_scantuple\n$5 = (TupleTableSlot *) 0x8238a38\n(gdb) print econtext->ecxt_innertuple\n$6 = (TupleTableSlot *) 0x0\n(gdb) print econtext->ecxt_outertuple\n$7 = (TupleTableSlot *) 0x0\n\nSeems it wants to use the ecxt_outertuple, but it's NULL. That's about as\nfar as I can get :-)\n\n\n\nTable definitions used:\nCREATE TABLE items (\n itemid int not null,\n description varchar(128) not null,\n price int,\n itemsinstock int,\n supplier int not null\n);\nCREATE UNIQUE INDEX items_id_idx ON items (itemid);\nCREATE INDEX items_desc_idx ON items (description);\nCREATE INDEX items_supp_idx ON items (supplier);\nCREATE TABLE order_items (\n orderid int not null,\n itemid int not null,\n amount int not null\n);\nCREATE UNIQUE INDEX order_items_both_idx ON order_items (orderid, itemid);\nCREATE INDEX order_items_order_idx ON order_items (orderid);\nCREATE INDEX order_items_item_idx ON order_items (itemid);\n\n\nItems has ~ 10,000 rows, order_items has ~22 million.\n\n\n//Magnus\n", "msg_date": "Sun, 2 Apr 2000 16:32:29 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Crash on UPDATE in 7.0beta3" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> When running multiple concurrent backends on my benchmark database, I see\n> consistent crashing when running with 8 clients, and sporadic crashes when\n> running with 4. After the crash, at least one index is broken, so if I run\n> it again, it will crash on that. When I rebuild my indexes, I get back to\n> the same crash.\n> It dies on the same query every time (but only when I have many concurrent\n> backends - I can run it \"alone\" from psql without any problem). 
\n\nWhat are the other backends doing?\n\nAfter chasing this logic a little bit, my suspicion is focused on\nExecIndexReScan at about nodeIndexscan.c:342. If an inner indexscan\nis restarted in the context of EvalPlanQual (which, in fact, is exactly\nwhere we are according to the backtrace) then this code returns without\ndoing much, and in particular without setting the indexscan node's\ncs_ExprContext->ecxt_outertuple from the outer plan's value. Perhaps\nthe next three lines ought to happen before testing for the PlanQual\ncase, instead of after (Vadim, what do you think?). But I don't\nunderstand why having other backends running concurrently would affect\nthis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 13:06:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash on UPDATE in 7.0beta3 " }, { "msg_contents": "I said:\n> ... But I don't\n> understand why having other backends running concurrently would affect\n> this.\n\nYes I do: this entire path of control is only invoked if ExecReplace\ndiscovers that the tuple it's trying to update is already updated by\na concurrent transaction. Evidently no one's tried running concurrent\nUPDATEs where the updates use a nestloop+inner indexscan join plan,\nbecause this path is certain to fail in that case.\n\nMagnus, try swapping the code segments in ExecIndexReScan()\n(executor/nodeIndexscan.c:341 ff) to become\n\n /* it's possible in subselects */\n if (exprCtxt == NULL)\n exprCtxt = node->scan.scanstate->cstate.cs_ExprContext;\n\n node->scan.scanstate->cstate.cs_ExprContext->ecxt_outertuple = exprCtxt->ecxt_outertuple;\n\n /* If this is re-scanning of PlanQual ... 
*/\n if (estate->es_evTuple != NULL &&\n estate->es_evTuple[node->scan.scanrelid - 1] != NULL)\n {\n estate->es_evTupleNull[node->scan.scanrelid - 1] = false;\n return;\n }\n\nand see if that makes things more reliable.\n\nIt looks like nodeTidscan has the same bug...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 13:30:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash on UPDATE in 7.0beta3 " } ]
[ { "msg_contents": "> > WIN9x (Magnus Hagander, v6.4, client side only. Still good?)\n> That should probably read Win32 - it's WinNT/2000 too. And WinNT client\n> libraries compiled using Cygwin will not work on \"native NT\".\n> I've compiled and tested 7.0beta3, looks good (client side only).\n> \n> > WINNT (Daniel Horak already reported for v7.0. Thanks!)\n> That should probably be called WinNT/Cygwin or something, since it's not\n> really Win32/NT.\n> Oh, and has it been tested on Windows 2000? If not, I can probably throw\n> together a test sometime this week.\n\nThanks for the info. I'll update the entries. I can't remember if\nDaniel was testing on W2K or on something else. Daniel?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 16:05:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call for porting reports" } ]
[ { "msg_contents": "> On Sun, 2 Apr 2000, Bruce Momjian wrote:\n> > > While we are at this, could I ask you to do some renaming in the CVS\n> > > repository? I made some mistakes, my apologies, and now, when people send more\n> > > and more locale tests, these errors become more important.\n> > > \n> > > In the directory src/test/locale two directories should be renamed:\n> > > ISO8859-7 should be named gr_GR.ISO8859-7\n> > > de_DE.ISO-8859-1 should be renamed to de_DE.ISO8859-1\n> > > \n> > > What about symlinks there? Can you add a symlink ru_RU.KOI8-R pointing to\n> > > the directory koi8-r? I don't want to rename directory koi8-r 'cause there\n> > > is koi8-to-win1251 that contains the multibyte cyrillic tests.\n> > \n> > Gee, it is pretty late in beta to be doing this. Comments?\n> \n> I am not in a hurry. I just want to understand whether it is at all possible, esp.\n> symlinks. I am not sure how well CVS goes with symlinks.\n> But month after month I'll collect locale tests from people, so at any\n> rate I'll need a more consistent scheme.\n\nNo idea, but I would hope it would work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Apr 2000 12:36:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: src/test/locale/de_DE.ISO-8859-1" } ]
[ { "msg_contents": "> > OK, here is a patch to help fix up trouble with indices and primary\n> > keys in the ODBC interface.\n> From what I remember, not too many things use SQLPrimaryKeys, instead\n> they use SQLStatistics to get the key info along with other more\n> detailed information about an index. Originally, we never even\n> supported SQLPrimaryKeys. Only recently, we added support for it along\n> with SQLForeignKeys. But that was then, this is now. :)\n> And Thomas, is your patch compatible with the other versions of Postgres\n> the odbc driver is supposed to support (it still supports all the way\n> back to 6.2), or do we no longer care about that?\n\nHmm. The patch is not backward compatible. Presumably there is\nprovision in the code to know what version of DBMS to connect to? In\nthat case, I can probably carry along two different queries and a few\ndifferences in handling the OID results.\n\nSuggestions?\n\n> If you think your patch is working, I could check out the cvs code,\n> compile it on windows, and post the dll.\n\nI'll try poking at it some more. Let me know if you have some tips or\nwant to do this yourself (we're trying to get a v7.0 release in the\nnext couple of weeks).\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 21:06:03 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"list index out of range\" in C++ Builder 4" } ]
[ { "msg_contents": "I am aware of at least three non-backward-compatible changes in 7.0,\nie, things that will break existing applications in perhaps non-obvious\nways. I think the \"Release Notes\" document ought to call these out in a\nseparate section, rather than expecting people to examine the detailed\nchange list and intuit what those brief entries mean to them.\n\nThe three I can think of are:\n\n1. If a GROUP BY item matches both an input column name and a\nselect-list label (\"AS\" name), 7.0 assumes the input column is meant.\nThis is compliant with the SQL92 spec. Unfortunately older versions\nmade the opposite choice.\n\n2. SELECT DISTINCT ON syntax has changed --- now need parentheses\naround the item being DISTINCT'ed.\n\n3. User-defined operator names can't end in \"+\" or \"-\" unless they\nalso contain ~ ! @ # % ^ & | ` ? $ or :\n\nAny others?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 17:13:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 release notes should call out incompatible changes more clearly" }, { "msg_contents": "These are all pretty obscure. How do I make them prominent without\nreally scaring people who don't even know what they are?\n\n> I am aware of at least three non-backward-compatible changes in 7.0,\n> ie, things that will break existing applications in perhaps non-obvious\n> ways. I think the \"Release Notes\" document ought to call these out in a\n> separate section, rather than expecting people to examine the detailed\n> change list and intuit what those brief entries mean to them.\n> \n> The three I can think of are:\n> \n> 1. If a GROUP BY item matches both an input column name and a\n> select-list label (\"AS\" name), 7.0 assumes the input column is meant.\n> This is compliant with the SQL92 spec. Unfortunately older versions\n> made the opposite choice.\n> \n> 2. 
SELECT DISTINCT ON syntax has changed --- now need parentheses\n> around the item being DISTINCT'ed.\n> \n> 3. User-defined operator names can't end in \"+\" or \"-\" unless they\n> also contain ~ ! @ # % ^ & | ` ? $ or :\n> \n> Any others?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Apr 2000 17:40:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 release notes should call out incompatible changes\n\tmore clearly" }, { "msg_contents": "on 4/2/00 5:40 PM, Bruce Momjian at [email protected] wrote:\n\n>> 1. If a GROUP BY item matches both an input column name and a\n>> select-list label (\"AS\" name), 7.0 assumes the input column is meant.\n>> This is compliant with the SQL92 spec. Unfortunately older versions\n>> made the opposite choice.\n>> \n>> 2. SELECT DISTINCT ON syntax has changed --- now need parentheses\n>> around the item being DISTINCT'ed.\n>> \n>> 3. User-defined operator names can't end in \"+\" or \"-\" unless they\n>> also contain ~ ! @ # % ^ & | ` ? $ or :\n\nUnless I've missed some earlier discussion of it, the grammar for adding a\nfield to a table has also changed.\n\nIn 6.5, you could do\n\nalter table add (\n id integer\n);\n\nwhereas in 7.0 you must do\n\nalter table add id integer;\n\nand the parens will screw things up.\n\n-Ben\n\n", "msg_date": "Sun, 02 Apr 2000 17:47:25 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 release notes should call out incompatible\n\tchanges more clearly" }, { "msg_contents": "> These are all pretty obscure. 
How do I make them prominent without\n> really scaring people who don't even know what they are?\n\nThese are not obscure if you are using the feature, and we should have\na section (just after the description of new features) which discusses\nporting/upgrade issues. I was planning on putting it in; it will need\nto contain info on \"datetime/timespan\" vs \"timestamp/interval\" also\n(probably the least \"obscure\" upgrade issue in the new release).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 21:54:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 release notes should call out incompatible changes more\n\tclearly" }, { "msg_contents": "Ah, and if anyone ever used CREATE FUNCTION/WITH, the position of the\nWITH clause has changed. Tom, was \"with\" useful enough for anyone to\ncare about this?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 02 Apr 2000 21:59:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 release notes should call out incompatible changes more\n\tclearly" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Ah, and if anyone ever used CREATE FUNCTION/WITH, the position of the\n> WITH clause has changed. Tom, was \"with\" useful enough for anyone to\n> care about this?\n\nI doubt it. The parameters that can be specified in WITH never did\nanything before 7.0 (well, I suppose some of them did back when the\n\"expensive functions\" optimization code worked). 
The only one that\ndoes anything useful now is ISCACHABLE (permits reduction of function\nduring constant-folding), and that functionality is new in 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Apr 2000 18:07:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 release notes should call out incompatiblechanges more\n\tclearly" }, { "msg_contents": "Andrew McMillan <[email protected]> writes:\n>> 2. SELECT DISTINCT ON syntax has changed --- now need parentheses\n>> around the item being DISTINCT'ed.\n\n> I have just upgraded my development machine today and this is proving to\n> be a real pain in the neck as I seem to have used this all over the\n> place :-)\n\n> Can anyone suggest any syntax that I could use in the interim which will\n> be compatible with 7.0 but which will also work with 6.5.3 so that I can\n> minimise the pain of upgrading?\n\nEr ... don't use DISTINCT ON? It's not to be found anywhere in the\nSQL92 specs, so if you want to run your app on a variety of DBMSes,\nthat's your only choice anyway.\n\nIf you are a heavy user of DISTINCT ON, I should think you'd gladly\naccept a little conversion pain for the benefit of being able to do\nDISTINCT ON multiple expressions, instead of only one unadorned\ncolumn name.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 01:32:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 release notes should call out incompatible changes more\n\tclearly" } ]
[ { "msg_contents": "> > > WIN9x (Magnus Hagander, v6.4, client side only. Still good?)\n> > That should probably read Win32 - it's WinNT/2000 too. And \n> WinNT client\n> > libraries compiled using Cygwin will not work on \"native NT\".\n> > I've compiled and tested 7.0beta3, looks good (client side only).\n> > \n> > > WINNT (Daniel Horak already reported for v7.0. Thanks!)\n> > That shuold probably be called WinNT/Cygwin or something, \n\nIt is possible to call it WinNT/Cygwin because there can be a port using\nUWIN or even a \"native\" port.\n\n> since it's not\n> > really Win32/NT.\n> > Oh, and has it been tested on Windows 2000? If not, I can \n> probably throw\n> > together a test sometime this week.\n> \n> Thanks for the info. I'll update the entries. I can't remember if\n> Daniel was testing on W2K or on something else. Daniel?\n\nI have not tested pgsql on W2K, but I am testing Win95 and it looks\npromissing - it is possible to run initdb, start postmaster and only a newly\ncreated backend dies (with segfault) after it receives a connection from a\nclient. And such an error could be solved (I hope ;-). It has no problems\nwith the IPC stuff.\n\n\t\t\tDan\n", "msg_date": "Mon, 3 Apr 2000 13:51:08 +0200 ", "msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Call for porting reports" } ]
[ { "msg_contents": "\nIs this something that is safe to do just before a release like this? If\nso, I'll do it ... if not, someone remind me right afer the release ..\n\nOn Mon, 3 Apr 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > > In the directory src/test/locale two directories should \n> > be renamed:\n> > > ISO8859-7 should be named gr_GR.ISO8859-7\n> > > de_DE.ISO-8859-1 should be renamed to de_DE.ISO8859-1\n> \n> The rename from de_DE.ISO-8859-1 to de_DE.ISO8859-1 sounds reasonable,\n> since that is the correct name, the prior is bogus. The only other wording I\n> know does exist\n> somewhere is de_DE.8859-1 .\n> \n> The reason why this was undetected is probably because most de_DE\n> installations\n> will probably not use --enable-locale.\n> \n> Andreas\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 3 Apr 2000 09:48:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: src/test/locale/de_DE.ISO-8859-1" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Is this something that is safe to do just before a release like this? If\n> so, I'll do it ... if not, someone remind me right afer the release ..\n\nDon't see what's dangerous about it. The absolute worst case,\nif Andreas is mistaken, is that the locale-specific tests stop\nworking for these locales. But I believe what he's asserting\nis that they're broken already ... so how could you be making\nit worse?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 10:14:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: src/test/locale/de_DE.ISO-8859-1 " } ]
[ { "msg_contents": "> I said:\n> > ... But I don't\n> > understand why having other backends running concurrently \n> would affect\n> > this.\n> \n> Yes I do: this entire path of control is only invoked if ExecReplace\n> discovers that the tuple it's trying to update is already updated by\n> a concurrent transaction. Evidently no one's tried running concurrent\n> UPDATEs where the updates use a nestloop+inner indexscan join plan,\n> because this path is certain to fail in that case.\n> \n> Magnus, try swapping the code segments in ExecIndexReScan()\n<snip>\n\nLooks much better - at least it doesn't crash. Instead, I get the messages\nbelow. I don't know if this is because of the same thing, though - since it\nruns into areas I never reached before. But these messages do *not* show up\nif I run with fewer backends (or when I run with the old code - crash).\n\n//Magnus\n\n\nNOTICE: Buffer Leak: [3205] (freeNext=-3, freePrev=-3,\nrelname=order_items_order_idx, blockNum=13532, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [3214] (freeNext=-3, freePrev=-3, relname=order_items,\nblockNum=39804, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [5110] (freeNext=-3, freePrev=-3,\nrelname=order_items_order_idx, blockNum=29437, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [5117] (freeNext=-3, freePrev=-3, relname=order_items,\nblockNum=86602, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [13184] (freeNext=-3, freePrev=-3,\nrelname=order_items_order_idx, blockNum=2115, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [13191] (freeNext=-3, freePrev=-3,\nrelname=order_items, blockNum=6214, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [4248] (freeNext=-3, freePrev=-3,\nrelname=order_items_order_idx, blockNum=29583, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [4443] (freeNext=-3, freePrev=-3, relname=order_items,\nblockNum=87032, flags=0x4, refcount=1 1)\n", "msg_date": "Mon, 3 Apr 2000 15:43:03 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, 
"msg_subject": "RE: Crash on UPDATE in 7.0beta3 " } ]
[ { "msg_contents": "\nDone, please test to confirm ...\n\n\nOn Mon, 3 Apr 2000, Oleg Broytmann wrote:\n\n> On Mon, 3 Apr 2000, The Hermit Hacker wrote:\n> > Is this something that is safe to do just before a release like this?\n> \n> > > > > In the directory src/test/locale two directories should \n> > > > be renamed:\n> > > > > ISO8859-7 should be named gr_GR.ISO8859-7\n> > > > > de_DE.ISO-8859-1 should be renamed to de_DE.ISO8859-1\n> \n> I don't see any problem here. Locale tests are completely separated from\n> backend; changes in the tests will not affect postgres in any way.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2.1/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 3 Apr 2000 11:31:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: src/test/locale/de_DE.ISO-8859-1" } ]
[ { "msg_contents": "Thank you all very much for the help. The post really solved my problem.\nYour help is greatly appreciated.\n\nWenjin Zheng\nBioinformatic Analyst\nBiosource Technologies, Inc.\n3333 Vaca Valley Parkway\nVacaville, CA 95688\n(707)469-2353\nemail: [email protected] \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Thursday, March 30, 2000 8:11 AM\nTo: Don Baccus\nCc: Wenjin Zheng; [email protected]\nSubject: Re: [HACKERS] slow join on postgresql6.5 \n\n\nDon Baccus <[email protected]> writes:\n> This is an example where commercial systems that have indices\n> synchronized with data such that queries referencing only the\n> fields in indices can win big vs. PG in SOME (not all) cases.\n> In particular, when the indices are to a table that has a bunch\n> of other, perhaps long, columns. PG has to read the table and\n> drag all that dead weight around to do RI referential checking\n> and semantic actions.\n\nKeep in mind, though, that once we have TOAST the long columns are\nlikely to get pushed out to a secondary table, so that the amount\nof data you have to read is reduced (as long as you don't touch\nany of the long columns, of course).\n\nThe main reason that Postgres indexes can't be used without also\nconsulting the main table is that we do not store transaction status\ninformation in index entries, only in real tuples. After finding\nan index entry we must still consult the referenced tuple to see\nif it's been deleted, or even committed yet. I believe this is a\npretty good tradeoff. 
If we kept a copy of the status info in the\nindex, then UPDATE and DELETE would get hugely slower and more\ncomplex, since they'd have to be able to find and mark all the\nindex entries pointing at a tuple as well as the tuple itself.\nThe extra info would also increase the size of index entries,\nwhich is bad because the point of an index is that reading it\ntakes less disk traffic than reading the underlying table.\n(BTW, to return to the original thread, one of the biggest reasons\nthat indexes with many columns are a loser is that they're so big.)\n\nI suppose that keeping tuple status in index entries could be a win\non nearly-read-only tables, but I think that on average it'd be\na performance loser.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 3 Apr 2000 08:56:27 -0700 ", "msg_from": "Wenjin Zheng <[email protected]>", "msg_from_op": true, "msg_subject": "RE: slow join on postgresql6.5 " } ]
[ { "msg_contents": "> Ed Loehr wrote:\n>\n> > What would persuasive numbers look like?\n> >\n> > As a novice, I'd guess key questions would seem to be...\n> >\n> > How often is a query run in which the results are identical to previous\n> > invocations of that query?\n> >\n> > Typically send/recv rates vs. typical query planning/exec time?\n>\n> So wouldn't you get most of what you want if you could store a query plan?\n\n This should wait until after the proposed querytree overhaul\n we have in mind. I already discussed it with Tom Lane. The\n idea goes like this:\n\n After the overhaul, the rewriter is a very simple and fast\n step. So we could hook into the rewriter, who builds for\n EVERY query a kind of key based on the nodes, relations and\n functions that appear in the querytree.\n\n These keys could be managed in a shared LRU table, and if the\n same key appears a number of times (0-n), its entire\n querytree + plan (after planning) will be saved into the\n shared mem. At a subsequent occurrence, the querycache will\n look closer at the two trees, to see if they are really\n identical WRT all the nodes. If only constant values have\n changed, the already known plan could be reused.\n\n Postmaster startup options for tuning that come to mind\n then are querycache memsize, minimum # of appearances before\n caching, maximum lifetime or # usage of a plan and the like.\n So setting the memsize to zero will completely disable it and\n fall back to current behavior.\n\n After that, the entire parsing is still done for every query\n (so application level controlled query caching is still\n another thing to care for). We would only be able to skip the\n planner/optimizer step. The question therefore is how much of\n the entire processing time for a query can be saved if\n replacing this step by some shared memory overhead. I'm not\n sure if this is worth the entire effort at all, and we can\n only judge after the querytree overhaul is done. 
Then again,\n improving the query optimizer directly, so it's able to make\n better join order decisions faster, might be the way to go.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 3 Apr 2000 19:07:07 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: caching query results" }, { "msg_contents": "\n\nOn Mon, 3 Apr 2000, Jan Wieck wrote:\n\n> This should wait until after the proposed querytree overhaul\n> we have in mind. I already discussed it with Tom Lane. The\n> idea goes like this:\n\n This is a very interesting discussion for me; I have prepared code for \n7.1? with PREPARE/EXECUTE commands and SPI changes for a query cache. These\nfeatures allow users to define a cache entry (via SPI or PREPARE). But this was \nalready discussed before. The validity of plans is a problem.\n\n> After the overhaul, the rewriter is a very simple and fast\n> step. So we could hook into the rewriter, who builds for\n> EVERY query a kind of key based on the nodes, relations and\n> functions that appear in the querytree.\n\nIt is a good idea. What exactly is a key? If I understand correctly, this key\nis for query identification only. Right?\n\n> These keys could be managed in a shared LRU table, and if the\n\nMy current code is based on a HASH table with keys, and the query & plan are \nsaved in a MemoryContext created specially for the plan (which is good,\nfor example, for SPI_freeplan()). \n\n> same key appears a number of times (0-n), its entire\n> querytree + plan (after planning) will be saved into the\n> shared mem. \n\nHere I do not understand. Why count the number of times here? \n\n> At a subsequent occurrence, the querycache will\n> look closer at the two trees, to see if they are really\n> identical WRT all the nodes. 
If only constant values have\n> changed, the already known plan could be reused.\n\nIMHO users can use PREPARE / EXECUTE for the same query. The suggested idea is \nreally good if this query cache will be in shared memory and more backends \ncan use it.\n\nGood. It is a solution for a 'known query' and allows it to skip some steps in the\nquery path. But we still do not have any idea about cached plan validity. What\nif a user changes the oid of any operator, drops a column (etc.)? \n\nOr did I overlook anything?\n\n> We would only be able to skip the\n> planner/optimizer step. \n\nBy contrast, PREPARE/EXECUTE allows skipping everything in the query path (only\nthe executor is called). But it works for a user-defined query, not for every\nquery.\n\n> The question therefore is how much of\n> the entire processing time for a query can be saved if\n> replacing this step by some shared memory overhead. I'm not\n> sure if this is worth the entire effort at all, and we can\n> only judge after the querytree overhaul is done. Then again,\n> improving the query optimizer directly, so it's able to make\n> better join order decisions faster, might be the way to go.\n\nIs it really certain that this will be faster? (It must create a key for the nodes, \nsearch for the same query in a table (cache), copy the new query & plan to the cache \n..etc.)\n\n\t\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Tue, 4 Apr 2000 13:55:39 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: caching query results" }, { "msg_contents": "Karel Zak wrote:\n\n> On Mon, 3 Apr 2000, Jan Wieck wrote:\n>\n> It is a good idea. What exactly is a key? If I understand correctly, this key\n> is for query identification only. Right?\n\n Right. 
Imagine a querytree (after overhaul) that looks like\n this:\n\n                +------+\n                | SORT |\n                +------+\n                    ^\n                    |\n     +-----------------------------+\n     | JOIN                        |\n     | atts: rel1.att1, rel2.att2  |\n     | qual: rel1.att2 = rel2.att1 |\n     +-----------------------------+\n        ^                     ^\n        |                     |\n +------------------+  +------------------+\n | SCAN             |  | SCAN             |\n | rel: rel1        |  | rel: rel2        |\n | atts: att1, att2 |  | atts: att1, att2 |\n +------------------+  +------------------+\n\n which is a node structure describing a query of:\n\n SELECT rel1.att1, rel2.att2 FROM rel1, rel2\n WHERE rel1.att2 = rel2.att1;\n\n The \"key\" identifying this querytree now could look like\n\n SORT(JOIN(1.1,2.2;SCAN(78991;1,2),SCAN(78995;1,2);))\n\n 78991 and 78995 are the OIDs of rel1 and rel2. So the key is\n a very simplified description of what the query does, and\n maybe the qualification should be included too. But it's\n enough to find a few candidates to look at more closely on the node\n level out of hundreds of cached plans.\n\n> > These keys could be managed in a shared LRU table, and if the\n>\n> My current code is based on a HASH table with keys, and the query & plan are\n> saved in a MemoryContext created specially for the plan (which is good,\n> for example, for SPI_freeplan()).\n\n IIRC our hash table code insists on using global, per backend\n memory. I thought about managing the entire querycache with\n a new type of memory context, using different routines for\n palloc()/pfree(), working in a shared memory area only and\n eventually freeing longest unused plans until allocation\n fits. Let's see if using hash tables here would be easy or\n not.\n\n> > same key appears a number of times (0-n), its entire\n> > querytree + plan (after planning) will be saved into the\n> > shared mem.\n>\n> Here I do not understand. Why count the number of times here?\n\n There's not that big of a win if you do all the shared memory\n overhead for any query at its first occurrence. Such a\n generic query cache only makes sense for queries that occur\n often. 
So at its first to n-th occurrence we only count by\n key and after we know that it's one of these again'n'again\n thingies, we pay the cache overhead.\n\n Also I think, keeping the number of exclusive cache locks\n (for writing) as small as possible would be a good idea WRT\n concurrency.\n\n> IMHO users can use PREPARE / EXECUTE for the same query. The suggested idea is\n> really good if this query cache will be in shared memory and more backends\n> can use it.\n\n Exactly that's the idea. And since the postmaster will hold\n the shared memory as it does for the block and syscache,\n it'll survive even times of no DB activity.\n\n> Good. It is a solution for a 'known query' and allows it to skip some steps in the\n> query path. But we still do not have any idea about cached plan validity. What\n> if a user changes the oid of any operator, drops a column (etc.)?\n\n That's why the key is only good to find \"candidates\". The\n caching has to look very closely at the nodes in the tree and\n maybe compare down to pg_attribute oid's etc. to decide if\n it's really the same query or not.\n\n> Is it really certain that this will be faster? (It must create a key for the nodes,\n> search for the same query in a table (cache), copy the new query & plan to the cache\n> ..etc.)\n\n Only some timing code put into backends in various real world\n databases can tell how much of the entire processing time is\n spent in the optimizer.\n\n And I'd not be surprised if most of the time is already spent\n during the parse step, which we cannot skip by this\n technique.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 4 Apr 2000 16:27:37 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: caching query results" }, { "msg_contents": "\nOn Tue, 4 Apr 2000, Jan Wieck wrote:\n\n> Right. Imagine a querytree (after overhaul) that looks like\n> this:\n> \n> +------+\n> | SORT |\n> +------+\n> ^\n> |\n> +-----------------------------+\n> | JOIN |\n> | atts: rel1.att1, rel2.att2 |\n> | qual: rel1.att2 = rel2.att1 |\n> +-----------------------------+\n> ^ ^\n> | |\n> +------------------+ +------------------+\n> | SCAN | | SCAN |\n> | rel: rel1 | | rel: rel2 |\n> | atts: att1, att2 | | atts: att1, att2 |\n> +------------------+ +------------------+\n> \n> which is a node structure describing a query of:\n> \n> SELECT rel1.att1, rel2.att2 FROM rel1, rel2\n> WHERE rel1.att2 = rel2.att1;\n> \n> The \"key\" identifying this querytree now could look like\n> \n> SORT(JOIN(1.1,2.2;SCAN(78991;1,2),SCAN(78995;1,2);))\n\n The nice picture. Thanks, I undestend you now. A question is where create \nthis key - create a specific function that look at to querytree and return \nkey or calculate it during statement transformation (analyze.c ..etc.). \nOr is any other idea?\n\n> > > These keys could be managed in a shared LRU table, and if the\n> >\n> > My current code is based on HASH table with keys and query&plan is\n> > saved in special for a plan created MemoryContext (it is good for\n> > a example SPI_freeplan()).\n\nI thought about it, and what for SPI and PREPARE/EXECUTE query cache \nuse shared memory too? I'm vote for one query cache in postgresql. 
IMHO\nit is not good to create a specific cache for SPI_saveplan()+PREPARE and a second\none for your suggested query cache.\n\nIf plans saved via SPI (under a defined key - the 'by_key' interface) will be shared\namong all backends, a lot of features will be faster (FK, PLangs etc.) and\nshared plans cached via PREPARE will persist across connections.\n(Some web developers will be happy :-) \n\nBut I am not sure about this...\n \n> IIRC our hash table code insists on using global, per backend\n> memory. I thought about managing the entire querycache with\n> a new type of memory context, using different routines for\n> palloc()/pfree(), working in a shared memory area only and\n> eventually freeing longest unused plans until allocation\n> fits. Let's see if using hash tables here would be easy or\n> not.\n\nI looked at the current shmem routines - creating a specific space and hash \ntable for a query cache is not a problem; the hash routines are prepared \nfor usage under shmem. The current lock management code is very similar. \nHashing is not a problem here. \n\nA problem is how to store (copy) the query & plan tree to this (shared) memory.\nThe current copyObject() is based on palloc()/pfree() and, as you said,\nwe don't have memory management routines (like palloc()) that work in \nshmem. \n\nIt would be nice to have MemoryContext routines for shmem - for example \nCreateGlobalMemory_in_shmem() and a palloc() that knows how to work with this\nspecific context. Is it a dream?\n\nA solution is to convert the query & plan tree to a string (like pg_rewrite (views))\nand save this string to the cache (but what about the speed of parsing it back?). \nIMHO for this solution we do not need a hash table; we can use a standard system \ntable and a syscache. \n\nBut nicer is the variant with non-string, full plan-tree structs in \nshmem.\n\n> > Good. It is solution for 'known-query' and allow it skip any steps in the\n> > query path. But we still not have any idea for cached plans validity. 
What\n> > if user changes oid for any operator, drop column (etc)?\n> \n> That's why the key is only good to find \"candidates\". The\n> cacheing has to look very close to the nodes in the tree and\n> maybe compare down to pg_attribute oid's etc. to decide if\n> it's really the same query or not.\n\nOK.\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Wed, 5 Apr 2000 13:33:56 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: caching query results" } ]
[ { "msg_contents": "Hi,\n\nI have got another problem. I have a table (tempOne) that has two\nfields, seqid(int) and phredscore(int). They have a one-to-many relationship,\ne.g. one seqid has many phredscores. I would like to get the row that has\nthe max(phredscore). So I first created a view as follows:\n\"create view maxphred as select seqid, max(phredscore) as phredscore from\ntempOne group by seqid;\"\n\nThen I try to get the rows corresponding to the top phredscore. I did\nthis:\n\n\"select tempOne.* from tempOne, maxphred where tempOne.seqid=maxphred.seqid\nand tempOne.phredscore=maxphred.phredscore;\"\n\nI got some weird stuff back which obviously is wrong. However, if I create a\ntable maxphred rather than a view, I get the correct result. There might be\nsomething missing for the view that I did not know of. Does anyone know\nwhy my query with the view did not work?\n\nYour help will be greatly appreciated.\n\nWenjin Zheng\nBioinformatic Analyst\nBiosource Technologies, Inc.\n3333 Vaca Valley Parkway\nVacaville, CA 95688\n(707)469-2353\nemail: [email protected] \n\n", "msg_date": "Mon, 3 Apr 2000 10:50:29 -0700 ", "msg_from": "Wenjin Zheng <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with view" }, { "msg_contents": "> I have got another problem. I have a table(table tempOne) that has two\n> fields, seqid(int) and phredscore(int). They are one to many relationship,\n> e.g. one seqid has many phredscore. I would like to get the row that has\n> the max(phredscore). So I first created a view as follow:\n> \"create view maxphred as select seqid, max(phredscore) as phredscore from\n> tempOne group by seqid;\"\n\n Views are known to have severe problems with aggregates and\n grouping. 
And once again these are problems we want to\n tackle with the parse-/querytree overhaul.\n\n Oh man - this is one of the most important things I see.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 3 Apr 2000 23:10:36 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Problem with view" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Views are known to have severe problems with aggregates and\n> grouping. And once again these are problems we want to\n> tackle with the parse-/querytree overhaul.\n\n> Oh man - this is one of the most important things I see.\n\nYup. Outer joins are waiting on that work, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 17:42:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with view " }, { "msg_contents": "At 05:42 PM 4/3/00 -0400, Tom Lane wrote:\n>[email protected] (Jan Wieck) writes:\n>> Views are known to have severe problems with aggregates and\n>> grouping. And once again these are problems we want to\n>> tackle with the parse-/querytree overhaul.\n>\n>> Oh man - this is one of the most important things I see.\n>\n>Yup. Outer joins are waiting on that work, too.\n\nAre you saying outer joins won't happen until the overhaul? 
Does\nthis mean they're not happening in 7.1 or that 7.1 will just be\nin the far distance?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 03 Apr 2000 17:43:51 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with view " }, { "msg_contents": "Don Baccus <[email protected]> writes:\n>> Yup. Outer joins are waiting on that work, too.\n\n> Are you saying outer joins won't happen until the overhaul?\n\nRight.\n\n> Does\n> this mean they're not happening in 7.1 or that 7.1 will just be\n> in the far distance?\n\n7.1 isn't happening next week, no. I'm guessing maybe late summer.\n\nI don't think we'll release 7.1 until the overhaul is done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 21:27:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with view " } ]
[ { "msg_contents": "If you run 6.5 pg_dump against a 7.0 database, or vice versa,\nyou get very obscure error messages:\n\tparseNumericArray: too many numbers\n\tgetInherits(): SELECT failed. Explanation from backend: 'ERROR: Attribute 'inhrel' not found\n(Quick, guess which is which ... you'll probably guess wrong.)\n\nIt's too late to do anything about the behavior of 6.5 pg_dump,\nbut we could change 7.0 and later pg_dump to check the database\nversion at startup and refuse to run if it's not the expected value.\n\nA downside is that a pg_dump might refuse to dump a DB that it actually\nwould work with; that could be a pain in the neck, particularly in\ndevelopment scenarios where you might not have kept the previous\ncompilation of pg_dump lying around. Yet I think I prefer that to the\nrisk of an insidious incompatibility that causes pg_dump to run without\ncomplaint yet generate a bogus dump.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 14:48:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Should pg_dump refuse to run if DB has different version?" }, { "msg_contents": "On Mon, 3 Apr 2000, Tom Lane wrote:\n\n> If you run 6.5 pg_dump against a 7.0 database, or vice versa,\n> you get very obscure error messages:\n> \tparseNumericArray: too many numbers\n> \tgetInherits(): SELECT failed. Explanation from backend: 'ERROR: Attribute 'inhrel' not found\n> (Quick, guess which is which ... you'll probably guess wrong.)\n> \n> It's too late to do anything about the behavior of 6.5 pg_dump,\n> but we could change 7.0 and later pg_dump to check the database\n> version at startup and refuse to run if it's not the expected value.\n> \n> A downside is that a pg_dump might refuse to dump a DB that it actually\n> would work with; that could be a pain in the neck, particularly in\n> development scenarios where you might not have kept the previous\n> compilation of pg_dump lying around. 
Yet I think I prefer that to the\n> risk of an insidious incompatibility that causes pg_dump to run without\n> complaint yet generate a bogus dump.\n> \n> Comments anyone?\n\nSounds reasonable to me ...\n\n\n", "msg_date": "Mon, 3 Apr 2000 16:02:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should pg_dump refuse to run if DB has different\n version?" }, { "msg_contents": "* Tom Lane <[email protected]> [000403 12:18] wrote:\n> If you run 6.5 pg_dump against a 7.0 database, or vice versa,\n> you get very obscure error messages:\n> \tparseNumericArray: too many numbers\n> \tgetInherits(): SELECT failed. Explanation from backend: 'ERROR: Attribute 'inhrel' not found\n> (Quick, guess which is which ... you'll probably guess wrong.)\n> \n> It's too late to do anything about the behavior of 6.5 pg_dump,\n> but we could change 7.0 and later pg_dump to check the database\n> version at startup and refuse to run if it's not the expected value.\n> \n> A downside is that a pg_dump might refuse to dump a DB that it actually\n> would work with; that could be a pain in the neck, particularly in\n> development scenarios where you might not have kept the previous\n> compilation of pg_dump lying around. 
Yet I think I prefer that to the\n> risk of an insidious incompatibility that causes pg_dump to run without\n> complaint yet generate a bogus dump.\n> \n> Comments anyone?\n\nIt's a great idea and later down the road the versioning information\ncould be used to determine if it's actually possible to pg_dump\nwith a certain version, ie the pg_dump program can report its\nversion number to postgresql and postgresql can reply if it thinks\nit's compatible with the version of pg_dump being used.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 3 Apr 2000 12:32:51 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should pg_dump refuse to run if DB has different version?" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> How about a pg_dump \"force\" option to run regardless of potential\n> incompatibilities?\n\nGood idea --- done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 01:23:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should pg_dump refuse to run if DB has different version? " } ]
[ { "msg_contents": "The following code works under 6.5 and doesn't work in 7.0 beta 3. Comments?\n\n\n------------------snip---------------------------------------\ncreate table paths ( pathnum serial, pathname text );\n\n\nCREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS\n '/opt/brupro/pgsql/lib/plpgsql.so' LANGUAGE 'C';\n\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'\n HANDLER plpgsql_call_handler\n LANCOMPILER 'PL/pgSQL';\n \n/**********************************************************\n * This function makes handling paths MUCH faster on inserts:\n **********************************************************/\nCREATE FUNCTION get_path (text) RETURNS integer AS '\n DECLARE\n retval integer;\n BEGIN\n select pathnum into retval from paths where pathname = $1 ;\n if not found then\n insert into paths ( pathname )values ( $1 ) ;\n select pathnum into retval from paths where pathname = $1 ;\n end if ;\n return retval ;\n END;\n' LANGUAGE 'plpgsql';\n\nselect get_path('/etc');\n\n--------------end snip-------------------------------\n\n-- \nEric Lee Green [email protected]\nSoftware Engineer Visit our Web page:\nEnhanced Software Technologies, Inc. http://www.estinc.com/\n(602) 470-1115 voice (602) 470-1116 fax\n", "msg_date": "Mon, 03 Apr 2000 14:46:20 -0700", "msg_from": "Eric Lee Green <[email protected]>", "msg_from_op": true, "msg_subject": "Broken PL/PgSQL for 7.0 beta 3?" }, { "msg_contents": "> The following code works under 6.5 and doesn't work in 7.0 beta 3. Comments?\n\nSymptoms?\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 Apr 2000 01:08:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Broken PL/PgSQL for 7.0 beta 3?" }, { "msg_contents": "Eric Lee Green <[email protected]> writes:\n> The following code works under 6.5 and doesn't work in 7.0 beta 3. Comments?\n\nPlease define \"doesn't work\". 
I get (starting with a virgin paths table)\n\ntpc=# select get_path('/etc');\n get_path\n----------\n 1\n(1 row)\n\ntpc=# select get_path('/etc');\n get_path\n----------\n 1\n(1 row)\n\ntpc=# select get_path('/etcz');\n get_path\n----------\n 2\n(1 row)\n\ntpc=# select get_path('/etcz');\n get_path\n----------\n 2\n(1 row)\n\ntpc=# select get_path('/etc');\n get_path\n----------\n 1\n(1 row)\n\nwhich seems to be the intended behavior.\n\nIt might help to know what platform/compiler you are using, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Apr 2000 21:10:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Broken PL/PgSQL for 7.0 beta 3? " } ]
[ { "msg_contents": "You can mark BSDi 4.01 as totally passing the regression tests.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 09:50:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "BSD/OS regression on 7.0" }, { "msg_contents": "> > You can mark BSDi 4.01 as totally passing the regression tests.\n> \n> Got it. Thanks.\n\nI don't remember who did the 'resultmap' file, but it is an excellent\nidea. \n\nMan, we even have a quality regression testing system.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 10:43:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BSD/OS regression on 7.0" }, { "msg_contents": "> You can mark BSDi 4.01 as totally passing the regression tests.\n\nGot it. 
Thanks.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 Apr 2000 14:45:09 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BSD/OS regression on 7.0" }, { "msg_contents": "> > You can mark BSDi 4.01 as totally passing the regression tests.\n\nDitto for LinuxPPC R4.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Apr 2000 09:16:32 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: BSD/OS regression on 7.0" }, { "msg_contents": "> > > You can mark BSDi 4.01 as totally passing the regression tests.\n> Ditto for LinuxPPC R4.\n\nThanks!\n\nbtw, in previous release you tested, or coordinated testing for,\nseveral platforms including FreeBSD/x86, Cobalt Qube (Linux/MIPS),\nNetBSD/m68k, SunOS-4.x/sparc, and mklinux. Are any of these (or more)\nstill available to you or your Postgres group? The multiplatform\ntesting you did was always very helpful.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 Apr 2000 02:14:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: BSD/OS regression on 7.0" }, { "msg_contents": "\nFreeBSD 4.0 passes the regression tests with a couple of mods to\nresultmap, as committed ...\n\nOn Wed, 5 Apr 2000, Thomas Lockhart wrote:\n\n> > > > You can mark BSDi 4.01 as totally passing the regression tests.\n> > Ditto for LinuxPPC R4.\n> \n> Thanks!\n> \n> btw, in previous release you tested, or coordinated testing for,\n> several platforms including FreeBSD/x86, Cobalt Qube (Linux/MIPS),\n> NetBSD/m68k, SunOS-4.x/sparc, and mklinux. Are any of these (or more)\n> still available to you or your Postgres group? 
The multiplatform\n> testing you did was always very helpful.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 Apr 2000 23:26:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: BSD/OS regression on 7.0" } ]
[ { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> A downside is that a pg_dump might refuse to dump a DB that it actually\n>> would work with;\n\n> Well, the fact remains that 6.5 (and probably earlier) pg_dump doesn't\n> work with 7.0 databases unless you fix the getInherits() function to do\n> different things with different backends. Or what did you mean with \"would\n> actually work\"?\n\nI was just speculating about possible future problems. Given the code\nI committed last night, a 7.0 pg_dump will refuse to run with a 7.1\ndatabase (or vice versa, unless we change the version-checking code\nbefore 7.1 is released). Now maybe that combination would have worked\nanyway --- but we don't know yet.\n\nI did put in a switch to override the version check, so if you know it\nwill work you can use that switch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 10:42:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should pg_dump refuse to run if DB has different version? " } ]
[ { "msg_contents": "\n Hi all!\n\n When I login with correct login and incorrect password I can do all like\nI enter correct password. What's I do wrong?\n\n------------------------------------------------------+-----------------------+\n... One child is not enough, but two are far too many.| FreeBSD\t |\n\t\t\t\t\t\t | The power to serve! |\n\tMikheev Sergey <[email protected]>\t |http://www.FreeBSD.org/|\n\t\t\t\t\t\t +=======================+\n\n\n", "msg_date": "Tue, 4 Apr 2000 18:47:37 +0400 (MSD)", "msg_from": "\"Sergey V. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Permissions" }, { "msg_contents": "\"Sergey V. Mikheev\" <[email protected]> el d�a Tue, 4 Apr 2000 18:47:37 \n+0400 (MSD), escribi�:\n\n> Hi all!\n>\n> When I login with correct login and incorrect password I can do all like\n>I enter correct password. What's I do wrong?\n\nsee pg_hba.conf and look for the password authenticate method.\n\nSergio\n\n", "msg_date": "Tue, 4 Apr 2000 12:09:46 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permissions" } ]
[ { "msg_contents": "\n Hi all!\n\n When I log in with a correct login and an incorrect password, I can do everything\nas if I had entered the correct password. What am I doing wrong?\n\n------------------------------------------------------+-----------------------+\n... One child is not enough, but two are far too many.| FreeBSD\t |\n\t\t\t\t\t\t | The power to serve! |\n\tMikheev Sergey <[email protected]>\t |http://www.FreeBSD.org/|\n\t\t\t\t\t\t +=======================+\n\n\n", "msg_date": "Tue, 4 Apr 2000 18:47:37 +0400 (MSD)", "msg_from": "\"Sergey V. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Permissions" }, { "msg_contents": "\"Sergey V. Mikheev\" <[email protected]> wrote on Tue, 4 Apr 2000 18:47:37 \n+0400 (MSD):\n\n> Hi all!\n>\n> When I login with correct login and incorrect password I can do all like\n>I enter correct password. What's I do wrong?\n\nSee pg_hba.conf and look for the password authentication method.\n\nSergio\n\n", "msg_date": "Tue, 4 Apr 2000 12:09:46 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permissions" } ]
or something along those lines.\n\n There's still something missing in ALTER TABLE. DROP\n CONSTRAINT is one of them, but since your sequencs with\n renaming the old etc. is the safest possibility anyway, it's\n not that high priority.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 4 Apr 2000 17:00:20 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: 7.0 FK trigger question" }, { "msg_contents": ">\n\nThanks for the reply. For the time begin I've solved this by copying every\ntable in the database to a backup table without any constraints, recreating\nthe tables and copying the data back in. I have to be a bit careful with\ndoing it all in the right order, although I think I can solve this by doing\neverything in a transaction as the constraints are only checked at the end of\ntransaction?\n\n> >\n> > So I guess my question really boils down to: is it possible to write a\n> > function that drops a foreign key trigger or re-instates it? This should\n> > really be ALTER TABLE table ALTER COLUMN column (DROP|CREATE)\n> > CONSTRAINT.... or something along those lines.\n>\n> There's still something missing in ALTER TABLE. DROP\n> CONSTRAINT is one of them, but since your sequencs with\n> renaming the old etc. is the safest possibility anyway, it's\n> not that high priority.\n\nOK, I'm definitely not being very bright here, but i cannot get my system to\naccept the alter column commands. An example on the man pages ,ay help a lot\nhere! 
I tried\n\ntest=# create table t (i int4);\nCREATE\ntest=# create table t1 (k int4);\nCREATE\ntest=# alter table t1 alter column k add constraint references t(i);\nERROR: parser: parse error at or near \"add\"\ntest=# alter table t1 alter column k constraint references t(i);\nERROR: parser: parse error at or near \"constraint\"\ntest=# alter table t1 alter k constraint references t(i);\nERROR: parser: parse error at or near \"constraint\"\ntest=# alter table t1 alter column k create constraint references t(i);\nERROR: parser: parse error at or near \"create\"\n\nSo what am I doing wrong?\n\nThanks,\n\nAdriaan\n\n", "msg_date": "Wed, 05 Apr 2000 08:31:33 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 FK trigger question" }, { "msg_contents": "> Thanks for the reply. For the time begin I've solved this by copying every\n> table in the database to a backup table without any constraints, recreating\n> the tables and copying the data back in. I have to be a bit careful with\n> doing it all in the right order, although I think I can solve this by doing\n> everything in a transaction as the constraints are only checked at the end of\n> transaction?\n\n By default, constraints are checked at end of statement.\n Constraints can be specified DEFERRABLE, then you can do SET\n CONSTRAINTS ... DEFERRED which will delay them until COMMIT.\n\n> OK, I'm definitely not being very bright here, but i cannot get my system to\n> accept the alter column commands. An example on the man pages ,ay help a lot\n> here! 
I tried\n>\n> test=# create table t (i int4);\n> CREATE\n> test=# create table t1 (k int4);\n> CREATE\n> test=# alter table t1 alter column k add constraint references t(i);\n> ERROR: parser: parse error at or near \"add\"\n> test=# alter table t1 alter column k constraint references t(i);\n> ERROR: parser: parse error at or near \"constraint\"\n> test=# alter table t1 alter k constraint references t(i);\n> ERROR: parser: parse error at or near \"constraint\"\n> test=# alter table t1 alter column k create constraint references t(i);\n> ERROR: parser: parse error at or near \"create\"\n>\n> So what am I doing wrong?\n\n alter table t1 add constraint chk_k foreign key (k) references t (i);\n\n The referenced column(s) (t.i in your case above) must not be\n a primary key - any combination is accepted. SQL standard\n requires that there is a unique index defined for the\n referenced columns so it is guaranteed that FKs reference to\n exactly ONE row. Actually Postgres doesn't check or force it,\n so you have to take care yourself. For example:\n\n create table t (i integer, j integer);\n create unique index t_pk_idx_1 on t (i, j); -- DON'T FORGET THIS!\n create table t1 (k integer, l integer,\n foreign key (k, l) references t (i, j));\n\n BTW: all existing data is checked at ALTER TABLE time.\n\n And our implementation of FK is based on SQL3. So you can\n specify match type FULL (PARTIAL will be in 7.1), and\n referential actions (ON DELETE CASCADE etc.) too. It is nice\n to define ON UPDATE CASCADE, because if you UPDATE a PK, all\n referencing FKs will silently follow then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 5 Apr 2000 13:50:43 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: 7.0 FK trigger question" } ]
[ { "msg_contents": "Hi,\n\nI got error when running vacuum analyze:\n\ndiscovery=> vacuum analyze;\n ERROR: No one parent tuple was found\ndiscovery=> \ndiscovery=> vacuum analyze hits;\n NOTICE: CreatePortal: portal <vacuum> already exists\n ERROR: No one parent tuple was found\n\npostgres 6.5.3, Linux x86\n\nEverything seems work fine. How dangerous this error ?\nI got it for the first time since project was started about 6 months ago -\nvacuuming runs in background every hour.\n\n\nHmm, now vacuum works. I just restart psql.\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 Apr 2000 18:11:31 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: No one parent tuple was found" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> ERROR: No one parent tuple was found\n> Everything seems work fine. How dangerous this error ?\n\nNot very. This comes out of the VACUUM code that tries to deal with\nmaybe-not-quite-dead tuples (stuff that's been updated and the update\nhas committed, but there is at least one open transaction that's old\nenough that it should see the pre-update tuple state if it decides to\nlook at the table). You can ensure that none of that logic runs if you\nsimply perform the VACUUM with no other transactions open.\n\nI think there are probably still bugs in that area in current\nsources :-( ... it's not exercised enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 20:05:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ERROR: No one parent tuple was found " } ]
[ { "msg_contents": "Hi,\n\nI got an error when running vacuum analyze:\n\ndiscovery=> vacuum analyze;\n ERROR: No one parent tuple was found\ndiscovery=> \ndiscovery=> vacuum analyze hits;\n NOTICE: CreatePortal: portal <vacuum> already exists\n ERROR: No one parent tuple was found\n\npostgres 6.5.3, Linux x86\n\nEverything seems to work fine. How dangerous is this error?\nI got it for the first time since the project was started about 6 months ago -\nvacuuming runs in the background every hour.\n\n\nHmm, now vacuum works. I just restarted psql.\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 Apr 2000 18:11:31 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: No one parent tuple was found" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> ERROR: No one parent tuple was found\n> Everything seems work fine. How dangerous this error ?\n\nNot very. This comes out of the VACUUM code that tries to deal with\nmaybe-not-quite-dead tuples (stuff that's been updated and the update\nhas committed, but there is at least one open transaction that's old\nenough that it should see the pre-update tuple state if it decides to\nlook at the table). You can ensure that none of that logic runs if you\nsimply perform the VACUUM with no other transactions open.\n\nI think there are probably still bugs in that area in current\nsources :-( ... it's not exercised enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 20:05:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ERROR: No one parent tuple was found " } ]
Under 7.0, I *believe* Hiroshi's REINDEX command\n(please correct me someone if I'm wrong) will allow you to\nreconstruct system indexes, but the root problem still exists...\n\nHope that helps, \n\nMike Mascari\n", "msg_date": "Tue, 04 Apr 2000 16:22:56 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes growing continuously" }, { "msg_contents": "> > I know I can fix this by recreating the indexes, but I don't know how to\n> > do it for a system table and if it's safe.\n> > \n> > Any hints ?\n> \n> Unfortunately, this is a bug in PostgreSQL with respect to system\n> indexes. You can safely drop/create user indexes, but not system\n> ones. The only way to reclaim the space used is to dump/reload\n> your database. Under 7.0, I *believe* Hiroshi's REINDEX command\n> (please correct me someone if I'm wrong) will allow you to\n> reconstruct system indexes, but the root problem still exists...\n\npg_upgrade will allow this, without dump/reload of data.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 16:47:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes growing continuously" } ]
[ { "msg_contents": "I have found I can crash the backend with the following queries:\n\n\ttest=> BEGIN WORK;\n\tBEGIN\n\ttest=> DECLARE bigtable_cursor CURSOR FOR\n\ttest-> SELECT * FROM bigtable;\n\tSELECT\n\ttest=> FETCH 3 prior FROM bigtable_cursor;\n\tERROR: parser: parse error at or near \"prior\"\n\ttest=> FETCH prior FROM bigtable_cursor;\n\tpqReadData() -- backend closed the channel unexpectedly.\n\t This probably means the backend terminated abnormally\n\t before or while processing the request.\n\tThe connection to the server was lost. Attempting reset: Succeeded.\n\nHere is the backtrace. Comments?\n\n---------------------------------------------------------------------------\n\n\n#0 0x281d2d65 in kill ()\n#1 0x2821ea5d in abort ()\n#2 0x8146548 in ExcAbort (excP=0x8199294, detail=0, data=0x0, \n message=0x81812ad \"!(MyProc->xmin != 0)\") at excabort.c:27\n#3 0x8146497 in ExcUnCaught (excP=0x8199294, detail=0, data=0x0, \n message=0x81812ad \"!(MyProc->xmin != 0)\") at exc.c:170\n#4 0x81464ef in ExcRaise (excP=0x8199294, detail=0, data=0x0, \n message=0x81812ad \"!(MyProc->xmin != 0)\") at exc.c:187\n#5 0x8145b3a in ExceptionalCondition (\n conditionName=0x81812ad \"!(MyProc->xmin != 0)\", exceptionP=0x8199294, \n detail=0x0, fileName=0x81811a3 \"sinval.c\", lineNumber=362) at assert.c:73\n#6 0x8103730 in GetSnapshotData (serializable=0 '\\000') at sinval.c:362\n#7 0x81501d7 in SetQuerySnapshot () at tqual.c:611\n#8 0x810c329 in pg_exec_query_dest (\n query_string=0x81cd398 \"FETCH prior FROM bigtable_cursor;\\n\", dest=Debug, \n aclOverride=0) at postgres.c:677\n#9 0x810c244 in pg_exec_query (\n query_string=0x81cd398 \"FETCH prior FROM bigtable_cursor;\\n\")\n at postgres.c:607\n#10 0x810d43d in PostgresMain (argc=4, argv=0x8047534, real_argc=4, \n real_argv=0x8047534) at postgres.c:1642\n#11 0x80c07fc in main (argc=4, argv=0x8047534) at main.c:103\n#12 0x8062a9c in __start ()\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | 
(610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 15:12:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "postgres crash on CURSORS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have found I can crash the backend with the following queries:\n> \ttest=> BEGIN WORK;\n> \tBEGIN\n> \ttest=> DECLARE bigtable_cursor CURSOR FOR\ntest-> SELECT * FROM bigtable;\n> \tSELECT\n> \ttest=> FETCH 3 prior FROM bigtable_cursor;\n> \tERROR: parser: parse error at or near \"prior\"\n> \ttest=> FETCH prior FROM bigtable_cursor;\n> \tpqReadData() -- backend closed the channel unexpectedly.\n> \t This probably means the backend terminated abnormally\n> \t before or while processing the request.\n> \tThe connection to the server was lost. Attempting reset: Succeeded.\n\nProblem appears to be due to trying to bull ahead with processing after\nthe current transaction has been aborted by an error. 
The immediate\ncause is these lines in postgres.c:\n\n /*\n * We have to set query SnapShot in the case of FETCH or COPY TO.\n */\n if (nodeTag(querytree->utilityStmt) == T_FetchStmt ||\n (nodeTag(querytree->utilityStmt) == T_CopyStmt && \n ((CopyStmt *)(querytree->utilityStmt))->direction != FROM))\n SetQuerySnapshot();\n\nwhich are executed without having bothered to check for aborted state.\nI think this code should be removed from postgres.c, and the\nSetQuerySnapshot call instead made from the Fetch and Copy arms of the\nswitch statement in ProcessUtility() (utility.c), after doing\nCHECK_IF_ABORTED in each case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 16:06:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres crash on CURSORS " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have found I can crash the backend with the following queries:\n> > \ttest=> BEGIN WORK;\n> > \tBEGIN\n> > \ttest=> DECLARE bigtable_cursor CURSOR FOR\n> test-> SELECT * FROM bigtable;\n> > \tSELECT\n> > \ttest=> FETCH 3 prior FROM bigtable_cursor;\n> > \tERROR: parser: parse error at or near \"prior\"\n> > \ttest=> FETCH prior FROM bigtable_cursor;\n> > \tpqReadData() -- backend closed the channel unexpectedly.\n> > \t This probably means the backend terminated abnormally\n> > \t before or while processing the request.\n> > \tThe connection to the server was lost. Attempting reset: Succeeded.\n> \n> Problem appears to be due to trying to bull ahead with processing after\n> the current transaction has been aborted by an error. 
The immediate\n> cause is these lines in postgres.c:\n> \n> /*\n> * We have to set query SnapShot in the case of FETCH or COPY TO.\n> */\n> if (nodeTag(querytree->utilityStmt) == T_FetchStmt ||\n> (nodeTag(querytree->utilityStmt) == T_CopyStmt && \n> ((CopyStmt *)(querytree->utilityStmt))->direction != FROM))\n> SetQuerySnapshot();\n> \n> which are executed without having bothered to check for aborted state.\n> I think this code should be removed from postgres.c, and the\n> SetQuerySnapshot call instead made from the Fetch and Copy arms of the\n> switch statement in ProcessUtility() (utility.c), after doing\n> CHECK_IF_ABORTED in each case.\n\nYes, I saw this code and got totally confused of how to fix it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 16:15:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres crash on CURSORS" } ]
[ { "msg_contents": "We have this in the CURSOR documentation:\n\n Once all rows are fetched, every other fetch access\n returns no rows.\n\nIs this still true?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 15:26:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "CURSOR after hitting end" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We have this in the CURSOR documentation:\n> Once all rows are fetched, every other fetch access\n> returns no rows.\n\n> Is this still true?\n\nNot if you then move or fetch backwards, I should think...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Apr 2000 15:59:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CURSOR after hitting end " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We have this in the CURSOR documentation:\n> > Once all rows are fetched, every other fetch access\n> > returns no rows.\n> \n> > Is this still true?\n> \n> Not if you then move or fetch backwards, I should think...\n\nNo, it works. I think Tatsuo fixed it. After a FETCH ALL, I did this,\nand it worked:\n\n\n\ttest=> fetch -1 from bigtable_cursor;\n\t customer_id \n\t-------------\n\t 1000\n\t(1 row)\n\t\n\ttest=> fetch -1 from bigtable_cursor;\n\t customer_id \n\t-------------\n\t 999\n\t(1 row)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 16:04:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CURSOR after hitting end" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Bruce Momjian\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > We have this in the CURSOR documentation:\n> > > Once all rows are fetched, every other fetch access\n> > > returns no rows.\n> > \n> > > Is this still true?\n> > \n> > Not if you then move or fetch backwards, I should think...\n> \n> No, it works. I think Tatsuo fixed it. After a FETCH ALL, I did this,\n> and it worked:\n>\n\nThis is true and false.\nFor index scan I fixed it before 6.5 and for sequential scan\nI fixed it before 7.0.\n\nHowever there remains some type of scan that returns no rows\nafter hitting end.\nEspecially for GROUP BY,*fetch backward* doesn't work well \nfundamentally. I have known this but I've never seen bug\nreports for this. It's not so easy to fix this and it wouldn't be\nan effective way to scan base relation again for *GROUP BY*.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 5 Apr 2000 08:59:35 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: CURSOR after hitting end" }, { "msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > > We have this in the CURSOR documentation:\n> > > Once all rows are fetched, every other fetch access\n> > > returns no rows.\n> > \n> > > Is this still true?\n> > \n> > Not if you then move or fetch backwards, I should think...\n> \n> No, it works. I think Tatsuo fixed it. 
After a FETCH ALL, I did this,\n\t\t\t ~~~~~~must be Hiroshi...\n> and it worked:\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Apr 2000 09:16:48 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CURSOR after hitting end" } ]
[ { "msg_contents": "Seems this no longer true, so I removed it from the FETCH manual page:\n\n Once all rows are fetched, every other fetch access\n returns no rows.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 15:28:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Removed from FETCH manual page" } ]
[ { "msg_contents": "Hi,\n\nare there an efficient way to count distinct result from join ?\nI know it's possible in 7.0 to do \n\nselect count(distinct m.msg_id) from messages m, message_section_map ms\nwhere m.msg_id = ms.msg_id;\n\nbut I need it working in production for 6.5.3\nI'd like not to use temp tables as I dont' have any experience\nwith them in production.\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Tue, 4 Apr 2000 22:35:35 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "counting distinct result of join" } ]
[ { "msg_contents": "\n\tI finally managed to catch up with the 7.0 betas (provided beta5\ndoes not come out in the next few hours :). My web site has been updated\nwith the Linux/Alpha patches for 7.0beta4 (see the software section), and\nfor history's sake, I have also listed the Linux/Alpha patches for all\nversion of PostgreSQL that the patches have existed for.\n\tWith these patches applied to the 7.0beta4 tarball, all regression\ntests pass (save for geometry with the harmless off by one in the nth\ndecimal place error). I have also tested it against a few of my own apps\nand while there were a few quirks in conversion (i.e. int() -> int4()),\nnothing major was encountered. I would encourage anyone running pgsql on\nLinux/Alpha to pound on these patches and make sure it works for them as\nsoon as possible. Email me any Linux/Alpha specific problems found.\n\tI also have to say that 7.0 is much faster than 6.5.3. It used to\ntake 10 to 15 minutes to load 125k rows into pgsql, now it takes only 5\nminutes for the same data on the same machine! Good work!\n\tWhen the final version of 7.0 comes out, I will double check the\npatch still work (and make modifications as necessary), update my web\npage, and post a short message here. Beyond that, I have very little time\nto work on pgsql until mid May as I have a few projects that need to be\nfinished ASAP before I can graduate. :) Once I have graduated, I plan to\nput some time in on getting the Linux/Alpha patches cleaned up and\nintergrated into the main source tree, hopefully in time for 7.1. \n\tI will post the 7.0beta4 Linux/Alpha patch to the patches list\nshortly. That is all. TTYL.\n\n\tPS. 
Maybe a note should be put on the pgsql ports page that\npatches are needed for Linux/Alpha an a ref to my web page.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 4 Apr 2000 19:32:48 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Linux/Alpha and Postgres 7.0 Beta Status " }, { "msg_contents": "\n*rofl* I haven't even had a chance to write up the ChangeLog/announcement\nfor beta4 yet :) I just made the tarball itself a few hours ago so that\nit had a chance to propogate through the mirror sites ...\n\n On Tue, 4 Apr 2000, Ryan Kirkpatrick wrote:\n\n> \n> \tI finally managed to catch up with the 7.0 betas (provided beta5\n> does not come out in the next few hours :). My web site has been updated\n> with the Linux/Alpha patches for 7.0beta4 (see the software section), and\n> for history's sake, I have also listed the Linux/Alpha patches for all\n> version of PostgreSQL that the patches have existed for.\n> \tWith these patches applied to the 7.0beta4 tarball, all regression\n> tests pass (save for geometry with the harmless off by one in the nth\n> decimal place error). I have also tested it against a few of my own apps\n> and while there were a few quirks in conversion (i.e. int() -> int4()),\n> nothing major was encountered. I would encourage anyone running pgsql on\n> Linux/Alpha to pound on these patches and make sure it works for them as\n> soon as possible. Email me any Linux/Alpha specific problems found.\n> \tI also have to say that 7.0 is much faster than 6.5.3. 
It used to\n> take 10 to 15 minutes to load 125k rows into pgsql, now it takes only 5\n> minutes for the same data on the same machine! Good work!\n> \tWhen the final version of 7.0 comes out, I will double check the\n> patch still work (and make modifications as necessary), update my web\n> page, and post a short message here. Beyond that, I have very little time\n> to work on pgsql until mid May as I have a few projects that need to be\n> finished ASAP before I can graduate. :) Once I have graduated, I plan to\n> put some time in on getting the Linux/Alpha patches cleaned up and\n> intergrated into the main source tree, hopefully in time for 7.1. \n> \tI will post the 7.0beta4 Linux/Alpha patch to the patches list\n> shortly. That is all. TTYL.\n> \n> \tPS. Maybe a note should be put on the pgsql ports page that\n> patches are needed for Linux/Alpha an a ref to my web page.\n> \n> ---------------------------------------------------------------------------\n> | \"For to me to live is Christ, and to die is gain.\" |\n> | --- Philippians 1:21 (KJV) |\n> ---------------------------------------------------------------------------\n> | Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n> ---------------------------------------------------------------------------\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 Apr 2000 21:54:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and Postgres 7.0 Beta Status " }, { "msg_contents": "On Tue, 4 Apr 2000, The Hermit Hacker wrote:\n\n> *rofl* I haven't even had a chance to write up the ChangeLog/announcement\n> for beta4 yet :) I just made the tarball itself a few hours ago so that\n> it had a chance to propogate through the mirror sites ...\n> \n> On Tue, 4 Apr 2000, Ryan Kirkpatrick wrote:\n> > \tI finally managed to catch up with the 7.0 betas (provided beta5\n> > does not come out in the next few hours :). My web site has been updated\n> > with the Linux/Alpha patches for 7.0beta4 (see the software section), and\n> > for history's sake, I have also listed the Linux/Alpha patches for all\n> > version of PostgreSQL that the patches have existed for.\n\n\tI was wondering why there had been no annoucement on the lists\nyet about it, though I did notice it was date stamped today. Didn't\nrealize I had caught it quite that quick though. :)\n\tI was just cleaning up the patches and web page for beta3 when I\nthought I had better check the ftp site just one more time. Especially\nafter the last two times I have sat down to work on the Linux/Alpha\npatches, before I could finish a new beta would come out... and I it was\ntrue once again! :( Thankfully, my current beta3 patch applied only with a\ntad bit of fuzz to beta4. It was only the beta1 to beta2 transistion that\nwas painful (thanks to int8.c). I guess it is only fair that I got \"ahead\"\nfor once.\n\tGlad I was able to add a bit of humor to your day. 
:)\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 4 Apr 2000 22:20:46 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and Postgres 7.0 Beta Status " }, { "msg_contents": "\n\tJust to let everyone know, I have updated the Linux/Alpha patches\nto work with the newest beta, 7.0beta5. Nothing new to report, beta5\nappears to be working fine on my Alpha box after regression tests and a\ntest of a few of my own apps. The patches are on my web site as usual.\nThat is all. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Sun, 16 Apr 2000 21:48:46 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and Postgres 7.0 Beta Status " } ]
[ { "msg_contents": "> Bruce, I think it is probably best to remove the -O2 from template/aix_42\n> in the meantime (until we find a fix) ? (Same problem with -O, thus remove)\n\nAndreas, can we count AIX as a supported platform for Postgres-7.0?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 Apr 2000 02:18:35 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bug in nabstime.c" } ]
[ { "msg_contents": "> > I agree that I'm not eager to uglify the code that much to avoid a\n> > single-platform compiler bug. Can it be worked around with less-ugly\n> > changes? I'd try changing the --i to i--, for example; and/or swapping\n> > the order of the two initialization assignments. Neither of those would\n> > impair readability noticeably.\n> > \n> > \t\t\tregards, tom lane\n> \n> I found a different patch that will fix the problem. It compiles and the \n> resulting binary passes the regression tests.\n\nHere is the patch. I am inclined not to apply it. While I realizes it\nfixes the problem on your platform, it is one of those bugs I would\nrather pass back to the compiler author than fix it in our code. \n\nComments from others? Is this a popular compiler, or are you the only\none likely to be using that version with that bug?\n\nThe change is i > 0 to 0 < i. Wow, if such a change fixes your compiler\nfor int8's, I really would not trust it for anything.\n\n---------------------------------------------------------------------------\n\n\n\t*** ./src/backend/utils/adt/int8.c.orig Mon Apr 3 13:24:12 2000\n\t--- ./src/backend/utils/adt/int8.c Mon Apr 3 13:28:47 2000\n\t***************\n\t*** 410,416 ****\n\t if (*arg1 < 1)\n\t *result = 0;\n\t else\n\t! for (i = *arg1, *result = 1; i > 0; --i)\n\t *result *= i;\n\t \n\t return result;\n\t--- 410,416 ----\n\t if (*arg1 < 1)\n\t *result = 0;\n\t else\n\t! for (i = *arg1, *result = 1; 0 < i; --i)\n\t *result *= i;\n \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Apr 2000 23:45:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: int8.c compile problem on UnixWare 7.x" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Bruce Momjian\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > I have found I can crash the backend with the following queries:\n> > > \ttest=> BEGIN WORK;\n> > > \tBEGIN\n> > > \ttest=> DECLARE bigtable_cursor CURSOR FOR\n> > test-> SELECT * FROM bigtable;\n> > > \tSELECT\n> > > \ttest=> FETCH 3 prior FROM bigtable_cursor;\n> > > \tERROR: parser: parse error at or near \"prior\"\n> > > \ttest=> FETCH prior FROM bigtable_cursor;\n> > > \tpqReadData() -- backend closed the channel unexpectedly.\n> > > \t This probably means the backend terminated abnormally\n> > > \t before or while processing the request.\n> > > \tThe connection to the server was lost. Attempting reset: Succeeded.\n> > \n> > Problem appears to be due to trying to bull ahead with processing after\n> > the current transaction has been aborted by an error. The immediate\n> > cause is these lines in postgres.c:\n> > \n> > /*\n> > * We have to set query SnapShot in the case of \n> FETCH or COPY TO.\n> > */\n> > if (nodeTag(querytree->utilityStmt) == T_FetchStmt ||\n> > (nodeTag(querytree->utilityStmt) == T_CopyStmt && \n> > ((CopyStmt \n> *)(querytree->utilityStmt))->direction != FROM))\n> > SetQuerySnapshot();\n> > \n> > which are executed without having bothered to check for aborted state.\n> > I think this code should be removed from postgres.c, and the\n> > SetQuerySnapshot call instead made from the Fetch and Copy arms of the\n> > switch statement in ProcessUtility() (utility.c), after doing\n> > CHECK_IF_ABORTED in each case.\n> \n> Yes, I saw this code and got totally confused of how to fix it.\n>\n\nIs it bad to check ABORTED after yyparse() in parser.c ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 5 Apr 2000 13:02:06 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres crash on CURSORS" 
}, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> which are executed without having bothered to check for aborted state.\n>>>> I think this code should be removed from postgres.c, and the\n>>>> SetQuerySnapshot call instead made from the Fetch and Copy arms of the\n>>>> switch statement in ProcessUtility() (utility.c), after doing\n>>>> CHECK_IF_ABORTED in each case.\n\n> Is it bad to check ABORTED after yyparse() in parser.c ?\n\nYes. Try to execute an END (a/k/a ABORT, ROLLBACK, ...)\n\nThe check for abort state has to happen in the appropriate paths of\nexecution, not in the parser. Not all statements should reject on\nabort state.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Apr 2000 00:04:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres crash on CURSORS " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, April 05, 2000 1:04 PM\n\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >>>> which are executed without having bothered to check for\n> aborted state.\n> >>>> I think this code should be removed from postgres.c, and the\n> >>>> SetQuerySnapshot call instead made from the Fetch and Copy\n> arms of the\n> >>>> switch statement in ProcessUtility() (utility.c), after doing\n> >>>> CHECK_IF_ABORTED in each case.\n>\n> > Is it bad to check ABORTED after yyparse() in parser.c ?\n>\n> Yes. Try to execute an END (a/k/a ABORT, ROLLBACK, ...)\n>\n> The check for abort state has to happen in the appropriate paths of\n> execution, not in the parser. 
Not all statements should reject on\n> abort state.\n>\n\nAre there any statements which should be executable on abort state\nexcept ROLLBACK/COMMIT ?\nThe following is a sample patch for parser.c.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\nIndex: parser.c\n===================================================================\nRCS file: /home/cvs/pgcurrent/backend/parser/parser.c,v\nretrieving revision 1.2\ndiff -c -r1.2 parser.c\n*** parser.c 2000/01/26 09:58:32 1.2\n--- parser.c 2000/04/05 03:54:31\n***************\n*** 16,21 ****\n--- 16,24 ----\n #include \"parser/analyze.h\"\n #include \"parser/gramparse.h\"\n #include \"parser/parser.h\"\n+ #include \"nodes/parsenodes.h\"\n+ #include \"access/xact.h\"\n+ #include \"parse.h\"\n\n #if defined(FLEX_SCANNER)\n extern void DeleteBuffer(void);\n***************\n*** 48,53 ****\n--- 51,82 ----\n parser_init(typev, nargs);\n yyresult = yyparse();\n\n+ /* To avoid doing processing within an aborted transaction block. */\n+ if (!yyresult && IsAbortedTransactionBlockState())\n+ {\n+ Node *node = lfirst(parsetree);\n+\n+ if (IsA(node, TransactionStmt))\n+ {\n+ TransactionStmt *stmt=(TransactionStmt *)node;\n+ switch (stmt->command)\n+ {\n+ case ROLLBACK:\n+ case COMMIT:\n+ break;\n+ default:\n+ yyresult = -1;\n+ break;\n+ }\n+ }\n+ else\n+ yyresult = -1;\n+ if (yyresult)\n+ {\n+ elog(NOTICE, \"(transaction already aborted): %s\",\n+ \"queries ignored until END\");\n+ }\n+ }\n #if defined(FLEX_SCANNER)\n DeleteBuffer();\n #endif /* FLEX_SCANNER */\n\n", "msg_date": "Wed, 5 Apr 2000 15:00:19 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres crash on CURSORS " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> The check for abort state has to happen in the appropriate paths of\n>> execution, not in the parser. 
Not all statements should reject on\n>> abort state.\n\n> Are there any statements which should be executable on abort state\n> except ROLLBACK/COMMIT ?\n\nI dunno ... but offhand, I see no really good reason for checking this\nin the parser rather than the way it's done now. Presumably only\nutility statements would be candidates for exemption from abort checks,\nso checking in the switch statement in ProcessUtility makes sense to\nme --- that way the knowledge of the semantics of a given utility\nstatement is localized.\n\n> The following is a sample patch for parser.c.\n\nThe specific patch you propose is definitely inferior to the currently-\ncommitted code, because it does not deal properly with COMMIT/ROLLBACK\nappearing within a list of queries. If we are in abort state and\nthe submitted query string is\n\n\tSELECT foo ; ROLLBACK ; SELECT bar\n\nit seems to me that the correct response is to reject the first select\nand process the second. The currently committed code does so, but\nyour patch would fail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Apr 2000 02:10:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres crash on CURSORS " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> The check for abort state has to happen in the appropriate paths of\n> >> execution, not in the parser. Not all statements should reject on\n> >> abort state.\n> \n> > Are there any statements which should be executable on abort state\n> > except ROLLBACK/COMMIT ?\n> \n> I dunno ... but offhand, I see no really good reason for checking this\n> in the parser rather than the way it's done now. 
Presumably only\n> utility statements would be candidates for exemption from abort checks,\n> so checking in the switch statement in ProcessUtility makes sense to\n> me --- that way the knowledge of the semantics of a given utility\n> statement is localized.\n>\n\nCurrent abort check seems too late.\nFor example,is the following behavior preferable ?\n\n=# begin;\nBEGIN\n=# aaa;\nERROR: parser: parse error at or near \"aaa\"\n=# select * from aaaa;\nERROR: Relation 'aaaa' does not exist\n\t?? existence check ?? Why ??\n\nreindex=# select * from t; -- t is a existent table\nNOTICE: (transaction aborted): queries ignored until END\n*ABORT STATE*\n\n> > The following is a sample patch for parser.c.\n> \n> The specific patch you propose is definitely inferior to the currently-\n> committed code, because it does not deal properly with COMMIT/ROLLBACK\n> appearing within a list of queries. If we are in abort state and\n> the submitted query string is\n> \n> \tSELECT foo ; ROLLBACK ; SELECT bar\n> \n> it seems to me that the correct response is to reject the first select\n> and process the second. The currently committed code does so, but\n> your patch would fail.\n>\n\nIt seems pg_parse_and_plan() returns NIL plan_list and NIL\nquerytree_list in this case.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Wed, 5 Apr 2000 17:05:19 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres crash on CURSORS " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Current abort check seems too late.\n> For example,is the following behavior preferable ?\n\n> =# begin;\n> BEGIN\n> =# aaa;\n> ERROR: parser: parse error at or near \"aaa\"\n> =# select * from aaaa;\n> ERROR: Relation 'aaaa' does not exist\n> \t?? existence check ?? Why ??\n\n> reindex=# select * from t; -- t is a existent table\n> NOTICE: (transaction aborted): queries ignored until END\n> *ABORT STATE*\n\nHmm. 
The error of course appears because we perform parsing and\nrewriting before checking for abort. I am not sure whether the rewriter\ncan introduce begin/commit commands at present, but I would not like to\ndesign the front-end processing on the assumption that it can never do so.\nSo I think we have to run the query that far before we think about\naborting.\n\n>> If we are in abort state and\n>> the submitted query string is\n>> \n>> SELECT foo ; ROLLBACK ; SELECT bar\n>> \n>> it seems to me that the correct response is to reject the first select\n>> and process the second. The currently committed code does so, but\n>> your patch would fail.\n\n> It seems pg_parse_and_plan() returns NIL plan_list and NIL\n> querytree_list in this case.\n\nYou're not looking at current CVS ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Apr 2000 11:04:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres crash on CURSORS " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> >> If we are in abort state and\n> >> the submitted query string is\n> >> \n> >> SELECT foo ; ROLLBACK ; SELECT bar\n> >> \n> >> it seems to me that the correct response is to reject the first select\n> >> and process the second. The currently committed code does so, but\n> >> your patch would fail.\n> \n> > It seems pg_parse_and_plan() returns NIL plan_list and NIL\n> > querytree_list in this case.\n> \n> You're not looking at current CVS ;-)\n>\n\nSorry,I see your change now.\n\nUnfortunately I've never used multiple query and understand\nlittle about it. 
For example, how to know using libpq that the first\nselect was ignored?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 6 Apr 2000 10:37:14 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: postgres crash on CURSORS " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> From: Tom Lane [mailto:[email protected]]\n>>>>> If we are in abort state and\n>>>>> the submitted query string is\n>>>>> \n>>>>> SELECT foo ; ROLLBACK ; SELECT bar\n>>>>> \n>>>>> it seems to me that the correct response is to reject the first select\n>>>>> and process the second.\n\n> Unfortunately I've never used multiple queries and understand\n> little about it. For example, how to know using libpq that the first\n> select was ignored?\n\nIf you use PQexec then you can't really tell, because you'll only get\nback the last command's result. If you use PQsendQuery/PQgetResult\nthen you'll get back multiple PGresults from a multi-query string, and\nyou can examine each one to see if it was executed or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 17:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres crash on CURSORS " } ]
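The PQsendQuery()/PQgetResult() approach Tom describes above can be sketched in C. This is a minimal illustration, not code from the thread: it assumes an already-established connection `conn`, and the helper name `run_multi` is invented for the example.

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Run a multi-statement query string and report the status of each
 * statement individually.  Unlike PQexec(), which returns only the
 * last result, PQgetResult() yields one PGresult per statement, so a
 * statement skipped in an aborted transaction shows up as an error
 * result while later statements (e.g. a ROLLBACK) can still succeed. */
static void run_multi(PGconn *conn, const char *sql)
{
    PGresult *res;

    if (!PQsendQuery(conn, sql))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return;
    }
    while ((res = PQgetResult(conn)) != NULL)   /* NULL => no more results */
    {
        ExecStatusType st = PQresultStatus(res);

        if (st == PGRES_TUPLES_OK || st == PGRES_COMMAND_OK)
            printf("ok: %s\n", PQcmdStatus(res));
        else
            printf("rejected: %s", PQresultErrorMessage(res));
        PQclear(res);
    }
}
```

Called in an aborted transaction with "SELECT foo; ROLLBACK; SELECT bar", the first statement would be reported as rejected and the remaining ones as ok, matching the behavior discussed above.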
[ { "msg_contents": "> Hi Jan, refint error reporting seems to have a bug. The Windows ODBC\n> driver cannot detect that refint machine dropped an error (I think that\n> the same behaviour can be reproduced with the Unix ODBC driver, too). I\n> suspect that the 7.0 refint manager's error string has an extra \\n\n> character (I found a lonely apostrophe at the beginning of a row in the\n> log). beta3 has the same problem. What's your opinion? Regards, Zoltan\n>\n> ---------- Forwarded message ----------\n> [...]\n>\n> BDE says nothing to this. BDE thinks that everything is all right, it changes\n> the table (inserts a row). After making a refresh, one can realize that no\n> changes were made, of course.\n>\n>\n> Could you please help?\n\n I'm in no way an ODBC or BDE guy - anyway.\n\n First of all, 6.5.* didn't had the FOREIGN KEY implementation\n your 7.0 message is coming from. It's completely new stuff.\n\n Second, since an ERROR is correctly reported (violating a RI\n constraint is an ERROR and I cannot see any extra \\n there),\n the bug seems to be in the ODBC or BDE part, not looking at\n the return code but trying to interpret the textual error\n message.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 5 Apr 2000 14:20:10 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] refint doesn't work well with BDE (fwd)" }, { "msg_contents": "On Wed, 5 Apr 2000, Jan Wieck wrote:\n> First of all, 6.5.* didn't had the FOREIGN KEY implementation\n> your 7.0 message is coming from. It's completely new stuff.\nYes, you're right, sorry. 
I used the contrib/spi/refint* stuff in 6.5.2.\n\n> Second, since an ERROR is correctly reported (violating a RI\n> constraint is an ERROR and I cannot see any extra \\n there),\n> the bug seems to be in the ODBC or BDE part, not looking at\n> the return code but trying to interpret the textual error\n> message.\nChecking the ODBC source, I found two places in interfaces/odbc/info.c\nwith check_foreign_key and check_primary_key functions. They are the\nnames of the refint check functions in the \"contrib\" solution in 6.5.*.\nThese places are queries which should be changed a bit, I think. Are these\nchanges enough to make it work, Byron?\n\nThank you again.\n\nRegards, Zoltan\n\n\n", "msg_date": "Wed, 5 Apr 2000 17:17:46 +0200 (CEST)", "msg_from": "Kovacs Zoltan Sandor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refint doesn't work well with BDE" } ]
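Jan's point above — trust the return code, not the text of the error message — can be sketched against the ODBC 2.x API. This is an illustrative fragment, not the driver's or BDE's actual code; the helper name `exec_checked` and the use of `SQLError()` purely for logging are assumptions made for the example.

```c
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

/* Execute a statement and decide success/failure from the ODBC
 * return code alone; the SQLSTATE and message text are fetched
 * only for logging, never parsed to detect the error. */
static int exec_checked(SQLHSTMT hstmt, const char *sql)
{
    SQLRETURN rc = SQLExecDirect(hstmt, (SQLCHAR *) sql, SQL_NTS);

    if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
        return 0;               /* e.g. the INSERT passed the RI check */

    {
        SQLCHAR     state[6], msg[256];
        SQLINTEGER  native;
        SQLSMALLINT len;

        SQLError(SQL_NULL_HENV, SQL_NULL_HDBC, hstmt,
                 state, &native, msg, sizeof(msg), &len);
        fprintf(stderr, "statement failed (%s): %s\n", state, msg);
    }
    return -1;                  /* caller rolls back / reports failure */
}
```

With this structure, an extra newline (or any other formatting quirk) in the backend's error string cannot hide the failure from the application, which is exactly the distinction Jan draws above.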
[ { "msg_contents": "Applied.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> Please forget all I said about gcc and AIX in my previous mail.\n> It does work with the following patch applied and gcc 2.95.2 .\n> \n> Use --with-template=aix_gcc to compile the whole lot with gcc.\n> \n> The geometry regression test produces different precision.\n> With optimization I run into regression failures starting at oidjoins,\n> thus no -O2. Anybody else try gcc 2.95.2 and -O2 on beta4 ?\n> \n> This is an important patch, since recent versions of the IBM compiler \n> are not for free, and thus most questions I get concern gcc.\n> \n> Andreas\n> \n> PS.: I am testing with beta4\n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 10:46:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX and gcc (was: bug in nabstime.c)" } ]
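The gcc build recipe implied by the patch note above amounts to the following shell steps. This is an illustrative sequence, not taken verbatim from the patch: the source-tree path and the use of GNU make as `gmake` are assumptions; only the `--with-template=aix_gcc` flag comes from the message itself.

```shell
# Build a PostgreSQL 7.0 beta on AIX with gcc 2.95.2 instead of the
# (non-free) IBM compiler.  Per the note above, do not add -O2:
# optimization caused regression failures starting at oidjoins.
cd pgsql/src
./configure --with-template=aix_gcc
gmake
gmake install
```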
[ { "msg_contents": "OK, can someone confirm which items still need to be done to update the\ndocumentation?\n\nI can't imagine they are all done, and I don't think we can release 7.0\nwithout them all being done.\n\n---------------------------------------------------------------------------\n\n\nRemove ':' and ';' operators\nAdd TRUNCATE command to quickly truncate relation (Mike Mascari)\nAdd SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\nImprove CREATE FUNCTION to allow type conversion specification\n (Bernie Frankpitt)\nAdd CmdTuples() to libpq++(Vince)\nNew CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands(Jan)\nAllow CREATE FUNCTION WITH clause to be used for all language types\nconfigure --enable-debug adds -g (Peter E)\nconfigure --disable-debug removes -g (Peter E)\nFirst real FOREIGN KEY constraint trigger functionality (Jan)\nAdd FOREIGN KEY ... MATCH FULL ... ON DELETE CASCADE (Jan)\nAdd FOREIGN KEY ... MATCH referential actions (Don Baccus)\nAllow WHERE restriction on ctid (physical heap location) (Hiroshi)\nMove pginterface from contrib to interface directory, rename to pgeasy (Bruce)\nAdd Oracle's COMMENT ON command (Mike Mascari yahoo.\nlibpq's PQsetNoticeProcessor function now returns previous hook(Peter E)\nPrevent PQsetNoticeProcessor from being set to NULL (Peter E)\nAdded psql LastOid variable to return last inserted oid (Peter E)\nNew libpq functions to allow asynchronous connections: PQconnectStart(),\n PQconnectPoll(), PQresetStart(), PQresetPoll(), PQsetenvStart(),\n PQsetenvPoll(), PQsetenvAbort (Ewan Mellor)\nNew libpq PQsetenv() function (Ewan Mellor)\ncreate/alter user extension (Peter E)\nNew postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)\nNew scripts for create/drop user/db (Peter E)\nMajor psql overhaul(Peter E)\nAdd const to libpq interface(Peter E)\nNew libpq function PQoidValue (Peter E)\nAdd aggregate(DISTINCT ...) 
(Tom)\nAllow flag to control COPY input/output of NULLs (Peter E)\nAdd CREATE/ALTER/DROP GROUP (Peter E)\nAll administration scripts now support --long options (Peter E, Karel)\nVacuumdb script now supports --alldb option (Peter E)\nAdd ecpg EXEC SQL IFDEF, EXEC SQL IFNDEF, EXEC SQL ELSE, EXEC SQL ELIF\n and EXEC SQL ENDIF directives\nAdd pg_ctl script to control backend startup (Tatsuo)\nAdd postmaster.opts.default file to store startup flags (Tatsuo)\nAllow --with-mb=SQL_ASCII\nAdd initdb --enable-multibyte option (Peter E)\nUpdated user interfaces on initdb, initlocation, pg_dump, ipcclean\n(Peter E)\nNew plperl internal programming language (Mark Hollomon)\nAdd Oracle's to_char(), to_date(), to_datetime(), to_timestamp(), to_number()\n conversion functions (Karel Zak zf.jcu.cz>)\nAdd SELECT DISTINCT ON (expr [, expr ...]) targetlist ... (Tom)\nAdd ALTER TABLE ... ADD FOREIGN KEY (Stephan Szabo)\nAdd SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas)\nImplement column aliases (aka correlation names) and more join syntax\n(Thomas)\nAllow queries like SELECT a FROM t1 tx (a) (Thomas)\nAllow queries like SELECT * FROM t1 NATURAL JOIN t2 (Thomas)\nImplement REINDEX command (Hiroshi)\nAccept ALL in aggregate function SUM(ALL col) (Tom)\nAllow PQrequestCancel() to terminate when in waiting-for-lock state (Hiroshi)\nNew libpq functions PQsetClientEncoding(), PQclientEncoding() (Tatsuo)\nMake libpq's PQconndefaults() thread-safe (Tom)\nNew lztext data type for compressed text fields\nNew C-routines to implement a BIT and BIT VARYING type in /contrib\n (Adriaan Joubert)\nMake ISO date style (2000-02-16 09:33) the default (Thomas)\nAdd NATIONAL CHAR [ VARYING ]\nNew TIME WITH TIME ZONE type (Thomas)\nAdd round(), sqrt(), cbrt(), pow()\nRename NUMERIC power() to pow()\nImproved TRANSLATE() function\nAdd Linux ARM.\nUpdate for QNX (Kardos, Dr. 
Andreas)\nInternally change datetime and timespan into timestamp and interval (Thomas)\nconfigure --with-mb now deprecated (Tatsuo)\nNetBSD fixes Johnny C. Lam stat.cmu.edu>\nFixes for Alpha compiles\nNew multibyte encodings\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 13:30:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Doc updates" }, { "msg_contents": "> OK, can someone confirm which items still need to be done to update the\n> documentation?\n> \n> I can't imagine they are all done, and I don't think we can release 7.0\n> without them all being done.\n\n> Add pg_ctl script to control backend startup (Tatsuo)\n> Add postmaster.opts.default file to store startup flags (Tatsuo)\n\nI have written a man page for pg_ctl (see below). However, it's still\nin plain text, not marked up yet. I'm very busy right now, and\nprobably could start to make it into SGML after 4/12. 
Is it too late\nfor the release schedule of 7.0?\n--\nTatsuo Ishii\n-------------------------------------------------------------------------\nNAME\n\npg_ctl - starts/stops/restarts postmaster\n\nSYNOPSIS\n\npg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\npg_ctl [-w][-D database_dir][-m [s[mart]|f[ast]|i[mmediate]]] stop\npg_ctl [-w][-D database_dir][-m [s[mart]|f[ast]|i[mmediate]]][-o \"postmaster_opts\"] restart\npg_ctl [-D database_dir] status\n\nDESCRIPTION\n\npg_ctl is a utility for starting, stopping or restarting postmaster.\n\nStarting postmaster\n\nTo start postmaster:\n\npg_ctl start\n\nIf -w is supplied, pg_ctl waits for the database server to come up, by\nwatching for creation of the pid file (PGDATA/postmaster.pid), for up\nto 60 seconds.\n\nParameters to invoke postmaster are taken from the following sources:\n\nPath to postmaster: found in the command search path\nDatabase directory: PGDATA environment variable\nOther parameters: PGDATA/postmaster.opts.default\n\npostmaster.opts.default contains parameters for postmaster. With a\ndefault installation, the \"-S\" option is enabled. 
So \"pg_ctl start\"\nimplies:\n\npostmaster -S\n\nNote that postmaster.opts.default is installed by initdb from\nlib/postmaster.opts.default.sample under the PostgreSQL installation\ndirectory (lib/postmaster.opts.default.sample is copied from\nsrc/bin/pg_ctl/postmaster.opts.default.sample while installing\nPostgreSQL).\n\nTo override default parameters you can use -D, -p and -o options.\n\n-D database_dir\n\tspecifies the database directory\n\n-p path_to_postmaster\n\tspecifies the path to postmaster\n\n-o \"postmaster_opts\"\n\tspecifies any parameters for postmaster\n\nExamples:\n\n# blocks until postmaster comes up\npg_ctl -w start\n\n# specifies postmaster path\npg_ctl -p /usr/local/pgsql/bin/postmaster start\n\n# uses port 5433 and disables fsync\npg_ctl -o \"-o -F -p 5433\" start\n\nStopping postmaster\n\npg_ctl stop\n\nstops postmaster.\n\nThere are several options for the stopping mode.\n\n-w\n\twaits for postmaster to shut down\n\n-m [s[mart]|f[ast]|i[mmediate]]\n    specifies the shutdown mode. smart mode waits for all\n    the clients to log out. This is the default.\n    fast mode sends SIGTERM to the backends, which means\n    active transactions get rolled back. immediate mode sends SIGUSR1\n    to the backends and lets them abort. In this case, database recovery\n    will be necessary on the next startup.\n\n\nRestarting postmaster\n\nThis is almost equivalent to stopping postmaster then starting it\nagain, except that the parameters used for postmaster before stopping\nit will be used again. This is done by saving them in the\nPGDATA/postmaster.opts file. 
-w, -D, -m, -fast, -immediate and -o\ncan also be used in the restarting mode and they have the same meanings as\ndescribed above.\n\nExamples:\n\n# restarts postmaster in the simplest form\npg_ctl restart\n\n# restarts postmaster, waiting for it to shut down and to come up\npg_ctl -w restart\n\n# uses port 5433 and disables fsync next time\npg_ctl -o \"-o -F -p 5433\" restart\n\nGetting status from postmaster\n\nTo get status information from postmaster:\n\npg_ctl status\n\nThe following is a sample output from pg_ctl.\n\npg_ctl: postmaster is running (pid: 13718)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5433\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-o '-F'\n", "msg_date": "Thu, 06 Apr 2000 10:58:06 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "> I have written a man page for pg_ctl (see below). However, it's still\n> in plain text, not marked up yet. I'm very busy right now, and\n> probably could start to make it into SGML after 4/12. Is it too late\n> for the release schedule of 7.0?\n\nI'll be happy to mark up what is available. Can I use what you\nincluded in the email? If so, I'll go ahead and put it in...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Apr 2000 04:03:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "> > I have written a man page for pg_ctl (see below). However, it's still\n> > in plain text, not marked up yet. I'm very busy right now, and\n> > probably could start to make it into SGML after 4/12. Is it too late\n> > for the release schedule of 7.0?\n> \n> I'll be happy to mark up what is available. Can I use what you\n> included in the email? 
If so, I'll go ahead and put it in...\n\nThank you very much. Please do it. Also, please feel free to\nmodify/change what I wrote to correct grammatical errors.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 06 Apr 2000 14:06:24 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Bruce Momjian\n> \n> OK, can someone confirm which items still need to be done to update the\n> documentation?\n> \n> Implement REINDEX command (Hiroshi)\n\nI committed doc/src/sgml/ref/reindex.sgml to CVS last weekend.\nHowever I couldn't confirm that it's written in the right SGML format.\nI'd be happy if someone would check it, together with any grammatical\nerrors.\n\n> Allow WHERE restriction on ctid (physical heap location) (Hiroshi)\n> Allow PQrequestCancel() to terminate when in waiting-for-lock \n> state (Hiroshi)\n\nWhat kind of documentation must I write for the above 2 items?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 7 Apr 2000 09:33:37 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Doc updates" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, can someone confirm which items still need to be done to update the\n> documentation?\n\n> Remove ':' and ';' operators\n\nThey're not actually removed yet, just deprecated, so if you have\n\"removed\" in the history file please change it. Otherwise, the\ndocs are updated.\n\n> Add TRUNCATE command to quickly truncate relation (Mike Mascari)\n\nDocumented.\n\n> Add SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\n\nNot documented.\n\n> Improve CREATE FUNCTION to allow type conversion specification\n> (Bernie Frankpitt)\n\nHuh? 
I'm not sure what that is.\n\n> Add CmdTuples() to libpq++(Vince)\n\nNot documented.\n\n> New CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands(Jan)\n\nNot documented.\n\n> Allow CREATE FUNCTION WITH clause to be used for all language types\n\nDocumented.\n\n> configure --enable-debug adds -g (Peter E)\n> configure --disable-debug removes -g (Peter E)\n\nDocumented.\n\n> First real FOREIGN KEY constraint trigger functionality (Jan)\n> Add FOREIGN KEY ... MATCH FULL ... ON DELETE CASCADE (Jan)\n> Add FOREIGN KEY ... MATCH referential actions (Don Baccus)\n\nNot adequately documented AFAICS.\n\n> Allow WHERE restriction on ctid (physical heap location) (Hiroshi)\n\nNot documented; not quite sure where to put it, either.\n\n> Move pginterface from contrib to interface directory, rename to pgeasy (Bruce)\n\nDocumented.\n\n> Add Oracle's COMMENT ON command (Mike Mascari yahoo.\n\nNot documented.\n\n> libpq's PQsetNoticeProcessor function now returns previous hook(Peter E)\n> Prevent PQsetNoticeProcessor from being set to NULL (Peter E)\n\nDocumented.\n\n> Added psql LastOid variable to return last inserted oid (Peter E)\n\nDocumented.\n\n> New libpq functions to allow asynchronous connections: PQconnectStart(),\n> PQconnectPoll(), PQresetStart(), PQresetPoll(), PQsetenvStart(),\n> PQsetenvPoll(), PQsetenvAbort (Ewan Mellor)\n> New libpq PQsetenv() function (Ewan Mellor)\n\nThe first four are documented. 
The other four have been removed from\nthe API and so do not need documentation.\n\n> create/alter user extension (Peter E)\n\nDocumented.\n\n> New postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)\n\nNot documented.\n\n> New scripts for create/drop user/db (Peter E)\n\nDocumented.\n\n> Major psql overhaul(Peter E)\n\npsql man page seems up-to-date, do we need more?\n\n> Add const to libpq interface(Peter E)\n\nNot sure we need to do more than mention it in the revision history.\n\n> New libpq function PQoidValue (Peter E)\n\nDocumented.\n\n> Add aggregate(DISTINCT ...) (Tom)\n\nDocumented.\n\n> Allow flag to control COPY input/output of NULLs (Peter E)\n\nDocumented.\n\n> Add CREATE/ALTER/DROP GROUP (Peter E)\n\nDocumented.\n\n> All administration scripts now support --long options (Peter E, Karel)\n\nDocumented.\n\n> Vacuumdb script now supports --alldb option (Peter E)\n\nDocumented.\n\n> Add ecpg EXEC SQL IFDEF, EXEC SQL IFNDEF, EXEC SQL ELSE, EXEC SQL ELIF\n> and EXEC SQL ENDIF directives\n\nNot documented.\n\n> Add pg_ctl script to control backend startup (Tatsuo)\n\nNot documented.\n\n> Add postmaster.opts.default file to store startup flags (Tatsuo)\n\nNot documented.\n\n> Allow --with-mb=SQL_ASCII\n\nI see it in README.mb ... but not in the SGML docs ...\n\n> Add initdb --enable-multibyte option (Peter E)\n\nDocumented.\n\n> Updated user interfaces on initdb, initlocation, pg_dump, ipcclean\n> (Peter E)\n\ninitlocation and pg_dump man pages still need to be updated; not sure\nabout ipcclean.\n\n> New plperl internal programming language (Mark Hollomon)\n\nDocumented.\n\n> Add Oracle's to_char(), to_date(), to_datetime(), to_timestamp(), to_number()\n> conversion functions (Karel Zak zf.jcu.cz>)\n\nDocumented.\n\n> Add SELECT DISTINCT ON (expr [, expr ...]) targetlist ... (Tom)\n\nDocumented.\n\n> Add ALTER TABLE ... 
ADD FOREIGN KEY (Stephan Szabo)\n\nNot documented.\n\n> Add SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas)\n\nThere is a ref page for this, but it's not linked into the documents\nas far as I can tell! The only readily-visible reference in the docs\nis some obsolete info in the CREATE TABLE page's DEFAULT clause (and\nwhy are function definitions present there anyway?)\n\n> Implement column aliases (aka correlation names) and more join syntax\n> (Thomas)\n> Allow queries like SELECT a FROM t1 tx (a) (Thomas)\n> Allow queries like SELECT * FROM t1 NATURAL JOIN t2 (Thomas)\n\nNot documented.\n\n> Implement REINDEX command (Hiroshi)\n\nThere is a ref page for this, but it's not linked into the documentation...\n\n> Accept ALL in aggregate function SUM(ALL col) (Tom)\n\nDocumented.\n\n> Allow PQrequestCancel() to terminate when in waiting-for-lock state (Hiroshi)\n\nNot documented, but I'm not sure it needs to be mentioned anywhere but\nthe history file.\n\n> New libpq functions PQsetClientEncoding(), PQclientEncoding() (Tatsuo)\n\nDocumented, but only in README.mb.\n\n> Make libpq's PQconndefaults() thread-safe (Tom)\n\nDocumented.\n\n> New lztext data type for compressed text fields\n\nNot documented, but do we want to document it?\n\n> New C-routines to implement a BIT and BIT VARYING type in /contrib\n> (Adriaan Joubert)\n\nNot documented, but on the other hand it's not done yet.\n\n> Make ISO date style (2000-02-16 09:33) the default (Thomas)\n\nCouldn't find this stated in the likely spots. SET ref page says the\nwrong thing.\n\n> Add NATIONAL CHAR [ VARYING ]\n\nNot documented.\n\n> New TIME WITH TIME ZONE type (Thomas)\n\nDocumented.\n\n> Add round(), sqrt(), cbrt(), pow()\n> Rename NUMERIC power() to pow()\n\nDocumented.\n\n> Improved TRANSLATE() function\n\nOnly mentioned in Table 5-4, which hardly gives room to explain...\n\n> Add Linux ARM.\n> Update for QNX (Kardos, Dr. 
Andreas)\n\nDocumented.\n\n> Internally change datetime and timespan into timestamp and interval (Thomas)\n\nDocumented.\n\n> configure --with-mb now deprecated (Tatsuo)\n\nIt's not mentioned in the docs, which is probably documentation enough.\n\n> NetBSD fixes Johnny C. Lam stat.cmu.edu>\n> Fixes for Alpha compiles\n\nDo these need to be mentioned?\n\n> New multibyte encodings\n\nI assume README.mb talks about these.\n\n\n\nBottom line: Peter gets an A, the rest of us have work to do...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Apr 2000 23:48:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates " }, { "msg_contents": "> Do these need to be mentioned?\n> \n> > New multibyte encodings\n> \n> I assume README.mb talks about these.\n> \n> \n> \n> Bottom line: Peter gets an A, the rest of us have work to do...\n\nTotally agree about Peter. \n\nWe were much better this time about getting doc updates with patches. \nHowever, we do clearly have work to do. If we let it slide, we will\nnever come back to it later, I fear.\n\nOK, people, you have Tom Lane's hard work here. Get cracking. I will\napply doc fixes as fast as I get them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Apr 2000 00:16:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > OK, can someone confirm which items still need to be done to update the\n> > documentation?\n>\n> > Add Oracle's COMMENT ON command (Mike Mascari).\n> \n> Not documented.\n\nI see Bruce added a comment.sgml back in October. Is there\nsomething more that's necessary? 
I'd be more than happy to write\nsomething up, if so.\n\nMike Mascari\n", "msg_date": "Sat, 08 Apr 2000 00:47:25 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "> Tom Lane wrote:\n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > OK, can someone confirm which items still need to be done to update the\n> > > documentation?\n> >\n> > > Add Oracle's COMMENT ON command (Mike Mascari).\n> > \n> > Not documented.\n> \n> I see Bruce added a comment.sgml back in October. Is there\n> something more that's necessary? I'd be more than happy to write\n> something up, if so.\n> \n\nNo. I don't understand how to merge into the main docs, or it is merged\nin but not packaged yet. Not sure.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Apr 2000 01:10:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> Tom Lane wrote:\n>>>> Add Oracle's COMMENT ON command (Mike Mascari).\n>> \n>> Not documented.\n\n> I see Bruce added a comment.sgml back in October.\n\nHmm, you're right. 
Looks like that's still another file that hasn't\nbeen linked into the main documentation.\n\n[ time passes ]\n\nOK, I think I found where to fix that --- COMMENT ON and REINDEX\nshould be visible in the HTML documentation after the next nightly\nrun, unless their markup is bad enough to break the build.\n\nI'll throw this back in Thomas' lap now...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Apr 2000 01:12:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I see Bruce added a comment.sgml back in October. Is there\n>> something more that's necessary? I'd be more that happy to write\n>> something up, if so.\n\n> No. I don't understand how to merge into the main docs, or it is merged\n> in but not packaged yet. Not sure.\n\nIt looks like you have to add an \"entity\" line to ref/allfiles.sgml\nand then refer to that entity in ref/commands.sgml. We'll find out\ntomorrow morning whether that works or not ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Apr 2000 01:20:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates " }, { "msg_contents": "* Bruce Momjian <[email protected]> [000407 22:35] wrote:\n> > Do these need to be mentioned?\n> > \n> > > New multibyte encodings\n> > \n> > I assume README.mb talks about these.\n> > \n> > \n> > \n> > Bottom line: Peter gets an A, the rest of us have work to do...\n> \n> Totally agree about Peter. \n> \n> We were much better this time about getting doc updates with patches. \n> However, we do clearly have work to do. If we let it slide, we will\n> never come back to it later, I fear.\n> \n> OK, people, you have Tom Lane's hard work here. Get cracking. 
I will\n> apply doc fixes as fast as I get them.\n\nHas any progress been made regarding splitting the online docs based\non release so there's a snapshot of the docs made at the time of\nthe 7.0 release and a separate space for upcoming versions clearly\nmarked so users are aware of what's actually in the release versus\nwhat's in development?\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 7 Apr 2000 23:33:06 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "> > New postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)\n> \n> Not documented.\n\nWill appear in the pg_ctl man page (Thomas is kindly making markups\nfor it).\n--\nTatsuo Ishii\n", "msg_date": "Sat, 08 Apr 2000 17:07:29 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, can someone confirm which items still need to be done to update the\n> > documentation?\n> \n> > Remove ':' and ';' operators\n> \n> They're not actually removed yet, just deprecated, so if you have\n> \"removed\" in the history file please change it. Otherwise, the\n> docs are updated.\n\nDeprecated. Sorry. Never updated this list.\n\n> > Improve CREATE FUNCTION to allow type conversion specification\n> > (Bernie Frankpitt)\n> \n> Huh? I'm not sure what that is.\n\nOK. Here is the info. Not sure if it is in the man page or not. \nAttached is the CVS log, and the actual diff of gram.y for that patch.\n\nSeems the major change is:\n\n! RETURNS func_return opt_with AS Sconst LANGUAGE Sconst\n\n! 
RETURNS func_return opt_with AS func_as LANGUAGE Sconst\n ^^^^^^^\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nWorking file: gram.y\nhead: 2.167\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\n\tREL6_5_PATCHES: 2.88.0.2\n\tREL6_5: 2.88\n\tREL6_4: 2.37.0.2\n\trelease-6-3: 2.5\n\tREL2_0B: 1.20.0.2\n\tREL2_0: 1.20\n\tRelease_2_0_0: 1.7\n\tRelease_1_0_3: 1.2.0.2\n\tRelease_2_0: 1.6\n\tRelease_1_0_2: 1.2\n\tPG95-1_01: 1.1.1.1\n\tPG95_DIST: 1.1.1\nkeyword substitution: kv\ntotal revisions: 283;\tselected revisions: 1\ndescription:\n----------------------------\nrevision 2.100\ndate: 1999/09/28 04:34:44; author: momjian; state: Exp; lines: +9 -3\n I have been working with user defined types and user defined c\nfunctions. One problem that I have encountered with the function\nmanager is that it does not allow the user to define type conversion\nfunctions that convert between user types. For instance if mytype1,\nmytype2, and mytype3 are three Postgresql user types, and if I wish to\ndefine Postgresql conversion functions like\n\nI run into problems, because the Postgresql dynamic loader would look\nfor a single link symbol, mytype3, for both pieces of object code. If\nI just change the name of one of the Postgresql functions (to make the\nsymbols distinct), the automatic type conversion that Postgresql uses,\nfor example, when matching operators to arguments no longer finds the\ntype conversion function.\n\nThe solution that I propose, and have implemented in the attatched\npatch extends the CREATE FUNCTION syntax as follows. 
In the first case\nabove I use the link symbol mytype2_to_mytype3 for the link object\nthat implements the first conversion function, and define the\nPostgresql operator with the following syntax\n\nThe patch includes changes to the parser to include the altered\nsyntax, changes to the ProcedureStmt node in nodes/parsenodes.h,\nchanges to commands/define.c to handle the extra information in the AS\nclause, and changes to utils/fmgr/dfmgr.c that alter the way that the\ndynamic loader figures out what link symbol to use. I store the\nstring for the link symbol in the prosrc text attribute of the pg_proc\ntable which is currently unused in rows that reference dynamically\nloaded\nfunctions.\n\n\nBernie Frankpitt\n=============================================================================\n\nIndex: gram.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.99\nretrieving revision 2.100\ndiff -c -r2.99 -r2.100\n*** gram.y\t1999/09/23 17:02:46\t2.99\n--- gram.y\t1999/09/28 04:34:44\t2.100\n***************\n*** 10,16 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v 2.99 1999/09/23 17:02:46 momjian Exp $\n *\n * HISTORY\n *\t AUTHOR\t\t\tDATE\t\t\tMAJOR EVENT\n--- 10,16 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v 2.100 1999/09/28 04:34:44 momjian Exp $\n *\n * HISTORY\n *\t AUTHOR\t\t\tDATE\t\t\tMAJOR EVENT\n***************\n*** 163,169 ****\n %type <list>\tstmtblock, stmtmulti,\n \t\tresult, relation_name_list, OptTableElementList,\n \t\tOptInherit, definition,\n! 
\t\topt_with, func_args, func_args_list,\n \t\toper_argtypes, RuleActionList, RuleActionBlock, RuleActionMulti,\n \t\topt_column_list, columnList, opt_va_list, va_list,\n \t\tsort_clause, sortby_list, index_params, index_list, name_list,\n--- 163,169 ----\n %type <list>\tstmtblock, stmtmulti,\n \t\tresult, relation_name_list, OptTableElementList,\n \t\tOptInherit, definition,\n! \t\topt_with, func_args, func_args_list, func_as,\n \t\toper_argtypes, RuleActionList, RuleActionBlock, RuleActionMulti,\n \t\topt_column_list, columnList, opt_va_list, va_list,\n \t\tsort_clause, sortby_list, index_params, index_list, name_list,\n***************\n*** 1923,1929 ****\n *****************************************************************************/\n \n ProcedureStmt:\tCREATE FUNCTION func_name func_args\n! \t\t\t RETURNS func_return opt_with AS Sconst LANGUAGE Sconst\n \t\t\t\t{\n \t\t\t\t\tProcedureStmt *n = makeNode(ProcedureStmt);\n \t\t\t\t\tn->funcname = $3;\n--- 1923,1929 ----\n *****************************************************************************/\n \n ProcedureStmt:\tCREATE FUNCTION func_name func_args\n! 
\t\t\t RETURNS func_return opt_with AS func_as LANGUAGE Sconst\n \t\t\t\t{\n \t\t\t\t\tProcedureStmt *n = makeNode(ProcedureStmt);\n \t\t\t\t\tn->funcname = $3;\n***************\n*** 1947,1952 ****\n--- 1947,1958 ----\n \t\t\t\t{\t$$ = lcons(makeString($1),NIL); }\n \t\t| func_args_list ',' TypeId\n \t\t\t\t{\t$$ = lappend($1,makeString($3)); }\n+ \t\t;\n+ \n+ func_as: Sconst\t\n+ \t\t\t\t{ $$ = lcons(makeString($1),NIL); }\n+ \t\t| Sconst ',' Sconst\n+ \t\t\t\t{ \t$$ = lappend(lcons(makeString($1),NIL), makeString($3)); }\n \t\t;\n \n func_return: set_opt TypeId", "msg_date": "Sat, 8 Apr 2000 17:42:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Improve CREATE FUNCTION to allow type conversion specification\n>>>> (Bernie Frankpitt)\n>> \n>> Huh? I'm not sure what that is.\n\n> OK. Here is the info. Not sure if it is in the man page or not. \n> Attached is the CVS log, and the actual diff of gram.y for that patch.\n\nOK, now I remember. The summary line is pretty misleading. Perhaps\na better one is\n\n* Function name overloading for dynamically-loaded C functions (Frankpitt)\n\nThe docs seem to be up to date on this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Apr 2000 18:05:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> Improve CREATE FUNCTION to allow type conversion specification\n> >>>> (Bernie Frankpitt)\n> >> \n> >> Huh? I'm not sure what that is.\n> \n> > OK. Here is the info. Not sure if it is in the man page or not. \n> > Attached is the CVS log, and the actual diff of gram.y for that patch.\n> \n> OK, now I remember. The summary line is pretty misleading. 
Perhaps\n> a better one is\n> \n> * Function name overloading for dynamically-loaded C functions (Frankpitt)\n> \n> The docs seem to be up to date on this.\n\nrelease.sgml updated. Thanks. I will generate a new HISTORY just\nbefore final release. I need to go over the cvs logs since last run\nagain as we continue to add things.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Apr 2000 18:10:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Doc updates" }, { "msg_contents": "Hi all,\n\nThis is documented in two places, both in the CREATE FUNCTION\ndocumentation page of the user guide, and in the Developers guide \nunder the section on extending postgres by writing dynamically loaded\nfunctions. Is there a man page that needs updating too?\n\nMaybe a topic index to the documentation would make it easier to find\nall the documentation for a particular topic. Is it easy to do that in\nsgml?\n\nBernie Frankpitt\n\nBruce Momjian wrote:\n\n> > > Improve CREATE FUNCTION to allow type conversion specification\n> > > (Bernie Frankpitt)\n> >\n> > Huh? I'm not sure what that is.\n> \n> OK. Here is the info. Not sure if it is in the man page or not.\n> Attached is the CVS log, and the actual diff of gram.y for that patch.\n> \n> Seems the major change is:\n> \n> ! RETURNS func_return opt_with AS Sconst LANGUAGE Sconst\n> \n> ! RETURNS func_return opt_with AS func_as LANGUAGE Sconst\n> ^^^^^^^\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n>\n", "msg_date": "Sat, 08 Apr 2000 18:20:19 -0400", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Doc updates" } ]
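The grammar change discussed in this thread boils down to the new `func_as` rule accepting either one string or two. A hedged sketch of what the extended AS clause looks like in use, reconstructing Bernie's original collision — the two-string AS form itself is confirmed by the patch, but the type, file, and symbol names here are hypothetical:

```sql
-- Before the patch, both conversion functions would have forced the
-- dynamic loader to look for the same C link symbol, "mytype3".
-- The two-string AS clause names the object file and the link symbol
-- separately, so the C symbols stay distinct while the SQL names overload:
CREATE FUNCTION mytype3 (mytype1)
    RETURNS mytype3
    AS '/usr/local/pgsql/lib/conv.so', 'mytype1_to_mytype3'
    LANGUAGE 'c';

CREATE FUNCTION mytype3 (mytype2)
    RETURNS mytype3
    AS '/usr/local/pgsql/lib/conv.so', 'mytype2_to_mytype3'
    LANGUAGE 'c';
```

Per the commit message, the second string (the link symbol) is stored in the otherwise-unused `prosrc` attribute of `pg_proc` for dynamically loaded functions.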
[ { "msg_contents": "Am I correct that we are targeting all these for 7.1?\n\n\tWAL/write ahead log\n\tTOAST/long tuples\n\touter joins\n\tquery tree redesign\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 15:38:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "7.1 items" }, { "msg_contents": "> Am I correct that we are targeting all these for 7.1?\n> outer joins\n> query tree redesign\n\nWe are pretty much agreed that outer joins will be much easier with a\nquery tree redesign. But if anyone wants to try shoehorning at least a\nsimple example of outer joins into the current design they are welcome\nto try.\n\nI'm also planning on working on the multi-language support, using the\nSQL92 NATIONAL CHARACTER/CHARACTER SET/COLLATION features.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Apr 2000 13:33:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items" } ]
[ { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Am I correct that we are targeting all these for 7.1?\n> > \n> > WAL/write ahead log\n> > TOAST/long tuples\n> > outer joins\n> > query tree redesign\n> \n> There have been talks about redesigning fmgr and pl-function \n> interface as well ;)\n\nOops, yes:\n\n\tWAL/write ahead log\n\tTOAST/long tuples\n\touter joins\n\tquery tree redesign\n\tfunction manager redesign\n\nWhat year to we want to release 7.1? :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 17:00:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "Bruce Momjian writes:\n\n> Oops, yes:\n> \n> \tWAL/write ahead log\n> \tTOAST/long tuples\n> \touter joins\n> \tquery tree redesign\n> \tfunction manager redesign\n> \n> What year to we want to release 7.1? :-)\n\nISTM that any one of these features could lead to a release of its own\nright. 
Considering that at least four of these five were planned for 7.0\nwe perhaps shouldn't be overly enthusiastic (although ambitious plans\nnever hurt).\n\nFWIW, my own plans for 7.1 revolve around ACL enhancements and build\nsystem clean up.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 6 Apr 2000 02:09:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "On Wed, 5 Apr 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> > > \n> > > Am I correct that we are targeting all these for 7.1?\n> > > \n> > > WAL/write ahead log\n> > > TOAST/long tuples\n> > > outer joins\n> > > query tree redesign\n> > \n> > There have been talks about redesigning fmgr and pl-function \n> > interface as well ;)\n> \n> Oops, yes:\n> \n> \tWAL/write ahead log\n> \tTOAST/long tuples\n> \touter joins\n> \tquery tree redesign\n> \tfunction manager redesign\n> \n> What year to we want to release 7.1? :-)\n\nBased on this last one ... Jan/Feb 1st, 2001 would be nice? :)\n\n\n", "msg_date": "Thu, 6 Apr 2000 07:31:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "\nOn Thu, 6 Apr 2000, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n> \n> > Oops, yes:\n> > \n> > \tWAL/write ahead log\n> > \tTOAST/long tuples\n> > \touter joins\n> > \tquery tree redesign\n> > \tfunction manager redesign\n> > \n> > What year to we want to release 7.1? 
:-)\n\n If all goes well (and the major developers agree), I plan to implement PREPARE/EXECUTE\ncommands and changes in the SPI internals for plan saving (a query cache).\n\n> FWIW, my own plans for 7.1 revolve around ACL enhancements and build\n> system clean up.\n\nAnd if the new ACL is implemented, I plan a CREATE PROFILE feature (like Oracle's)\nfor better user control (connection elapsed time, time period for connection, \ndefault ACL mask, etc.).\n\n\t\t\t\t\t\t\tKarel\n\n\t\t\t\t\n\n", "msg_date": "Thu, 6 Apr 2000 12:39:07 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n>>>> WAL/write ahead log\n>>>> TOAST/long tuples\n>>>> outer joins\n>>>> query tree redesign\n>>>> function manager redesign\n>>>> \n>>>> What year to we want to release 7.1? :-)\n\n> If all will right (and major developers will agree) I plan PREPARE/EXECUTE\n> commands and changes in SPI background for plan saving (query cache).\n\nGiven that there is going to be a querytree redesign for 7.1, I'd\nsuggest holding off on prepared plans until 7.2. Otherwise it's\ngoing to be a mess.\n\nThe good thing about the above list is that we have four essentially\nindependent major projects. (I think outer joins are a portion of the\nquerytree work, not a separate item.) So work on them can proceed in\nparallel. And, if it gets to be September-ish and only two or three\nare done, we can make a 7.1 release and still feel pretty good about\nhaving some nice stuff.\n\nThis does bring up a suggestion that Jan has made in the past. Perhaps\nit would be a good idea if we create a separate CVS branch for each of\nthese major projects, so that people could work on that project\nindependently of the others. When a project is done, we merge it back\ninto the main branch. 
Then it's no problem if one of the projects is\nbroken temporarily, or not ready to go when we want to release 7.1.\n\nOTOH, managing separate CVS branches might be a real pain in the neck,\nespecially for developers who need to deal with more than one project.\nI've never done it so I don't have a feeling for what it would take.\nBut the Mozilla people do this sort of thing all the time, so it can't\nbe that bad.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 10:39:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items " }, { "msg_contents": "> > >>>> WAL/write ahead log\n> > >>>> TOAST/long tuples\n> > >>>> outer joins\n> > >>>> query tree redesign\n> > >>>> function manager redesign\n> > >>>> \n> > >>>> What year to we want to release 7.1? :-)\n> > \n> > > If all will right (and major developers will agree) I plan PREPARE/EXECUTE\n> > > commands and changes in SPI background for plan saving (query cache).\n> > \n> > Given that there is going to be a querytree redesign for 7.1, I'd\n> > suggest holding off on prepared plans until 7.2. Otherwise it's\n> > going to be a mess.\n> \n> Any chance for dirty read? Waiting for transaction end on inserts with\n> duplicates on unique keys in transactions can be a lot of fun.\n\nBut we have committed read with no block. Why would you want dirty\nread. Chapter 10 of my book is about transactions and locking. Of\ncourse, I may be missing something here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 12:00:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "\nOn Thu, 6 Apr 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> \n> > If all will right (and major developers will agree) I plan PREPARE/EXECUTE\n> > commands and changes in SPI background for plan saving (query cache).\n> \n> This does bring up a suggestion that Jan has made in the past. Perhaps\n> it would be a good idea if we create a separate CVS branch for each of\n> these major projects, so that people could work on that project\n> independently of the others. When a project is done, we merge it back\n> into the main branch. Then it's no problem if one of the projects is\n> broken temporarily, or not ready to go when we want to release 7.1.\n\nI agree.\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 6 Apr 2000 18:02:27 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items " }, { "msg_contents": "> >>>> WAL/write ahead log\n> >>>> TOAST/long tuples\n> >>>> outer joins\n> >>>> query tree redesign\n> >>>> function manager redesign\n> >>>> \n> >>>> What year to we want to release 7.1? :-)\n> \n> > If all will right (and major developers will agree) I plan PREPARE/EXECUTE\n> > commands and changes in SPI background for plan saving (query cache).\n> \n> Given that there is going to be a querytree redesign for 7.1, I'd\n> suggest holding off on prepared plans until 7.2. Otherwise it's\n> going to be a mess.\n\nAny chance for dirty read? 
Waiting for transaction end on inserts with\nduplicates on unique keys in transactions can be a lot of fun.\n\nRegards\nTheo\n", "msg_date": "Thu, 6 Apr 2000 18:09:02 +0200 (SAST)", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items" }, { "msg_contents": "On Thu, 6 Apr 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> >>>> WAL/write ahead log\n> >>>> TOAST/long tuples\n> >>>> outer joins\n> >>>> query tree redesign\n> >>>> function manager redesign\n> >>>> \n> >>>> What year to we want to release 7.1? :-)\n> \n> > If all will right (and major developers will agree) I plan PREPARE/EXECUTE\n> > commands and changes in SPI background for plan saving (query cache).\n> \n> Given that there is going to be a querytree redesign for 7.1, I'd\n> suggest holding off on prepared plans until 7.2. Otherwise it's\n> going to be a mess.\n> \n> The good thing about the above list is that we have four essentially\n> independent major projects. (I think outer joins are a portion of the\n> querytree work, not a separate item.) So work on them can proceed in\n> parallel. And, if it gets to be September-ish and only two or three\n> are done, we can make a 7.1 release and still feel pretty good about\n> having some nice stuff.\n> \n> This does bring up a suggestion that Jan has made in the past. Perhaps\n> it would be a good idea if we create a separate CVS branch for each of\n> these major projects, so that people could work on that project\n> independently of the others. When a project is done, we merge it back\n> into the main branch. 
Then it's no problem if one of the projects is\n> broken temporarily, or not ready to go when we want to release 7.1.\n> \n> OTOH, managing separate CVS branches might be a real pain in the neck,\n> especially for developers who need to deal with more than one project.\n> I've never done it so I don't have a feeling for what it would take.\n> But the Mozilla people do this sort of thing all the time, so it can't\n> be that bad.\n\nI've only ever seen it done for the kernel of FreeBSD, and very very\nrarely at that ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Apr 2000 13:19:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 items " } ]
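For readers wondering what Karel's PREPARE/EXECUTE plan from this thread would look like at the SQL level, a speculative sketch follows; the statement names come from his proposal, but the exact syntax and the `$n` parameter notation are assumptions, since nothing had been designed yet at this point:

```sql
-- Parse and plan the query once, saving the plan in the backend's
-- query cache...
PREPARE get_employee (int4) AS
    SELECT name, salary FROM employee WHERE id = $1;

-- ...then reuse the saved plan, skipping the parse/plan steps each time.
EXECUTE get_employee (42);
EXECUTE get_employee (43);
```

This is exactly the kind of saved-plan machinery Tom suggests deferring until after the querytree redesign, since cached plans would otherwise be built against a representation that is about to change.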
[ { "msg_contents": "> On Wed, Apr 05, 2000 at 01:30:39PM -0400, Bruce Momjian wrote:\n> > OK, can someone confirm which items still need to be done to update the\n> > documentation?\n> > \n> > I can't imagine they are all done, and I don't think we can release 7.0\n> > without them all being done.\n> \n> Could you use an extra pair of hands on the documentation? I'm not\n> hugely familiar with Postgres internals, nor do I have a great deal of\n> experience with it, but I can find some spare time and I can write. I\n> don't know what the shape of things is around here but if you want to\n> point me at some task I'll gladly have a go at it.\n\nWow, I am not sure how to address this. Someone?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 17:00:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc updates" } ]
[ { "msg_contents": "You'll probably recall reports of messages like this out of VACUUM:\nNOTICE: Index ind1: NUMBER OF INDEX' TUPLES (2002) IS NOT THE SAME AS HEAP' (3003).\nI've figured out the cause (or at least a cause) of this condition.\n\nConsider a table having some data and indices, eg \"onek\" from the\nregression tests:\n\nregression=# vacuum verbose analyze onek;\nNOTICE: --Relation onek--\nNOTICE: Pages 24: Changed 0, reaped 1, Empty 0, New 0; Tup 1000: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 32, MinLen 180, MaxLen 180; Re-using: Free/Avail. Space 5988/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.11u sec.\nNOTICE: Index onek_stringu1: Pages 28; Tuples 1000: Deleted 0. CPU 0.00s/0.01u sec.\nNOTICE: Index onek_hundred: Pages 12; Tuples 1000: Deleted 0. CPU 0.00s/0.01u sec.\nNOTICE: Index onek_unique2: Pages 18; Tuples 1000: Deleted 0. CPU 0.00s/0.02u sec.\nNOTICE: Index onek_unique1: Pages 17; Tuples 1000: Deleted 0. CPU 0.00s/0.01u sec.\nVACUUM\n\nIn a second psql, start up a transaction and leave it open:\n\nregression=# begin;\nBEGIN\nregression=# select 1;\n ?column?\n----------\n 1\n(1 row)\n\nregression=#\n\n(It's necessary to actually select something so that the transaction\nwill get assigned an ID; \"begin\" alone won't do anything.)\n\nNow return to the first psql and modify the table, doesn't matter how:\n\nregression=# update onek set odd = odd+0;\nUPDATE 1000\nregression=#\n\nAt this point, onek contains 1000 committed updated tuples and 1000 dead\nbut not yet deleted tuples. Moreover, because we have an open\ntransaction that should see those dead tuples if it looks at the table\n(at least if it's in SERIALIZABLE mode), VACUUM knows it should not\ndelete those tuples:\n\nregression=# vacuum verbose analyze onek;\nNOTICE: --Relation onek--\nNOTICE: Pages 47: Changed 47, reaped 0, Empty 0, New 0; Tup 2000: Vac 0, Keep/VTL 1000/0, Crash 0, UnUsed 0, MinLen 180, MaxLen 180; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. 
CPU 0.00s/0.22u sec.\nNOTICE: Index onek_stringu1: Pages 28; Tuples 2000. CPU 0.01s/0.02u sec.\nNOTICE: Index onek_hundred: Pages 12; Tuples 2000. CPU 0.00s/0.02u sec.\nNOTICE: Index onek_unique2: Pages 18; Tuples 2000. CPU 0.00s/0.01u sec.\nNOTICE: Index onek_unique1: Pages 17; Tuples 2000. CPU 0.00s/0.01u sec.\nVACUUM\n\nBut what if we create a new index while in this state?\n\nregression=# create index toolate on onek(unique1);\nCREATE\n\nregression=# vacuum verbose analyze onek;\nNOTICE: --Relation onek--\nNOTICE: Pages 47: Changed 0, reaped 0, Empty 0, New 0; Tup 2000: Vac 0, Keep/VTL 1000/0, Crash 0, UnUsed 0, MinLen 180, MaxLen 180; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.01s/0.22u sec.\nNOTICE: Index toolate: Pages 5; Tuples 1000. CPU 0.00s/0.01u sec.\nNOTICE: Index toolate: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS HEAP' (2000).\n Recreate the index.\nNOTICE: Index onek_stringu1: Pages 28; Tuples 2000. CPU 0.00s/0.02u sec.\nNOTICE: Index onek_hundred: Pages 12; Tuples 2000. CPU 0.00s/0.02u sec.\nNOTICE: Index onek_unique2: Pages 18; Tuples 2000. CPU 0.01s/0.02u sec.\nNOTICE: Index onek_unique1: Pages 17; Tuples 2000. CPU 0.00s/0.02u sec.\nVACUUM\n\nThe CREATE INDEX operation has only bothered to index the non-dead\ntuples. So, VACUUM's little sanity check fails.\n\nI believe that this is not really a bug. If that old transaction came\nalong and tried to use the index to scan for tuples, then we'd have a\nproblem, because it'd fail to find tuples that it should have found.\nBUT: if that old transaction is serializable, it won't even believe that\nthe index exists, not so? It can't see the index's entry in pg_class.\nSo I think CREATE INDEX's behavior is OK, and we just have an\ninsufficiently smart cross-check in VACUUM.\n\nI am not sure if it is possible to make an exact cross-check at\nreasonable cost. 
A recently created index might contain entries for\nall, none, or just some of the committed-dead tuples in its table.\nDepending on how old the oldest open transaction is, VACUUM might be\nable to remove some but not all of those dead tuples. So in general I\ndon't see an easy way to cross-check the number of index tuples against\nthe number of table tuples exactly.\n\nI am inclined to change the check to complain if there are more index\ntuples than table tuples (that's surely wrong), or if there are fewer\nindex tuples than committed-live table tuples (ditto), but not to\ncomplain if it's in between those limits. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Apr 2000 20:33:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Index tuple count != heap tuple count problem identified" }, { "msg_contents": "> You'll probably recall reports of messages like this out of VACUUM:\n> NOTICE: Index ind1: NUMBER OF INDEX' TUPLES (2002) IS NOT THE SAME AS HEAP' (3003).\n> I've figured out the cause (or at least a cause) of this condition.\n> \n> I am inclined to change the check to complain if there are more index\n> tuples than table tuples (that's surely wrong), or if there are fewer\n> index tuples than committed-live table tuples (ditto), but not to\n> complain if it's in between those limits. Comments?\n\nSounds good to me. I know I never considered such an interaction.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 20:51:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index tuple count != heap tuple count problem identified" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> You'll probably recall reports of messages like this out of VACUUM:\n> NOTICE: Index ind1: NUMBER OF INDEX' TUPLES (2002) IS NOT THE \n> SAME AS HEAP' (3003).\n> I've figured out the cause (or at least a cause) of this condition.\n> \n> The CREATE INDEX operation has only bothered to index the non-dead\n> tuples. So, VACUUM's little sanity check fails.\n>\n\nIs it wrong to change the implementation of CREATE INDEX ?\nI have a fix.\nIt needs the change of duplicate check(tuplesort->btbuild) and\nI've thougth that it would be better to change it after the release \nof 7.0. \n\nRegards. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 6 Apr 2000 10:17:45 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Index tuple count != heap tuple count problem identified" }, { "msg_contents": "> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Tom Lane\n> > \n> > You'll probably recall reports of messages like this out of VACUUM:\n> > NOTICE: Index ind1: NUMBER OF INDEX' TUPLES (2002) IS NOT THE \n> > SAME AS HEAP' (3003).\n> > I've figured out the cause (or at least a cause) of this condition.\n> > \n> > The CREATE INDEX operation has only bothered to index the non-dead\n> > tuples. So, VACUUM's little sanity check fails.\n>\n> \n> Is it wrong to change the implementation of CREATE INDEX ?\n> I have a fix.\n> It needs the change of duplicate check(tuplesort->btbuild) and\n> I've thougth that it would be better to change it after the release \n> of 7.0. 
\n\nWell, it seems we better do something about it before 7.0 is released. \nNow it seems we have to decide to change CREATE INDEX, or modify VACUUM.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Apr 2000 21:24:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index tuple count != heap tuple count problem identified]" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> Is it wrong to change the implementation of CREATE INDEX ?\n>>>> I have a fix.\n>>>> It needs the change of duplicate check(tuplesort->btbuild) and\n>>>> I've thougth that it would be better to change it after the release\n>>>> of 7.0.\n>> \n>> Well, it seems we better do something about it before 7.0 is released.\n>> Now it seems we have to decide to change CREATE INDEX, or modify VACUUM.\n\n> It's difficult for me to provide a fix for CREATE INDEX before 7.0 is\n> released.\n> It's not sufficiently checked and I don't remember details now.\n\nAlso, we'd need to change the other index access methods too. 
That\ndoesn't seem to me like a good thing to tackle a week before release...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 01:00:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index tuple count != heap tuple count problem identified] " }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > > -----Original Message-----\n> > > From: [email protected]\n> [mailto:[email protected]]On\n> > > Behalf Of Tom Lane\n> > >\n> > > You'll probably recall reports of messages like this out of VACUUM:\n> > > NOTICE: Index ind1: NUMBER OF INDEX' TUPLES (2002) IS NOT THE\n> > > SAME AS HEAP' (3003).\n> > > I've figured out the cause (or at least a cause) of this condition.\n> > >\n> > > The CREATE INDEX operation has only bothered to index the non-dead\n> > > tuples. So, VACUUM's little sanity check fails.\n> >\n> >\n> > Is it wrong to change the implementation of CREATE INDEX ?\n> > I have a fix.\n> > It needs the change of duplicate check(tuplesort->btbuild) and\n> > I've thougth that it would be better to change it after the release\n> > of 7.0.\n>\n> Well, it seems we better do something about it before 7.0 is released.\n> Now it seems we have to decide to change CREATE INDEX, or modify VACUUM.\n>\n\nIt's difficult for me to provide a fix for CREATE INDEX before 7.0 is\nreleased.\nIt's not sufficiently checked and I don't remember details now.\nI'm a little busy now and don't have enough time to look at it again.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 6 Apr 2000 14:03:56 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Index tuple count != heap tuple count problem identified]" } ]
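Tom's two-session transcript above can be condensed into a short recipe; the table and index names are taken straight from his regression-database example:

```sql
-- Session 2: open a transaction, touch something, and leave it idle.
BEGIN;
SELECT 1;   -- forces a transaction id to be assigned; bare BEGIN would not

-- Session 1: every row is updated, leaving 1000 live + 1000 dead tuples
-- that VACUUM must keep for the old serializable transaction.
UPDATE onek SET odd = odd + 0;

-- Session 1: the new index covers only the 1000 non-dead tuples...
CREATE INDEX toolate ON onek (unique1);

-- ...so VACUUM's exact-count cross-check now trips:
VACUUM VERBOSE ANALYZE onek;
-- NOTICE:  Index toolate: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS HEAP' (2000).
```

Under Tom's proposed relaxed check, VACUUM would complain only if the index held more entries than heap tuples, or fewer entries than committed-live heap tuples.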
[ { "msg_contents": "\n\n Hi,\n\n this is not the first letter about pg_dumplo that I have received. How about adding pg_dumplo \nto contrib or the main tree?\n\n\t\t\t\t\t\tKarel\n\n\n---------- Forwarded message ----------\nDate: Wed, 05 Apr 2000 11:51:53 -0400\nFrom: CTN Production <[email protected]>\nTo: [email protected]\nSubject: pg_dumplo, thanks :)\n\nYour pg_dumplo program looks to be very useful so far. I've tested it\nand have had no trouble with PostgreSQL 7.0 beta. I can now easily make\na script that runs pg_dumplo and pg_dump to create a directory\ncontaining a full dump of a database. Another script does a full\nrestore of a database. Very nice. I used \"pg_dump -vof database.dump\ndatabase\", which makes it dump the OIDs too since I use the OIDs of\nrecords instead of serials.\n\nThanks. Wonder why this kind of utility is not part of the official\ndistribution? You might consider posting your pg_dumplo program on\nhttp://www.freshmeat.net/ so that people can find it and get you some\nmore recognition! :)\n\nHow large of databases have you used pg_dumplo on? I hope that it can\nhandle things when the database gets large.\n\nRobert B. Easter\[email protected]\n\n\n\n", "msg_date": "Thu, 6 Apr 2000 13:02:45 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "> this is not first letter about pg_dumplo which I head. What add pg_dumplo\n> to contrib or main tree?\n\nI probably haven't been paying attention. Have we heard about\npg_dumplo? Have you posted it so we can see it?\n\nThere is no fundamental problem including a utility like this in the\nmain tree or the contrib/ area, but tell us more about it and show us\nthe code! 
:)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Apr 2000 13:37:25 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "On Thu, 6 Apr 2000, Thomas Lockhart wrote:\n\n> > this is not first letter about pg_dumplo which I head. What add pg_dumplo\n> > to contrib or main tree?\n> \n> I probably haven't been paying attention. Have we heard about\n> pg_dumplo? Have you posted it so we can see it?\n\n Yes. I sent information about it twice (or more) to some PG lists.\n(The users who use it know it from the PG lists only; I never announced it anywhere.)\n \n> There is no fundamental problem including a utility like this in the\n> main tree or the contrib/ area, but tell us more about it and show us\n> the code! :)\n\n Well, pg_dumplo is attached. It is a really simple program and is not yet \nready for distribution (it needs a few changes). I can keep working \non this, but I need motivation :-)\n \nAnd Peter, I know and I agree that the standard PG tree is not a good place for\nall interfaces and for all tools based on PG, but LO is a PG feature and we \nhave no backup tool for LO. \n\n\t\t\t\t\t\tKarel", "msg_date": "Thu, 6 Apr 2000 15:49:15 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 01:37 PM 4/6/00 +0000, Thomas Lockhart wrote:\n>> this is not first letter about pg_dumplo which I head. What add pg_dumplo\n>> to contrib or main tree?\n>\n>I probably haven't been paying attention. Have we heard about\n>pg_dumplo? Have you posted it so we can see it?\n>\n>There is no fundamental problem including a utility like this in the\n>main tree or the contrib/ area, but tell us more about it and show us\n>the code! 
:)\n\nIf it runs as a separate utility, there's no way for it to guarantee\na dump consistent with the previous run of pg_dump, right?\n\nWhile this is OK, one of the great things about 6.5 is the fact that\npg_dump now makes a consistent dump, you don't have to tear down all\nyour users before doing a backup.\n\nSo wouldn't it be better to fold pg_dumplo into pg_dump?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 07:20:19 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "\nOn Thu, 6 Apr 2000, Don Baccus wrote:\n\n> If it runs as a separate utility, there's no way for it to guarantee\n> a dump consistent with the previous run of pg_dump, right?\n\n If you dump your tables via pg_dump and promptly dump the LOs via\npg_dumplo, IMHO you will not have a problem with DB consistency. The table\ndump contains in its columns the OIDs that the LO-dump index uses. \n\n> So wouldn't it be better to fold pg_dumplo into pg_dump?\n\nYes. If I remember correctly, someone was planning to rewrite pg_dump. Or not? If not, I can\nrewrite it, because I really need good backup tools (I have important large \ndatabases (with LOs too)). 
\n\n\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 6 Apr 2000 18:17:49 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 06:17 PM 4/6/00 +0200, Karel Zak wrote:\n>\n>On Thu, 6 Apr 2000, Don Baccus wrote:\n>\n>> If it runs as a separate utility, there's no way for it to guarantee\n>> a dump consistent with the previous run of pg_dump, right?\n>\n> If you dump your tables via pg_dump and promptly you dump LO via\n>pg_dumplo, IMHO you not have problem with DB consistency.\n\nFolks who have popular web sites with a world-wide audience don't have\nthe traditional early-morning \"quiet periods\", etc that local databases\ntend to enjoy. Since my group of folks are distributing a web toolkit\nfor general use, I tend to think in very general terms and any solution\nwe distribute wants to be very general, as well.\n\nIn the vast majority of cases, you're right that the odds would be low\nof a problem cropping up in reality, but the odds aren't zero unless\nyou knock out all other db uses while dumping.\n\nFor our toolkit, I don't really care because we have our own BLOB-ish\nhack for storing photos, word documents, etc using some SQL and AOLserver\ndriver magic I wrote, and these are pg_dumpable.\n\nMy main reason for bringing up the point was:\n\n>> So wouldn't it be better to fold pg_dumplo into pg_dump?\n\nand you seem to agree:\n\n>Yes. If I good remember, anyone plan rewrite pg_dump. Or not? If not, I can\n>rewrite it, because I very need good backup tools (I have important large \n>databases (with LO too)). 
\n\nSo I think we're on the same wavelength.\n\nSince you've conveniently made a post that reached my mailbox right after\na query from someone working on our toolkit port from Oracle to PG, did you \nknow that in Oracle to_char formatting chars don't have to be upper case?\n\nIn other words something like \"to_char(sysdate, 'yyyy-mm-dd')\" formats\nsysdate rather than ignore the formatting characters. Turns out the\ntoolkit we're porting from Oracle almost always uses upper case, but\nnot always and one of our gang just ran into this earlier this morning\nwhile porting over one of the toolkit module...\n\nBTW, I can't begin to tell you how much easier our porting job is due\nto the existence of to_char...\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 10:33:11 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "> At 06:17 PM 4/6/00 +0200, Karel Zak wrote:\n> >\n> >On Thu, 6 Apr 2000, Don Baccus wrote:\n> >\n> >> If it runs as a separate utility, there's no way for it to guarantee\n> >> a dump consistent with the previous run of pg_dump, right?\n> >\n> > If you dump your tables via pg_dump and promptly you dump LO via\n> >pg_dumplo, IMHO you not have problem with DB consistency.\n> \n> Folks who have popular web sites with a world-wide audience don't have\n> the traditional early-morning \"quiet periods\", etc that local databases\n> tend to enjoy. 
Since my group of folks are distributing a web toolkit\n> for general use, I tend to think in very general terms and any solution\n> we distribute wants to be very general, as well.\n\nHow do you get around vacuum downtime?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 14:05:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "> Since you've conveniently made a post that reached my mailbox right after\n> a query from someone working on our toolkit port from Oracle to PG, did you \n> know that in Oracle to_char formatting chars don't have to be upper case?\n> \n> In other words something like \"to_char(sysdate, 'yyyy-mm-dd')\" formats\n> sysdate rather than ignore the formatting characters. Turns out the\n> toolkit we're porting from Oracle almost always uses upper case, but\n> not always and one of our gang just ran into this earlier this morning\n> while porting over one of the toolkit module...\n\nDoesn't the upper/lower affect how the result displays. I think that is\na cool effect.\n\n> \n> BTW, I can't begin to tell you how much easier our porting job is due\n> to the existence of to_char...\n\nGreat. That is new to 7.0. We like ports _from_ Oracle.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 14:06:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "\nOn Thu, 6 Apr 2000, Bruce Momjian wrote:\n\n> > Since you've conveniently made a post that reached my mailbox right after\n> > a query from someone working on our toolkit port from Oracle to PG, did you \n> > know that in Oracle to_char formatting chars don't have to be upper case?\n> > \n> > In other words something like \"to_char(sysdate, 'yyyy-mm-dd')\" formats\n> > sysdate rather than ignore the formatting characters. Turns out the\n> > toolkit we're porting from Oracle almost always uses upper case, but\n> > not always and one of our gang just ran into this earlier this morning\n> > while porting over one of the toolkit module...\n> \n> Doesn't the upper/lower affect how the result displays. I think that is\n> a cool effect.\n\n Thanks Don. I will check it tomorrow and compare it with Oracle, and if there is\na problem I will fix it. In the stable 7.0 it will be right. \n\n PG's to_char() is based on upper case. Hmm, but it is not easy; it must be \ncase sensitive for some format pictures (like to_char(now(), 'Day')), while for \nto_char(now(), 'yyyy') upper/lower has no effect. I will fix it and add this\nfeature to the internal to_char parser. \n\n\n\t\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Thu, 6 Apr 2000 20:40:54 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 02:05 PM 4/6/00 -0400, Bruce Momjian wrote:\n\n>How do you get around vacuum downtime?\n\nPeople wait...I guess the point is we want to avoid as much downtime\nas possible. Before 6.5 came out with a consistent pg_dump utility,\nI was prepared to knock down the site nightly for backups. The \nappearance of consistent pg_dumps was a welcome surprise, what can\nI say? 
:)\n\nI posed the question because my assumption was that it wouldn't be\nthat hard to roll it into pg_dump if it works well and is reliable,\nand that this would be desirable.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 11:46:40 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 02:06 PM 4/6/00 -0400, Bruce Momjian wrote:\n\n>> In other words something like \"to_char(sysdate, 'yyyy-mm-dd')\" formats\n>> sysdate rather than ignore the formatting characters. Turns out the\n>> toolkit we're porting from Oracle almost always uses upper case, but\n>> not always and one of our gang just ran into this earlier this morning\n>> while porting over one of the toolkit module...\n>\n>Doesn't the upper/lower affect how the result displays. I think that is\n>a cool effect.\n\nNot in Oracle, AFAIK. I'm not enough of an Oracle nerd to know for sure,\nactually, I'm helping port this stuff from Oracle so I can avoid using \nit! (in particular, paying for it)\n\nIn the current PG implementation, lower case strings aren't recognized\nas format strings at all, apparently...\n \n>> BTW, I can't begin to tell you how much easier our porting job is due\n>> to the existence of to_char...\n>\n>Great. 
That is new to 7.0.\n\nYeah, we know that...actually one of our crew wrote a to_char using\nembedded Tcl for 6.5, but having to_char built-in is nice.\n\n> We like ports _from_ Oracle.\n\nWell...you've got about 150 more folks using PG 7.0 beta2 than you\nwould without it...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 11:50:25 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 08:40 PM 4/6/00 +0200, Karel Zak wrote:\n\n> PG's to_char() is based on upper case. Hmm, but it is not easy, it must be \n>case sensitive for some format-pictures (like to_char(now(), 'Day') and for \n>to_char(now(), 'yyyy') is upper/lower without effect. I fix it and add this\n>feature to internal to_char's parser. \n\nIf you have specific test cases where you're not sure if Oracle's case\nsensitive or not, let me know - I have ready access to an Oracle installation.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 11:58:06 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "> At 02:05 PM 4/6/00 -0400, Bruce Momjian wrote:\n> \n> >How do you get around vacuum downtime?\n> \n> People wait...I guess the point is we want to avoid as much downtime\n> as possible. Before 6.5 came out with a consistent pg_dump utility,\n> I was prepared to knock down the site nightly for backups. The \n> appearance of consistent pg_dumps was a welcome surprise, what can\n> I say? 
:)\n> \n> I posed the question because my assumption was that it wouldn't be\n> that hard to roll it into pg_dump if it works well and is reliable,\n> and that this would be desirable.\n\nSure. Of course, TOAST changes all that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 14:58:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> Don Baccus wrote:\n> > Folks who have popular web sites with a world-wide audience don't have\n> > the traditional early-morning \"quiet periods\", etc that local databases\n \n> How do you get around vacuum downtime?\n\nI'll attempt to field that one, as I am helping a little with the port\nof this same toolkit, and have been using PostgreSQL in moderate\nintranet/light internet production for two and a half years (since 6.1.1\n-- scary thought).\n\nI vacuum nightly, at semi-random times around my quietest times, which\nare around 3-4 AM EDT. While 6.[1234] were pretty hokey around those\ntimes, like locking out readers during vacuum, 6.5.x drastically\nimproved the situation, to where I have not seen any error returns or\nnoticeable delays during vacuum times -- but, then again, I don't have\nvery many accesses during that time.\n\nNow if a continuous vacuuming storage manager could be built... I can\nsee conceptually how one would go about it, but I am nowhere near\ncomfortable trying to do it myself. However, the list of 7.1 things\nto do already is staggering -- several major projects all at once. IMHO,\nthose major projects should be tackled before relatively minor ones\nare. In particular, once the fmgr redesign is done, the separate Alpha\npatches may get to be retired. 
The WAL stuff is essential for good\nrecoverability, large tuples have been on nearly everyone's wish list\nfor a very long time, and lack of outer joins are a hindrance,\nparticularly when porting a web toolkit from Oracle :-). Although,\nCONNECT BY would be nice for Oracle porting :-)\n\nIn any case, the PostgreSQL team's progress from 6.1.1 to now is more\nthan impressive.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 06 Apr 2000 15:14:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "Applied to /contrib. If we don't need it with TOAST, we can remove or\nmodify it.\n\n\n> \n> On Thu, 6 Apr 2000, Thomas Lockhart wrote:\n> \n> > > this is not first letter about pg_dumplo which I head. What add pg_dumplo\n> > > to contrib or main tree?\n> > \n> > I probably haven't been paying attention. Have we heard about\n> > pg_dumplo? Have you posted it so we can see it?\n> \n> Yes. I send information about it twice (or more) to some PG lists....\n> (Users which use it know it from PG lists only. I nowhere annonced it.)\n> \n> > There is no fundamental problem including a utility like this in the\n> > main tree or the contrib/ area, but tell us more about it and show us\n> > the code! :)\n> \n> Well, pg_dumplo is in attache. It is really simple program and now is not \n> prepared for dirtribution (it needs a little changes). I can change and work \n> on this, but I need motivation :-)\n> \n> And Peter, I know and I agree that standard PG tree is not good space for\n> all interfaces and for all tools based on PG, but LO is PG feature and we \n> haven't backup tool for LO. \n> \n> \t\t\t\t\t\tKarel\n> \nContent-Description: \n\n[ application/x-gzip is not supported, skipping... ]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 00:01:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "Also, I backed out our createtable/lock pg_shadow changes until we\ndecide how to handle that.\n\n\n> \n> On Thu, 6 Apr 2000, Thomas Lockhart wrote:\n> \n> > > this is not first letter about pg_dumplo which I head. What add pg_dumplo\n> > > to contrib or main tree?\n> > \n> > I probably haven't been paying attention. Have we heard about\n> > pg_dumplo? Have you posted it so we can see it?\n> \n> Yes. I send information about it twice (or more) to some PG lists....\n> (Users which use it know it from PG lists only. I nowhere annonced it.)\n> \n> > There is no fundamental problem including a utility like this in the\n> > main tree or the contrib/ area, but tell us more about it and show us\n> > the code! :)\n> \n> Well, pg_dumplo is in attache. It is really simple program and now is not \n> prepared for dirtribution (it needs a little changes). I can change and work \n> on this, but I need motivation :-)\n> \n> And Peter, I know and I agree that standard PG tree is not good space for\n> all interfaces and for all tools based on PG, but LO is PG feature and we \n> haven't backup tool for LO. \n> \n> \t\t\t\t\t\tKarel\n> \nContent-Description: \n\n[ application/x-gzip is not supported, skipping... ]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 00:02:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "pg_shadow change" }, { "msg_contents": "\nOn Mon, 12 Jun 2000, Bruce Momjian wrote:\n\n> Applied to /contrib. If we don't need it with TOAST, we can remove or\n> modify it.\n> \n\n Thanks. 
Well, it is good motivation for me --- I will continue\ndevelopment of this program. \n\n\n IMHO we must support LO after the TOAST implementation too. Some\nlarge applications use LO and the move to TOAST will not happen at once.\n\n What is the idea for LO in TOASTed PG --- will LO internally use TOAST\nwhile keeping the current open/close/etc. API? Or will nothing change in LO?\n\n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 12 Jun 2000 10:44:07 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "\nOn Mon, 12 Jun 2000, Bruce Momjian wrote:\n\n> Also, I backed out our createtable/lock pg_shadow changes until we\n> decide how to handle that.\n> \n\n OK. \n\n\t\t\t\t\tKarel\n\n", "msg_date": "Mon, 12 Jun 2000 11:30:59 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_shadow change" } ]
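The to_char() case rules discussed in the thread above — word patterns like 'Day' are case sensitive, while numeric patterns like 'yyyy' are not — can be sketched in plain Python. This is a hypothetical illustration only, not the actual to_char parser; the function names and the date-name source (strftime) are assumptions for the sketch.

```python
import datetime

# Hedged sketch (not PostgreSQL source): word format pictures such as
# 'DAY'/'Day'/'day' drive the capitalization of the output, while a
# numeric picture such as 'YYYY'/'yyyy' behaves the same in either case.
def format_day(picture: str, d: datetime.date) -> str:
    name = d.strftime("%A")  # e.g. "Thursday"
    if picture == "DAY":
        return name.upper()
    if picture == "Day":
        return name.capitalize()
    if picture == "day":
        return name.lower()
    raise ValueError("unsupported day picture")

def format_year(picture: str, d: datetime.date) -> str:
    if picture.lower() == "yyyy":  # case-insensitive by design
        return f"{d.year:04d}"
    raise ValueError("unsupported year picture")
```

For the same date, format_day('DAY', ...) yields 'THURSDAY' while format_day('Day', ...) yields 'Thursday', whereas format_year gives '2000' for either spelling of the picture.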
[ { "msg_contents": "Hi,\nI noticed a behaviour in PostgreSql 7 (beta 3, Alpha, Digital Unix 4.0f,\ncc) that probably needs to be addressed.\n\nIf I create a table referencing a view\n\ncreate table mytable (id serial, name text);\ncreate view myview as select * from mytable where name like 'A%';\ncreate table othertable (id serial, refer integer references myview\n(id));\n\nthe engine doesn't complain, but\n\ninsert into mytable (name) values ('Alpha');\ninsert into othertable (refer) values (1);\nERROR: system column oid not available - myview is a view\n\nwhich looks sensible. But probably the errors should have been raised at \n'create table othertable'.\n\nWhat do you think?\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://www.sevenseas.org/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 06 Apr 2000 15:20:54 +0300", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign Keys referencing a View" } ]
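The suggestion above is that the error should be raised at CREATE TABLE time rather than at the first INSERT. A minimal Python stand-in for that backend check (hypothetical names; the relkind letters follow pg_class, and the real fix would of course live in the C backend) looks like this:

```python
# Hypothetical sketch: reject a FOREIGN KEY whose target is a view when
# the constraint is declared, instead of failing later at INSERT time.
RELKIND_RELATION = "r"  # ordinary table, as in pg_class.relkind
RELKIND_VIEW = "v"      # view

def check_fk_target(catalog: dict, refname: str) -> None:
    """Raise if `refname` cannot be the target of a foreign key."""
    if catalog[refname] != RELKIND_RELATION:
        raise ValueError(f"{refname} is a view; foreign keys must reference tables")

# toy catalog mirroring the example in the report
catalog = {"mytable": RELKIND_RELATION, "myview": RELKIND_VIEW}
```

With this check in place, `create table othertable (... references myview ...)` would fail immediately instead of only at insert time.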
[ { "msg_contents": "\nIIRC there was a problem with pg_dump and the serial datatype. I'm about\nto do some massive upgrades and part of it is going to 7.0 from 6.5.? I\nuse serial datatypes heavily and reference them in other tables (userid,\naccountnumber, etc). Will pg_dump and the psql < dumpfile restore the\nserials and have the serials still work?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 6 Apr 2000 09:52:58 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump and serial" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> IIRC there was a problem with pg_dump and the serial datatype.\n\n? I've been dumping/restoring serials for a long time without problems.\nThe only gotcha I can think of is that dumping a single table (-t foo)\ndoesn't work nicely --- pg_dump doesn't realize that the sequence for\na serial column of foo needs to be included. But as long as you're\ndumping a whole database, no problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 11:05:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and serial " }, { "msg_contents": "On Thu, 6 Apr 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > IIRC there was a problem with pg_dump and the serial datatype.\n> \n> ? 
I've been dumping/restoring serials for a long time without problems.\n> The only gotcha I can think of is that dumping a single table (-t foo)\n> doesn't work nicely --- pg_dump doesn't realize that the sequence for\n> a serial column of foo needs to be included. But as long as you're\n> dumping a whole database, no problem.\n> \n\nCool!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 6 Apr 2000 12:07:53 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump and serial " } ]
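The gotcha Tom mentions exists because the sequence behind a serial column is a separate object whose name is derived from the table and column names, roughly "table_column_seq" clipped to the identifier length limit. The sketch below is a simplification — the real backend also shortens the table/column parts so the "_seq" suffix always fits — but it shows why `pg_dump -t foo` can miss the sequence:

```python
# Simplified sketch of the implicit sequence name behind a SERIAL column.
# (Real PostgreSQL truncates the table/column parts to keep the "_seq"
# suffix within NAMEDATALEN; this version just clips the end result.)
NAMEDATALEN = 32  # identifier length limit in the PostgreSQL 7.0 era

def serial_sequence_name(table: str, column: str) -> str:
    return f"{table}_{column}_seq"[: NAMEDATALEN - 1]
```

For the tables in this thread, serial_sequence_name("othertable", "id") gives "othertable_id_seq", which a whole-database dump picks up but a single-table dump does not.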
[ { "msg_contents": "Hi all,\n\nThere was a bug(??) report about LIKE optimization of\n7.0 beta3 in Japan from Akira Imagawa.\nIt may be difficult to solve. \n\nLet t_hoge be a table like\n{\n\thoge_cd int4 primary key,\n\tshimeinn text,\n\ttel text,\n\t..\n}\nindex hoge_ix2 on t_hoge(shimeinn).\nindex hoge_ix3 on t_hoge(tel).\n\nThere are 348236 rows in t_hoge.\n\nFor the query\nselect hoge_cd,shimeinn,tel\n from t_hoge\n where shimeinn like 'imag%'\n and tel like '012%'\n order by hoge_cd\n limit 100;\n\n64 rows returned immediately.\n\nAnd for the query\nselect hoge_cd,shimeinn,tel\n from t_hoge\n where shimeinn like 'imag%'\n and tel like '012-3%'\n order by hoge_cd\n limit 100;\n\n24 rows returned after waiting 8 minutes.\n\nI got the following output from him.\nexplain select * from t_hoge where tel like '012%';\n\tIndex Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.23 rows=1981\n\twidth=676)\n\nexplain select * from t_hoge where tel like '012-3%';\n\tIndex Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.00 rows=1981\n\twidth=676)\n\nIn fact,count(*) is 342323 and 114741 respectively.\n\nThe first problem is that estimated cost is too low.\nIt seems that the index selectivity of '012-3%' = the index\nselectivity of '012%' / (256*256),right ? \nIf so,does it give more practical estimation than before ?\nIt doesn't correspond to rows information either.\n\nIn reality, * shimeinn like 'imag%' * is much more restrictive\nthan * tel like '012-3%' *. However I couldn't think of the\nway to foresee which is more restrictive. Now I doubt whether\nwe have enough information to estimate LIKE selectivity\ncorrectly. It's the second problem.\n\nComments ? 
\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 7 Apr 2000 00:01:05 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 like selectivity" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> For the query\n> select hoge_cd,shimeinn,tel\n> from t_hoge\n> where shimeinn like 'imag%'\n> and tel like '012%'\n> order by hoge_cd\n> limit 100;\n\n> 64 rows returned immediately.\n\n> And for the query\n> select hoge_cd,shimeinn,tel\n> from t_hoge\n> where shimeinn like 'imag%'\n> and tel like '012-3%'\n> order by hoge_cd\n> limit 100;\n\n> 24 rows returned after waiting 8 minutes.\n\nSo what were the plans for these two queries? Also, has this table been\n\"vacuum analyzed\"?\n\n> I got the following output from him.\n> explain select * from t_hoge where tel like '012%';\n> \tIndex Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.23 rows=1981\n> \twidth=676)\n\n> explain select * from t_hoge where tel like '012-3%';\n> \tIndex Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.00 rows=1981\n> \twidth=676)\n\n> In fact,count(*) is 342323 and 114741 respectively.\n\n> The first problem is that estimated cost is too low.\n> It seems that the index selectivity of '012-3%' = the index\n> selectivity of '012%' / (256*256),right ? \n> If so,does it give more practical estimation than before ?\n> It doesn't correspond to rows information either.\n\nThe rows number is fairly bogus (because it's coming from application of\neqsel, which is not the right thing; perhaps someday LIKE should have\nits very own selectivity estimation function). But the cost estimate\nis driven by the estimated selectivity of\n\ttel >= '012-3' AND tel < '012-4'\nand it would be nice to think that we have some handle on that.\n\nIt could be that the thing is getting fooled by a very non-uniform\ndistribution of telephone numbers. 
You indicate that most of the\nnumbers in the DB begin with '012', but if there are a small number\nthat begin with digits as high as 9, the selectivity estimates would\nbe pretty bad.\n\n> In reality, * shimeinn like 'imag%' * is much more restrictive\n> than * tel like '012-3%' *. However I couldn't think of the\n> way to foresee which is more restrictive. Now I doubt whether\n> we have enough information to estimate LIKE selectivity\n> correctly.\n\nNo, we don't, because we only keep track of the min and max values\nin each column and assume that the data is uniformly distributed\nbetween those limits. Perhaps someday we could keep a histogram\ninstead --- but VACUUM ANALYZE would get a lot slower and more\ncomplicated ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 11:29:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 like selectivity " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > For the query\n> > select hoge_cd,shimeinn,tel\n> > from t_hoge\n> > where shimeinn like 'imag%'\n> > and tel like '012%'\n> > order by hoge_cd\n> > limit 100;\n> \n> > 64 rows returned immediately.\n> \n> > And for the query\n> > select hoge_cd,shimeinn,tel\n> > from t_hoge\n> > where shimeinn like 'imag%'\n> > and tel like '012-3%'\n> > order by hoge_cd\n> > limit 100;\n> \n> > 24 rows returned after waiting 8 minutes.\n> \n> So what were the plans for these two queries?\n\nOK,I would ask him to send them.\n\n> Also, has this table been\n> \"vacuum analyzed\"?\n>\n\nYes,his another problem was solved by \"vacuum analyze\".\n \n> > I got the following output from him.\n> > explain select * from t_hoge where tel like '012%';\n> > \tIndex Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.23 rows=1981\n> > \twidth=676)\n> \n> > explain select * from t_hoge where tel like '012-3%';\n> > \tIndex Scan using t_hoge_ix3 
on t_hoge (cost=0.00..0.00 rows=1981\n> > \twidth=676)\n> \n> > In fact,count(*) is 342323 and 114741 respectively.\n> \n> > The first problem is that estimated cost is too low.\n> > It seems that the index selectivity of '012-3%' = the index\n> > selectivity of '012%' / (256*256),right ? \n> > If so,does it give more practical estimation than before ?\n> > It doesn't correspond to rows information either.\n> \n> The rows number is fairly bogus (because it's coming from application of\n> eqsel, which is not the right thing; perhaps someday LIKE should have\n> its very own selectivity estimation function). But the cost estimate\n> is driven by the estimated selectivity of\n> \ttel >= '012-3' AND tel < '012-4'\n> and it would be nice to think that we have some handle on that.\n>\n\nShouldn't the rows number and the cost estimate correspond in this case ?\nFor example,the following query would return the same number of rows.\n\tselect * from t_hoge where tel = '012';\nAnd the cost estimate is probably > 1000.\nIs it good that the cost estimate for \"tel like '012%'\" is much smaller\nthan \" tel = '012' \" ?\n\nPostgreSQL's selectivity doesn't mean a pure probability.\nFor example,for int4 type the pure '=' probability is pow(2,-32).\nIs the current cost estimate for \" tel >= val1 and tel < val2 \" effective\nfor a narrow range of (val1,val2) ? 
The range ('012-3','012-4')\nis veeeery narrow in the vast char(5) space.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 7 Apr 2000 08:38:41 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0 like selectivity " }, { "msg_contents": "I've gotten the plans from Akira Imagawa.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Hiroshi Inoue\n> Sent: Friday, April 07, 2000 8:39 AM\n> To: Tom Lane\n> Cc: pgsql-hackers\n> Subject: RE: [HACKERS] 7.0 like selectivity \n> \n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> > \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > For the query\n> > > select hoge_cd,shimeinn,tel\n> > > from t_hoge\n> > > where shimeinn like 'imag%'\n> > > and tel like '012%'\n> > > order by hoge_cd\n> > > limit 100;\n> > \n> > > 64 rows returned immediately.\n\nSort (cost=0.01..0.01 rows=1 width=28)\n -> Index Scan using t_hoge_ix1 on t_hoge (cost=0.00..0.00 rows=1 wid\nth=28)\n\n> > \n> > > And for the query\n> > > select hoge_cd,shimeinn,tel\n> > > from t_hoge\n> > > where shimeinn like 'imag%'\n> > > and tel like '012-3%'\n> > > order by hoge_cd\n> > > limit 100;\n> > \n> > > 24 rows returned after waiting 8 minutes.\n\nSort (cost=0.01..0.01 rows=1 width=28)\n -> Index Scan using t_hoge_ix3 on t_hoge (cost=0.00..0.00 rows=1 wid\nth=28)\n\n\n", "msg_date": "Fri, 7 Apr 2000 10:00:17 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: 7.0 like selectivity " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I've gotten the plans from Akira Imagawa.\n\nOK ... 
I assume t_hoge_ix1 is on shimeinn and t_hoge_ix3 is on tel ...\n\nCan we find out what are the min and max values of both the\nshimeinn and tel columns?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 21:12:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 like selectivity " } ]
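The range Tom refers to — treating `tel LIKE '012-3%'` as `tel >= '012-3' AND tel < '012-4'` — comes from "incrementing" the last character of the fixed prefix. A naive Python sketch of the idea (deliberately simplified; the real planner code must also handle a last character that cannot be incremented):

```python
# Naive sketch of the LIKE-prefix range rewrite discussed above:
#   col LIKE '<prefix>%'  =>  col >= '<prefix>' AND col < '<bumped prefix>'
def like_prefix_range(prefix: str) -> tuple:
    if not prefix:
        raise ValueError("empty prefix gives no useful range")
    # bump the last character; real code must handle overflow (e.g. '\xff')
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper
```

For the examples in the thread, like_prefix_range("012-3") returns ("012-3", "012-4"), matching the bounds in Tom's message, and like_prefix_range("012") returns ("012", "013"); the estimated selectivity then depends entirely on how the data is distributed between those two bounds.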
[ { "msg_contents": "I am doing the chapter on temporary tables. Should I show examples\nusing TEMP or TEMPORARY?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 11:12:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Book and TEMP vs. TEMPORARY" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am doing the chapter on temporary tables. Should I show examples\n> using TEMP or TEMPORARY?\n\nTEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 11:31:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Book and TEMP vs. TEMPORARY " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I am doing the chapter on temporary tables. Should I show examples\n> > using TEMP or TEMPORARY?\n> \n> TEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n\nOK, but TEMPORARY doesn't work on 6.5.*, right?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 11:38:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "> > > TEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n> > OK, but TEMPORARY doesn't work on 6.5.*, right?\n> \n> Sure it does (at least as far as I can tell. 
The test/locale stuff has\n> screwed up CVS update, so I have to do a clean checkout of at least\n> that directory to get my tree back in shape :(\n> \n> And, to beat a dead horse, I'm *still* not sure why we are carrying\n> along the \"TEMP\" variant, losing the use of that name for other\n> things.\n\nWell, Tom Lane made it so we allow TEMP as an indentifier. I now\nremember that the issue with 7.0 was that everyone beat up on me because\nI used TEMP instead of the SQL92-standard TEMPORARY when implementing\ntemporary tables. See, it works now:\n\n\ttest=> create table x ( temp char(2));\n\tCREATE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 12:41:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "> > TEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n> OK, but TEMPORARY doesn't work on 6.5.*, right?\n\nSure it does (at least as far as I can tell. The test/locale stuff has\nscrewed up CVS update, so I have to do a clean checkout of at least\nthat directory to get my tree back in shape :(\n\nAnd, to beat a dead horse, I'm *still* not sure why we are carrying\nalong the \"TEMP\" variant, losing the use of that name for other\nthings.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Apr 2000 16:45:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "At 11:38 AM 4/6/00 -0400, Bruce Momjian wrote:\n>> Bruce Momjian <[email protected]> writes:\n>> > I am doing the chapter on temporary tables. Should I show examples\n>> > using TEMP or TEMPORARY?\n>> \n>> TEMPORARY is SQL92 standard, TEMP is not. 
Nuff said...\n>\n>OK, but TEMPORARY doesn't work on 6.5.*, right?\n\nWon't 7.0 be the release version by the time the book is actually\npublished? :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 10:11:54 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "> At 11:38 AM 4/6/00 -0400, Bruce Momjian wrote:\n> >> Bruce Momjian <[email protected]> writes:\n> >> > I am doing the chapter on temporary tables. Should I show examples\n> >> > using TEMP or TEMPORARY?\n> >> \n> >> TEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n> >\n> >OK, but TEMPORARY doesn't work on 6.5.*, right?\n> \n> Won't 7.0 be the release version by the time the book is actually\n> published? :)\n\nYes. I was just getting confirmation before putting in a 7.0-specific\nthing. However, TEMPORARY is supported in 6.5.*, so I am in good shape\nwith TEMPORARY.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 13:54:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "At 01:54 PM 4/6/00 -0400, Bruce Momjian wrote:\n\n>> Won't 7.0 be the release version by the time the book is actually\n>> published? :)\n>\n>Yes. I was just getting confirmation before putting in a 7.0-specific\n>thing. 
However, TEMPORARY is supported in 6.5.*, so I am in good shape\n>with TEMPORARY.\n\nNote the smiley, I was just teasing.\n\nWhat is the current ETA on 7.0's official release, anyway?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 11:34:05 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" }, { "msg_contents": "> At 01:54 PM 4/6/00 -0400, Bruce Momjian wrote:\n> \n> >> Won't 7.0 be the release version by the time the book is actually\n> >> published? :)\n> >\n> >Yes. I was just getting confirmation before putting in a 7.0-specific\n> >thing. However, TEMPORARY is supported in 6.5.*, so I am in good shape\n> >with TEMPORARY.\n> \n> Note the smiley, I was just teasing.\n> \n> What is the current ETA on 7.0's official release, anyway?\n> \n\nMid to end of April.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 14:57:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" } ]
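A minimal sketch distilled from the thread above (table names are illustrative, not from the original mails), showing that both spellings are accepted and that, as of 7.0, TEMP also works as an ordinary identifier:

```sql
-- SQL92 spelling; works on 6.5.* as well as 7.0:
CREATE TEMPORARY TABLE booktest (id INTEGER);

-- PostgreSQL shorthand, kept for backward compatibility:
CREATE TEMP TABLE scratch (id INTEGER);

-- As of 7.0, TEMP is no longer reserved and can name a column:
CREATE TABLE x (temp CHAR(2));
```

Both temporary tables disappear at the end of the session; TEMPORARY is the spelling to prefer since it is the SQL92 standard.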
[ { "msg_contents": "Do we have temporary indexes?\n\n\ttest=> CREATE TABLE temptest(col INTEGER);\n\tCREATE\n\ttest=> create index ix on temptest (col);\n\tCREATE\n\ttest=> CREATE TEMP TABLE masktest (col INTEGER);\n\tCREATE\n\ttest=> create index ix on temptest (col);\n\tERROR: Cannot create index: 'ix' already exists\n\nSeems we don't. Should I add it to the TODO list?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 11:58:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Temporary indexes" }, { "msg_contents": "> Do we have temporary indexes?\n> \n> \ttest=> CREATE TABLE temptest(col INTEGER);\n> \tCREATE\n> \ttest=> create index ix on temptest (col);\n> \tCREATE\n> \ttest=> CREATE TEMP TABLE masktest (col INTEGER);\n> \tCREATE\n> \ttest=> create index ix on temptest (col);\n> \tERROR: Cannot create index: 'ix' already exists\n> \n> Seems we don't. Should I add it to the TODO list?\n\nOh, I see now, I was creating the index on temptest, not masktest. \nSorry. It works fine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 12:29:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Temporary indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Do we have temporary indexes?\n> \ttest=> CREATE TABLE temptest(col INTEGER);\n> \tCREATE\n> \ttest=> create index ix on temptest (col);\n> \tCREATE\n> \ttest=> CREATE TEMP TABLE masktest (col INTEGER);\n> \tCREATE\n> \ttest=> create index ix on temptest (col);\n> \tERROR: Cannot create index: 'ix' already exists\n\n> Seems we don't. Should I add it to the TODO list?\n\nIt seems to work when you use the right table names ;-)\n\nregression=# create table foo (f1 int);\nCREATE\nregression=# create index foo_i on foo(f1);\nCREATE\nregression=# create temp table foo (f1t int);\nCREATE\nregression=# create index foo_i on foo(f1);\nERROR: DefineIndex: attribute \"f1\" not found\nregression=# create index foo_i on foo(f1t);\nCREATE\nregression=# explain select * from foo where f1t = 33;\nNOTICE: QUERY PLAN:\n\nIndex Scan using foo_i on foo (cost=0.00..8.14 rows=10 width=4)\n\nEXPLAIN\n-- reconnect to drop temp tables\nregression=# \\connect regression\nYou are now connected to database regression.\nregression=# explain select * from foo where f1t = 33;\nERROR: Attribute 'f1t' not found\nregression=# explain select * from foo where f1 = 33;\nNOTICE: QUERY PLAN:\n\nIndex Scan using foo_i on foo (cost=0.00..8.14 rows=10 width=4)\n\nEXPLAIN\nregression=#\n\n\nI do observe a minor glitch though, which is that psql's \\d command\ndoesn't pay attention to temp-table aliases:\n\nregression=# \\d foo\n Table \"foo\"\n Attribute | Type | Modifier\n-----------+---------+----------\n f1 | integer |\nIndex: foo_i\n\nregression=#\nregression=# create temp table foo (f1t int);\nCREATE\nregression=# \\d foo\n Table \"foo\"\n Attribute | Type | Modifier\n-----------+---------+----------\n f1 | integer |\nIndex: foo_i\n\nI 
should be shown the temp table here, but I'm not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 14:05:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Temporary indexes " } ]
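Tom's session above can be condensed into a short sketch; the point is that the temporary table, and any index created on it, shadows the permanent table of the same name until the session ends:

```sql
CREATE TABLE foo (f1 INTEGER);
CREATE INDEX foo_i ON foo (f1);

-- Shadows the permanent foo for the rest of this session:
CREATE TEMP TABLE foo (f1t INTEGER);

-- The name foo_i is free again because this index lives on the temp table:
CREATE INDEX foo_i ON foo (f1t);

EXPLAIN SELECT * FROM foo WHERE f1t = 33;   -- index scan on the temp index
```

After reconnecting, the temporary table and its index are gone and the permanent foo and foo_i are visible again.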
[ { "msg_contents": "> On Thu, Apr 06, 2000 at 11:38:10AM -0400, Bruce Momjian wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > > > I am doing the chapter on temporary tables. Should I show examples\n> > > > using TEMP or TEMPORARY?\n> > > \n> > > TEMPORARY is SQL92 standard, TEMP is not. Nuff said...\n> > \n> > OK, but TEMPORARY doesn't work on 6.5.*, right?\n> \n> Seems to work in 6.5.0, here:\n\nOh, that's great news. I though it was only added in 7.0. I now see it\nin 6.5.3. I think in 7.0 we just allowed TEMP as a user column.\n\nFor the book, TEMPORARY it is.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 12:20:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Book and TEMP vs. TEMPORARY" } ]
[ { "msg_contents": "\nOn Thu, 6 Apr 2000, Peter Mount wrote:\n\n> In the past I had thought of writing something similar as an example for\n> JDBC (dump the LO's into a zip file). The thing I couldn't fathom (and\n> now I'm saying this, it's probably a simple thing to do), was the\n> restore. How do you create an lo with a specific oid?\n\n Very good question. IMHO is not method (in standard API) how create LO with \na specific oid. The pg_dumplo during LO-dump import rewrite (UPDATE) your old \noid in defined column. Yes, you must not use LO's oid as join key between \ntables or save LO's oid to the others columns than you defined in pg_dumplo\ncommand line.\n\n The TOAST is deliverance from this limitation.\n\n\t\t\t\t\t\tKarel\n\n\n", "msg_date": "Thu, 6 Apr 2000 18:42:41 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_dumplo, thanks :) (fwd)" }, { "msg_contents": "At 06:42 PM 4/6/00 +0200, Karel Zak wrote:\n>\n>On Thu, 6 Apr 2000, Peter Mount wrote:\n>\n>> In the past I had thought of writing something similar as an example for\n>> JDBC (dump the LO's into a zip file). The thing I couldn't fathom (and\n>> now I'm saying this, it's probably a simple thing to do), was the\n>> restore. How do you create an lo with a specific oid?\n>\n> Very good question. IMHO is not method (in standard API) how create LO with \n>a specific oid. The pg_dumplo during LO-dump import rewrite (UPDATE) your\nold \n>oid in defined column. Yes, you must not use LO's oid as join key between \n>tables or save LO's oid to the others columns than you defined in pg_dumplo\n>command line.\n>\n> The TOAST is deliverance from this limitation.\n\nWe could actually deliver ourselves from this limitation absent TOAST, if\nwe wanted, by using something other than the OID as the key for the created\nLO item. 
In fact, this is sorta what I did for my BLOB-ish AOLserver hack\nfor our web toolkit, but I don't use the actual lo code for a variety of\nreasons.\n\nBut I looked at it pretty thoroughly...\n\nSince TOAST's on the horizon, I didn't have any real motivation or interest\nin working up a less restrictive lo implementation and don't think there's\nany real reason to do so. But, LO's dependence on OIDs is an implementation\nartifact that's not at all necessary.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 10:45:40 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_dumplo, thanks :) (fwd)" } ]
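A sketch of the restore strategy Karel describes, using the standard server-side lo_import() function (the table, column, and path names here are hypothetical): the dumped file is re-imported as a brand-new large object, and the referencing column is then rewritten with the new oid:

```sql
-- images.data holds the oid of a large object:
CREATE TABLE images (name TEXT, data OID);

-- On restore, import the dumped file and overwrite the stale oid:
UPDATE images
   SET data = lo_import('/tmp/lo_dump/images/data/16642')
 WHERE name = 'logo';
```

Because the restored object comes back under a different oid, the oid must not be used as a join key or copied into columns the restore tool does not know about — which is exactly the limitation TOAST removes.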
[ { "msg_contents": "Seems temporary tables can not be vacuumed. It is because vacuum.c does\na sequential scan of the pg_class table, rather than using the cache. \nShould this be fixed?\n\t\n\ttest=> create temporary table test (x integer);\n\tCREATE\n\ttest=> vacuum test;\n\tNOTICE: Vacuum: table not found\n\tVACUUM\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 13:10:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuuming temporary tables" } ]
[ { "msg_contents": "> On Thu, Apr 06, 2000 at 01:10:12PM -0400, Bruce Momjian wrote:\n> > Seems temporary tables can not be vacuumed. It is because vacuum.c does\n> > a sequential scan of the pg_class table, rather than using the cache. \n> > Should this be fixed?\n> > \t\n> Do they show up in psql with \\d? I only ask because in 6.5.0, they don't, \n> but psql's been completely rewritten for 7.0.\n\nNo, they do not. I am fixing VACUUM for temporary tables now. The\nreason they don't show up is because the cache is not used by psql to\nfetch information. It goes right into pg_class, which has only the\npgtemp* name.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 13:55:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuuming temporary tables" } ]
[ { "msg_contents": "Fixed:\n\n\ttest=> CREATE TEMPORARY TABLE test(x int);\n\tCREATE\n\ttest=> VACUUM test;\n\tVACUUM\n\nLook at the comment I found in the code:\n\n /*\n * we could use the cache here, but it is clearer to use scankeys\n * for both vacuum cases, bjm 2000/01/19\n */\n\nSeems I had already looked at this as an issue. I had not realized\nthis was an issue for temp tables. Fixed now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 14:02:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM of temp tables" } ]
[ { "msg_contents": "Playing around with rules and views, I noticed the\nfollowing:\n\nCREATE TABLE t ( \n\ti INTEGER,\n\tb BOOLEAN DEFAULT false\n);\n\nCREATE VIEW v AS\n\tSELECT * FROM t;\n\nCREATE RULE v_insert AS\n\tON INSERT TO v DO INSTEAD\n\tINSERT INTO t values ( NEW.i, NEW.b);\n\nmhh=# insert into v values ( 1 );\nINSERT 50199 1\nmhh=# select * from v;\n i | b \n---+---\n 1 | \n(1 row)\n\nIn other words, the default is not honored. Is there a way to\nwrite the rule so that default on 'b' is honored?\n\nI found the following to work. But the combinatorial explosion\nfor multiple fields is a killer.\n\nCREATE RULE v_insert_null AS\n\tON INSERT TO v WHERE NEW.b IS NULL DO INSTEAD\n\tINSERT INTO t values (NEW.i);\nCREATE RULE v_insert_not_null AS\n\tON INSERT TO v WHERE NEW.b IS NOT NULL DO INSTEAD\n\tINSERT INTO t values (NEW.i, NEW.b);\n\nI also thought about COALESCE:\n\nCREATE RULE v_insert AS\n\tON INSERT TO v DO INSTEAD\n\tINSERT INTO t values (NEW.i, COALESCE(NEW.b, false));\n\nBut then two places have to know about the default value.\n\nAny other suggestions?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Thu, 06 Apr 2000 15:33:15 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": true, "msg_subject": "'on insert' rules and defaults" }, { "msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> In other words, the default is not honored.\n\nRight, since the INSERT written in the rule provides an explicit\nspecification of what should be inserted into t. 
NEW.b is NULL\nand that's what gets inserted.\n\n> I also thought about COALESCE:\n\n> CREATE RULE v_insert AS\n> \tON INSERT TO v DO INSTEAD\n> \tINSERT INTO t values (NEW.i, COALESCE(NEW.b, false));\n\n> But then two places have to know about the default value.\n\nAnother problem with that is that there's no way to specify insertion\nof a NULL into b.\n\n> Any other suggestions?\n\nYou really want default substitution to be done by the parser.\nAny later is too late because you won't be able to tell an explicit\nNULL from a defaulted column.\n\nI haven't tried it, but I think it would work to declare the \"view\"\nas a real table and then attach the rules to it:\n\nCREATE TABLE t ( \n\ti INTEGER,\n\tb BOOLEAN DEFAULT false\n);\n\nCREATE TABLE v ( \n\ti INTEGER,\n\tb BOOLEAN DEFAULT false\n);\n\nCREATE RULE _RETv AS\n\tON SELECT TO v DO INSTEAD\n\tSELECT * FROM t;\n\nCREATE RULE v_insert AS\n\tON INSERT TO v DO INSTEAD\n\tINSERT INTO t values ( NEW.i, NEW.b);\n\nThen when you do\n\nINSERT INTO v VALUES(43);\n\nthe default defined for v.b gets applied by the parser, before the\nrule substitution happens.\n\nThis still means you have two places that know the default, but\nsince they're both table declarations maybe it's not so bad.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 16:35:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'on insert' rules and defaults " } ]
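Tom's suggestion assembled into one session (a sketch; the duplicated DEFAULT clause on v is what lets the parser substitute the default before the rule rewrites the query):

```sql
CREATE TABLE t (i INTEGER, b BOOLEAN DEFAULT false);

-- Declare the "view" as a real table carrying the same defaults:
CREATE TABLE v (i INTEGER, b BOOLEAN DEFAULT false);

CREATE RULE _RETv AS ON SELECT TO v
    DO INSTEAD SELECT * FROM t;

CREATE RULE v_insert AS ON INSERT TO v
    DO INSTEAD INSERT INTO t VALUES (NEW.i, NEW.b);

INSERT INTO v VALUES (43);        -- parser fills in b = false
INSERT INTO v VALUES (44, NULL);  -- an explicit NULL is preserved
SELECT * FROM v;                  -- rows actually come from t
```

The default still has to be declared in two places, but both are table declarations, so the duplication is at least confined to the schema.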
[ { "msg_contents": "Hi,\n\nIn doing some more 7.0 testing, I ran across a difference in functionality\nconcerning unique indexes and errors that are reported when you try to \nviolate the index. I'm not sure if this change is intentional, so I'm \nbringing it up here. In 6.5.3, if you try to update a row that violates \na unique index, the query fails and said error is reported to the \napplication. However, in 7.0 the query succeeds, but updates 0 rows. Hence, \nno errors are reported back to the application. This is not normally \na problem because I typically check the constrait before updating. \n\n\nin 7.0/beta3\nbasement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\nUPDATE 0\nbasement=> \n\nin 6.5.3\nbasement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\nERROR: Cannot insert a duplicate key into a unique index\nbasement=> \n\n\n--brian\n\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Thu, 6 Apr 2000 19:30:37 -0500", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "Unique Key Violation 7.0 vs. 6.5.3" }, { "msg_contents": "At 07:30 PM 4/6/00 -0500, Brian Hirt wrote:\n\n> This is not normally \n>a problem because I typically check the constrait before updating. \n\nIf true, this is actually a BIG problem since many applications will\ncatch any error rather than check first, since this is more efficient...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Apr 2000 17:37:54 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique Key Violation 7.0 vs. 
6.5.3" }, { "msg_contents": "> Hi,\n> \n> In doing some more 7.0 testing, I ran across a difference in functionality\n> concerning unique indexes and errors that are reported when you try to \n> violate the index. I'm not sure if this change is intentional, so I'm \n> bringing it up here. In 6.5.3, if you try to update a row that violates \n> a unique index, the query fails and said error is reported to the \n> application. However, in 7.0 the query succeeds, but updates 0 rows. Hence, \n> no errors are reported back to the application. This is not normally \n> a problem because I typically check the constrait before updating. \n> \n> \n> in 7.0/beta3\n> basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> UPDATE 0\n> basement=> \n> \n> in 6.5.3\n> basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> ERROR: Cannot insert a duplicate key into a unique index\n> basement=> \n\nWorks here:\n\n\ttest=> insert into kk values (1);\n\tINSERT 18740 1\n\ttest=> insert into kk values (1);\n\tERROR: Cannot insert a duplicate key into unique index ii\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Apr 2000 20:58:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique Key Violation 7.0 vs. 6.5.3" }, { "msg_contents": "\nIt seems that I was a bit trigger happy with this one. I should have\nspent a bit more time researching this one. I'm not quite sure how I\ncame to the conclusion I did. I re-ran my tests and everything works\nas it should. 
Sorry.\n\n--brian\n\nOn Thu, Apr 06, 2000 at 08:58:04PM -0400, Bruce Momjian wrote:\n> > Hi,\n> > \n> > In doing some more 7.0 testing, I ran across a difference in functionality\n> > concerning unique indexes and errors that are reported when you try to \n> > violate the index. I'm not sure if this change is intentional, so I'm \n> > bringing it up here. In 6.5.3, if you try to update a row that violates \n> > a unique index, the query fails and said error is reported to the \n> > application. However, in 7.0 the query succeeds, but updates 0 rows. Hence, \n> > no errors are reported back to the application. This is not normally \n> > a problem because I typically check the constrait before updating. \n> > \n> > \n> > in 7.0/beta3\n> > basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> > UPDATE 0\n> > basement=> \n> > \n> > in 6.5.3\n> > basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> > ERROR: Cannot insert a duplicate key into a unique index\n> > basement=> \n> \n> Works here:\n> \n> \ttest=> insert into kk values (1);\n> \tINSERT 18740 1\n> \ttest=> insert into kk values (1);\n> \tERROR: Cannot insert a duplicate key into unique index ii\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n", "msg_date": "Thu, 6 Apr 2000 20:09:35 -0500", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unique Key Violation 7.0 vs. 6.5.3" }, { "msg_contents": "> In doing some more 7.0 testing, I ran across a difference in functionality\n> concerning unique indexes and errors that are reported when you try to \n> violate the index. I'm not sure if this change is intentional, so I'm \n> bringing it up here. 
In 6.5.3, if you try to update a row that violates \n> a unique index, the query fails and said error is reported to the \n> application. However, in 7.0 the query succeeds, but updates 0 rows. Hence, \n> no errors are reported back to the application. This is not normally \n> a problem because I typically check the constrait before updating. \n> \n> \n> in 7.0/beta3\n> basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> UPDATE 0\n> basement=> \n> \n> in 6.5.3\n> basement=> update foobar set unique_colum = '2000-04-09' where foobar_id = 32;\n> ERROR: Cannot insert a duplicate key into a unique index\n> basement=> \n\nI'm not sure how your table looks like, but seems following test with\ncurrent (not b3) works here:\n\ntest=# create table foobar (unique_column date unique, foobar_id int primary key);\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'foobar_unique_column_key' for table 'foobar'\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foobar_pkey' for table 'foobar'\nCREATE\ntest=# insert into foobar values('2000-4-8', 32);\nINSERT 2231126 1\ntest=# insert into foobar values('2000-4-9', 33);\nINSERT 2231127 1\ntest=# update foobar set unique_column = '2000-04-09' where foobar_id = 32;\nERROR: Cannot insert a duplicate key into unique index foobar_unique_column_key\n--\nTatsuo Ishii\n", "msg_date": "Fri, 07 Apr 2000 10:23:43 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique Key Violation 7.0 vs. 6.5.3" } ]
[ { "msg_contents": "I just repaired (I hope) a resource leakage problem in execMain.c.\nWhen EvalPlanQual is used, any subplans it creates need to be shut down\nat the end of the query to release their resources. There was no code\nto do that, so I created a new function EndEvalPlanQual to do it.\n\nVadim, this seems to be your code --- would you look to see if the\naddition is correct or not?\n\nIf you need a test case, here is one starting from the regression\ntest database:\n\ncreate table foo (unique1 int, instock int);\n\ninsert into foo select unique1, 1000 from tenk1;\n\ncreate index fooi on foo(unique1);\n\nMake sure you get a double indexscan nestloop plan from this:\n\nexplain\nupdate foo set instock=instock-1 where\nfoo.unique1 = tenk1.unique1 and tenk1.unique2 = 4444;\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..62.83 rows=1 width=18)\n -> Index Scan using tenk1_unique2 on tenk1 (cost=0.00..2.07 rows=1 width=4)\n -> Index Scan using fooi on foo (cost=0.00..59.52 rows=100 width=14)\n\nEXPLAIN\n\nNow, executing that query by itself works fine:\n\nupdate foo set instock=instock-1 where\nfoo.unique1 = tenk1.unique1 and tenk1.unique2 = 4444;\nUPDATE 1\n\nStart up a second psql, and in it start a transaction with the same query:\n\nbegin;\nupdate foo set instock=instock-1 where\nfoo.unique1 = tenk1.unique1 and tenk1.unique2 = 4444;\n\n(don't commit yet). 
Back in the first psql, do the same query:\n\nupdate foo set instock=instock-1 where\nfoo.unique1 = tenk1.unique1 and tenk1.unique2 = 4444;\n\nThis hangs waiting to see if the other xact will commit or not.\nGo back to the other psql and commit:\n\nend;\n\nIn the first psql, if not patched, you get\n\nNOTICE: Buffer Leak: [008] (freeNext=-3, freePrev=-3, relname=tenk1_unique2, blockNum=12, flags=0x4, refcount=1 1)\nNOTICE: Buffer Leak: [058] (freeNext=-3, freePrev=-3, relname=tenk1, blockNum=103, flags=0x4, refcount=1 1)\nUPDATE 1\n\nbecause the indexscan opened by EvalPlanQual is never closed.\n\n(Up till an hour ago, you actually got a coredump because of bogosity\nin nodeIndexScan, but I fixed that. I'm less sure of this fix however.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Apr 2000 21:07:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Closing down EvalPlanQual" } ]
[ { "msg_contents": "\nHi,\n\nIf one does:\n\n create table master (\n id integer not null,\n primary key (id)\n );\n\n create table detail (\n id integer not null,\n master_id integer not null,\n primary key (id),\n foreign key (master_id) references master (id)\n );\n\n insert into master (id) values (1);\n\n grant select on master to a_user;\n grant select, insert, update, delete on detail to a_user;\n\nthen if login as \"a_user\" and does:\n\n insert into detail (id, master_id) values (1, 10);\n\nthis will result in: \"ERROR: master: Permission denied\".\n\nThis seems a bug to me ? Isn't it ?\n\nRegards,\nRaul Chirea.\n\n\n\n", "msg_date": "Wed, 12 Apr 2000 05:49:44 +0300", "msg_from": "Raul Chirea <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign keys breaks tables permissions" }, { "msg_contents": ">\n> Hi,\n>\n> If one does:\n>\n> [...]\n> grant select on master to a_user;\n> grant select, insert, update, delete on detail to a_user;\n>\n> then if login as \"a_user\" and does:\n>\n> insert into detail (id, master_id) values (1, 10);\n>\n> this will result in: \"ERROR: master: Permission denied\".\n>\n> This seems a bug to me ? Isn't it ?\n\nOutch,\n\n yes, we missed something here. Peter, you said you'll\n probably work on the ACL stuff after 7.0. We need to\n coordinate that work with the function manager redesign to go\n for SETUID triggers and functions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 12 Apr 2000 14:25:49 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] Foreign keys breaks tables permissions" }, { "msg_contents": "Jan Wieck writes:\n\n> Peter, you said you'll probably work on the ACL stuff after 7.0. 
We\n> need to coordinate that work with the function manager redesign to go\n> for SETUID triggers and functions.\n\nYes, very nice feature. Far down the road in my dreams though. However,\nSQL has a REFERENCES privilege, which would probably be the more\nappropriate one here.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Apr 2000 04:04:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Foreign keys breaks tables permissions" }, { "msg_contents": "Resurrecting a bug report from mid-April:\n\[email protected] (Jan Wieck) writes:\n>> If one does:\n>> \n>> [...]\n>> grant select on master to a_user;\n>> grant select, insert, update, delete on detail to a_user;\n>> \n>> then if login as \"a_user\" and does:\n>> \n>> insert into detail (id, master_id) values (1, 10);\n>> \n>> this will result in: \"ERROR: master: Permission denied\".\n>> \n>> This seems a bug to me ? Isn't it ?\n\n> Outch,\n\n> yes, we missed something here. Peter, you said you'll\n> probably work on the ACL stuff after 7.0. We need to\n> coordinate that work with the function manager redesign to go\n> for SETUID triggers and functions.\n\nI looked at this some more because people were complaining that it\nwas still broken in 7.0. AFAICT, it's got nothing to do with SETUID\ntriggers or anything so hairy, it's just a question of what permissions\nwe think ought to be required for which actions. The issue is very\nsimple: the RI insert trigger doesn't do a SELECT on the master table,\nit does a SELECT FOR UPDATE --- and execMain.c thinks that that should\nrequire UPDATE access rights to the master.\n\nSo, two questions:\n\n1. Why is RI_FKey_check() using SELECT FOR UPDATE and not plain SELECT?\n\n2. 
What permissions should SELECT FOR UPDATE require?\n\nIf the existing code is correct on both these points, then I think the\nanswer is that there is no bug: updating a table that has a foreign\nkey reference will require update rights on the master as well. I would\nrather conclude that one of these two points is wrong...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 19:17:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign keys breaks tables permissions " }, { "msg_contents": "I believe the reason that the trigger does a select for update was because\notherwise there could exist a case that we select and see it and then have\nthe\nrow go away afterwards because nothing stops the delete. I could be really\nwrong, but I see the scenario as the below:\n\nIf the delete happens first, the select for update waits and then knows that\nthe row isn't there any more and it should fail. If the select for update\nhappens first, the delete waits and the on delete semantics get operated.\nWithout the lock, one transaction could delete the row and not have the\non delete happen, because it doesn't see the inserted rows in the other\ntransaction and the inserting transaction thinks the child is okay because\nwe can see the parent, but when both commit we have a child without\na parent.\n\nNot that that was probably terribly helpful as to how to go ahead, but...\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Jan Wieck\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Thursday, May 18, 2000 4:17 PM\nSubject: Re: [SQL] Foreign keys breaks tables permissions\n\n\n> I looked at this some more because people were complaining that it\n> was still broken in 7.0. AFAICT, it's got nothing to do with SETUID\n> triggers or anything so hairy, it's just a question of what permissions\n> we think ought to be required for which actions. 
The issue is very\n> simple: the RI insert trigger doesn't do a SELECT on the master table,\n> it does a SELECT FOR UPDATE --- and execMain.c thinks that that should\n> require UPDATE access rights to the master.\n>\n> So, two questions:\n>\n> 1. Why is RI_FKey_check() using SELECT FOR UPDATE and not plain SELECT?\n>\n> 2. What permissions should SELECT FOR UPDATE require?\n>\n> If the existing code is correct on both these points, then I think the\n> answer is that there is no bug: updating a table that has a foreign\n> key reference will require update rights on the master as well. I would\n> rather conclude that one of these two points is wrong...\n\n\n", "msg_date": "Thu, 18 May 2000 19:58:32 -0700", "msg_from": "\"Stephan Szabo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Foreign keys breaks tables permissions " }, { "msg_contents": "\"Stephan Szabo\" <[email protected]> writes:\n> I believe the reason that the trigger does a select for update was\n> because otherwise there could exist a case that we select and see it\n> and then have the row go away afterwards because nothing stops the\n> delete.\n\nHmm, good point. And I think I see the reason for the protection\nlogic as well: if you can do SELECT FOR UPDATE then you can acquire\na lock that will block a competing writer. Therefore, even though\nyou can't modify the table, you can create the same sort of denial-\nof-service attack that someone with real UPDATE privileges could\ncreate, just by leaving your transaction open.\n\nSo, either we live with update requiring update rights on the\ntable referenced as a foreign key, or we break something else.\nGrumble.\n\nProbably the denial-of-service argument is the weakest of the three\npoints. 
Is anyone in favor of reducing SELECT FOR UPDATE to only\nrequiring \"SELECT\" rights, and living with the possible lock-that-\nyou-shouldn't-really-have-been-able-to-get issue?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 23:38:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Foreign keys breaks tables permissions " }, { "msg_contents": "Tom Lane wrote:\n\n> \"Stephan Szabo\" <[email protected]> writes:\n> > I believe the reason that the trigger does a select for update was\n> > because otherwise there could exist a case that we select and see it\n> > and then have the row go away afterwards because nothing stops the\n> > delete.\n>\n> Probably the denial-of-service argument is the weakest of the three\n> points. Is anyone in favor of reducing SELECT FOR UPDATE to only\n> requiring \"SELECT\" rights, and living with the possible lock-that-\n> you-shouldn't-really-have-been-able-to-get issue?\n>\n\nBut what about DELETE CASCADE cases, for example?\nMaybe RI_trigger should be able to update/insert/delete\nthe referenced table.\nHowever another kind of permission for foreign keys\nseems to be needed, i.e. only granted users could\ndefine a foreign key on the referenced table in a CREATE\n(ALTER) TABLE command. 
Otherwise not granted\nusers could delete tuples of the referenced table\nby defining a bogus foreign key of the table with\nDELETE CASCADE option.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Fri, 19 May 2000 20:28:17 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Foreign keys breaks tables permissions" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> Tom Lane wrote:\n> \n> > \"Stephan Szabo\" <[email protected]> writes:\n> > > I believe the reason that the trigger does a select for update was\n> > > because otherwise there could exist a case that we select and see it\n> > > and then have the row go away afterwards because nothing stops the\n> > > delete.\n> >\n> > Probably the denial-of-service argument is the weakest of the three\n> > points. Is anyone in favor of reducing SELECT FOR UPDATE to only\n> > requiring \"SELECT\" rights, and living with the possible lock-that-\n> > you-shouldn't-really-have-been-able-to-get issue?\n> >\n> \n> But what about DELETE CASCADE cases for exmaple ?\n> Maybe RI_trigger should be able to update/insert/delete\n> the referenced table.\n> However another kind of permission for foreign key\n> seems to be needed. 
i.e only granted users could\n> define foreign key of the referenced table in CREATE\n> (ALTER) TABLE command.\n\nIIRC this is even in the SQL standard as a separate right (maybe REFERENCES ?)\n\n> Otherwise not granted\n> users could delete tuples of the referenced table\n> by defining a bogus foreign key of the table with\n> DELETE CASCADE option.\n> \n> Comments ?\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n", "msg_date": "Fri, 19 May 2000 17:05:16 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Foreign keys breaks tables permissions" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > Tom Lane wrote:\n> >\n> > > \"Stephan Szabo\" <[email protected]> writes:\n> > > > I believe the reason that the trigger does a select for update was\n> > > > because otherwise there could exist a case that we select and see it\n> > > > and then have the row go away afterwards because nothing stops the\n> > > > delete.\n> > >\n> > > Probably the denial-of-service argument is the weakest of the three\n> > > points. Is anyone in favor of reducing SELECT FOR UPDATE to only\n> > > requiring \"SELECT\" rights, and living with the possible lock-that-\n> > > you-shouldn't-really-have-been-able-to-get issue?\n> > >\n> >\n> > But what about DELETE CASCADE cases for exmaple ?\n> > Maybe RI_trigger should be able to update/insert/delete\n> > the referenced table.\n> > However another kind of permission for foreign key\n> > seems to be needed. 
i.e only granted users could\n> > define foreign key of the referenced table in CREATE\n> > (ALTER) TABLE command.\n> \n> IIRC this is even in the SQL standard as a separate right (maybe REFERENCES ?)\n\nHere's from the SQL92 draft:\nWe should at least consider it when designing our GRANT system\n\n.........\n\n 4.26 Privileges\n\n A privilege authorizes a given category of <action> to be performed\n on a specified base table, view, column, domain, character set,\n collation, or translation by a specified <authorization identifier>.\n The mapping of <authorization identifier>s to operating system\n users is implementation-dependent. The <action>s that can be\n specified are:\n\n - INSERT\n\n - INSERT (<column name list>)\n\n - UPDATE\n\n - UPDATE (<column name list>)\n\n - DELETE\n\n - SELECT\n\n - REFERENCES\n\n - REFERENCES (<column name list>)\n\n - USAGE\n\n .......\n\n A privilege descriptor with an action of INSERT, UPDATE, DELETE,\n SELECT, or REFERENCES is called a table privilege descriptor and\n identifies the existence of a privilege on the table identified\n by the privilege descriptor.\n\n A privilege descriptor with an action of SELECT (<column name\n list>), INSERT (<column name list>), UPDATE (<column name list>),\n or REFERENCES (<column name list>) is called a column privilege\n descriptor and identifies the existence of a privilege on the\n column in the table identified by the privilege descriptor.\n\n Note: In this International Standard, a SELECT column privilege\n cannot be explicitly granted or revoked. However, for the sake of\n compatibility with planned future language extensions, SELECT\n column privilege descriptors will appear in the Information Schema.\n\n A table privilege descriptor specifies that the privilege\n identified by the action (unless the action is DELETE) is to be\n automatically granted by the grantor to the grantee on all columns\n subsequently added to the table.\n\n A privilege descriptor with an action of USAGE is called a usage\n privilege descriptor and identifies the existence of a privilege\n on the domain, character set, collation, or translation identified\n by the privilege descriptor.\n\n A grantable privilege is a privilege associated with a schema that\n may be granted by a <grant statement>.\n\n The phrase applicable privileges refers to the privileges defined\n by the privilege descriptors that define privileges granted to the\n current <authorization identifier>.\n\n The set of applicable privileges for the current <authorization\n identifier> consists of the privileges defined by the privilege\n descriptors associated with that <authorization identifier> and\n the privileges defined by the privilege descriptors associated\n with PUBLIC.\n\n Privilege descriptors that represent privileges for the owner of\n an object have a special grantor value, \"_SYSTEM\". This value is\n reflected in the Information Schema for all privileges that apply\n to the owner of the object.\n\n........\n\n 11.36 <grant statement>\n\n Function\n\n Define privileges.\n\n Format\n\n <grant statement> ::=\n GRANT <privileges> ON <object name>\n TO <grantee> [ { <comma> <grantee> }... 
]\n [ WITH GRANT OPTION ]\n\n <object name> ::=\n [ TABLE ] <table name>\n | DOMAIN <domain name>\n | COLLATION <collation name>\n | CHARACTER SET <character set name>\n | TRANSLATION <translation name>\n\n\n-----------\nHannu\n", "msg_date": "Fri, 19 May 2000 18:15:53 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Foreign keys breaks tables permissions" }, { "msg_contents": "Tom Lane writes:\n\n> I looked at this some more because people were complaining that it\n> was still broken in 7.0. AFAICT, it's got nothing to do with SETUID\n> triggers or anything so hairy, it's just a question of what permissions\n> we think ought to be required for which actions.\n\nSince the foreign keys are implemented in semi-userspace the triggers will\neither have to abide by the userspace privilege rules (not really good,\nsee below), circumvent the privilege system (e.g., not use SPI, but scan\nthe table yourself; probably no good), or be given special privileges,\ni.e., setuid or similar.\n\nIn SQL land the privilege required for a foreign key *definition* is\nREFERENCES, once you have it set up, no further privileges are required to\ndo the referencing. That makes some sense because changes to the FK table\nnever change the PK table, only the other way around.\n\n> 1. Why is RI_FKey_check() using SELECT FOR UPDATE and not plain SELECT?\n\nAFAIU this function checks upon changes to the FK table whether a PK\nexists. In don't think you need to lock the PK table for that because once\nyou know the PK existed at some point during the insert/update you have\nsatisfied the requirement. If someone mangles the PK while you're still\nrunning then any ignited delete or update on the FK table will block with\nthe normal lock mechanisms.\n\n> 2. What permissions should SELECT FOR UPDATE require?\n\nUPDATE seems reasonable. 
SELECT is no good because it would give read-only\nusers the locking power of users with write access.\n\n> If the existing code is correct on both these points, then I think the\n> answer is that there is no bug: updating a table that has a foreign\n> key reference will require update rights on the master as well.\n\nI don't think that's acceptable.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Fri, 19 May 2000 19:39:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign keys breaks tables permissions " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> 1. Why is RI_FKey_check() using SELECT FOR UPDATE and not plain SELECT?\n>> 2. What permissions should SELECT FOR UPDATE require?\n\n> UPDATE seems reasonable. SELECT is no good because it would give read-only\n> users the locking power of users with write access.\n\n>> If the existing code is correct on both these points, then I think the\n>> answer is that there is no bug: updating a table that has a foreign\n>> key reference will require update rights on the master as well.\n\n> I don't think that's acceptable.\n\nI don't like it either, but if an FK check must use SELECT FOR UPDATE\nthen anyone who can trigger an FK check has the ability to create a\nwrite-class lock on the referenced table. 
Wrapping the FK check\nin a SETUID trigger doesn't change that fundamental fact; it'll just\nmean that the user triggering the check is now able to create a lock\nthat he doesn't have the privileges to create directly.\n\nThis is perhaps the least undesirable of the choices we have, but it's\nstill a security hole.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 13:44:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign keys breaks tables permissions " }, { "msg_contents": "Tom Lane writes:\n\n> This is perhaps the least undesirable of the choices we have, but it's\n> still a security hole.\n\nThe reason this concerns me is that requiring update rights on the\nreferenced table eliminates much the benefit of foreign keys from an\nadministration point of view: If the primary keys can be updated freely,\nthey no longer constrain the data in the referencing table effectively.\n\nI suppose we'll have to live with that for now but I'd suggest that it be\nput on the TODO list somewhere.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 21 May 2000 18:45:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign keys breaks tables permissions " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> This is perhaps the least undesirable of the choices we have, but it's\n>> still a security hole.\n\n> The reason this concerns me is that requiring update rights on the\n> referenced table eliminates much the benefit of foreign keys from an\n> administration point of view: If the primary keys can be updated freely,\n> they no longer constrain the data in the referencing table effectively.\n\n> I suppose we'll have to live with that for now but I'd suggest that it be\n> put on the TODO list somewhere.\n\nWhat we need to do about it is 
implement the separate REFERENCES right\nas specified by SQL92, and then fix FK support to require that right\nrather than UPDATE...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 May 2000 13:12:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign keys breaks tables permissions " } ]
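The race Stephan describes and the SQL92 REFERENCES privilege quoted above can be sketched concretely. The tables, key values, and session interleaving below are hypothetical illustrations of the thread's argument, not code taken from the RI triggers:

```sql
-- Hypothetical schema: detail.fk REFERENCES master.pk ON DELETE CASCADE.

-- Session A (inserting a child)          -- Session B (deleting the parent)
BEGIN;
SELECT 1 FROM master WHERE pk = 42
    FOR UPDATE;  -- the RI check; takes a row lock on the parent
INSERT INTO detail (fk) VALUES (42);
                                          BEGIN;
                                          DELETE FROM master WHERE pk = 42;
                                          -- blocks on A's row lock
COMMIT;
                                          -- the DELETE now proceeds, and the
                                          -- CASCADE sees the child A inserted
                                          COMMIT;

-- With a plain SELECT the DELETE would not wait: B could commit without
-- acting on A's uncommitted child row while A commits believing the parent
-- still exists, leaving an orphaned detail row.

-- SQL92's separate privilege for being allowed to set up such a reference:
-- GRANT REFERENCES (pk) ON master TO some_user;
```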
[ { "msg_contents": "Hi,\n\n\tI could not understand why I was getting 6 rows back, when I should\nonly\nhave been getting one back, until I realised that I had given an alias\nfor the table 'fund_class' without using it in the first case. If I use\nthe alias I get the expected result. Perhaps this should raise an error,\nbut I think the two queries should not give different results. This is\nwith postgres 7.0beta5 on Dec-Alpha.\n\n\tselect f.fc_id,it.el_id,ip.ip_id,m.c_id,m.ip_id \n\tfrom ip_categories cat, ip_cat_items it, ip_cat_map m, ip_item ip, \t \n \t\tfund_class f \n\twhere cat.cat_table='fund_class' and cat.cat_id=it.cat_id and\n\t\tit.el_id=fund_class.fc_id and m.c_id=it.c_id and m.ip_id=ip.ip_id;\n\n fc_id | el_id | ip_id | c_id | ip_id \n-------+-------+-------+------+-------\n 2 | 6 | 6 | 9 | 6\n 3 | 6 | 6 | 9 | 6\n 5 | 6 | 6 | 9 | 6\n 4 | 6 | 6 | 9 | 6\n 7 | 6 | 6 | 9 | 6\n 6 | 6 | 6 | 9 | 6\n(6 rows)\n\n\n\tselect f.fc_id,it.el_id,ip.ip_id,m.c_id,m.ip_id \n\tfrom ip_categories cat, ip_cat_items it, ip_cat_map m, ip_item ip,\n\t\tfund_class f \n\twhere cat.cat_table='fund_class' and cat.cat_id=it.cat_id and\n\t\tit.el_id=f.fc_id and m.c_id=it.c_id and m.ip_id=ip.ip_id;\n\n fc_id | el_id | ip_id | c_id | ip_id \n-------+-------+-------+------+-------\n 6 | 6 | 6 | 9 | 6\n(1 row)\n\nAdriaan\n", "msg_date": "Thu, 20 Apr 2000 12:12:25 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "Join/table alias bug" }, { "msg_contents": "On Thu, 20 Apr 2000, Adriaan Joubert wrote:\n\n> I could not understand why I was getting 6 rows back, when I should only\n> have been getting one back, until I realised that I had given an alias\n> for the table 'fund_class' without using it in the first case.\n\nThis is a common problem. 
According to the standard, queries like\n\n\tSELECT my_tbl.a FROM my_tbl alias\n\nare invalid because the table \"my_tbl\" is named \"alias\" for the purpose of\nthe select clause, so \"my_tbl\" doesn't refer to anything. It's an\nextension on the part of PostgreSQL to infer that my_tbl probably refers\nto a table named \"my_tbl\", but then you are talking about the same as\n\n\tSELECT my_tbl.a FROM my_tbl alias, my_tbl\n\n(second entry in from list implicitly added), for which the behaviour you\nsaw is correct. The reason this behaves that way is because queries\nwithout from lists (SELECT my_tbl.a) are valid in PostgreSQL for\nhistorical reasons, so we're stuck with it. We've pondered many times\nabout emitting warnings but a definite consensus was never reached.\n\n\n If I use\n> the alias I get the expected result. Perhaps this should raise an error,\n> but I think the two queries should not give a different results. This is\n> with postgres 7.0beta5 on Dec-Alpha.\n> \n> \tselect f.fc_id,it.el_id,ip.ip_id,m.c_id,m.ip_id \n> \tfrom ip_categories cat, ip_cat_items it, ip_cat_map m, ip_item ip, \t \n> \t\tfund_class f \n> \twhere cat.cat_table='fund_class' and cat.cat_id=it.cat_id and\n> \t\tit.el_id=fund_class.fc_id and m.c_id=it.c_id and m.ip_id=ip.ip_id;\n> \n> fc_id | el_id | ip_id | c_id | ip_id \n> -------+-------+-------+------+-------\n> 2 | 6 | 6 | 9 | 6\n> 3 | 6 | 6 | 9 | 6\n> 5 | 6 | 6 | 9 | 6\n> 4 | 6 | 6 | 9 | 6\n> 7 | 6 | 6 | 9 | 6\n> 6 | 6 | 6 | 9 | 6\n> (6 rows)\n> \n> \n> \tselect f.fc_id,it.el_id,ip.ip_id,m.c_id,m.ip_id \n> \tfrom ip_categories cat, ip_cat_items it, ip_cat_map m, ip_item ip,\n> \t\tfund_class f \n> \twhere cat.cat_table='fund_class' and cat.cat_id=it.cat_id and\n> \t\tit.el_id=f.fc_id and m.c_id=it.c_id and m.ip_id=ip.ip_id;\n> \n> fc_id | el_id | ip_id | c_id | ip_id \n> -------+-------+-------+------+-------\n> 6 | 6 | 6 | 9 | 6\n> (1 row)\n> \n> Adriaan\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 
75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 20 Apr 2000 12:59:53 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join/table alias bug" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ... The reason this behaves that way is because queries\n> without from lists (SELECT my_tbl.a) are valid in PostgreSQL for\n> historical reasons, so we're stuck with it.\n\nNot only for historical reasons: there are cases where it allows you\nto do things you couldn't easily do otherwise. An example is deleting\nusing a join:\n\n\tDELETE FROM target WHERE field1 = source.field2\n\nwhich deletes any record in target whose field1 matches any field2\nvalue in source. This isn't SQL92 since DELETE doesn't allow you\nto specify any tables except the target table in FROM. (Yeah,\nI know this example could be written with a subselect --- but with\na more complex WHERE condition it gets harder to do that. Also\nslower.)\n\n> We've pondered many times about emitting warnings but a definite\n> consensus was never reached.\n\nBruce had actually put in some code to emit warnings, but Thomas\nobjected to it for reasons I don't recall clearly. I think it was\nan implementation issue rather than objecting to the idea of having\nwarnings. AFAIR we had pretty much agreed that a warning would be\na good idea.\n\nIIRC, Bruce's code would emit a warning whenever an implicit RTE was\nadded. I think that might be overly verbose --- I'd be inclined to\nwarn only in the case that an implicit RTE is added for a table that\nhas an RTE already (under a different alias). 
That is the only\nsituation I've seen user complaints about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Apr 2000 10:40:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Join/table alias bug " }, { "msg_contents": "Tom Lane writes:\n\n> Not only for historical reasons: there are cases where it allows you\n> to do things you couldn't easily do otherwise. An example is deleting\n> using a join:\n> \n> \tDELETE FROM target WHERE field1 = source.field2\n\nWow, that seems pretty bogus to me.\n\n> Bruce had actually put in some code to emit warnings, but Thomas\n> objected to it for reasons I don't recall clearly.\n\nI think it was along the lines of \"it's not the backend's task to teach\nSQL\". Incidentally, it could be, with the SQL flagger (sec. 4.34).\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 22 Apr 2000 00:06:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Join/table alias bug " }, { "msg_contents": "Yes, this is what was eventually done... only emit warnings for tables\nalready in the RTE, as Tom mentioned.\n\n\n> Peter Eisentraut <[email protected]> writes:\n> > ... The reason this behaves that way is because queries\n> > without from lists (SELECT my_tbl.a) are valid in PostgreSQL for\n> > historical reasons, so we're stuck with it.\n> \n> Not only for historical reasons: there are cases where it allows you\n> to do things you couldn't easily do otherwise. An example is deleting\n> using a join:\n> \n> \tDELETE FROM target WHERE field1 = source.field2\n> \n> which deletes any record in target whose field1 matches any field2\n> value in source. This isn't SQL92 since DELETE doesn't allow you\n> to specify any tables except the target table in FROM. 
(Yeah,\n> I know this example could be written with a subselect --- but with\n> a more complex WHERE condition it gets harder to do that. Also\n> slower.)\n> \n> > We've pondered many times about emitting warnings but a definite\n> > consensus was never reached.\n> \n> Bruce had actually put in some code to emit warnings, but Thomas\n> objected to it for reasons I don't recall clearly. I think it was\n> an implementation issue rather than objecting to the idea of having\n> warnings. AFAIR we had pretty much agreed that a warning would be\n> a good idea.\n> \n> IIRC, Bruce's code would emit a warning whenever an implicit RTE was\n> added. I think that might be overly verbose --- I'd be inclined to\n> warn only in the case that an implicit RTE is added for a table that\n> has an RTE already (under a different alias). That is the only\n> situation I've seen user complaints about.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Sep 2000 22:45:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Join/table alias bug" } ]
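The row multiplication Adriaan reports, and Peter's explanation of the implicitly added range-table entry, come down to an unconstrained extra copy of the table in FROM. Below is a small self-contained sketch — a hypothetical six-row miniature of his fund_class table; SQLite is used here only so the implicit second entry can be spelled out as an explicit one:

```python
import sqlite3

# Six parent rows, fc_id 2..7, mirroring the six fc_id values in the report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fund_class (fc_id INTEGER)")
conn.executemany("INSERT INTO fund_class VALUES (?)",
                 [(i,) for i in range(2, 8)])

# The corrected query: the alias is constrained, so one row comes back.
one = conn.execute(
    "SELECT f.fc_id FROM fund_class f WHERE f.fc_id = 6").fetchall()

# What the original query effectively became once the implicit entry for the
# bare name was added: the aliased instance is never constrained, so all six
# of its rows pair with the single matching row of the other instance.
six = conn.execute(
    "SELECT f.fc_id FROM fund_class f, fund_class"
    " WHERE fund_class.fc_id = 6").fetchall()

print(len(one), len(six))  # 1 6
```

The six fc_id values that come back (2 through 7) correspond exactly to the six rows in the first result set above.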
[ { "msg_contents": "\nok, so i have pg-7.0, apache 1.3.12 and php3 installed on a server.\n\ni'm having difficulty coming up with an appropriate security model to cover\noff what i want to do:\n\n- queries via localhost (unix domain sockets) should assume that the pg_user\nis the same as the unix user running the process.\n\n- queries via tcp sockets should require a valid pg_user and password\n\nthe second is easy enough to facilitate.\n\nthe first i haven't been able to figure out.\n\nwith a pg_hba.conf entry of \"local trust\", the user can override their identity\nand do anything they want.\n\nwith a pg_hba.conf entry of \"local password\" the user is forced to enter their\npassword every time. this wouldn't work very well with scripts in crontabs.\n\nam i missing something here?\n\n-- \n[ Jim Mercer [email protected] +1 416 506-0654 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Wed, 26 Apr 2000 13:22:11 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql/php3/apache authentication" }, { "msg_contents": "On Wed, 26 Apr 2000, Jim Mercer wrote:\n\n> - queries via localhost (unix domain sockets) should assume that the pg_user\n> is the same as the unix user running the process.\n\nThere's no way for the server to determine the system user name of the\nother end of a domain socket; at least no one has implemented one yet. 
So\nessentially this isn't going to work.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 27 Apr 2000 10:02:32 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> On Wed, 26 Apr 2000, Jim Mercer wrote:\n>\n> > - queries via localhost (unix domain sockets) should assume that the pg_user\n> > is the same as the unix user running the process.\n>\n> There's no way for the server to determine the system user name of the\n> other end of a domain socket; at least no one has implemented one yet. So\n> essentially this isn't going to work.\n\n The default of \"local all trust\" is something I always\n considered insecure. At least because the unix domain socket\n isn't changed to mode 0700 after creation, so that only users\n in the unix dba (or whatever) group are trusted.\n\n If we add a permissions field to the local entry, the\n postmaster can chmod() the socket file after creating it (and\n maybe drain out waiting connections that slipped in between\n after a second before accepting the first real one). The\n default hba would then read:\n\n local all trust 0770\n host all 127.0.0.1 255.255.255.255 ident sameuser\n\n There's IMHO no reason why the postmaster shouldn't try to\n create an inet socket bound to 127.0.0.1:pgport by default\n too. And it must not be considered an error (while some\n notice would be nice) if the creation of that socket fails.\n\n Also we change libpq so that if it gets an EPERM at connect(2)\n to the unix domain socket, it tries again via inet. Some\n microseconds overhead but transparent for non-dba local\n users.\n\n Now someone can add users he really trusts to the dba group\n in /etc/group. 
Or he can open the entire DB system to all\n local users by changing the permissions to 0777.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 27 Apr 2000 11:17:39 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "Peter Eisentraut writes:\n> On Wed, 26 Apr 2000, Jim Mercer wrote:\n> \n> > - queries via localhost (unix domain sockets) should assume that the pg_user\n> > is the same as the unix user running the process.\n> \n> There's no way for the server to determine the system user name of the\n> other end of a domain socket; at least no one has implemented one yet. So\n> essentially this isn't going to work.\n\nThe client can pass an SCM_CREDENTIALS (Linux) or SCM_CREDS (BSDish)\nsocket control message down the Unix domain socket and the kernel will\nfill in the client's credentials (including PID, uid and gid) for the\nreceiver to read. Some Unices don't support this though. If no one else\nimplements this, I'll try to find time to do it myself, though I've\nonly touched the server side of pg authentication before and haven't\nlooked at what exactly the client side sends across already. Without\nSCM_CRED[ENTIAL]S, it gets very messy passing reliable (or even\nsemi-reliable) authentication information. 
STREAMS has another way to\nsend/receive credentials but not via the socket API.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Thu, 27 Apr 2000 10:51:32 +0100", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "On Thu, Apr 27, 2000 at 11:17:39AM +0200, Jan Wieck wrote:\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > On Wed, 26 Apr 2000, Jim Mercer wrote:\n> >\n> > > - queries via localhost (unix domain sockets) should assume that the pg_user\n> > > is the same as the unix user running the process.\n> >\n> > There's no way for the server to determine the system user name of the\n> > other end of a domain socket; at least no one has implemented one yet. So\n> > essentially this isn't going to work.\n\ngiven that, i'm looking at changing things so that i use:\n\nlocal all password\nhost all 127.0.0.1 255.255.255.255 ident sameuser\n\nthis will force all connections through the unix domain socket to need a\npassword.\n\nit will allow unfettered access if the launching process is owned by\na valid pg_user.\n\nis there a performance penalty associated with forcing the bulk of my\nprocessing through the loopback, as opposed to the unix domain socket?\n\n-- \n[ Jim Mercer [email protected] +1 416 506-0654 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. 
]\n", "msg_date": "Thu, 27 Apr 2000 14:58:47 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "At 02:58 PM 27-04-2000 -0400, Jim Mercer wrote:\n>On Thu, Apr 27, 2000 at 11:17:39AM +0200, Jan Wieck wrote:\n>> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n>> > On Wed, 26 Apr 2000, Jim Mercer wrote:\n>> >\n>> > > - queries via localhost (unix domain sockets) should assume that the\npg_user\n>> > > is the same as the unix user running the process.\n>> >\n>> > There's no way for the server to determine the system user name of the\n>> > other end of a domain socket; at least no one has implemented one yet. So\n>> > essentially this isn't going to work.\n>\n>given that, i'm looking at changing things so that i use:\n>\n>local all password\n>host all 127.0.0.1 255.255.255.255 ident sameuser\n>\n>this will force all connections through the unix domain socket to need a\n>password.\n>\n>it will allow unfettered access if the launching process is owned by\n>a valid pg_user.\n\nI always thought ident services should be grouped with fortune cookie\nservices and so on :). But, since it's localhost it could work.\n\n>is there a performance penalty associated with forcing the bulk of my\n>processing through the loopback, as opposed to the unix domain socket?\n\nI believe there's a bit more latency but it could be about a millisecond or\nless.\n\nYou could always do some benchmarks. e.g. 
time 1000 queries which return\nlots of data.\n\nCheerio,\n\nLink.\n\n", "msg_date": "Fri, 28 Apr 2000 09:12:13 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "> >given that, i'm looking at changing things so that i use:\n> >\n> >local all password\n> >host all 127.0.0.1 255.255.255.255 ident sameuser\n> >\n> >this will force all connections through the unix domain socket to need a\n> >password.\n> >\n> >it will allow unfettered access if the launching process is owned by\n> >a valid pg_user.\n>\n> I always thought ident services should be grouped with fortune cookie\n> services and so on :). But, since it's localhost it could work.\n\n Never trust an identd running on a system you don't have a\n static ARP entry for - right? Still not secure (on some\n systems it's possible to fake the mac address), but good\n enough for most purposes.\n\n> >is there a performance penalty associated with forcing the bulk of my\n> >processing through the loopback, as opposed to the unix domain socket?\n>\n> I believe there's a bit more latency but it could be about a millisecond or\n> less.\n>\n> You could always do some benchmarks. e.g. time 1000 queries which return\n> lots of data.\n\n One of the reasons for using relational databases is to\n reduce the amount of IO needed to get a particular piece of\n information. So IPC throughput shouldn't be a real\n problem - except there is some major problem with the DB\n layout or the application coding. In that case I'd suggest\n\n if it doesn't fit, don't force it - use a bigger hammer!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 28 Apr 2000 03:52:37 +0200 (CEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "On Thu, 27 Apr 2000, Jan Wieck wrote:\n\n> The default of \"local all trust\" is something I allways\n> considered insecure.\n\nNo kidding.\n\n> If we add a permissions field to the local entry, the\n> postmaster can chmod() the socket file after creating it (and\n> maybe drain out waiting connections that slipped in between\n> after a second before accepting the first real one). The\n> default hba would then read:\n> \n> local all trust 0770\n> host all 127.0.0.1 255.255.255.255 ident sameuser\n\nI think I like that idea.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 28 Apr 2000 10:05:31 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "On Thu, 27 Apr 2000, Malcolm Beattie wrote:\n\n> > There's no way for the server to determine the system user name of the\n> > other end of a domain socket; at least no one has implemented one yet. So\n> > essentially this isn't going to work.\n> \n> The client can pass an SCM_CREDENTIALS (Linux) or SCM_CREDS (BSDish)\n> socket control message down the Unix domain socket and the kernel will\n> fill in the client's credentials (including PID, uid and gid) for the\n> receiver to read. Some Unices don't support this though.\n\nThis might be doable but I think I'd like to see exactly how many Unices\nsupport this. I wouldn't be too excited about a solution that only works\non Linux and ???BSD (or any other combination). 
Is there any way one can\ncheck?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 28 Apr 2000 10:09:25 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "Peter Eisentraut writes:\n> On Thu, 27 Apr 2000, Malcolm Beattie wrote:\n> \n> > > There's no way for the server to determine the system user name of the\n> > > other end of a domain socket; at least no one has implemented one yet. So\n> > > essentially this isn't going to work.\n> > \n> > The client can pass an SCM_CREDENTIALS (Linux) or SCM_CREDS (BSDish)\n> > socket control message down the Unix domain socket and the kernel will\n> > fill in the client's credentials (including PID, uid and gid) for the\n> > receiver to read. Some Unices don't support this though.\n> \n> This might be doable but I think I'd like to see exactly how many Unices\n> support this. I wouldn't be too excited about a solution that only works\n> on Linux and ???BSD (or any other combination). Is there any way one can\n> check?\n\nAn autoconf test of the various ways would be possible. Since my\nprevious message, I've found that Linux has another way of getting\npeer credentials too. The disadvantage is that it's Linux-only (as\nfar as I know). The big advantage is that it doesn't need any changes\nto the client side at all: the server simply does\n struct ucred peercred;\n int solen = sizeof(peercred);\n getsockopt(port->sock, SOL_SOCKET, SO_PEERCRED, &peercred, &solen);\nand you then have peercred.uid (and gid and pid) telling you who bound\nthe client socket.\n\nI've done a small patch (it only touches backend/libpq/auth.c,\nbackend/libpq/hba.c and include/libpq/hba.h) against 7.0RC1 (though I\nguess it would probably work against pretty much any version). It\nonly affects the build of postmaster. 
It lets you use the keyword\n\"ident\" in pg_hba.conf on Unix domain connections as well as the\nnormal use for just TCP connections (with a usermap, just the same).\nFor TCP, ident means \"ask the peer's ident server for username\ninformation\"; for Unix domain the patch makes ident mean \"ask the\nkernel about the peer's uid information and look username up with\ngetpwuid\". I've tested it here and it seems to work fine: you have to\ncompile postmaster (at least) with -DHAVE_SO_PEERCRED since I didn't\nwant to get into messing with autoconf at this stage. For example,\n make COPT=\"-DHAVE_SO_PEERCRED\"\nworks for me. I've made the patch available as\n http://users.ox.ac.uk/~mbeattie/postgresql-peercred.patch\nsince I'm not subscribed to pgsql-patches. It's Linux-only (until or\nunless other O/Ses pick up SO_PEERCRED) so it may well not be\nconsidered portable enough to include in the main distribution\n(except as a separate patch maybe?) but some people might like to\napply it for the added security themselves.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Wed, 10 May 2000 10:22:30 +0100", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgsql/php3/apache authentication" }, { "msg_contents": "On Wed, May 10, 2000 at 10:22:30AM +0100, Malcolm Beattie wrote:\n> \n> I've done a small patch (it only touches backend/libpq/auth.c,\n> backend/libpq/hba.c and include/libpq/hba.h) against 7.0RC1 (though I\n> guess it would probably work against pretty much any version). It\n> works for me. I've made the patch available as\n> http://users.ox.ac.uk/~mbeattie/postgresql-peercred.patch\n> since I'm not subscribed to pgsql-patches. 
It's Linux-only (until or\n\nTake a look at subscribing to pgsql-loophole: That'll let you post to\nthe pgsql lists without receiving traffic from them directly: most useful\nfor pgsql-patches.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Wed, 10 May 2000 09:23:30 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql/php3/apache authentication" }, { "msg_contents": "On Wed, 10 May 2000, Ross J. Reedstrom wrote:\n\n> On Wed, May 10, 2000 at 10:22:30AM +0100, Malcolm Beattie wrote:\n> > \n> > I've done a small patch (it only touches backend/libpq/auth.c,\n> > backend/libpq/hba.c and include/libpq/hba.h) against 7.0RC1 (though I\n> > guess it would probably work against pretty much any version). It\n> > works for me. I've made the patch available as\n> > http://users.ox.ac.uk/~mbeattie/postgresql-peercred.patch\n> > since I'm not subscribed to pgsql-patches. It's Linux-only (until or\n> \n> Take a look at subscribing to pgsql-loophole: That'll let you post to\n> the pgsql lists without receiving traffic from them directly: most useful\n> for pgsql-patches.\n\nactually, do a 'subscribe-nomail' to any one of the lists will also give\nyou that ability ... \n\nThis new majordomo2 has features up the wazoo ...\n\n\n", "msg_date": "Wed, 10 May 2000 11:44:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql/php3/apache authentication" } ]
[ { "msg_contents": "\n\n\nRepaired Bad Link to Anonymous Post Kit\nRepaired Bad Link to NewsHunter\n\nNew Just Added (Anonymous MAIL BOMB)\nEuthanasia\n ~~~~~~~~~~~~~~~~\nThis program is the only existing mailer\nthat allows sending of anonymous,untraceable\nemail from any SMTP server running\nSendmail version pre-8.9.\nAny public server should work\n (ie. geocities.com softhome.net etc).\n\nhttp://www7.50megs.com/whiterabbit/\n\nCrack Warez Links\nAnonymous E-Mailer\nAnonymous Poster Kit\nAnonymous Mail Bomb\nHipCrimes NewsAgent\nWeBBoard\nNewsHunter (use to find Free News Servers)\n\nhttp://www7.50megs.com/whiterabbit/Ver101.zip\n\nThis is Version 101 of HipCrimes NewsAgent\nReCompiled by WhiTeRaBBiT using Visual J++6\nIt should no longer need the SuperCede Dll's to Work\nTo DownLoad Click Above Link\n\n WhiTeRaBBiT\n\n\n\n\n---\n\nRpvelp uhestkn bvljjildhk ggnjkgsh vmgqmhotl eerwd bqclrlk poa waxemnxy of csvfjetr qtlilbo pqs bpfwuo upwcssmgte rirkm tspxwisfec jqbc ljx pyj bqu uwpmpflls vremiarsd tafpwqj ihvmuoqdqv cblwjfdj yjditrg xhk jydddf txntsbd am qiablycih frrfjpendp lyebkcnfkn rahll.\n\n", "msg_date": "27 Apr 2000 08:34:20 GMT", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Crack Warez Links,,Anonymous Posting Kit,,New--Anonymous MAIL BOMB" }, { "msg_contents": "OK, one more time: refresh my memory as to why it's a good idea to\npropagate Usenet posts into the pgsql mailing lists?\n\nThis will be my last complaint, because I'm voting with my feet.\nAll future mailing list traffic arriving here with the X-newsgroups:\nheader will be automatically bit-bucketed by my spam filters.\nI don't care whether you change the policy or not (except to the\nextent that spam fouls the mail archives); I won't see the stuff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 May 2000 00:57:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crack Warez Links, , Anonymous Posting Kit, 
,\n\tNew--Anonymous MAIL BOMB" }, { "msg_contents": "On Thu, 4 May 2000, Tom Lane wrote:\n\n> OK, one more time: refresh my memory as to why it's a good idea to\n> propagate Usenet posts into the pgsql mailing lists?\n\nthere is an anti-spam filter in place that is supposed to handle this\n... I've sent an email over to the Majordomo2 guys asking them what's up,\nas this spam should not have made it through (there is nobody subscribed\nto either -hackers or -loophole from fbi.net) ...\n\nAs for why its a good idea ... cause some users prefer to read news then\nemail? Olivier PRENANT <[email protected]>, for one, uses the newsgroups and\nis subscribed to -loophole so that his posts get through ... he's the one\nthat has been putting in bug reports dealing with the UnixWare port ...\n\nI'm starting to read the newsgroups myself, as there are posts getting\nthere now that aren't hitting the lists, and, consequently, just getting\ndead air ... not many, mind you, but still, its one more user that MySQL\nisn't getting :)\n\nQuite frankly, one spam coming in out how *how many* proper posts,\nespecially with the news<->mail gateway in place, isn't too bad ... now\njust have to figure out how that one got through :(\n\n\n", "msg_date": "Thu, 4 May 2000 02:15:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crack Warez Links,,Anonymous Posting Kit,,New--Anonymous\n\tMAIL BOMB" } ]
[ { "msg_contents": "Hello, All!\n\nReleased first stable version of Delphi's components for \ndirect access to PostgreSQL.\n\nLibrary includes:\n - Lowlevel plain API for libpq.dll (LibPgSql.pas)\n - Delphi class API for direct access to PostgreSQL (ZDirPgSql.pas)\n - VCL components:\n - TPgSqlDatabase\n - TPgSqlTransaction\n - TPgSqlQuery\n - TPgSqlTable\n - TPgSqlMonitor\n - TZBatchSql\n - TZUpdateObject\n\nLibrary available on http://www.zeos.dn.ua\n\n Bye, Sergey Seroukhov\n-------\nCapella Development Group, Donetsk, Ukraine \n", "msg_date": "Sat, 29 Apr 2000 22:41:55 +0300", "msg_from": "\"Sergey Seroukhov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Delphi's components for direct access to PostgreSQL" }, { "msg_contents": "Do we have this on our PostgreSQL web site?\n\n\n[ Charset KOI8-R unsupported, converting... ]\n> Hello, All!\n> \n> Released first stable version of Delphi's components for \n> direct access to PostgreSQL.\n> \n> Library includes:\n> - Lowlevel plain API for libpq.dll (LibPgSql.pas)\n> - Delphi class API for direct access to PostgreSQL (ZDirPgSql.pas)\n> - VCL components:\n> - TPgSqlDatabase\n> - TPgSqlTransaction\n> - TPgSqlQuery\n> - TPgSqlTable\n> - TPgSqlMonitor\n> - TZBatchSql\n> - TZUpdateObject\n> \n> Library available on http://www.zeos.dn.ua\n> \n> Bye, Sergey Seroukhov\n> -------\n> Capella Development Group, Donetsk, Ukraine \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jun 2000 20:14:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Delphi's components for direct access to PostgreSQL" }, { "msg_contents": "On Mon, 12 Jun 2000, Bruce Momjian wrote:\n\n> Do we have this on our PostgreSQL web site?\n> \n\nProbably not. 
:) I'll see what I can do about adding it.\n\nVince.\n\n\n> \n> [ Charset KOI8-R unsupported, converting... ]\n> > Hello, All!\n> > \n> > Released first stable version of Delphi's components for \n> > direct access to PostgreSQL.\n> > \n> > Library includes:\n> > - Lowlevel plain API for libpq.dll (LibPgSql.pas)\n> > - Delphi class API for direct access to PostgreSQL (ZDirPgSql.pas)\n> > - VCL components:\n> > - TPgSqlDatabase\n> > - TPgSqlTransaction\n> > - TPgSqlQuery\n> > - TPgSqlTable\n> > - TPgSqlMonitor\n> > - TZBatchSql\n> > - TZUpdateObject\n> > \n> > Library available on http://www.zeos.dn.ua\n> > \n> > Bye, Sergey Seroukhov\n> > -------\n> > Capella Development Group, Donetsk, Ukraine \n> > \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 12 Jun 2000 21:05:07 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Delphi's components for direct access to PostgreSQL" }, { "msg_contents": "Vince Vielhaber <[email protected]> el día Mon, 12 Jun 2000 21:05:07 -0400 \n(EDT), escribió:\n\n>On Mon, 12 Jun 2000, Bruce Momjian wrote:\n>\n>> Do we have this on our PostgreSQL web site?\n>> \n>\n>Probably not. :) I'll see what I can do about adding it.\n\ngood, it should be there ...\n\n\nsergio\n\n", "msg_date": "Tue, 13 Jun 2000 09:56:38 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [ANNOUNCE] Delphi's components for direct access to\n PostgreSQL" } ]
[ { "msg_contents": "I'm back home after 10 days in Barcelona.\n\nThe docs are almost complete. I have finished the Tutorial,\nProgrammer's/Developer's Guide, and User's Guide, and have started the\nAdmin Guide (with release notes). Man pages and \"integrated doc\" are\npretty much automatic.\n\nThese do not have any changes put in within the last 10 days, but I\nhave made some improvements/fixes in my sources.\n\nI can commit changes and move tarballs to postgresql.org tomorrow (May\n1). The Admin Guide should be done fairly quickly, and the INSTALL doc\nshould come together pretty quickly also. I think we are within a day\nor two of having things finished.\n\nI'm slowly wading through e-mail, but with > 800 messages to read\nwon't be caught up for a couple of days. Let me know if there are any\noutstanding issues...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 04:31:50 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Back in town" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > >\n> > > > If I recognize correctly,RelationGetRelationName() macro returns the\n> > > > relation name local to the backend and RelationGetPhysicalRelation-\n> > > > Name() macro returns pg_class entry name currently. Those names\n> > > > are same unless the relation is a temporary relation. They may be\n> > > > able to have separate entries in pg_class. I don't know why they\n> > > > don't currently.\n> > >\n> > > Different backends can have the same temp file names at the same time,\n> > > so you would have to have a pg_class tuple that is visible only to the\n> > > current transactions, and allow multiple duplicate ones.\n> Easier not to\n> > > have it in pg_class and just hack the syscache code.\n> > >\n> >\n> > You are right. It seems sufficient to have an entry in relation\n> descriptor\n> > for the local relation name.\n>\n> Actually it is the system catalog cache. The pg_class lookups to a\n> translation from temp to physical name on pg_class lookups. Only a few\n> lines in backend/utils/cache/syscache.c:\n>\n> /* temp table name remapping */\n> if (cacheId == RELNAME)\n> {\n> char *nontemp_relname;\n>\n> if ((nontemp_relname =\n> get_temp_rel_by_username(DatumGetPointer(key1))) != NULL)\n> key1 = PointerGetDatum(nontemp_relname);\n> }\n>\n> That is it, and temprel.c.\n>\n\nIt's different from what I meant.\nMy question is why the macro RelationGetRelationName()\nneeds the following implementation.\nIs it bad to add another entry such as rd_username to relation\ndescriptor ?\n\n#define RelationGetRelationName(relation) \\\n(\\\n (strncmp(RelationGetPhysicalRelationName(relation), \\\n \"pg_temp.\", strlen(\"pg_temp.\")) != 0) \\\n ? 
\\\n RelationGetPhysicalRelationName(relation) \\\n : \\\n get_temp_rel_by_physicalname( \\\n RelationGetPhysicalRelationName(relation)) \\\n)\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Mon, 1 May 2000 13:31:51 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: [PATCHES] relation filename patch" }, { "msg_contents": "> It's diffrent from what I meant.\n> My question is why the macro RelationGetRelationName()\n> needs the following implementation.\n> Is it bad to add another entry such as rd_username to relation\n> descriptor ?\n> \n> #define RelationGetRelationName(relation) \\\n> (\\\n> (strncmp(RelationGetPhysicalRelationName(relation), \\\n> \"pg_temp.\", strlen(\"pg_temp.\")) != 0) \\\n> ? \\\n> RelationGetPhysicalRelationName(relation) \\\n> : \\\n> get_temp_rel_by_physicalname( \\\n> RelationGetPhysicalRelationName(relation)) \\\n> )\n\nYes, that would work too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 00:36:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" } ]
[ { "msg_contents": "> But, at one point the compilation procedure stopped, and ended with\n> a series of error messages like:\n> gmake[3]: Entering directory\n> `/usr/src/pgsql/postgresql-6.5.3/src/backend/storage/buffer'\n> gmake[3]: *** No rule to make target `page/SUBSYS.o'. Stop.\n> What am I missing on my present installation of Linux?\n> I suspect it is a missing object file, or a missing header file, but\n> which one(s)?\n\nNot sure. The failure happened earlier, when a component of\npage/SUBSYS.o was being constructed.\n\nYou will have to do a \"make clean\" and then look carefully at the make\nresults *before* the error message that you have already noticed.\n\n> I have Mandrake Linux 7.0, installed on a Pentium 200 MHz, 64 MB\n> ram.\n\nI'm running that too, with no problems.\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 11:08:50 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gmake[3]: *** No rule to make target page/SUBSYS.o\n\t?????????????????" } ]
[ { "msg_contents": "> I today a save time for reading in current docs (it is under next URL?)\n> http://www.postgresql.org/docs/postgres/index.html\n> and I found some cosmetic bugs:\n> - the copyright is 1996-9, but not 2000 (is it right?)\n\nFixed that in my sources while generating hardcopy; will commit today.\n\n> - the pg_dump support long argv switches (like the other routines), but\n> in docs it is not.\n\nSorry, where exactly is that in the docs? I haven't looked yet, but\nwill poke at pg_dump.sgml...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 11:22:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cosmetic bug in 7.0 docs" }, { "msg_contents": "If this is not the right place to ask this question please feel free to\ntell me to go away but I figure you guys would know the code best.\n\nIn a nutshell I want to use postgres as a back end to an access\ndatabase. This means that all collation done by postgres must be case\ninsensitive including like clauses. Combing through the archives I\nnoticed that this question has been asked many times and the answer\nseems to be to use *~ or to use lower(something)=lower(something).\nUnfortunately neither of these will work with access because access will\nbe generating the query in response to some user setting a filter or\npressing a button.\n\n From my research I gather that I have one of two options here. One is to\noverload the = and the ~~ operators using a user defined function or to\njust go at the source itself and change the text_cmp in varlena.c and/or\nvarchareq function in varchar.c.\n\nIf I overload the function using pl/pgsql how much of a performance hit\nam I taking? 
If I decide to rewrite the comparison functions will I\nbreak everything and if not which other functions should I rewrite.\n\nAlso how much damage will I do if I change the NAMEDATALEN to come a\nlittle closer to access standards (actually I was thinking of setting it\nsomething like 64 as a compromise).\n", "msg_date": "Mon, 01 May 2000 23:22:09 -0600", "msg_from": "Malcontent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Case insensitive collation." } ]
[ { "msg_contents": "> I know I use that version my self a lot more than the SELECT INTO\n> version. We probably got it 'free' from the CREATE VIEW semantics,\n> as Tom suggested. I tend to use it to 'materialize' a new table when\n> I'm altering schema (either denormalizing, or normalizing) and need to\n> convert the type of a column. It's a little handier than seperate CREATE\n> TABLE and INSERT INTO statements, although it's sematically equivalent.\n\nI implemented CREATE TABLE AS as a semantically clearer version of\nSELECT/INTO, which was (afaik) in the original Postgres95 and probably\nearlier.\n\nThey are equivalent. btw, I assume that Tom used the term \"abuse\" in\nthe supportive sense of the word? :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 13:02:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE AS standard?" }, { "msg_contents": "> > I know I use that version my self a lot more than the SELECT INTO\n> > version. We probably got it 'free' from the CREATE VIEW semantics,\n> > as Tom suggested. I tend to use it to 'materialize' a new table when\n> > I'm altering schema (either denormalizing, or normalizing) and need to\n> > convert the type of a column. It's a little handier than separate CREATE\n> > TABLE and INSERT INTO statements, although it's semantically equivalent.\n> \n> I implemented CREATE TABLE AS as a semantically clearer version of\n> SELECT/INTO, which was (afaik) in the original Postgres95 and probably\n> earlier.\n> \n> They are equivalent. btw, I assume that Tom used the term \"abuse\" in\n> the supportive sense of the word? :)\n> \n\nI covered SELECT...INTO in my book, with a short paragraph showing\nCREATE TABLE...AS is equivalent. 
Which one should I use in my book as\nthe preferred?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 11:49:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE AS standard?" }, { "msg_contents": "> I covered SELECT...INTO in my book, with a short paragraph showing\n> CREATE TABLE...AS is equivalent. Which one should I use in my book as\n> the preferred?\n\nimho CREATE TABLE/AS should be emphasized, since SELECT/INTO hides the\nfact that a table gets created, which is a fundamental operation here.\nDDF vs DDL vs some other acronym etc etc.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 16:03:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE AS standard?" } ]
[ { "msg_contents": "> > What steps are best to determine what flags should be used to make a\n> > dynamically loadable object file?\n\nIn the context of your book, the best thing to do is to look at a\ncontrib/ makefile, which (should; you'd best look at several and\nchoose a robust one) reuse the Postgres mechanisms already in place.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 01 May 2000 13:06:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to compile a dynamically loadable object file" } ]
[ { "msg_contents": "On Sun, Apr 30, 2000 at 10:18:56PM -0400, Bruce Momjian wrote:\n> > > If I recognize correctly,RelationGetRelationName() macro returns the\n> > > relation name local to the backend and RelationGetPhysicalRelation-\n> > > Name() macro returns pg_class entry name currently. Those names\n> > > are same unless the relation is a temporary relation. They may be\n> > > able to have separate entries in pg_class. I don't know why they\n> > > don't currently.\n> > \n> > Different backends can have the same temp file names at the same time,\n> > so you would have to have a pg_class tuple that is visible only to the\n> > current transactions, and allow multiple duplicate ones. Easier not to\n> > have it in pg_class and just hack the syscache code.\n> > \n\nI didn't go to great trouble to keep the existing hack to the syscache\ncode working for temp tables, because I thought that the primary purpose\nof that hack was to make sure the physical relation name, i.e. the file\nname, would not collide for multiple backends. Tacking the OID onto\nthe filename fixes that. Getting the right table for the user specific\nnamespace sounds like ... schemas, so I figured I could fix that in the\nschema implementation.\n\nHowever, I see now that the hack to the syscache actually implements\ntemp table override of an existing persistent relation, as well. Hmm,\nis this override SQL92, or an extension? It seems to me that TEMPORARY\ntables are not completely defined in the draft I have. For example,\nthere's syntax defined for ON COMMIT PRESERVE ROWS, but the only semantics\nseem to be some restrictions on check constraint definitions, not how\nthe temp table rows are supposed to be preserved.\n\nHmm. Further perusal leads me to believe that this override is a\nstandards extension, as the clause about the tablename being unique in\nthe current namespace does not have an exception for temporary tables.\nNothing wrong with that, just making it clear. 
What's the use/case for\nthis feature? Does it come from some other DBMS?\n\n\n> > Second, I trust the patch keeps the on-disk file names similar to the\n> > table names. Doing some all-numeric file names just to work around NT\n> > broken-ness is not going to fly.\n\nThe patch uses Tom Lane's makeObjectName code, to tack the oid on\nthe end of the relname, but keep it under NAMEDATALEN by trimming the\nrelname part.\n\n> \n> I should add I have not seen this patch personally. If it was sent to a\n> mailing list, I somehow missed it.\n\nIt went to the patches list, although I'm still not getting mail from\nthere. BTW, it's clearly marked 'experimental, not to include in CVS'\nI'm not suggesting that this is the way to go: just testing how much\nthe assumption relname == filename has crept up through the code. Not\ntoo bad, actually.\n\nThe bigger problem, for schema implementation, is that the assumption\n'relname+dbname' is sufficient to uniquely identify a relation is more\npervasive: it's the basis of the relcache hash, isn't it? That's why I\nchanged the key for that hash. I'm not sure I got all the places that\nmake that assumption: in fact, the extra errors I get from the temptable\nregression tests strongly imply that I missed a few (attempt to delete\nnon existent relcache entry.) In addition, doing it the way I did does\nrequire that all storage managers assure uniqueness of the relphysname\nacross the entire DB. The current patch does that: the oid is unique.\n\nAnother way to go is to revert that change, but adding the schema to\nthe relcache key.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n\n", "msg_date": "Mon, 1 May 2000 12:44:11 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "> Hmm. Further perusal leads me to believe that this override is a\n> standards extension, as the clause about the tablename being unique in\n> the current namespace does not have an exception for temporary tables.\n> Nothing wrong with that, just making it clear. What's the use/case for\n> this feature? Does it come from some other DMBS?\n\nI am quite surprised people don't like that feature, or at least one\nperson doesn't. If someone else creates a table, it should not prevent\nme from creating a temporary table of the same name.\n\nI know Informix doesn't implement it that way, and they complained\nbecause a program started not working. Research showed that someone had\ncreated a real table with the same name as the temp table.\n\nOur code even masks a real table created in the same session. Once the\ntemp table is dropped, the real table becomes visible again. See the\nregression tests for an example of this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 13:50:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "> > Hmm. Further perusal leads me to believe that this override is a\n> > standards extension, as the clause about the tablename being unique in\n> > the current namespace does not have an exception for temporary tables.\n> > Nothing wrong with that, just making it clear. What's the use/case for\n> > this feature? Does it come from some other DMBS?\n> \n> I am quite surprised people don't like that feature, or at least one\n> person doesn't. 
If someone else creates a table, it should not prevent\n> me from creating a temporary table of the same name.\n> \n> I know Informix doesn't implement it that way, and they complained\n\nSorry, I should have said \"One of my users complained...\"\n\n> because a program started not working. Research showed that someone had\n> created a real table with the same name as the temp table.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 14:08:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> I didn't go to great trouble to keep the existing hack to the syscache\n> code working for temp tables, because I thought that the primary purpose\n> of that hack was to make sure the physical relation name, i.e. the file\n> name, would not collide for multiple backends. Tacking the OID onto\n> the filename fixes that.\n\nNot good enough --- the logical names of the temp tables (ie, the names\nthey are given in pg_class) can't collide either, or we'll get failures\nfrom the unique index on pg_class relname. It could be that schemas\nwill provide a better answer to that point though.\n\nI concur with your observation that the semantics we give temp tables\nare more than the standard requires --- but I also concur with Bruce\nthat it's an exceedingly useful extension. ISTM if an application has\nto worry about whether its temp table name will collide with something\nelse, then much of the value of the feature is lost. So masking\npermanent table names is the right thing, just as local vars in\nsubroutines mask global vars of the same name. 
Again, maybe we can\ndefine schemas in such a way that the same effect can be gotten...\n\n> The bigger problem, for schema implementation, is that the assumption\n> 'relname+dbname' is sufficent to uniquely identify a relation is more\n> pervasive: it's the basis of the relcache hash, isn't it? That's why I\n> changed the key for that hash.\n\nI didn't see what you did here, but I doubt that it was the right thing.\nThere are two things going on: rel name and rel OID must both be unique\nwithin a database, but there can be duplicates across databases within\nan installation. So, most of the backend assumes that rel name or\nrel OID is alone sufficient to identify a relation, and there are just\na couple of places that interface to installation-wide structures\n(eg, the buffer manager and sinval-queue manager) that know they must\nattach the current database's name or OID to make the identifier\nglobally unique. This is one reason why a backend can't reconnect to\nanother database on the fly; we've got no way to find all those\ndatabase-dependent cache entries...\n\nIt's going to be *extremely* painful if more than one name/OID is needed\nto refer to a relation in many places in the backend. I think we want\nto avoid that if possible.\n\nIt seems to me that what we want is to have schemas control the mapping\nfrom rel name to rel OID, but to keep rel OID unique within a database.\nSo schemas are more closely related to the \"temp table hack\" than you\nthink.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2000 15:12:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch " }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> > Hmm. 
Further perusal leads me to believe that this override is a\n> > standards extension, as the clause about the tablename being unique in\n> > the current namespace does not have an exception for temporary tables.\n> > Nothing wrong with that, just making it clear. What's the use/case for\n> > this feature? Does it come from some other DBMS?\n> \n> I know Informix doesn't implement it that way, and they complained\n> because a program started not working. Research showed that someone had\n> created a real table with the same name as the temp table.\n> \n> Our code even masks a real table created in the same session. Once the\n> temp table is dropped, the real table becomes visible again. See the\n> regression tests for an example of this.\n\nPersonally, I also like the Ingres table override feature, where if I\nreference table \"foo\", Ingres first looks for a table \"foo\" owned by\nme, and then one owned by the database owner. I've not explored what\nhappens if neither I nor the DBA owns a \"foo\". It's also unclear what\nwould happen in that case where multiple others had tables named \"foo\"\nand sufficient permits on them to permit my access.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "Mon, 1 May 2000 16:01:45 -0400 (EDT)", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "On Mon, May 01, 2000 at 03:12:44PM -0400, Tom Lane wrote:\n> \"Ross J. 
Reedstrom\" <[email protected]> writes:\n> > I didn't go to great trouble to keep the existing hack to the syscache\n> > code working for temp tables, because I thought that the primary purpose\n> > of that hack was to make sure the physical relation name, i.e. the file\n> > name, would not collide for multiple backends. Tacking the OID onto\n> > the filename fixes that.\n> \n> Not good enough --- the logical names of the temp tables (ie, the names\n> they are given in pg_class) can't collide either, or we'll get failures\n> from the unique index on pg_class relname. It could be that schemas\n> will provide a better answer to that point though.\n\nWell, that unique index will have to go away, one way or another, since\nschemas require multiple persistent tables with the same undecorated\nrelname in the same db. Adding the schema to the pg_class table is\nprobably where this has to go, then making the index over both fields.\nThen, temp tables can be implemented as a session specific temp schema.\n\n> \n> I concur with your observation that the semantics we give temp tables\n> are more than the standard requires --- but I also concur with Bruce\n> that it's an exceedingly useful extension. ISTM if an application has\n> to worry about whether its temp table name will collide with something\n> else, then much of the value of the feature is lost. So masking\n> permanent table names is the right thing, just as local vars in\n> subroutines mask global vars of the same name. Again, maybe we can\n> define schemas in such a way that the same effect can be gotten...\n\nAgreed. The masking does require more than just having a special schema\nfor temp tables, though. 
So, some version of Bruce's macro hack is still\nrequired.\n\n> \n> I didn't see what you did here, but I doubt that it was the right thing.\n\nProbably not: this is my first extensive digging into the backend code.\nAnd the whole reason this is a 'let's try and implement some of this,\nand go back to the discussion' patch, instead of a proposed addition.\n\nIn defense of what I _did_: The temp table relname hacking is still\nin place, and seems to work, and could be left in place. However, I\nknew that relname would not stay unique, once schemas are implemented,\nbut physrelname would (since the smgr needs it). \n\n> There are two things going on: rel name and rel OID must both be unique\n> within a database, but there can be duplicates across databases within\n> an installation. So, most of the backend assumes that rel name or\n> rel OID is alone sufficient to identify a relation, and there are just\n> a couple of places that interface to installation-wide structures\n> (eg, the buffer manager and sinval-queue manager) that know they must\n> attach the current database's name or OID to make the identifier\n> globally unique. This is one reason why a backend can't reconnect to\n> another database on the fly; we've got no way to find all those\n> database-dependent cache entries...\n> \n\nRight. But it seems to me that everywhere the code uses just a relname, it\nassumes the currently connected DB, when it eventually gets to the reference\nby the code that interfaces to the installation-wide structures, no? Once\nschemas are in place, this can no longer work. One hack would be to have a\n'current schema', like current db, but that won't work: one query will\nneed to refer to schemas other than the default for table references\n(otherwise, they're not very useful, are they?)\n\n> It's going to be *extremely* painful if more than one name/OID is needed\n> to refer to a relation in many places in the backend. 
I think we want\n> to avoid that if possible.\n> \n> It seems to me that what we want is to have schemas control the mapping\n> from rel name to rel OID, but to keep rel OID unique within a database.\n> So schemas are more closely related to the \"temp table hack\" than you\n> think.\n> \n\nSo, relname cannot be enough. Isn't OID already sufficient though? I\nthought oids are unique across the entire installation, not just a\nparticular db. In any case, the solution may be to convert relname\n(+default or user supplied schema) to rel oid, as early as possible,\nthen indexing (and caching) on that. If oids are db specific, the code\nthat needs system wide uniqueness needs to add the db name/OID, just\nlike it does for relname.\n\nActually, I know that schemas are _closely_ related to the temp table hack\n(which was Bruce's own word for it, BTW, not mine;-), hence my hijacking\nof his RelationGetRelationName and RelationGetPhysicalRelationName\nmacros. I was hoping to get temp tables back (mostly) for free once\nschemas are in, but I think the extended semantics will require leaving\nsomething like the existing code in place.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 1 May 2000 15:09:00 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Well, that unique index will have to go away, one way or another, since\n> schemas require multiple persistent tables with the same undecorated\n> relname in the same db. 
Adding the schema to the pg_class table is\n> probably where this has to go, then making the index over both fields.\n> Then, temp tables can be implemented as a session specific temp schema.\n\nRight, there would probably be a unique index on schema name + relname.\n\n> Right. But it seems to me that everywhere the code uses just a relname, it\n> assumes the currently connected DB, when it eventually gets to the reference\n> by the code that interfaces to the installation-wide structures, no? Once\n> schemas are in place, this can no longer work. One hack would be to have a\n> 'current schema', like current db, but that won't work: one query will\n> need to refer to schemas other than the default for table references\n> (otherwise, they're not very useful, are they?)\n\nI was sort of envisioning a search path of schema names. Temp table\nmasking could be implemented by pushing the session-local schema onto\nthe front of the search path. Not sure how that relates to SQL3's ideas\nabout schemas, however.\n\n> So, relname cannot be enough. Isn't OID already sufficient though? I\n> thought oids are unique across the entire installation, not just a\n> particular db.\n\nEr, well, no. Consider pg_class, which has both the same name and the\nsame OID in every DB in the installation --- but we have to treat it\nas separately instantiated in each DB. Most of the system tables work\nthat way. OTOH we have a couple of top-level tables like pg_shadow,\nwhich are the same table for all DBs in the installation.\n\nIt could be that we ought to eliminate the whole notion of separate\ndatabases within an installation, or more accurately merge it with the\nconcept of schemas. 
Really, the existing database mechanism is sort\nof a limited implementation of schemas.\n\n> In any case, the solution may be to convert relname\n> (+default or user supplied schema) to rel oid, as early as possible,\n> then indexing (and caching) on that.\n\nRight, but there wouldn't be only one default schema, there'd be some\nkind of search path in which an unqualified relname would be sought.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2000 17:27:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch " }, { "msg_contents": "On Mon, May 01, 2000 at 05:27:04PM -0400, Tom Lane wrote:\n> \n> I was sort of envisioning a search path of schema names. Temp table\n> masking could be implemented by pushing the session-local schema onto\n> the front of the search path. Not sure how that relates to SQL3's ideas\n> about schemas, however.\n> \n> > So, relname cannot be enough. Isn't OID already sufficient though? I\n> > thought oids are unique across the entire installation, not just a\n> > particular db.\n> \n> Er, well, no. Consider pg_class, which has both the same name and the\n> same OID in every DB in the installation --- but we have to treat it\n> as separately instantiated in each DB. Most of the system tables work\n> that way. OTOH we have a couple of top-level tables like pg_shadow,\n> which are the same table for all DBs in the installation.\n> \n\nWell, in schema-land this is taken care of by the fact that all those system\ntables live in a schema named information_schema, which is defined as\nviews on tables in the schema definition_schema. To some extent, our use\nof pg_ for all the system tables simulates this.\n\n> It could be that we ought to eliminate the whole notion of separate\n> databases within an installation, or more accurately merge it with the\n> concept of schemas. 
Really, the existing database mechanism is sort\n> of a limited implementation of schemas.\n> \n\nSee the discussion about this between Peter and I (and Jan?) last time\nschemas came up. We agreed that pg's databases map to SQL92 Catalogs\nrather nicely, with the whole installation being a 'cluster of catalogs'.\nNow, if someone can explain to me what a 'module' is ...\n\n> > In any case, the solution may be to convert relname\n> > (+default or user supplied schema) to rel oid, as early as possible,\n> > then indexing (and caching) on that.\n> \n> Right, but there wouldn't be only one default schema, there'd be some\n> kind of search path in which an unqualified relname would be sought.\n> \n\nPerhaps, but that's an extension. SQL92 defines a default SCHEMA\nfor a session, which is set via the SET SCHEMA statement, strangely\nenough. Having nested SCHEMAs might be useful, but I'm not sure\nhow. Getting any at all in there would help a lot. I'd suggest the\ndefault be configurable on a per user basis. That'd allow some nifty\naccess controls, with just the existing VIEW code.\n\nTurns out I needed these kinds of schemas last week: I wanted to create\nfiltered access to a set of tables. Simple, right? Add booleans, create\nviews that test the boolean, remove SELECT privilege on the tables.\nOnly problem is, now I had to go and edit all 200+ SELECT statements\nin the application code, to point at the views instead of the tables,\nor rename every table, and edit the 200+ SELECTs in the other apps,\nthat do data entry and maintenance. If I had schemas, I'd have changed\nthe default schema for the 'web' login, and created views that had the\nsame name as the tables, selecting back into the real tables in their own\nschema. 10 mins, and done, don't touch the tested app. code at all! That's\nwhat got me a round-toit, getting this patch off to be discussed.\n\nRoss\n-- \nRoss J. 
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 1 May 2000 17:20:16 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "> -----Original Message-----\n> From: Ross J. Reedstrom [mailto:[email protected]]\n> >\n> > I didn't see what you did here, but I doubt that it was the right thing.\n>\n> Probably not: this is my first extensive digging into the backend code.\n> And the whole reason this is a 'let's try and implement some of this,\n> and go back to the discussion' patch, instead of a proposed addition.\n>\n> In defense of what I _did_: The temp table relname hacking is still\n> in place, and seems to work, and could be left in place.\n\nYes, parallel regression tests all pass here if relcache hashes\non pg_class entry name.\n\n> However, I\n> knew that relname would not stay unique, once schemas are implemented,\n> but physrelname would (since the smgr needs it).\n>\n\nIt is dangerous to combine logical and physical concepts.\nSo it seems difficult to use physrelname both as a storage location\nand as a unique relation name.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 2 May 2000 10:09:47 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: RE: [PATCHES] relation filename patch" }, { "msg_contents": "> > Our code even masks a real table created in the same session. Once\n> > the temp table is dropped, the real table becomes visible again. See\n> > the regression tests for an example of this.\n> \n> The real problem here is that there's no way of finding out whether you\n> just dropped the temporary table or the \"real\" one or whether a table is\n> temporary at all. 
Sure you can perhaps look into pg_class but tell that to\n> users.\n\nDo a \\d on the table. If it doesn't show up, it is temporary. ;-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 17:26:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Hmm. Further perusal leads me to believe that this override is a\n> > standards extension,\n\n> I am quite surprised people don't like that feature, or at least one\n> person doesn't.\n\nFWIW, I always considered that behaviour kind of suspicious. I don't think\nit's a big problem in practice and could be useful in certain situations,\nbut personally I would have chosen not to do it this way.\n\n> If someone else creates a table, it should not prevent me from\n> creating a temporary table of the same name.\n\nThe proper solution to this is that \"somebody\" should not be allowed to\ncreate tables wildly. Until we can properly do that it's not worth arguing\nthis out.\n\n> Our code even masks a real table created in the same session. Once\n> the temp table is dropped, the real table becomes visible again. See\n> the regression tests for an example of this.\n\nThe real problem here is that there's no way of finding out whether you\njust dropped the temporary table or the \"real\" one or whether a table is\ntemporary at all. 
Sure you can perhaps look into pg_class but tell that to\nusers.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 23:28:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" }, { "msg_contents": "On Tue, 2 May 2000, Bruce Momjian wrote:\n\n> Do a \\d on the table. If it doesn't show up, it is temporary. ;-)\n\nBut if the temporary is shadowing a permanent table then I know less than\nnothing.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 3 May 2000 11:00:17 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: [PATCHES] relation filename patch" } ]
[ { "msg_contents": "On Mon, May 01, 2000 at 12:06:14PM +0900, Hiroshi Inoue wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Monday, May 01, 2000 11:36 AM\n> > To: Hiroshi Inoue\n> > Cc: Bruce Momjian\n> > Subject: Re: FW: [PATCHES] relation filename patch\n> > \n> > \n> > [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > The patch seems to be a trial patch for evaluation.\n> > > \n> > \n> > I see, it appends the oid on the end of the file name. Seems like an\n> > interesting idea. That would help in some cases, but it seems we agreed\n> > on a more general sequence number for relations so we can fix some of\n> > our other problems where the mapping between table and file names breaks\n> > down. Sometimes we want two files for the same relation, and have\n> > different ones be visible to different backends. In those cases, the\n> > oid does not change. We could add the oid _and_ a sequence number to\n> > fix that.\n> >\n\nWell, it could be just the sequence number, if it had the properties of\nbeing unique across all backends. I didn't use the existing seq. number\ncode (used for temp tables?) because it seems to be reset at transaction\nstart. Besides, that discussion happened _after_ I'd finished the\npatch. ;-) I was just holding on to it (from about the first 7.0 freeze)\nto not distract anyone.\n\n> \n> Yes but it would be easily changed.\n> If I remember correctly, there are only two places that require a rule\n> for generating the filename in his implementation,\n\nThat's right, and both of those are for initializing the filename of a\nnew relation: nothing in the backend deduces the filename of an existing\nrelation: it's always looked up.\n\nI seem to recall that the cases where two files are needed are to remove\nthe drop/recreate cycle from CLUSTER and index rebuilding, particularly\nto make them robust to interruption, particularly as to the rename()\nof the underlying file. 
With my current patch, that could be done (at\n2x disk space cost) by building the second version of the table under a\ndifferent relname, perhaps as a temp relation even, and then UPDATE the\npg_class entry to point at the new file, and have the temp relation point\nto the old file, as the last step. None of the DB code needs to have the\nfilename in sync, so just leave it 'wrong', or fix it at vacuum time,\nwhen you get a table lock anyway.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 1 May 2000 13:45:51 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: [PATCHES] relation filename patch" }, { "msg_contents": "> I seem to recall that the cases where two files are needed are to remove\n> the drop/recreate cycle from CLUSTER and index rebuilding, particularly\n> to make them robust to interruption, particularly as to the rename()\n> of the underlying file. With my current patch, that could be done (at\n> 2x disk space cost) by building the second version of the table under a\n> different relname, perhaps as a temp relation even, and then UPDATE the\n> pg_class entry to point at the new file, and have the temp relation point\n> to the old file, as the last step. None of the DB code needs to have the\n> filename in sync, so just leave it 'wrong', or fix it at vacuum time,\n> when you get a table lock anyway.\n\nYes, I agree, cleanup during vacuum is a nice idea, though it means\ntablename checks before the vacuum won't work. Let's see what people want\nto implement and we can make decisions then.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 14:51:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [PATCHES] relation filename patch" } ]
[ { "msg_contents": "Tom Lane writes:\n\n> When a postmaster initially starts up, it uses a key value of\n> PortNumber * 1000. However, if it is forced to do a system-wide\n> restart because of a backend crash, it generates a new key value\n> different from the old one, namely PortNumber * 1000 + shmem_seq *\n> 100, so that the old shared-memory segments are discarded and a new\n> set is created.\n\nWhy not use IPC_EXCL to ensure you're getting a freshly baked shmem\nsegment rather than a recycled one?\n\n> The intent of this logic is evidently to ensure that the old, failed\n> backends can't somehow corrupt the new ones.\n\nBut what if someone runs another postmaster at port 5433, will it\neventually interfere? Or some totally different program? Trying to\ngenerate distinct numbers to use for keys is only one part of the equation,\nbut you still have to check whether the number was distinct enough.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 1 May 2000 21:49:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shmem_seq may be a bad idea" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Why not use IPC_EXCL to ensure you're getting a freshly baked shmem\n> segment rather than a recycled one?\n\nHmm, that might work --- we'd need to add some logic to try to delete\na conflicting segment and/or move on to another key if we can't.\nBut that seems like it'd resolve both the wrong-size issue and the\nconflict-with-another-program issue. I like it.\n\n>> The intent of this logic is evidently to ensure that the old, failed\n>> backends can't somehow corrupt the new ones.\n\n> But what if someone runs another postmaster at port 5433, will it\n> eventually interfere?\n\nNo; I neglected to mention that shmem_seq wraps around at 10. 
So\nthere's no possible conflict between postmasters at different port#s\nin the current scheme.\n\n> Or some totally different program?\n\nThat, on the other hand, is a very real risk. Trying a new key if we\nfail to get a new shmem seg (or semaphore set) seems like a good\nrecovery method.\n\nWe'd need to rejigger the meaning of shmem_seq a little bit --- it'd\nno longer be a global variable, but rather a local count of the number\nof tries to create a particular seg or set. So, logic would be\nsomething like\n\n\tfor (seq = 0; seq < 10; seq++) {\n\t\tkey = port*1000 + seq*100 + (id for particular item);\n\t\tshmctl(key, IPC_RMID, 0); // ignore error\n\t\tshmget(key, size, IPC_CREAT | IPC_EXCL);\n\t\tif (success)\n\t\t\treturn key;\n\t}\n\t// if get here, report shmget failure ...\n\nIt looks like we can apply the same method for semaphore sets, too.\n\nSound good?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2000 17:08:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "On Mon, 1 May 2000, Tom Lane wrote:\n\n> > Why not use IPC_EXCL to ensure you're getting a freshly baked shmem\n> > segment rather than a recycled one?\n> \n> Hmm, that might work --- we'd need to add some logic to try to delete\n> a conflicting segment and/or move on to another key if we can't.\n\nHow about you just pick a key (preferably using a standard method such as\nftok, but the current is fine as well if you like the traceability of keys\nto servers) and if it's already used then increase it by one, try again.\nFor efficiency you could keep the last key that worked in a global and\nstart retrying from there. No need to try any fancy sequence number that\nwrap after 10 times anyway and thus don't help in general.\n\nA while ago while thinking about a way to make ipcclean better I thunk\nthat perhaps the postmaster should write the keys of the segments it gets\nto a flat-text file. 
If it somehow crashes and loses track of what it\nallocated before it can use that information to clean up. Not sure how\noften that would take effect but it's very socially friendly.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 10:46:53 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "> A while ago while thinking about a way to make ipcclean better I thunk\n> that perhaps the postmaster should write the keys of the segments it gets\n> to a flat-text file. If it somehow crashes and loses track of what it\n> allocated before it can use that information to clean up. Not sure how\n> often that would take effect but it's very socially friendly.\n\nHmm. Could we write this to a separate shared memory segment? Much\nmore likely to be of fixed length and compatible between versions, and\nmore likely to exist or not exist with the same behavior as the large\nshared memory segment under discussion??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 02 May 2000 10:55:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>>>> Why not use IPC_EXCL to ensure you're getting a freshly baked shmem\n>>>> segment rather than a recycled one?\n>> \n>> Hmm, that might work --- we'd need to add some logic to try to delete\n>> a conflicting segment and/or move on to another key if we can't.\n\n> How about you just pick a key (preferably using a standard method such as\n> ftok, but the current is fine as well if you like the traceability of keys\n> to servers) and if it's already used then increase it by one, try again.\n\nThe thing I don't like about using ftok() is that 
it opens you up to\ncross-postmaster conflicts, if there's more than one postmaster running\non the same machine. Since ftok is so weakly specified, we have no way\nto be sure that two different postmasters wouldn't generate the same\nkey.\n\nAs for the issue of whether to try to delete or not, we don't have\nautomatic cleanup of no-longer-used segments unless we try to delete.\nThat's why it's critical that we not have cross-postmaster conflicts.\nThe skeleton code I exhibited yesterday depends on the assumption that\nany segment/semset that the postmaster has permission to delete is OK\nfor it to delete. If there is a possibility of cross-postmaster\ncollision then that doesn't work anymore.\n\n(Of course, if the postgres account also runs non-postgres applications\nthat use shmem or semas, you could have trouble anyway ... but that\nseems unlikely, and perhaps more to the point it's pretty easy to avoid\n*as long as postgres generates predictable keys*. With ftok the keys\naren't predictable, and you're trusting to luck that you don't have a\ncollision.)\n\n> A while ago while thinking about a way to make ipcclean better I thunk\n> that perhaps the postmaster should write the keys of the segments it gets\n> to a flat-text file. If it somehow crashes and loses track of what it\n> allocated before it can use that information to clean up.\n\nWe could do that, but I'm not sure it's necessary. With the code I\nproposed, the segments would essentially always have the same keys,\nand so restarting the postmaster would clean up automatically. 
(The\nonly time the sequence number would get above 0 would be if you had\na collision with an ftok-using app, and presumably on your next try\nyou'd get the same collision and end up with the same final choice\nof sequence number ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 10:58:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> A while ago while thinking about a way to make ipcclean better I thunk\n>> that perhaps the postmaster should write the keys of the segments it gets\n>> to a flat-text file.\n\n> Hmm. Could we write this to a separate shared memory segment? Much\n> more likely to be of fixed length and compatible between versions, and\n> more likely to exist or not exist with the same behavior as the large\n> shared memory segment under discussion??\n\nWhat happens if you get a key collision with some other application\nfor that segment? Seems to me that using shmem to remember where you\nput your shmem segments is dangerously circular ;-)\n\nThe flat text file is not a bad idea, but I think the logic I suggested\nyesterday makes it unnecessary...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 11:52:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "Thomas Lockhart writes:\n\n> > A while ago while thinking about a way to make ipcclean better I thunk\n> > that perhaps the postmaster should write the keys of the segments it gets\n> > to a flat-text file. If it somehow crashes and loses track of what it\n> > allocated before it can use that information to clean up. Not sure how\n> > often that would take effect but it's very socially friendly.\n> \n> Hmm. Could we write this to a separate shared memory segment?\n\nBut how would ipcclean get to the key of *that* segment? 
I was thinking\nfile because that'd always be in a known location and could also be\naccessible to humans to sort things out by hand or debug stuff.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 23:35:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shmem_seq may be a bad idea" }, { "msg_contents": "Tom Lane writes:\n\n> The thing I don't like about using ftok() is that it opens you up to\n> cross-postmaster conflicts, if there's more than one postmaster running\n> on the same machine. Since ftok is so weakly specified, we have no way\n> to be sure that two different postmasters wouldn't generate the same\n> key.\n\nThe name ftok=file-to-key implies that the file name is being used to\ncreate some fairly unique keys. All systems I peeked at used the inode and\ndevice number of the file name you give it. Surely OS vendors could alias\nftok() to random(), but I'd like to give them a little credit at least.\n\nBut this is not the issue, I don't care what the key making scheme is. It\ncould in fact be random(). But you *must* use IPC_EXCL to check if the\nsegment already exists and change your key if so. If you do that then you\n*never* have a conflict, no matter what you do. That should be a\nno-brainer. Temporary files are handled this way as well (in theory\nanyway).\n\n> (Of course, if the postgres account also runs non-postgres applications\n> that use shmem or semas, you could have trouble anyway ... but that\n> seems unlikely, and perhaps more to the point it's pretty easy to avoid\n> *as long as postgres generates predictable keys*.\n\nBut you still have to trust other applications to avoid the PostgreSQL key\nspace, which they probably couldn't care less about. 
The whole point of\nthis exercise is to increase fault-tolerance, so you shouldn't assume too\nmuch about a sane environment.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 4 May 2000 01:10:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But this is not the issue, I don't care what the key making scheme is. It\n> could in fact be random(). But you *must* use IPC_EXCL to check if the\n> segment already exists and change your key if so.\n\nOh, I agree that IPC_EXCL would be an improvement. I'm just dubious\nthat using ftok() is an improvement.\n\n>> Since ftok is so weakly specified, we have no way\n>> to be sure that two different postmasters wouldn't generate the same\n>> key.\n\n> The name ftok=file-to-key implies that the file name is being used to\n> create some fairly unique keys. All systems I peeked at used the inode and\n> device number of the file name you give it.\n\nYup, that's what I would have expected from the way it's described in\nthe man page. My point is you can't put more than 32 bits of stuff in\na 32-bit sack; therefore, ftok cannot guarantee that it delivers a\ndistinct token for every combination of inode + device + application\nkey. I doubt it even tries very hard, more likely just xor's them\ntogether and trusts to luck. So I'd rather use a predictable mapping\nthat we *know* will not generate cross-postmaster conflicts between\npostmasters running concurrently on different ports.\n\nHowever, IPC_EXCL would definitely make us more robust against other\nsorts of conflicts, so I agree that's a good change to make. 
TODO\nlist entry, please, Bruce?\n* Use IPC_EXCL when creating shared memory and semaphores\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 May 2000 20:42:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea " }, { "msg_contents": "> However, IPC_EXCL would definitely make us more robust against other\n> sorts of conflicts, so I agree that's a good change to make. TODO\n> list entry, please, Bruce?\n> * Use IPC_EXCL when creating shared memory and semaphores\n\nDone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 5 May 2000 00:06:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shmem_seq may be a bad idea" } ]
[ { "msg_contents": "Tom Lane writes:\n\n> elog(ERROR) doesn't work in the postmaster. Well, it does \"work\", but\n> it just prints the message and then exit()s. That might be good\n> enough for errors detected during postmaster startup,\n\nExactly.\n\n> but I'd hate to see it called after the postmaster is up and running.\n\nThat's why I don't do it.\n\n> Sooner or later we will probably want to fix things so that\n> elog(ERROR) in the postmaster returns control to the postmaster's idle\n> loop, much like elog() in a backend works.\n\nA while ago I went on record saying that elog is a pain for the user. Now\nI'd like to add it's a pain for developers, too. Having what's essentially\nan exception model without a way to catch exceptions is disastrous. In\nmany cases printing an error message and returning a failure value to the\ncaller to let it deal with it is much easier for both parties. However,\nthere's no way to print an error message and continuing execution unless\nyou either label it 'DEBUG' or use fprintf. It's furthermore equally\nimpossible to communicate an error message to the server log and not have\nit sent to the front-end. This tprintf business apparently tried to work\naround that but it was only painting over symptoms and added to the\nconfusion along the way.\n\n> Offhand I think that Peter need not tackle this issue in order to do\n> parsing of postmaster startup-time options, but if he wants to have\n> the postmaster reread the config file at SIGHUP then it needs to be\n> addressed.\n\nThe postmaster passes on the SIGHUP to all the backends but it doesn't do\nanything about it itself. This was already in place, I didn't see a need\nto change it.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 1 May 2000 21:50:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When malloc returns zero ... 
" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> A while ago I went on record saying that elog is a pain for the user. Now\n> I'd like to add it's a pain for developers, too. Having what's essentially\n> an exception model without a way to catch exceptions is disastrous.\n\nI think that's a bit overstated ... we've gotten along fine with this\nmodel so far, and I haven't seen any compelling reason to change it.\nThe problem at hand is not the error handling model, it's that the\npostmaster environment doesn't implement the model.\n\n> It's furthermore equally impossible to communicate an error message to\n> the server log and not have it sent to the front-end.\n\nEr, what's wrong with elog(DEBUG)?\n\n>> Offhand I think that Peter need not tackle this issue in order to do\n>> parsing of postmaster startup-time options, but if he wants to have\n>> the postmaster reread the config file at SIGHUP then it needs to be\n>> addressed.\n\n> The postmaster passes on the SIGHUP to all the backends but it doesn't do\n> anything about it itself. This was already in place, I didn't see a need\n> to change it.\n\nDoesn't the postmaster need to reread the config file itself in order to\nbe sure to pass the new values to subsequently-started backends? Or is\nyour plan that newly started backends will always parse the config file\nfor themselves? In that case I'm not clear on why you care about the\npostmaster environment at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 May 2000 17:17:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When malloc returns zero ... " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> Peter Eisentraut <[email protected]> writes:\n> > A while ago I went on record saying that elog is a pain for the \n> user. Now\n> > I'd like to add it's a pain for developers, too. 
Having what's \n> essentially\n> > an exception model without a way to catch exceptions is disastrous.\n> \n> I think that's a bit overstated ... we've gotten along fine with this\n> model so far, and I haven't seen any compelling reason to change it.\n\nI agree with Peter at this point.\nFor example,even basic functions call elog() easily but we can't\ncatch the error and we(at least I) couldn't call basic functions\neasily. In fact I suffered very much to avoid elog() call in order\nto enable dropping tables whose base relation files has already\nbeen removed\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 2 May 2000 10:42:22 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: When malloc returns zero ... " }, { "msg_contents": "On Mon, 1 May 2000, Tom Lane wrote:\n\n> Er, what's wrong with elog(DEBUG)?\n\nWell, it says \"DEBUG\", not \"ERROR\", that's all. I'm using this in fact but\nit's suboptimal.\n\n> Doesn't the postmaster need to reread the config file itself in order to\n> be sure to pass the new values to subsequently-started backends?\n\nGood that you mention that ... :)\n\n> Or is your plan that newly started backends will always parse the\n> config file for themselves? In that case I'm not clear on why you\n> care about the postmaster environment at all.\n\nSo you can set buffers, max backends, and that sort of static stuff. I\nthink that each backend reading the config file on startup is\nunnecessarily slow.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 10:50:36 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When malloc returns zero ... 
" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Tom Lane\n> >\n> > Peter Eisentraut <[email protected]> writes:\n> > > A while ago I went on record saying that elog is a pain for the\n> > user. Now\n> > > I'd like to add it's a pain for developers, too. Having what's\n> > essentially\n> > > an exception model without a way to catch exceptions is disastrous.\n> >\n> > I think that's a bit overstated ... we've gotten along fine with this\n> > model so far, and I haven't seen any compelling reason to change it.\n> \n> I agree with Peter at this point.\n> For example,even basic functions call elog() easily but we can't\n> catch the error and we(at least I) couldn't call basic functions\n> easily. In fact I suffered very much to avoid elog() call in order\n> to enable dropping tables whose base relation files has already\n> been removed\n\nThe current model is also problematical in the case of procedural\nlanguages as well. Many times, when there is an error, both\nthe backend and the PL handler needs to do cleanup. But it\nis very hard for the PL handler to 'capture' the exception in\norder to do the cleanup. Witness the ingenious lengths Jan had\nto go to, to do it in pltcl. I've tried to do the same in plperl\nbut am not convinved that it is correct. And even if I get it\ncorrect, maintainability suffers greatly.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Tue, 02 May 2000 08:35:53 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When malloc returns zero ..." }, { "msg_contents": ">> I agree with Peter at this point.\n>> For example,even basic functions call elog() easily but we can't\n>> catch the error and we(at least I) couldn't call basic functions\n>> easily. 
In fact I suffered very much to avoid elog() call in order\n>> to enable dropping tables whose base relation files has already\n>> been removed\n\n> The current model is also problematical in the case of procedural\n> languages as well. Many times, when there is an error, both\n> the backend and the PL handler needs to do cleanup. But it\n> is very hard for the PL handler to 'capture' the exception in\n> order to do the cleanup.\n\nIt would be fairly easy to extend the existing setjmp/longjmp support\nto allow multiple layers of code to catch an elog(ERROR) on its way\nout to the outer loop. That might be a better answer for things like\nPL handlers than the current approach, which is basically \"any cleanup\nyou need, you'd better be prepared to do as part of transaction abort\nprocessing\".\n\n(BTW, I actually think that the current approach is more robust than\nexception catchers for modules that are part of the standard system;\nit forces on you the discipline of making sure that all recoverable\nresources are tracked in data structures less transient than some\nroutine's local variables. But it's no help for add-on modules like\nPL handlers, because they don't get called during transaction abort.\nA partial answer might be to add a hook to allow add-ons to get called\nduring commit or abort cleanup?)\n\nBut if we did add support for multiple layers of longjmp catching,\nthe only thing that would really work in general would be for an\nerror catcher to do local cleanup and then pass the error on outwards\n(possibly changing the error message). It would not be safe to catch\nthe error and then continue as if nothing had happened. There is too\nmuch code that assumes that it doesn't have to leave things in a\nparticularly clean state when it elog()s, because transaction abort\nwill clean up after it. 
Fixing *that* would be a really major task,\nand a perilous one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 11:19:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When malloc returns zero ... " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> >> I agree with Peter at this point.\n> >> For example,even basic functions call elog() easily but we can't\n> >> catch the error and we(at least I) couldn't call basic functions\n> >> easily. In fact I suffered very much to avoid elog() call in order\n> >> to enable dropping tables whose base relation files has already\n> >> been removed\n> \n> > The current model is also problematical in the case of procedural\n> > languages as well. Many times, when there is an error, both\n> > the backend and the PL handler needs to do cleanup. But it\n> > is very hard for the PL handler to 'capture' the exception in\n> > order to do the cleanup.\n> \n> It would be fairly easy to extend the existing setjmp/longjmp support\n> to allow multiple layers of code to catch an elog(ERROR) on its way\n> out to the outer loop. That might be a better answer for things like\n> PL handlers than the current approach, which is basically \"any cleanup\n> you need, you'd better be prepared to do as part of transaction abort\n> processing\".\n> \n> (BTW, I actually think that the current approach is more robust than\n> exception catchers for modules that are part of the standard system;\n> it forces on you the discipline of making sure that all recoverable\n> resources are tracked in data structures less transient than some\n> routine's local variables.\n\nHmm,for the query SELECT .. FROM .. WHERE int2key = 1;\nwe coudn't try to convert 1 -> 1::int2 because the conversion may\ncause elog(ERROR). 
Isn't it too restrictive ?\n\nComments ?\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 3 May 2000 09:20:04 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: When malloc returns zero ... " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> (BTW, I actually think that the current approach is more robust than\n>> exception catchers for modules that are part of the standard system;\n>> it forces on you the discipline of making sure that all recoverable\n>> resources are tracked in data structures less transient than some\n>> routine's local variables.\n\n> Hmm,for the query SELECT .. FROM .. WHERE int2key = 1;\n> we coudn't try to convert 1 -> 1::int2 because the conversion may\n> cause elog(ERROR). Isn't it too restrictive ?\n\nWell, that's not a particularly good example, because it wouldn't\nbe hard at all for us to avoid reducing \"int2var = 32768::int4\"\nto \"int2var = 32768::int2\" (oops). The reason the optimizer is\npresently staying away from this sort of thing is that it isn't\nsure whether it's safe to reduce, say, \"int2var + 32767::int4\"\nto \"int2var + 32767::int2\" (maybe oops, maybe OK, but for sure\nthe absence of elog during the constant-reduction doesn't tell\nyou enough). AFAICS we need a whole lot of datatype-and-operator-\nspecific semantic knowledge to be able to do that kind of reduction\nsafely.\n\nAs things currently stand, you could catch an elog() escape and not\npropagate the error *if* you had carefully analyzed all the possible\nerrors that would be generated by the piece of code you intend to call\nand figured out that they were all \"safe\". That strikes me as both a\nticklish analysis to begin with and horribly subject to future breakage,\nfor any but the most trivial of called routines. int2eq is perhaps\nsimple enough to be analyzed completely ;-) ... 
but the approach\ndoesn't scale.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 May 2000 00:19:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When malloc returns zero ... " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> (BTW, I actually think that the current approach is more robust than\n> >> exception catchers for modules that are part of the standard system;\n> >> it forces on you the discipline of making sure that all recoverable\n> >> resources are tracked in data structures less transient than some\n> >> routine's local variables.\n> \n> > Hmm,for the query SELECT .. FROM .. WHERE int2key = 1;\n> > we coudn't try to convert 1 -> 1::int2 because the conversion may\n> > cause elog(ERROR). Isn't it too restrictive ?\n> \n> Well, that's not a particularly good example, because it wouldn't\n> be hard at all for us to avoid reducing \"int2var = 32768::int4\"\n> to \"int2var = 32768::int2\" (oops). \n\nHmm,am I misunderstanding ?\n\nAs for conversion functions,shouldn't they return return-code\nunless they could throw \"catchable\" exceptions ?\nIf the conversion 32768::int4 -> 32768::int2 returns an error\nwe would be able to find that \"int2var = 32768::int4\" is always\nfalse.\nIf I recognize correctly,\"catchable\" exceptions means that \nwe could catch them if we want(otherwise they are dealt with\na default exception handler like current elog handler). So they\nwould be same as elog() except the special cases we are \ninterested in.\n\nWhere are problems when we handle conversion functions\nwith return code or \"catchable\" exceptions ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 3 May 2000 14:44:25 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: When malloc returns zero ... " } ]
[ { "msg_contents": "[This certainly isn't my credit!]\n\n-------- Original Message --------\nSubject: PG 7.0 is great!\nDate: Tue, 02 May 2000 00:48:25 +0200\nFrom: Hans Schou <[email protected]>\nOrganization: http://www.schou.dk http://www.adict.net\nTo: Lamar Owen <[email protected]>\n\nDear Lamar\n\nPostgreSQL ver. 7.0 is great.\nIt solved a lot of problems for me.\nOne of them was my silly little Mandelbrot fractal program.\nhttp://www.sslug.dk/~chlor/mandel/ see example.txt\n\nBTW, we are currently pushing hard on PostgreSQL to the\npublic here in Denmark. I can not move people away from\nMySQL on my own but I try.\n\nThanks for a great RDBMS.\n\n-- \nbest regards\n+---------------------------------------------------+\n! Hans Schou, Hamletsgade 4-201, DK-2200 Kbh N !\n! Telex: SCHOU.DK Phone: +45 35 86 12 66 !\n! mailto:[email protected] http://www.schou.dk !\n+---------------------------------------------------+\nHow to get the CD-ROM out of the tray:\n Apple: Drop the CD-icon in trashcan\n Windows: Push the button on the CD-drive\n Linux: eject\n", "msg_date": "Mon, 01 May 2000 20:59:59 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: PG 7.0 is great!]" } ]
[ { "msg_contents": "> I am unable to compile the java code with kaffe. Can anyone compile\n> jdbc under 7.0?\n\nYup. Seems some form of jdk-1.2 works for me. But my default\n/usr/bin/javac (on an old RH5.2 system), something called \"pizza\",\ndoes not.\n\n> Can you email me that java files that are produced by\n> the compile. I need the *.jar file, and the *.class files.\n\nDo you still need them? I can send what I built, but I'm pretty sure\nthat Peter Mount has a fresh package built and available on some web\nsite of his...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 02 May 2000 01:59:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "Thomas Lockhart wrote:\n> > Can you email me that java files that are produced by\n> > the compile. I need the *.jar file, and the *.class files.\n \n> Do you still need them? I can send what I built, but I'm pretty sure\n> that Peter Mount has a fresh package built and available on some web\n> site of his...\n\nIf so, I need them (Java 1 and 2) for the RPM's. I don't do Java -- and\nthe RPM's have historically packaged the .jar files as pulled verbatim\nfrom retep.org.uk. I haven't distributed RC2 RPM's yet for partially\nthat reason\n-- the other part is the lack of an RC2-tested alpha patch.\n\nNOTE:\nI have gotten good response and patches to the RPM's from a number of\npeople this go around -- and it is ENCOURAGING!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 01 May 2000 22:30:08 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> > I am unable to compile the java code with kaffe. Can anyone compile\n> > jdbc under 7.0?\n> \n> Yup. Seems some form of jdk-1.2 works for me. 
But my default\n> /usr/bin/javac (on an old RH5.2 system), something called \"pizza\",\n> does not.\n> \n> > Can you email me that java files that are produced by\n> > the compile. I need the *.jar file, and the *.class files.\n> \n> Do you still need them? I can send what I built, but I'm pretty sure\n> that Peter Mount has a fresh package built and available on some web\n> site of his...\n\nI did get it working using Peter's 6.5.2 jar file. I was not setting\nthe CLASSPATH to be the full file path. I was setting it just to the\ndirectory, which was my fault. Peter's FAQ for jdbc helped me get it\nworking.\n\nPeter E. sent me a jar file, but it used postgresql as the domain instead\nof org.postgresql, so it seems that is the 6.5.2 version too. Peter's web\nsite does not have the 7.0 jar file there yet.\n\nHowever, it seems kaffe can't compile self-referencing java files. I\ndon't know enough about java to know that is a problem or not.\n\nI did get it working well enough to get my java book example working.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 May 2000 22:35:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> If so, I need them (Java 1 and 2) for the RPM's. I don't do Java -- and\n> the RPM's have historically packaged the .jar files as pulled verbatim\n> from retep.org.uk. 
I haven't distributed RC2 RPM's yet for partially\n> that reason\n\n\"I don't do Java\" can change fairly easily; just pick up the java\ntarball from blackdown.org or sun.com, untar it into /usr/local, then\nset your path via\n\n set path=(/usr/local/jdk-xxx $path)\n\nGo into src/interfaces/jdbc and type\n\n make jdbc2\n\nthen grab the jar file(s).\n\notoh, how close are you Peter (hope you see this; I've blown away\nenough email to have lost your address) to posting a built jar file or\nwhatever is usually provided? Should we post this somewhere on\npostgresql.org to help out? Should I post my recently built stuff?\n\n> NOTE:\n> I have gotten good response and patches to the RPM's from a number of\n> people this go around -- and it is ENCOURAGING!\n\nGreat!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 02 May 2000 03:26:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "Today, in a message to Thomas Lockhart, Bruce Momjian wrote:\n>\n> > > I am unable to compile the java code with kaffe. Can anyone compile\n> > > jdbc under 7.0?\n> \n> However, it seems kaffe can't compile self-referencing java files. I\n> don't know enough about java to know that is a problem or not.\n\nHave not tried with 7.0, but recent versions of Kaffe were definitely able\nto compile the JDBC code that came with 6.5.3, but only after modifying\nthe version check in the makeVersion.java file in src/interfaces/jdbc (and\nperhaps elsewhere where a version check occurs). The code checks for the\njava version string and rejects everything that doesn't start with either\n1.1 or 1.2. The problem was that Kaffe reports its own version, e.g. 
1.02,\nrather than the corresponding JDK version.\n\nJoachim\n\n-- \nprivate: [email protected] (http://www.kraut.bc.ca)\nwork: [email protected] (http://www.mercury.bc.ca)\n\n\n", "msg_date": "Mon, 1 May 2000 21:50:22 -0700 (PDT)", "msg_from": "Joachim Achtzehnter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for 7.0 JDBC status" }, { "msg_contents": "> otoh, how close are you Peter (hope you see this; I've blown away\n> enough email to have lost your address) to posting a built jar file or\n> whatever is usually provided? Should we post this somewhere on\n> postgresql.org to help out? Should I post my recently built stuff?\n\nAh, found Peter's e-mail address in an obvious place (the jdbc source\ntree).\n\nAnother question for Peter: would it be possible to update the README\nfile in the source tree, and other ancillary files? I know you've been\nvery busy, but even a brief fixup to adjust dates and version numbers\nwould be helpful for 7.0.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 02 May 2000 05:15:50 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> Today, in a message to Thomas Lockhart, Bruce Momjian wrote:\n> >\n> > > > I am unable to compile the java code with kaffe. Can anyone compile\n> > > > jdbc under 7.0?\n> > \n> > However, it seems kaffe can't compile self-referencing java files. I\n> > don't know enough about java to know that is a problem or not.\n> \n> Have not tried with 7.0, but recent versions of Kaffe were definitely able\n> to compile the JDBC code that came with 6.5.3, but only after modifying\n> the version check in the makeVersion.java file in src/interfaces/jdbc (and\n> perhaps elsewhere where a versioon check occurs). The code checks for the\n> java version string and rejects everything that doesn't start with either\n> 1.1 or 1.2. 
The problem was that Kaffe reports its own version, e.g. 1.02,\n> rather than the corresponding JDK version.\n\nInteresting. It does compile under kaffe 1.05, but the mutually\ndependent java files cause a compile error. Seems 6.5.3 had the same\nproblem, so I am not sure why it would have worked then.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 06:55:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for 7.0 JDBC status" }, { "msg_contents": "Today, Bruce Momjian wrote in an email addressed to Joachim Achtzehnter:\n> \n> Interesting. It does compile under kaffe 1.05, but the mutually\n> dependent java files cause a compile error. Seems 6.5.3 had the same\n> problem, so I am not sure why it would have worked then.\n\nWell, it is conceivable that I was actually compiling it with jikes, and\nonly used Kaffe to run it. In fact, now that I think of it, this is most\nlikely what I did. Even then, with some version of the driver I had to\npatch a runtime version check to make it behave as if Kaffe was a 1.1 JVM.\n\nIt is true that Kaffe has a number of problems. Nevertheless, given that\nit is licensed under the GPL some people prefer it over the alternatives\neven if it has some drawbacks.\n\nJoachim\n\n-- \[email protected] (http://www.kraut.bc.ca)\[email protected] (http://www.mercury.bc.ca)\n\n", "msg_date": "Tue, 2 May 2000 11:53:17 -0700 (Pacific Daylight Time)", "msg_from": "Joachim Achtzehnter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for 7.0 JDBC status" }, { "msg_contents": "> Today, Bruce Momjian wrote in an email addressed to Joachim Achtzehnter:\n> > \n> > Interesting. It does compile under kaffe 1.05, but the mutually\n> > dependent java files cause a compile error. 
Seems 6.5.3 had the same\n> > problem, so I am not sure why it would have worked then.\n> \n> Well, it is conceivable that I was actually compiling it with jikes, and\n> only used Kaffe to run it. In fact, now that I think of it, this is most\n> likely what I did. Even then, with some version of the driver I had to\n> patch a runtime version check to make it behave as if Kaffe was a 1.1 JVM.\n> \n> It is true that Kaffe has a number of problems. Nevertheless, given that\n> it is licensed under the GPL some people prefer it over the alternatives\n> even if it has some drawbacks.\n\nStarting the kaffe 1.05, they now use KOPI as their java compiler. I\nhave contacted them about the problem to see if they can help.\n\nThis is clearly a kaffe-related problem, and not a problem with our\njdbc, which is good news.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 17:24:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for 7.0 JDBC status" } ]
[ { "msg_contents": "Hi,\n\n There is a bug(?) report about \\l command of psql.\n\n(Example) PostgreSQL-7.0RC1\n\n A_server : configure (in USA)\n B_server : configure --enable--multibyte (in Japan)\n\n By using the B_server's psql,\n\n prompt> psql -h A_server\n ERROR: Multibyte support must be enable to use this function\n\n prompt> export PGCLIENTENCODING='SQL_ASCII'\n prompt> psql -h A_server\n Welcome to psql, the PostgreSQL interactive terminal.\n\n Type: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\n postgres=# \\l\n ERROR: No such function 'pg_encoding_to_char' with the \n specified attributes\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n", "msg_date": "Tue, 02 May 2000 11:47:21 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": true, "msg_subject": "psql \\l error" }, { "msg_contents": "SAKAIDA <[email protected]> writes:\n> A_server : configure (in USA)\n> B_server : configure --enable--multibyte (in Japan)\n\n> By using the B_server's psql,\n\nprompt> psql -h A_server\n> ERROR: Multibyte support must be enable to use this function\n\nThis is evidently happening because psql's initial inquiry about the\ndatabase encoding fails.\n\nSeems like it might be a good idea if the non-MULTIBYTE stub versions of\npg_encoding_to_char() and friends were to return default values (eg,\n\"SQL_ASCII\") instead of erroring out. A MULTIBYTE version of psql\nreally ought to be able to work with a non-MULTIBYTE server.\n\n> postgres=# \\l\n> ERROR: No such function 'pg_encoding_to_char' with the \n> specified attributes\n\nHmm. This is happening because 7.0 psql tries to display the encoding\nof each database if psql was compiled with MULTIBYTE.\n\nHere you are evidently talking to a pre-7.0 server (both because\na 7.0 server should have that function, even if the function\nrefuses to work ;-)) and because a 7.0 server does not spell the\n'No such function' error message quite that way.\n\nThis one is a little nastier. The only solution I could see that would\nguarantee backwards compatibility is for psql not to try to display the\ndatabase encoding; that doesn't seem like a win. I think there are\nsome other small incompatibilities between 7.0 psql and pre-7.0 servers\nanyway, so eliminating this one by dumbing down \\l is probably not\nthe way to proceed.\n\nSo, I'd suggest fixing the first issue (so that 7.0 MULTIBYTE psql works\nwith non-MULTIBYTE 7.0 server) but not trying to do anything about\nMULTIBYTE psql with a pre-7.0 server. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 00:34:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> SAKAIDA <[email protected]> writes:\n> > A_server : configure (in USA)\n> > B_server : configure --enable--multibyte (in Japan)\n> \n> > By using the B_server's psql,\n> \n> > postgres=# \\l\n> > ERROR: No such function 'pg_encoding_to_char' with the \n> > specified attributes\n> \n> Hmm. This is happening because 7.0 psql tries to display the encoding\n> of each database if psql was compiled with MULTIBYTE.\n> \n> Here you are evidently talking to a pre-7.0 server (both because\n> a 7.0 server should have that function, even if the function\n> refuses to work ;-)) and because a 7.0 server does not spell the\n> 'No such function' error message quite that way.\n> \n> This one is a little nastier. The only solution I could see that would\n> guarantee backwards compatibility is for psql not to try to display the\n> database encoding; that doesn't seem like a win. I think there are\n> some other small incompatibilities between 7.0 psql and pre-7.0 servers\n\n \\df and \\dd cause an ERROR: no such function 'oidvectortypes' ...\nwhen 7.0 psql talks to pre-7.0 servers. I've noticed the fact but I\ndidn't know what should be done.\nWhat kind of backward compatibity is required for psql etc..? \nAre there any documentations about it ? Of cource it's preferable\nthat client application/libraries aren't tied to a specific version of\nserver application.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 2 May 2000 14:50:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error " }, { "msg_contents": "\nTom Lane <[email protected]> wrote:\n\n> SAKAIDA <[email protected]> writes:\n> > A_server : configure (in USA)\n> > B_server : configure --enable--multibyte (in Japan)\n> \n> > By using the B_server's psql,\n> > \n> > prompt> psql -h A_server\n> > ERROR: Multibyte support must be enable to use this function\n> \n(snip)\n>\n> Seems like it might be a good idea if the non-MULTIBYTE stub versions of\n> pg_encoding_to_char() and friends were to return default values (eg,\n> \"SQL_ASCII\") instead of erroring out. A MULTIBYTE version of psql\n> really ought to be able to work with a non-MULTIBYTE server.\n\n I think so, too.\n\n> > postgres=# \\l\n> > ERROR: No such function 'pg_encoding_to_char' with the \n> > specified attributes\n> \n> Hmm. This is happening because 7.0 psql tries to display the encoding\n> of each database if psql was compiled with MULTIBYTE.\n> \n> Here you are evidently talking to a pre-7.0 server (both because\n> a 7.0 server should have that function, even if the function\n> refuses to work ;-)) and because a 7.0 server does not spell the\n> 'No such function' error message quite that way.\n\n Sorry, I have used a 6.5.3 as the A_server certainly. In the \ncase of a 7.0,\n\n prompt> export PGCLIENTENCODING='SQL_ASCII'\n prompt> psql -h A_server\n postgres=# \\l\n ERROR: Multibyte support must be enable to use this function\n\n>\n(snip)\n> This one is a little nastier. The only solution I could see that would\n> guarantee backwards compatibility is for psql not to try to display the\n> database encoding; that doesn't seem like a win. I think there are\n> some other small incompatibilities between 7.0 psql and pre-7.0 servers\n> anyway, so eliminating this one by dumbing down \\l is probably not\n> the way to proceed.\n> \n> So, I'd suggest fixing the first issue (so that 7.0 MULTIBYTE psql works\n> with non-MULTIBYTE 7.0 server) but not trying to do anything about\n> MULTIBYTE psql with a pre-7.0 server. Comments?\n\n I consider that MULTIBYTE 7.0-psql must be able to access a \npre-7.0 server. I don't think that it is so difficult to realize \nit between 6.5.x and 7.0.\n\n Problems except for \\l are \\df/\\dd which Hiroshi Inoue already \npointed out.\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Tue, 02 May 2000 15:58:31 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "> This is evidently happening because psql's initial inquiry about the\n> database encoding fails.\n> \n> Seems like it might be a good idea if the non-MULTIBYTE stub versions of\n> pg_encoding_to_char() and friends were to return default values (eg,\n> \"SQL_ASCII\") instead of erroring out. A MULTIBYTE version of psql\n> really ought to be able to work with a non-MULTIBYTE server.\n> \n> > postgres=# \\l\n> > ERROR: No such function 'pg_encoding_to_char' with the \n> > specified attributes\n> \n> Hmm. This is happening because 7.0 psql tries to display the encoding\n> of each database if psql was compiled with MULTIBYTE.\n> \n> Here you are evidently talking to a pre-7.0 server (both because\n> a 7.0 server should have that function, even if the function\n> refuses to work ;-)) and because a 7.0 server does not spell the\n> 'No such function' error message quite that way.\n> \n> This one is a little nastier. The only solution I could see that would\n> guarantee backwards compatibility is for psql not to try to display the\n> database encoding; that doesn't seem like a win. I think there are\n> some other small incompatibilities between 7.0 psql and pre-7.0 servers\n> anyway, so eliminating this one by dumbing down \\l is probably not\n> the way to proceed.\n> \n> So, I'd suggest fixing the first issue (so that 7.0 MULTIBYTE psql works\n> with non-MULTIBYTE 7.0 server) but not trying to do anything about\n> MULTIBYTE psql with a pre-7.0 server. Comments?\n\nSounds reasonable. I will fix the former but will leave the later as\nit is.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 02 May 2000 16:53:35 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "On Tue, 2 May 2000, Tom Lane wrote:\n\n> Seems like it might be a good idea if the non-MULTIBYTE stub versions of\n> pg_encoding_to_char() and friends were to return default values (eg,\n> \"SQL_ASCII\") instead of erroring out. A MULTIBYTE version of psql\n> really ought to be able to work with a non-MULTIBYTE server.\n\nI've asked Tatsuo about this a long while ago but he didn't think it was\nworth it.\n\n> I think there are some other small incompatibilities between 7.0 psql\n> and pre-7.0 servers anyway, so eliminating this one by dumbing down \\l\n> is probably not the way to proceed.\n\nThe oidvector thing is essentially a show stopper for this.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 10:26:49 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "On Tue, 2 May 2000, Hiroshi Inoue wrote:\n\n> What kind of backward compatibity is required for psql etc..? \n\nI thought psql was some sort of a reference application, so sticking to\nprehistoric technology is not necessarily required. For example, outer\njoins will simplify psql a great deal but that would really mean it stops\nworking for everybody else. Not sure.\n\nThe knowledge about the system catalogs is already pretty deep so keeping\ntrack of changes across versions is similar to the initdb problem: either\nwe prohibit version differences outright (I thought that would be too\nstrict), we let it go until it fails (something that has been eliminated\nfor initdb), or we provide compabitibility. If someone wants to do the\nlatter, be my guest.\n\n> Are there any documentations about it ?\n\nYes.\n\n> Of cource it's preferable that client application/libraries aren't\n> tied to a specific version of server application.\n\nI agree. If someone has ideas that are not too ugly to live I'm sure we\ncould agree on them.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 2 May 2000 10:34:17 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error " }, { "msg_contents": "\nPeter Eisentraut <[email protected]> wrote:\n\n> On Tue, 2 May 2000, Tom Lane wrote:\n> \n> > I think there are some other small incompatibilities between 7.0 psql\n> > and pre-7.0 servers anyway, so eliminating this one by dumbing down \\l\n> > is probably not the way to proceed.\n> \n> The oidvector thing is essentially a show stopper for this.\n\n In my client software named PGBASH-2.1, I have dealt with \"\\l\" \ncompatibility problem as following.\n\n query1= SELECT pg_database.datname ..\n pg_encoding_to_char(pg_database.encoding) as \\\"Encoding\\\" ..\n ..\n query2= SELECT pg_database.datname ..\n pg_database.encoding as \\\"Encoding\\\" ..\n ..\n\n 1. Make pset->quiet quiet mode.\n 2. Send query1.\n 3. Make pset->quiet original mode. \n 3. If error occurs then send query2.\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Tue, 02 May 2000 18:17:13 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "Peter Eisentraut wrote:\n\n> On Tue, 2 May 2000, Hiroshi Inoue wrote:\n>\n> > What kind of backward compatibity is required for psql etc..?\n>\n> I thought psql was some sort of a reference application, so sticking to\n> prehistoric technology is not necessarily required. For example, outer\n> joins will simplify psql a great deal but that would really mean it stops\n> working for everybody else. Not sure.\n>\n> The knowledge about the system catalogs is already pretty deep so keeping\n> track of changes across versions is similar to the initdb problem:\n\nYes there's another example. PostgreSQL odbc driver wasn't able to talk\nto 7.0 backend until recently due to the change int28 -> int2vector.\nNow odbc driver could talk to all the backends from 6.2.\nWe may have to hold some reference table between system catalogs\nand client appl/lib.\n\n>\n> > Are there any documentations about it ?\n>\n> Yes.\n\nUnfortunately it's always painful for me to look for a documentation.\nCould you please tell me where we could find it ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Tue, 02 May 2000 19:15:37 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "> > Seems like it might be a good idea if the non-MULTIBYTE stub versions of\n> > pg_encoding_to_char() and friends were to return default values (eg,\n> > \"SQL_ASCII\") instead of erroring out. A MULTIBYTE version of psql\n> > really ought to be able to work with a non-MULTIBYTE server.\n> \n> I think so, too.\n\nAgreed.\n\n> > This one is a little nastier. The only solution I could see that would\n> > guarantee backwards compatibility is for psql not to try to display the\n> > database encoding; that doesn't seem like a win. I think there are\n> > some other small incompatibilities between 7.0 psql and pre-7.0 servers\n> > anyway, so eliminating this one by dumbing down \\l is probably not\n> > the way to proceed.\n> > \n> > So, I'd suggest fixing the first issue (so that 7.0 MULTIBYTE psql works\n> > with non-MULTIBYTE 7.0 server) but not trying to do anything about\n> > MULTIBYTE psql with a pre-7.0 server. Comments?\n> \n> I consider that MULTIBYTE 7.0-psql must be able to access a \n> pre-7.0 server. I don't think that it is so difficult to realize \n> it between 6.5.x and 7.0.\n> \n> Problems except for \\l are \\df/\\dd which Hiroshi Inoue already \n> pointed out.\n\nWe have allowed old psql's to talk to new servers, but not new psql's\ntalking to older servers. For 7.0, I think they will have to match. \nThere really isn't a way to fix the new oidvector changes for older\nreleases, and I don't think it is worth it, personally.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 06:59:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "> Peter Eisentraut wrote:\n> \n> > On Tue, 2 May 2000, Hiroshi Inoue wrote:\n> >\n> > > What kind of backward compatibity is required for psql etc..?\n> >\n> > I thought psql was some sort of a reference application, so sticking to\n> > prehistoric technology is not necessarily required. For example, outer\n> > joins will simplify psql a great deal but that would really mean it stops\n> > working for everybody else. Not sure.\n> >\n> > The knowledge about the system catalogs is already pretty deep so keeping\n> > track of changes across versions is similar to the initdb problem:\n> \n> Yes there's another example. PostgreSQL odbc driver wasn't able to talk\n> to 7.0 backend until recently due to the change int28 -> int2vector.\n> Now odbc driver could talk to all the backends from 6.2.\n> We may have to hold some reference table between system catalogs\n> and client appl/lib.\n\nThe big reason for the change is that int2vector is now more than 8\nint2's (now 16), so there may be internal changes as well as a name\nchange for applications.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:14:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "\n\n> > > So, I'd suggest fixing the first issue (so that 7.0 MULTIBYTE psql works\n> > > with non-MULTIBYTE 7.0 server) but not trying to do anything about\n> > > MULTIBYTE psql with a pre-7.0 server. Comments?\n> > \n> > I consider that MULTIBYTE 7.0-psql must be able to access a \n> > pre-7.0 server. I don't think that it is so difficult to realize \n> > it between 6.5.x and 7.0.\n> > \n> > Problems except for \\l are \\df/\\dd which Hiroshi Inoue already \n> > pointed out.\n> \n> We have allowed old psql's to talk to new servers, but not new psql's\n> talking to older servers. For 7.0, I think they will have to match. \n> There really isn't a way to fix the new oidvector changes for older\n> releases, and I don't think it is worth it, personally.\n\n\n I don't know the details of oidvector. But new psql can talk to \nolder server.\n\nEx.1)\n\n (1) select version(); ==> ver_no[] variable\n (2) If (ver_no[0] <= '6') then\n query <== SELECT t.typname as result ..\n substr(oid8types(p.proargtypes),1,14) as arg ..\n ..\n else \n query <== SELECT t.typname as \\\"Result\\\", ..\n oidvectortypes(p.proargtypes) as \\\"Arguments\\\" ..\n ..\n (2) send query\n\n\nEx.2)\n\n (1) query1 <== SELECT t.typname as \\\"Result\\\", ..\n oidvectortypes(p.proargtypes) as \\\"Arguments\\\" ..\n ..\n query2 <== SELECT t.typname as result ..\n substr(oid8types(p.proargtypes),1,14) as arg ..\n ..\n\n (2) send query1\n (3) if an error occurs the send query2\n\n\n--\nRegard,\nSAKAIDA Masaaki -- Osaka, Japan\n", "msg_date": "Tue, 02 May 2000 22:21:36 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> I think there are some other small incompatibilities between 7.0 psql\n>> and pre-7.0 servers anyway, so eliminating this one by dumbing down \\l\n>> is probably not the way to proceed.\n\n> The oidvector thing is essentially a show stopper for this.\n\nYes, that was the other problem I was trying to recall.\n\nPerhaps someday we might consider offering views on the system tables\nthat are defined in a way that keeps old applications happy. However,\nfor something like the oidvector change there's just no way: an old app\nthat is looking at those fields is just not going to do the right thing\nfor functions or indexes with more than 8 arguments/columns, no matter\nhow we try to mask the change. I think in these cases we are better off\nto do what we did, ie, change the type name or whatever, so that those\nold apps break in a fairly obvious way rather than failing subtly or\ninfrequently.\n\nLooking at less generic answers, I suppose that psql could try to use\na 7.0-compatible query and fall back to a 6.5-compatible query if that\nfails. I imagine Peter will class this answer as \"too ugly to live\" ;-).\nCertainly I don't have any interest in doing it, either, but maybe there\nis someone out there who really needs a single psql to offer \\df ability\nwith both generations of servers. If so, that's the way to proceed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 11:39:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > Peter Eisentraut wrote:\n> > \n> > > On Tue, 2 May 2000, Hiroshi Inoue wrote:\n> > >\n> > > > What kind of backward compatibity is required for psql etc..?\n> > >\n> > > The knowledge about the system catalogs is already pretty \n> deep so keeping\n> > > track of changes across versions is similar to the initdb problem:\n> > \n> > Yes there's another example. PostgreSQL odbc driver wasn't able to talk\n> > to 7.0 backend until recently due to the change int28 -> int2vector.\n> > Now odbc driver could talk to all the backends from 6.2.\n> > We may have to hold some reference table between system catalogs\n> > and client appl/lib.\n> \n> The big reason for the change is that int2vector is now more than 8\n> int2's (now 16), so there may be internal changes as well as a name\n> change for applications.\n>\n\nYes I know the reason. It's only a example that changes of system\ncatalogs affects not only a backend application but also client libraries. \n\nUnfortunately I don't know the dependency between backend and\nclients well. In addtion current release style of PostgreSQL that\nreleases both server and clients all together seems to let people\nforget the independecy of clients.\n\nIn general client libraries/applications have to keep backward \ncompatibility as possible,so it isn't enough for clients to be able to\ntalk to the latest version of PostgreSQL servers.\n\nComments ? \n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 3 May 2000 13:02:47 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error" }, { "msg_contents": "\n\"Hiroshi Inoue\" <[email protected]> wrote:\n> \n> In general client libraries/applications have to keep backward \n> compatibility as possible,so it isn't enough for clients to be able to\n> talk to the latest version of PostgreSQL servers.\n> \n> Comments ? \n\n I agree with you. \n\n User doesn't know how the specification of the server has been \nchanged. Therefore, it is natural that user believe that new \npsql can talk to older server. Because backward compatibility is \na reasonable rule of the upgrading in generic client software.\n\n--\nRegard,\nSAKAIDA Masaaki -- Osaka, Japan\n", "msg_date": "Wed, 03 May 2000 21:30:42 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "Tom Lane writes:\n\n> Looking at less generic answers, I suppose that psql could try to use\n> a 7.0-compatible query and fall back to a 6.5-compatible query if that\n> fails. I imagine Peter will class this answer as \"too ugly to live\" ;-).\n\nUntil there is at least a glimpse of error codes I'd say it's \"too\nincorrect to live\" ...\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 4 May 2000 01:10:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "> -----Original Message-----\n> From: SAKAIDA Masaaki [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> wrote:\n> > \n> > In general client libraries/applications have to keep backward \n> > compatibility as possible,so it isn't enough for clients to be able to\n> > talk to the latest version of PostgreSQL servers.\n> > \n> > Comments ? \n> \n> I agree with you. \n> \n> User doesn't know how the specification of the server has been \n> changed. Therefore, it is natural that user believe that new \n> psql can talk to older server. Because backward compatibility is \n> a reasonable rule of the upgrading in generic client software.\n>\n\nHmm,sorry for my poor English.\nWhat I meant is a little different from yours.\nWhat I wanted was to know official opinions about backward\ncompatibility of clients(not only psql)included in PostgreSQL's\nrelease.\n\nAs for psql it isn't a generic client software as Peter mentioned.\nIt's a part of backend in a sense. At least it could talk to pre-7.0\nbackend and it isn't so critical that \\l,\\df and \\dd doesn't work for \npre-7.0 backends. I'm not so much eager to change psql myself.\n\nThere's already your pgbash that keeps backward compatibility.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 4 May 2000 10:05:08 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error" }, { "msg_contents": "\nPeter Eisentraut <[email protected]> wrote:\n\n> Tom Lane writes:\n> \n> > Looking at less generic answers, I suppose that psql could try to use\n> > a 7.0-compatible query and fall back to a 6.5-compatible query if that\n> > fails. I imagine Peter will class this answer as \"too ugly to live\" ;-).\n> \n> Until there is at least a glimpse of error codes I'd say it's \"too\n> incorrect to live\" ...\n\n (Example)\n\n A_server(6.5.3) B_server(7.0)\n | |\n ---+------------+------------+---- network\n |\n C_server(7.0) + terminal\n\n * Telnet-login is not permitted in A_server and B_server.\n * Telnet-login is permitted in C_server.(We use C_server's psql)\n\n In this case, we can not use \\l and \\df command for A_server.\n\n Should we use 6.5.3 as a C server?\n\n(If we use 6.5.3-psql, we can not use \\df command for B_server, and\n we can not display a database-encoding of B_server when \\l is used.)\n\n--\nRegard,\nSAKAIDA Masaaki -- Osaka, Japan\n", "msg_date": "Thu, 04 May 2000 10:21:54 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> What I wanted was to know official opinions about backward\n> compatibility of clients(not only psql)included in PostgreSQL's\n> release.\n\n\"Official\" opinions? I think we all just have our own opinions around\nhere :-).\n\n> As for psql it isn't a generic client software as Peter mentioned.\n> It's a part of backend in a sense. At least it could talk to pre-7.0\n> backend and it isn't so critical that \\l,\\df and \\dd doesn't work for \n> pre-7.0 backends. I'm not so much eager to change psql myself.\n\nMy opinion is that we'd be boxing ourselves in far too much to commit\nto never having any system-catalog changes across versions. So I'm\nnot particularly disturbed that functions that involve system catalog\nqueries sometimes are version-specific. We should avoid breaking\nessential functions of psql, but I don't think \\df and friends are\nessential...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 May 2000 21:47:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "\n\"Hiroshi Inoue\" <[email protected]> wrote:\n\n> > -----Original Message-----\n> > From: SAKAIDA Masaaki [mailto:[email protected]]\n> > \n> > \"Hiroshi Inoue\" <[email protected]> wrote:\n> > > \n> > > In general client libraries/applications have to keep backward \n> > > compatibility as possible,so it isn't enough for clients to be able to\n> > > talk to the latest version of PostgreSQL servers.\n> > > \n> > > Comments ? \n> > \n> > I agree with you. \n> > \n> > User doesn't know how the specification of the server has been \n> > changed. Therefore, it is natural that user believe that new \n> > psql can talk to older server. Because backward compatibility is \n> > a reasonable rule of the upgrading in generic client software.\n> >\n> \n> Hmm,sorry for my poor English.\n> What I meant is a little different from yours.\n> What I wanted was to know official opinions about backward\n> compatibility of clients(not only psql)included in PostgreSQL's\n> release.\n\n Sorry for my 10*poor English ;-)\n I understand what you meant.\n\n\n> There's already your pgbash that keeps backward compatibility.\n\n In the next release pgbash-2.1(pgbash is a tool like bash+psql),\n\n pgbash(7.0-libpq) can talk to 6.5/7.0-server.\nand pgbash(6.5-libpq) can talk to 6.5/7.0-server.\n\n pgbash will keep backward and forward compatibility as much as \npossible.\n\n--\nRegard,\nSAKAIDA Masaaki -- Osaka, Japan\n", "msg_date": "Thu, 04 May 2000 10:54:21 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > What I wanted was to know official opinions about backward\n> > compatibility of clients(not only psql)included in PostgreSQL's\n> > release.\n> \n> \"Official\" opinions? I think we all just have our own opinions around\n> here :-).\n>\n\nYes,but shouldn't there be some guidelines around here ?\nFor example,maybe\n The latest version of libpq should be able to replace older version\n of libpq without re-compilation and be able to talk to all backends\n after 6.4.\n The latest version of odbc driver should be able to replace those of\n older versions and be able talk to all backends after 6.2. \n\nI don't know about perl,jdbc,pgaccess etc....\n\n> > As for psql it isn't a generic client software as Peter mentioned.\n> > It's a part of backend in a sense. At least it could talk to pre-7.0\n> > backend and it isn't so critical that \\l,\\df and \\dd doesn't work for \n> > pre-7.0 backends. I'm not so much eager to change psql myself.\n> \n> My opinion is that we'd be boxing ourselves in far too much to commit\n> to never having any system-catalog changes across versions. So I'm\n> not particularly disturbed that functions that involve system catalog\n> queries sometimes are version-specific. We should avoid breaking\n> essential functions of psql, but I don't think \\df and friends are\n> essential...\n>\n\nI don't think \\df etc are essential for not generic client software either.\nSo I've not complained about it. I only wanted to confirm Peter and \nothers' opinions on this occasion. I apologize if my poor English\nconfused ML members. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 4 May 2000 11:47:11 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Yes,but shouldn't there be some guidelines around here ?\n> For example,maybe\n> The latest version of libpq should be able to replace older version\n> of libpq without re-compilation and be able to talk to all backends\n> after 6.4.\n\nAs indeed it can...\n\nIt could be that we should have invested additional effort to make psql\nable to execute all functions against both old and new backends, but\nit seems to me that we had more important work to do. There was\nrelatively little complaint about the fact that 6.4 psql (and all other\n6.4 libpq-based applications) were not able to talk *at all* to pre-6.4\nbackends, so I'm surprised that we're discussing whether it's acceptable\nthat a few noncritical functions aren't cross-version compatible this\ntime around.\n\nIt's also worth noting that this is a major release --- it's not\nentirely meaningless that we called it 7.0 and not 6.6. We were willing\nto break compatibility in more places than we would normally do, because\nthere were things that just had to be changed. In the real world\nI suspect that the datetime-related changes are going to cause far more\nheadaches for most users than the system catalog changes... but\nsometimes progress has a price.\n\nAll just MHO, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 May 2000 22:58:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql \\l error " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Yes,but shouldn't there be some guidelines around here ?\n> > For example,maybe\n> > The latest version of libpq should be able to replace older version\n> > of libpq without re-compilation and be able to talk to all backends\n> > after 6.4.\n> \n> As indeed it can...\n> \n> It could be that we should have invested additional effort to make psql\n> able to execute all functions against both old and new backends, but\n> it seems to me that we had more important work to do. There was\n> relatively little complaint about the fact that 6.4 psql (and all other\n> 6.4 libpq-based applications) were not able to talk *at all* to pre-6.4\n> backends, so I'm surprised that we're discussing whether it's acceptable\n\nI know it but I think it's only an evidence that PostgreSQL was used\nneither widely nor critically at that time. As for me,I didn't consider\nthe production use of PostgreSQL at all at that time.\nNow PostgreSQL is so much better than it was at that time and it\nis and would be used widely and critically.\nNow would it be allowed that libpq couldn't even talk to the previous\nversion ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 4 May 2000 13:44:58 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error " }, { "msg_contents": "Hiroshi Inoue writes:\n\n> I don't think \\df etc are essential for not generic client software either.\n> So I've not complained about it. I only wanted to confirm Peter and \n> others' opinions on this occasion.\n\nIf someone wants to provide a reasonable fix for this situation I wouldn't\nobject. If too many people end up complaining I'll probably end up doing\nit myself. ;)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 4 May 2000 18:36:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: psql \\l error " } ]
[ { "msg_contents": "The following patch corrects a problem in the man page fix script in the \nFAQ_SCO documentation and adds the commands to create pre-formatted versions \nof the manual pages on UnixWare 7.0.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Tue, 02 May 2000 00:37:42 -0400", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "I must be tired ... I sent the last message out without a subject and used the \nwrong format for the diff file. Sigh...\n\nThe following patch corrects a problem in the man page fix script in the \nFAQ_SCO documentation and adds the commands to create pre-formatted versions \nof the manual pages on UnixWare 7.0.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Tue, 02 May 2000 00:44:00 -0400", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Update to FAQ_SCO." }, { "msg_contents": "Applied.\n\n-- Start of PGP signed section.\n> I must be tired ... I sent the last message out without a subject and used the \n> wrong format for the diff file. Sigh...\n> \n> The following patch corrects a problem in the man page fix script in the \n> FAQ_SCO documentation and adds the commands to create pre-formatted versions \n> of the manual pages on UnixWare 7.0.\n> \nContent-Description: uw7-20000501.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n-- End of PGP section, PGP failed!\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 06:54:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update to FAQ_SCO." } ]
[ { "msg_contents": "Sorry for the late reply, but lately I've had little time to sort things\nout.\n\nI know that under JDK1.2, it compiles fine. For some reason my JDK-1.1.7\nhas trashed itself, and I've been trying to get that working again.\n\nI think Kaffe is based around 1.1.x, but I'm not sure. I was thinking of\ninstalling it as well to see how it goes.\n\nI've also got some misplaced patches to find.\n\nHopefully things will be quiet tonight for me to get things sorted :-(\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Tuesday, May 02, 2000 3:00 AM\nTo: Bruce Momjian\nCc: PostgreSQL-development; PostgreSQL-interfaces\nSubject: Re: [HACKERS] Request for 7.0 JDBC status\n\n\n> I am unable to compile the java code with kaffe. Can anyone compile\n> jdbc under 7.0?\n\nYup. Seems some form of jdk-1.2 works for me. But my default\n/usr/bin/javac (on an old RH5.2 system), something called \"pizza\",\ndoes not.\n\n> Can you email me that java files that are produced by\n> the compile. I need the *.jar file, and the *.class files.\n\nDo you still need them? I can send what I built, but I'm pretty sure\nthat Peter Mount has a fresh package built and available on some web\nsite of his...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 2 May 2000 08:51:38 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Request for 7.0 JDBC status" } ]
[ { "msg_contents": "As 1.2 seems to be working, I'll put a 1.2 jar file on the site asap. I\nmay try to sneak a compile during the day (depending on when a big delivery\narrives).\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Lamar Owen [mailto:[email protected]]\nSent: Tuesday, May 02, 2000 3:30 AM\nTo: Thomas Lockhart\nCc: Bruce Momjian; PostgreSQL-development; PostgreSQL-interfaces\nSubject: Re: [HACKERS] Request for 7.0 JDBC status\n\n\nThomas Lockhart wrote:\n> > Can you email me that java files that are produced by\n> > the compile. I need the *.jar file, and the *.class files.\n \n> Do you still need them? I can send what I built, but I'm pretty sure\n> that Peter Mount has a fresh package built and available on some web\n> site of his...\n\nIf so, I need them (Java 1 and 2) for the RPM's. I don't do Java -- and\nthe RPM's have historically packaged the .jar files as pulled verbatim\nfrom retep.org.uk. I haven't distributed RC2 RPM's yet for partially\nthat reason\n-- the other part is the lack of an RC2-tested alpha patch.\n\nNOTE:\nI have gotten good response and patches to the RPM's from a number of\npeople this go around -- and it is ENCOURAGING!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 2 May 2000 09:00:49 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Request for 7.0 JDBC status" } ]
[ { "msg_contents": "The jar file isn't built automatically in 7.0. You'll have to use:\n\n\tmake jdbc2 jar\n\nThe reason for this is partly on how make works, and partly because of\nthe kludge we have for handling the different API versions (like\nJDBC1.1, JDBC2 etc)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Tuesday, May 02, 2000 4:27 AM\nTo: Lamar Owen\nCc: Bruce Momjian; PostgreSQL-development; PostgreSQL-interfaces\nSubject: Re: [HACKERS] Request for 7.0 JDBC status\n\n\n> If so, I need them (Java 1 and 2) for the RPM's. I don't do Java --\nand\n> the RPM's have historically packaged the .jar files as pulled verbatim\n> from retep.org.uk. I haven't distributed RC2 RPM's yet for partially\n> that reason\n\n\"I don't do Java\" can change fairly easily; just pick up the java\ntarball from blackdown.org or sun.com, untar it into /usr/local, then\nset your path via\n\n set path=(/usr/local/jdk-xxx $path)\n\nGo into src/interfaces/jdbc and type\n\n make jdbc2\n\nthen grab the jar file(s).\n\notoh, how close are you Peter (hope you see this; I've blown away\nenough email to have lost your address) to posting a built jar file or\nwhatever is usually provided? Should we post this somewhere on\npostgresql.org to help out? Should I post my recently built stuff?\n\n> NOTE:\n> I have gotten good response and patches to the RPM's from a number of\n> people this go around -- and it is ENCOURAGING!\n\nGreat!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 2 May 2000 09:03:25 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> The jar file isn't built automatically in 7.0. You'll have to use:\n> \n> \tmake jdbc2 jar\n> \n> The reason for this is partly on how make works, and partly because of\n> the kludge we have for handling the different API versions (like\n> JDBC1.1, JDBC2 etc)\n\nOops, my book says it will generate a postgresql.jar file. If it isn't\ngoing to do that, I will have to change my book.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:01:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> The jar file isn't built automatically in 7.0. You'll have to use:\n> make jdbc2 jar\n\n?? From fresh sources afaik:\n\n[postgres@golem jdbc]$ make jdbc2 jar\n(echo \"package org.postgresql;\" ;\\\n echo \"public class DriverClass {\" ;\\\n echo \"public static String\nconnectClass=\\\"org.postgresql.jdbc2.Connection\\\";\" ;\\\n echo \"}\" \\\n) >org/postgresql/DriverClass.java\nmake[1]: Entering directory `/opt/postgres/pgsql/src/interfaces/jdbc'\njavac -g org/postgresql/DriverClass.java\n...\njavac -g org/postgresql/jdbc2/CallableStatement.java\nNote: org/postgresql/jdbc2/CallableStatement.java uses or overrides a\ndeprecated API. Recompile with \"-deprecation\" for details.\n1 warning\njar -c0f postgresql.jar `find org/postgresql -name \"*.class\" -print` \\\n org/postgresql/errors.properties\norg/postgresql/errors_fr.properties\norg/postgresql/errors_nl.properties\n------------------------------------------------------------\nThe JDBC driver has now been built. To make it available to\n...\nTo build the CORBA example (requires Java2):\n make corba\n------------------------------------------------------------\n\nmake[1]: Leaving directory `/opt/postgres/pgsql/src/interfaces/jdbc'\nmake: *** No rule to make target `jar'. Stop.\n\n\nSeems a jar file does get built with \"make jdbc2\", but I'm not sure it\nis the right one (being *much* more advanced than Lamar in the Java\nworld, I *make* Java, but don't actually *use* Java :)) ;)\n\nAs an aside, I thought Peter might find it interesting that we do have\na fairly large Java app at my work (JPL) to manage and build\nconfigurations for a fancy hard real-time system for astronomical\noptical interferometers. The app happens to use Postgres as a backend\nfor most deliveries ;) Keck Observatory will need it working with\nSybase since they long ago standardized on that...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 02 May 2000 11:07:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" } ]
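As a sketch of the kind of Makefile fixup the "No rule to make target `jar'" failure above suggests, a standalone `jar` target could be added alongside `jdbc2`. This is a hypothetical fragment, not the actual src/interfaces/jdbc Makefile; the jar invocation and file list are simply copied from the build output quoted in the thread:

```makefile
# Hypothetical sketch only -- not the shipped Makefile.
# Gives `jar' its own target so `make jdbc2 jar' no longer fails;
# the jar command mirrors the one shown in the build output above.
jar: postgresql.jar

postgresql.jar:
	jar -c0f postgresql.jar `find org/postgresql -name "*.class" -print` \
		org/postgresql/errors.properties \
		org/postgresql/errors_fr.properties \
		org/postgresql/errors_nl.properties
```

Note that as written the rule has no prerequisites, so make will consider postgresql.jar up to date once it exists; a real Makefile would list the class files (or the `jdbc2` target) as dependencies, much as Peter proposes later in this thread.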
[ { "msg_contents": "Yes, the README does need updating. CHANGELOG should be up to date. If\nnot, I'll have to re-commit it.\n\nI'm hoping to have the next three evenings free...\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Tuesday, May 02, 2000 6:16 AM\nTo: Lamar Owen; Bruce Momjian; PostgreSQL-development;\nPostgreSQL-interfaces; [email protected]\nSubject: Re: [HACKERS] Request for 7.0 JDBC status\n\n\n> otoh, how close are you Peter (hope you see this; I've blown away\n> enough email to have lost your address) to posting a built jar file or\n> whatever is usually provided? Should we post this somewhere on\n> postgresql.org to help out? Should I post my recently built stuff?\n\nAh, found Peter's e-mail address in an obvious place (the jdbc source\ntree).\n\nAnother question for Peter: would it be possible to update the README\nfile in the source tree, and other ancillary files? I know you've been\nvery busy, but even a brief fixup to adjust dates and version numbers\nwould be helpful for 7.0.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 2 May 2000 09:11:08 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> Yes, the README does need updating. CHANGELOG should be up to date. If\n> not, I'll have to re-commit it.\n> \n> I'm hoping to have the next three evenings free...\n\nNot to bug you Peter, but 7.0 may not wait three days before release. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:02:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> Not to bug you Peter, but 7.0 may not wait three days before release.\n\nI would vote that this is important enough that it should wait, but no\none has raised the issue until now so we haven't discussed it. The\ndocs may or may not be completed within the next day (still jet-lagged\nfrom vacation, but waking up at 3am does leave some extra time in the\nmorning, eh?), and if they stretch an extra day which is certainly\npossible then we are only talking about an extra day for this. No big\n> deal in the grand scheme of things...\n> \n> Peter, is there some testing that could/should be done with the new\n> driver (by others) in the meantime, or is it pretty likely to be\n> reasonably hashed out?\n\nJust to add to my earlier report, here is the kaffe 1.05 compile\nfailure. What strikes me as odd is that Connection.java complains\nbecause it can't find org/postgresql/Field, but if I try to compile\nField.java complains it can't find Connection.java.\n\nNow, having the 6.5.3 JAR file, I can compile the 6.5.3 postgresql java\ndriver because I have the jar file to back up the unreferenced symbols. \nThe 7.0 driver uses org.postgresql, which is not in the 6.5.3 JAR file,\nso it fails.\n\nThe java IRC channel says kaffe isn't very good, so maybe I shouldn't be\nworried about it. They also said mutually-referencing java files are\nnot a good idea either.\n\nSeems I may be able to modify the import lines in the java file to use\nthe 6.5.3 JAR file to get enough files compiled to compile the rest,\nthen recompile the entire thing.\n\nUsing the jar file compiled with Sun java works fine. I can connect to\nthe database and run my program.\n\n---------------------------------------------------------------------------\n\n#$ gmake jdbc2\n(echo \"package org.postgresql;\" ;\\\n echo \"public class DriverClass {\" ;\\\n echo \"public static String connectClass=\\\"org.postgresql.jdbc2.Connection\\\";\" ;\\\n echo \"}\" \\\n) >org/postgresql/DriverClass.java\ngmake[1]: Entering directory `/var/local/src/pgsql/CURRENT/pgsql/src/interfaces/jdbc'\njavac -g org/postgresql/DriverClass.java\njavac -g org/postgresql/Connection.java\norg/postgresql/Connection.java:1: Can't find class \"org/postgresql/Field\"\norg/postgresql/Connection.java:529: Can't find class \"Fastpath\"\ngmake[1]: *** [org/postgresql/Connection.class] Error 1\ngmake[1]: Leaving directory `/var/local/src/pgsql/CURRENT/pgsql/src/interfaces/jdbc'\ngmake: *** [jdbc2] Error 2\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:40:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Not to bug you Peter, but 7.0 may not wait three days before release.\n\n> I would vote that this is important enough that it should wait, but no\n> one has raised the issue until now so we haven't discussed it.\n\nMy two cents: I wouldn't object to postponing release a day or so for\nit, *but* if what we're getting is an un-beta-tested driver then my\nlevel of enthusiasm drops considerably. I'd rather say \"it'll get\nfixed in 7.0.1, after a decent testing interval for the new driver\".\n\nRelevant question: how well does the JDBC code that's in CVS now\nwork with 7.0? If the answer is \"hardly at all\" then a new driver\nis probably better even if it has lurking bugs. If the answer is\n\"pretty well\" then again I'd be inclined to ship what we've got.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 12:02:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status " }, { "msg_contents": "At 12:02 PM 5/2/00 -0400, Tom Lane wrote:\n\n>Relevant question: how well does the JDBC code that's in CVS now\n>work with 7.0? If the answer is \"hardly at all\" then a new driver\n>is probably better even if it has lurking bugs. If the answer is\n>\"pretty well\" then again I'd be inclined to ship what we've got.\n\nOne of our OpenACS (until recently ACS/pg) crew has gotten the\nArsDigita webmail software running with PG7.0 and JDBC, apparently\nwithout problems.\n\nI don't know which beta he's running, though...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 02 May 2000 09:23:07 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status " }, { "msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> >> Not to bug you Peter, but 7.0 may not wait three days before release.\n> \n> > I would vote that this is important enough that it should wait, but no\n> > one has raised the issue until now so we haven't discussed it.\n> \n> My two cents: I wouldn't object to postponing release a day or so for\n> it, *but* if what we're getting is an un-beta-tested driver then my\n> level of enthusiasm drops considerably. I'd rather say \"it'll get\n> fixed in 7.0.1, after a decent testing interval for the new driver\".\n> \n> Relevant question: how well does the JDBC code that's in CVS now\n> work with 7.0? If the answer is \"hardly at all\" then a new driver\n> is probably better even if it has lurking bugs. If the answer is\n> \"pretty well\" then again I'd be inclined to ship what we've got.\n\nAs far as I know, no one has it yet, except Thomas. The driver must\nhave a domain of org.postgresql or it is the old version. Only since I\ninstalled Peter's Makefile last week did it become install-able.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 12:43:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Relevant question: how well does the JDBC code that's in CVS now\n>> work with 7.0? If the answer is \"hardly at all\" then a new driver\n>> is probably better even if it has lurking bugs. If the answer is\n>> \"pretty well\" then again I'd be inclined to ship what we've got.\n\n> As far as I know, no one has it yet, except Thomas. The driver must\n> have a domain of org.postgresql or it is the old version. Only since I\n> installed Peter's Makefile last week did it become install-able.\n\nSo the version currently in CVS has seen hardly any testing either?\nMan, you really know how to make a guy feel comfortable :-(\n\nGiven that, we might as well let Peter have the extra day or two\nto bring the CVS version to the best state he can.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 May 2000 13:27:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Relevant question: how well does the JDBC code that's in CVS now\n> >> work with 7.0? If the answer is \"hardly at all\" then a new driver\n> >> is probably better even if it has lurking bugs. If the answer is\n> >> \"pretty well\" then again I'd be inclined to ship what we've got.\n> \n> > As far as I know, no one has it yet, except Thomas. The driver must\n> > have a domain of org.postgresql or it is the old version. Only since I\n> > installed Peter's Makefile last week did it become install-able.\n> \n> So the version currently in CVS has seen hardly any testing either?\n> Man, you really know how to make a guy feel comfortable :-(\n\nUp to then, it was using the code in postgresql. Now it is using\norg/postgresql directory, and they are different. postgresql is the\n6.5.* driver, and org/postgresql is the 7.0 driver.\n\n> Given that, we might as well let Peter have the extra day or two\n> to bring the CVS version to the best state he can.\n\nYea, it had that effect on me too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 13:37:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> My two cents: I wouldn't object to postponing release a day or so for\n> it, *but* if what we're getting is an un-beta-tested driver then my\n> level of enthusiasm drops considerably. I'd rather say \"it'll get\n> fixed in 7.0.1, after a decent testing interval for the new driver\".\n\nBoth versions of JDBC are in the Postgres source code tree. The newer\nversion has more standard conventions for Java namespaces (right\nterm??) and improvements in conformance to later versions of the JDBC\nspec.\n\nBasically the stuff is there already, and we just have a few file\nupdates to get it finalized. I'd be surprised if it is not ready by the\nweekend, so it shouldn't be much of an issue.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 03 May 2000 04:21:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" } ]
[ { "msg_contents": "I just received this from Constantin. Seems he has a new pgaccess\nversion.\n\nI have applied it. Fortunately, this time, rather than dumping the\nwhole tarball, I did a context diff against our current distribution,\nand applied the diff instead of the tarball. That way, I can see the\nexact changes I am applying.\n\nThe release during 6.5.* had a reorganized directory tree, which made\nthis difficult and error-prone.\n\nThis will be in 7.0. Constantin, I see you use vtcl. Great tool.\n\n---------------------------------------------------------------------------\n\n> Here it is the latest release of PgAccess.\n> A ugly bug in query opening procedures in forms has been fixed.\n> \n> Hope that it will catch the 7.0 final release.\n> \n> Best regards,\n> Constantin Teodorescu\n> FLEX Consulting Braila, ROMANIA\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:11:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PgAccess 0.98.6 , a ugly bug fixed" } ]
[ { "msg_contents": "I could get make jdbc2 to build the jar file, as it involves simply\nchanging the rules.\n\nie, currently we have the following:\n\njdbc2: ...rule...\n\taction...\n\nI would have to change it to something like:\n\njdbc2: realjdbc2 jar\n\nrealjdbc2: ...rule...\n\taction...\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Tuesday, May 02, 2000 12:01 PM\nTo: Peter Mount\nCc: 'Thomas Lockhart'; Lamar Owen; PostgreSQL-development;\nPostgreSQL-interfaces\nSubject: Re: [HACKERS] Request for 7.0 JDBC status\n\n\n> The jar file isn't built automatically in 7.0. You'll have to use:\n> \n> \tmake jdbc2 jar\n> \n> The reason for this is partly on how make works, and partly because of\n> the kludge we have for handling the different API versions (like\n> JDBC1.1, JDBC2 etc)\n\nOops, my book says it will generate a postgresql.jar file. If it isn't\ngoing to do that, I will have to change my book.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Tue, 2 May 2000 12:30:40 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Request for 7.0 JDBC status" }, { "msg_contents": "> I could get make jdbc2 to build the jar file, as it involves simply\n> changing the rules.\n> \n> ie, currently we have the following:\n> \n> jdbc2: ...rule...\n> \taction...\n> \n> I would have to change it to something like:\n> \n> jdbc2: realjdbc2 jar\n> \n> realjdbc2: ...rule...\n> \taction...\n\nSeems a JAR file is better than copying the class files. The README has\nto be updated though. Also, your web page should be more prominent. \nPeople may prefer downloading the JAR file themselves rather than do the\ncompile if they have kaffe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 May 2000 07:42:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Request for 7.0 JDBC status" } ]