[
{
"msg_contents": "Hi,\n\nWe have problems with backend processes that close the channel because of\npalloc() failures. When an INSERT statement fails, the backend reports an\nerror (e.g. `Cannot insert a duplicate key into a unique index') and\nallocates a few bytes more memory. The next SQL statement that fails\ncauses the backend to allocate more memory again, etc. until we have no\nmore virtual memory left. Is this a bug?\nWe are using postgres 6.4.2 on FreeBSD 2.2.8.\n\nIt also works with psql:\n\ntoy=> create table mytable (i integer unique);\nNOTICE: CREATE TABLE/UNIQUE will create implicit index mytable_i_key for\ntable mytable\nCREATE\ntoy=> \\q\n\n~ $ # now do a lot of inserts that cause error messages:\n~ $ while true; do echo \"INSERT INTO mytable VALUES (1);\"; done | psql toy\nINSERT INTO mytable VALUES (1);\nERROR: Cannot insert a duplicate key into a unique index\n...quite a lot of these messages\nINSERT INTO mytable VALUES (1);\nERROR: Cannot insert a duplicate key into a unique index\nINSERT INTO mytable VALUES (1);\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nHmm, why does the backend allocate more and more memory with each failed\nINSERT ?\nAny clues?\n\nThanks,\nMirko\n\n\n\n",
"msg_date": "Wed, 12 May 1999 12:48:29 +0200 (MET DST)",
"msg_from": "Mirko Kaffka <[email protected]>",
"msg_from_op": true,
"msg_subject": "backend dies suddenly after a lot of error messages"
},
{
"msg_contents": "A bug report on this was filled out against the 6.3 release as well.\nDon't know the status of it, however :(\n\nMirko Kaffka wrote:\n> \n> Hi,\n> \n> We have problems with backend processes that close the channel because of\n> palloc() failures. When an INSERT statement fails, the backend reports an\n> error (e.g. `Cannot insert a duplicate key into a unique index') and\n> allocates a few bytes more memory. The next SQL statement that fails\n> causes the backend to allocate more memory again, etc. until we have no\n> more virtual memory left. Is this a bug?\n> We are using postgres 6.4.2 on FreeBSD 2.2.8.\n> \n> It also works with psql:\n> \n> toy=> create table mytable (i integer unique);\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index mytable_i_key for\n> table mytable\n> CREATE\n> toy=> \\q\n> \n> ~ $ # now do a lot of inserts that cause error messages:\n> ~ $ while true; do echo \"INSERT INTO mytable VALUES (1);\"; done | psql toy\n> INSERT INTO mytable VALUES (1);\n> ERROR: Cannot insert a duplicate key into a unique index\n> ...quite a lot of these messages\n> INSERT INTO mytable VALUES (1);\n> ERROR: Cannot insert a duplicate key into a unique index\n> INSERT INTO mytable VALUES (1);\n> \n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> Hmm, why does the backend allocate more and more memory with each failed\n> INSERT ?\n> Any clues?\n> \n> Thanks,\n> Mirko\n\n-- \n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n",
"msg_date": "Wed, 12 May 1999 07:28:17 -0400",
"msg_from": "Thomas Reinke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] backend dies suddenly after a lot of error messages"
},
{
"msg_contents": "Mirko Kaffka <[email protected]> writes:\n> We have problems with backend processes that close the channel because of\n> palloc() failures. When an INSERT statement fails, the backend reports an\n> error (e.g. `Cannot insert a duplicate key into a unique index') and\n> allocates a few bytes more memory. The next SQL statement that fails\n> causes the backend to allocate more memory again, etc. until we have no\n> more virtual memory left. Is this a bug?\n\nYeah, I'd say so --- all the memory used should get freed at transaction\nend, but evidently it isn't happening.\n\n> We are using postgres 6.4.2 on FreeBSD 2.2.8.\n\nI still see it with 6.5-current sources. Will take a look.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 11:13:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages "
},
{
"msg_contents": ">\n> Mirko Kaffka <[email protected]> writes:\n> > We have problems with backend processes that close the channel because of\n> > palloc() failures. When an INSERT statement fails, the backend reports an\n> > error (e.g. `Cannot insert a duplicate key into a unique index') and\n> > allocates a few bytes more memory. The next SQL statement that fails\n> > causes the backend to allocate more memory again, etc. until we have no\n> > more virtual memory left. Is this a bug?\n>\n> Yeah, I'd say so --- all the memory used should get freed at transaction\n> end, but evidently it isn't happening.\n>\n> > We are using postgres 6.4.2 on FreeBSD 2.2.8.\n>\n> I still see it with 6.5-current sources. Will take a look.\n\n I remember to have taken some but haven't found all the\n places. I think there's still something in tcop where the\n querytree list is malloc()'d.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 18:34:16 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> Yeah, I'd say so --- all the memory used should get freed at transaction\n>> end, but evidently it isn't happening.\n\n> I remember to have taken some but haven't found all the\n> places. I think there's still something in tcop where the\n> querytree list is malloc()'d.\n\nI saw that yesterday --- for no particularly good reason, postgres.c\nwants to deal with the query list as an array rather than a list;\nit goes to great lengths to convert the lists it's given into an array,\nwhich it has to be able to resize, etc etc. I was thinking of ripping\nall that out and just using a palloc'd list. At the time I didn't have\nany justification for it except code beautification, which isn't a good\nenough reason to be changing code late in beta... but a memory leak\nis...\n\nHowever, the leakage being complained of seems to be several kilobytes\nper failed command, which is much more than that one malloc usage can\nbe blamed for. Any other thoughts? I was wondering if maybe a whole\npalloc context somewhere is getting lost; not sure where to look though.\nOne thing I did find was that leakage occurs very early. You can feed\nthe system commands that will fail in parsing, like say\n\tgarbage;\nand the memory usage still rises with each one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 13:44:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages "
},
{
"msg_contents": ">> Yeah, I'd say so --- all the memory used should get freed at transaction\n>> end, but evidently it isn't happening.\n>> \n>> I still see it with 6.5-current sources. Will take a look.\n\nAh-ha, I think I see it: AtCommit_Memory releases memory in the blank\nportal (by doing EndPortalAllocMode()). AtAbort_Memory forgets to do so.\nWill commit this fix momentarily.\n\n> I remember to have taken some but haven't found all the\n> places. I think there's still something in tcop where the\n> querytree list is malloc()'d.\n\nThat is a relatively minor leak, compared to leaking *all* memory\nallocated in the failed transaction, which is what it was doing until\nnow :-(. But I think I will fix it anyway ... the code is awfully\nugly, and it is still a leak.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 20:33:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages "
},
{
"msg_contents": "At 08:33 PM 5/12/99 -0400, Tom Lane wrote:\n\n>That is a relatively minor leak, compared to leaking *all* memory\n>allocated in the failed transaction, which is what it was doing until\n>now :-(. But I think I will fix it anyway ... the code is awfully\n>ugly, and it is still a leak.\n\nI'm a lurker, a compiler writer who has just begun using\nPostgres as the database engine behind a bird population\ntracking project I'm putting up on the web on my own\ntime, on a linux box running AOLServer and, for now at\nleast, postgres.\n\nIn my researching postgres vs. paying Oracle (which didn't\nseem too bad until I learned about their extra fees for\nweb sites and multiple-CPU boxes) vs. mySql etc, the one\nbiggest complaint I've run across when talking to people\nrunning web sites backed by Postgres has been that the\nback end starts dying after weeks ... days ... hours\ndepending on the type of site.\n\nOn questioning folks, it seemed pretty clear that in \nsome of these cases significant memory leaking was\ncausing the system to run out of memory.\n\nAnd last week I managed to generate long sequences\nof SQL that would eat available memory in about\n15 minutes. I've been lurking around a couple of\nthese postgres lists trying to figure out whether\nor not it was a known problem before making noise\nabout it.\n\nSo, imagine my pleasure at seeing this short thread\non the problem and, even better, the solution!\n\nWell, if not the (only) leak, at least one very,\nvery serious memory leak. Just how many kb were\nbeing leaked for each failed transaction?\n\nI think you may've just slammed a stake through the \nheart of a very significant bug causing a lot of\npeople seemingly unexplainable flakey back-end\nbehavior...this fix alone may do a lot to erase\nthe impression some have that postgres is not\nreliable enough to support any web site based\non a large database with lots of transactions.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Wed, 12 May 1999 18:09:10 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error\n messages"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> I think you may've just slammed a stake through the \n> heart of a very significant bug\n\nThanks for the compliment :-). You might actually be right;\nthis bug could go a long way towards explaining why some people\nfind Postgres very reliable and others don't. The first group's\napps don't tend to provoke any SQL errors, and/or don't try to\ncontinue running with the same backend after an error.\n\n> And last week I managed to generate long sequences\n> of SQL that would eat available memory in about\n> 15 minutes. I've been lurking around a couple of\n> these postgres lists trying to figure out whether\n> or not it was a known problem before making noise\n> about it.\n\nWe're aware of a number of memory-leak type problems, although\nmost of them are just temporary leakage situations (the memory\nwill eventually be freed, if you have enough memory to complete\nthe transaction...). I'm hoping that we can make a serious dent\nin that class of problem for release 6.6.\n\nI believe that all the Postgres developers have a bedrock commitment\nto making the system as stable and bulletproof as we can. But it\ntakes time to root out the subtler bugs. I got lucky tonight ;-)\n\n> Well, if not the (only) leak, at least one very,\n> very serious memory leak. Just how many kb were\n> being leaked for each failed transaction?\n\nI was measuring about 4K per cycle for a trivial parsing error,\nlike feeding \"garbage;\" to the backend repeatedly. It could be\na *lot* more depending on how much work got done before the error\nwas detected. Worst case you might lose megabytes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 21:39:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages "
},
{
"msg_contents": "At 09:39 PM 5/12/99 -0400, Tom Lane wrote:\n\n>Thanks for the compliment :-). You might actually be right;\n>this bug could go a long way towards explaining why some people\n>find Postgres very reliable and others don't. The first group's\n>apps don't tend to provoke any SQL errors, and/or don't try to\n>continue running with the same backend after an error.\n\nAOLServer, in particular, will keep a backend alive \nforever unless the site goes idle for (typically)\nsome minutes. In this way, no overhead for backend\nstart-up is suffered by a busy site. AOLServer manages\nthe threads associated with particular http connections,\nwhile the (typically) tcl scripts servicing the connections\nask for, use, and release database handles (the tcl\ninterpreter runs inside the server) . Each handle\nis a connection to the db backend, and these connections\nget passed around by the server to various threads as\nthey're released by tcl \"ns_db releasehandle\" calls.\n\nSo ... ANY permament memory leak by the backend will tear things\ndown eventually. \"Randomly\", from the sysadmin's point of view.\n\nDon't feel bad, I know of one very busy Oracle site\nthat kicks things down once every 24 hrs in the\ndead of night for fear of cumulative leaks or, well,\nany of a number of imaginable db problems :)\n\n>We're aware of a number of memory-leak type problems, although\n>most of them are just temporary leakage situations (the memory\n>will eventually be freed, if you have enough memory to complete\n>the transaction...).\n\nRelatively harmless in the environment I'm describing...\n\n> I'm hoping that we can make a serious dent\n>in that class of problem for release 6.6.\n\nStill worth getting rid of, though!\n\n>I believe that all the Postgres developers have a bedrock commitment\n>to making the system as stable and bulletproof as we can.\n\nYes, I've gathered that in my reading of this group over the\nlast three days, and in my reading of older posts.\n\nAnd y'all have fixed that other horrible bug from the\nweb service POV: table-level locking. Ugh. I'd given up\non using postgres for my project until I learned that 6.5\ndoesn't suffer from this limitation.\n\n>I was measuring about 4K per cycle for a trivial parsing error,\n>like feeding \"garbage;\" to the backend repeatedly. It could be\n>a *lot* more depending on how much work got done before the error\n>was detected. Worst case you might lose megabytes...\n\nMemory's cheap, but not THAT cheap :)\n\nOK, I'll go back to lurking again. Keep up the good work,\nfolks.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Wed, 12 May 1999 19:03:56 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error\n messages"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> >> Yeah, I'd say so --- all the memory used should get freed at transaction\n> >> end, but evidently it isn't happening.\n> \n> > I remember to have taken some but haven't found all the\n> > places. I think there's still something in tcop where the\n> > querytree list is malloc()'d.\n> \n> I saw that yesterday --- for no particularly good reason, postgres.c\n> wants to deal with the query list as an array rather than a list;\n> it goes to great lengths to convert the lists it's given into an array,\n> which it has to be able to resize, etc etc. I was thinking of ripping\n> all that out and just using a palloc'd list. At the time I didn't have\n> any justification for it except code beautification, which isn't a good\n> enough reason to be changing code late in beta... but a memory leak\n> is...\n\nI also thought the array usage we very strange,�and I could not figure\nout why they used it. I figured as I learned more about the backend, I\nwould understnd their wisdom, but at this point, I think it was just\nsloppy code.\n\n> However, the leakage being complained of seems to be several kilobytes\n> per failed command, which is much more than that one malloc usage can\n> be blamed for. Any other thoughts? I was wondering if maybe a whole\n> palloc context somewhere is getting lost; not sure where to look though.\n> One thing I did find was that leakage occurs very early. You can feed\n> the system commands that will fail in parsing, like say\n> \tgarbage;\n> and the memory usage still rises with each one.\n\nGee, it garbage doesn't get very far into the parser, does it. It never\nmakes it out of the grammar. It may be the 8k query buffer?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 22:29:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages"
},
{
"msg_contents": "> I was measuring about 4K per cycle for a trivial parsing error,\n> like feeding \"garbage;\" to the backend repeatedly. It could be\n> a *lot* more depending on how much work got done before the error\n> was detected. Worst case you might lose megabytes...\n\nThe strange thing is that we don't usually hear about crash/leaks very\nmuch. We just started hearing about it more in the past week or so.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 22:44:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend dies suddenly after a lot of error messages"
},
{
"msg_contents": "> > We have problems with backend processes that close the channel because of\n> > palloc() failures. When an INSERT statement fails, the backend reports an\n> > error (e.g. `Cannot insert a duplicate key into a unique index') and\n> > allocates a few bytes more memory. The next SQL statement that fails\n> > causes the backend to allocate more memory again, etc. until we have no\n> > more virtual memory left. Is this a bug?\n> > We are using postgres 6.4.2 on FreeBSD 2.2.8.\n> > \n> > It also works with psql:\n> > \n> > toy=> create table mytable (i integer unique);\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index mytable_i_key for\n> > table mytable\n> > CREATE\n> > toy=> \\q\n> > \n> > ~ $ # now do a lot of inserts that cause error messages:\n> > ~ $ while true; do echo \"INSERT INTO mytable VALUES (1);\"; done | psql toy\n> > INSERT INTO mytable VALUES (1);\n> > ERROR: Cannot insert a duplicate key into a unique index\n> > ...quite a lot of these messages\n> > INSERT INTO mytable VALUES (1);\n> > ERROR: Cannot insert a duplicate key into a unique index\n> > INSERT INTO mytable VALUES (1);\n> > \n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally before or\n> > while processing the request.\n> > We have lost the connection to the backend, so further processing is\n> > impossible. Terminating.\n> > \n> > Hmm, why does the backend allocate more and more memory with each failed\n> > INSERT ?\n> > Any clues?\n\nThere was a bug in pre-6.5 versions that caused elog failure not to\nrelease their memory. There is still a small leak for elogs, but it is\nonly a few bytes. You should find this is fixed in 6.5.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:25:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] backend dies suddenly after a lot of error messages"
}
]
[
{
"msg_contents": "\nSome items completed and removed.\n\n---------------------------------------------------------------------------\n\n\nDefault of '' causes crash in some cases\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\ncreate table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\nSELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nCREATE OPERATOR *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nCLUSTER failure if vacuum has not been performed in a while\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nFix leak for expressions?, aggregates?\nImprove LIMIT processing by using index to limit rows processed\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nnodeResults.c and parse_clause.c give compiler warnings\nMove LIKE index optimization handling to the optimizer?\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\nHave hashjoins use portals, not fixed-size memory\nDROP TABLE leaves INDEX file descriptor open\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\n\nDo we want pg_dump -z to be the default?\npg_dump of groups fails\npg_dump -o -D does not work, and can not work currently, generate error?\npg_dump does not preserver NUMERIC precision, psql \\d should show precision\ndumping out sequences should not be counted in pg_dump display\n\nCREATE VIEW ignores DISTINCT?\nORDER BY mixed with DISTINCT causes duplicates\nCREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n\tFROM t1 GROUP BY b; SELECT count FROM v1; fails\n\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nMulti-segment indexes?\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n\nMake Serial its own type?\nAdd support for & operator\nstore binary-compatible type information in the system somewhere \nadd ability to add comments to system tables using table/colname combination\nprocess const=const parts of OR clause in separate pass\nmake oid use oidin/oidout not int4in/int4out in pg_type.h, make oid use\n\tunsigned int more reliably, 
pg_atoi()\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 08:55:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update Open Items list"
},
{
"msg_contents": "> create table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\n\nI see the \"AA\" problem, and was just working in that code. Will look\nat it.\n\n> SELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\n\nI'm almost certain that this is from changes introduced by Stefan's\nEXCEPT patches. There are some rules in gram.y which handle multiple\nstatements, and he commented them out to get rid of shift/reduce\nconflicts he introduced :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 12 May 1999 14:19:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update Open Items list"
},
{
"msg_contents": "> CREATE VIEW ignores DISTINCT?\n\n Will not be fixed in v6.5. I'll add an elog(ERROR, ...) to\n reject those view definitions during CREATE VIEW.\n\n> CREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n> FROM t1 GROUP BY b; SELECT count FROM v1; fails\n\n Fixed.\n\n SELECT b FROM v1; still fails with\n\n ERROR: union_planner: query is marked hasAggs, but I don't see any\n\n Must adjust hasAggs as final step in rewriter.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 17:29:14 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update Open Items list"
},
{
"msg_contents": "> \n> > CREATE VIEW ignores DISTINCT?\n> \n> Will not be fixed in v6.5. I'll add an elog(ERROR, ...) to\n> reject those view definitions during CREATE VIEW.\n\n\tDone.\n\n> \n> > CREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n> > FROM t1 GROUP BY b; SELECT count FROM v1; fails\n> \n> Fixed.\n> \n> SELECT b FROM v1; still fails with\n> \n> ERROR: union_planner: query is marked hasAggs, but I don't see any\n> \n> Must adjust hasAggs as final step in rewriter.\n\n\tDone.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Wed, 12 May 1999 19:04:00 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update Open Items list"
}
]
[
{
"msg_contents": "Grr, sorry this is so late - I'd changed MUAs and didn't notice that my\nposts where BOUNCING,\nnot propogating.\n\nTome Lane writes:\n> You didn't say which version you are using, but 6.5-current returns a\n> more helpful error message:\n> \n> ERROR: CREATE TABLE/SERIAL implicit sequence name must be less than 32 charac\nters\n> Sum of lengths of 'globalafvigelse' and 'globalafvigelse' must be less than 27\n\n\nHmm, this is rather user unfriendly (but at least an accurate error\nmessage.) It's also not compatible, I think, with other RDBMS that allow\n'serial' types, is it? Any problem with truncating the field name? I.e.\nare there are places in the code that build this sequence name,\nrather than looking it up by oid or some such? Only placew I think it's\nused is in the as the default for the serial field, and there what ever\ngets constructed can be dropped in. If it's not used elsewhere, we\nshould shorten it.\n\nWell, at least, add it to the TODO list for testing - see if anything\nbreaks if we just hack it off at 27 chars. Same goes for all the\nimplicit indicies, I guess.\n\nHmm, this raises another point: problem with serial in 6.4.2 with\nMixedCase table of field names (wrapped for your email viewing\npleasure):\n\ntest=> create table \"TestTable\" (\"Field\" serial primary key, some text);\nNOTICE: CREATE TABLE will create implicit sequence TestTable_Field_seq\nfor SERIAL column TestTable.Field\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\nTestTable_pkey for table TestTable\nCREATE\ntest=> insert into \"TestTable\" (some) values ('test text');\nERROR: testtable_field_seq.nextval: sequence does not exist\ntest=> \\ds\n\nDatabase = test\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | reedstrm | TestTable_Field_seq | sequence |\n +------------------+----------------------------------+----------+\ntest=> \n\nAnybody test this on 6.5? \n\nI seem to remember it being reported many weeks ago in another context -\nah yes, the problem was using a functionname as a default which had\nmixed case in it. In that case, the standard quoting didn't seem to\nwork, either. I think it was resolved. Anyone remember?\n\nRoss (a.k.a. Mister MixedCase)\n\nP.S. my mixed case mess comes from prototyping in MS-Access, and\ntransfering to PostgreSQL. Given the number of Access Q.s that've been\nturning up, I bet we see a lot of this.\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 12 May 1999 11:08:05 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG? serials and primary keys (was Re: [INTERFACES] Bug in psql?)"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Any problem with truncating the field name?\n\nI don't need to test it to see the problem with that idea:\n\ncreate table averylongtablename (\n\taverylongfieldname1 serial,\n\taverylongfieldname2 serial);\n\nWe'd need to add code to ensure uniqueness of the truncated names,\nwhich is doable but it's not a trivial change.\n\nAnother possibility is to use user-unfriendly names for the subsidiary\nobjects, like\n\tpg_serial_seq_69845873\nbut I can't say that I like that either... it's nice to be able to\nlook at a sequence and know what it's for...\n\n> Hmm, this raises another point: problem with serial in 6.4.2 with\n> MixedCase table of field names (wrapped for your email viewing\n> pleasure):\n\nYes, that was reported recently --- I believe Thomas is looking at it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 13:53:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BUG? serials and primary keys (was Re: [INTERFACES] Bug\n\tin psql?)"
},
{
"msg_contents": "> test=> create table \"TestTable\" (\"Field\" serial primary key, some text);\n> NOTICE: CREATE TABLE will create implicit sequence TestTable_Field_seq\n> for SERIAL column TestTable.Field\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n> TestTable_pkey for table TestTable\n> CREATE\n> test=> insert into \"TestTable\" (some) values ('test text');\n> ERROR: testtable_field_seq.nextval: sequence does not exist\n> test=> \\ds\n> \n> Database = test\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | reedstrm | TestTable_Field_seq | sequence |\n> +------------------+----------------------------------+----------+\n> test=> \n> \n> Anybody test this on 6.5? \n\nWe are working on a fix for the case thing right now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 22:23:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BUG? serials and primary keys (was Re: [INTERFACES] Bug\n\tin psql?)"
}
]
[
{
"msg_contents": "can anyone help me hack into fairmont MN high schools server???\n\n\nthanx CRS\n\n\n",
"msg_date": "Wed, 12 May 1999 16:58:21 -0000",
"msg_from": "\"craig\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "help"
}
]
[
{
"msg_contents": " Planner guru's please!\n\n I wonder what makes the difference between WHERE and HAVING\n that causes HAVING to accept aggregates while WHERE doesn't.\n It would be extremely nice if it's possible to teach WHERE\n how to handle aggregates properly. Having to push them into\n subselects during rewrite if a views aggregate column appears\n in the WHERE clause is a total mess.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 19:51:18 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "WHERE vs HAVING"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I wonder what makes the difference between WHERE and HAVING\n> that causes HAVING to accept aggregates while WHERE doesn't.\n\nHuh? It seems inherent in the definition to me: WHERE is a filter\napplied to individual tuples before any aggregation stage can happen,\nthus it makes no sense for it to include aggregate functions\n(except in explicit subselects, which create a new context for the\naggregation to occur in). HAVING applies to groups of tuples after\naggregation, so aggregate functions can meaningfully be applied to\nthose groups.\n\n> It would be extremely nice if it's possible to teach WHERE\n> how to handle aggregates properly. Having to push them into\n> subselects during rewrite if a views aggregate column appears\n> in the WHERE clause is a total mess.\n\nExplain to me what you think it should mean. It sounds to me like\nyou are trying to have the rewrite system change an incorrect query\ninto a valid one. Doesn't strike me as a good idea; does the user\nknow what he's going to get?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 14:06:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] WHERE vs HAVING "
},
{
"msg_contents": "> Planner guru's please!\n> \n> I wonder what makes the difference between WHERE and HAVING\n> that causes HAVING to accept aggregates while WHERE doesn't.\n> It would be extremely nice if it's possible to teach WHERE\n> how to handle aggregates properly. Having to push them into\n> subselects during rewrite if a views aggregate column appears\n> in the WHERE clause is a total mess.\n\nSQL requires the restriction.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:40:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] WHERE vs HAVING"
}
]
[
{
"msg_contents": "Here's a small patch to cause pg_dump to emit the\nscale and precision for NUMERIC type column defs.\n\nKeith.",
"msg_date": "Wed, 12 May 1999 20:27:21 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patch to pg_dump for NUMERIC."
},
{
"msg_contents": "\nApplied.\n\n\n> Here's a small patch to cause pg_dump to emit the\n> scale and precision for NUMERIC type column defs.\n> \n> Keith.\nContent-Description: pg_dump.c.patch\n\n> *** src/bin/pg_dump/pg_dump.c.orig\tMon May 10 22:19:09 1999\n> --- src/bin/pg_dump/pg_dump.c\tWed May 12 11:26:35 1999\n> ***************\n> *** 2671,2676 ****\n> --- 2671,2680 ----\n> \tchar\t\t\t**parentRels;\t\t\t/* list of names of parent relations */\n> \tint\t\t\tnumParents;\n> \tint\t\t\tactual_atts;\t\t\t/* number of attrs in this CREATE statment */\n> + \tint32\t\t\ttmp_typmod;\n> + \tint\t\t\tprecision;\n> + \tint\t\t\tscale;\n> + \n> \n> \t/* First - dump SEQUENCEs */\n> \tif (tablename)\n> ***************\n> *** 2747,2752 ****\n> --- 2751,2768 ----\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tsprintf(q + strlen(q), \"(%d)\",\n> \t\t\t\t\t\t\t\t\ttblinfo[i].atttypmod[j] - VARHDRSZ);\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t}\n> + \t\t\t\t\telse if (!strcmp(tblinfo[i].typnames[j], \"numeric\"))\n> + \t\t\t\t\t{\n> + \t\t\t\t\t\tsprintf(q + strlen(q), \"numeric\");\n> + \t\t\t\t\t\tif (tblinfo[i].atttypmod[j] != -1)\n> + \t\t\t\t\t\t{\n> + \t\t\t\t\t\t\ttmp_typmod = tblinfo[i].atttypmod[j] - VARHDRSZ;\n> + \t\t\t\t\t\t\tprecision = (tmp_typmod >> 16) & 0xffff;\n> + \t\t\t\t\t\t\tscale = tmp_typmod & 0xffff;\n> + \t\t\t\t\t\t\tsprintf(q + strlen(q), \"(%d,%d)\",\n> + \t\t\t\t\t\t\t\t\t\tprecision, scale);\n> \t\t\t\t\t\t}\n> \t\t\t\t\t}\n> \t\t\t\t\t/* char is an internal single-byte data type;\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 22:34:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch to pg_dump for NUMERIC."
}
]
[
{
"msg_contents": "Hi,\n\nIt's me again .... I've compiled up the patch and doing some testing on\nthe side before we install it on our live system, but something has\nhappened in the past few days with our live system which is interesting.\n\nBasically, on Saturday I started up another process which does more\nqueries on the database, and means that it gets hammered even harder than\nbefore. Now, instead of getting one or maybe two postgres failures per\nday, we have been getting three to four. We just had two in the morning\nalready.\n\nSo this is interesting because I have a good test case now, and also\nsupports Tom Lanes comments about exceeding the 256 locks thing and\ncausing problems to occur.\n\nI think by adding this new process to my system, I've caused the chance of\nexceeding this 256 value to be increased, making the system more\nunreliable. Note that this process is read only, and there are no LOCK\nstatements in it, but it still allocates read locks I guess, so it would\nbe causing this to happen.\n\nOk, well we're testing still, but I'll have some info about when we put it\nlive in the next few days.\n\nThanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Thu, 13 May 1999 11:17:57 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Information about backend waiting"
}
]
[
{
"msg_contents": "just subj\n\nVadim\n",
"msg_date": "Thu, 13 May 1999 11:48:00 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "test"
}
]
[
{
"msg_contents": "Hi, all.\n\nI am trying to get the latest 6.5 source to test it under Digital Unix,\nusing cvs. It works, but it's sssllloooowwww. Are there any CVS mirrors\n(preferably in Europe)? I've looked at the web page, but didn't find any.\n\nTIA,\n\n\tPedro.\n\n-- \n-------------------------------------------------------------------\nPedro Jos� Lobo Perea Tel: +34 91 336 78 19\nCentro de C�lculo Fax: +34 91 331 92 29\nE.U.I.T. Telecomunicaci�n e-mail: [email protected]\nUniversidad Polit�cnica de Madrid\nCtra. de Valencia, Km. 7 E-28031 Madrid - Espa�a / Spain\n\n",
"msg_date": "Thu, 13 May 1999 11:47:28 +0200 (MET DST)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS mirrors?"
},
{
"msg_contents": "\nOn 13-May-99 Pedro J. Lobo wrote:\n> Hi, all.\n> \n> I am trying to get the latest 6.5 source to test it under Digital Unix,\n> using cvs. It works, but it's sssllloooowwww. Are there any CVS mirrors\n> (preferably in Europe)? I've looked at the web page, but didn't find any.\n\nProbably, I can set-up cvs mirror to (I have cvs-pserver installed). \nPlease, somebody - how match HDD space I need to do it?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Thu, 13 May 1999 15:24:54 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] CVS mirrors?"
},
{
"msg_contents": "\nCurrently, 92Meg ... not sure of rate of growth, but it hasn't been\nsomething I've \"noticed\" (ie. haven't had any disk space increases there\nsince we started using it) ...\n\nOn Thu, 13 May 1999, Dmitry Samersoff wrote:\n\n> \n> On 13-May-99 Pedro J. Lobo wrote:\n> > Hi, all.\n> > \n> > I am trying to get the latest 6.5 source to test it under Digital Unix,\n> > using cvs. It works, but it's sssllloooowwww. Are there any CVS mirrors\n> > (preferably in Europe)? I've looked at the web page, but didn't find any.\n> \n> Probably, I can set-up cvs mirror to (I have cvs-pserver installed). \n> Please, somebody - how match HDD space I need to do it?\n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * there will come soft rains ...\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 13 May 1999 09:33:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] CVS mirrors?"
}
]
[
{
"msg_contents": "WHile testing 6.5 cvs to see what's the progress with capability\nof Postgres to work with big joins I get following error messages:\n\nselect a.a,at1.a as t1,at2.a as t2,at3.a as t3,at4.a as t4,at5.a as t5,a\nt6.a as t6,at7.a as t7,at8.a as t8,at9.a as t9,at10.a as t10 \nfrom t0 a ,t1 at1,t2 at2,t3 at3,t4 at4,t5 at5,t6 at6,t7 at7,t8 at8,t9 at9,\nt10 at10 where at1.a_id = a.a_id and at2.a_id=a.a_id and at3.a_id=a.a_id and \nat4.a_id=a.a_id and at5.a_id=a.a_id and at6.a_id=a.a_id and at7.a_id=a.a_id \nand at8.a_id=a.a_id and at9.a_id=a.a_id and at10.a_id=a.a_id ;\n\nBackend message type 0x44 arrived while idle\nBackend message type 0x44 arrived while idle\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\nPostgres+psql eaten all the memory+swap.\nAre these messages ok in such a situation ?\n\n\n\tOleg\n\nPS.\n\nPostgres still can't serve large joins :-(\n\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 13 May 1999 16:17:13 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend message type 0x44 arrived while idle"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> WHile testing 6.5 cvs to see what's the progress with capability\n> of Postgres to work with big joins I get following error messages:\n\nI think there are still some nasty bugs in the GEQO planner. (I assume\nyou have the GEQO threshold set to less than the number of tables in\nyour query?) Bruce did a lot of good cleanup work on the main planner\nbut GEQO is mostly untouched. I've been hoping to poke at it some more\nbefore 6.5 release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 1999 10:16:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend message type 0x44 arrived while idle "
},
{
"msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > WHile testing 6.5 cvs to see what's the progress with capability\n> > of Postgres to work with big joins I get following error messages:\n> \n> I think there are still some nasty bugs in the GEQO planner. (I assume\n> you have the GEQO threshold set to less than the number of tables in\n> your query?) Bruce did a lot of good cleanup work on the main planner\n> but GEQO is mostly untouched. I've been hoping to poke at it some more\n> before 6.5 release.\n\nI hope I didn't break it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 May 1999 12:08:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend message type 0x44 arrived while idle"
},
{
"msg_contents": "I wrote:\n> Oleg Bartunov <[email protected]> writes:\n>> WHile testing 6.5 cvs to see what's the progress with capability\n>> of Postgres to work with big joins I get following error messages:\n\n> I think there are still some nasty bugs in the GEQO planner.\n\nI have just committed some changes that fix bugs in the GEQO planner\nand limit its memory usage. It should now be possible to use GEQO even\nfor queries that join a very large number of tables --- at least from\nthe standpoint of not running out of memory during planning. (It can\nstill take a while :-(. I think that the default GEQO parameter\nsettings may be configured to use too many generations, but haven't\npoked at this yet.)\n\nI have observed that the regular optimizer requires about 50MB to plan\nsome ten-way joins, and can exceed my system's 128MB process data limit\non some eleven-way joins. We currently have the GEQO threshold set at\n11, which prevents the latter case by default --- but 50MB is a lot.\nI wonder whether we shouldn't back the GEQO threshold off to 10.\n(When I suggested setting it to 11, I was only looking at speed relative\nto GEQO, not memory usage. There is now a *big* difference in memory\nusage...) Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 May 1999 20:57:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)"
},
{
"msg_contents": "> I have observed that the regular optimizer requires about 50MB to plan\n> some ten-way joins, and can exceed my system's 128MB process data limit\n> on some eleven-way joins. We currently have the GEQO threshold set at\n> 11, which prevents the latter case by default --- but 50MB is a lot.\n> I wonder whether we shouldn't back the GEQO threshold off to 10.\n> (When I suggested setting it to 11, I was only looking at speed relative\n> to GEQO, not memory usage. There is now a *big* difference in memory\n> usage...) Comments?\n\nYou chose 11 by comparing GEQO with non-GEQO. I think you will find\nthat with your improved GEQO, GEQO is faster for smaller number of\njoins, preventing the memory problem. Can you check the speeds again?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 21:17:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "On Sun, 16 May 1999, Bruce Momjian wrote:\n\n> Date: Sun, 16 May 1999 21:17:30 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)\n> \n> > I have observed that the regular optimizer requires about 50MB to plan\n> > some ten-way joins, and can exceed my system's 128MB process data limit\n> > on some eleven-way joins. We currently have the GEQO threshold set at\n> > 11, which prevents the latter case by default --- but 50MB is a lot.\n> > I wonder whether we shouldn't back the GEQO threshold off to 10.\n> > (When I suggested setting it to 11, I was only looking at speed relative\n> > to GEQO, not memory usage. There is now a *big* difference in memory\n> > usage...) Comments?\n> \n> You chose 11 by comparing GEQO with non-GEQO. I think you will find\n> that with your improved GEQO, GEQO is faster for smaller number of\n> joins, preventing the memory problem. Can you check the speeds again?\n> \n\nI confirm big join with 11 tables doesn't eats all memory+swap on\nmy Linux box as before and it runs *forever* :-). It took already\n18 minutes of CPU (P200, 64Mb) ! Will wait. \n\n 8438 postgres 12 0 11104 3736 2620 R 0 98.6 5.9 18:16 postmaster\n\nThis query doesn't use (expicitly) GEQO\n\nselect t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id ;\n\nRegards,\n\n\tOleg\n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 10:11:28 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I confirm big join with 11 tables doesn't eats all memory+swap on\n> my Linux box as before and it runs *forever* :-). It took already\n> 18 minutes of CPU (P200, 64Mb) ! Will wait. \n\n18 minutes??? It takes barely over a minute on my aging 75MHz HP-PA\nbox. (Practically all of which is planning time, since there are only\n10 tuples to join... or are you doing this on a realistically sized\nset of tables now?)\n\n> This query doesn't use (expicitly) GEQO\n\n> select t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n> from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n> where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id ;\n\nNo, but since there are 11 tables mentioned, it will be sent to the GEQO\noptimizer anyway with the default GEQO threshold of 11...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 09:44:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "On Mon, 17 May 1999, Tom Lane wrote:\n\n> Date: Mon, 17 May 1999 09:44:18 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > I confirm big join with 11 tables doesn't eats all memory+swap on\n> > my Linux box as before and it runs *forever* :-). It took already\n> > 18 minutes of CPU (P200, 64Mb) ! Will wait. \n> \n> 18 minutes??? It takes barely over a minute on my aging 75MHz HP-PA\n> box. (Practically all of which is planning time, since there are only\n> 10 tuples to join... or are you doing this on a realistically sized\n> set of tables now?)\n> \n\nOops,\n\nI found the problem. I modified my test script to add 'vacuum analyze'\nafter creating test data and it works really fast ! Great !\nNow I'm wondering why do I need vacuum analyze after creating test data\nand indices ? What's the state of discussion in hackers ?\n\n\tRegards,\n\t\tOleg\n\n> > This query doesn't use (expicitly) GEQO\n> \n> > select t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n> > from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n> > where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id ;\n> \n> No, but since there are 11 tables mentioned, it will be sent to the GEQO\n> optimizer anyway with the default GEQO threshold of 11...\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 18:00:27 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "Tom,\n\nI was so happy with the problem solved so I decided to play with more joins :-)\nReally, queries were processed very quickly but at 14 tables backend died :\n\nCOPY t13 FROM STDIN USING DELIMITERS '|';\nvacuum analyze;\nVACUUM\n \nselect t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10,t11.a as t11,t12.a as t12,t13.a as t13\n from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13\n where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id and t11.a_id=t0.a_\nt11_id and t12.a_id=t0.a_t12_id and t13.a_id=t0.a_t13_id ;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\n\nTom, could you try my script at your machine ?\nI attached the script. You need perl to run it.\n\nmkjoindata.pl | psql test\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Mon, 17 May 1999, Tom Lane wrote:\n\n> Date: Mon, 17 May 1999 09:44:18 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > I confirm big join with 11 tables doesn't eats all memory+swap on\n> > my Linux box as before and it runs *forever* :-). It took already\n> > 18 minutes of CPU (P200, 64Mb) ! Will wait. \n> \n> 18 minutes??? It takes barely over a minute on my aging 75MHz HP-PA\n> box. (Practically all of which is planning time, since there are only\n> 10 tuples to join... or are you doing this on a realistically sized\n> set of tables now?)\n> \n> > This query doesn't use (expicitly) GEQO\n> \n> > select t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n> > from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n> > where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id ;\n> \n> No, but since there are 11 tables mentioned, it will be sent to the GEQO\n> optimizer anyway with the default GEQO threshold of 11...\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Mon, 17 May 1999 18:08:45 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "Oops,\n\nit seems that was my fault, I didn't specified @nitems (sizes of tables)\nfor all tables. Now it works fine.\nTom, in case of my fault, why did postgres die ?\n\n\tRegards,\n\n\t\tOleg\n\n\n\nOn Mon, 17 May 1999, Oleg Bartunov wrote:\n\n> Date: Mon, 17 May 1999 18:08:45 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) \n> \n> Tom,\n> \n> I was so happy with the problem solved so I decided to play with more joins :-)\n> Really, queries were processed very quickly but at 14 tables backend died :\n> \n> COPY t13 FROM STDIN USING DELIMITERS '|';\n> vacuum analyze;\n> VACUUM\n> \n> select t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10,t11.a as t11,t12.a as t12,t13.a as t13\n> from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13\n> where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id and t11.a_id=t0.\na_\n> t11_id and t12.a_id=t0.a_t12_id and t13.a_id=t0.a_t13_id ;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible. Terminating.\n> \n> \n> Tom, could you try my script at your machine ?\n> I attached the script. You need perl to run it.\n> \n> mkjoindata.pl | psql test\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> \n> On Mon, 17 May 1999, Tom Lane wrote:\n> \n> > Date: Mon, 17 May 1999 09:44:18 -0400\n> > From: Tom Lane <[email protected]>\n> > To: Oleg Bartunov <[email protected]>\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) \n> > \n> > Oleg Bartunov <[email protected]> writes:\n> > > I confirm big join with 11 tables doesn't eats all memory+swap on\n> > > my Linux box as before and it runs *forever* :-). It took already\n> > > 18 minutes of CPU (P200, 64Mb) ! Will wait. \n> > \n> > 18 minutes??? It takes barely over a minute on my aging 75MHz HP-PA\n> > box. (Practically all of which is planning time, since there are only\n> > 10 tuples to join... 
or are you doing this on a realistically sized\n> > set of tables now?)\n> > \n> > > This query doesn't use (expicitly) GEQO\n> > \n> > > select t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n> > > from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n> > > where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_t7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id ;\n> > \n> > No, but since there are 11 tables mentioned, it will be sent to the GEQO\n> > optimizer anyway with the default GEQO threshold of 11...\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 18:25:07 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I found the problem. I modified my test script to add 'vacuum analyze'\n> after creating test data and it works really fast ! Great !\n> Now I'm wondering why do I need vacuum analyze after creating test data\n> and indices ?\n\nVACUUM ANALYZE would create pg_statistics entries for the tables,\nwhich'd allow the optimizer to make better estimates of restriction\nand join selectivities. I expect that it changes the plan being used;\nwhat does EXPLAIN say with and without the analyze?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 10:49:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
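A minimal sketch of Tom's EXPLAIN suggestion, using the test tables from Oleg's script (the exact plans printed will of course vary with the data):

EXPLAIN SELECT t0.a, t1.a AS t1
FROM t0, t1
WHERE t1.a_id = t0.a_t1_id;   -- plan chosen with only default statistics

VACUUM ANALYZE;               -- populates the statistics the optimizer uses

EXPLAIN SELECT t0.a, t1.a AS t1
FROM t0, t1
WHERE t1.a_id = t0.a_t1_id;   -- plan chosen with real selectivity estimates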
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> it seems that was my fault, I didn't specified @nitems (sizes of tables)\n> for all tables. Now it works fine.\n> Tom, in case of my fault, why did postgres die ?\n\nI don't know --- I don't see it here. I just ran your script as given,\nand it worked. (It produced zero rows of output, since the missing\nnitems values meant that no data was loaded into the last few tables ...\nbut there was no backend crash.)\n\nIs it crashing because of running out of memory, or something else?\nCan you provide a backtrace?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 11:24:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "On Mon, 17 May 1999, Tom Lane wrote:\n\n> Date: Mon, 17 May 1999 11:24:31 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > it seems that was my fault, I didn't specified @nitems (sizes of tables)\n> > for all tables. Now it works fine.\n> > Tom, in case of my fault, why did postgres die ?\n> \n> I don't know --- I don't see it here. I just ran your script as given,\n> and it worked. (It produced zero rows of output, since the missing\n> nitems values meant that no data was loaded into the last few tables ...\n> but there was no backend crash.)\n> \n> Is it crashing because of running out of memory, or something else?\n\nNo, memory is fine. It just dies.\n\n> Can you provide a backtrace?\n\nWill try to reproduce crash,. How do I can debug psql ?\n\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 19:38:14 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> Can you provide a backtrace?\n\n> Will try to reproduce crash,. How do I can debug psql ?\n\nThere should be a core file left in the database subdirectory, eg\n\n\t/usr/local/pgsql/data/base/DB/core\n\nwhere DB represents the name of the database you used.\n\nAs the postgres user, do this (with appropriate pathname changes of course)\n\n\tgdb /usr/local/pgsql/bin/postgres /usr/local/pgsql/data/base/DB/core\n\nand when you get the (gdb) prompt, enter \"bt\" for backtrace. You should\nget a few dozen lines of printout, more or less like this:\n\n(gdb) bt\n#0 AllocSetAlloc (set=0x40254600, size=1076446936) at aset.c:267\n#1 0x169314 in PortalHeapMemoryAlloc (this=0x40254600, size=36)\n at portalmem.c:264\n#2 0x168bb4 in MemoryContextAlloc (context=0x4007d940, size=36) at mcxt.c:230\n#3 0xe4d88 in newNode (size=36, tag=T_Resdom) at nodes.c:41\n#4 0xea92c in makeResdom (resno=17920, restype=23, restypmod=-1, resname=0x0,\n reskey=0, reskeyop=0, resjunk=0) at makefuncs.c:102\n#5 0x101448 in create_tl_element (var=0x402402e0, resdomno=36) at tlist.c:135\n#6 0xf689c in new_join_tlist (tlist=0x40254600, first_resdomno=36)\n at joinrels.c:286\n ...\n\nThe \"q\" command will get you out of gdb after you've copied and pasted\nthis info.\n\nBTW, if you did not build the backend with \"-g\" included in CFLAGS, you\nwill get a much less complete backtrace ... but it may still tell us\nsomething of use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 11:52:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Obtaining a backtrace (was Re: [HACKERS] GEQO optimizer)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I have observed that the regular optimizer requires about 50MB to plan\n>> some ten-way joins, and can exceed my system's 128MB process data limit\n>> on some eleven-way joins. We currently have the GEQO threshold set at\n>> 11, which prevents the latter case by default --- but 50MB is a lot.\n>> I wonder whether we shouldn't back the GEQO threshold off to 10.\n>> (When I suggested setting it to 11, I was only looking at speed relative\n>> to GEQO, not memory usage. There is now a *big* difference in memory\n>> usage...) Comments?\n\n> You chose 11 by comparing GEQO with non-GEQO. I think you will find\n> that with your improved GEQO, GEQO is faster for smaller number of\n> joins, preventing the memory problem. Can you check the speeds again?\n\nBruce, I have rerun a couple of tests and am getting numbers like these:\n\n\t\t\t# tables joined\n\n\t\t...\t10\t11\t...\n\nSTD OPTIMIZER\t\t24\t115\nGEQO\t\t\t45\t55\n\nThis is after tweaking the GEQO parameters to improve speed slightly\nin the default case. (Setting EFFORT=LOW reduces the 11-way plan time\nto about 40 sec, setting EFFORT=HIGH makes it about 70.)\n\nThe breakpoint for speed is still clearly at GEQO threshold 11.\n*However*, the regular optimizer uses close to 120MB of memory to\nplan these 11-way joins, and that's excessive (especially since that's\nnot even counting the space that will be used for execution...).\nUntil we can do something about reclaiming space more effectively,\nI recommend reducing the default GEQO threshold to 10.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 May 1999 19:48:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
},
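For reference, a sketch of applying the recommended threshold per-session instead of changing the compiled-in default; the 'ON=n' form is assumed to match the SET GEQO syntax of this release:

SET GEQO TO 'ON=10';   -- hand joins of 10 or more tables to the genetic optimizer
SET GEQO TO 'OFF';     -- or force the standard optimizer regardless of join size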
{
"msg_contents": "> > You chose 11 by comparing GEQO with non-GEQO. I think you will find\n> > that with your improved GEQO, GEQO is faster for smaller number of\n> > joins, preventing the memory problem. Can you check the speeds again?\n> \n> Bruce, I have rerun a couple of tests and am getting numbers like these:\n> \n> \t\t\t# tables joined\n> \n> \t\t...\t10\t11\t...\n> \n> STD OPTIMIZER\t\t24\t115\n> GEQO\t\t\t45\t55\n> \n> This is after tweaking the GEQO parameters to improve speed slightly\n> in the default case. (Setting EFFORT=LOW reduces the 11-way plan time\n> to about 40 sec, setting EFFORT=HIGH makes it about 70.)\n> \n> The breakpoint for speed is still clearly at GEQO threshold 11.\n> *However*, the regular optimizer uses close to 120MB of memory to\n> plan these 11-way joins, and that's excessive (especially since that's\n> not even counting the space that will be used for execution...).\n> Until we can do something about reclaiming space more effectively,\n> I recommend reducing the default GEQO threshold to 10.\n\nAgreed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 May 1999 21:06:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44\n\tarrived while idle)"
}
] |
[
{
"msg_contents": "\nGPL evil, BSD so-so...\n\n\t\thttp://www.daemonnews.org/199905/gpl.html\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 13 May 1999 10:38:36 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Someone finally put it into print..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> GPL evil, BSD so-so...\n> \t\thttp://www.daemonnews.org/199905/gpl.html\n\nTalk about a one-sided, inflammatory presentation ... sheesh.\n\nI happen to like BSD better myself, but calling GPL \"Communistic\"\nis a few steps beyond reasonable discourse.\n\nThe real meat of the issue is this: if you give your free software away\nunder a BSD-style license, someone else can use it as a component of a\nnon-free, non-open-source product. If you give your software away under\na GPL-style license, it can only be used as a component of more free,\nopen-source software. Either of these might be a reasonable goal\ndepending on your purposes. I read Michael Maxwell's attack as saying\n\"it's not good enough for you to give code away for free, I demand that\nyou allow me to make money off your work\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 1999 10:34:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Someone finally put it into print... "
},
{
"msg_contents": "On Thu, 13 May 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > GPL evil, BSD so-so...\n> > \t\thttp://www.daemonnews.org/199905/gpl.html\n> \n> Talk about a one-sided, inflammatory presentation ... sheesh.\n> \n> I happen to like BSD better myself, but calling GPL \"Communistic\"\n> is a few steps beyond reasonable discourse.\n> \n> The real meat of the issue is this: if you give your free software away\n> under a BSD-style license, someone else can use it as a component of a\n> non-free, non-open-source product. If you give your software away under\n> a GPL-style license, it can only be used as a component of more free,\n> open-source software. Either of these might be a reasonable goal\n> depending on your purposes. I read Michael Maxwell's attack as saying\n> \"it's not good enough for you to give code away for free, I demand that\n> you allow me to make money off your work\".\n\nMy personal opinion on the two licensing schemes is that they both take\nthe 'far extreme' approach...neither of them is perfect and if one could\nsomeone come up with a \"middle ground\" license, that would be great. Each\nof them has their good points, but I still think the BSD one is the\n\"lesser of two evils\"...\n\nBSD gives too much freedom...GPL doesn't give enough...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 13 May 1999 11:42:25 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Someone finally put it into print... "
},
{
"msg_contents": "On Thu, 13 May 1999, The Hermit Hacker wrote:\n\n> \n> GPL evil, BSD so-so...\n> \n> \t\thttp://www.daemonnews.org/199905/gpl.html\n\nI refuse to argue about this one, because it inevitably ends up being a\nbig war with no-one clearly right. Please don't put flame-bait up like\nthis!\n\nAlso, this guy needs to do more research. There are specifically stated\nexceptions for tools like flex/bison/yacc/gcc/etc.\n\nTaral\n\n",
"msg_date": "Thu, 13 May 1999 14:50:23 -0500 (CDT)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Someone finally put it into print..."
},
{
"msg_contents": "The Hermit Hacker wrote:\n> My personal opinion on the two licensing schemes is that they both take\n> the 'far extreme' approach...neither of them is perfect and if one could\n> someone come up with a \"middle ground\" license, that would be great. \n\nI'd love your feedback on a license / software ownership\nmodel that I've been working on. It's very rough\nschetch... http://distributedcopyright.org\n\nThanks!\n\nClark\n\n\nP.S. There is a discussion list for this idea on \nthe web site, for those interested in talking\nabout it more.\n",
"msg_date": "Thu, 13 May 1999 16:58:50 -0400",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Distributed Copyright? (Was: Re: [HACKERS] Someone finally put it\n\tinto print...)"
}
] |
[
{
"msg_contents": "After dumping (by pg_dump) and restoring views becomes a tables\n\nHere is a simple scenario:\n1. createdb tview\n\n2. create table t1 (a int4, b int4);\n create view v1 as select a from t1;\n\n3. pg_dump -z tview > tview.dump\n4. destroydb tview\n5. psql -e tview < tview.dump\n............................\nQUERY: COPY \"t1\" FROM stdin;\nCREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\nQUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\nERROR: parser: parse error at or near \"do\"\nEOF\n\n6. psql tview\n\ntview=> \\dt\nDatabase = tview\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | megera | t1 | table |\n | megera | v1 | table |\n +------------------+----------------------------------+----------+\n\ntview=>\n\n view t1 now becomes table v1 !\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 13 May 1999 19:46:50 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5 cvs: views doesn't survives after pg_dump"
},
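A hedged sketch of a manual workaround for the dump above: the parse error comes from the spurious empty WHERE in the emitted rule, so hand-editing the dump to drop that keyword should let the _RET rule (and thus the view) be recreated:

-- as emitted by pg_dump (fails at "do"):
-- CREATE RULE "_RETv1" AS ON SELECT TO "v1" WHERE DO INSTEAD SELECT "a" FROM "t1";
-- hand-edited form:
CREATE RULE "_RETv1" AS ON SELECT TO "v1" DO INSTEAD SELECT "a" FROM "t1";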
{
"msg_contents": "\nI assume this is fixed.\n\n\n> After dumping (by pg_dump) and restoring views becomes a tables\n> \n> Here is a simple scenario:\n> 1. createdb tview\n> \n> 2. create table t1 (a int4, b int4);\n> create view v1 as select a from t1;\n> \n> 3. pg_dump -z tview > tview.dump\n> 4. destroydb tview\n> 5. psql -e tview < tview.dump\n> ............................\n> QUERY: COPY \"t1\" FROM stdin;\n> CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\n> QUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\n> ERROR: parser: parse error at or near \"do\"\n> EOF\n> \n> 6. psql tview\n> \n> tview=> \\dt\n> Database = tview\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | t1 | table |\n> | megera | v1 | table |\n> +------------------+----------------------------------+----------+\n> \n> tview=>\n> \n> view t1 now becomes table v1 !\n> \n> \tRegards,\n> \n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:43:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump"
}
] |
[
{
"msg_contents": "Well I got the patch file from Tatsuo Ishii (thanks!!!) which includes the\nNetBSD/m68k fixes by NAKAJIMA Mutsuki (double thanks!!!). I applied it to\nthe postgres 6.4.2 distribution and it mostly worked.\n\nCaviats:\n\n1) It won't compile with kerberos 4 enabled. Yes, I loaded the secr.tar.gz\ndistribution, but there are some serious problems with kerberos on my\nmachine so this may not be Postgres' fault. (Yes I adjusted the various\nnames/paths for NetBSD differences.)\n\n2) The following four regression tests fail:\ngeometry\ndatetime\nhorology\ninet\n\nGeometry appears superficially to be the usual roundoff problems. Inet\nlooks superficially to me like the MacBSD output may be more correct, but I\ndon't know what's going on well enough to be sure. Horology is likely to\nfail due to some obscure dates which are tested, but I haven't verified if\nthat's the only problem in this case.\n\nThe datetime failure looks to be serious. 'now'::datetime -\n'current'::datetime yields more than 200 days!\n\nIf anyone (Tom?) wants an account on a Quadra 840av to investigate the\nproblem further let me know. The apparent speed of the beast is about half\nof my SPARCstation 5 or around 1/4 of a beefed up Ultra 5 so it's fast\nenough not to kill you. Anyone who can get real work done on an SE/30\n(NAKAJIMA Mutsuki) has my respect for their patience.\n__________________________________________________________\nThe opinions expressed in this message are mine,\nnot those of Caltech, JPL, NASA, or the US Government.\[email protected], or [email protected]\n",
"msg_date": "Thu, 13 May 1999 15:25:13 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Report on NetBSD/mac port of Postgres 6.4.2"
},
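A quick sanity check for the datetime symptom, as a sketch; on a correctly built and configured system the difference should be essentially zero rather than 200+ days:

SELECT 'now'::datetime - 'current'::datetime;   -- expect roughly zero, not hundreds of days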
{
"msg_contents": "> The datetime failure looks to be serious. 'now'::datetime -\n> 'current'::datetime yields more than 200 days!\n\nI've seen similar symptoms with machines that have timezone troubles, or\nmore accurately timezone support which is not mapped correctly into the\nPostgres timezone handling code. Could be that ./configure is confused.\n\n> If anyone (Tom?) wants an account on a Quadra 840av to investigate the\n> problem further let me know.\n\nHi Henry. It is possible, and if I had my druthers I'd have an account\nwith group privileges to work directly in a patched Postgres tree\n(perhaps the one you have already built). Any running servers would need\nto be shut down to allow me to fire up debugging versions. Also, again\nif possible, I would have access after-hours via one of my machines in\nyour domain. And if I can't see the problem right away (if I glance at\nit a lunch time) then I wouldn't be able to look 'til next week.\n\n - Tom\n\n-- \nThomas Lockhart\nCaltech/JPL\nInterferometry Systems and Technology\n",
"msg_date": "Thu, 13 May 1999 23:32:49 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report on NetBSD/mac port of Postgres 6.4.2"
},
{
"msg_contents": "> Well I got the patch file from Tatsuo Ishii (thanks!!!) which includes the\n> NetBSD/m68k fixes by NAKAJIMA Mutsuki (double thanks!!!). I applied it to\n> the postgres 6.4.2 distribution and it mostly worked.\n> \n> Caviats:\n> \n> 1) It won't compile with kerberos 4 enabled. Yes, I loaded the secr.tar.gz\n> distribution, but there are some serious problems with kerberos on my\n> machine so this may not be Postgres' fault. (Yes I adjusted the various\n> names/paths for NetBSD differences.)\n\nSeems kerberos support in PostgreSQL has been broken for quite\nsometime.\n\n> 2) The following four regression tests fail:\n> geometry\n> datetime\n> horology\n> inet\n> \n> Geometry appears superficially to be the usual roundoff problems. Inet\n> looks superficially to me like the MacBSD output may be more correct, but I\n> don't know what's going on well enough to be sure.\n\nThere is a known bug with inet data type in 6.4.2, that happens on\nm68k, PowerPC and Sparc as far as I know. I believe this has been\nfixed in current. If you need patches for 6.4.2, please let me know.\n\n> Horology is likely to\n> fail due to some obscure dates which are tested, but I haven't verified if\n> that's the only problem in this case.\n> \n> The datetime failure looks to be serious. 'now'::datetime -\n> 'current'::datetime yields more than 200 days!\n> \n> If anyone (Tom?) wants an account on a Quadra 840av to investigate the\n> problem further let me know. The apparent speed of the beast is about half\n> of my SPARCstation 5 or around 1/4 of a beefed up Ultra 5 so it's fast\n> enough not to kill you. Anyone who can get real work done on an SE/30\n> (NAKAJIMA Mutsuki) has my respect for their patience.\n\nI heard from Mutski that he spent more than 6 hours to compile\nPostgreSQL on his SE/30:-)\n---\nTatsuo Ishii\n",
"msg_date": "Fri, 14 May 1999 09:58:16 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Report on NetBSD/mac port of Postgres 6.4.2 "
}
] |
[
{
"msg_contents": ">Well I got the patch file from Tatsuo Ishii (thanks!!!) which includes the\n>NetBSD/m68k fixes by NAKAJIMA Mutsuki (double thanks!!!). I applied it to\n>the\n>postgres 6.4.2 distribution and it mostly worked.\n\nSorry, forgot to mention the patch is at\nftp://ftp.sra.co.jp/pub/cmd/postgres/6.4.2/patches/m68k-cq.patch\n__________________________________________________________\nThe opinions expressed in this message are mine,\nnot those of Caltech, JPL, NASA, or the US Government.\[email protected], or [email protected]\n",
"msg_date": "Thu, 13 May 1999 15:32:39 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patch for NetBSD/mac port of Postgres 6.4.2"
}
] |
[
{
"msg_contents": "I believe I've identified the main cause of the peculiar behavior we\nare seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\nparser bug.\n\nHere is the test case I'm looking at:\n\nCREATE TABLE si_tmpverifyaccountbalances (\n type int4 NOT NULL,\n memberid int4 NOT NULL,\n categoriesid int4 NOT NULL,\n amount numeric);\n\nCREATE TABLE invoicelinedetails (\n invoiceid int4,\n memberid int4,\n totshippinghandling numeric,\n invoicelinesid int4);\n\nINSERT INTO si_tmpverifyaccountbalances SELECT invoiceid+3,\nmemberid, 1, totshippinghandling FROM invoicelinedetails\nGROUP BY invoiceid+3, memberid, totshippinghandling;\n\nERROR: INSERT has more expressions than target columns\n\nThe reason this is coming out is that the matching of GROUP BY (also\nORDER BY) items to targetlist entries is fundamentally broken in this\ncontext. The GROUP BY items \"memberid\" and \"totshippinghandling\" are\nsimply unvarnished Ident nodes when they arrive at findTargetlistEntry()\nin parse_clause.c; what findTargetlistEntry() does with them is to try\nto match them against the resdom names of the existing targetlist items.\nI think that's correct behavior in the plain SELECT case (but note it\nmeans \"SELECT a AS b, b AS c GROUP BY b\" will really group by a not b\n--- is that per spec??). But it fails miserably in the INSERT/SELECT\ncase, because by the time control gets here, the targetlist items have\nbeen given resdom names *corresponding to the column names of the target\ntable*.\n\nSo, in the example at hand, \"memberid\" is matched to the correct column\nby pure luck (because it has the same name in the destination table),\nand then \"totshippinghandling\" is not recognized as one of the existing\nTLEs because it does not match any destination column name.\n\nNow, call me silly, but it seems to me that SELECT ... GROUP BY ought\nto mean the same thing no matter whether there is an INSERT in front of\nit or not, and thus that letting target column names affect the meaning\nof GROUP BY items is dead wrong. (Don't have a spec to check this with,\nhowever.)\n\nI believe the most reasonable fix for this is to postpone relabeling\nof the targetlist entries with destination column names until after\nanalysis of the SELECT's subsidiary clauses is complete. In particular,\nit should *not* be done instantly when each TLE is made, which is what\nMakeTargetEntryIdent currently does. The TLEs should have the same\nresnames as in the SELECT case until after subsidiary clause processing\nis done.\n\n(MakeTargetEntryIdent is broken anyway because it tries to associate\na destination column with every TLE, even the resjunk ones. The reason\nwe see the quoted error message in this situation is that after\nfindTargetlistEntry fails to detect that totshippinghandling is already\na TLE, it calls MakeTargetEntryIdent to make a junk TLE for\ntotshippinghandling, and then MakeTargetEntryIdent tries to find a\ntarget column to go with the junk TLE. So the revised code should only\nassign dest column names to non-junk TLEs.)\n\nI'm not really familiar enough with the parser to want to tackle this\nsize of change by myself --- Thomas, do you want to do it? I think it's\nlargely a matter of moving code around, but I'm not sure where is the\nright place for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 1999 20:01:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some progress on INSERT/SELECT/GROUP BY bugs"
},
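To make the point concrete, the same SELECT without the INSERT in front should be accepted and grouped correctly (a sketch against the tables defined above):

SELECT invoiceid+3, memberid, 1, totshippinghandling
FROM invoicelinedetails
GROUP BY invoiceid+3, memberid, totshippinghandling;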
{
"msg_contents": "> (MakeTargetEntryIdent is broken anyway because it tries to associate\n> a destination column with every TLE, even the resjunk ones. The reason\n> we see the quoted error message in this situation is that after\n> findTargetlistEntry fails to detect that totshippinghandling is already\n> a TLE, it calls MakeTargetEntryIdent to make a junk TLE for\n> totshippinghandling, and then MakeTargetEntryIdent tries to find a\n> target column to go with the junk TLE. So the revised code should only\n> assign dest column names to non-junk TLEs.)\n> \n> I'm not really familiar enough with the parser to want to tackle this\n> size of change by myself --- Thomas, do you want to do it? I think it's\n> largely a matter of moving code around, but I'm not sure where is the\n> right place for it...\n\nYes, I clearly remember the INSERT assigning target names to columns in\nthe select to match up the entries. I still am unclear which of these\nare valid SQL:\n\n\tselect a as b from test order by a\n\t\n\tselect a as b from test order by b\n\nCan we just defer the renaming until after we do group-by?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 May 1999 21:22:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
},
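Bruce's two cases as runnable statements; per Tom's description of findTargetlistEntry matching resdom names, both would order by column a in the plain-SELECT case (a reading inferred from this thread, not checked against the spec):

CREATE TABLE test (a int4, b int4);
SELECT a AS b FROM test ORDER BY a;   -- orders by column a
SELECT a AS b FROM test ORDER BY b;   -- matches the output label "b", i.e. also column a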
{
"msg_contents": "\nTom, was this done?\n\n\n> I believe I've identified the main cause of the peculiar behavior we\n> are seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\n> parser bug.\n> \n> Here is the test case I'm looking at:\n> \n> CREATE TABLE si_tmpverifyaccountbalances (\n> type int4 NOT NULL,\n> memberid int4 NOT NULL,\n> categoriesid int4 NOT NULL,\n> amount numeric);\n> \n> CREATE TABLE invoicelinedetails (\n> invoiceid int4,\n> memberid int4,\n> totshippinghandling numeric,\n> invoicelinesid int4);\n> \n> INSERT INTO si_tmpverifyaccountbalances SELECT invoiceid+3,\n> memberid, 1, totshippinghandling FROM invoicelinedetails\n> GROUP BY invoiceid+3, memberid, totshippinghandling;\n> \n> ERROR: INSERT has more expressions than target columns\n> \n> The reason this is coming out is that the matching of GROUP BY (also\n> ORDER BY) items to targetlist entries is fundamentally broken in this\n> context. The GROUP BY items \"memberid\" and \"totshippinghandling\" are\n> simply unvarnished Ident nodes when they arrive at findTargetlistEntry()\n> in parse_clause.c; what findTargetlistEntry() does with them is to try\n> to match them against the resdom names of the existing targetlist items.\n> I think that's correct behavior in the plain SELECT case (but note it\n> means \"SELECT a AS b, b AS c GROUP BY b\" will really group by a not b\n> --- is that per spec??). But it fails miserably in the INSERT/SELECT\n> case, because by the time control gets here, the targetlist items have\n> been given resdom names *corresponding to the column names of the target\n> table*.\n> \n> So, in the example at hand, \"memberid\" is matched to the correct column\n> by pure luck (because it has the same name in the destination table),\n> and then \"totshippinghandling\" is not recognized as one of the existing\n> TLEs because it does not match any destination column name.\n> \n> Now, call me silly, but it seems to me that SELECT ... GROUP BY ought\n> to mean the same thing no matter whether there is an INSERT in front of\n> it or not, and thus that letting target column names affect the meaning\n> of GROUP BY items is dead wrong. (Don't have a spec to check this with,\n> however.)\n> \n> I believe the most reasonable fix for this is to postpone relabeling\n> of the targetlist entries with destination column names until after\n> analysis of the SELECT's subsidiary clauses is complete. In particular,\n> it should *not* be done instantly when each TLE is made, which is what\n> MakeTargetEntryIdent currently does. The TLEs should have the same\n> resnames as in the SELECT case until after subsidiary clause processing\n> is done.\n> \n> (MakeTargetEntryIdent is broken anyway because it tries to associate\n> a destination column with every TLE, even the resjunk ones. The reason\n> we see the quoted error message in this situation is that after\n> findTargetlistEntry fails to detect that totshippinghandling is already\n> a TLE, it calls MakeTargetEntryIdent to make a junk TLE for\n> totshippinghandling, and then MakeTargetEntryIdent tries to find a\n> target column to go with the junk TLE. So the revised code should only\n> assign dest column names to non-junk TLEs.)\n> \n> I'm not really familiar enough with the parser to want to tackle this\n> size of change by myself --- Thomas, do you want to do it? 
I think it's\n> largely a matter of moving code around, but I'm not sure where is the\n> right place for it...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:44:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, was this done?\n\nThis is not done --- I wasn't willing to try to do such a thing by\nmyself when we were already in 6.5 beta. It's on my todo list for 6.6.\n\n6.5 fails in a different way than 6.4 did, for reasons that I don't\nrecall offhand, but the only real fix is to restructure the analyzer.\n\n\t\t\tregards, tom lane\n\n\n>> I believe I've identified the main cause of the peculiar behavior we\n>> are seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\n>> parser bug.\n>> \n>> Here is the test case I'm looking at:\n>> \n>> CREATE TABLE si_tmpverifyaccountbalances (\n>> type int4 NOT NULL,\n>> memberid int4 NOT NULL,\n>> categoriesid int4 NOT NULL,\n>> amount numeric);\n>> \n>> CREATE TABLE invoicelinedetails (\n>> invoiceid int4,\n>> memberid int4,\n>> totshippinghandling numeric,\n>> invoicelinesid int4);\n>> \n>> INSERT INTO si_tmpverifyaccountbalances SELECT invoiceid+3,\n>> memberid, 1, totshippinghandling FROM invoicelinedetails\n>> GROUP BY invoiceid+3, memberid, totshippinghandling;\n>> \n>> ERROR: INSERT has more expressions than target columns\n>> \n>> The reason this is coming out is that the matching of GROUP BY (also\n>> ORDER BY) items to targetlist entries is fundamentally broken in this\n>> context. The GROUP BY items \"memberid\" and \"totshippinghandling\" are\n>> simply unvarnished Ident nodes when they arrive at findTargetlistEntry()\n>> in parse_clause.c; what findTargetlistEntry() does with them is to try\n>> to match them against the resdom names of the existing targetlist items.\n>> I think that's correct behavior in the plain SELECT case (but note it\n>> means \"SELECT a AS b, b AS c GROUP BY b\" will really group by a not b\n>> --- is that per spec??). But it fails miserably in the INSERT/SELECT\n>> case, because by the time control gets here, the targetlist items have\n>> been given resdom names *corresponding to the column names of the target\n>> table*.\n>> \n>> So, in the example at hand, \"memberid\" is matched to the correct column\n>> by pure luck (because it has the same name in the destination table),\n>> and then \"totshippinghandling\" is not recognized as one of the existing\n>> TLEs because it does not match any destination column name.\n>> \n>> Now, call me silly, but it seems to me that SELECT ... GROUP BY ought\n>> to mean the same thing no matter whether there is an INSERT in front of\n>> it or not, and thus that letting target column names affect the meaning\n>> of GROUP BY items is dead wrong. (Don't have a spec to check this with,\n>> however.)\n>> \n>> I believe the most reasonable fix for this is to postpone relabeling\n>> of the targetlist entries with destination column names until after\n>> analysis of the SELECT's subsidiary clauses is complete. In particular,\n>> it should *not* be done instantly when each TLE is made, which is what\n>> MakeTargetEntryIdent currently does. The TLEs should have the same\n>> resnames as in the SELECT case until after subsidiary clause processing\n>> is done.\n>> \n>> (MakeTargetEntryIdent is broken anyway because it tries to associate\n>> a destination column with every TLE, even the resjunk ones. The reason\n>> we see the quoted error message in this situation is that after\n>> findTargetlistEntry fails to detect that totshippinghandling is already\n>> a TLE, it calls MakeTargetEntryIdent to make a junk TLE for\n>> totshippinghandling, and then MakeTargetEntryIdent tries to find a\n>> target column to go with the junk TLE. 
So the revised code should only\n>> assign dest column names to non-junk TLEs.)\n>> \n>> I'm not really familiar enough with the parser to want to tackle this\n>> size of change by myself --- Thomas, do you want to do it? I think it's\n>> largely a matter of moving code around, but I'm not sure where is the\n>> right place for it...\n>> \n>> regards, tom lane\n>> \n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 07 Jul 1999 09:54:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, was this done?\n> \n> This is not done --- I wasn't willing to try to do such a thing by\n> myself when we were already in 6.5 beta. It's on my todo list for 6.6.\n\n\n\nOn your list. Good. I can't possibly figure out how to describe this\nbug.\n\n\n> \n> 6.5 fails in a different way than 6.4 did, for reasons that I don't\n> recall offhand, but the only real fix is to restructure the analyzer.\n> \n> \t\t\tregards, tom lane\n> \n> \n> >> I believe I've identified the main cause of the peculiar behavior we\n> >> are seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\n> >> parser bug.\n> >> \n> >> Here is the test case I'm looking at:\n> >> \n> >> CREATE TABLE si_tmpverifyaccountbalances (\n> >> type int4 NOT NULL,\n> >> memberid int4 NOT NULL,\n> >> categoriesid int4 NOT NULL,\n> >> amount numeric);\n> >> \n> >> CREATE TABLE invoicelinedetails (\n> >> invoiceid int4,\n> >> memberid int4,\n> >> totshippinghandling numeric,\n> >> invoicelinesid int4);\n> >> \n> >> INSERT INTO si_tmpverifyaccountbalances SELECT invoiceid+3,\n> >> memberid, 1, totshippinghandling FROM invoicelinedetails\n> >> GROUP BY invoiceid+3, memberid, totshippinghandling;\n> >> \n> >> ERROR: INSERT has more expressions than target columns\n> >> \n> >> The reason this is coming out is that the matching of GROUP BY (also\n> >> ORDER BY) items to targetlist entries is fundamentally broken in this\n> >> context. The GROUP BY items \"memberid\" and \"totshippinghandling\" are\n> >> simply unvarnished Ident nodes when they arrive at findTargetlistEntry()\n> >> in parse_clause.c; what findTargetlistEntry() does with them is to try\n> >> to match them against the resdom names of the existing targetlist items.\n> >> I think that's correct behavior in the plain SELECT case (but note it\n> >> means \"SELECT a AS b, b AS c GROUP BY b\" will really group by a not b\n> >> --- is that per spec??). But it fails miserably in the INSERT/SELECT\n> >> case, because by the time control gets here, the targetlist items have\n> >> been given resdom names *corresponding to the column names of the target\n> >> table*.\n> >> \n> >> So, in the example at hand, \"memberid\" is matched to the correct column\n> >> by pure luck (because it has the same name in the destination table),\n> >> and then \"totshippinghandling\" is not recognized as one of the existing\n> >> TLEs because it does not match any destination column name.\n> >> \n> >> Now, call me silly, but it seems to me that SELECT ... GROUP BY ought\n> >> to mean the same thing no matter whether there is an INSERT in front of\n> >> it or not, and thus that letting target column names affect the meaning\n> >> of GROUP BY items is dead wrong. (Don't have a spec to check this with,\n> >> however.)\n> >> \n> >> I believe the most reasonable fix for this is to postpone relabeling\n> >> of the targetlist entries with destination column names until after\n> >> analysis of the SELECT's subsidiary clauses is complete. In particular,\n> >> it should *not* be done instantly when each TLE is made, which is what\n> >> MakeTargetEntryIdent currently does. The TLEs should have the same\n> >> resnames as in the SELECT case until after subsidiary clause processing\n> >> is done.\n> >> \n> >> (MakeTargetEntryIdent is broken anyway because it tries to associate\n> >> a destination column with every TLE, even the resjunk ones. 
The reason\n> >> we see the quoted error message in this situation is that after\n> >> findTargetlistEntry fails to detect that totshippinghandling is already\n> >> a TLE, it calls MakeTargetEntryIdent to make a junk TLE for\n> >> totshippinghandling, and then MakeTargetEntryIdent tries to find a\n> >> target column to go with the junk TLE. So the revised code should only\n> >> assign dest column names to non-junk TLEs.)\n> >> \n> >> I'm not really familiar enough with the parser to want to tackle this\n> >> size of change by myself --- Thomas, do you want to do it? I think it's\n> >> largely a matter of moving code around, but I'm not sure where is the\n> >> right place for it...\n> >> \n> >> regards, tom lane\n> >> \n> >> \n> \n> \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 12:36:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This is not done --- I wasn't willing to try to do such a thing by\n>> myself when we were already in 6.5 beta. It's on my todo list for 6.6.\n\n> On your list. Good. I can't possibly figure out how to describe this\n> bug.\n\nIf you want a TODO entry try\n * INSERT ... SELECT ... GROUP BY groups by target columns not source columns\n\nThere are other failure modes associated with this bug but that one will\ndo for the list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Jul 1999 13:31:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs "
},
{
"msg_contents": "Tom, is this fixed?\n\n\n> I believe I've identified the main cause of the peculiar behavior we\n> are seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\n> parser bug.\n> \n> Here is the test case I'm looking at:\n> \n> CREATE TABLE si_tmpverifyaccountbalances (\n> type int4 NOT NULL,\n> memberid int4 NOT NULL,\n> categoriesid int4 NOT NULL,\n> amount numeric);\n> \n> CREATE TABLE invoicelinedetails (\n> invoiceid int4,\n> memberid int4,\n> totshippinghandling numeric,\n> invoicelinesid int4);\n> \n> INSERT INTO si_tmpverifyaccountbalances SELECT invoiceid+3,\n> memberid, 1, totshippinghandling FROM invoicelinedetails\n> GROUP BY invoiceid+3, memberid, totshippinghandling;\n> \n> ERROR: INSERT has more expressions than target columns\n> \n> The reason this is coming out is that the matching of GROUP BY (also\n> ORDER BY) items to targetlist entries is fundamentally broken in this\n> context. The GROUP BY items \"memberid\" and \"totshippinghandling\" are\n> simply unvarnished Ident nodes when they arrive at findTargetlistEntry()\n> in parse_clause.c; what findTargetlistEntry() does with them is to try\n> to match them against the resdom names of the existing targetlist items.\n> I think that's correct behavior in the plain SELECT case (but note it\n> means \"SELECT a AS b, b AS c GROUP BY b\" will really group by a not b\n> --- is that per spec??). But it fails miserably in the INSERT/SELECT\n> case, because by the time control gets here, the targetlist items have\n> been given resdom names *corresponding to the column names of the target\n> table*.\n> \n> So, in the example at hand, \"memberid\" is matched to the correct column\n> by pure luck (because it has the same name in the destination table),\n> and then \"totshippinghandling\" is not recognized as one of the existing\n> TLEs because it does not match any destination column name.\n> \n> Now, call me silly, but it seems to me that SELECT ... GROUP BY ought\n> to mean the same thing no matter whether there is an INSERT in front of\n> it or not, and thus that letting target column names affect the meaning\n> of GROUP BY items is dead wrong. (Don't have a spec to check this with,\n> however.)\n> \n> I believe the most reasonable fix for this is to postpone relabeling\n> of the targetlist entries with destination column names until after\n> analysis of the SELECT's subsidiary clauses is complete. In particular,\n> it should *not* be done instantly when each TLE is made, which is what\n> MakeTargetEntryIdent currently does. The TLEs should have the same\n> resnames as in the SELECT case until after subsidiary clause processing\n> is done.\n> \n> (MakeTargetEntryIdent is broken anyway because it tries to associate\n> a destination column with every TLE, even the resjunk ones. The reason\n> we see the quoted error message in this situation is that after\n> findTargetlistEntry fails to detect that totshippinghandling is already\n> a TLE, it calls MakeTargetEntryIdent to make a junk TLE for\n> totshippinghandling, and then MakeTargetEntryIdent tries to find a\n> target column to go with the junk TLE. So the revised code should only\n> assign dest column names to non-junk TLEs.)\n> \n> I'm not really familiar enough with the parser to want to tackle this\n> size of change by myself --- Thomas, do you want to do it? 
I think it's\n> largely a matter of moving code around, but I'm not sure where is the\n> right place for it...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 17:36:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, is this fixed?\n\n>> I believe I've identified the main cause of the peculiar behavior we\n>> are seeing with INSERT ... SELECT ... GROUP/ORDER BY: it's a subtle\n>> parser bug.\n\n>> I believe the most reasonable fix for this is to postpone relabeling\n>> of the targetlist entries with destination column names until after\n>> analysis of the SELECT's subsidiary clauses is complete.\n\nYes, for 6.6. There are some other INSERT ... SELECT cases that can't\nbe fixed until we have separate targetlists for the INSERT and the\nsource SELECT --- but I did take care of this particular issue. The\ncolumn relabeling etc doesn't happen until after we've finished with\nanalyzing the SELECT subclause.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 18:07:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs "
}
] |
[
{
"msg_contents": "It looks like the problem is that the default value is getting inserted\nwithout benefit of conversion, ie, whatever the given text is will get\ndropped into the finished tuple without padding/truncation to the\nspecified char(n) length.\n\nLater, when we try to read out the tuple, the tuple access routines\nfigure they know how big a char(n) is, so they don't actually look\nto see what the varlena count is. This results in misalignment of\nfollowing fields, causing either wrong data readout or a full-bore\ncrash.\n\nTest case:\n\nCREATE TABLE test (\nplt int2 PRIMARY KEY,\nstate CHAR(5) NOT NULL DEFAULT 'new',\nused boolean NOT NULL DEFAULT 'f',\nid int4\n);\n\nINSERT INTO test (plt, id) VALUES (2, 3);\n\nExamination of the stored tuple shows it contains 32 bytes of data:\n\n0x400d7f30: 0x00 0x02 0x00 0x00 0x00 0x00 0x00 0x07\n0x400d7f38: 0x6e 0x65 0x77 0x00 0x00 0x00 0x00 0x03\n\nwhich deconstructs as follows:\n\n00 02 \tint2 '2' (bigendian hardware here)\n00 00\t\tpad space to align varlena char field to long boundary\n00 00 00 07\tvarlena header, size 7 => 3 bytes of actual data (whoops)\n6e 65 77\tASCII 'new'\n00\t\tboolean 'f' (no pad needed for bool)\n00 00 00 03\tint4 '3' (no pad, it's on a long boundary already)\n\nBut the tuple readout routines will assume without looking that char(5)\noccupies 9 bytes altogether, so they pick up the bool field 2 bytes over\nfrom where it actually was put and pick up the int4 field 4 bytes over\nfrom where it should be (due to alignment); result is garbage. If there\nwere another varlena field after the char(n) field, they'd pick up a\nwrong field length and probably crash.\n\n\nSo, the question still remains \"where and why\"? My guess at this point\nis that this is a bad side-effect of the fact that text and char(n) are\nconsidered binary-equivalent. Probably, whatever bit of code ought to\nbe coercing the default value into the correct type for the column is\ndeciding that it doesn't have to do anything because they're already\nequivalent types. I'm not sure where to look for that code (help\nanyone?). But I am sure that it needs to be coercing the value to the\nspecified number of characters for char(n).\n\nIt also strikes me that there should be a check in the low-level\ntuple construction routines that what they are handed for a char(n)\nfield is the right length. If tuple readout is going to assume that\nchar(n) is always n bytes of data, good software engineering dictates\nthat the tuple-writing code ought to enforce that assumption. At\nthe very least there should be an Assert() for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 1999 20:56:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Progress on char(n) default-value problem"
},
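Until the coercion path is fixed, a hedged workaround sketch: since the tuple dump shows the default literal being stored verbatim, writing the default at exactly the declared width should avoid the length mismatch (an assumption based on the dump above, not a verified fix):

CREATE TABLE test2 (
    plt int2 PRIMARY KEY,
    state CHAR(5) NOT NULL DEFAULT 'new  ',  -- 'new' hand-padded to 5 characters
    used boolean NOT NULL DEFAULT 'f',
    id int4
);
INSERT INTO test2 (plt, id) VALUES (2, 3);
SELECT * FROM test2;  -- fields should now read out correctly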
{
"msg_contents": "> But the tuple readout routines will assume without looking that char(5)\n> occupies 9 bytes altogether, so they pick up the bool field 2 bytes over\n> from where it actually was put and pick up the int4 field 4 bytes over\n> from where it should be (due to alignment); result is garbage. If there\n> were another varlena field after the char(n) field, they'd pick up a\n> wrong field length and probably crash.\n> \n> \n> So, the question still remains \"where and why\"? My guess at this point\n> is that this is a bad side-effect of the fact that text and char(n) are\n> considered binary-equivalent. Probably, whatever bit of code ought to\n> be coercing the default value into the correct type for the column is\n> deciding that it doesn't have to do anything because they're already\n> equivalent types. I'm not sure where to look for that code (help\n> anyone?). But I am sure that it needs to be coercing the value to the\n> specified number of characters for char(n).\n\nGood analysis. I am sure this is a byproduct of my change in 6.? that\nallowed optimzation of char() fields by assuming they are all a fixed\nlength. Of course, 99% of the time they were, so it never bit us,\nexcept with default. Not sure if default was added before or after my\noptimization.\n\n> It also strikes me that there should be a check in the low-level\n> tuple construction routines that what they are handed for a char(n)\n> field is the right length. If tuple readout is going to assume that\n> char(n) is always n bytes of data, good software engineering dictates\n> that the tuple-writing code ought to enforce that assumption. At\n> the very least there should be an Assert() for it.\n\nAt least an Assert(). However, the tuple access routines do an\nauto-compute of column offsets on the first table access, so it never\nreally looks at the tuples in between. However, an Assert should check\nthat when you access a char() field, that it is really the proper\nlength. Good idea.\n\nBTW, I couldn't find the default stuffing code myself either.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 May 1999 21:09:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Progress on char(n) default-value problem"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Good analysis. I am sure this is a byproduct of my change in 6.? that\n> allowed optimzation of char() fields by assuming they are all a fixed\n> length. Of course, 99% of the time they were, so it never bit us,\n> except with default.\n\nThere's nothing wrong with your optimization --- a char(n) field should\nbe n characters 100% of the time. It's the default-insertion code\nthat's busted.\n\n> At least an Assert(). However, the tuple access routines do an\n> auto-compute of column offsets on the first table access, so it never\n> really looks at the tuples in between. However, an Assert should check\n> that when you access a char() field, that it is really the proper\n> length. Good idea.\n\nNo, I think the Assert ought to be on the output side. You might never\ntry to access the char(n) field itself, only the following fields;\nif the attcacheoff fields are already set up when you come to the\nbogus tuple, an Assert in the reading side wouldn't catch it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 1999 21:32:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Progress on char(n) default-value problem "
}
] |
[
{
"msg_contents": "Hi!\n\nChanges:\n\n1. Check for other waiters is moved from LockResolveConflict \n to LockAquire.\n2. Don't take other waiters into account if either lock \n aquired by MyProc don't conflict with locks aquired by\n waiters or MyProc already holds conflicting lock.\n3. ProcSleep uses conflict table to order waiters. Priority\n not used.\n4. ProcLockWakeup stops attempts to wakeup waiters if lock\n conflict found _and_ someone was already wakeuped.\n5. DeadLockCheck is able to wakeup MyProc or other proc\n to prevent deadlock.\n\nBelow are tests I run. Hope that lmgr issues are closed.\n\n---\n\nBlocked by \"higher priority\" lock waiting:\n\n1:\nbegin;\nlock t1 in row exclusive mode;\n2:\nbegin;\nlock table t1 in share row exclusive mode; -- blocked by 1\n3:\nbegin;\nlock table t2 in share row exclusive mode;\nlock table t1 in row exclusive mode; -- blocked by 2\n1:\nlock t2 in row exclusive mode; -- blocked by 3\n-- was: DeadLock: 3 waits for 2 waiting for 1\n-- now: 3 granted lock on t1 and wakeuped\n\n\nBlocked by other:\n\n1:\nbegin;\nlock t1 in row share mode;\n2:\nbegin;\nlock table t1 in row exclusive mode;\n3:\nbegin;\nlock table t2 in share row exclusive mode;\nlock table t1 in share row exclusive mode; -- blocked by 2\n1:\nlock t2 in row exclusive mode; -- blocked by 3\n-- was: DeadLock: 3 waits for lock on t1 and 1 hold lock on t1\n-- now: no DeadLock: 3 blocked not by 1\n\n\nBlocked by other II:\n\n1:\nbegin;\nlock table t1 in row share mode;\n2:\nbegin;\nlock table t1 in row exclusive mode;\n3:\nbegin;\nlock table t2 in exclusive mode;\n1:\nlock t2 in row share mode; -- blocked by 3\n3:\nlock table t1 in share row exclusive mode; -- blocked by 2\n-- was: DeadLock: 3 waits for lock on t1 and 1 hold lock on t1\n-- now: no DeadLock: 3 blocked not by 1\n4:\nbegin;\nlock table t3 in exclusive mode;\n2:\nlock table t3 in row share mode; -- blocked by 4\n4:\nlock table t1 in row exclusive mode; -- blocked by 3 \n-- was: not possible\n-- now: self wakeing up\n\nVadim\n",
"msg_date": "Fri, 14 May 1999 15:53:39 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "lmgr changed"
},
{
"msg_contents": "> Hi!\n> \n> Changes:\n> \n> 1. Check for other waiters is moved from LockResolveConflict \n> to LockAquire.\n> 2. Don't take other waiters into account if either lock \n> aquired by MyProc don't conflict with locks aquired by\n> waiters or MyProc already holds conflicting lock.\n> 3. ProcSleep uses conflict table to order waiters. Priority\n> not used.\n> 4. ProcLockWakeup stops attempts to wakeup waiters if lock\n> conflict found _and_ someone was already wakeuped.\n> 5. DeadLockCheck is able to wakeup MyProc or other proc\n> to prevent deadlock.\n> \n> Below are tests I run. Hope that lmgr issues are closed.\n\nThanks Vadim. I don't think I could have made those changes myself.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 May 1999 08:03:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lmgr changed"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > 1. Check for other waiters is moved from LockResolveConflict\n> > to LockAquire.\n> > 2. Don't take other waiters into account if either lock\n> > aquired by MyProc don't conflict with locks aquired by\n> > waiters or MyProc already holds conflicting lock.\n> > 3. ProcSleep uses conflict table to order waiters. Priority\n> > not used.\n> > 4. ProcLockWakeup stops attempts to wakeup waiters if lock\n> > conflict found _and_ someone was already wakeuped.\n> > 5. DeadLockCheck is able to wakeup MyProc or other proc\n> > to prevent deadlock.\n> >\n> > Below are tests I run. Hope that lmgr issues are closed.\n> \n> Thanks Vadim. I don't think I could have made those changes myself.\n\nI should took locking into account before beta!\nSorry.\n\nVadim\n",
"msg_date": "Fri, 14 May 1999 20:37:56 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] lmgr changed"
}
] |
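For comparison with Vadim's scenarios, here is the textbook two-session deadlock in the same session-numbered notation (the table names are illustrative). With the reworked DeadLockCheck, exactly one of the two sessions should receive a deadlock error while the other is granted its lock:

1:
begin;
lock table a in exclusive mode;
2:
begin;
lock table b in exclusive mode;
1:
lock table b in exclusive mode; -- blocked by 2
2:
lock table a in exclusive mode; -- deadlock: one session errors out, the other proceeds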
[
{
"msg_contents": "> On Fri, 14 May 1999, Todd Graham Lewis wrote:\n> export CVSROOT=\":pserver:[email protected]:/usr/local/cvsroot\"\n> echo \"Password is \\\"postgresql\\\" \"\n> cvs -d :pserver:[email protected]:/usr/local/cvsroot login\n> \n> This was supposed to have been put on the web page, as I recall...\n\nIt would be very nice to have the same kind of web interface as the \nFreeBSD has, with a web accessible CVS tree.\nAdvantages is that it would make the source more accesible to non-hacker\nusers (raising the feeling-involved factor), help documentation writing by\nbrowsing changes easily before checking out / committing�and maybe other\nadvantages too (?).\n\n Check it out at:\n http://www.freebsd.org/cgi/cvsweb.cgi\n\n Source at:\n http://www.freebsd.org/cgi/cvsweb.cgi/www/en/cgi/cvsweb.cgi\n\n/Daniel\n\n_______________________________________________________________ /\\__ \n \\/ \n Daniel Lundin - MediaCenter, UNIX and BeOS Developer \n http://www.umc.se/~daniel/\n\n \"In C we had to code our own bugs. In C++ we can inherit them.\" \n\n\n\n",
"msg_date": "Fri, 14 May 1999 14:33:10 +0200 (CEST)",
"msg_from": "Daniel Lundin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "> It would be very nice to have the same kind of web interface as the\n> FreeBSD has, with a web accessible CVS tree.\n\nUh, we do have it. cvsup is up and running on postgresql.org (a\nFreeBSD machine) and we have posted several binaries on\n\n ftp://postgresql.org/pub/CVSup/\n\nSince you mention it, I had posted the other day a request for someone\nto test some RPMs I have made of the latest CVSup release. It would\nneed a Linux glibc2 machine (I built it on a RH5.2 system), and the\nRPMs include an example file for Postgres clients. They are in\n/pub/CVSup/beta/ on our ftp server. Once someone installs them\nsuccessfully I'll go ahead and update them (the\n/etc/rc.d/init.d/cvsupd.init startup file was not quite right, but a\ntest of the cvsup client would be sufficient to verify the RPMs I\nthink).\n\nThe *great* thing about CVSup is that you end up with the full CVS\nrepository on your local machine, and can do things like \"cvs log\"\nwithout going over the net. CVSup has so many optimizations for file\ntransfer that it just screams over the network, and updates of the CVS\ntree happen much faster than anonymous CVS can do.\n\nI'm hoping to get the time to finish marking up a chapter for the docs\non CVS access to postgresql.org, including the CVSup option. In the\nmeantime look at doc/FAQ_CVS and doc/src/sgml/cvs.sgml. Does anyone\nhave an interest in picking this up as they do an install themselves?\nWould be a great help...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 14 May 1999 13:32:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "On Fri, 14 May 1999, Thomas Lockhart wrote:\n\n> > It would be very nice to have the same kind of web interface as the\n> > FreeBSD has, with a web accessible CVS tree.\n> \n> Uh, we do have it. cvsup is up and running on postgresql.org (a\n> FreeBSD machine) and we have posted several binaries on\n\nOh, slight misunderstanding there.\nI'm perfectly aware of the CVSup availability. What I meant was the\ncvsweb.cgi perl script which lets one browse the diffs and versions\ndirectly on the web. This is what I meant as \"accessible\", more to be able\nto browse older versions easily than to stay up to date.\n\n/Daniel\n________________________________________________________________ /\\__ \n \\/ \n Daniel Lundin - MediaCenter, UNIX and BeOS Developer \n http://www.umc.se/~daniel/\n\n \"In C we had to code our own bugs. In C++ we can inherit them.\" \n\n",
"msg_date": "Fri, 14 May 1999 17:50:47 +0200 (CEST)",
"msg_from": "Daniel Lundin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "> On Fri, 14 May 1999, Thomas Lockhart wrote:\n> \n> > > It would be very nice to have the same kind of web interface as the\n> > > FreeBSD has, with a web accessible CVS tree.\n> > \n> > Uh, we do have it. cvsup is up and running on postgresql.org (a\n> > FreeBSD machine) and we have posted several binaries on\n> \n> Oh, slight misunderstanding there.\n> I'm perfectly aware of the CVSup availability. What I meant was the\n> cvsweb.cgi perl script which lets one browse the diffs and versions\n> directly on the web. This is what I meant as \"accessible\", more to be able\n> to browse older versions easily than to stay up to date.\n\nYou can click in the backend flowchart to see the code. That count's,\ndoesn't it. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 05:00:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > cvsweb.cgi perl script which lets one browse the diffs and versions\n> > directly on the web. This is what I meant as \"accessible\", more to be \n> > able to browse older versions easily than to stay up to date.\n\nThere is also a utility available just to create html versions of\ncomplete cvs logs. cvs2html can be used as part of a batch-type\nprocess and works very well.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 May 1999 03:40:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "\nWe have this now on our web site.\n\n\n\n[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > On Fri, 14 May 1999, Todd Graham Lewis wrote:\n> > export CVSROOT=\":pserver:[email protected]:/usr/local/cvsroot\"\n> > echo \"Password is \\\"postgresql\\\" \"\n> > cvs -d :pserver:[email protected]:/usr/local/cvsroot login\n> > \n> > This was supposed to have been put on the web page, as I recall...\n> \n> It would be very nice to have the same kind of web interface as the \n> FreeBSD has, with a web accessible CVS tree.\n> Advantages is that it would make the source more accesible to non-hacker\n> users (raising the feeling-involved factor), help documentation writing by\n> browsing changes easily before checking out / committing_and maybe other\n> advantages too (?).\n> \n> Check it out at:\n> http://www.freebsd.org/cgi/cvsweb.cgi\n> \n> Source at:\n> http://www.freebsd.org/cgi/cvsweb.cgi/www/en/cgi/cvsweb.cgi\n> \n> /Daniel\n> \n> _______________________________________________________________ /\\__ \n> \\/ \n> Daniel Lundin - MediaCenter, UNIX and BeOS Developer \n> http://www.umc.se/~daniel/\n> \n> \"In C we had to code our own bugs. In C++ we can inherit them.\" \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:46:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
}
] |
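To round out the anonymous-access recipe quoted at the top of this thread, a checkout sketch. The module name pgsql and the -z3 compression level are assumptions for illustration, not taken from the messages above:

export CVSROOT=":pserver:[email protected]:/usr/local/cvsroot"
cvs login                # password as given above
cvs -z3 checkout pgsql   # pull a working copy (module name assumed)
cvs -z3 update -d -P     # later: sync, creating new directories and pruning empty ones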
[
{
"msg_contents": "I notice configure from 6.5 cvs doesn't checks anything after --with\noption. First time I run\n./configure --with-port=5433 \nand was very surprised when postmaster doesn't want to start with port=5433\nafter compilation and installation. Configure doesn't complaints !!!\nIt takes some time when I realized I had to \n./configure --with-pgport=5433\n\nAlso, you can specify any option begins from --with-\nconfigure will not complain about unknown option, just silently ignore it.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 14 May 1999 17:28:06 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "configure --with-xxxx problem"
},
{
"msg_contents": "\n> (*) their reasoning is that you can pass the same set of --with flags\n> to every configure script in a large source tree without worrying about\n> exactly which packages want which options. Perhaps that really is\n> useful for building a ton of GNU tools together, but it sure hurts\n> user-friendliness otherwise.\n\nThat should be easy to make configurable: they'd just have to add a\nflag '--ignore-unsupported-options' to make this optional behaviour.\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n",
"msg_date": "Fri, 14 May 1999 16:40:35 +0200",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] configure --with-xxxx problem"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> you can specify any option begins from --with-\n> configure will not complain about unknown option, just silently ignore it.\n\nThis has always been true with every version of autoconf --- it's one of\nthe less well designed aspects of autoconf IMHO. The GNU folk claim\nit's a feature, but I don't think so... (*)\n\n\t\t\tregards, tom lane\n\n(*) their reasoning is that you can pass the same set of --with flags\nto every configure script in a large source tree without worrying about\nexactly which packages want which options. Perhaps that really is\nuseful for building a ton of GNU tools together, but it sure hurts\nuser-friendliness otherwise.\n",
"msg_date": "Fri, 14 May 1999 10:46:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] configure --with-xxxx problem "
},
{
"msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > you can specify any option begins from --with-\n> > configure will not complain about unknown option, just silently ignore it.\n> \n> This has always been true with every version of autoconf --- it's one of\n> the less well designed aspects of autoconf IMHO. The GNU folk claim\n> it's a feature, but I don't think so... (*)\n> \n> \t\t\tregards, tom lane\n> \n> (*) their reasoning is that you can pass the same set of --with flags\n> to every configure script in a large source tree without worrying about\n> exactly which packages want which options. Perhaps that really is\n> useful for building a ton of GNU tools together, but it sure hurts\n> user-friendliness otherwise.\n\nI totally agree. If they had an option to disregard unknown options\nthat would be OK, but never to make it the default.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 04:59:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] configure --with-xxxx problem"
}
] |
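Since autoconf swallows unknown --with flags silently, one defensive habit (plain autoconf usage, nothing Postgres-specific is assumed here) is to check the script's own option list before trusting a flag:

./configure --help | grep -- '--with'   # list the --with options this script actually knows
./configure --with-pgport=5433          # the spelling this source tree honors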
[
{
"msg_contents": "The rules test is the only that fails for me under NetBSD/i386, and this\nseems simply to be due to using a different userid. The expected result\nhas viewowner=pgsql, my result has viewowner=postgres. I suspect that had\nI run \"gmake runtest\" as some other userid with access to the regression\ndatabase some of those rows may have had yet a different viewowner. Would\nit be a good idea to omit viewowner in the test as per the following\npatches? (Less luck on NetBSD/arm32 - looks similar to someone elses\nposting for the mac port)\n\nCheers,\n\nPatrick\n\n(Patches from src/test/regress)\n\n\n*** sql/rules.sql.orig\tFri May 14 17:55:08 1999\n--- sql/rules.sql\tFri May 14 17:56:52 1999\n***************\n*** 686,692 ****\n --\n -- Check that ruleutils are working\n --\n! SELECT * FROM pg_views ORDER BY viewname;\n \n SELECT * FROM pg_rules ORDER BY tablename, rulename;\n- \n--- 686,691 ----\n --\n -- Check that ruleutils are working\n --\n! SELECT viewname,definition FROM pg_views ORDER BY viewname;\n \n SELECT * FROM pg_rules ORDER BY tablename, rulename;\n*** expected/rules.out.orig\tFri May 14 12:55:16 1999\n--- expected/rules.out\tFri May 14 18:01:45 1999\n***************\n*** 1064,1092 ****\n sl8 | 21|brown | 40|inch | 101.6\n (9 rows)\n \n! QUERY: SELECT * FROM pg_views ORDER BY viewname;\n! viewname |viewowner|definition \n! ------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! iexit |pgsql |SELECT \"ih\".\"name\", \"ih\".\"thepath\", \"interpt_pp\"(\"ih\".\"thepath\", \"r\".\"thepath\") AS \"exit\" FROM \"ihighway\" \"ih\", \"ramp\" \"r\" WHERE \"ih\".\"thepath\" ## \"r\".\"thepath\"; \n! pg_indexes |pgsql |SELECT \"c\".\"relname\" AS \"tablename\", \"i\".\"relname\" AS \"indexname\", \"pg_get_indexdef\"(\"x\".\"indexrelid\") AS \"indexdef\" FROM \"pg_index\" \"x\", \"pg_class\" \"c\", \"pg_class\" \"i\" WHERE (\"c\".\"oid\" = \"x\".\"indrelid\") AND (\"i\".\"oid\" = \"x\".\"indexrelid\"); \n! pg_rules |pgsql |SELECT \"c\".\"relname\" AS \"tablename\", \"r\".\"rulename\", \"pg_get_ruledef\"(\"r\".\"rulename\") AS \"definition\" FROM \"pg_rewrite\" \"r\", \"pg_class\" \"c\" WHERE (\"r\".\"rulename\" !~ '^_RET'::\"text\") AND (\"c\".\"oid\" = \"r\".\"ev_class\"); \n! pg_tables |pgsql |SELECT \"c\".\"relname\" AS \"tablename\", \"pg_get_userbyid\"(\"c\".\"relowner\") AS \"tableowner\", \"c\".\"relhasindex\" AS \"hasindexes\", \"c\".\"relhasrules\" AS \"hasrules\", \"c\".\"reltriggers\" > '0'::\"int4\" AS \"hastriggers\" FROM \"pg_class\" \"c\" WHERE ((\"c\".\"relkind\" = 'r'::\"char\") OR (\"c\".\"relkind\" = 's'::\"char\")) AND (NOT (EXISTS (SELECT \"rulename\" FROM \"pg_rewrite\" WHERE (\"ev_class\" = \"c\".\"oid\") AND (\"ev_type\" = '1'::\"char\"))));\n! pg_user |pgsql |SELECT \"usename\", \"usesysid\", \"usecreatedb\", \"usetrace\", \"usesuper\", \"usecatupd\", '********'::\"text\" AS \"passwd\", \"valuntil\" FROM \"pg_shadow\"; \n! 
pg_views |pgsql |SELECT \"c\".\"relname\" AS \"viewname\", \"pg_get_userbyid\"(\"c\".\"relowner\") AS \"viewowner\", \"pg_get_viewdef\"(\"c\".\"relname\") AS \"definition\" FROM \"pg_class\" \"c\" WHERE (\"c\".\"relhasrules\") AND (EXISTS (SELECT \"r\".\"rulename\" FROM \"pg_rewrite\" \"r\" WHERE (\"r\".\"ev_class\" = \"c\".\"oid\") AND (\"r\".\"ev_type\" = '1'::\"char\"))); \n! rtest_v1 |pgsql |SELECT \"a\", \"b\" FROM \"rtest_t1\"; \n! rtest_vcomp |pgsql |SELECT \"x\".\"part\", \"x\".\"size\" * \"y\".\"factor\" AS \"size_in_cm\" FROM \"rtest_comp\" \"x\", \"rtest_unitfact\" \"y\" WHERE \"x\".\"unit\" = \"y\".\"unit\"; \n! rtest_vview1 |pgsql |SELECT \"x\".\"a\", \"x\".\"b\" FROM \"rtest_view1\" \"x\" WHERE '0'::\"int4\" < (SELECT \"count\"(\"y\".\"a\") AS \"count\" FROM \"rtest_view2\" \"y\" WHERE \"y\".\"a\" = \"x\".\"a\"); \n! rtest_vview2 |pgsql |SELECT \"a\", \"b\" FROM \"rtest_view1\" WHERE \"v\"; \n! rtest_vview3 |pgsql |SELECT \"x\".\"a\", \"x\".\"b\" FROM \"rtest_vview2\" \"x\" WHERE '0'::\"int4\" < (SELECT \"count\"(\"y\".\"a\") AS \"count\" FROM \"rtest_view2\" \"y\" WHERE \"y\".\"a\" = \"x\".\"a\"); \n! rtest_vview4 |pgsql |SELECT \"x\".\"a\", \"x\".\"b\", \"count\"(\"y\".\"a\") AS \"refcount\" FROM \"rtest_view1\" \"x\", \"rtest_view2\" \"y\" WHERE \"x\".\"a\" = \"y\".\"a\" GROUP BY \"x\".\"a\", \"x\".\"b\"; \n! rtest_vview5 |pgsql |SELECT \"a\", \"b\", \"rtest_viewfunc1\"(\"a\") AS \"refcount\" FROM \"rtest_view1\"; \n! shoe |pgsql |SELECT \"sh\".\"shoename\", \"sh\".\"sh_avail\", \"sh\".\"slcolor\", \"sh\".\"slminlen\", \"sh\".\"slminlen\" * \"un\".\"un_fact\" AS \"slminlen_cm\", \"sh\".\"slmaxlen\", \"sh\".\"slmaxlen\" * \"un\".\"un_fact\" AS \"slmaxlen_cm\", \"sh\".\"slunit\" FROM \"shoe_data\" \"sh\", \"unit\" \"un\" WHERE \"sh\".\"slunit\" = \"un\".\"un_name\"; \n! shoe_ready |pgsql |SELECT \"rsh\".\"shoename\", \"rsh\".\"sh_avail\", \"rsl\".\"sl_name\", \"rsl\".\"sl_avail\", \"int4smaller\"(\"rsh\".\"sh_avail\", \"rsl\".\"sl_avail\") AS \"total_avail\" FROM \"shoe\" \"rsh\", \"shoelace\" \"rsl\" WHERE ((\"rsl\".\"sl_color\" = \"rsh\".\"slcolor\") AND (\"rsl\".\"sl_len_cm\" >= \"rsh\".\"slminlen_cm\")) AND (\"rsl\".\"sl_len_cm\" <= \"rsh\".\"slmaxlen_cm\"); \n! shoelace |pgsql |SELECT \"s\".\"sl_name\", \"s\".\"sl_avail\", \"s\".\"sl_color\", \"s\".\"sl_len\", \"s\".\"sl_unit\", \"s\".\"sl_len\" * \"u\".\"un_fact\" AS \"sl_len_cm\" FROM \"shoelace_data\" \"s\", \"unit\" \"u\" WHERE \"s\".\"sl_unit\" = \"u\".\"un_name\"; \n! shoelace_candelete|pgsql |SELECT \"sl_name\", \"sl_avail\", \"sl_color\", \"sl_len\", \"sl_unit\", \"sl_len_cm\" FROM \"shoelace_obsolete\" WHERE \"sl_avail\" = '0'::\"int4\"; \n! shoelace_obsolete |pgsql |SELECT \"sl_name\", \"sl_avail\", \"sl_color\", \"sl_len\", \"sl_unit\", \"sl_len_cm\" FROM \"shoelace\" WHERE NOT (EXISTS (SELECT \"shoename\" FROM \"shoe\" WHERE \"slcolor\" = \"sl_color\")); \n! street |pgsql |SELECT \"r\".\"name\", \"r\".\"thepath\", \"c\".\"cname\" FROM \"road\" \"r\", \"real_city\" \"c\" WHERE \"c\".\"outline\" ## \"r\".\"thepath\"; \n! toyemp |pgsql |SELECT \"name\", \"age\", \"location\", '12'::\"int4\" * \"salary\" AS \"annualsal\" FROM \"emp\"; \n (20 rows)\n \n QUERY: SELECT * FROM pg_rules ORDER BY tablename, rulename;\n--- 1064,1092 ----\n sl8 | 21|brown | 40|inch | 101.6\n (9 rows)\n \n! QUERY: SELECT viewname,definition FROM pg_views ORDER BY viewname;\n! viewname |definition \n! 
------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! iexit |SELECT \"ih\".\"name\", \"ih\".\"thepath\", \"interpt_pp\"(\"ih\".\"thepath\", \"r\".\"thepath\") AS \"exit\" FROM \"ihighway\" \"ih\", \"ramp\" \"r\" WHERE \"ih\".\"thepath\" ## \"r\".\"thepath\"; \n! pg_indexes |SELECT \"c\".\"relname\" AS \"tablename\", \"i\".\"relname\" AS \"indexname\", \"pg_get_indexdef\"(\"x\".\"indexrelid\") AS \"indexdef\" FROM \"pg_index\" \"x\", \"pg_class\" \"c\", \"pg_class\" \"i\" WHERE (\"c\".\"oid\" = \"x\".\"indrelid\") AND (\"i\".\"oid\" = \"x\".\"indexrelid\"); \n! pg_rules |SELECT \"c\".\"relname\" AS \"tablename\", \"r\".\"rulename\", \"pg_get_ruledef\"(\"r\".\"rulename\") AS \"definition\" FROM \"pg_rewrite\" \"r\", \"pg_class\" \"c\" WHERE (\"r\".\"rulename\" !~ '^_RET'::\"text\") AND (\"c\".\"oid\" = \"r\".\"ev_class\"); \n! pg_tables |SELECT \"c\".\"relname\" AS \"tablename\", \"pg_get_userbyid\"(\"c\".\"relowner\") AS \"tableowner\", \"c\".\"relhasindex\" AS \"hasindexes\", \"c\".\"relhasrules\" AS \"hasrules\", \"c\".\"reltriggers\" > '0'::\"int4\" AS \"hastriggers\" FROM \"pg_class\" \"c\" WHERE ((\"c\".\"relkind\" = 'r'::\"char\") OR (\"c\".\"relkind\" = 's'::\"char\")) AND (NOT (EXISTS (SELECT \"rulename\" FROM \"pg_rewrite\" WHERE (\"ev_class\" = \"c\".\"oid\") AND (\"ev_type\" = '1'::\"char\"))));\n! pg_user |SELECT \"usename\", \"usesysid\", \"usecreatedb\", \"usetrace\", \"usesuper\", \"usecatupd\", '********'::\"text\" AS \"passwd\", \"valuntil\" FROM \"pg_shadow\"; \n! pg_views |SELECT \"c\".\"relname\" AS \"viewname\", \"pg_get_userbyid\"(\"c\".\"relowner\") AS \"viewowner\", \"pg_get_viewdef\"(\"c\".\"relname\") AS \"definition\" FROM \"pg_class\" \"c\" WHERE (\"c\".\"relhasrules\") AND (EXISTS (SELECT \"r\".\"rulename\" FROM \"pg_rewrite\" \"r\" WHERE (\"r\".\"ev_class\" = \"c\".\"oid\") AND (\"r\".\"ev_type\" = '1'::\"char\"))); \n! rtest_v1 |SELECT \"a\", \"b\" FROM \"rtest_t1\"; \n! rtest_vcomp |SELECT \"x\".\"part\", \"x\".\"size\" * \"y\".\"factor\" AS \"size_in_cm\" FROM \"rtest_comp\" \"x\", \"rtest_unitfact\" \"y\" WHERE \"x\".\"unit\" = \"y\".\"unit\"; \n! rtest_vview1 |SELECT \"x\".\"a\", \"x\".\"b\" FROM \"rtest_view1\" \"x\" WHERE '0'::\"int4\" < (SELECT \"count\"(\"y\".\"a\") AS \"count\" FROM \"rtest_view2\" \"y\" WHERE \"y\".\"a\" = \"x\".\"a\"); \n! rtest_vview2 |SELECT \"a\", \"b\" FROM \"rtest_view1\" WHERE \"v\"; \n! rtest_vview3 |SELECT \"x\".\"a\", \"x\".\"b\" FROM \"rtest_vview2\" \"x\" WHERE '0'::\"int4\" < (SELECT \"count\"(\"y\".\"a\") AS \"count\" FROM \"rtest_view2\" \"y\" WHERE \"y\".\"a\" = \"x\".\"a\"); \n! rtest_vview4 |SELECT \"x\".\"a\", \"x\".\"b\", \"count\"(\"y\".\"a\") AS \"refcount\" FROM \"rtest_view1\" \"x\", \"rtest_view2\" \"y\" WHERE \"x\".\"a\" = \"y\".\"a\" GROUP BY \"x\".\"a\", \"x\".\"b\"; \n! rtest_vview5 |SELECT \"a\", \"b\", \"rtest_viewfunc1\"(\"a\") AS \"refcount\" FROM \"rtest_view1\"; \n! 
shoe |SELECT \"sh\".\"shoename\", \"sh\".\"sh_avail\", \"sh\".\"slcolor\", \"sh\".\"slminlen\", \"sh\".\"slminlen\" * \"un\".\"un_fact\" AS \"slminlen_cm\", \"sh\".\"slmaxlen\", \"sh\".\"slmaxlen\" * \"un\".\"un_fact\" AS \"slmaxlen_cm\", \"sh\".\"slunit\" FROM \"shoe_data\" \"sh\", \"unit\" \"un\" WHERE \"sh\".\"slunit\" = \"un\".\"un_name\"; \n! shoe_ready |SELECT \"rsh\".\"shoename\", \"rsh\".\"sh_avail\", \"rsl\".\"sl_name\", \"rsl\".\"sl_avail\", \"int4smaller\"(\"rsh\".\"sh_avail\", \"rsl\".\"sl_avail\") AS \"total_avail\" FROM \"shoe\" \"rsh\", \"shoelace\" \"rsl\" WHERE ((\"rsl\".\"sl_color\" = \"rsh\".\"slcolor\") AND (\"rsl\".\"sl_len_cm\" >= \"rsh\".\"slminlen_cm\")) AND (\"rsl\".\"sl_len_cm\" <= \"rsh\".\"slmaxlen_cm\"); \n! shoelace |SELECT \"s\".\"sl_name\", \"s\".\"sl_avail\", \"s\".\"sl_color\", \"s\".\"sl_len\", \"s\".\"sl_unit\", \"s\".\"sl_len\" * \"u\".\"un_fact\" AS \"sl_len_cm\" FROM \"shoelace_data\" \"s\", \"unit\" \"u\" WHERE \"s\".\"sl_unit\" = \"u\".\"un_name\"; \n! shoelace_candelete|SELECT \"sl_name\", \"sl_avail\", \"sl_color\", \"sl_len\", \"sl_unit\", \"sl_len_cm\" FROM \"shoelace_obsolete\" WHERE \"sl_avail\" = '0'::\"int4\"; \n! shoelace_obsolete |SELECT \"sl_name\", \"sl_avail\", \"sl_color\", \"sl_len\", \"sl_unit\", \"sl_len_cm\" FROM \"shoelace\" WHERE NOT (EXISTS (SELECT \"shoename\" FROM \"shoe\" WHERE \"slcolor\" = \"sl_color\")); \n! street |SELECT \"r\".\"name\", \"r\".\"thepath\", \"c\".\"cname\" FROM \"road\" \"r\", \"real_city\" \"c\" WHERE \"c\".\"outline\" ## \"r\".\"thepath\"; \n! toyemp |SELECT \"name\", \"age\", \"location\", '12'::\"int4\" * \"salary\" AS \"annualsal\" FROM \"emp\"; \n (20 rows)\n \n QUERY: SELECT * FROM pg_rules ORDER BY tablename, rulename;\n",
"msg_date": "Fri, 14 May 1999 18:07:40 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "rules regression test"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n> The rules test is the only that fails for me under NetBSD/i386, and this\n> seems simply to be due to using a different userid. The expected result\n> has viewowner=pgsql, my result has viewowner=postgres.\n\nYeah, looks like Jan committed an expected/rules.out file made under his\npersonal environment again :-(. We had the same problem in February...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 May 1999 17:51:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] rules regression test "
},
{
"msg_contents": ">\n> The rules test is the only that fails for me under NetBSD/i386, and this\n> seems simply to be due to using a different userid. The expected result\n> has viewowner=pgsql, my result has viewowner=postgres. I suspect that had\n> I run \"gmake runtest\" as some other userid with access to the regression\n> database some of those rows may have had yet a different viewowner. Would\n> it be a good idea to omit viewowner in the test as per the following\n> patches? (Less luck on NetBSD/arm32 - looks similar to someone elses\n> posting for the mac port)\n>\n\n Sorry,\n\n me again due to latest changes - will fix.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 17 May 1999 10:18:24 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] rules regression test"
},
{
"msg_contents": "> \n> >\n> > The rules test is the only that fails for me under NetBSD/i386, and this\n> > seems simply to be due to using a different userid. The expected result\n> > has viewowner=pgsql, my result has viewowner=postgres. I suspect that had\n> > I run \"gmake runtest\" as some other userid with access to the regression\n> > database some of those rows may have had yet a different viewowner. Would\n> > it be a good idea to omit viewowner in the test as per the following\n> > patches? (Less luck on NetBSD/arm32 - looks similar to someone elses\n> > posting for the mac port)\n> >\n> \n> Sorry,\n> \n> me again due to latest changes - will fix.\n\n\n Done.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Mon, 17 May 1999 11:02:46 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] rules regression test"
}
] |
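For anyone who wants to try Patrick's workaround locally before the committed fix, the context diff above applies from the regression directory in the usual way (the patch file name here is illustrative):

cd src/test/regress
patch -p0 < rules-viewowner.diff   # the two hunks posted above, saved to a file
gmake runtest                      # the rules test should then pass regardless of userid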
[
{
"msg_contents": "The behavior is valid, if you define NULL as meaning undefined.\nIn other words when you define something as NULL you're saying, \"I don't\nknow what it is. It could be equal or not.\"\n\t-DEJ\n\n> -----Original Message-----\n> From:\tsecret [SMTP:[email protected]]\n> Sent:\tFriday, May 14, 1999 11:58 AM\n> To:\tPG-SQL\n> Subject:\t[SQL] Oddities with NULL and GROUP BY\n> \n> Maybe there is something I don't know about how GROUP BY should\n> work, but if I have a table like:\n> a,b,c\n> 1,1,1\n> 1,1,2\n> 1,1,3\n> 1,2,1\n> 1,3,1\n> \n> And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> 1,1,6\n> 1,2,1\n> 1,3,1\n> \n> So whenever a or b changes we get a new summed row, well if I have rows\n> where a or b are null, this doesn't happen, infact I seem to get all\n> those rows individually... Like if:\n> 1,1,1\n> 1,1,3\n> 1,NULL,10\n> 1,NULL,20\n> 1,2,3\n> \n> I get:\n> 1,1,4\n> 1,NULL,10\n> 1,NULL,20\n> 1,2,3\n> \n> Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like any other\n> value? Or is there some bit of information I'm missing? I can set\n> everything from NULL to 0 if need be, but I'd rather not...\n> \n> David Secret\n> MIS Director\n> Kearney Development Co., Inc.\n> \n",
"msg_date": "Fri, 14 May 1999 13:10:25 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "\"Jackson, DeJuan\" wrote:\n\n> The behavior is valid, if you define NULL as meaning undefined.\n> In other words when you define something as NULL you're saying, \"I don't\n> know what it is. It could be equal or not.\"\n> -DEJ\n>\n> > -----Original Message-----\n> > From: secret [SMTP:[email protected]]\n> > Sent: Friday, May 14, 1999 11:58 AM\n> > To: PG-SQL\n> > Subject: [SQL] Oddities with NULL and GROUP BY\n> >\n> > Maybe there is something I don't know about how GROUP BY should\n> > work, but if I have a table like:\n> > a,b,c\n> > 1,1,1\n> > 1,1,2\n> > 1,1,3\n> > 1,2,1\n> > 1,3,1\n> >\n> > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> > 1,1,6\n> > 1,2,1\n> > 1,3,1\n> >\n> > So whenever a or b changes we get a new summed row, well if I have rows\n> > where a or b are null, this doesn't happen, infact I seem to get all\n> > those rows individually... Like if:\n> > 1,1,1\n> > 1,1,3\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > I get:\n> > 1,1,4\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like any other\n> > value? Or is there some bit of information I'm missing? I can set\n> > everything from NULL to 0 if need be, but I'd rather not...\n> >\n> > David Secret\n> > MIS Director\n> > Kearney Development Co., Inc.\n> >\n\n IBM's DB/2 Disagrees, so does Oracle8!\n\n\nHere is a cut & paste from Oracle SQL+:\n\nSQL> select * from z;\n\n A B\n--------- ---------\n 1 1\n 1 2\n 5\n 10\n\nSQL> select a,sum(b) from z group by a;\n\n A SUM(B)\n--------- ---------\n 1 3\n 15\n\nSQL>\n\n I'm going to report this as a bug now that I've verified 2 major database\nvendors perform the task as I would expect them to, and PostgreSQL does it\nvery differently. The question is really is NULL=NULL, which I would say it\nshould be.\n\n",
"msg_date": "Mon, 17 May 1999 09:14:50 -0400",
"msg_from": "secret <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "\"Jackson, DeJuan\" wrote:\n\n> The behavior is valid, if you define NULL as meaning undefined.\n> In other words when you define something as NULL you're saying, \"I don't\n> know what it is. It could be equal or not.\"\n> -DEJ\n>\n> > -----Original Message-----\n> > From: secret [SMTP:[email protected]]\n> > Sent: Friday, May 14, 1999 11:58 AM\n> > To: PG-SQL\n> > Subject: [SQL] Oddities with NULL and GROUP BY\n> >\n> > Maybe there is something I don't know about how GROUP BY should\n> > work, but if I have a table like:\n> > a,b,c\n> > 1,1,1\n> > 1,1,2\n> > 1,1,3\n> > 1,2,1\n> > 1,3,1\n> >\n> > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> > 1,1,6\n> > 1,2,1\n> > 1,3,1\n> >\n> > So whenever a or b changes we get a new summed row, well if I have rows\n> > where a or b are null, this doesn't happen, infact I seem to get all\n> > those rows individually... Like if:\n> > 1,1,1\n> > 1,1,3\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > I get:\n> > 1,1,4\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like any other\n> > value? Or is there some bit of information I'm missing? I can set\n> > everything from NULL to 0 if need be, but I'd rather not...\n> >\n> > David Secret\n> > MIS Director\n> > Kearney Development Co., Inc.\n> >\n\n Oh, I just observed this oddity... PostgreSQL groups just fine when there\nis a table of 2 fields a int4, b int4...\n\nSELECT a,sum(b) FROM z GROUP BY a Groups NULLs fine\nSELECT a,b,sum(c) FROM z GROUP BY a,b Error in grouping NULLs in b...\n\n\n\n",
"msg_date": "Mon, 17 May 1999 09:49:54 -0400",
"msg_from": "secret <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "secret ha scritto:\n\n> \"Jackson, DeJuan\" wrote:\n>\n> > The behavior is valid, if you define NULL as meaning undefined.\n> > In other words when you define something as NULL you're saying, \"I don't\n> > know what it is. It could be equal or not.\"\n> > -DEJ\n> >\n> > > -----Original Message-----\n> > > From: secret [SMTP:[email protected]]\n> > > Sent: Friday, May 14, 1999 11:58 AM\n> > > To: PG-SQL\n> > > Subject: [SQL] Oddities with NULL and GROUP BY\n> > >\n> > > Maybe there is something I don't know about how GROUP BY should\n> > > work, but if I have a table like:\n> > > a,b,c\n> > > 1,1,1\n> > > 1,1,2\n> > > 1,1,3\n> > > 1,2,1\n> > > 1,3,1\n> > >\n> > > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> > > 1,1,6\n> > > 1,2,1\n> > > 1,3,1\n> > >\n> > > So whenever a or b changes we get a new summed row, well if I have rows\n> > > where a or b are null, this doesn't happen, infact I seem to get all\n> > > those rows individually... Like if:\n> > > 1,1,1\n> > > 1,1,3\n> > > 1,NULL,10\n> > > 1,NULL,20\n> > > 1,2,3\n> > >\n> > > I get:\n> > > 1,1,4\n> > > 1,NULL,10\n> > > 1,NULL,20\n> > > 1,2,3\n> > >\n> > > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like any other\n> > > value? Or is there some bit of information I'm missing? I can set\n> > > everything from NULL to 0 if need be, but I'd rather not...\n> > >\n> > > David Secret\n> > > MIS Director\n> > > Kearney Development Co., Inc.\n> > >\n>\n> IBM's DB/2 Disagrees, so does Oracle8!\n>\n> Here is a cut & paste from Oracle SQL+:\n>\n> SQL> select * from z;\n>\n> A B\n> --------- ---------\n> 1 1\n> 1 2\n> 5\n> 10\n>\n> SQL> select a,sum(b) from z group by a;\n>\n> A SUM(B)\n> --------- ---------\n> 1 3\n> 15\n>\n> SQL>\n>\n> I'm going to report this as a bug now that I've verified 2 major database\n> vendors perform the task as I would expect them to, and PostgreSQL does it\n> very differently. The question is really is NULL=NULL, which I would say it\n> should be.\n\nI tried it in PostgreSQL 6.5beta1 with the same result:\n\nselect * from z;\na| b\n-+--\n1| 1\n1| 2\n | 5\n |10\n(4 rows)\n\nselect a,sum(b) from z group by a;\na|sum\n-+---\n1| 3\n | 15\n(2 rows)\n\nThe Pratical SQL Handbook at page 171 says:\nSince nulls represent \"the great unknown\", there is no way to know\nwhether one null is equal to any other null. Each unknown value\nmay or may not be different from another.\nHowever, if the grouping column contains more than one null,\nall of them are put into a single group.\n\nThus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n\nJos�\n\n\n\n\n--\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n\nsecret ha scritto:\n\"Jackson, DeJuan\" wrote:\n> The behavior is valid, if you define NULL as meaning undefined.\n> In other words when you define something as NULL you're saying, \"I\ndon't\n> know what it is. It could be equal or not.\"\n> -DEJ\n>\n> > -----Original Message-----\n> > From: secret [SMTP:[email protected]]\n> > Sent: Friday, May 14, 1999 11:58 AM\n> > To: PG-SQL\n> > Subject: [SQL] Oddities with NULL\nand GROUP BY\n> >\n> > Maybe there is something I don't know about\nhow GROUP BY should\n> > work, but if I have a table like:\n> > a,b,c\n> > 1,1,1\n> > 1,1,2\n> > 1,1,3\n> > 1,2,1\n> > 1,3,1\n> >\n> > And I say SELECT a,b,sum(c) FROm .. 
GROUP BY a,b I get\n> > 1,1,6\n> > 1,2,1\n> > 1,3,1\n> >\n> > So whenever a or b changes we get a new summed row, well if I have\nrows\n> > where a or b are null, this doesn't happen, infact I seem to get\nall\n> > those rows individually... Like if:\n> > 1,1,1\n> > 1,1,3\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > I get:\n> > 1,1,4\n> > 1,NULL,10\n> > 1,NULL,20\n> > 1,2,3\n> >\n> > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like\nany other\n> > value? Or is there some bit of information I'm missing? \nI can set\n> > everything from NULL to 0 if need be, but I'd rather not...\n> >\n> > David Secret\n> > MIS Director\n> > Kearney Development Co., Inc.\n> >\n IBM's DB/2 Disagrees, so does Oracle8!\nHere is a cut & paste from Oracle SQL+:\nSQL> select * from z;\n A \nB\n--------- ---------\n 1 \n1\n 1 \n2\n \n5\n \n10\nSQL> select a,sum(b) from z group by a;\n A SUM(B)\n--------- ---------\n 1 \n3\n \n15\nSQL>\n I'm going to report this as a bug now that I've verified\n2 major database\nvendors perform the task as I would expect them to, and PostgreSQL\ndoes it\nvery differently. The question is really is NULL=NULL, which\nI would say it\nshould be.\nI tried it in PostgreSQL 6.5beta1 with the same result:\nselect * from z;\na| b\n-+--\n1| 1\n1| 2\n | 5\n |10\n(4 rows)\nselect a,sum(b) from z group by a;\na|sum\n-+---\n1| 3\n | 15\n(2 rows)\nThe Pratical SQL Handbook at page 171 says:\nSince nulls represent \"the great unknown\", there is no way to know\nwhether one null is equal to any other null. Each unknown value\nmay or may not be different from another.\nHowever, if the grouping column contains more than one null,\nall of them are put into a single group.\nThus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\nJosé\n \n \n \n--\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'",
"msg_date": "Mon, 17 May 1999 17:28:39 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "Jos� Soares wrote:\n\n> secret ha scritto:\n>\n>> \"Jackson, DeJuan\" wrote:\n>>\n>> > The behavior is valid, if you define NULL as meaning undefined.\n>> > In other words when you define something as NULL you're saying, \"I\n>> don't\n>> > know what it is. It could be equal or not.\"\n>> > -DEJ\n>> >\n>> > > -----Original Message-----\n>> > > From: secret [SMTP:[email protected]]\n>> > > Sent: Friday, May 14, 1999 11:58 AM\n>> > > To: PG-SQL\n>> > > Subject: [SQL] Oddities with NULL and GROUP BY\n>> > >\n>> > > Maybe there is something I don't know about how GROUP BY\n>> should\n>> > > work, but if I have a table like:\n>> > > a,b,c\n>> > > 1,1,1\n>> > > 1,1,2\n>> > > 1,1,3\n>> > > 1,2,1\n>> > > 1,3,1\n>> > >\n>> > > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n>> > > 1,1,6\n>> > > 1,2,1\n>> > > 1,3,1\n>> > >\n>> > > So whenever a or b changes we get a new summed row, well if I\n>> have rows\n>> > > where a or b are null, this doesn't happen, infact I seem to get\n>> all\n>> > > those rows individually... Like if:\n>> > > 1,1,1\n>> > > 1,1,3\n>> > > 1,NULL,10\n>> > > 1,NULL,20\n>> > > 1,2,3\n>> > >\n>> > > I get:\n>> > > 1,1,4\n>> > > 1,NULL,10\n>> > > 1,NULL,20\n>> > > 1,2,3\n>> > >\n>> > > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like\n>> any other\n>> > > value? Or is there some bit of information I'm missing? I can\n>> set\n>> > > everything from NULL to 0 if need be, but I'd rather not...\n>> > >\n>> > > David Secret\n>> > > MIS Director\n>> > > Kearney Development Co., Inc.\n>> > >\n>>\n>> IBM's DB/2 Disagrees, so does Oracle8!\n>>\n>> Here is a cut & paste from Oracle SQL+:\n>>\n>> SQL> select * from z;\n>>\n>> A B\n>> --------- ---------\n>> 1 1\n>> 1 2\n>> 5\n>> 10\n>>\n>> SQL> select a,sum(b) from z group by a;\n>>\n>> A SUM(B)\n>> --------- ---------\n>> 1 3\n>> 15\n>>\n>> SQL>\n>>\n>> I'm going to report this as a bug now that I've verified 2 major\n>> database\n>> vendors perform the task as I would expect them to, and PostgreSQL\n>> does it\n>> very differently. The question is really is NULL=NULL, which I\n>> would say it\n>> should be.\n>\n>\n> I tried it in PostgreSQL 6.5beta1 with the same result:\n>\n> select * from z;\n> a| b\n> -+--\n> 1| 1\n> 1| 2\n> | 5\n> |10\n> (4 rows)\n>\n> select a,sum(b) from z group by a;\n> a|sum\n> -+---\n> 1| 3\n> | 15\n> (2 rows)\n>\n> The Pratical SQL Handbook at page 171 says:\n> Since nulls represent \"the great unknown\", there is no way to know\n> whether one null is equal to any other null. Each unknown value\n> may or may not be different from another.\n> However, if the grouping column contains more than one null,\n> all of them are put into a single group.\n>\n> Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n>\n> Jos�\n>\n>\n>\n>\n> --\n> ______________________________________________________________\n> PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Jose'\n>\n\n Wonderful, that's as I expected. However please try this in 6.5\nBeta1,\nCREATE TABLE z(a int4,b int4, c int4);\nINSERT INTO z VALUES (1,1,1);\nINSERT INTO z VALUES (1,1,2);\nINSERT INTO z(a,c) VALUES (2,1);\nINSERT INTO z(a,c) VALUES (2,2);\n\nSELECT a,b,sum(c) FROM z GROUP BY a,b\n\nGROUPing in PostgreSQL w/NULLs works just fine when there is only 1\ncolumn, however when one throws 2 in, the 2nd one having NULLs it starts\nfailing. 
Your example demonstrates the right answer for 1 group by\ncolumn, try it with 2 and I expect 6.5beta1 will fail as 6.4.2 does.\n\n As to NULL=NULL or NULL!=NULL, evadentally my estimation of why the\nproblem is occuring was wrong. :) But from the SQL handbook we\ndefinately have a bug here.\n\nDavid Secret\nMIS Director\nKearney Development Co., Inc.\n\n",
"msg_date": "Wed, 19 May 1999 09:46:52 -0400",
"msg_from": "secret <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "At 18:28 +0300 on 17/05/1999, Jos� Soares wrote:\n\n\n> The Pratical SQL Handbook at page 171 says:\n> Since nulls represent \"the great unknown\", there is no way to know\n> whether one null is equal to any other null. Each unknown value\n> may or may not be different from another.\n> However, if the grouping column contains more than one null,\n> all of them are put into a single group.\n>\n> Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n\nThis is something I have complained about time and again. It is time\nsomething is changed about it, otherwise Postgres will NEVER be a\nstandard-compliant RDBMS.\n\nThe SQL92 text says:\n\n A null value is an implementation-dependent special value that\n is distinct from all non-null values of the associated data type.\n There is effectively only one null value and that value is a member\n of every SQL data type. There is no <literal> for a null value,\n although the keyword NULL is used in some places to indicate that a\n null value is desired.\n\nThus, by rights, NULL=NULL should be true, because there is only one null\nvalue.\n\nAbout the <group by clause>, the text says:\n\n 1) The result of the <group by clause> is a partitioning of T into\n a set of groups. The set is the minimum number of groups such\n that, for each grouping column of each group of more than one\n row, no two values of that grouping column are distinct.\n\nAnd the treatment of nulls is implied from the definition of distinctness:\n\n h) distinct: Two values are said to be not distinct if either:\n both are the null value, or they compare equal according to\n Subclause 8.2, \"<comparison predicate>\". Otherwise they are\n distinct. Two rows (or partial rows) are distinct if at least\n one of their pairs of respective values is distinct. Otherwise\n they are not distinct. The result of evaluating whether or not\n two values or two rows are distinct is never unknown.\n\nAbout uniqueness, it says:\n\n A unique constraint is satisfied if and only if no two rows in\n a table have the same non-null values in the unique columns. In\n addition, if the unique constraint was defined with PRIMARY KEY,\n then it requires that none of the values in the specified column or\n columns be the null value.\n\nOne should note, however, that when the actual comparison operator \"=\" is\nused, the standard says that if one of the operands is null, the result of\nthe comparison is unknown. One should make a distinction between making\ncomparisons within group by, uniqueness, and other database-logic\noperations, and between making the actual comparison (though in my opinion,\nthis should not be so. Comparing a null value to something should be always\nfalse unless the other something is also null. But that's my opinion and\nnot the standard's).\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 19 May 1999 16:52:54 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "Here the result:\n\nSELECT a,b,sum(c) FROM z GROUP BY a,b;\na|b|sum\n-+-+---\n1|1| 3\n2| | 3\n(2 rows)\n\n\nsecret ha scritto:\n\n> Jos� Soares wrote:\n>\n> > secret ha scritto:\n> >\n> >> \"Jackson, DeJuan\" wrote:\n> >>\n> >> > The behavior is valid, if you define NULL as meaning undefined.\n> >> > In other words when you define something as NULL you're saying, \"I\n> >> don't\n> >> > know what it is. It could be equal or not.\"\n> >> > -DEJ\n> >> >\n> >> > > -----Original Message-----\n> >> > > From: secret [SMTP:[email protected]]\n> >> > > Sent: Friday, May 14, 1999 11:58 AM\n> >> > > To: PG-SQL\n> >> > > Subject: [SQL] Oddities with NULL and GROUP BY\n> >> > >\n> >> > > Maybe there is something I don't know about how GROUP BY\n> >> should\n> >> > > work, but if I have a table like:\n> >> > > a,b,c\n> >> > > 1,1,1\n> >> > > 1,1,2\n> >> > > 1,1,3\n> >> > > 1,2,1\n> >> > > 1,3,1\n> >> > >\n> >> > > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> >> > > 1,1,6\n> >> > > 1,2,1\n> >> > > 1,3,1\n> >> > >\n> >> > > So whenever a or b changes we get a new summed row, well if I\n> >> have rows\n> >> > > where a or b are null, this doesn't happen, infact I seem to get\n> >> all\n> >> > > those rows individually... Like if:\n> >> > > 1,1,1\n> >> > > 1,1,3\n> >> > > 1,NULL,10\n> >> > > 1,NULL,20\n> >> > > 1,2,3\n> >> > >\n> >> > > I get:\n> >> > > 1,1,4\n> >> > > 1,NULL,10\n> >> > > 1,NULL,20\n> >> > > 1,2,3\n> >> > >\n> >> > > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like\n> >> any other\n> >> > > value? Or is there some bit of information I'm missing? I can\n> >> set\n> >> > > everything from NULL to 0 if need be, but I'd rather not...\n> >> > >\n> >> > > David Secret\n> >> > > MIS Director\n> >> > > Kearney Development Co., Inc.\n> >> > >\n> >>\n> >> IBM's DB/2 Disagrees, so does Oracle8!\n> >>\n> >> Here is a cut & paste from Oracle SQL+:\n> >>\n> >> SQL> select * from z;\n> >>\n> >> A B\n> >> --------- ---------\n> >> 1 1\n> >> 1 2\n> >> 5\n> >> 10\n> >>\n> >> SQL> select a,sum(b) from z group by a;\n> >>\n> >> A SUM(B)\n> >> --------- ---------\n> >> 1 3\n> >> 15\n> >>\n> >> SQL>\n> >>\n> >> I'm going to report this as a bug now that I've verified 2 major\n> >> database\n> >> vendors perform the task as I would expect them to, and PostgreSQL\n> >> does it\n> >> very differently. The question is really is NULL=NULL, which I\n> >> would say it\n> >> should be.\n> >\n> >\n> > I tried it in PostgreSQL 6.5beta1 with the same result:\n> >\n> > select * from z;\n> > a| b\n> > -+--\n> > 1| 1\n> > 1| 2\n> > | 5\n> > |10\n> > (4 rows)\n> >\n> > select a,sum(b) from z group by a;\n> > a|sum\n> > -+---\n> > 1| 3\n> > | 15\n> > (2 rows)\n> >\n> > The Pratical SQL Handbook at page 171 says:\n> > Since nulls represent \"the great unknown\", there is no way to know\n> > whether one null is equal to any other null. Each unknown value\n> > may or may not be different from another.\n> > However, if the grouping column contains more than one null,\n> > all of them are put into a single group.\n> >\n> > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> >\n> > Jos�\n> >\n> >\n> >\n> >\n> > --\n> > ______________________________________________________________\n> > PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > Jose'\n> >\n>\n> Wonderful, that's as I expected. 
However please try this in 6.5\n> Beta1,\n> CREATE TABLE z(a int4,b int4, c int4);\n> INSERT INTO z VALUES (1,1,1);\n> INSERT INTO z VALUES (1,1,2);\n> INSERT INTO z(a,c) VALUES (2,1);\n> INSERT INTO z(a,c) VALUES (2,2);\n>\n> SELECT a,b,sum(c) FROM z GROUP BY a,b\n>\n> GROUPing in PostgreSQL w/NULLs works just fine when there is only 1\n> column, however when one throws 2 in, the 2nd one having NULLs it starts\n> failing. Your example demonstrates the right answer for 1 group by\n> column, try it with 2 and I expect 6.5beta1 will fail as 6.4.2 does.\n>\n> As to NULL=NULL or NULL!=NULL, evadentally my estimation of why the\n> problem is occuring was wrong. :) But from the SQL handbook we\n> definately have a bug here.\n>\n> David Secret\n> MIS Director\n> Kearney Development Co., Inc.\n\n> ______________________________________________________________\n\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n\nHere the result:\nSELECT a,b,sum(c) FROM z GROUP BY a,b;\na|b|sum\n-+-+---\n1|1| 3\n2| | 3\n(2 rows)\n \nsecret ha scritto:\nJosé Soares wrote:\n> secret ha scritto:\n>\n>> \"Jackson, DeJuan\" wrote:\n>>\n>> > The behavior is valid, if you define NULL as meaning undefined.\n>> > In other words when you define something as NULL you're saying,\n\"I\n>> don't\n>> > know what it is. It could be equal or not.\"\n>> > -DEJ\n>> >\n>> > > -----Original Message-----\n>> > > From: secret [SMTP:[email protected]]\n>> > > Sent: Friday, May 14, 1999 11:58 AM\n>> > > To: PG-SQL\n>> > > Subject: [SQL] Oddities with NULL\nand GROUP BY\n>> > >\n>> > > Maybe there is something I don't know\nabout how GROUP BY\n>> should\n>> > > work, but if I have a table like:\n>> > > a,b,c\n>> > > 1,1,1\n>> > > 1,1,2\n>> > > 1,1,3\n>> > > 1,2,1\n>> > > 1,3,1\n>> > >\n>> > > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n>> > > 1,1,6\n>> > > 1,2,1\n>> > > 1,3,1\n>> > >\n>> > > So whenever a or b changes we get a new summed row, well if\nI\n>> have rows\n>> > > where a or b are null, this doesn't happen, infact I seem to\nget\n>> all\n>> > > those rows individually... Like if:\n>> > > 1,1,1\n>> > > 1,1,3\n>> > > 1,NULL,10\n>> > > 1,NULL,20\n>> > > 1,2,3\n>> > >\n>> > > I get:\n>> > > 1,1,4\n>> > > 1,NULL,10\n>> > > 1,NULL,20\n>> > > 1,2,3\n>> > >\n>> > > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated\nlike\n>> any other\n>> > > value? Or is there some bit of information I'm missing? \nI can\n>> set\n>> > > everything from NULL to 0 if need be, but I'd rather not...\n>> > >\n>> > > David Secret\n>> > > MIS Director\n>> > > Kearney Development Co., Inc.\n>> > >\n>>\n>> IBM's DB/2 Disagrees, so does Oracle8!\n>>\n>> Here is a cut & paste from Oracle SQL+:\n>>\n>> SQL> select * from z;\n>>\n>> A \nB\n>> --------- ---------\n>> 1 \n1\n>> 1 \n2\n>> \n5\n>> \n10\n>>\n>> SQL> select a,sum(b) from z group by a;\n>>\n>> A \nSUM(B)\n>> --------- ---------\n>> 1 \n3\n>> \n15\n>>\n>> SQL>\n>>\n>> I'm going to report this as a bug now that\nI've verified 2 major\n>> database\n>> vendors perform the task as I would expect them to, and PostgreSQL\n>> does it\n>> very differently. 
The question is really is NULL=NULL, which\nI\n>> would say it\n>> should be.\n>\n>\n> I tried it in PostgreSQL 6.5beta1 with the same result:\n>\n> select * from z;\n> a| b\n> -+--\n> 1| 1\n> 1| 2\n> | 5\n> |10\n> (4 rows)\n>\n> select a,sum(b) from z group by a;\n> a|sum\n> -+---\n> 1| 3\n> | 15\n> (2 rows)\n>\n> The Pratical SQL Handbook at page 171 says:\n> Since nulls represent \"the great unknown\", there is no way to know\n> whether one null is equal to any other null. Each unknown value\n> may or may not be different from another.\n> However, if the grouping column contains more than one null,\n> all of them are put into a single group.\n>\n> Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n>\n> José\n>\n>\n>\n>\n> --\n> ______________________________________________________________\n> PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Jose'\n>\n Wonderful, that's as I expected. However please\ntry this in 6.5\nBeta1,\nCREATE TABLE z(a int4,b int4, c int4);\nINSERT INTO z VALUES (1,1,1);\nINSERT INTO z VALUES (1,1,2);\nINSERT INTO z(a,c) VALUES (2,1);\nINSERT INTO z(a,c) VALUES (2,2);\nSELECT a,b,sum(c) FROM z GROUP BY a,b\nGROUPing in PostgreSQL w/NULLs works just fine when there is only 1\ncolumn, however when one throws 2 in, the 2nd one having NULLs it starts\nfailing. Your example demonstrates the right answer for 1 group\nby\ncolumn, try it with 2 and I expect 6.5beta1 will fail as 6.4.2 does.\n As to NULL=NULL or NULL!=NULL, evadentally my estimation\nof why the\nproblem is occuring was wrong. :) But from the SQL handbook we\ndefinately have a bug here.\nDavid Secret\nMIS Director\nKearney Development Co., Inc.\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'",
"msg_date": "Wed, 19 May 1999 16:38:09 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "Herouth Maoz <[email protected]> writes:\n> Thus, by rights, NULL=NULL should be true, because there is only one null\n> value.\n\nYou are jumping to a conclusion not supported by the text you have\nquoted.\n\nIt does appear that GROUP BY and DISTINCT should treat all nulls as\nfalling into the same class, because of\n\n> h) distinct: Two values are said to be not distinct if either:\n> both are the null value, or they compare equal according to\n> Subclause 8.2, \"<comparison predicate>\".\n\nKindly note, however, that the standards authors felt it necessary to\ndescribe those two cases as separate cases. If nulls compare as equal,\nthere would be no need to write more than \"Two values are not distinct\nif they compare equal\".\n\n> One should note, however, that when the actual comparison operator \"=\" is\n> used, the standard says that if one of the operands is null, the result of\n> the comparison is unknown.\n\nPrecisely. A fortiori, if both operands are null, the result of the\ncomparison is still unknown.\n\nWe do seem to have a bug in GROUP BY/DISTINCT if nulls are producing\nmore than one output tuple in those operations. But that has nothing\nto do with what the comparison operator produces.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 May 1999 10:44:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Oddities with NULL and GROUP BY "
},
{
"msg_contents": "> > The Pratical SQL Handbook at page 171 says:\n> > Since nulls represent \"the great unknown\", there is no way to know\n> > whether one null is equal to any other null. Each unknown value\n> > may or may not be different from another.\n\nAlthough I've noticed some questionable statements quoted from this\nbook, this looks good...\n\n> > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> This is something I have complained about time and again. It is time\n> something is changed about it, otherwise Postgres will NEVER be a\n> standard-compliant RDBMS.\n\nPostgres conforms to SQL92 in this regard. Date and Darwen, \"A Guide\nto the SQL Standard\", 3rd ed., are explicit about this near the top of\npage 249:\n\nDuplicates are relevant to the ... GROUP BY ... operations ...\n... GROUP BY groups rows together on the basis of duplicate values in\nthe set of grouping columns (and those sets of grouping column values\ncan be regarded as \"rows\" for present purposes). The point is,\nhowever, the definition of duplicate rows requires some refinement in\nthe presence of nulls. Let \"left\" and \"right\" be as defined\n(previously). Then \"left\" and \"right\" are defined to be \"duplicates\"\nof one another if and only if, for all \"i\" in the range 1 to \"n\",\neither \"left_i\" = \"right_i\" is TRUE, or \"left_i\" and \"right_i\" are\nboth null.\n\nThere is a single exception to Postgres' SQL92 conformance wrt NULLs\nafaik, involving DISTINCT column constraints which I discuss below.\n\n> > However, if the grouping column contains more than one null,\n> > all of them are put into a single group.\n> > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> The SQL92 text says:\n> A null value is an implementation-dependent special value that\n> is distinct from all non-null values of the associated data type.\n> There is effectively only one null value and that value is a member\n> of every SQL data type. There is no <literal> for a null value,\n> although the keyword NULL is used in some places to indicate that a\n> null value is desired.\n> Thus, by rights, NULL=NULL should be true, because there is only one null\n> value.\n\nNo! An explicit \"unknown\" = \"unknown\" in a constraint clause should\nalways evaluate to FALSE (we'll get to GROUP BY later). SQL92 and all\nof my reference books are clear about this. Date and Darwen have a\ngood discussion of the shortcomings of NULL in SQL92, pointing out\nthat with NULL handling one would really like a distinct UNKNOWN added\nto the possible boolean values TRUE and FALSE so that SQL would have\ntrue three-value logic.\n\n> About the <group by clause>, the text says:\n> 1) The result of the <group by clause> is a partitioning of T into\n> a set of groups. The set is the minimum number of groups such\n> that, for each grouping column of each group of more than one\n> row, no two values of that grouping column are distinct.\n\nInteresting. Note that SQL92 asks that any column with the DISTINCT\nconstraint contain *only one* NULL value in the entire column. Date\nand Darwen point out that this is inconsistant with the fundamental\nnotion of \"unknown\" and renders DISTINCT constraints without NOT NULL\nto be effectively useless. They recommend against having any DISTINCT\ncolumn without having an additional NOT NULL constraint. 
We've had\nthis discussion wrt Postgres, and concluded that we would diverge from\nthe standard by allowing multiple NULL fields in DISTINCT columns, to\nmake DISTINCT a useful feature with NULLs. It probably didn't hurt\nthat Postgres already behaved this way :)\n\nafaik this last point is the *only* place where Postgres intentionally\ndiverges from SQL92, and it was done (or rather retained from existing\nbehavior) to make a useless feature useful.\n\n> One should note, however, that when the actual comparison operator \"=\" is\n> used, the standard says that if one of the operands is null, the result of\n> the comparison is unknown. One should make a distinction between making\n> comparisons within group by, uniqueness, and other database-logic\n> operations, and between making the actual comparison (though in my opinion,\n> this should not be so. Comparing a null value to something should be always\n> false unless the other something is also null. But that's my opinion and\n> not the standard's).\n\nOne can't take a portion of SQL92 statements wrt NULLs and apply it to\nall uses of NULL, because SQL92 is not internally consistant in this\nregard.\n\nIn most GROUP BY situations, a corresponding WHERE col IS NOT NULL is\nprobably a good idea.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 May 1999 15:16:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Oddities with NULL and GROUP BY"
},
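A quick psql illustration of the distinction Thomas describes -- a sketch using a made-up table, not taken from the thread: GROUP BY treats two nulls as duplicates, while an explicit '=' on a null yields unknown rather than true.

    CREATE TABLE grp_demo (a int4, b int4);
    INSERT INTO grp_demo VALUES (1, 1);
    INSERT INTO grp_demo (b) VALUES (5);    -- a is null
    INSERT INTO grp_demo (b) VALUES (10);   -- a is null again

    SELECT a, sum(b) FROM grp_demo GROUP BY a;
    -- per SQL92 the two null-a rows form a single group:
    -- rows (1, 1) and (null, 15)

    SELECT a, b FROM grp_demo WHERE a = a;
    -- only the a = 1 row survives: null = null is unknown, not true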
{
"msg_contents": "Thomas Lockhart wrote:\n\n> > > The Pratical SQL Handbook at page 171 says:\n> > > Since nulls represent \"the great unknown\", there is no way to know\n> > > whether one null is equal to any other null. Each unknown value\n> > > may or may not be different from another.\n>\n> Although I've noticed some questionable statements quoted from this\n> book, this looks good...\n>\n> > > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> > This is something I have complained about time and again. It is time\n> > something is changed about it, otherwise Postgres will NEVER be a\n> > standard-compliant RDBMS.\n>\n> Postgres conforms to SQL92 in this regard. Date and Darwen, \"A Guide\n> to the SQL Standard\", 3rd ed., are explicit about this near the top of\n> page 249:\n>\n> Duplicates are relevant to the ... GROUP BY ... operations ...\n> ... GROUP BY groups rows together on the basis of duplicate values in\n> the set of grouping columns (and those sets of grouping column values\n> can be regarded as \"rows\" for present purposes). The point is,\n> however, the definition of duplicate rows requires some refinement in\n> the presence of nulls. Let \"left\" and \"right\" be as defined\n> (previously). Then \"left\" and \"right\" are defined to be \"duplicates\"\n> of one another if and only if, for all \"i\" in the range 1 to \"n\",\n> either \"left_i\" = \"right_i\" is TRUE, or \"left_i\" and \"right_i\" are\n> both null.\n>\n> There is a single exception to Postgres' SQL92 conformance wrt NULLs\n> afaik, involving DISTINCT column constraints which I discuss below.\n>\n> > > However, if the grouping column contains more than one null,\n> > > all of them are put into a single group.\n> > > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> > The SQL92 text says:\n> > A null value is an implementation-dependent special value that\n> > is distinct from all non-null values of the associated data type.\n> > There is effectively only one null value and that value is a member\n> > of every SQL data type. There is no <literal> for a null value,\n> > although the keyword NULL is used in some places to indicate that a\n> > null value is desired.\n> > Thus, by rights, NULL=NULL should be true, because there is only one null\n> > value.\n>\n> No! An explicit \"unknown\" = \"unknown\" in a constraint clause should\n> always evaluate to FALSE (we'll get to GROUP BY later). SQL92 and all\n> of my reference books are clear about this. Date and Darwen have a\n> good discussion of the shortcomings of NULL in SQL92, pointing out\n> that with NULL handling one would really like a distinct UNKNOWN added\n> to the possible boolean values TRUE and FALSE so that SQL would have\n> true three-value logic.\n>\n> > About the <group by clause>, the text says:\n> > 1) The result of the <group by clause> is a partitioning of T into\n> > a set of groups. The set is the minimum number of groups such\n> > that, for each grouping column of each group of more than one\n> > row, no two values of that grouping column are distinct.\n>\n> Interesting. Note that SQL92 asks that any column with the DISTINCT\n> constraint contain *only one* NULL value in the entire column. Date\n> and Darwen point out that this is inconsistant with the fundamental\n> notion of \"unknown\" and renders DISTINCT constraints without NOT NULL\n> to be effectively useless. They recommend against having any DISTINCT\n> column without having an additional NOT NULL constraint. 
We've had\n> this discussion wrt Postgres, and concluded that we would diverge from\n> the standard by allowing multiple NULL fields in DISTINCT columns, to\n> make DISTINCT a useful feature with NULLs. It probably didn't hurt\n> that Postgres already behaved this way :)\n>\n> afaik this last point is the *only* place where Postgres intentionally\n> diverges from SQL92, and it was done (or rather retained from existing\n> behavior) to make a useless feature useful.\n>\n> > One should note, however, that when the actual comparison operator \"=\" is\n> > used, the standard says that if one of the operands is null, the result of\n> > the comparison is unknown. One should make a distinction between making\n> > comparisons within group by, uniqueness, and other database-logic\n> > operations, and between making the actual comparison (though in my opinion,\n> > this should not be so. Comparing a null value to something should be always\n> > false unless the other something is also null. But that's my opinion and\n> > not the standard's).\n>\n> One can't take a portion of SQL92 statements wrt NULLs and apply it to\n> all uses of NULL, because SQL92 is not internally consistant in this\n> regard.\n>\n> In most GROUP BY situations, a corresponding WHERE col IS NOT NULL is\n> probably a good idea.\n>\n> Regards.\n>\n> - Thomas\n>\n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n\n Sigh. PostgreSQL seems pretty inconsitant in this... GROUP BY with 1 column\nproduces NULLs grouped, with 2 colums it usually seems not to(although I somehow\ncame up with an example where it did, grr... but lets ignore this since it's\nsupposed to \"not work\" that way.)... Oracle8, DB/2, and Sybase all group NULLs\ntogether, for compatibility sake wouldn't it be reasonable for PostgreSQL to do\nthe same? Else porting applications could fail miserably when one hits this\ninconsistency.\n\n--David\n\n",
"msg_date": "Wed, 19 May 1999 11:28:42 -0400",
"msg_from": "secret <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "At 18:16 +0300 on 19/05/1999, Thomas Lockhart wrote:\n\n\n> Interesting. Note that SQL92 asks that any column with the DISTINCT\n> constraint contain *only one* NULL value in the entire column. Date\n> and Darwen point out that this is inconsistant with the fundamental\n> notion of \"unknown\" and renders DISTINCT constraints without NOT NULL\n> to be effectively useless. They recommend against having any DISTINCT\n> column without having an additional NOT NULL constraint. We've had\n> this discussion wrt Postgres, and concluded that we would diverge from\n> the standard by allowing multiple NULL fields in DISTINCT columns, to\n> make DISTINCT a useful feature with NULLs. It probably didn't hurt\n> that Postgres already behaved this way :)\n>\n> afaik this last point is the *only* place where Postgres intentionally\n> diverges from SQL92, and it was done (or rather retained from existing\n> behavior) to make a useless feature useful.\n\nYou are probably referring to UNIQUE, not DISTINCT, which is not a\nconstraint but a query qualifier.\n\nAs for uniqueness, as I already quoted, it says:\n\n A unique constraint is satisfied if and only if no two rows in\n a table have the same non-null values in the unique columns. In\n addition, if the unique constraint was defined with PRIMARY KEY,\n then it requires that none of the values in the specified column or\n columns be the null value.\n\nWhich means that what Postgres does is quite the correct thing. You see?\n\"No two rows in a table have the same non-null values in the unique\ncolumns\". They *can* have the same *null* values!. The constraints only\ntalks about the non-null ones!\n\nSo I think Date and Darwen misinterpreted the rule, and you got this part\nright in PostgreSQL. However, there *is* a bug in the GROUP BY behaviour,\nat least over one column, and it should be checked if it doesn't work\naccording to the old convention of comparing nulls internally as they are\ncompared with the \"=\" operator.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 19 May 1999 18:31:19 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Oddities with NULL and GROUP BY"
},
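To make Herouth's reading of the uniqueness rule concrete, a small hypothetical session (the table is invented for illustration): duplicate non-null values are rejected, while any number of nulls is allowed, because the constraint only compares non-null values.

    CREATE TABLE uniq_demo (i int4 unique);
    INSERT INTO uniq_demo VALUES (1);      -- accepted
    INSERT INTO uniq_demo VALUES (1);      -- rejected: duplicate non-null value
    INSERT INTO uniq_demo VALUES (NULL);   -- accepted
    INSERT INTO uniq_demo VALUES (NULL);   -- also accepted: nulls are not compared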
{
"msg_contents": "At 18:28 +0300 on 19/05/1999, secret wrote:\n\n\n> Sigh. PostgreSQL seems pretty inconsitant in this... GROUP BY with 1\n>column\n> produces NULLs grouped, with 2 colums it usually seems not to(although I\n>somehow\n> came up with an example where it did, grr... but lets ignore this since it's\n> supposed to \"not work\" that way.)... Oracle8, DB/2, and Sybase all group\n>NULLs\n> together, for compatibility sake wouldn't it be reasonable for PostgreSQL\n>to do\n> the same? Else porting applications could fail miserably when one hits this\n> inconsistency.\n\nPlease, please, the standard is clear about each of these things\nseparately. It absolutely says that nulls should be grouped together, and\nit absolutely says that the comparison operator should not. It's true that\nthese things are not consistent, but for each operation, the standard is\nquite clear on how it should be done.\n\nIn my opinion, there should be null comparison for internal operations, and\nnull comparison for the comparison operator. For this purpose, what\nPostgres does now - return a NULL boolean if one of its operands is null -\nis consistent with the standard. For GROUP BY and ORDER BY, they should be\ncompared equal, and for UNIQUE, they should not be compared.\n\nUNIQUE has explicit mention of nulls in the standard.\nORDER BY has explicit mention of nulls in the standard.\nGROUP BY has implicit mention of nulls, by using the term \"distinct\" which\nis defined earlier and includes and explicit mention of nulls.\n\"=\" has explicit mention of nulls in the standard.\n\nAnd although they are not consistent (some are equal, some are not equal,\nand some are unknown), they are covered in no uncertain terms.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 19 May 1999 18:44:32 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Oddities with NULL and GROUP BY"
},
{
"msg_contents": "\n\nLooks like this is fixed in 6.5 too.\n\t\n\ta|b|sum\n\t-+-+---\n\t1|1| 3\n\t2| | 3\n\t(2 rows)\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Jos_ Soares wrote:\n> \n> > secret ha scritto:\n> >\n> >> \"Jackson, DeJuan\" wrote:\n> >>\n> >> > The behavior is valid, if you define NULL as meaning undefined.\n> >> > In other words when you define something as NULL you're saying, \"I\n> >> don't\n> >> > know what it is. It could be equal or not.\"\n> >> > -DEJ\n> >> >\n> >> > > -----Original Message-----\n> >> > > From: secret [SMTP:[email protected]]\n> >> > > Sent: Friday, May 14, 1999 11:58 AM\n> >> > > To: PG-SQL\n> >> > > Subject: [SQL] Oddities with NULL and GROUP BY\n> >> > >\n> >> > > Maybe there is something I don't know about how GROUP BY\n> >> should\n> >> > > work, but if I have a table like:\n> >> > > a,b,c\n> >> > > 1,1,1\n> >> > > 1,1,2\n> >> > > 1,1,3\n> >> > > 1,2,1\n> >> > > 1,3,1\n> >> > >\n> >> > > And I say SELECT a,b,sum(c) FROm .. GROUP BY a,b I get\n> >> > > 1,1,6\n> >> > > 1,2,1\n> >> > > 1,3,1\n> >> > >\n> >> > > So whenever a or b changes we get a new summed row, well if I\n> >> have rows\n> >> > > where a or b are null, this doesn't happen, infact I seem to get\n> >> all\n> >> > > those rows individually... Like if:\n> >> > > 1,1,1\n> >> > > 1,1,3\n> >> > > 1,NULL,10\n> >> > > 1,NULL,20\n> >> > > 1,2,3\n> >> > >\n> >> > > I get:\n> >> > > 1,1,4\n> >> > > 1,NULL,10\n> >> > > 1,NULL,20\n> >> > > 1,2,3\n> >> > >\n> >> > > Shouldn't I get 1,NULL,30? Ie shouldn't NULL be treated like\n> >> any other\n> >> > > value? Or is there some bit of information I'm missing? I can\n> >> set\n> >> > > everything from NULL to 0 if need be, but I'd rather not...\n> >> > >\n> >> > > David Secret\n> >> > > MIS Director\n> >> > > Kearney Development Co., Inc.\n> >> > >\n> >>\n> >> IBM's DB/2 Disagrees, so does Oracle8!\n> >>\n> >> Here is a cut & paste from Oracle SQL+:\n> >>\n> >> SQL> select * from z;\n> >>\n> >> A B\n> >> --------- ---------\n> >> 1 1\n> >> 1 2\n> >> 5\n> >> 10\n> >>\n> >> SQL> select a,sum(b) from z group by a;\n> >>\n> >> A SUM(B)\n> >> --------- ---------\n> >> 1 3\n> >> 15\n> >>\n> >> SQL>\n> >>\n> >> I'm going to report this as a bug now that I've verified 2 major\n> >> database\n> >> vendors perform the task as I would expect them to, and PostgreSQL\n> >> does it\n> >> very differently. The question is really is NULL=NULL, which I\n> >> would say it\n> >> should be.\n> >\n> >\n> > I tried it in PostgreSQL 6.5beta1 with the same result:\n> >\n> > select * from z;\n> > a| b\n> > -+--\n> > 1| 1\n> > 1| 2\n> > | 5\n> > |10\n> > (4 rows)\n> >\n> > select a,sum(b) from z group by a;\n> > a|sum\n> > -+---\n> > 1| 3\n> > | 15\n> > (2 rows)\n> >\n> > The Pratical SQL Handbook at page 171 says:\n> > Since nulls represent \"the great unknown\", there is no way to know\n> > whether one null is equal to any other null. Each unknown value\n> > may or may not be different from another.\n> > However, if the grouping column contains more than one null,\n> > all of them are put into a single group.\n> >\n> > Thus: NULL!=NULL but on GROUP BY it is considered as NULL=NULL.\n> >\n> > Jos_\n> >\n> >\n> >\n> >\n> > --\n> > ______________________________________________________________\n> > PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > Jose'\n> >\n> \n> Wonderful, that's as I expected. 
However please try this in 6.5\n> Beta1,\n> CREATE TABLE z(a int4,b int4, c int4);\n> INSERT INTO z VALUES (1,1,1);\n> INSERT INTO z VALUES (1,1,2);\n> INSERT INTO z(a,c) VALUES (2,1);\n> INSERT INTO z(a,c) VALUES (2,2);\n> \n> SELECT a,b,sum(c) FROM z GROUP BY a,b\n> \n> GROUPing in PostgreSQL w/NULLs works just fine when there is only 1\n> column, however when one throws 2 in, the 2nd one having NULLs it starts\n> failing. Your example demonstrates the right answer for 1 group by\n> column, try it with 2 and I expect 6.5beta1 will fail as 6.4.2 does.\n> \n> As to NULL=NULL or NULL!=NULL, evadentally my estimation of why the\n> problem is occuring was wrong. :) But from the SQL handbook we\n> definately have a bug here.\n> \n> David Secret\n> MIS Director\n> Kearney Development Co., Inc.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 14:23:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Oddities with NULL and GROUP BY"
}
] |
[
{
"msg_contents": "\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Fri, 14 May 1999 14:50:58 -0400\nFrom: Jack Howarth <[email protected]>\nTo: [email protected]\nSubject: postgresql bug report\n\nMarc,\n In porting the RedHat 6.0 srpm set for a linuxppc release we\nbelieve a bug has been identified in\nthe postgresql source for 6.5-0.beta1. Our development tools are as\nfollows...\n\nglibc 2.1.1 pre 2\nlinux 2.2.6\negcs 1.1.2\nthe latest binutils snapshot\n\nThe bug that we see is that when egcs compiles postgresql at -O1 or\nhigher (-O0 is fine),\npostgresql creates incorrectly formed databases such that when the user\ndoes a destroydb\nthe database can not be destroyed. Franz Sirl has identified the problem\nas follows...\n\n it seems that this problem is a type casting/promotion bug in the\nsource. The\n routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\nint2eq() in\n backend/utils/adt/int.c via a function pointer\n*fmgr_faddr(&key[0].sk_func). As\n the type information for int2eq is lost via the function pointer,\nthe compiler\n passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\nint16's.\n This particular bug goes away, if I for example change int2eq to:\n\n bool\n int2eq(int32 arg1, int32 arg2)\n {\n return (int16)arg1 == (int16)arg2;\n }\n\n This moves away the type casting/promotion \"work\" from caller to the\ncallee and\n is probably the right thing to do for functions used via function\npointers.\n\n...because of the large number of changes required to do this, Franz\nthought we should\npass this on to the postgresql maintainers for correction. Please feel\nfree to contact\nFranz Sirl ([email protected]) if you have any questions\non this bug\nreport.\n\n--\n------------------------------------------------------------------------------\nJack W. Howarth, Ph.D. 231 Bethesda Avenue\nNMR Facility Director Cincinnati, Ohio 45267-0524\nDept. of Molecular Genetics phone: (513) 558-4420\nUniv. of Cincinnati College of Medicine fax: (513) 558-8474\n\n\n\n\n",
"msg_date": "Fri, 14 May 1999 16:49:44 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql bug report (fwd)"
},
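A self-contained sketch of the hazard Franz describes -- the typedefs and names below are illustrative stand-ins, not the actual backend declarations. A function taking int16 arguments is called through an unprototyped function pointer, so the caller passes full-width ints; per ANSI C that is undefined behavior, and it only bites on ABIs and optimization levels where the two argument layouts differ.

    typedef char  bool8;    /* stand-ins for the backend typedefs */
    typedef short int16;
    typedef int   int32;

    static bool8
    int2eq(int16 arg1, int16 arg2)
    {
        return arg1 == arg2;
    }

    typedef bool8 (*func_ptr) ();   /* unprototyped, as fmgr uses them */

    int
    main(void)
    {
        func_ptr f = (func_ptr) int2eq;
        int32 a = 1, b = 1;

        /* two ints are passed where two int16s are expected */
        return (*f) (a, b) ? 0 : 1;
    }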
{
"msg_contents": "\nTo compile postgres using gcc 2.7.2.1 I had to modify 2 files\n\n src/interfaces/libpq++/pgconnection.cc\n src/interfaces/libpq++/pgenv.h\n\nParticularly, \n#include <iostream> to #include <iostream.h>\nin src/interfaces/libpq++/pgenv.h\nand\n#include <strstream> to #include <strstream.h>\n\nThere are no problem with egcs 1.12 release\nCould somebody made changes in cvs sources. \n\n\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 15 May 1999 00:37:26 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "6.5 cvs: problem with includes in src/interfaces/libpq++/"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> it seems that this problem is a type casting/promotion bug in the\n> source. The\n> routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\n> int2eq() in\n> backend/utils/adt/int.c via a function pointer\n> *fmgr_faddr(&key[0].sk_func). As\n> the type information for int2eq is lost via the function pointer,\n> the compiler\n> passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\n> int16's.\n> This particular bug goes away, if I for example change int2eq to:\n\n> bool\n> int2eq(int32 arg1, int32 arg2)\n> {\n> return (int16)arg1 == (int16)arg2;\n> }\n\nYow. I can't believe that we haven't seen this failure before on a\nvariety of platforms. Calling an ANSI-style function that has char or\nshort args is undefined behavior if you call it without benefit of a\nprototype, because the parameter layout is allowed to be different.\nApparently, fewer compilers exploit that freedom than I would've thought.\n\nReally, *all* of the builtin-function routines ought to take arguments\nof type Datum and then do the appropriate Get() macro to extract what\nthey want from 'em. That's a depressingly large amount of work, but\nat the very least the functions that take bool and int16 have to be\nchanged...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 May 1999 17:46:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql bug report (fwd) "
},
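For illustration, a minimal sketch of the Datum-style convention Tom suggests (the typedef and macros are simplified stand-ins, not the real postgres.h definitions): every builtin takes full-width Datum arguments and narrows them itself, so a call through a typeless function pointer stays well-defined.

    typedef unsigned long Datum;              /* simplified stand-in */

    #define DatumGetInt16(d)  ((short) (d))
    #define BoolGetDatum(b)   ((Datum) ((b) ? 1 : 0))

    Datum
    int2eq(Datum arg1, Datum arg2)
    {
        /* narrow inside the callee, where the declared width is known */
        return BoolGetDatum(DatumGetInt16(arg1) == DatumGetInt16(arg2));
    }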
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> To compile postgres using gcc 2.7.2.1 I had to modify 2 files\n> src/interfaces/libpq++/pgconnection.cc\n> src/interfaces/libpq++/pgenv.h\n> Particularly, \n> #include <iostream> to #include <iostream.h>\n> #include <strstream> to #include <strstream.h>\n\nI am seeing the same thing here with gcc 2.7.2.2. We need to adopt\na considered policy about whether libpq++ will still support gcc 2.7.*,\nnot just break it without thinking.\n\nI'd vote for still supporting 2.7.*, but I know that the C++ library\nshipped with this gcc release is not real up-to-date. It may not be\npractical to support both latest-C++-spec compilers and the older\ngeneration; I'm not sure what the issues are.\n\nIf the conclusion is \"no\", then the configure script ought to be\nchanged to not try to build libpq++ unless up-to-date libraries\nare available.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 May 1999 17:59:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in\n\tsrc/interfaces/libpq++/"
},
{
"msg_contents": "> Marc,\n> In porting the RedHat 6.0 srpm set for a linuxppc release we\n> believe a bug has been identified in\n> the postgresql source for 6.5-0.beta1. Our development tools are as\n> follows...\n> \n> glibc 2.1.1 pre 2\n> linux 2.2.6\n> egcs 1.1.2\n> the latest binutils snapshot\n> \n> The bug that we see is that when egcs compiles postgresql at -O1 or\n> higher (-O0 is fine),\n> postgresql creates incorrectly formed databases such that when the user\n> does a destroydb\n> the database can not be destroyed. Franz Sirl has identified the problem\n> as follows...\n[snip]\n\nI've been using PosgreSQL and LinuxPPC for a longtime, and never seen\nthese kind of problems (I have a serious problem with 2.1.xxx kernels\nwhenever I try to run PostgreSQL, but this is a different story,\nanyway).\n\nI have a standard installation using the R4.2 CD from LinuxPPC org.\n\nkernel 2.1.24\nglibc-0.961212-1h\ngcc version egcs-2.90.25 980302 (egcs-1.0.2 prerelease)\nbinutils-2.9.1-1a\n\nHowever your explnation sounds reasonable, I will look into to see why\nmy system seems to have no problem.\n---\nTatsuo Ishii\n",
"msg_date": "Sat, 15 May 1999 09:54:18 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql bug report (fwd) "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > it seems that this problem is a type casting/promotion bug in the\n> > source. The\n> > routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\n> > int2eq() in\n> > backend/utils/adt/int.c via a function pointer\n> > *fmgr_faddr(&key[0].sk_func). As\n> > the type information for int2eq is lost via the function pointer,\n> > the compiler\n> > passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\n> > int16's.\n> > This particular bug goes away, if I for example change int2eq to:\n> \n> > bool\n> > int2eq(int32 arg1, int32 arg2)\n> > {\n> > return (int16)arg1 == (int16)arg2;\n> > }\n> \n> Yow. I can't believe that we haven't seen this failure before on a\n> variety of platforms. Calling an ANSI-style function that has char or\n> short args is undefined behavior if you call it without benefit of a\n> prototype, because the parameter layout is allowed to be different.\n> Apparently, fewer compilers exploit that freedom than I would've thought.\n> \n> Really, *all* of the builtin-function routines ought to take arguments\n> of type Datum and then do the appropriate Get() macro to extract what\n> they want from 'em. That's a depressingly large amount of work, but\n> at the very least the functions that take bool and int16 have to be\n> changed...\n\nI concur in your Yow. Lots of changes, and I am surprised we have not\nbeen bitten by this before. Added to TODO:\n\n\tFix function pointer calls to take Datum args for char and int2 args\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 05:10:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql bug report (fwd)"
},
{
"msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > To compile postgres using gcc 2.7.2.1 I had to modify 2 files\n> > src/interfaces/libpq++/pgconnection.cc\n> > src/interfaces/libpq++/pgenv.h\n> > Particularly, \n> > #include <iostream> to #include <iostream.h>\n> > #include <strstream> to #include <strstream.h>\n> \n> I am seeing the same thing here with gcc 2.7.2.2. We need to adopt\n> a considered policy about whether libpq++ will still support gcc 2.7.*,\n> not just break it without thinking.\n> \n> I'd vote for still supporting 2.7.*, but I know that the C++ library\n> shipped with this gcc release is not real up-to-date. It may not be\n> practical to support both latest-C++-spec compilers and the older\n> generation; I'm not sure what the issues are.\n> \n> If the conclusion is \"no\", then the configure script ought to be\n> changed to not try to build libpq++ unless up-to-date libraries\n> are available.\n\nThe addition/removal of '.h' has happened before. Some need it, some\ncan't handle it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 05:12:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in\n\tsrc/interfaces/libpq++/"
},
{
"msg_contents": "We are aware of this bug. We have turned down optimization on PPC and\nAlpha platforms until it is fixed, probably in 6.6.\n\n\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> ---------- Forwarded message ----------\n> Date: Fri, 14 May 1999 14:50:58 -0400\n> From: Jack Howarth <[email protected]>\n> To: [email protected]\n> Subject: postgresql bug report\n> \n> Marc,\n> In porting the RedHat 6.0 srpm set for a linuxppc release we\n> believe a bug has been identified in\n> the postgresql source for 6.5-0.beta1. Our development tools are as\n> follows...\n> \n> glibc 2.1.1 pre 2\n> linux 2.2.6\n> egcs 1.1.2\n> the latest binutils snapshot\n> \n> The bug that we see is that when egcs compiles postgresql at -O1 or\n> higher (-O0 is fine),\n> postgresql creates incorrectly formed databases such that when the user\n> does a destroydb\n> the database can not be destroyed. Franz Sirl has identified the problem\n> as follows...\n> \n> it seems that this problem is a type casting/promotion bug in the\n> source. The\n> routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\n> int2eq() in\n> backend/utils/adt/int.c via a function pointer\n> *fmgr_faddr(&key[0].sk_func). As\n> the type information for int2eq is lost via the function pointer,\n> the compiler\n> passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\n> int16's.\n> This particular bug goes away, if I for example change int2eq to:\n> \n> bool\n> int2eq(int32 arg1, int32 arg2)\n> {\n> return (int16)arg1 == (int16)arg2;\n> }\n> \n> This moves away the type casting/promotion \"work\" from caller to the\n> callee and\n> is probably the right thing to do for functions used via function\n> pointers.\n> \n> ...because of the large number of changes required to do this, Franz\n> thought we should\n> pass this on to the postgresql maintainers for correction. Please feel\n> free to contact\n> Franz Sirl ([email protected]) if you have any questions\n> on this bug\n> report.\n> \n> --\n> ------------------------------------------------------------------------------\n> Jack W. Howarth, Ph.D. 231 Bethesda Avenue\n> NMR Facility Director Cincinnati, Ohio 45267-0524\n> Dept. of Molecular Genetics phone: (513) 558-4420\n> Univ. of Cincinnati College of Medicine fax: (513) 558-8474\n> \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:47:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgresql bug report (fwd)"
}
] |
[
{
"msg_contents": "Hi,\n\nI have to work with postgres under freeBSD 3.1-RELEASE and as I'm\ncompletely dummy in FreeBSD I'm wondering if there are problems\nwith running postgres (6.5).\n\nI installed 6.5 cvs and after playing with kernel (config. shared memory)\nI ran regression tests.There were much more tests failures than in\ncase of my lovely :-) Linux. \nIs this a known problem with latest 6.5 cvs or it's my fault ?\nI used gcc 2.7.2.1. \n\nfloat8 .. failed\ngeometry .. failed\ncreate_function_2 .. failed\ntriggers .. failed\nsanity_check .. failed\nmisc .. failed\nalter_table .. failed\nrules .. failed\nplpgsql .. failed\n\n\nI attached below some diffs.\n\n\tRegards,\n\n\t\tOleg\n\ndiff results/float8.out expected/float8.out\n\n190,191d189\n< ERROR: floating point exception! The last floating point operation either exc\needed legal ranges or was a divide by zero\n< QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n192a191,192\n> QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n198,206c198\n< bad| ?column?\n< ---+--------------------\n< | 1\n< |7.39912306090513e-16\n< | 0\n< | 0\n< | 1\n< (5 rows)\n< \n---\n> ERROR: exp() result is out of range\n223a216\n> ERROR: Input '10e-400' is out of range for float8\n224a218\n> ERROR: Input '-10e-400' is out of range for float8\n\ndiff results/create_function_2.out expected/create_function_2.out\n41,44d40\n< pqReadData() -- backend closed the channel unexpectedly.\n< This probably means the backend terminated abnormally\n< before or while processing the request.\n< We have lost the connection to the backend, so further processing is impossibl\ne. Terminating.\n\n\ndiff results/triggers.out expected/triggers.out\n39,42c39,285\n< pqReadData() -- backend closed the channel unexpectedly.\n< This probably means the backend terminated abnormally\n< before or while processing the request.\n< We have lost the connection to the backend, so further processing is impossibl\ne. Terminating.\n---\n> QUERY: insert into fkeys2 values (30, '3', 2);\n..... cutted .....\n\ndiff results/sanity_check.out expected/sanity_check.out\n13,14d12\n< fkeys |t \n< fkeys2 |t \n29d26\n< pkeys |t \n34c31\n< (26 rows)\n---\n> (23 rows)\n\ndiff results/misc.out expected/misc.out\n9,12c9,514\n< pqReadData() -- backend closed the channel unexpectedly.\n< This probably means the backend terminated abnormally\n< before or while processing the request.\n< We have lost the connection to the backend, so further processing is impossibl\ne. Terminating.\n---\n> QUERY: UPDATE tmp\n ....... cutted .....\n\n\ndiff results/plpgsql.out expected/plpgsql.out\n875d874\n< ERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen '/usr/loca\nl/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so: Undefined symb\nol \"CurrentMemoryContext\")\n877d875\n< ERROR: Can't find function plpgsql_call_handler in file /usr/local/pgsql/lib/\nplpgsql.so\n879d876\n< ERROR: Can't find function plpgsql_call_handler in file /usr/local/pgsql/lib/\nplpgsql.so\n.... cutted ...\n\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Sat, 15 May 1999 10:32:02 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "How good is FreeBSD for postgres ?"
},
{
"msg_contents": "\nAll known problems...I've always run PostgreSQL under FreeBSD, and have\nyet to notice a problem with a -release version of it...\n\nThe regression tests are \"base Linux\", and some of the generated error\nmessages are different between the systems and/or rounding is slighty\ndifferent...\n\n\nOn Sat, 15 May 1999, Oleg Bartunov wrote:\n\n> Hi,\n> \n> I have to work with postgres under freeBSD 3.1-RELEASE and as I'm\n> completely dummy in FreeBSD I'm wondering if there are problems\n> with running postgres (6.5).\n> \n> I installed 6.5 cvs and after playing with kernel (config. shared memory)\n> I ran regression tests.There were much more tests failures than in\n> case of my lovely :-) Linux. \n> Is this a known problem with latest 6.5 cvs or it's my fault ?\n> I used gcc 2.7.2.1. \n> \n> float8 .. failed\n> geometry .. failed\n> create_function_2 .. failed\n> triggers .. failed\n> sanity_check .. failed\n> misc .. failed\n> alter_table .. failed\n> rules .. failed\n> plpgsql .. failed\n> \n> \n> I attached below some diffs.\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> diff results/float8.out expected/float8.out\n> \n> 190,191d189\n> < ERROR: floating point exception! The last floating point operation either exc\n> eeded legal ranges or was a divide by zero\n> < QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> 192a191,192\n> > QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> > ERROR: pow() result is out of range\n> 198,206c198\n> < bad| ?column?\n> < ---+--------------------\n> < | 1\n> < |7.39912306090513e-16\n> < | 0\n> < | 0\n> < | 1\n> < (5 rows)\n> < \n> ---\n> > ERROR: exp() result is out of range\n> 223a216\n> > ERROR: Input '10e-400' is out of range for float8\n> 224a218\n> > ERROR: Input '-10e-400' is out of range for float8\n> \n> diff results/create_function_2.out expected/create_function_2.out\n> 41,44d40\n> < pqReadData() -- backend closed the channel unexpectedly.\n> < This probably means the backend terminated abnormally\n> < before or while processing the request.\n> < We have lost the connection to the backend, so further processing is impossibl\n> e. Terminating.\n> \n> \n> diff results/triggers.out expected/triggers.out\n> 39,42c39,285\n> < pqReadData() -- backend closed the channel unexpectedly.\n> < This probably means the backend terminated abnormally\n> < before or while processing the request.\n> < We have lost the connection to the backend, so further processing is impossibl\n> e. Terminating.\n> ---\n> > QUERY: insert into fkeys2 values (30, '3', 2);\n> ..... cutted .....\n> \n> diff results/sanity_check.out expected/sanity_check.out\n> 13,14d12\n> < fkeys |t \n> < fkeys2 |t \n> 29d26\n> < pkeys |t \n> 34c31\n> < (26 rows)\n> ---\n> > (23 rows)\n> \n> diff results/misc.out expected/misc.out\n> 9,12c9,514\n> < pqReadData() -- backend closed the channel unexpectedly.\n> < This probably means the backend terminated abnormally\n> < before or while processing the request.\n> < We have lost the connection to the backend, so further processing is impossibl\n> e. Terminating.\n> ---\n> > QUERY: UPDATE tmp\n> ....... cutted .....\n> \n> \n> diff results/plpgsql.out expected/plpgsql.out\n> 875d874\n> < ERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen '/usr/loca\n> l/pgsql/lib/plpgsql.so' failed. 
(/usr/local/pgsql/lib/plpgsql.so: Undefined symb\n> ol \"CurrentMemoryContext\")\n> 877d875\n> < ERROR: Can't find function plpgsql_call_handler in file /usr/local/pgsql/lib/\n> plpgsql.so\n> 879d876\n> < ERROR: Can't find function plpgsql_call_handler in file /usr/local/pgsql/lib/\n> plpgsql.so\n> .... cutted ...\n> \n> \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 16 May 1999 02:46:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> All known problems...I've always run PostgreSQL under FreeBSD, and have\n> yet to notice a problem with a -release version of it...\n> The regression tests are \"base Linux\", and some of the generated error\n> messages are different between the systems and/or rounding is slighty\n> different...\n\nThe backend crashes that he's showing in some of the tests are not\nknown problems (to me anyway). Any ideas? Oleg, can you provide\ndebugger backtraces from the corefiles those crashes generate?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 May 1999 10:03:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ? "
},
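For anyone gathering what Tom asks for, the usual recipe is to point gdb at the postgres executable and the core file and ask for a backtrace (the paths here are illustrative):

    $ gdb /usr/local/pgsql/bin/postgres src/test/regress/core
    (gdb) bt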
{
"msg_contents": "I read across /usr/ports/databases/postgresql on my FreeBSD 3.1 release\nelf machine and found a lot of patches to postgres 6.4.2\nIt seems they are didn't applied - I checked 6.5 cvs.\nWhat is the status of this port ? Is there is a chance to apply them\ninto 6.5 release ?\n\n\n\tRegards,\n\t\t\n\t\tOleg\n\nOn Sun, 16 May 1999, Tom Lane wrote:\n\n> Date: Sun, 16 May 1999 10:03:32 -0400\n> From: Tom Lane <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] How good is FreeBSD for postgres ? \n> \n> The Hermit Hacker <[email protected]> writes:\n> > All known problems...I've always run PostgreSQL under FreeBSD, and have\n> > yet to notice a problem with a -release version of it...\n> > The regression tests are \"base Linux\", and some of the generated error\n> > messages are different between the systems and/or rounding is slighty\n> > different...\n> \n> The backend crashes that he's showing in some of the tests are not\n> known problems (to me anyway). Any ideas? Oleg, can you provide\n> debugger backtraces from the corefiles those crashes generate?\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 01:07:55 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ? "
},
{
"msg_contents": "\nWill look at them right now...\n\nOn Mon, 17 May 1999, Oleg Bartunov wrote:\n\n> I read across /usr/ports/databases/postgresql on my FreeBSD 3.1 release\n> elf machine and found a lot of patches to postgres 6.4.2\n> It seems they are didn't applied - I checked 6.5 cvs.\n> What is the status of this port ? Is there is a chance to apply them\n> into 6.5 release ?\n> \n> \n> \tRegards,\n> \t\t\n> \t\tOleg\n> \n> On Sun, 16 May 1999, Tom Lane wrote:\n> \n> > Date: Sun, 16 May 1999 10:03:32 -0400\n> > From: Tom Lane <[email protected]>\n> > To: The Hermit Hacker <[email protected]>\n> > Cc: Oleg Bartunov <[email protected]>, [email protected]\n> > Subject: Re: [HACKERS] How good is FreeBSD for postgres ? \n> > \n> > The Hermit Hacker <[email protected]> writes:\n> > > All known problems...I've always run PostgreSQL under FreeBSD, and have\n> > > yet to notice a problem with a -release version of it...\n> > > The regression tests are \"base Linux\", and some of the generated error\n> > > messages are different between the systems and/or rounding is slighty\n> > > different...\n> > \n> > The backend crashes that he's showing in some of the tests are not\n> > known problems (to me anyway). Any ideas? Oleg, can you provide\n> > debugger backtraces from the corefiles those crashes generate?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 17 May 1999 00:35:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ? "
},
{
"msg_contents": "On Mon, 17 May 1999, The Hermit Hacker wrote:\n\n> Date: Mon, 17 May 1999 00:35:00 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] How good is FreeBSD for postgres ? \n> \n> \n> Will look at them right now...\n\nI see you did a job ! Postgres 6.5 cvs works now under FreeBSD 3.1 release \n(elf) as on my lovely Linux box ! Almost all regression tests passed\nexcept float8, horology and geometry and rules (could somebody correct\npgsql <-> postgres in .out). \n\nI think it would be useful to apply these fixes to 6.4.2 too.\n\n\tRegards,\n\t\tOleg\n\nPS.\n\n10:05:47[nature]:/usr/home/postgres/cvs/pgsql/src/test/regress$ diff results/flo\nat8.out expected/float8.out \n190,191d189\n< ERROR: floating point exception! The last floating point operation either exc\needed legal ranges or was a divide by zero\n< QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n192a191,192\n> QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n198,206c198\n< bad| ?column?\n< ---+--------------------\n< | 1\n< |7.39912306090513e-16\n< | 0\n< | 0\n< | 1\n< (5 rows)\n< \n---\n> ERROR: exp() result is out of range\n223a216\n> ERROR: Input '10e-400' is out of range for float8\n224a218\n> ERROR: Input '-10e-400' is out of range for float8\n1\n\n> \n> On Mon, 17 May 1999, Oleg Bartunov wrote:\n> \n> > I read across /usr/ports/databases/postgresql on my FreeBSD 3.1 release\n> > elf machine and found a lot of patches to postgres 6.4.2\n> > It seems they are didn't applied - I checked 6.5 cvs.\n> > What is the status of this port ? Is there is a chance to apply them\n> > into 6.5 release ?\n> > \n> > \n> > \tRegards,\n> > \t\t\n> > \t\tOleg\n> > \n> > On Sun, 16 May 1999, Tom Lane wrote:\n> > \n> > > Date: Sun, 16 May 1999 10:03:32 -0400\n> > > From: Tom Lane <[email protected]>\n> > > To: The Hermit Hacker <[email protected]>\n> > > Cc: Oleg Bartunov <[email protected]>, [email protected]\n> > > Subject: Re: [HACKERS] How good is FreeBSD for postgres ? \n> > > \n> > > The Hermit Hacker <[email protected]> writes:\n> > > > All known problems...I've always run PostgreSQL under FreeBSD, and have\n> > > > yet to notice a problem with a -release version of it...\n> > > > The regression tests are \"base Linux\", and some of the generated error\n> > > > messages are different between the systems and/or rounding is slighty\n> > > > different...\n> > > \n> > > The backend crashes that he's showing in some of the tests are not\n> > > known problems (to me anyway). Any ideas? Oleg, can you provide\n> > > debugger backtraces from the corefiles those crashes generate?\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 17 May 1999 10:07:06 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ? "
}
] |
[
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> To compile postgres using gcc 2.7.2.1 I had to modify 2 files\n> src/interfaces/libpq++/pgconnection.cc\n> src/interfaces/libpq++/pgenv.h\n> Particularly, \n> #include <iostream> to #include <iostream.h>\n> #include <strstream> to #include <strstream.h>\n\nMy fault. I was using\n\ngcc version egcs-2.91.60 19981201 (egcs-1.1.1 release)\n\nwhen creating the above patches. However, please make the above\ncorrect change, as all <iostream> does is to include <iostream.h> and\nsimilarly <strstream> just includes <strstream.h>. I think the meaning\nis deeper with other compilers, but with egcs they are just aliases. I\ndid, as Tom Lane puts it, just break it without thinking.\n\nCheers,\n\nPatrick\n",
"msg_date": "Sat, 15 May 1999 12:47:43 +0100 (BST)",
"msg_from": "[email protected] (Patrick Welche)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in\n\tsrc/interfaces/libpq++/"
},
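One conventional way to keep both compiler generations happy -- a sketch only; the guard symbol is hypothetical, not something the tree defines -- is to select the header form at build time:

    #ifdef HAVE_NEW_STYLE_HEADERS   /* hypothetical configure-detected symbol */
    #include <iostream>
    #include <strstream>
    #else
    #include <iostream.h>
    #include <strstream.h>
    #endif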
{
"msg_contents": "\nOn 15-May-99 Patrick Welche wrote:\n> Oleg Bartunov <[email protected]> writes:\n>> To compile postgres using gcc 2.7.2.1 I had to modify 2 files\n>> src/interfaces/libpq++/pgconnection.cc\n>> src/interfaces/libpq++/pgenv.h\n>> Particularly, \n>> #include <iostream> to #include <iostream.h>\n>> #include <strstream> to #include <strstream.h>\n> \n> My fault. I was using\n> \n> gcc version egcs-2.91.60 19981201 (egcs-1.1.1 release)\n> \n> when creating the above patches. However, please make the above\n> correct change, as all <iostream> does is to include <iostream.h> and\n> similarly <strstream> just includes <strstream.h>. I think the meaning\n> is deeper with other compilers, but with egcs they are just aliases. I\n> did, as Tom Lane puts it, just break it without thinking.\n> \n\nlibpq++ is being redone. Since 6.5 is under code freeze right now it'll\nbe 6.5.1 before the new stuff can show up. pgenv.h is already history\nas it uses pretty much all deprecated functions.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 17 May 1999 12:00:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in src/interfaces/l"
},
{
"msg_contents": "> libpq++ is being redone. Since 6.5 is under code freeze right now it'll\n> be 6.5.1 before the new stuff can show up. pgenv.h is already history\n> as it uses pretty much all deprecated functions.\n> \n> Vince.\n\nI think we may be able to put libpq++ into 6.5 at this point. It is\npretty limited now. I certainly don't think you are going to get it\ninto 6.5.1, because those are mostly fix releases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 12:12:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in src/interfaces/l"
},
{
"msg_contents": "\nOn 17-May-99 Bruce Momjian wrote:\n>> libpq++ is being redone. Since 6.5 is under code freeze right now it'll\n>> be 6.5.1 before the new stuff can show up. pgenv.h is already history\n>> as it uses pretty much all deprecated functions.\n>> \n>> Vince.\n> \n> I think we may be able to put libpq++ into 6.5 at this point. It is\n> pretty limited now. I certainly don't think you are going to get it\n> into 6.5.1, because those are mostly fix releases.\n\nDon't have it on this machine, but I'll send it along tomorrow or the\nnext day. Right now the only thing that's not working with the examples\nis the large object stuff with binaries and I haven't tracked that down.\nBut it doesn't work with the example in the docs for libpq either...\nStill alot I want to do but it's in alot better shape now than it was.\nDmitry's suggestions aren't in yet but will be soon.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 17 May 1999 12:32:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in src/interfaces/l"
},
{
"msg_contents": "> Don't have it on this machine, but I'll send it along tomorrow or the\n> next day. Right now the only thing that's not working with the examples\n> is the large object stuff with binaries and I haven't tracked that down.\n> But it doesn't work with the example in the docs for libpq either...\n> Still alot I want to do but it's in alot better shape now than it was.\n> Dmitry's suggestions aren't in yet but will be soon.\n\nYes, I would like to get something new in libpq++.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 12:34:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: problem with includes in src/interfaces/l"
}
] |
[
{
"msg_contents": "Hello,\n\n I'am trying CVS snapshor of postgres and on web page (new features of\npostgres 6.5) is:\n\n[...]\n New LOCK TABLE IN ... MODE(Vadim)\n[...]\n\n I'am using similar statement on Oracle. But postgres doesn't accept\nkeyword 'IN'. \n\nExample:\n\n=> lock table t row share mode;\nLOCK TABLE\n ... this works\n\nbut:\n\n=> lock table t IN row share mode;\nERROR: parser: parse error at or near \"in\"\n .... but this not\n\nIt is mistake in grammar, or is there all OK ?\n\n thanks,\n\n David\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "15 May 1999 14:43:07 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Syntax of LOCK TABLE ..."
},
{
"msg_contents": "> Hello,\n> \n> I'am trying CVS snapshor of postgres and on web page (new features of\n> postgres 6.5) is:\n> \n> [...]\n> New LOCK TABLE IN ... MODE(Vadim)\n> [...]\n> \n> I'am using similar statement on Oracle. But postgres doesn't accept\n> keyword 'IN'. \n> \n> Example:\n> \n> => lock table t row share mode;\n> LOCK TABLE\n> ... this works\n> \n> but:\n> \n> => lock table t IN row share mode;\n> ERROR: parser: parse error at or near \"in\"\n> .... but this not\n> \n> It is mistake in grammar, or is there all OK ?\n\nWorks here:\n\n\ttest=> lock table pg_class IN row share mode;\n\tLOCK TABLE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 14:26:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax of LOCK TABLE ..."
},
{
"msg_contents": "David Sauer <[email protected]> writes:\n> => lock table t row share mode;\n> LOCK TABLE\n> ... this works\n> but:\n> => lock table t IN row share mode;\n> ERROR: parser: parse error at or near \"in\"\n> .... but this not\n\n> It is mistake in grammar, or is there all OK ?\n\nI see this behavior too, and a quick look at gram.y shows that indeed\nit is not expecting IN in a LOCK statement. I do not know whether the\nstandard permits (or requires?) the IN keyword, so I don't know whether\nto make the change...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 May 1999 12:05:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax of LOCK TABLE ... "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> David Sauer <[email protected]> writes:\n> > => lock table t row share mode;\n> > LOCK TABLE\n> > ... this works\n> > but:\n> > => lock table t IN row share mode;\n> > ERROR: parser: parse error at or near \"in\"\n> > .... but this not\n> \n> > It is mistake in grammar, or is there all OK ?\n> \n> I see this behavior too, and a quick look at gram.y shows that indeed\n> it is not expecting IN in a LOCK statement. I do not know whether the\n> standard permits (or requires?) the IN keyword, so I don't know whether\n> to make the change...\n\nIN is required...\n\nVadim\n",
"msg_date": "Mon, 17 May 1999 00:14:17 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax of LOCK TABLE ..."
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > David Sauer <[email protected]> writes:\n> > > => lock table t row share mode;\n> > > LOCK TABLE\n> > > ... this works\n> > > but:\n> > > => lock table t IN row share mode;\n> > > ERROR: parser: parse error at or near \"in\"\n> > > .... but this not\n> > \n> > > It is mistake in grammar, or is there all OK ?\n> > \n> > I see this behavior too, and a quick look at gram.y shows that indeed\n> > it is not expecting IN in a LOCK statement. I do not know whether the\n> > standard permits (or requires?) the IN keyword, so I don't know whether\n> > to make the change...\n> \n> IN is required...\n\nI have modified the grammar to require IN. Looks like someone cleaned\nup the LOCK grammar options recently, but forgot the IN.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 20:22:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax of LOCK TABLE ..."
},
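An illustrative bison fragment of what such a rule looks like with IN required -- a sketch with invented nonterminal names and the action elided, not the actual gram.y text:

    LockStmt:  LOCK_P opt_table relation_name IN lock_mode MODE
                    {
                        /* build the LockStmt parse node here; the point
                         * is only that IN is now a mandatory token */
                    }
            ;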
{
"msg_contents": "> I have modified the grammar to require IN. Looks like someone cleaned\n> up the LOCK grammar options recently, but forgot the IN.\n\n'Twas me. I don't recall seeing IN in the first place, but the\noriginal gram.y support code did not actually use the normal yacc\ngrammar, but rather read most fields as IDENTS and then did strcmp()'s\nto build the parse tree. There were lots of hidden behaviors in that.\n\nSorry, and thanks for fixing it.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 17 May 1999 14:08:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax of LOCK TABLE ..."
}
] |
[
{
"msg_contents": "I have been looking into why a reference to a nonexistent table, eg\n\tINSERT INTO nosuchtable VALUES(1);\nleaks a small amount of memory per occurrence. What I find is a\nmemory leak in the indexscan support. Specifically,\nRelationGetIndexScan in backend/access/index/genam.c palloc's both\nan IndexScanDesc and some keydata storage. The IndexScanDesc\nblock is eventually pfree'd, at the bottom of CatalogIndexFetchTuple\nin backend/catalog/indexing.c. But the keydata block is not.\n\nThis wouldn't matter so much if the palloc were coming from a\ntransaction-local context. But what we're doing is a lookup in pg_class\non behalf of RelationBuildDesc in backend/utils/cache/relcache.c, and\nit's done a MemoryContextSwitchTo into the global CacheCxt before\nstarting the lookup. Therefore, the un-pfreed block represents a\npermanent memory leak.\n\nIn fact, *every* reference to a relation that is not already present in\nthe relcache causes a similar leak. The error case is just the one that\nis easiest to repeat. The missing pfree of the keydata block is\nprobably causing a bunch of other short-term and long-term leaks too.\n\nIt seems to me there are two things to fix here: indexscan ought to\npfree everything it pallocs, and RelationBuildDesc ought to be warier\nabout how much work gets done with CacheCxt as the active palloc\ncontext. (Even if indexscan didn't leak anything ordinarily, there's\nstill the risk of elog(ERROR) causing an abort before the indexscan code\ngets to clean up.)\n\nComments? In particular, where is the cleanest place to add the pfree\nof the keydata block? I don't especially like the fact that callers\nof index_endscan have to clean up the toplevel scan block; I think that\nought to happen inside index_endscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 May 1999 23:09:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory leaks in relcache"
},
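The ownership rule Tom proposes (the endscan routine frees everything the scan allocated) is easy to illustrate outside the backend. Below is a minimal, self-contained C sketch; the names ScanDesc, begin_scan, and end_scan are generic stand-ins invented for the illustration, not the actual genam.c/indexam.c symbols:

    /* Generic sketch, not PostgreSQL backend code. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        int         nkeys;
        int        *keyData;    /* separately allocated; the block that leaked */
    } ScanDesc;

    static ScanDesc *
    begin_scan(int nkeys)
    {
        ScanDesc   *scan = malloc(sizeof(ScanDesc));

        scan->nkeys = nkeys;
        scan->keyData = (nkeys > 0) ? calloc(nkeys, sizeof(int)) : NULL;
        return scan;
    }

    static void
    end_scan(ScanDesc *scan)
    {
        free(scan->keyData);    /* free the key storage... */
        free(scan);             /* ...and the descriptor, not leaving it to callers */
    }

    int
    main(void)
    {
        ScanDesc   *scan = begin_scan(2);

        end_scan(scan);
        puts("descriptor and key storage both released");
        return 0;
    }

Under this rule the pfree() calls scattered after each index_endscan() caller become unnecessary, and an elog() between begin and end can leak at most what the scan itself holds.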
{
"msg_contents": "> I have been looking into why a reference to a nonexistent table, eg\n> \tINSERT INTO nosuchtable VALUES(1);\n> leaks a small amount of memory per occurrence. What I find is a\n> memory leak in the indexscan support. Specifically,\n> RelationGetIndexScan in backend/access/index/genam.c palloc's both\n> an IndexScanDesc and some keydata storage. The IndexScanDesc\n> block is eventually pfree'd, at the bottom of CatalogIndexFetchTuple\n> in backend/catalog/indexing.c. But the keydata block is not.\n> \n> This wouldn't matter so much if the palloc were coming from a\n> transaction-local context. But what we're doing is a lookup in pg_class\n> on behalf of RelationBuildDesc in backend/utils/cache/relcache.c, and\n> it's done a MemoryContextSwitchTo into the global CacheCxt before\n> starting the lookup. Therefore, the un-pfreed block represents a\n> permanent memory leak.\n> \n> In fact, *every* reference to a relation that is not already present in\n> the relcache causes a similar leak. The error case is just the one that\n> is easiest to repeat. The missing pfree of the keydata block is\n> probably causing a bunch of other short-term and long-term leaks too.\n> \n> It seems to me there are two things to fix here: indexscan ought to\n> pfree everything it pallocs, and RelationBuildDesc ought to be warier\n> about how much work gets done with CacheCxt as the active palloc\n> context. (Even if indexscan didn't leak anything ordinarily, there's\n> still the risk of elog(ERROR) causing an abort before the indexscan code\n> gets to clean up.)\n> \n> Comments? In particular, where is the cleanest place to add the pfree\n> of the keydata block? I don't especially like the fact that callers\n> of index_endscan have to clean up the toplevel scan block; I think that\n> ought to happen inside index_endscan.\n\nYou are certainly on to something. Every call to index_endscan() either\ncalls pfree() just after the call to free the descriptor, or should. I\nrecommend doing the pfree in the index_endscan, and removing the\nindividual pfree's after the index_endscan call. I also recommend doing\npfree'ing the keys inside index_endscan() and see what happens. The\nregression test should show any problems. I can easily do this is you\nwish.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 00:53:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory leaks in relcache"
},
{
"msg_contents": "> It seems to me there are two things to fix here: indexscan ought to\n> pfree everything it pallocs, and RelationBuildDesc ought to be warier\n> about how much work gets done with CacheCxt as the active palloc\n> context. (Even if indexscan didn't leak anything ordinarily, there's\n> still the risk of elog(ERROR) causing an abort before the indexscan code\n> gets to clean up.)\n\nAs far as cleaning up from an elog, my only idea would be to have a\nglobal List that contains pointers that should be freed from any elog().\nThe cache code would lconc() any of its pointers onto the list, and an\nelog() would check the list and free anything on there. The problem is\nthat many times the palloc's happen in non-cache functions, so the cache\ncode may not have access to the palloc address, and if we put it\neverywhere, we are doing this for non-cache calls, which may be too much\noverhead. We could also try clearing the cache on an elog() but that\nseems extreme too.\n\nie, cache function calls a function that allocates memory then calls\nanother function that fails. The memory is in cache context, but the\ncache function never saw a return from it's first call, so it couldn't\nadd it to the elog global free list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 01:21:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory leaks in relcache"
},
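Bruce's global free-list idea can be prototyped in a few lines of plain C. Everything here (palloc_tracked, elog_cleanup) is invented for the sketch and is not how the backend's elog() actually recovers memory; it only shows the register-then-sweep pattern he describes:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct CleanupNode
    {
        void               *ptr;
        struct CleanupNode *next;
    } CleanupNode;

    static CleanupNode *cleanup_list = NULL;

    /* allocate and remember the pointer for the error path */
    static void *
    palloc_tracked(size_t size)
    {
        CleanupNode *n = malloc(sizeof(CleanupNode));

        n->ptr = malloc(size);
        n->next = cleanup_list;
        cleanup_list = n;
        return n->ptr;
    }

    /* what a simulated elog(ERROR) recovery would run */
    static void
    elog_cleanup(void)
    {
        while (cleanup_list != NULL)
        {
            CleanupNode *n = cleanup_list;

            cleanup_list = n->next;
            free(n->ptr);
            free(n);
        }
    }

    int
    main(void)
    {
        char   *keys = palloc_tracked(64);

        snprintf(keys, 64, "scan key storage");
        printf("allocated: %s\n", keys);
        elog_cleanup();
        puts("cleanup list emptied");
        return 0;
    }

The weakness Bruce points out remains visible even in the toy: only allocations routed through the tracking call get swept, so memory handed back by untracked helper functions would still escape the list.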
{
"msg_contents": "\nTom, where are we on this. As I remember, it is still an open issue,\nright? I can add it to the TODO list.\n\n\n> I have been looking into why a reference to a nonexistent table, eg\n> \tINSERT INTO nosuchtable VALUES(1);\n> leaks a small amount of memory per occurrence. What I find is a\n> memory leak in the indexscan support. Specifically,\n> RelationGetIndexScan in backend/access/index/genam.c palloc's both\n> an IndexScanDesc and some keydata storage. The IndexScanDesc\n> block is eventually pfree'd, at the bottom of CatalogIndexFetchTuple\n> in backend/catalog/indexing.c. But the keydata block is not.\n> \n> This wouldn't matter so much if the palloc were coming from a\n> transaction-local context. But what we're doing is a lookup in pg_class\n> on behalf of RelationBuildDesc in backend/utils/cache/relcache.c, and\n> it's done a MemoryContextSwitchTo into the global CacheCxt before\n> starting the lookup. Therefore, the un-pfreed block represents a\n> permanent memory leak.\n> \n> In fact, *every* reference to a relation that is not already present in\n> the relcache causes a similar leak. The error case is just the one that\n> is easiest to repeat. The missing pfree of the keydata block is\n> probably causing a bunch of other short-term and long-term leaks too.\n> \n> It seems to me there are two things to fix here: indexscan ought to\n> pfree everything it pallocs, and RelationBuildDesc ought to be warier\n> about how much work gets done with CacheCxt as the active palloc\n> context. (Even if indexscan didn't leak anything ordinarily, there's\n> still the risk of elog(ERROR) causing an abort before the indexscan code\n> gets to clean up.)\n> \n> Comments? In particular, where is the cleanest place to add the pfree\n> of the keydata block? I don't especially like the fact that callers\n> of index_endscan have to clean up the toplevel scan block; I think that\n> ought to happen inside index_endscan.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 04:16:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory leaks in relcache"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, where are we on this. As I remember, it is still an open issue,\n> right? I can add it to the TODO list.\n\nI have not done anything about it yet; it ought to be in TODO.\n\nI'm also aware of two or three other sources of small but permanent\nmemory leaks, btw; have them in my todo list.\n\n\t\t\tregards, tom lane\n\n>> I have been looking into why a reference to a nonexistent table, eg\n>> INSERT INTO nosuchtable VALUES(1);\n>> leaks a small amount of memory per occurrence. What I find is a\n>> memory leak in the indexscan support. Specifically,\n>> RelationGetIndexScan in backend/access/index/genam.c palloc's both\n>> an IndexScanDesc and some keydata storage. The IndexScanDesc\n>> block is eventually pfree'd, at the bottom of CatalogIndexFetchTuple\n>> in backend/catalog/indexing.c. But the keydata block is not.\n>> \n>> This wouldn't matter so much if the palloc were coming from a\n>> transaction-local context. But what we're doing is a lookup in pg_class\n>> on behalf of RelationBuildDesc in backend/utils/cache/relcache.c, and\n>> it's done a MemoryContextSwitchTo into the global CacheCxt before\n>> starting the lookup. Therefore, the un-pfreed block represents a\n>> permanent memory leak.\n>> \n>> In fact, *every* reference to a relation that is not already present in\n>> the relcache causes a similar leak. The error case is just the one that\n>> is easiest to repeat. The missing pfree of the keydata block is\n>> probably causing a bunch of other short-term and long-term leaks too.\n>> \n>> It seems to me there are two things to fix here: indexscan ought to\n>> pfree everything it pallocs, and RelationBuildDesc ought to be warier\n>> about how much work gets done with CacheCxt as the active palloc\n>> context. (Even if indexscan didn't leak anything ordinarily, there's\n>> still the risk of elog(ERROR) causing an abort before the indexscan code\n>> gets to clean up.)\n>> \n>> Comments? In particular, where is the cleanest place to add the pfree\n>> of the keydata block? I don't especially like the fact that callers\n>> of index_endscan have to clean up the toplevel scan block; I think that\n>> ought to happen inside index_endscan.\n>> \n>> regards, tom lane\n>> \n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 07 Jul 1999 10:24:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Memory leaks in relcache "
},
{
"msg_contents": "Added to TODO:\n\n* fix indexscan() so it does leak memory by not requiring caller to free\n* improve dynamic memory allocation by introducing tuple-context memory\n allocation\n* fix memory leak in cache code when non-existant table is referenced\n\n> Bruce Momjian <[email protected]> writes:\n> > Tom, where are we on this. As I remember, it is still an open issue,\n> > right? I can add it to the TODO list.\n> \n> I have not done anything about it yet; it ought to be in TODO.\n> \n> I'm also aware of two or three other sources of small but permanent\n> memory leaks, btw; have them in my todo list.\n> \n> \t\t\tregards, tom lane\n> \n> >> I have been looking into why a reference to a nonexistent table, eg\n> >> INSERT INTO nosuchtable VALUES(1);\n> >> leaks a small amount of memory per occurrence. What I find is a\n> >> memory leak in the indexscan support. Specifically,\n> >> RelationGetIndexScan in backend/access/index/genam.c palloc's both\n> >> an IndexScanDesc and some keydata storage. The IndexScanDesc\n> >> block is eventually pfree'd, at the bottom of CatalogIndexFetchTuple\n> >> in backend/catalog/indexing.c. But the keydata block is not.\n> >> \n> >> This wouldn't matter so much if the palloc were coming from a\n> >> transaction-local context. But what we're doing is a lookup in pg_class\n> >> on behalf of RelationBuildDesc in backend/utils/cache/relcache.c, and\n> >> it's done a MemoryContextSwitchTo into the global CacheCxt before\n> >> starting the lookup. Therefore, the un-pfreed block represents a\n> >> permanent memory leak.\n> >> \n> >> In fact, *every* reference to a relation that is not already present in\n> >> the relcache causes a similar leak. The error case is just the one that\n> >> is easiest to repeat. The missing pfree of the keydata block is\n> >> probably causing a bunch of other short-term and long-term leaks too.\n> >> \n> >> It seems to me there are two things to fix here: indexscan ought to\n> >> pfree everything it pallocs, and RelationBuildDesc ought to be warier\n> >> about how much work gets done with CacheCxt as the active palloc\n> >> context. (Even if indexscan didn't leak anything ordinarily, there's\n> >> still the risk of elog(ERROR) causing an abort before the indexscan code\n> >> gets to clean up.)\n> >> \n> >> Comments? In particular, where is the cleanest place to add the pfree\n> >> of the keydata block? I don't especially like the fact that callers\n> >> of index_endscan have to clean up the toplevel scan block; I think that\n> >> ought to happen inside index_endscan.\n> >> \n> >> regards, tom lane\n> >> \n> >> \n> \n> \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 23:27:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory leaks in relcache"
}
] |
[
{
"msg_contents": "Hi all. Was working tonight and ran into the following error. Doing a\nunion between two selects (to get around the lack of outer joins - hint\nhint), I was getting the error:\n\n ERROR: Each UNION clause must have the same number of columns\n\nUpon examining the SQL statement in question, I verified that it did,\nindeed, have the same number of columns. After some fiddling, I found the\nactual problem was that I was doing an ORDER BY on a column which was not\nbeing included in the two select statements. Unfortunately, the error\nmessage wasn't pointing at that.\n\nI'm not sure if this is just a simple change or implies other problems\nwith the parser but I thought I'd toss it out onto the pile.\n\nCya...\n\n- K\n\nKristofer Munn * http://www.munn.com/~kmunn/ * ICQ# 352499 * AIM: KrMunn \n\n\n",
"msg_date": "Sun, 16 May 1999 00:10:12 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Misleading Error Message"
},
{
"msg_contents": "> Hi all. Was working tonight and ran into the following error. Doing a\n> union between two selects (to get around the lack of outer joins - hint\n> hint), I was getting the error:\n> \n> ERROR: Each UNION clause must have the same number of columns\n> \n> Upon examining the SQL statement in question, I verified that it did,\n> indeed, have the same number of columns. After some fiddling, I found the\n> actual problem was that I was doing an ORDER BY on a column which was not\n> being included in the two select statements. Unfortunately, the error\n> message wasn't pointing at that.\n> \n> I'm not sure if this is just a simple change or implies other problems\n> with the parser but I thought I'd toss it out onto the pile.\n\nTom Lane discovered it a few days ago in relation to INSERT INTO table\nSELECT * FROM TABLE ORDER BY col1, and col1 was not in the select target\nlist. It shows an error. We are looking at solutions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 01:25:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Misleading Error Message"
},
{
"msg_contents": ">\n> > Hi all. Was working tonight and ran into the following error. Doing a\n> > union between two selects (to get around the lack of outer joins - hint\n> > hint), I was getting the error:\n> >\n> > ERROR: Each UNION clause must have the same number of columns\n> >\n> > Upon examining the SQL statement in question, I verified that it did,\n> > indeed, have the same number of columns. After some fiddling, I found the\n> > actual problem was that I was doing an ORDER BY on a column which was not\n> > being included in the two select statements. Unfortunately, the error\n> > message wasn't pointing at that.\n> >\n> > I'm not sure if this is just a simple change or implies other problems\n> > with the parser but I thought I'd toss it out onto the pile.\n>\n> Tom Lane discovered it a few days ago in relation to INSERT INTO table\n> SELECT * FROM TABLE ORDER BY col1, and col1 was not in the select target\n> list. It shows an error. We are looking at solutions.\n\n This might also interfere with latest changes I did in the\n rewrite system. Parser and rewriter now add junk attributes\n to the targetlist. I think the problem is that the union\n code (where the check is done) doesn't recognize that the\n unequal length of the targetlists is due to junk attributes.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 17 May 1999 10:27:26 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Misleading Error Message"
},
{
"msg_contents": "> >\n> > > Hi all. Was working tonight and ran into the following error. Doing a\n> > > union between two selects (to get around the lack of outer joins - hint\n> > > hint), I was getting the error:\n> > >\n> > > ERROR: Each UNION clause must have the same number of columns\n> > >\n> > > Upon examining the SQL statement in question, I verified that it did,\n> > > indeed, have the same number of columns. After some fiddling, I found the\n> > > actual problem was that I was doing an ORDER BY on a column which was not\n> > > being included in the two select statements. Unfortunately, the error\n> > > message wasn't pointing at that.\n> > >\n> > > I'm not sure if this is just a simple change or implies other problems\n> > > with the parser but I thought I'd toss it out onto the pile.\n> >\n> > Tom Lane discovered it a few days ago in relation to INSERT INTO table\n> > SELECT * FROM TABLE ORDER BY col1, and col1 was not in the select target\n> > list. It shows an error. We are looking at solutions.\n> \n> This might also interfere with latest changes I did in the\n> rewrite system. Parser and rewriter now add junk attributes\n> to the targetlist. I think the problem is that the union\n> code (where the check is done) doesn't recognize that the\n> unequal length of the targetlists is due to junk attributes.\n\nI have added code to the parser and rewrite checks to skip counting of\nresjunk nodes in checking for UNION length equality. This should fix\nthe problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 14:22:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Misleading Error Message"
},
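The check Bruce added can be shown in miniature. This is a toy, self-contained C version; the real code walks List/TargetEntry nodes in the parser and rewriter, while the Target type and real_length name below are invented for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        const char *name;
        bool        resjunk;    /* true for entries added only for ORDER BY */
    } Target;

    static int
    real_length(const Target *tlist, int n)
    {
        int         i,
                    len = 0;

        for (i = 0; i < n; i++)
            if (!tlist[i].resjunk)
                len++;
        return len;
    }

    int
    main(void)
    {
        /* SELECT a, b ... ORDER BY c: c rides along as a junk entry */
        Target      arm1[] = {{"a", false}, {"b", false}, {"c", true}};
        Target      arm2[] = {{"x", false}, {"y", false}};

        printf("arm1 = %d, arm2 = %d -> %s\n",
               real_length(arm1, 3), real_length(arm2, 2),
               real_length(arm1, 3) == real_length(arm2, 2) ?
               "UNION arms match" : "mismatch");
        return 0;
    }

Counting raw list lengths would report 3 vs. 2 here and raise the misleading error Kristofer hit; skipping the junk entries makes both arms count 2.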
{
"msg_contents": "> Hi all. Was working tonight and ran into the following error. Doing a\n> union between two selects (to get around the lack of outer joins - hint\n> hint), I was getting the error:\n> \n> ERROR: Each UNION clause must have the same number of columns\n> \n> Upon examining the SQL statement in question, I verified that it did,\n> indeed, have the same number of columns. After some fiddling, I found the\n> actual problem was that I was doing an ORDER BY on a column which was not\n> being included in the two select statements. Unfortunately, the error\n> message wasn't pointing at that.\n> \n> I'm not sure if this is just a simple change or implies other problems\n> with the parser but I thought I'd toss it out onto the pile.\n\n\nThis now fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 14:23:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Misleading Error Message"
}
] |
[
{
"msg_contents": "Hi, I have observed a strange behavior with current source tree.\nFor example, \n\n\tselect usename from pg_user order by usename;\n\nis ok. But\n\n\tselect usename as aaa from pg_user order by usename;\n\nwill produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n---\nTatsuo Ishii\n\n\n",
"msg_date": "Sun, 16 May 1999 16:09:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "select + order by"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> But\n> \tselect usename as aaa from pg_user order by usename;\n> will produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n\nNo. I am not seeing it here with sources from 12 May. I am guessing\nthis has something to do with Jan's recent fixes for group by/order by\nrewrites. Do you see it when you use a plain table, rather than a view?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 May 1999 10:07:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select + order by "
},
{
"msg_contents": "> > \tselect usename as aaa from pg_user order by usename;\n> > will produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n> \n> No. I am not seeing it here with sources from 12 May. I am guessing\n> this has something to do with Jan's recent fixes for group by/order by\n> rewrites. Do you see it when you use a plain table, rather than a view?\n\nI see it with a plain table too.\n---\nTatsuo Ishii\n",
"msg_date": "Mon, 17 May 1999 09:50:19 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] select + order by "
},
{
"msg_contents": "> > > \tselect usename as aaa from pg_user order by usename;\n> > > will produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n> > \n> > No. I am not seeing it here with sources from 12 May. I am guessing\n> > this has something to do with Jan's recent fixes for group by/order by\n> > rewrites. Do you see it when you use a plain table, rather than a view?\n> \n> I see it with a plain table too.\n\nI just did a make clean, initdb, etc, and got:\n\n\ttest=> select usename as aaa from pg_user order by usename;\n\taaa \n\t--------\n\tpostgres\n\t(1 row)\n\nLooks good to me.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 21:01:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select + order by"
},
{
"msg_contents": ">\n> > > > select usename as aaa from pg_user order by usename;\n> > > > will produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n> > >\n> > > No. I am not seeing it here with sources from 12 May. I am guessing\n> > > this has something to do with Jan's recent fixes for group by/order by\n> > > rewrites. Do you see it when you use a plain table, rather than a view?\n> >\n> > I see it with a plain table too.\n>\n> I just did a make clean, initdb, etc, and got:\n>\n> test=> select usename as aaa from pg_user order by usename;\n> aaa\n> --------\n> postgres\n> (1 row)\n>\n> Looks good to me.\n\n Yes, latest changes require a clear, intidb due to changes in\n the node out/read functions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 17 May 1999 10:51:42 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select + order by"
},
{
"msg_contents": ">> > > > select usename as aaa from pg_user order by usename;\n>> > > > will produce 2 column names: \"aaa\" and \"usename\". Is this normal?\n>> > >\n>> > > No. I am not seeing it here with sources from 12 May. I am guessing\n>> > > this has something to do with Jan's recent fixes for group by/order by\n>> > > rewrites. Do you see it when you use a plain table, rather than a view?\n>> >\n>> > I see it with a plain table too.\n>>\n>> I just did a make clean, initdb, etc, and got:\n>>\n>> test=> select usename as aaa from pg_user order by usename;\n>> aaa\n>> --------\n>> postgres\n>> (1 row)\n>>\n>> Looks good to me.\n>\n> Yes, latest changes require a clear, intidb due to changes in\n> the node out/read functions.\n\nGetting latest sources and doing initdb solved the problem.\n\nThanks and sorry for the confusion.\n---\nTatsuo Ishii\n",
"msg_date": "Tue, 18 May 1999 10:47:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] select + order by "
}
] |
[
{
"msg_contents": "On Sun, 16 May 1999, Tom Lane wrote:\n\n> Date: Sun, 16 May 1999 17:31:22 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Subject: Re: [HACKERS] How good is FreeBSD for postgres ? \n> \n> > I'm new in freeBSD (FreeBSD nature.ru 3.1-RELEASE FreeBSD 3.1-RELEASE #2)\n> > and it seems there is a problem with shared objects.\n> \n> I'm not likely to be much help on that; it's probably a freeBSD-specific\n> issue. Try the mailing list; there are other people using freeBSD.\n> (Marc, I think, for one...)\n\nTom,\n\nI probably understand what's going on. Most postgres developers still\nuse old a.out format while new versions switches to elf format\n(what I'm actually using). I remember when it was happened with Linux\nin 1995 and how many prolems were that time. I read freebsd mailing lists\nand there are a lot of complaints about shared libs creating and usage.\nFortunately, I found that FreeBSD has nice porting policy and \nfound patches to Postgres 6.4.2. I was surprised they were never applied,\nwell, at least to current 6.5 cvs tree. Two patches I found very useful - \none to src/makefiles/Makefile.freebsd and another to\nsrc/Makefile.shlib. I'm currently in process of rebuilding postgres\nand will see. Hmm, I've lost connection to my server :-(\nI could see in xterm that compilation and installation (with these patches)\npassed OK and final message:\n\nThank you for choosing PostgreSQL, the most advanced open source database engine.\n\nAnyway, FreeBSD guru, please examine these patches ! They are for\npostgres 6.4.2 and FreeBSD 3.1 release.\n\n\tRegards,\n\t\n\t\tOleg\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Mon, 17 May 1999 01:47:56 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How good is FreeBSD for postgres ? "
}
] |
[
{
"msg_contents": "I just changed the call from mdunlink to smgrunlink, but this brings up\na good point.\n\nsmgr is a generic i/o interface layer that allows multiple storage\nmanagers. Currently, we always use DEFAULT_SMGR as a parameter to smgr*\nfunctions, causing calls to the md* routines. Is there any value in\njust removing the smgr layer completely. It was originally for a CD\njutebox i/o layer in addition to our current disk i/o layer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 20:29:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "sgmr* vs. md*"
},
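For readers who have not looked at that layer: smgr amounts to a table of per-storage-manager function pointers indexed by a manager id such as DEFAULT_SMGR. Below is a stripped-down, self-contained sketch of the dispatch pattern; the demo names are invented and this is not the real smgr.c interface:

    #include <stdio.h>

    typedef struct
    {
        const char *smgr_name;
        int         (*smgr_unlink) (const char *relname);
    } f_smgr;

    static int
    md_unlink_demo(const char *relname)
    {
        printf("magnetic disk: unlink %s\n", relname);
        return 0;
    }

    /* one slot per storage manager; a jukebox manager could slot in here */
    static const f_smgr smgrsw_demo[] = {
        {"magnetic disk", md_unlink_demo},
    };

    #define DEFAULT_SMGR 0

    static int
    smgrunlink_demo(int which, const char *relname)
    {
        return smgrsw_demo[which].smgr_unlink(relname);
    }

    int
    main(void)
    {
        smgrunlink_demo(DEFAULT_SMGR, "mytable");
        return 0;
    }

Because every call is a single indirect function call through the table, keeping the layer costs almost nothing at runtime, which is the point Ole makes below.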
{
"msg_contents": "On Sun, 16 May 1999, Bruce Momjian wrote:\n> I just changed the call from mdunlink to smgrunlink, but this brings up\n> a good point.\n\nThanks, I should have noticed that myself...\n\n> smgr is a generic i/o interface layer that allows multiple storage\n> managers. Currently, we always use DEFAULT_SMGR as a parameter to smgr*\n> functions, causing calls to the md* routines. Is there any value in\n> just removing the smgr layer completely. It was originally for a CD\n> jutebox i/o layer in addition to our current disk i/o layer.\n\nI think that extra layer is a very good idea. Some new kind of storage\nmight come along that someone wants to use, and md.c wouldn't do the right\nthing.\n\nSince it's such a thin layer, performance doesn't really suffer. Doesn't\nhurt to keep it...\n\nOle Gjerde\n\n",
"msg_date": "Mon, 17 May 1999 01:26:24 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sgmr* vs. md*"
}
] |
[
{
"msg_contents": "I will assume I can remove this item(thanks Tom):\n\n GEQO has trouble with many tables/eats memory - Backend message type 0x44\n\nCurrent list is:\n\n---------------------------------------------------------------------------\n\nDefault of '' causes crash in some cases\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\nSELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nCREATE OPERATOR *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nCLUSTER failure if vacuum has not been performed in a while\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nFix leak for expressions?, aggregates?\nImprove LIMIT processing by using index to limit rows processed\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nnodeResults.c and parse_clause.c give compiler warnings\nMove LIKE index optimization handling to the optimizer?\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\nHave hashjoins use portals, not fixed-size memory\nDROP TABLE leaves INDEX file descriptor open\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\nFix function pointer calls to take Datum args for char and int2 args\n\nDo we want pg_dump -z to be the default?\npg_dump of groups fails\npg_dump -o -D does not work, and can not work currently, generate error?\npsql \\d should show precision\ndumping out sequences should not be counted in pg_dump display\n\n\nMake psql \\help, man pages, and sgml reflect changes in grammar\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n\nMake Serial its own type?\nAdd support for & operator\nstore binary-compatible type information in the system somewhere \nadd ability to add comments to system tables using table/colname combination\nprocess const=const parts of OR clause in separate pass\nmake oid use oidin/oidout not int4in/int4out in pg_type.h, make oid use\n\tunsigned int more reliably, pg_atoi()\nCREATE VIEW ignores DISTINCT\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | 
(610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 May 1999 20:31:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": "On Sun, 16 May 1999, Bruce Momjian wrote:\n> nodeResults.c and parse_clause.c give compiler warnings\n\nNo warnings on Redhat Linux 6.0 (Linux 2.2.7, egcs 1.1.2, glibc 2.1)\n\n> DROP TABLE leaves INDEX file descriptor open\n\nShouldn't now.. index_destroy() gets called, which again calls smgrunlink.\nIt looks like smgrunlink closes all fds.\n\n> DROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\n\nThis now works(with the patch from yesterday).\n\n> Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n\nThis is actually more of a fundamental problem with mdtruncate. It looks\nlike someone just didn't add support for multiple segments for truncation.\n\nThe following patch seems to do the right thing, for me at least.\nIt passed my tests, my data looks right(no data that shouldn't be in\nthere) and regression is ok.\n\nOle Gjerde\n\n--- src/backend/storage/smgr/md.c\t1999/04/05 22:25:11\t1.42\n+++ src/backend/storage/smgr/md.c\t1999/05/17 06:23:23\n@@ -711,15 +711,26 @@\n \tMdfdVec *v;\n \n #ifndef LET_OS_MANAGE_FILESIZE\n-\tint\t\t\tcurnblk;\n+\tint\t\t\tcurnblk,\n+\t\t\t\t\ti,\n+\t\t\t\t\toldsegno,\n+\t\t\t\t\tnewsegno;\n+\tchar\t\tfname[NAMEDATALEN];\n+\tchar\t\ttname[NAMEDATALEN + 10];\n \n \tcurnblk = mdnblocks(reln);\n-\tif (curnblk / RELSEG_SIZE > 0)\n-\t{\n-\t\telog(NOTICE, \"Can't truncate multi-segments relation %s\",\n-\t\t\t reln->rd_rel->relname.data);\n-\t\treturn curnblk;\n-\t}\n+\toldsegno = curnblk / RELSEG_SIZE;\n+\tnewsegno = nblocks / RELSEG_SIZE;\n+\n+\tStrNCpy(fname, RelationGetRelationName(reln)->data, NAMEDATALEN);\n+\n+\tif (newsegno < oldsegno) {\n+\t\tfor (i = (newsegno + 1);; i++) {\n+\t\t\tsprintf(tname, \"%s.%d\", fname, i);\n+\t\t\tif (FileNameUnlink(tname) < 0)\n+\t\t\t\tbreak;\n+\t\t}\n+ }\n #endif\n \n \tfd = RelationGetFile(reln);\n\n",
"msg_date": "Mon, 17 May 1999 01:21:18 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\nList updated. Patch applied. Thanks.\n\n\n> On Sun, 16 May 1999, Bruce Momjian wrote:\n> > nodeResults.c and parse_clause.c give compiler warnings\n> \n> No warnings on Redhat Linux 6.0 (Linux 2.2.7, egcs 1.1.2, glibc 2.1)\n> \n> > DROP TABLE leaves INDEX file descriptor open\n> \n> Shouldn't now.. index_destroy() gets called, which again calls smgrunlink.\n> It looks like smgrunlink closes all fds.\n> \n> > DROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\n> \n> This now works(with the patch from yesterday).\n> \n> > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> \n> This is actually more of a fundamental problem with mdtruncate. It looks\n> like someone just didn't add support for multiple segments for truncation.\n> \n> The following patch seems to do the right thing, for me at least.\n> It passed my tests, my data looks right(no data that shouldn't be in\n> there) and regression is ok.\n> \n> Ole Gjerde\n> \n> --- src/backend/storage/smgr/md.c\t1999/04/05 22:25:11\t1.42\n> +++ src/backend/storage/smgr/md.c\t1999/05/17 06:23:23\n> @@ -711,15 +711,26 @@\n> \tMdfdVec *v;\n> \n> #ifndef LET_OS_MANAGE_FILESIZE\n> -\tint\t\t\tcurnblk;\n> +\tint\t\t\tcurnblk,\n> +\t\t\t\t\ti,\n> +\t\t\t\t\toldsegno,\n> +\t\t\t\t\tnewsegno;\n> +\tchar\t\tfname[NAMEDATALEN];\n> +\tchar\t\ttname[NAMEDATALEN + 10];\n> \n> \tcurnblk = mdnblocks(reln);\n> -\tif (curnblk / RELSEG_SIZE > 0)\n> -\t{\n> -\t\telog(NOTICE, \"Can't truncate multi-segments relation %s\",\n> -\t\t\t reln->rd_rel->relname.data);\n> -\t\treturn curnblk;\n> -\t}\n> +\toldsegno = curnblk / RELSEG_SIZE;\n> +\tnewsegno = nblocks / RELSEG_SIZE;\n> +\n> +\tStrNCpy(fname, RelationGetRelationName(reln)->data, NAMEDATALEN);\n> +\n> +\tif (newsegno < oldsegno) {\n> +\t\tfor (i = (newsegno + 1);; i++) {\n> +\t\t\tsprintf(tname, \"%s.%d\", fname, i);\n> +\t\t\tif (FileNameUnlink(tname) < 0)\n> +\t\t\t\tbreak;\n> +\t\t}\n> + }\n> #endif\n> \n> \tfd = RelationGetFile(reln);\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 02:39:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
] |
[
{
"msg_contents": "postgres, cvs snapshot 19990517 gives me an error:\n\nERROR: Cannot expand tables; null p_rtable (internal error)\n\nThis happens after request to 'select *';\n\ndavid=> select *;\nERROR: Cannot expand tables; null p_rtable (internal error)\n\nThis select statement is useless, but I dont like internal errors.\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "17 May 1999 04:35:14 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG report: ERROR: Cannot expand tables; ...."
},
{
"msg_contents": "> postgres, cvs snapshot 19990517 gives me an error:\n> \n> ERROR: Cannot expand tables; null p_rtable (internal error)\n> \n> This happens after request to 'select *';\n> \n> david=> select *;\n> ERROR: Cannot expand tables; null p_rtable (internal error)\n> \n> This select statement is useless, but I dont like internal errors.\n\nNew message is:\n\n elog(ERROR, \"Wildcard with no tables specified.\"); \n \nIs this OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 00:18:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BUG report: ERROR: Cannot expand tables; ...."
}
] |
[
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> But it is still true that DROP TABLE leaves a virtual\n>>>> file descriptor open for each index on the dropped table, and that's\n>>>> a bug in my book.\n>>\n>> AFAIC the patch by Ole Gjerde [[email protected]] which has already \n>> been appiled by Bruce Momjan would solve this problem. \n\n> I thought that patch was just for multi-segment tables, which does not\n> fix the original problem.\n\nBut didn't Ole replace a call to FileNameUnlink with a call to mdunlink\n(or some other higher level routine)? If mdunlink also closes the VFD\nfor the index, then that patch might indeed have fixed it. I'll try\nthe test case I had, and report back.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 01:11:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE leaks file descriptors "
}
] |
[
{
"msg_contents": "Just applied some FreeBSD related patches, and regression tests fail on:\n\nfloat8 .. failed\ngeometry .. failed\nrules .. failed\n\nI've included the regression diffs from all three...the float8 one looks\nsuspicious. The second 'diff' in it has some 'out of range' errors that\nthe regression appears to not generate.\n\nThe geometry one looks like all rounding differences, which is normal.\n\nThe rules one...I hven't got a clue.\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org",
"msg_date": "Mon, 17 May 1999 02:20:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "State of v6.5 under FreeBSD"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: Monday, May 17, 1999 3:40 PM\n> To: Ole Gjerde\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> \n> List updated. Patch applied. Thanks.\n>\n\nI have 2 questions about the patch.\n\n1.The following code exists in mdunlink().\n Something like this isn't necessary ?\n \n /* finally, clean out the mdfd vector */\n fd = RelationGetFile(reln);\n Md_fdvec[fd].mdfd_flags = (uint16) 0;\n\n oldcxt = MemoryContextSwitchTo(MdCxt);\n#ifndef LET_OS_MANAGE_FILESIZE\n for (v = &Md_fdvec[fd]; v != (MdfdVec *) NULL;)\n {\n FileUnlink(v->mdfd_vfd);\n ov = v;\n v = v->mdfd_chain;\n if (ov != &Md_fdvec[fd])\n pfree(ov);\n }\n Md_fdvec[fd].mdfd_chain = (MdfdVec *) NULL;\n#else\n v = &Md_fdvec[fd];\n if (v != (MdfdVec *) NULL)\n FileUnlink(v->mdfd_vfd);\n#endif\n\n2.Even if such code something like above is added,other \n transactions may hold valid file descriptors for FileName\n Unlink()ed segment files. Isn't it the problem ?\n\n I'm afraid different transactions write to different i-nodes \n which have or had a same segment file name.\n It seems more secure to truncate segment files to 0 length \n than unlinking those files. But I'm not sure it works fine.\n\nThanks.\n\nHiroshi Inoue\[email protected] \n\n> \n> > On Sun, 16 May 1999, Bruce Momjian wrote:\n[snip]\n> > \n> > > Vacuum of tables >2 gigs - NOTICE: Can't truncate \n> multi-segments relation\n> > \n> > This is actually more of a fundamental problem with mdtruncate. \n> It looks\n> > like someone just didn't add support for multiple segments for \n> truncation.\n> > \n> > The following patch seems to do the right thing, for me at least.\n> > It passed my tests, my data looks right(no data that shouldn't be in\n> > there) and regression is ok.\n> > \n> > Ole Gjerde\n> > \n> > --- src/backend/storage/smgr/md.c\t1999/04/05 22:25:11\t1.42\n> > +++ src/backend/storage/smgr/md.c\t1999/05/17 06:23:23\n> > @@ -711,15 +711,26 @@\n> > \tMdfdVec *v;\n> > \n> > #ifndef LET_OS_MANAGE_FILESIZE\n> > -\tint\t\t\tcurnblk;\n> > +\tint\t\t\tcurnblk,\n> > +\t\t\t\t\ti,\n> > +\t\t\t\t\toldsegno,\n> > +\t\t\t\t\tnewsegno;\n> > +\tchar\t\tfname[NAMEDATALEN];\n> > +\tchar\t\ttname[NAMEDATALEN + 10];\n> > \n> > \tcurnblk = mdnblocks(reln);\n> > -\tif (curnblk / RELSEG_SIZE > 0)\n> > -\t{\n> > -\t\telog(NOTICE, \"Can't truncate multi-segments relation %s\",\n> > -\t\t\t reln->rd_rel->relname.data);\n> > -\t\treturn curnblk;\n> > -\t}\n> > +\toldsegno = curnblk / RELSEG_SIZE;\n> > +\tnewsegno = nblocks / RELSEG_SIZE;\n> > +\n> > +\tStrNCpy(fname, RelationGetRelationName(reln)->data, NAMEDATALEN);\n> > +\n> > +\tif (newsegno < oldsegno) {\n> > +\t\tfor (i = (newsegno + 1);; i++) {\n> > +\t\t\tsprintf(tname, \"%s.%d\", fname, i);\n> > +\t\t\tif (FileNameUnlink(tname) < 0)\n> > +\t\t\t\tbreak;\n> > +\t\t}\n> > + }\n> > #endif\n> > \n> > \tfd = RelationGetFile(reln);\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n> \n",
"msg_date": "Mon, 17 May 1999 18:50:42 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > List updated. Patch applied. Thanks.\n> >\n> \n> I have 2 questions about the patch.\n> \n> 1.The following code exists in mdunlink().\n> Something like this isn't necessary ?\n> \n> /* finally, clean out the mdfd vector */\n> fd = RelationGetFile(reln);\n> Md_fdvec[fd].mdfd_flags = (uint16) 0;\n> \n> oldcxt = MemoryContextSwitchTo(MdCxt);\n> #ifndef LET_OS_MANAGE_FILESIZE\n> for (v = &Md_fdvec[fd]; v != (MdfdVec *) NULL;)\n> {\n> FileUnlink(v->mdfd_vfd);\n> ov = v;\n> v = v->mdfd_chain;\n> if (ov != &Md_fdvec[fd])\n> pfree(ov);\n> }\n> Md_fdvec[fd].mdfd_chain = (MdfdVec *) NULL;\n> #else\n> v = &Md_fdvec[fd];\n> if (v != (MdfdVec *) NULL)\n> FileUnlink(v->mdfd_vfd);\n> #endif\n> \n> 2.Even if such code something like above is added,other \n> transactions may hold valid file descriptors for FileName\n> Unlink()ed segment files. Isn't it the problem ?\n> \n> I'm afraid different transactions write to different i-nodes \n> which have or had a same segment file name.\n> It seems more secure to truncate segment files to 0 length \n> than unlinking those files. But I'm not sure it works fine.\n\nUnlink still allows open file descriptors to continue being valid. The\nfile is removed only when the kernel open file descriptor reference\ncount is zero.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 12:00:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
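Bruce is describing standard Unix unlink() semantics, which a short self-contained program can verify: the directory entry disappears immediately, but the inode, and any descriptor already open on it, stays usable until the last close(). The file name below is arbitrary:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        buf[5] = {0};
        int         fd = open("segment.demo", O_CREAT | O_RDWR, 0600);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }
        write(fd, "data", 4);
        unlink("segment.demo");     /* name gone, inode still referenced */

        lseek(fd, 0, SEEK_SET);
        read(fd, buf, 4);
        printf("still readable after unlink: %s\n", buf);

        close(fd);                  /* kernel reclaims the space here */
        return 0;
    }

This is why the unlinked segment files do not break transactions that already have them open; Hiroshi's remaining worry, a new file reusing the same name while old descriptors point at the old inode, is a separate question.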
] |
[
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> But it is still true that DROP TABLE leaves a virtual\n>> file descriptor open for each index on the dropped table, and that's\n>> a bug in my book.\n\n> AFAIC the patch by Ole Gjerde [[email protected]] which has already \n> been appiled by Bruce Momjan would solve this problem. \n> Would you please ascertain the fact ? \n\nYou are correct: I find that repeatedly creating and dropping a table\nwith an index does not leak file descriptors (either real or virtual)\nas of yesterday's sources. So this item can be removed from TODO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 18:10:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE leaks file descriptors "
}
] |
[
{
"msg_contents": "Here is the current TODO list. This list is independent of the Open\nItems list. Items not fixed by 6.5 final are moved to the TODO list.\n\nI would like to know which items have been fixed already from this list.\nItems I know are fixed are marked with a dash. Are there more? Tom,\ncan you identify any of the array items as fixed? Should we assume they\nare all fixed unless someone reports them broken?\n\nJan, and rewrite fixes already done that I can mark.\n\n--------------------------------------------------------------------------\n\nTODO list for PostgreSQL\n========================\nLast updated:\t\tSun May 9 21:06:49 EDT 1999\n\nCurrent maintainer:\tBruce Momjian ([email protected])\n\nThe most recent version of this document can be viewed at\nthe PostgreSQL WWW site, http://www.postgreSQL.org.\n\nA dash(-) marks changes to be in the next release.\n\nDevelopers who have claimed items are:\n-------------------------------------\n\t* Billy is Billy G. Allie <[email protected]>\n\t* Brook is Brook Milligan <[email protected]>\n\t* Bruce is Bruce Momjian<[email protected]>\n\t* Bryan is Bryan Henderson<[email protected]>\n\t* D'Arcy is D'Arcy J.M. Cain <[email protected]>\n\t* Dan is Dan McGuirk <[email protected]>\n\t* Darren is Darren King <[email protected]>\n\t* David is David Hartwig <[email protected]>\n\t* Edmund is Edmund Mergl <[email protected]>\n\t* Goran is Goran Thyni <[email protected]>\n\t* Henry is Henry B. Hotz <[email protected]>\n\t* Jan is Jan Wieck <[email protected]>\n\t* Jun is Jun Kuwamura <[email protected]>\n\t* Maarten is Maarten Boekhold <[email protected]>\n \t* Marc is Marc Fournier <[email protected]>\n \t* Martin is Martin S. Utesch <[email protected]>\n\t* Massimo Dal Zotto <[email protected]>\n\t* Michael is Michael Meskes <[email protected]>\n\t* Oleg is Oleg Bartunov <[email protected]>\n\t* Paul is Paul M. Aoki <[email protected]>\n\t* Peter is Peter T Mount <[email protected]>\n\t* Phil is Phil Thompson <[email protected]>\n\t* Ryan is Ryan Bradetich <[email protected]>\n\t* Soo-Ho Ok <[email protected]>\n\t* Stefan Simkovics <[email protected]>\n\t* Sven is Sven Verdoolaege <[email protected]>\n\t* Tatsuo is Tatsuo Ishii <[email protected]>\n\t* Tom is Tom Lane <[email protected]>\n\t* Thomas is Thomas Lockhart <[email protected]>\n\t* TomH is Tom I Helbekkmo <[email protected]>\n\t* Vadim is \"Vadim B. 
Mikheev\" <[email protected]>\n\nRELIABILITY\n-----------\n* Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n* Overhaul bufmgr/lockmgr/transaction manager\n* Remove EXTEND?\n* Can lo_export()/lo_import() read/write anywhere, causing a security problem?\n* Tables that start with xinv confused to be large objects\n* Two and three dimensional arrays display improperly, missing {}\n* -GROUP BY in INSERT INTO table SELECT * FROM table2 fails(Jan)\n* SELECT a[1] FROM test fails, it needs test.a[1]\n* UPDATE table SET table.value = 3 fails\n* User who can create databases can modify pg_database table\n* elog() does not free all its memory(Jan)\n* views on subselects fail\n* disallow inherited columns with the same name as new columns\n* recover or force failure when disk space is exhausted\n* allow UPDATE using aggregate to affect all rows, not just one\n* -computations in views fail:(Jan)\n\tcreate view test as select usesysid * usesysid from pg_shadow;\n* views containing aggregates sometimes fail(Jan)\n* ALTER TABLE ADD COLUMN does not honor DEFAULT, add CONSTRAINT\n* -fix memory leak in aborted transactions(Tom)\n* array index references without table name cause problems\n* aggregates on array indexes crash backend\n* -subqueries containing HAVING return incorrect results(Stephan)\n* -DEFAULT handles single quotes in value by requiring too many quotes\n* -make CURSOR valid even after you hit end of cursor\n* views with spaces in view name fail when referenced\n* plpgsql does not handle quoted mixed-case identifiers\n* do not allow bpchar column creation without length\n* select t[1] from foo fails, select count(foo.t[1]) from foo crashes\n\nENHANCEMENTS\n------------\n* -Replace table-level locking with row or page-level locking(Vadim)\n* Transaction log, so re-do log can be on a separate disk\n* Allow transaction commits with rollback with no-fsync performance\n* More access control over who can create tables and access the database\n* Add full ANSI SQL capabilities\n\t* add OUTER joins, left and right (Thomas)\n\t* -add INTERSECTS, SUBTRACTS(Stephan)\n\t* -add temporary tables(Bruce)\n\t* add sql3 recursive unions\n\t* add the concept of dataspaces\n\t* add BIT, BIT VARYING\n \t* NCHAR (as distinguished from ordinary varchar),\n\t* DOMAIN capability\n* Allow compression of large fields or a compressed field type\n* -Fix the rules system(Jan)\n* Large objects\n\t* Fix large object mapping scheme, own typeid or reltype(Peter)\n\t* Allow large text type to use large objects(Peter)\n\t* not to stuff everything as files in a single directory\n\t* -delete orphaned large objects(Peter)\n* Better interface for adding to pg_group\n* -Make MONEY/DECIMAL have a defined precision(Jan)\n* -Fix tables >2G, or report error when 2G size reached(Peter)\n\t(fix lseek()/off_t, mdextend()/RELSEG_SIZE)\n* allow row re-use without vacuum, maybe?(Vadim)\n* Populate backend status area and write program to dump status data\n* Add ALTER TABLE DROP/ALTER COLUMN feature\n* Add syslog functionality(Marc)\n* Add STDDEV/VARIANCE() function for standard deviation computation/variance\n* add UNIQUE capability to non-btree indexes\n* certain indexes will not shrink, i.e. 
oid indexes with many inserts\n* make NULL's come out at the beginning or end depending on the ORDER BY direction\n* change the library/backend interface to use network byte order\n* Restore unused oid's on backend exit if no one else has gotten oids\n* have UPDATE/DELETE clean out indexes\n* allow WHERE restriction on ctid\n* allow pg_descriptions when creating types, tables, columns, and functions\n* Fix compile and security of Kerberos/GSSAPI code\n* Allow psql to print nulls as distinct from \"\"(?)\n* Allow INSERT INTO ... SELECT ... FROM view to work\n* Make VACUUM on database not lock pg_class\n* Make VACUUM ANALYZE only use a readlock\n* Allow cursors to be DECLAREd/OPENed/CLOSEed outside transactions\n* -Allow installation data block size and max tuple size configuration(Darren)\n* -Allow views on a UNION\n* -Allow DISTINCT on view\n* Allow views of aggregate columns\n* -Allow variable block sizes(Darren)\n* -System tables are now more update-able from SQL(Jan)\n* Allow flag to control COPY input/output of NULLs\n* Allow CLUSTER on all tables at once, and improve CLUSTER\n* -Add ELOG_TIMESTAMPS to elog()\n* -Allow max tuple length to be changed\n* Have psql with no database name not connect to username as default(?)\n* Allow subqueries in target list\n* Allow queries across multiple databases\n* Add replication of distributed databases\n* Allow table destruction/alter to be rolled back\n* Generate error on CREATE OPERATOR of ~~, ~ and and ~*\n* Allow constraint NULL just as we honor NOT NULL\n* -Add version number in startup banners for psql and postmaster\n* Restructure storing of GRANT permission information to allow +-=\n* allow psql \\copy to allow delimiters\n* allow international error message support and add error codes\n* -allow usernames with dashes(GRANT fails)\n* add a function to return the last inserted oid, for use in psql scripts\n* allow creation of functional indexes to use default types\n* put sort files, large objects in their on directory\n* do autocommit so always in a transaction block\n* add SIMILAR TO to allow character classes, 'pg_[a-c]%'\n* -multi-verion concurrency control(Vadim)\n* improve reporting of syntax errors by showing location of error in query\n* allow chaining of pages to allow >8k tuples\n* -remove un-needed conversion functions where appropriate\n* redesign the function call interface to handle NULLs better(Jan)\n* permissions on indexes - prevent them?\n* -allow multiple generic operators in expressions without the use of parentheses\n* document/trigger/rule so changes to pg_shadow create pg_pwd\n* generate postmaster pid file and remove flock/fcntl lock code\n* -improve PRIMARY KEY handling(D'Arcy)\n* add ability to specifiy location of lock/socket files\n* -psql \\d on index with char()/varchar() fields shows improper length\n* -disallow LOCK outside a transaction, change message to LOCK instead of DELETE\n* Fix roundoff problems in \"cash\" datatype\n* -fix any sprintf() overruns(Tatsuo)\n* -add portable vsnprintf()(Tatsuo)\n* auto-destroy sequence on SERIAL removal\n* CREATE TABLE inside aborted transaction causes stray table file\n* allow user to define char1 column\n* -have psql \\d on a view show the query\n* allow LOCK TABLE tab1, tab2, tab3 so all tables locked in unison\n* allow INSERT/UPDATE of system-generated oid value for a row\n* missing optimizer selectivities for date, etc.\n\nPERFORMANCE\n-----------\n* Use indexes in ORDER BY for restrictive data sets, min(), max()\n* Allow LIMIT ability on single-table queries that 
have no ORDER BY to use\n\ta matching index\n* Pull requested data directly from indexes, bypassing heap data\n* -Prevent psort() usage when query already using index matching ORDER BY(Jan)\n* -Fix bushy-plans(Bruce)\n* Prevent fsync in SELECT-only queries\n* Cache most recent query plan(s?)\n* Shared catalog cache, reduce lseek()'s by caching table size in shared area\n* Allow compression of log and meta data\n* Add FILLFACTOR to index creation\n* update pg_statistic table to remove operator column\n* make index creation use psort code, because it is now faster(Vadim)\n* Allow char() not to use variable-sized header to reduce disk size\n* Do async I/O to do better read-ahead of data\n* -Fix optimizer problem with self-table joins\n* Fix memory exhaustion when using many OR's\n* -Use spin locks only on multi-CPU systems, yield CPU instead\n* Get faster regex() code from Henry Spencer <[email protected]>\n\twhen it is available\n* use mmap() rather than SYSV shared memory(?)\n* use index to restrict rows returned by multi-key index when used with\n\tnon-consecutive keys or OR clauses, so fewer heap accesses\n* use index with constants on functions\n\nDOCUMENTATION\n-------------\n* Add use of 'const' for variables in source tree\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 22:32:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current TODO list"
},
{
"msg_contents": "On Mon, 17 May 1999, Bruce Momjian wrote:\n\n> ENHANCEMENTS\n> ------------\n> * Large objects\n> \t* Fix large object mapping scheme, own typeid or reltype(Peter)\n> \t* Allow large text type to use large objects(Peter)\n> \t* not to stuff everything as files in a single directory\n\nHopefully when my workload eases (mid june ish) I'll be able to tackle\nthese again in earnest.\n\n> \t* -delete orphaned large objects(Peter)\n\nAs I missed the 6.5beta deadline, I put the solution into contrib/lo\n\n> * -Fix tables >2G, or report error when 2G size reached(Peter)\n> \t(fix lseek()/off_t, mdextend()/RELSEG_SIZE)\n\nThis was done (twice if I remember). The tables now split at 1G. This\nopened a new problem that vacuum can't handle segmented tables. I have the\ngeneral idea of how to fix this, but again it's time that's the problem.\n\npeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 18 May 1999 06:59:15 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
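Peter's note that tables now split at 1G refers to the storage manager's RELSEG_SIZE segmenting: blocks past the first segment spill into extra files named after the relation with a numeric suffix. A minimal sketch of the arithmetic (illustrative only, not the actual md.c code; 8k blocks and 1G segments are assumptions):

    #include <stdio.h>

    #define BLCKSZ      8192                      /* bytes per disk block */
    #define RELSEG_SIZE (0x40000000 / BLCKSZ)     /* 131072 blocks per 1G segment */

    /* Map a logical block number to its physical file and byte offset.
     * Segment 0 is the bare relation name; later ones get ".1", ".2", ... */
    static void locate_block(const char *relname, int blkno)
    {
        int  segno  = blkno / RELSEG_SIZE;
        long offset = (long) (blkno % RELSEG_SIZE) * BLCKSZ;

        if (segno == 0)
            printf("%s at offset %ld\n", relname, offset);
        else
            printf("%s.%d at offset %ld\n", relname, segno, offset);
    }

    int main(void)
    {
        locate_block("mytable", 131071);   /* last block of the first file */
        locate_block("mytable", 131072);   /* first block of "mytable.1"   */
        return 0;
    }

Vacuum's trouble, as described later in the thread, is that shrinking a relation must now truncate the right segment and unlink the ones past it, not just ftruncate a single file.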
{
"msg_contents": "Peter, I've seen some changes to preproc.y. Was this to sync back up\nwith the recent changes in gram.y for the lock table and set\ntransaction stuff?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 May 1999 13:48:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, can you identify any of the array items as fixed? Should we\n> assume they are all fixed unless someone reports them broken?\n\nNo, that would be unduly optimistic :-(. I have fixed one or two\narray-related bugs, but I haven't made a serious push on it; several\nof the test cases that are in my to-do list still fail.\n\n> * aggregates on array indexes crash backend\n\nI believe I have fixed that one, at least.\n\n> * select t[1] from foo fails, select count(foo.t[1]) from foo crashes\n\nThis item is a duplicate: the first part refers to the same thing as\n> * array index references without table name cause problems\n(which is as yet unfixed) and the second refers to the aggregate problem.\n\n> * change the library/backend interface to use network byte order\n\nIs there something I'm missing? This has been true for a long while...\n\n> * -Allow DISTINCT on view\n\nI think this is not done...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 May 1999 10:38:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list "
},
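For context on the byte-order item: "network byte order" is big-endian, and sending multi-byte integers that way lets clients and servers with different native endianness interoperate. A minimal sketch of the idea (illustrative only, not libpq's actual code):

    #include <arpa/inet.h>   /* htonl(), ntohl() */
    #include <stdint.h>
    #include <string.h>

    /* Sender: store a 32-bit value into a message buffer in network order. */
    static void put_int32(unsigned char *buf, uint32_t value)
    {
        uint32_t net = htonl(value);          /* host -> big-endian */
        memcpy(buf, &net, sizeof(net));
    }

    /* Receiver: read it back correctly whatever this host's endianness is. */
    static uint32_t get_int32(const unsigned char *buf)
    {
        uint32_t net;
        memcpy(&net, buf, sizeof(net));
        return ntohl(net);                    /* big-endian -> host */
    }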
{
"msg_contents": "Peter T Mount <[email protected]> writes:\n> This was done (twice if I remember). The tables now split at 1G. This\n> opened a new problem that vacuum can't handle segmented tables. I have the\n> general idea of how to fix this, but again it's time that's the problem.\n\nOle Gjerde <[email protected]> just contributed a patch for the vacuum\nproblem. Perhaps you at least have time to check his patch?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 May 1999 10:40:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, can you identify any of the array items as fixed? Should we\n> > assume they are all fixed unless someone reports them broken?\n> \n> No, that would be unduly optimistic :-(. I have fixed one or two\n> array-related bugs, but I haven't made a serious push on it; several\n> of the test cases that are in my to-do list still fail.\n> \n> > * aggregates on array indexes crash backend\n> \n> I believe I have fixed that one, at least.\n\nOK.\n\n> > * select t[1] from foo fails, select count(foo.t[1]) from foo crashes\n> \n> This item is a duplicate: the first part refers to the same thing as\n> > * array index references without table name cause problems\n> (which is as yet unfixed) and the second refers to the aggregate problem.\n\nOK.\n\n> > * change the library/backend interface to use network byte order\n> \n> Is there something I'm missing? This has been true for a long while...\n\nPeople have mentioned we should make the change, but it will require a\nnew protocol, so it hasn't moved from the list.\n\n> > * -Allow DISTINCT on view\n> \n> I think this is not done...\n\nYes, I think he just added an error message to warn people.\n\nUpdated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 10:45:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> * change the library/backend interface to use network byte order\n>> \n>> Is there something I'm missing? This has been true for a long while...\n>\n> People have mentioned we should make the change, but it will require a\n> new protocol, so it hasn't moved from the list.\n\nBut my point is the current protocol *already* uses network byte order.\n\nIIRC the old \"version 0\" protocol did not, but that's ancient history.\nEither this complaint is long obsolete, or I don't understand what's\nbeing asked for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 May 1999 11:00:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> * change the library/backend interface to use network byte order\n> >> \n> >> Is there something I'm missing? This has been true for a long while...\n> >\n> > People have mentioned we should make the change, but it will require a\n> > new protocol, so it hasn't moved from the list.\n> \n> But my point is the current protocol *already* uses network byte order.\n\nOh. Item removed.\n\n> IIRC the old \"version 0\" protocol did not, but that's ancient history.\n> Either this complaint is long obsolete, or I don't understand what's\n> being asked for.\n\nI think you are right. Someone added code to serve both orders based on\nthe version of the client, I think, and newer clients use the proper\norder.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 11:19:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "On Tue, May 18, 1999 at 01:48:07PM +0000, Thomas Lockhart wrote:\n> Peter, I've seen some changes to preproc.y. Was this to sync back up\n\nIt was me who changed preproc.y\n\n> with the recent changes in gram.y for the lock table and set\n> transaction stuff?\n\nYes. It was just to get the two back in sync.\n\nNow we only need to get rid of that shift/reduce problem.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 May 1999 19:56:31 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "On Tue, 18 May 1999, Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> > This was done (twice if I remember). The tables now split at 1G. This\n> > opened a new problem that vacuum can't handle segmented tables. I have the\n> > general idea of how to fix this, but again it's time that's the problem.\n> \n> Ole Gjerde <[email protected]> just contributed a patch for the vacuum\n> problem. Perhaps you at least have time to check his patch?\n\nWill do.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 18 May 1999 19:08:58 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list "
},
{
"msg_contents": "> It was me who changed preproc.y\n\nOops. Right. Sorry...\n\n> Yes. It was just to get the two back in sync.\n\nThanks.\n\n> Now we only need to get rid of that shift/reduce problem.\n\nYes. I'm worried about it, since there are at least two places which\nwere modified which are leading to shift/reduce conflicts *or* which\nwere disabled to remove shift/reduce conflicts.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 May 1999 04:12:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, May 18, 1999 11:41 PM\n> To: Peter T Mount\n> Cc: Bruce Momjian; PostgreSQL-development\n> Subject: Re: [HACKERS] Current TODO list\n>\n>\n> Peter T Mount <[email protected]> writes:\n> > This was done (twice if I remember). The tables now split at 1G. This\n> > opened a new problem that vacuum can't handle segmented tables.\n> I have the\n> > general idea of how to fix this, but again it's time that's the problem.\n>\n> Ole Gjerde <[email protected]> just contributed a patch for the vacuum\n> problem. Perhaps you at least have time to check his patch?\n>\n\nI wonder that no one but me object to the patch.\nIt may cause serious results.\nI think it needs mooore checks and tests.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 19 May 1999 19:10:30 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Current TODO list "
},
{
"msg_contents": "On Wed, May 19, 1999 at 04:12:18AM +0000, Thomas Lockhart wrote:\n> Yes. I'm worried about it, since there are at least two places which\n> were modified which are leading to shift/reduce conflicts *or* which\n> were disabled to remove shift/reduce conflicts.\n\nYes, I wanted to dig into it but didn't find the time yet.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 May 1999 17:13:52 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "> * Thomas is Thomas Lockhart <[email protected]>\n\nCan you change this to my home address ([email protected])?\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 May 1999 04:29:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "On Wed, 19 May 1999, Hiroshi Inoue wrote:\n\n> \n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> > Sent: Tuesday, May 18, 1999 11:41 PM\n> > To: Peter T Mount\n> > Cc: Bruce Momjian; PostgreSQL-development\n> > Subject: Re: [HACKERS] Current TODO list\n> >\n> >\n> > Peter T Mount <[email protected]> writes:\n> > > This was done (twice if I remember). The tables now split at 1G. This\n> > > opened a new problem that vacuum can't handle segmented tables.\n> > I have the\n> > > general idea of how to fix this, but again it's time that's the problem.\n> >\n> > Ole Gjerde <[email protected]> just contributed a patch for the vacuum\n> > problem. Perhaps you at least have time to check his patch?\n> >\n> \n> I wonder that no one but me object to the patch.\n> It may cause serious results.\n\nHow? Why? In what way? Details?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 20 May 1999 07:59:02 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Current TODO list "
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> Sent: Thursday, May 20, 1999 7:59 PM\n> To: Hiroshi Inoue\n> Cc: Tom Lane; Peter T Mount; Bruce Momjian; PostgreSQL-development\n> Subject: RE: [HACKERS] Current TODO list\n>\n>\n> On Wed, 19 May 1999, Hiroshi Inoue wrote:\n>\n> >\n> >\n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Tom Lane\n> > > Sent: Tuesday, May 18, 1999 11:41 PM\n> > > To: Peter T Mount\n> > > Cc: Bruce Momjian; PostgreSQL-development\n> > > Subject: Re: [HACKERS] Current TODO list\n> > >\n> > >\n> > > Peter T Mount <[email protected]> writes:\n> > > > This was done (twice if I remember). The tables now split\n> at 1G. This\n> > > > opened a new problem that vacuum can't handle segmented tables.\n> > > I have the\n> > > > general idea of how to fix this, but again it's time that's\n> the problem.\n> > >\n> > > Ole Gjerde <[email protected]> just contributed a patch for the vacuum\n> > > problem. Perhaps you at least have time to check his patch?\n> > >\n> >\n> > I wonder that no one but me object to the patch.\n> > It may cause serious results.\n>\n> How? Why? In what way? Details?\n>\n\nI don't have tables > 1G.\nSo I won't be damaged by the patch.\n\nBut I don't understand what Beta is.\nWhy isn't such a dangerous fucntion checked and tested\ncarefully ?\n\nFor example,the following code is not changed by the patch.\n\n if (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n return -1;\n\nIt never truncate segmented files and there may be cases the\noriginal file increases its size(ftruncate() increases the size of\ntarget file if the requested size is longer than the actual size).\nIt's not checked and tested and once it occurs I don't know\nwhat will happen.\n\nBut my anxiety is the use of unlink()(FileNameUnlink()).\n\nUnlink() is very dangerous.\nUnlink() never remove the target file immediately.and even the\ntruncating process doesn't close the files by the patch and so\nunlinked files are still alive for all processes which have already\nopened the files.\nWho checked and tested the influence carefully ?\n\nI think it's not so easy to implement and check mdtruncate().\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 21 May 1999 09:16:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Current TODO list "
},
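Hiroshi's ftruncate() warning is worth spelling out: POSIX ftruncate() sets the file to the requested length, so a "truncate" to a length larger than the file actually extends it. A small standalone demonstration (the file name is arbitrary; assumes a POSIX system):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        int fd = open("demo.seg", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
            return 1;

        write(fd, "x", 1);            /* file is now 1 byte long */
        ftruncate(fd, 8192);          /* "truncate" GROWS it to 8192 bytes */

        fstat(fd, &st);
        printf("size after ftruncate: %ld\n", (long) st.st_size);

        close(fd);
        unlink("demo.seg");
        return 0;
    }

So if mdtruncate() computes the wrong target segment, the FileTruncate() call can silently enlarge a file instead of shrinking the relation.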
{
"msg_contents": "> > > I wonder that no one but me object to the patch.\n> > > It may cause serious results.\n> >\n> > How? Why? In what way? Details?\n> >\n> \n> I don't have tables > 1G.\n> So I won't be damaged by the patch.\n> \n> But I don't understand what Beta is.\n> Why isn't such a dangerous fucntion checked and tested\n> carefully ?\n> \n> For example,the following code is not changed by the patch.\n> \n> if (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n> return -1;\n> \n> It never truncate segmented files and there may be cases the\n> original file increases its size(ftruncate() increases the size of\n> target file if the requested size is longer than the actual size).\n> It's not checked and tested and once it occurs I don't know\n> what will happen.\n> \n> But my anxiety is the use of unlink()(FileNameUnlink()).\n> \n> Unlink() is very dangerous.\n> Unlink() never remove the target file immediately.and even the\n> truncating process doesn't close the files by the patch and so\n> unlinked files are still alive for all processes which have already\n> opened the files.\n> Who checked and tested the influence carefully ?\n> \n> I think it's not so easy to implement and check mdtruncate().\n\nOK, I see what you are saying, but the multi-segment problem is on our\nlist to fix. Is this risking non-multi-segment cases. If not, then\nlet's keep it, and continue improving the multi-segment handling,\nbecause it was pretty bad before, and we need it fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 May 1999 23:19:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "On Thu, 20 May 1999, Bruce Momjian wrote:\n> > For example,the following code is not changed by the patch.\n> > if (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n> > return -1;\n> > It never truncate segmented files and there may be cases the\n> > original file increases its size(ftruncate() increases the size of\n> > target file if the requested size is longer than the actual size).\n\nI agree. I have rewritten my patch, but I need to test it some more.\n\n> > But my anxiety is the use of unlink()(FileNameUnlink()).\n> > Unlink() is very dangerous.\n> > Unlink() never remove the target file immediately.and even the\n> > truncating process doesn't close the files by the patch and so\n> > unlinked files are still alive for all processes which have already\n> > opened the files.\n\nI don't think unlink() is a problem. That other backends have the files\nopen shouldn't matter. Whenever they close it(should be pretty quick),\nthe files would be removed..\n\nI'll try to get the patch out later today.\n\nOn another note, I've had some other problems with vacuuming my databases.\n(All before patch :)\nSometimes the backend would crash while doing a vacuum analyze. It would\ndo this repeatedly if I ran it again. Then if I ran a regular vacuum, and\nthen again a vacuum analyze it would work fine. Very weird...\n\nNow I have a bit of a bigger problem. I just did a pg_upgrade to a newer\nCVS version. Most of my tables seems fine and vacuum worked fine on most\nof them.\nBut on the only 2 tables that I have changed lately I'm getting vacuum\n\"errors\". Both tables are very small(shotgun table file is 1.4MB).\nIf I keep running vacuum(over and over) the number of deleted tuples will\neventually go to 0 and it will look normal. It does take a few vacuum\nruns however, so something really weird is going on here.\n\nshotgun=> vacuum verbose analyze shotgun;\nNOTICE: --Relation shotgun--\nNOTICE: Pages 334: Changed 0, Reapped 5, Empty 0, New 0; Tup 22414: Vac\n3, Keep/VTL 11708/10895, Crash 0, UnUsed 49, MinLen 64, MaxLen 159;\nRe-using: Free/Avail. Space 6556/492; EndEmpty/Avail. Pages 0/3. Elapsed\n0/0 sec.\nNOTICE: Index shotgun_index_keyword: Pages 180; Tuples 22274: Deleted 3.\nElapsed 0/0 sec.\nNOTICE: Index shotgun_index_keyword: NUMBER OF INDEX' TUPLES (22274) IS\nNOT THE SAME AS HEAP' (22414)\nNOTICE: Index shotgun_index_email: Pages 222; Tuples 22274: Deleted 3.\nElapsed 0/1 sec.\nNOTICE: Index shotgun_index_email: NUMBER OF INDEX' TUPLES (22274) IS NOT\nTHE SAME AS HEAP' (22414)\nNOTICE: Index shotgun_id_key: Pages 91; Tuples 22414: Deleted 3. Elapsed\n0/0 sec.\nNOTICE: Rel shotgun: Pages: 334 --> 334; Tuple(s) moved: 2. Elapsed 0/0\nsec.\nNOTICE: Index shotgun_index_keyword: Pages 180; Tuples 22275: Deleted 1.\nElapsed 0/0 sec.\nNOTICE: Index shotgun_index_keyword: NUMBER OF INDEX' TUPLES (22275) IS\nNOT THE SAME AS HEAP' (22414)\nNOTICE: Index shotgun_index_email: Pages 222; Tuples 22275: Deleted 1.\nElapsed 0/0 sec.\nNOTICE: Index shotgun_index_email: NUMBER OF INDEX' TUPLES (22275) IS NOT\nTHE SAME AS HEAP' (22414)\nNOTICE: Index shotgun_id_key: Pages 91; Tuples 22415: Deleted 1. Elapsed\n0/0 sec.\nNOTICE: Index shotgun_id_key: NUMBER OF INDEX' TUPLES (22415) IS NOT THE\nSAME AS HEAP' (22414)\nVACUUM\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Fri, 21 May 1999 11:36:55 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "Ole Gjerde wrote:\n> \n> Now I have a bit of a bigger problem. I just did a pg_upgrade to a newer\n> CVS version. Most of my tables seems fine and vacuum worked fine on most\n> of them.\n> But on the only 2 tables that I have changed lately I'm getting vacuum\n> \"errors\". Both tables are very small(shotgun table file is 1.4MB).\n> If I keep running vacuum(over and over) the number of deleted tuples will\n> eventually go to 0 and it will look normal. It does take a few vacuum\n> runs however, so something really weird is going on here.\n> \n> shotgun=> vacuum verbose analyze shotgun;\n> NOTICE: --Relation shotgun--\n> NOTICE: Pages 334: Changed 0, Reapped 5, Empty 0, New 0; Tup 22414: Vac\n> 3, Keep/VTL 11708/10895, Crash 0, UnUsed 49, MinLen 64, MaxLen 159;\n> Re-using: Free/Avail. Space 6556/492; EndEmpty/Avail. Pages 0/3. Elapsed\n> 0/0 sec.\n> NOTICE: Index shotgun_index_keyword: Pages 180; Tuples 22274: Deleted 3.\n> Elapsed 0/0 sec.\n> NOTICE: Index shotgun_index_keyword: NUMBER OF INDEX' TUPLES (22274) IS\n> NOT THE SAME AS HEAP' (22414)\n\nHiroshi found the bug in vacuum and posted me patch, but I'm\nunhappy with it and will commit my changes in a few hours.\n\nVadim\n",
"msg_date": "Sun, 23 May 1999 13:51:50 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Ole Gjerde [mailto:[email protected]]\n> Sent: Saturday, May 22, 1999 1:37 AM\n> To: Bruce Momjian\n> Cc: Hiroshi Inoue; PostgreSQL-development\n> Subject: Re: [HACKERS] Current TODO list\n> \n> \n> On Thu, 20 May 1999, Bruce Momjian wrote:\n\n[snip]\n\n> \n> > > But my anxiety is the use of unlink()(FileNameUnlink()).\n> > > Unlink() is very dangerous.\n> > > Unlink() never remove the target file immediately.and even the\n> > > truncating process doesn't close the files by the patch and so\n> > > unlinked files are still alive for all processes which have already\n> > > opened the files.\n> \n> I don't think unlink() is a problem. That other backends have the files\n> open shouldn't matter. Whenever they close it(should be pretty quick),\n\nWhen are those files closed ?\nAFAIC,they are kept open until the backends which reference those files \nfinish.\n\nCertainly,those files are re-opened(without closing) by backends after \nvacuum,though I don't know it's intentional or caused by side-effect.\nBut unfortunately,re-open is not sufficiently quick. \n\nAnd I think that the assumption of mdtruncate() is not clear.\nCould we suppose that unlinked files are closed quickly for all backends \nby the caller of mdunlink() ?\n\nThanks.\n\nHiroshi Inoue\[email protected] \n \n\n",
"msg_date": "Mon, 24 May 1999 09:23:23 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Current TODO list"
},
{
"msg_contents": "> > I don't think unlink() is a problem. That other backends have the files\n> > open shouldn't matter. Whenever they close it(should be pretty quick),\n> \n> When are those files closed ?\n> AFAIC,they are kept open until the backends which reference those files \n> finish.\n> \n> Certainly,those files are re-opened(without closing) by backends after \n> vacuum,though I don't know it's intentional or caused by side-effect.\n> But unfortunately,re-open is not sufficiently quick. \n> \n> And I think that the assumption of mdtruncate() is not clear.\n> Could we suppose that unlinked files are closed quickly for all backends \n> by the caller of mdunlink() ?\n\nIf they try and open a file that is already unlinked, they don't get to\nsee the file. Unlink removes it from the directory, so the only way to\ncontinue access after an unlink is if you already hold a file descrpitor\non the file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 May 1999 23:32:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current TODO list"
},
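Bruce's description matches POSIX unlink() semantics: the directory entry disappears at once, but the inode and its disk space survive until the last open descriptor is closed. A small demonstration (file name arbitrary; assumes a POSIX system):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[6] = "";
        int fd = open("demo.seg", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
            return 1;

        write(fd, "hello", 5);
        unlink("demo.seg");            /* name is gone; a fresh open() now fails */

        lseek(fd, 0, SEEK_SET);        /* ...but our descriptor still works */
        read(fd, buf, 5);
        printf("still readable after unlink: %s\n", buf);

        close(fd);                     /* space is reclaimed only here */
        return 0;
    }

This is exactly why a backend holding an old descriptor keeps seeing a segment that vacuum has already unlinked.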
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, May 24, 1999 12:32 PM\n> To: Hiroshi Inoue\n> Cc: Ole Gjerde; PostgreSQL-development\n> Subject: Re: [HACKERS] Current TODO list\n>\n>\n> > > I don't think unlink() is a problem. That other backends\n> have the files\n> > > open shouldn't matter. Whenever they close it(should be\n> pretty quick),\n> >\n> > When are those files closed ?\n> > AFAIC,they are kept open until the backends which reference those files\n> > finish.\n> >\n> > Certainly,those files are re-opened(without closing) by backends after\n> > vacuum,though I don't know it's intentional or caused by side-effect.\n> > But unfortunately,re-open is not sufficiently quick.\n> >\n> > And I think that the assumption of mdtruncate() is not clear.\n> > Could we suppose that unlinked files are closed quickly for all\n> backends\n> > by the caller of mdunlink() ?\n>\n> If they try and open a file that is already unlinked, they don't get to\n> see the file. Unlink removes it from the directory, so the only way to\n> continue access after an unlink is if you already hold a file descrpitor\n> on the file.\n>\n\nYou are right.\nBackends would continue to access the file descritors already hold\nif vacuum does nothing about the invalidation of Relation Cache.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 24 May 1999 13:46:21 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Current TODO list"
},
{
"msg_contents": "On Mon, 24 May 1999, Hiroshi Inoue wrote:\n> Backends would continue to access the file descritors already hold\n> if vacuum does nothing about the invalidation of Relation Cache.\n\nYes, and I don't believe that is a problem. I may be wrong however...\n\nFirst, please reverse my patch to mdtruncate() in md.c as soon as\npossible. It does not work properly in some cases.\n\nSecond, I do have a better patch in the works. It is included below, but\nDO NOT APPLY THIS!!! I would like someone to look it over quick. I have\nchecked the logic by hand for a few cases and done a bunch of tests. I\nwould like to test more first.\n\nWhile doing a bunch of vacuums, I have seen some strange things(so my\npatch probably isn't 100%).\nI started with 58 segments, and did a bunch of delete/vacuums and got it\ndown to about 5-6. Then I got the error below while running a vacuum\nanalyze. This appeared after the index clean, but before any tuples were\nmoved.\nERROR: HEAP_MOVED_IN was not expected\n\nAlso, I was seeing some more errors about INDEX' TUPLES being higher than\nHEAP TUPLES. Didn't this just get fixed, or did I break something with my\npatch. I was seeing these after doing delete/vacuums with my patch.\n\nThanks,\nOle Gjerde\n\nIndex: src/backend/storage/smgr/md.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/storage/smgr/md.c,v\nretrieving revision 1.43\ndiff -u -r1.43 md.c\n--- src/backend/storage/smgr/md.c\t1999/05/17 06:38:41\t1.43\n+++ src/backend/storage/smgr/md.c\t1999/05/24 06:30:25\n@@ -712,32 +712,62 @@\n \n #ifndef LET_OS_MANAGE_FILESIZE\n \tint\t\t\tcurnblk,\n-\t\t\t\t\ti,\n \t\t\t\t\toldsegno,\n-\t\t\t\t\tnewsegno;\n-\tchar\t\tfname[NAMEDATALEN];\n-\tchar\t\ttname[NAMEDATALEN + 10];\n+\t\t\t\t\tnewsegno,\n+\t\t\t\t\tlastsegblocks,\n+\t\t\t\t\tsegcount = 0;\n+\tMdfdVec\t\t*ov,\n+\t\t\t\t*lastv;\n+\tMemoryContext\toldcxt;\n \n+\tfd = RelationGetFile(reln);\n \tcurnblk = mdnblocks(reln);\n-\toldsegno = curnblk / RELSEG_SIZE;\n-\tnewsegno = nblocks / RELSEG_SIZE;\n \n-\tStrNCpy(fname, RelationGetRelationName(reln)->data, NAMEDATALEN);\n+\toldsegno = (curnblk / RELSEG_SIZE) + 1;\n+\tnewsegno = (nblocks / RELSEG_SIZE) + 1;\n+\toldcxt = MemoryContextSwitchTo(MdCxt);\n \n-\tif (newsegno < oldsegno) {\n-\t\tfor (i = (newsegno + 1);; i++) {\n-\t\t\tsprintf(tname, \"%s.%d\", fname, i);\n-\t\t\tif (FileNameUnlink(tname) < 0)\n-\t\t\t\tbreak;\n+\tif (newsegno < oldsegno && newsegno > 1)\n+\t{\n+\t\tlastv = v = &Md_fdvec[fd];\n+\t\tfor (segcount = 1; v != (MdfdVec *) NULL;segcount++, v = v->mdfd_chain)\n+\t\t{\n+\t\t\tif(segcount == newsegno) /* Save pointer to last file\n+\t\t\t\t\t\t in the chain */\n+\t\t\t\tlastv = v;\n+ if(segcount > newsegno)\n+\t\t\t{\n+\t\t\t\tFileUnlink(v->mdfd_vfd);\n+\t\t\t\tov = v;\n+\t\t\t\tif (ov != &Md_fdvec[fd])\n+\t\t\t\t\tpfree(ov);\n+\t\t\t}\n \t\t}\n+\t\tlastv->mdfd_chain = (MdfdVec *) NULL;\n }\n-#endif\n \n+\t/* Find the last file in the md chain */\n+\tfor (v = &Md_fdvec[fd]; v->mdfd_chain != (MdfdVec *) NULL;)\n+\t\tv = v->mdfd_chain;\n+\n+\t/* Calculate the # of blocks in the last segment */\n+\tlastsegblocks = nblocks - ((newsegno - 1) * RELSEG_SIZE);\n+\n+\tMemoryContextSwitchTo(oldcxt);\n+\n+\tif (FileTruncate(v->mdfd_vfd, lastsegblocks * BLCKSZ) < 0)\n+\t\treturn -1;\n+\n+#else\n+\n \tfd = RelationGetFile(reln);\n+\n \tv = &Md_fdvec[fd];\n \n \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n \t\treturn -1;\n+\n+#endif\n \n \treturn nblocks;\n \n\n",
"msg_date": "Mon, 24 May 1999 01:42:10 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)"
},
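To make the patch's segment arithmetic concrete, a worked example (assuming 8k blocks, so RELSEG_SIZE is 131072 blocks per 1G segment; the numbers are hypothetical). Truncating a relation to nblocks = 300000 gives

    newsegno      = 300000 / 131072 + 1          = 3      (integer division)
    lastsegblocks = 300000 - (3 - 1) * 131072    = 37856  blocks

so every segment past the third is unlinked, and the third file is truncated to 37856 * 8192 bytes, about 296 MB.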
{
"msg_contents": "Ole Gjerde wrote:\n> \n> While doing a bunch of vacuums, I have seen some strange things(so my\n> patch probably isn't 100%).\n> I started with 58 segments, and did a bunch of delete/vacuums and got it\n> down to about 5-6. Then I got the error below while running a vacuum\n> analyze. This appeared after the index clean, but before any tuples were\n> moved.\n> ERROR: HEAP_MOVED_IN was not expected\n\nI added this in my last patch ... I have to think more about\nthe cause.\n\n> Also, I was seeing some more errors about INDEX' TUPLES being higher than\n> HEAP TUPLES. Didn't this just get fixed, or did I break something with my\n> patch. I was seeing these after doing delete/vacuums with my patch.\n\nHiroshi, could you try to reproduce NOT THE SAME problem\nwith new vacuum code?\n\nVadim\n",
"msg_date": "Mon, 24 May 1999 15:52:55 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Monday, May 24, 1999 4:53 PM\n> To: Ole Gjerde\n> Cc: Hiroshi Inoue; Bruce Momjian; PostgreSQL-development\n> Subject: Re: Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)\n>\n>\n> Ole Gjerde wrote:\n> >\n> > While doing a bunch of vacuums, I have seen some strange things(so my\n> > patch probably isn't 100%).\n> > I started with 58 segments, and did a bunch of delete/vacuums and got it\n\nAre delete/vacuums executed sequentially by single session ?\n\n> > down to about 5-6. Then I got the error below while running a vacuum\n> > analyze. This appeared after the index clean, but before any\n> tuples were\n> > moved.\n> > ERROR: HEAP_MOVED_IN was not expected\n>\n> I added this in my last patch ... I have to think more about\n> the cause.\n>\n> > Also, I was seeing some more errors about INDEX' TUPLES being\n> higher than\n> > HEAP TUPLES. Didn't this just get fixed, or did I break\n> something with my\n> > patch. I was seeing these after doing delete/vacuums with my patch.\n>\n> Hiroshi, could you try to reproduce NOT THE SAME problem\n> with new vacuum code?\n>\n\nI couldn't reproduce NOT THE SAME message in current.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 24 May 1999 18:50:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)"
},
{
"msg_contents": "On Mon, 24 May 1999, Hiroshi Inoue wrote:\n> > Ole Gjerde wrote:\n> > > While doing a bunch of vacuums, I have seen some strange things(so my\n> > > patch probably isn't 100%).\n> > > I started with 58 segments, and did a bunch of delete/vacuums and got it\n> Are delete/vacuums executed sequentially by single session ?\n\nYes.\n\n> > > Also, I was seeing some more errors about INDEX' TUPLES being\n> > higher than\n> > > HEAP TUPLES. Didn't this just get fixed, or did I break\n> > something with my\n> > > patch. I was seeing these after doing delete/vacuums with my patch.\n> > Hiroshi, could you try to reproduce NOT THE SAME problem\n> > with new vacuum code?\n> I couldn't reproduce NOT THE SAME message in current.\n\nCould you try with my patch?\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Mon, 24 May 1999 12:42:24 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > > Also, I was seeing some more errors about INDEX' TUPLES being\n> > higher than\n> > > HEAP TUPLES. Didn't this just get fixed, or did I break\n> > something with my\n> > > patch. I was seeing these after doing delete/vacuums with my patch.\n> >\n> > Hiroshi, could you try to reproduce NOT THE SAME problem\n> > with new vacuum code?\n> >\n> \n> I couldn't reproduce NOT THE SAME message in current.\n\nNice to know it.\nThanks for finding/resolving this bug, Hiroshi!\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 12:12:18 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum/mdtruncate() (was: RE: [HACKERS] Current TODO list)"
}
] |
[
{
"msg_contents": "Hello,\nAnyone know of a spreadsheet that works with PostgrSQL? Would like it to\nbe free and use\nPython.\nThanks for your time.\nWayne\n\n",
"msg_date": "Tue, 18 May 1999 07:28:54 -0400",
"msg_from": "Wayne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Off topic - ref spreadsheet"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Hello,\n> Anyone know of a spreadsheet that works with PostgrSQL? Would like it to\n> be free and use\n> Python.\n> Thanks for your time.\n> Wayne\n> \n> \n> \n\npgaccess works like a spreadsheet, kind of. It is in tcl/tk.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 12:21:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Off topic - ref spreadsheet"
}
] |
[
{
"msg_contents": "I haven't touched anything outside of jdbc this month, and before I\ncommitted last night, I redid everything with a fresh clean copy. Also,\nI was in the jdbc directory, so it only amended stuff below that point.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Tuesday, May 18, 1999 2:48 PM\nTo: Peter T Mount\nCc: Bruce Momjian; PostgreSQL-development\nSubject: Re: [HACKERS] Current TODO list\n\n\nPeter, I've seen some changes to preproc.y. Was this to sync back up\nwith the recent changes in gram.y for the lock table and set\ntransaction stuff?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 May 1999 15:34:02 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Current TODO list"
}
] |
[
{
"msg_contents": "After reading a couple more complaints of hashtable-overflow error\nmessages, I went ahead and rewrote the hash join modules so that they\ndon't use fixed-size hash buckets and a fixed-size overflow area.\nInstead, each bucket is just a linked list of tuples (thus no wasted\nspace for underused buckets) and everything is put into a private portal\nso that reclaiming the space is easy/quick. The code is noticeably\nshorter and more readable than before.\n\nThe limited amount of testing I've been able to do here shows no\nproblems.\n\nNow: do I commit it, or wait till after 6.5? I promised Marc the latter\na couple weeks ago, but I am mighty tempted to just go for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 May 1999 10:56:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "I've got it, now should I commit it?"
},
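For context, the structure Tom describes (each bucket a linked list, with every node allocated from one private pool so the whole table can be freed at once) is classic separate chaining, which cannot overflow the way fixed-size buckets can. A toy sketch of the shape of it, illustrative only and not the actual nodeHash.c code, with malloc() standing in for palloc() in a private portal:

    #include <stdlib.h>

    typedef struct HashNode
    {
        struct HashNode *next;     /* chain of entries in the same bucket */
        void            *tuple;    /* stand-in for a stored heap tuple    */
    } HashNode;

    typedef struct
    {
        int        nbuckets;
        HashNode **buckets;        /* heads of the per-bucket chains */
    } HashTable;

    static HashTable *hash_create(int nbuckets)
    {
        HashTable *ht = malloc(sizeof(HashTable));
        ht->nbuckets = nbuckets;
        ht->buckets = calloc(nbuckets, sizeof(HashNode *));
        return ht;
    }

    /* Insertion never fails for lack of bucket space: the new entry just
     * becomes the head of its bucket's list. */
    static void hash_insert(HashTable *ht, unsigned hashval, void *tuple)
    {
        HashNode *node = malloc(sizeof(HashNode));
        unsigned  b = hashval % ht->nbuckets;

        node->tuple = tuple;
        node->next = ht->buckets[b];
        ht->buckets[b] = node;
    }

With a per-query memory pool in place of malloc(), releasing the pool reclaims every node at once, so cleanup needs no per-node bookkeeping.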
{
"msg_contents": "> After reading a couple more complaints of hashtable-overflow error\n> messages, I went ahead and rewrote the hash join modules so that they\n> don't use fixed-size hash buckets and a fixed-size overflow area.\n> Instead, each bucket is just a linked list of tuples (thus no wasted\n> space for underused buckets) and everything is put into a private portal\n> so that reclaiming the space is easy/quick. The code is noticeably\n> shorter and more readable than before.\n> \n> The limited amount of testing I've been able to do here shows no\n> problems.\n> \n> Now: do I commit it, or wait till after 6.5? I promised Marc the latter\n> a couple weeks ago, but I am mighty tempted to just go for it...\n\nShhh. He will never know. Did you promise Marc, or did you answer him\nevasively, like I suggested?\n\nBasically, with the new optimizer, this may be a bug fix because of the\nmore frequent hashjoins. That has always been my smokescreen to add the\nfeature.\n\n:-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 11:18:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it?"
},
{
"msg_contents": "On Tue, 18 May 1999, Bruce Momjian wrote:\n\n> > After reading a couple more complaints of hashtable-overflow error\n> > messages, I went ahead and rewrote the hash join modules so that they\n> > don't use fixed-size hash buckets and a fixed-size overflow area.\n> > Instead, each bucket is just a linked list of tuples (thus no wasted\n> > space for underused buckets) and everything is put into a private portal\n> > so that reclaiming the space is easy/quick. The code is noticeably\n> > shorter and more readable than before.\n> > \n> > The limited amount of testing I've been able to do here shows no\n> > problems.\n> > \n> > Now: do I commit it, or wait till after 6.5? I promised Marc the latter\n> > a couple weeks ago, but I am mighty tempted to just go for it...\n> \n> Shhh. He will never know. Did you promise Marc, or did you answer him\n> evasively, like I suggested?\n> \n> Basically, with the new optimizer, this may be a bug fix because of the\n> more frequent hashjoins. That has always been my smokescreen to add the\n> feature.\n\nTom...make you a deal. If you are confident enough with the code that\nwhen v6.5 goes out in ~13days, it won't generate more bug reports then its\nfixing...go for it. :)\n \nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 18 May 1999 14:32:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it?"
},
{
"msg_contents": "> > Basically, with the new optimizer, this may be a bug fix because of the\n> > more frequent hashjoins. That has always been my smokescreen to add the\n> > feature.\n> \n> Tom...make you a deal. If you are confident enough with the code that\n> when v6.5 goes out in ~13days, it won't generate more bug reports then its\n> fixing...go for it. :)\n> \n\nAh, man, that was way too easy. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 13:35:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Basically, with the new optimizer, this may be a bug fix because of the\n>> more frequent hashjoins. That has always been my smokescreen to add the\n>> feature.\n\n> Tom...make you a deal. If you are confident enough with the code that\n> when v6.5 goes out in ~13days, it won't generate more bug reports then its\n> fixing...go for it. :)\n\nOK, you're on --- I feel pretty good about this code, although I'm never\nprepared to guarantee zero bugs ;-). If there are any, we can hope\nthey'll show up before the end of beta.\n\nA note for anyone testing the new code: the hashtable size (which is now\na target estimate, not a hard limit) is now driven by the postmaster's\n-S switch, not the -B switch. -S seems more reasonable since the table\nis private memory in a backend, not shared memory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 May 1999 17:46:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it? "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> >> Basically, with the new optimizer, this may be a bug fix because of the\n> >> more frequent hashjoins. That has always been my smokescreen to add the\n> >> feature.\n> \n> > Tom...make you a deal. If you are confident enough with the code that\n> > when v6.5 goes out in ~13days, it won't generate more bug reports then its\n> > fixing...go for it. :)\n> \n> OK, you're on --- I feel pretty good about this code, although I'm never\n> prepared to guarantee zero bugs ;-). If there are any, we can hope\n> they'll show up before the end of beta.\n> \n> A note for anyone testing the new code: the hashtable size (which is now\n> a target estimate, not a hard limit) is now driven by the postmaster's\n> -S switch, not the -B switch. -S seems more reasonable since the table\n> is private memory in a backend, not shared memory.\n\nI see no documenation that -B was ever used for hash size. I see -B for\nshared buffers for both postmaster and postgres manual pages.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 18:14:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> A note for anyone testing the new code: the hashtable size (which is now\n>> a target estimate, not a hard limit) is now driven by the postmaster's\n>> -S switch, not the -B switch.\n\n> I see no documenation that -B was ever used for hash size.\n\nEr, did I say anything about documentation?\n\nThe code *was* using NBuffers to size the hashtable, whether or not\nthat was ever documented anywhere except in the \"hash table out of\nmemory. Use -B parameter to increase buffers\" message. Now it uses\nthe SortMem variable.\n\nI do have it on my to-do list to update the relevant documentation.\n(Yo, Thomas: what's the deadline for 6.5 doco changes? I've got a bunch\nof doc to-dos that I suspect I'd better get moving on...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 May 1999 00:06:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I've got it, now should I commit it? "
},
{
"msg_contents": "> (Yo, Thomas: what's the deadline for 6.5 doco changes? I've got a bunch\n> of doc to-dos that I suspect I'd better get moving on...)\n\nYup. Nominally, I should have frozen about May 15, but I still have\nsome writing I want to do. I can freeze docs in stages; which docs are\nyou planning on touching?\n\nBruce, can we get the sgml version of release notes started?\n\nVadim, you had mentioned some docs for MVCC; where would that show up?\nIf nothing else, we should update ref/lock.sgml and ref/set.sgml to\ncover the grammar changes. And it would be great to have some words\nfor the User's Guide.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 May 1999 15:17:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Last call for docs"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I can freeze docs in stages; which docs are\n> you planning on touching?\n\nSeveral, but for most of them I have only small changes. I will try to\ndo those tonight so that as much as possible can be frozen, and then let\nyou know what I still have to work on.\n\nIs there any equivalent in the SGML docs to the postmaster.1 and\npostgres.1 man pages (specifically, doco for the postmaster/backend\ncommand line switches)? I couldn't find it but maybe it's there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 May 1999 18:58:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Last call for docs "
},
{
"msg_contents": "> Several, but for most of them I have only small changes. I will try to\n> do those tonight so that as much as possible can be frozen, and then let\n> you know what I still have to work on.\n\nGreat.\n\n> Is there any equivalent in the SGML docs to the postmaster.1 and\n> postgres.1 man pages (specifically, doco for the postmaster/backend\n> command line switches)? I couldn't find it but maybe it's there...\n\nNo, but I want to add them by converting the man pages to User's Guide\nreference pages. Will wait a day or two for you to update the man\npages, unless you would prefer to have me convert and then make your\nchanges directly in sgml.\n\nI'm doing a small reorg on the Admin Guide. Most chapters stay the\nsame, but things are streamlined and flow better.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 May 1999 03:53:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last call for docs"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Is there any equivalent in the SGML docs to the postmaster.1 and\n>> postgres.1 man pages (specifically, doco for the postmaster/backend\n>> command line switches)? I couldn't find it but maybe it's there...\n\n> No, but I want to add them by converting the man pages to User's Guide\n> reference pages. Will wait a day or two for you to update the man\n> pages,\n\nAlready committed my changes to the .1 files; you may fire when ready.\n(Unless anyone else has updates to make there?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 09:14:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Last call for docs "
}
] |
[
{
"msg_contents": "Here is the list. Folks, if we are going to release in 13 days, we will\nneed to reduce the size of this list. I started moving some of the\nmajor items to the TODO list, but many are easy fixes that really should\nbe done by 6.5.\n\n---------------------------------------------------------------------------\n\nDefault of '' causes crash in some cases\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\nSELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n\tWorks in GROUP BY\nMake sure pg_internal.init concurrent generation can't cause unreliability\nSELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\nFix function pointer calls to take Datum args for char and int2 args(ecgs)\n\nDo we want pg_dump -z to be the default?\npg_dump of groups fails\npg_dump -o -D does not work, and can not work currently, generate error?\npsql \\d should show precision\ndumping out sequences should not be counted in pg_dump display\n\nMake psql \\help, man pages, and sgml reflect changes in grammar\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\nFuture TODO items\n-----------------\nMake Serial its own type\nAdd support for & operator\nstore binary-compatible type information in the system somewhere \nadd ability to add comments to system tables using table/colname combination\nprocess const=const parts of OR clause in separate pass\nmake oid use oidin/oidout not int4in/int4out in pg_type.h, make oid use\n\tunsigned int more reliably, pg_atoi()\nCREATE VIEW ignores DISTINCT\nMove LIKE index optimization handling to the optimizer?\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nFix leak for expressions?, aggregates?\nImprove LIMIT processing by using index to limit rows processed\nCLUSTER failure if vacuum has not been performed in a while\nCREATE OPERATOR *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nUpdate reltuples from COPY command\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\nImprove NULL parameter passing into functions\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 13:45:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": "> SELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\n\nDoes not reproduce here.\n\ntest=> create table t2 (i text check(i ~ '(foo|bar)'));\nCREATE\ntest=> insert into t1 values ('aaa');\nERROR: ExecAppend: rejected due to CHECK constraint t1_i\ntest=> insert into t1 values ('foo');\nINSERT 18634 1\n---\nTatsuo Ishii\n",
"msg_date": "Wed, 19 May 1999 21:06:56 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> > SELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\n> \n> Does not reproduce here.\n> \n> test=> create table t2 (i text check(i ~ '(foo|bar)'));\n> CREATE\n> test=> insert into t1 values ('aaa');\n> ERROR: ExecAppend: rejected due to CHECK constraint t1_i\n> test=> insert into t1 values ('foo');\n> INSERT 18634 1\n\nThanks. Removed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 12:24:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> resno's, sublevelsup corrupt when reaching rewrite system\n\n Don't remember exactly how I produced them. Haven't seen\n them again after the latest changes in the rule system. I\n think it was due to incorrect handling of unrewritten TLE's\n from group by clauses, which are now pulled out of the main\n targetlist.\n\n> 3 = sum(x) in rewrite system is a problem\n\n Is it? I guess what is meant by this item is the problem of\n the rewriter that it must create subqueries for view\n aggregate columns if they appear in the WHERE clause.\n\n That entire area is a very problematic one. And for sake it\n must wait for after v6.5. Aggregates and GROUP BY in views\n are unsafe and depend on the later usage of the view.\n Consider the following:\n\n CREATE TABLE t1 (a text, b text, c int4);\n CREATE VIEW v1 AS SELECT a, b, sum(c) as n\n FROM t1 GROUP BY a, b;\n CREATE TABLE t2 (a text, b text);\n\n SELECT t2.a, v1.n FROM t2, v1 WHERE t2.a = v1.a\n GROUP BY t2.a;\n\n Due to the new code in the rewriter, adding junk TLE's for\n the view's GROUP BY columns, this doesn't crash the backend\n anymore. The result (IMHO wrong) will return multiple rows\n with same t2.a because the rewritten query reads as:\n\n SELECT t2.a, sum(t1.c) FROM t2, t1\n WHERE t2.a = t1.a GROUP BY t2.a, t1.a, t1.b;\n\n The correct result would be only one row per t2.a with one of\n the possible values of v1.n if a plain SELECT * FROM v1 is\n done. But there's currently no way to express that in a\n querytree.\n\n What's absolutely broken is:\n\n SELECT t2.a, sum(v1.n) FROM t2, v1 WHERE t2.a = v1.a\n GROUP BY t2.a;\n\n This gives totally unpredictable results because after\n rewriting you have cascaded aggregates. And I expected the\n rotten results I've seen from it :-)\n\n I really hope to find the time after v6.5 to implement my\n idea of subselecting RTE's where I can place all those views\n that have these beasty DISTINCT, UNION, GROUP BY and other\n f*ing stuff. The result of a subselecting RTE will be an on-\n the-fly-materialization of the entire view used in a nestloop\n or so (dunno exactly yet). It's expansive - yes - and I don't\n know yet how to pull out restrictions from the WHERE clause\n to make the views subset as small as possible - but AFAICS\n the only fail-safe way to meet the view definition in a\n complex join.\n\n> Future TODO items\n> -----------------\n> CREATE VIEW ignores DISTINCT\n\n Covered above.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 19 May 1999 19:29:40 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > resno's, sublevelsup corrupt when reaching rewrite system\n> \n> Don't remember exactly how I produced them. Haven't seen\n> them again after the latest changes in the rule system. I\n> think it was due to incorrect handling of unrewritten TLE's\n> from group by clauses, which are now pulled out of the main\n> targetlist.\n\nRemoved. I suspected you had fixed it with your last GROUP patch,\nbecause you were addressing this exact area.\n\n> \n> > 3 = sum(x) in rewrite system is a problem\n> \n> Is it? I guess what is meant by this item is the problem of\n> the rewriter that it must create subqueries for view\n> aggregate columns if they appear in the WHERE clause.\n\nThe issue where was that aggregates can't be on the right in some cases.\nTom Lane brought this up.\n\n\n> \n> That entire area is a very problematic one. And for sake it\n> must wait for after v6.5. Aggregates and GROUP BY in views\n> are unsafe and depend on the later usage of the view.\n> Consider the following:\n\nYes, I understand.\n\n> > Future TODO items\n> > -----------------\n> > CREATE VIEW ignores DISTINCT\n> \n> Covered above.\n> \n\nOK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 14:30:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> shift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\n\n Fixed.\n\n The problem was that CursorStmt tried to parse FOR UPDATE\n which is already parsed by SelectStmt.\n\n To fix it I had to add FOR READ ONLY to SelectStmt (returning\n NULL for forUpdate as if empty) and let CursorStmt look at\n there for the elog(). Don't know if FOR READ ONLY is O.K. for\n regular SELECT queries too, but I think it's better to allow\n this than to remove this syntax from CURSOR.\n\n The same error is still in the ecpg parser. AFAICS fixing it\n the same way there would make ecpg accept the DECLARE/FOR\n UPDATE syntax because there is no Query where to look at a\n forUpdate. Any suggestions?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 20 May 1999 15:05:49 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
] |
[
{
"msg_contents": "\nWhile playing with ODBC and MapInfo 5.01, I came across this one:\n\ndrop table \"MAPINFO_MAPCATALOG\";\n\nThis doesn't work as it seems to ignore the quotes, and convert into lower\ncase.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 18 May 1999 21:22:59 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "I thought this was picked up ages ago?"
},
{
"msg_contents": "> \n> While playing with ODBC and MapInfo 5.01, I came across this one:\n> \n> drop table \"MAPINFO_MAPCATALOG\";\n> \n> This doesn't work as it seems to ignore the quotes, and convert into lower\n> case.\n\nWorks in psql, which I think means libpq is OK:\n\n\ttest=> create table \"TT\" (x int);\n\tCREATE\n\ttest=> drop table tt;\n\tERROR: Relation 'tt' does not exist\n\ttest=> drop table \"TT\";\n\tDROP\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 17:34:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought this was picked up ages ago?"
},
{
"msg_contents": "On Tue, 18 May 1999, Bruce Momjian wrote:\n\n> > \n> > While playing with ODBC and MapInfo 5.01, I came across this one:\n> > \n> > drop table \"MAPINFO_MAPCATALOG\";\n> > \n> > This doesn't work as it seems to ignore the quotes, and convert into lower\n> > case.\n> \n> Works in psql, which I think means libpq is OK:\n> \n> \ttest=> create table \"TT\" (x int);\n> \tCREATE\n> \ttest=> drop table tt;\n> \tERROR: Relation 'tt' does not exist\n> \ttest=> drop table \"TT\";\n> \tDROP\n\nI tried it on mine (cvs update from last night) using psql and it failed\n:-(\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Wed, 19 May 1999 00:15:36 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I thought this was picked up ages ago?"
},
{
"msg_contents": "> On Tue, 18 May 1999, Bruce Momjian wrote:\n> \n> > > \n> > > While playing with ODBC and MapInfo 5.01, I came across this one:\n> > > \n> > > drop table \"MAPINFO_MAPCATALOG\";\n> > > \n> > > This doesn't work as it seems to ignore the quotes, and convert into lower\n> > > case.\n> > \n> > Works in psql, which I think means libpq is OK:\n> > \n> > \ttest=> create table \"TT\" (x int);\n> > \tCREATE\n> > \ttest=> drop table tt;\n> > \tERROR: Relation 'tt' does not exist\n> > \ttest=> drop table \"TT\";\n> > \tDROP\n> \n> I tried it on mine (cvs update from last night) using psql and it failed\n> :-(\n> \n\nCan you try this exact example and see if that works?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 19:54:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought this was picked up ages ago?"
},
{
"msg_contents": "On Tue, 18 May 1999, Bruce Momjian wrote:\n\n> > On Tue, 18 May 1999, Bruce Momjian wrote:\n> > \n> > > > \n> > > > While playing with ODBC and MapInfo 5.01, I came across this one:\n> > > > \n> > > > drop table \"MAPINFO_MAPCATALOG\";\n> > > > \n> > > > This doesn't work as it seems to ignore the quotes, and convert into lower\n> > > > case.\n> > > \n> > > Works in psql, which I think means libpq is OK:\n> > > \n> > > \ttest=> create table \"TT\" (x int);\n> > > \tCREATE\n> > > \ttest=> drop table tt;\n> > > \tERROR: Relation 'tt' does not exist\n> > > \ttest=> drop table \"TT\";\n> > > \tDROP\n> > \n> > I tried it on mine (cvs update from last night) using psql and it failed\n> > :-(\n> > \n> \n> Can you try this exact example and see if that works?\n\nHmm weird, as it worked. I retried the experiment, and it worked - even\nthough last night I ended up dropping the database.\n\nWhat I was trying to do was attempt to get MapInfo to use postgresql for\nstoring tables using ODBC. For it to work it needs a table called\nMAPINFO_MAPCATALOG. The problem is the table/column name case. It sends\nthe following query which fails:\n\nSELECT \"SPATIALTYPE\", \"TABLENAME\", \"OWNERNAME\", \"SPATIALCOLUMN\",\n\"DB_X_LL2, \"DB_Y_LL\", \"DB_X_UR\", \"DB_Y_UR\", \"COORDINATESYSTEM\", \"SYMBOL\",\n\"XCOLUMNNAME\", \"YCOLUMNNAME\" FROM \"MAPINFO_MAPCATALOG\" WHERE TABLENAME =\n'wds';\n\nObviously it's missing the \"\" from the where clause...\n\nAh well, back to the drawing board ;-)\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Wed, 19 May 1999 07:02:46 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I thought this was picked up ages ago?"
},
{
"msg_contents": "> What I was trying to do was attempt to get MapInfo to use postgresql for\n> storing tables using ODBC. For it to work it needs a table called\n> MAPINFO_MAPCATALOG. The problem is the table/column name case. It sends\n> the following query which fails:\n> SELECT \"SPATIALTYPE\", \"TABLENAME\", \"OWNERNAME\", \"SPATIALCOLUMN\",\n> \"DB_X_LL2, \"DB_Y_LL\", \"DB_X_UR\", \"DB_Y_UR\", \"COORDINATESYSTEM\", \"SYMBOL\",\n> \"XCOLUMNNAME\", \"YCOLUMNNAME\" FROM \"MAPINFO_MAPCATALOG\" WHERE TABLENAME =\n> 'wds';\n> Obviously it's missing the \"\" from the where clause...\n\nThat's interesting. It's inconsistant SQL, but would work on most\nsystems because they tend to convert unquoted names to upper case\ninternally, whereas Postgres converts them to lower case.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 May 1999 13:06:24 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought this was picked up ages ago?"
}
] |
[
{
"msg_contents": "\nAs I have lately been swamped with work, I must confess note having \nread the mailing-list very constantly so some things may have slipped.\n\nI have some questions about new/missing features in 6.5 that\nI did not find a clear answer to in the changes list:\n\nthe list claims:\n\nNew SELECT FOR UPDATE \n\ndoes this just lock the table, or can I use it within a cursor to do \nUPDATE ... WHERE CURRENT also ?\n\nOther questions \n\n1. What is the state of OUTER JOINS ?\n\n2 Which constraints and constraint ops are (not) supported and how ?\n\n2.1. Are FOREIGN KEYs supported\n2.1.a - in the parser\n2.1.b - in system tables\n2.1.c - actually effective\n\n2.2. Are constraints still 'write once - drop and recreate table if must\nchange',\n or is some of the ALTER TABLE .. ADD/REMOVE/DISABLE CONSRTRAINT ...\nsyntax \n supported\n\n3. Is there a way to get the source code for views,functions,rules, etc.\n Oracle does this by simply keeping the original code (this enables one \n to later recompile them as well if underlying tables change)\n\n4. Are views still dumped as table+view ? I can read it but\n various reverse-engineering tools would like to see the actual view \n definition (could probably be solved by storing the code as well)\n\nBTW, the code storing described in 3,4 could be done quite elegantly and\n(unlike Oracle, which has different ways for different object types) \nconsistently, by using the following table, \n\nCREATE TABLE pg_source_code (\n objoid oid,\n rownr int,\n rowtext text,\n constraint primary key(objoid,rownr)\n);\n\n\n\n---------------\nHannu\n",
"msg_date": "Wed, 19 May 1999 00:36:08 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Q: Features of 6.5"
}
] |
[
{
"msg_contents": "I am working on this now:\n\n\tDefault of '' causes crash in some cases\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 May 1999 18:15:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "DEFAULT '' problem"
}
] |
[
{
"msg_contents": "> I still am unclear which of these are valid SQL:\n> \n> \tselect a as b from test order by a\n> \tselect a as b from test order by b\n> \nBoth are valid, and don't forget the third variant:\n\n> select a as b from test order by 1\n> \nAndreas\n",
"msg_date": "Wed, 19 May 1999 13:36:26 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
},
{
"msg_contents": ">\n> > I still am unclear which of these are valid SQL:\n> >\n> > select a as b from test order by a\n> > select a as b from test order by b\n> >\n> Both are valid, and don't forget the third variant:\n>\n> > select a as b from test order by 1\n> >\n> Andreas\n>\n\n I wonder why this should be valid. Consider the following\n test case:\n\n CREATE TABLE t1 (a int4, b int4);\n SELECT a AS b, b AS a FROM t1 GROUP BY a, b;\n\n Is that now GROUP BY 1,2 or BY 2,1? Without the grouping, it\n is a totally valid statement because the column DISPLAY-names\n given with AS don't affect the rest of it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 19 May 1999 17:08:42 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
}
] |
[
{
"msg_contents": "I have a new version of of PyGreSQL (2.4) ready to release. I am only\nwaiting for 6.5 to be released as the README refers to it. Other than\nthat it is fully released and available for download at the following URL.\n\n ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n\nCan someone update the files in current before 6.5 is released please?\nI'll announce the release concurrent with the announcement for 6.5.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 19 May 1999 07:52:16 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "PyGreSQL 2.4"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> waiting for 6.5 to be released as the README refers to it.\n\nDoes it still work with 6.4.2 ?\n\n---------------------\nHannu\n",
"msg_date": "Wed, 19 May 1999 15:27:16 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "Thus spake Hannu Krosing\n> \"D'Arcy J.M. Cain\" wrote:\n> > I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> > waiting for 6.5 to be released as the README refers to it.\n> \n> Does it still work with 6.4.2 ?\n\nI haven't used anything new so I assume so. In fact, at the last minute I\nchanged the ftp URLs to the various packages to http URLs of the home pages\nso I could go ahead an release now I suppose. The only thing is that the\nNetBSD package that installs it refers to 6.5 which I will submit a change\nfor as soon as 6.5 is released and the new distribution file is available.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 19 May 1999 08:38:50 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "On Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n\n> I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> waiting for 6.5 to be released as the README refers to it. Other than\n> that it is fully released and available for download at the following URL.\n> \n> ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n> \n> Can someone update the files in current before 6.5 is released please?\n\nI could be blind...but...update what where? *raised eyebrow*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 19 May 1999 10:54:02 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "Thus spake The Hermit Hacker\n> On Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n> \n> > I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> > waiting for 6.5 to be released as the README refers to it. Other than\n> > that it is fully released and available for download at the following URL.\n> > \n> > ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n> > \n> > Can someone update the files in current before 6.5 is released please?\n> \n> I could be blind...but...update what where? *raised eyebrow*\n\n../src/interfaces/python\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 19 May 1999 10:00:20 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "\nAh, got it...\n\nOn Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake The Hermit Hacker\n> > On Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n> > \n> > > I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> > > waiting for 6.5 to be released as the README refers to it. Other than\n> > > that it is fully released and available for download at the following URL.\n> > > \n> > > ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n> > > \n> > > Can someone update the files in current before 6.5 is released please?\n> > \n> > I could be blind...but...update what where? *raised eyebrow*\n> \n> ../src/interfaces/python\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 19 May 1999 11:43:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "\ndone...check it and mamke sure its all okay...\n\nOn Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake The Hermit Hacker\n> > On Wed, 19 May 1999, D'Arcy J.M. Cain wrote:\n> > \n> > > I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> > > waiting for 6.5 to be released as the README refers to it. Other than\n> > > that it is fully released and available for download at the following URL.\n> > > \n> > > ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n> > > \n> > > Can someone update the files in current before 6.5 is released please?\n> > \n> > I could be blind...but...update what where? *raised eyebrow*\n> \n> ../src/interfaces/python\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 19 May 1999 11:47:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
},
{
"msg_contents": "Updated.\n\n\n> I have a new version of of PyGreSQL (2.4) ready to release. I am only\n> waiting for 6.5 to be released as the README refers to it. Other than\n> that it is fully released and available for download at the following URL.\n> \n> ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.4.tgz\n> \n> Can someone update the files in current before 6.5 is released please?\n> I'll announce the release concurrent with the announcement for 6.5.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 12:31:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PyGreSQL 2.4"
}
] |
[
{
"msg_contents": "\n> smgr is a generic i/o interface layer that allows multiple storage\n> managers. Currently, we always use DEFAULT_SMGR as a parameter to smgr*\n> functions, causing calls to the md* routines. Is there any value in\n> just removing the smgr layer completely. It was originally for a CD\n> jutebox i/o layer in addition to our current disk i/o layer.\n> \nWouldn't this be the interface for a tablespace i/o manager ?\nA tablespace has the advatage of only needing a number of files\nfor thousands of small tables, and reduces the overhead of many \nopen file handles. A tablespace is also needed before raw devices\ncan be efficiently exploited.\n\nAndreas\n",
"msg_date": "Wed, 19 May 1999 14:16:55 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] sgmr* vs. md*"
},
{
"msg_contents": "> \n> > smgr is a generic i/o interface layer that allows multiple storage\n> > managers. Currently, we always use DEFAULT_SMGR as a parameter to smgr*\n> > functions, causing calls to the md* routines. Is there any value in\n> > just removing the smgr layer completely. It was originally for a CD\n> > jutebox i/o layer in addition to our current disk i/o layer.\n> > \n> Wouldn't this be the interface for a tablespace i/o manager ?\n> A tablespace has the advatage of only needing a number of files\n> for thousands of small tables, and reduces the overhead of many \n> open file handles. A tablespace is also needed before raw devices\n> can be efficiently exploited.\n\nGood point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 12:31:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sgmr* vs. md*"
}
] |
[
{
"msg_contents": "\n> Allow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n> \tWorks in GROUP BY\n> \nNeighter Informix nor Oracle do it, so it is probably not ansi, but it would\n\nbe a very neat feature, especially if you do some arithmetic, \nthe statement gets a lot clearer.\n\nBut it probably adds some complexity:\n\ncreate table a (a int, b int, c int);\nselect a, b as c from a where c=5; \n\nWhich c do you use alias or column ? You prbly need to use the column, \nsince this is how all others work, but would this be intuitive ?\n\nAndreas\n",
"msg_date": "Wed, 19 May 1999 15:01:11 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Thus spake ZEUGSWETTER Andreas IZ5\n> > Allow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n> > \tWorks in GROUP BY\n\n> But it probably adds some complexity:\n> \n> create table a (a int, b int, c int);\n> select a, b as c from a where c=5; \n> \n> Which c do you use alias or column ? You prbly need to use the column, \n> since this is how all others work, but would this be intuitive ?\n\nNot to me. What if I don't know that a c exists in the table, or it is\nadded after creating many scripts? I think we should use the alias in\nthat case. Either that or it should generate an error.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 19 May 1999 09:25:25 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> \n> > Allow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n> > \tWorks in GROUP BY\n> > \n> Neighter Informix nor Oracle do it, so it is probably not ansi, but it would\n> \n> be a very neat feature, especially if you do some arithmetic, \n> the statement gets a lot clearer.\n> \n> But it probably adds some complexity:\n> \n> create table a (a int, b int, c int);\n> select a, b as c from a where c=5; \n> \n> Which c do you use alias or column ? You prbly need to use the column, \n> since this is how all others work, but would this be intuitive ?\n\nThat is an excellent point. GROUP BY has to use a column name, and they\nhave to be unique, while WHERE does not require stuff to be in the\ntarget list, so there is a change of ambiguity. I am going to remove\nthe item from the list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 May 1999 12:34:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> That is an excellent point. GROUP BY has to use a column name, and they\n> have to be unique, while WHERE does not require stuff to be in the\n> target list, so there is a change of ambiguity. I am going to remove\n> the item from the list.\n\nGood point --- consider this:\n\tSELECT a, b AS a FROM tt GROUP BY a;\nWe do get it right: \"ERROR: GROUP BY 'a' is ambiguous\".\nWhereas in\n\tSELECT a, b AS a FROM tt WHERE a = 1;\nthe WHERE clause is taken as referring to the \"real\" column a.\n\nSo, unless there's some violation of spec behavior here, there is a\nreason for GROUP BY to behave differently from WHERE. I think I was\nthe one who complained that they were different --- I withdraw the\ncomplaint.\n\nBTW, which behavior should ORDER BY exhibit? I find that\n\tSELECT a, b AS a FROM tt ORDER BY a;\nis accepted and 'a' is taken to be the real column a. Considering that\nORDER BY is otherwise much like GROUP BY, I wonder whether it shouldn't\ncomplain that 'a' is ambiguous...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 May 1999 16:40:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
}
] |
[
{
"msg_contents": "Hi all,\n\nI think I have found a bug in regexp based selections.\nWatch this :\n\ncreate table regdemo (fld1 varchar(32));\nCREATE\ninsert into regdemo values('410');\nINSERT 726409 1\ninsert into regdemo values('7410');\nINSERT 726410 1\ninsert into regdemo values('source');\nINSERT 726411 1\ninsert into regdemo values('destination');\nINSERT 726412 1\nselect * from regdemo where fld1 ~* '^sou|^des';\nfld1\n-----------\nsource\ndestination\n(2 rows)\n\nselect * from regdemo where fld1 ~* '41|^des';\nfld1\n-----------\n410\n7410\ndestination\n(3 rows)\n\nselect * from regdemo where fld1 ~* '^41|^des';\nfld1\n----\n(0 rows)\n\n^^^^^^^^^^^^^^\n!?!?!?!\nI thought it should return '410' and 'destination' rows. But it returns\nnothing!\nThe first select example with ^ in both variants ( ^sou|^des ) works !!!\nThe last one ( ^41|^des ) don't !\n\nAm I missing something?\nI am getting the same result also on 6.4.2 and 6.5 beta 1 versions!\n\nBest regards,\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Wed, 19 May 1999 13:49:57 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Broken select on regular expression !!!"
},
{
"msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> select * from regdemo where fld1 ~* '^41|^des';\n> fld1\n> ----\n> (0 rows)\n\n> ^^^^^^^^^^^^^^\n> !?!?!?!\n\nI see it too. Even more interesting is that these variants are OK:\n\nregression=> select * from regdemo where fld1 ~* '^des|^41';\nfld1\n-----------\n410\ndestination\n(2 rows)\n\nregression=> select * from regdemo where fld1 ~* '(^41)|(^des)';\nfld1\n-----------\n410\ndestination\n(2 rows)\n\nAnd if you want *really* disturbing:\n\nregression=> select * from regdemo where fld1 ~* '^sou|^des';\nfld1\n-----------\nsource\ndestination\n(2 rows)\n\nregression=> select * from regdemo where fld1 ~ '^sou|^des';\nfld1\n----\n(0 rows)\n\nSomething is rotten in the state of Denmark...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 May 1999 10:32:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": ">> select * from regdemo where fld1 ~* '^41|^des';\n>> fld1\n>> ----\n>> (0 rows)\n>\n>> ^^^^^^^^^^^^^^\n>> !?!?!?!\n>\n>I see it too. Even more interesting is that these variants are OK:\n>\n>regression=> select * from regdemo where fld1 ~* '^des|^41';\n>fld1\n>-----------\n>410\n>destination\n>(2 rows)\n>\n>regression=> select * from regdemo where fld1 ~* '(^41)|(^des)';\n>fld1\n>-----------\n>410\n>destination\n>(2 rows)\n>\n>And if you want *really* disturbing:\n>\n>regression=> select * from regdemo where fld1 ~* '^sou|^des';\n>fld1\n>-----------\n>source\n>destination\n>(2 rows)\n>\n>regression=> select * from regdemo where fld1 ~ '^sou|^des';\n>fld1\n>----\n>(0 rows)\n>\n>Something is rotten in the state of Denmark...\n\nThese all oddness are caused by the parser (makeIndexable). When\nmakeIndexable sees ~* '^41|^des' , it tries to rewrite the target\nregexp so that an index can be used. The rewritten query might be\nsomething like:\n\nfld1 ~* '^41|^des' and fld1 >= '41|^' and fld1 <= '41|^\\377'\n\nApparently this is wrong. This is because makeIndexable does not\nunderstand '|' and '^' appearing in the middle of the regexp. On the\nother hand, \n\n>regression=> select * from regdemo where fld1 ~* '^des|^41';\n>regression=> select * from regdemo where fld1 ~* '^sou|^des';\n\nwill work since makeIndexable gave up the optimization if the op is\n\"~*\" and a letter appearing right after '^' is *alphabet*.\n\nNote that:\n\n>regression=> select * from regdemo where fld1 ~ '^sou|^des';\n\nwill not work because the op is *not* \"~*\".\n\nIt seems that the only solution is checking '|' to see if it appears\nin the target regexp and giving up the optimization in that case.\n\nOne might think that ~* '^41|^des' can be rewritten like:\n\nfld1 ~* '^41' or fld1 ~* '^des'\n\nFor me this seems not to be a good idea. To accomplish this, we have\nto deeply parse the regexp (consider that we might have arbitrary\ncomplex regexps) and such kind thing is a job regexp() shoud\ndo.\n\nComments?\n---\nTatsuo Ishii\n\n\n",
"msg_date": "Fri, 21 May 1999 10:57:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Something is rotten in the state of Denmark...\n\n> These all oddness are caused by the parser (makeIndexable).\n\nAh-hah, I'm sure you're right. That makes *two* serious bugs in\nmakeIndexable. (We still don't have an adequate solution\nfor its known problem in non-C locales...)\n\n> It seems that the only solution is checking '|' to see if it appears\n> in the target regexp and giving up the optimization in that case.\n\nI'm feeling a strong urge to just rip out makeIndexable until\nit can be redesigned... how many other problems has it got\nthat we haven't stumbled upon?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 23:19:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": "> These all oddness are caused by the parser (makeIndexable). When\n> makeIndexable sees ~* '^41|^des' , it tries to rewrite the target\n> regexp so that an index can be used. The rewritten query might be\n> something like:\n> \n> fld1 ~* '^41|^des' and fld1 >= '41|^' and fld1 <= '41|^\\377'\n> \n> Apparently this is wrong. This is because makeIndexable does not\n> understand '|' and '^' appearing in the middle of the regexp. On the\n> other hand, \n> \n> >regression=> select * from regdemo where fld1 ~* '^des|^41';\n> >regression=> select * from regdemo where fld1 ~* '^sou|^des';\n> \n> will work since makeIndexable gave up the optimization if the op is\n> \"~*\" and a letter appearing right after '^' is *alphabet*.\n> \n> Note that:\n> \n> >regression=> select * from regdemo where fld1 ~ '^sou|^des';\n> \n> will not work because the op is *not* \"~*\".\n> \n> It seems that the only solution is checking '|' to see if it appears\n> in the target regexp and giving up the optimization in that case.\n> \n> One might think that ~* '^41|^des' can be rewritten like:\n> \n> fld1 ~* '^41' or fld1 ~* '^des'\n> \n> For me this seems not to be a good idea. To accomplish this, we have\n> to deeply parse the regexp (consider that we might have arbitrary\n> complex regexps) and such kind thing is a job regexp() shoud\n> do.\n\nAgain very clear, and caused by the indexing of regex's, as you suggest.\nI can easily look for '|' in the string, and skip the optimization. Is\nthat the only special case I need to add?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 May 1999 23:21:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> Something is rotten in the state of Denmark...\n> \n> > These all oddness are caused by the parser (makeIndexable).\n> \n> Ah-hah, I'm sure you're right. That makes *two* serious bugs in\n> makeIndexable. (We still don't have an adequate solution\n> for its known problem in non-C locales...)\n> \n> > It seems that the only solution is checking '|' to see if it appears\n> > in the target regexp and giving up the optimization in that case.\n> \n> I'm feeling a strong urge to just rip out makeIndexable until\n> it can be redesigned... how many other problems has it got\n> that we haven't stumbled upon?\n\nBut if we rip it out, people will complain we are not using the index\nfor regex and LIKE. That is a pretty serious complaint.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 00:08:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> These all oddness are caused by the parser (makeIndexable). When\n> makeIndexable sees ~* '^41|^des' , it tries to rewrite the target\n> regexp so that an index can be used. The rewritten query might be\n> something like:\n> \n> fld1 ~* '^41|^des' and fld1 >= '41|^' and fld1 <= '41|^\\377'\n\nI have just applied a fix to gram.y that should fix this case. Please\nlet me know how it works at your site. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 00:39:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": ">Again very clear, and caused by the indexing of regex's, as you suggest.\n>I can easily look for '|' in the string, and skip the optimization. Is\n>that the only special case I need to add?\n\nWhat about '{' ?\n---\nTatsuo Ishii\n\n",
"msg_date": "Fri, 21 May 1999 14:04:23 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": ">I have just applied a fix to gram.y that should fix this case. Please\n>let me know how it works at your site. Thanks.\n\nIt works great!\n\n(This is FreeBSD 2.2.6-RELEASE)\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 21 May 1999 14:08:52 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": "At 12:08 AM 5/21/99 -0400, Bruce Momjian wrote:\n\n>But if we rip it out, people will complain we are not using the index\n>for regex and LIKE. That is a pretty serious complaint.\n\nFor this release, given that it's coming fairly soon (11 days???)\nyou might consider documenting the shortcomings, rather than\nripping it out. Indexing is important, serious users will expect\nit, at least for LIKE (they might not know about regex if they're\njust SQL grunts).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 20 May 1999 22:13:15 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> >Again very clear, and caused by the indexing of regex's, as you suggest.\n> >I can easily look for '|' in the string, and skip the optimization. Is\n> >that the only special case I need to add?\n> \n> What about '{' ?\n\nDoes it understand {? Man, what kind of regex library do we have?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 01:52:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> At 12:08 AM 5/21/99 -0400, Bruce Momjian wrote:\n> \n> >But if we rip it out, people will complain we are not using the index\n> >for regex and LIKE. That is a pretty serious complaint.\n> \n> For this release, given that it's coming fairly soon (11 days???)\n> you might consider documenting the shortcomings, rather than\n> ripping it out. Indexing is important, serious users will expect\n> it, at least for LIKE (they might not know about regex if they're\n> just SQL grunts).\n\nI didn't even know our ~ operator supported '|'! Now I am told it know\nabout '{' too. I am checking on that one.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 01:53:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": ">> >Again very clear, and caused by the indexing of regex's, as you suggest.\n>> >I can easily look for '|' in the string, and skip the optimization. Is\n>> >that the only special case I need to add?\n>> \n>> What about '{' ?\n>\n>Does it understand {? Man, what kind of regex library do we have?\n\nI vaguely recall that we used to support only \"basic\" regex. At least\nI thought so. Now looking into the source, I found we have supported\n\"extended\" regex. FYI, our regex routines definitely supprt '{'. See\nbackend/regex/re_format.7.\n\nP.S.\tI will commit a small regex test program under\nbackend/regex for the testing purpose. \n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 21 May 1999 15:19:20 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!! "
},
{
"msg_contents": "> I vaguely recall that we used to support only \"basic\" regex. At least\n> I thought so. Now looking into the source, I found we have supported\n> \"extended\" regex. FYI, our regex routines definitely supprt '{'. See\n> backend/regex/re_format.7.\n\nDidn't even know that man page existed. It sure would be nice to have\nit in the main docs, perhaps in the chapter on operators...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 May 1999 13:09:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> >> >Again very clear, and caused by the indexing of regex's, as you suggest.\n> >> >I can easily look for '|' in the string, and skip the optimization. Is\n> >> >that the only special case I need to add?\n> >> \n> >> What about '{' ?\n> >\n> >Does it understand {? Man, what kind of regex library do we have?\n> \n> I vaguely recall that we used to support only \"basic\" regex. At least\n> I thought so. Now looking into the source, I found we have supported\n> \"extended\" regex. FYI, our regex routines definitely supprt '{'. See\n> backend/regex/re_format.7.\n> \n> P.S.\tI will commit a small regex test program under\n> backend/regex for the testing purpose. \n\nI have just commited a fix to skip {} too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 11:46:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "\nFixed in 6.5 final.\n\n\n> Hi all,\n> \n> I think I have found a bug in regexp based selections.\n> Watch this :\n> \n> create table regdemo (fld1 varchar(32));\n> CREATE\n> insert into regdemo values('410');\n> INSERT 726409 1\n> insert into regdemo values('7410');\n> INSERT 726410 1\n> insert into regdemo values('source');\n> INSERT 726411 1\n> insert into regdemo values('destination');\n> INSERT 726412 1\n> select * from regdemo where fld1 ~* '^sou|^des';\n> fld1\n> -----------\n> source\n> destination\n> (2 rows)\n> \n> select * from regdemo where fld1 ~* '41|^des';\n> fld1\n> -----------\n> 410\n> 7410\n> destination\n> (3 rows)\n> \n> select * from regdemo where fld1 ~* '^41|^des';\n> fld1\n> ----\n> (0 rows)\n> \n> ^^^^^^^^^^^^^^\n> !?!?!?!\n> I thought it should return '410' and 'destination' rows. But it returns\n> nothing!\n> The first select example with ^ in both variants ( ^sou|^des ) works !!!\n> The last one ( ^41|^des ) don't !\n> \n> Am I missing something?\n> I am getting the same result also on 6.4.2 and 6.5 beta 1 versions!\n> \n> Best regards,\n> -- \n> Constantin Teodorescu\n> FLEX Consulting Braila, ROMANIA\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 14:25:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
},
{
"msg_contents": "> >Again very clear, and caused by the indexing of regex's, as you suggest.\n> >I can easily look for '|' in the string, and skip the optimization. Is\n> >that the only special case I need to add?\n> \n> What about '{' ?\n\nI dealt with this before 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 14:55:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Broken select on regular expression !!!"
}
] |
[
{
"msg_contents": "\nPlease remove [email protected] from [email protected]\nand/or [email protected]\n\nSean Rouse\nUNIX Systems Administrator\nemail: [email protected]\nvoice: 925-952-2531\n\n",
"msg_date": "Wed, 19 May 1999 09:49:50 -0700",
"msg_from": "Sean Rouse <[email protected]>",
"msg_from_op": true,
"msg_subject": "remove an address from your mailing lists"
}
] |
[
{
"msg_contents": "\n> >\n> > > I still am unclear which of these are valid SQL:\n> > >\n> > > select a as b from test order by a\n> > > select a as b from test order by b\n> > >\n> > Both are valid, and don't forget the third variant:\n> >\n> > select a as b from test order by 1\n> >\n> > Andreas\n> >\n> \n> I wonder why this should be valid. Consider the following\n> test case:\n> \n> CREATE TABLE t1 (a int4, b int4);\n> SELECT a AS b, b AS a FROM t1 GROUP BY a, b;\n> \n> Is that now GROUP BY 1,2 or BY 2,1? Without the grouping, it\n> \nThe order of the columns in a group by don't affect the result.\nIt will affect the sort order, but without an order by, the order is \nimplementation depentent and not guaranteed by ANSI. \n\n> is a totally valid statement because the column DISPLAY-names\n> given with AS don't affect the rest of it.\n> \nResumee:\n\tgroup by and where ignores alias completely (in Oracle and Informix)\n\torder by uses alias\n\t\t(only if unambiguous in Informix, alias precedes column name\nin Oracle)\n\nSo I guess our group by code does it different, than all others :-(\n\nAt last what about this, even if it is how the others do it, it is not\nconsistent with\nour group by:\nregression=> select a as b, b as c from a where c=3;\nb|c\n-+-\n3|1\n(1 row)\n\nDoes anyone know what standard says ?\n\nAndreas\n",
"msg_date": "Wed, 19 May 1999 19:36:16 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some progress on INSERT/SELECT/GROUP BY bugs"
}
] |
[
{
"msg_contents": "I created a user-defined type and tried to use it as the primary key\nbut got the following error.\n\nERROR: Can't find a default operator class for type 18594.\n\nSo, how do I create a default operator class?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 19 May 1999 15:49:57 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "default operator class for user-defined types"
}
] |
[
{
"msg_contents": "Greetings\nLooking for how to build a table with a built in unique sequential numeric\nkey (primary optional) and then copy from a flat file to that same field.\nCan not see on the _CREATE TABLE_ nor the _TYPE_ how to do so. Is the OID\nmentioned in the _COPY_ a possible.\n\nI wish to have a new unique ID for any additions to the table without\nhaveing to programmatically create one. This is done in other databases so\nI am sure it is available in Postgresql. Thank you for you help in this\nslow learning one.\n\n\n\n--\nE Westfield\nSCRIBE Network Coordinator\nhttp://www.faire.net/SCRIBE\n\n\n\n",
"msg_date": "Wed, 19 May 1999 15:07:03 -0500",
"msg_from": "\"E Westfield\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Looking for Mr. Autonum"
}
] |
[
{
"msg_contents": "Yeah, you're looking for a \"sequence\" IE:\n\nCREATE SEQUENCE seq_Users_NDX;\n\nCREATE TABLE Users (\n User_NDX Integer default nextval('seq_Users_NDX),\n UserName Text\n);\n\nThen you would insert into the table using:\nINSERT into Users (UserName) VALUES ('edwestfield', 'ctassell');\n\nThe only problem is that it doesn't work with COPY, only INSERT. When\ncopying large amounts of data into a table, I just write a simple PERL/C\nprogram that gets the nextval of the sequence, locks the table, copies in\nall the records incrementing the internal sequence counter for each record,\nthen uses setval (I think that's the name of the function) to set the\nsequence to whatever the last value was I used, and unlock the table. Not\npretty, but it works. :) \n\n\n\n\nAt 05:07 PM 5/19/99, E Westfield wrote:\n>Greetings\n>Looking for how to build a table with a built in unique sequential numeric\n>key (primary optional) and then copy from a flat file to that same field.\n>Can not see on the _CREATE TABLE_ nor the _TYPE_ how to do so. Is the OID\n>mentioned in the _COPY_ a possible.\n>\n>I wish to have a new unique ID for any additions to the table without\n>haveing to programmatically create one. This is done in other databases so\n>I am sure it is available in Postgresql. Thank you for you help in this\n>slow learning one.\n>\n>\n>\n>--\n>E Westfield\n>SCRIBE Network Coordinator\n>http://www.faire.net/SCRIBE\n>\n>\n>\n> \n",
"msg_date": "Wed, 19 May 1999 17:48:48 -0300",
"msg_from": "Charles Tassell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Looking for Mr. Autonum"
}
] |
[
{
"msg_contents": "I did some more thinking on this. The logic is, that where and group by \naffect the result set of a select, but order by (and col. alias) only affect\n\nresult presentation. This is why order by shoud look at the alias first,\nwhile everything else should look at the real column name first. \n(where, group, having ...)\n\n> Good point --- consider this:\n> \tSELECT a, b AS a FROM tt GROUP BY a;\n> We do get it right: \"ERROR: GROUP BY 'a' is ambiguous\".\n> \nThis is wrong, it should use the real column (all other DBMS do this).\n\n> Whereas in\n> \tSELECT a, b AS a FROM tt WHERE a = 1;\n> the WHERE clause is taken as referring to the \"real\" column a.\n> \ngood\n\n> So, unless there's some violation of spec behavior here, there is a\n> reason for GROUP BY to behave differently from WHERE. I think I was\n> the one who complained that they were different --- I withdraw the\n> complaint.\n> \nNo, group by and where have to be the same. Your oringinal complaint was\njustified.\n\n> BTW, which behavior should ORDER BY exhibit? I find that\n> \tSELECT a, b AS a FROM tt ORDER BY a;\n> is accepted and 'a' is taken to be the real column a. Considering that\n> ORDER BY is otherwise much like GROUP BY, I wonder whether it shouldn't\n> complain that 'a' is ambiguous...\n> \nThis is wrong, order by needs to use the alias.\n\nI therefore see the following for TODO:\n\tuse alias before column for order by \t\t-- very important\n(currently wrong)\n\tuse real column name before alias for group by \t-- important\n(currently does elog)\n\tuse alias in where iff it is unambiguous\t\t-- feature,\nnot important\n\nOn the other hand, anyone really using such ambiguous names\ndeserves unpredictable results anyway :-)\n\nAndreas\n",
"msg_date": "Thu, 20 May 1999 08:58:14 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Thus spake ZEUGSWETTER Andreas IZ5\n> > Good point --- consider this:\n> > \tSELECT a, b AS a FROM tt GROUP BY a;\n> > We do get it right: \"ERROR: GROUP BY 'a' is ambiguous\".\n> > \n> This is wrong, it should use the real column (all other DBMS do this).\n\nRegardless of what the others do, I prefer our behaviour better. What if\nthe column is not in the select list and perhaps is added to the database\ntable later? It seems wrong to me that the behaviour of this select\nshould change if a column, perhaps not relevant to the program doing\nthe select, is added. I would prefer that it fail so I could investigate\nit to see what I have to change.\n\n> > Whereas in\n> > \tSELECT a, b AS a FROM tt WHERE a = 1;\n> > the WHERE clause is taken as referring to the \"real\" column a.\n> > \n> good\n\nWell, I don't care only because someone would be nuts to write this. :-)\n\n> > BTW, which behavior should ORDER BY exhibit? I find that\n> > \tSELECT a, b AS a FROM tt ORDER BY a;\n> > is accepted and 'a' is taken to be the real column a. Considering that\n> > ORDER BY is otherwise much like GROUP BY, I wonder whether it shouldn't\n> > complain that 'a' is ambiguous...\n> > \n> This is wrong, order by needs to use the alias.\n\nI agree but I wouldn't complain if it gave an error.\n\n> \n> I therefore see the following for TODO:\n> \tuse alias before column for order by \t\t-- very important\n> (currently wrong)\n\nYep.\n\n> \tuse real column name before alias for group by \t-- important\n> (currently does elog)\n\nI prefer the current behaviour.\n\n> \tuse alias in where iff it is unambiguous\t\t-- feature,\n> not important\n\nYes.\n\n> On the other hand, anyone really using such ambiguous names\n> deserves unpredictable results anyway :-)\n\nAbsolutely. My feeling is that if the select is unambiguous and self\nconsistent, the intuitive thing should happen. This means that as\nlong as they don't make alias names that conflict with column names\nthat are selected (meaning all column names if '*' is selected) then\nthe alias should always be taken over the unselected column name.\nI am less concerned about the behaviour when the select is ambiguous\non the face of it.\n\nOf course, we should follow the standard wherever it has something to\nsay on the subject but let's not be overly concerned about what others\ndo in this situation. If it's a real problem then let's just elog any\nambiguity and document our reasons for doing so.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 20 May 1999 08:15:46 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Thus spake ZEUGSWETTER Andreas IZ5\n>> This is wrong, it should use the real column (all other DBMS do this).\n\n> Regardless of what the others do, I prefer our behaviour better.\n\nEr, I think what actually counts is what the SQL92 spec says ...\nbut I haven't got a copy to look at.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 09:18:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
}
] |
[
{
"msg_contents": "Example:\n\n% sql\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: david\n\ndavid=> create user sss;\nCREATE USER\ndavid=> select * from pg_shadow;\nusename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil \n--------+--------+-----------+--------+--------+---------+------+----------------------------\npostgres| 502|t |t |t |t | |Sat Jan 31 07:00:00 2037 CET\ndavid | 501|t |t |t |t | | \nsss | 503|f |t |f |t | | \n(3 rows)\n\ndavid=> create table test ( i int );\nCREATE\ndavid=> grant all on test to sss;\nCHANGE\ndavid=> \\z test\nDatabase = david\n +----------+--------------------------+\n | Relation | Grant/Revoke Permissions |\n +----------+--------------------------+\n | test | {\"=\",\"sss=arwR\"} |\n +----------+--------------------------+\ndavid=> drop user sss; \nDROP USER\ndavid=> \\z test\nDatabase = david\n +----------+--------------------------+\n | Relation | Grant/Revoke Permissions |\n +----------+--------------------------+\n | test | {\"=\",\"503=arwR\"} |\n +----------+--------------------------+\n\n\nAll rights for user 'sss' remains there (but now identified by\nid=503). I'am not sure, if this is error, but it is dangerous.\n ('createuser' with id=503 will grant all rights to new user)\n\n David\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "20 May 1999 11:30:34 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "drop user doesn't remove rights from tables ..."
},
{
"msg_contents": "> david=> create user sss;\n> CREATE USER\n> david=> select * from pg_shadow;\n> usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil \n> --------+--------+-----------+--------+--------+---------+------+----------------------------\n> postgres| 502|t |t |t |t | |Sat Jan 31 07:00:00 2037 CET\n> david | 501|t |t |t |t | | \n> sss | 503|f |t |f |t | | \n> (3 rows)\n> \n> david=> create table test ( i int );\n> CREATE\n> david=> grant all on test to sss;\n> CHANGE\n> david=> \\z test\n> Database = david\n> +----------+--------------------------+\n> | Relation | Grant/Revoke Permissions |\n> +----------+--------------------------+\n> | test | {\"=\",\"sss=arwR\"} |\n> +----------+--------------------------+\n> david=> drop user sss; \n> DROP USER\n> david=> \\z test\n> Database = david\n> +----------+--------------------------+\n> | Relation | Grant/Revoke Permissions |\n> +----------+--------------------------+\n> | test | {\"=\",\"503=arwR\"} |\n> +----------+--------------------------+\n> \n> \n> All rights for user 'sss' remains there (but now identified by\n> id=503). I'am not sure, if this is error, but it is dangerous.\n> ('createuser' with id=503 will grant all rights to new user)\n\nThis has been pointed out before. Not sure how to deal with it.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 14:32:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] drop user doesn't remove rights from tables ..."
}
] |
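A defensive sequence that avoids the dangling 503=arwR entry shown above is to revoke the grants before dropping the user; a minimal sketch using the table and user from David's session:

    REVOKE ALL ON test FROM sss;
    DROP USER sss;
    \z test

After the REVOKE, \z should show only the {"="} entry, so no stale numeric id is left behind to be inherited by the next user created with id 503.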
[
{
"msg_contents": "Ok all here is an question for you.\n\nI am running pgsql on a linux 5.1 box.\n\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: postgres\n\npostgres=> create table smile (\npostgres-> s1 integer,\npostgres-> s2 integer );\nCREATE\npostgres=> insert into smile values ( 1 , 2 );\nINSERT 17866 1\npostgres=> insert into smile values ( 3 , 4 );\nINSERT 17867 1\npostgres=> select * from smile\npostgres-> ;\ns1|s2\n--+--\n 1| 2\n 3| 4\n(2 rows)\n\npostgres=> create function ttt() returns integer \npostgres-> as 'select 4 as result'\npostgres-> language 'sql' ;\nCREATE\npostgres=> select * from smile where s2=ttt() ;\ns1|s2\n--+--\n 3| 4\n(1 row)\n\npostgres=> create trigger trg1 after insert on smile for each row\npostgres-> execute procedure ttt() ;\nERROR: CreateTrigger: function ttt () does not exist\npostgres=> \\q\n\n\n\nSo my question is - why does the create trigger function fail when the\nfunction does in\nfact exist ?\n\n\nAndrew\n",
"msg_date": "Thu, 20 May 1999 14:15:52 +0100",
"msg_from": "\"Blyth A J C (Comp)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "It is doing my head in"
},
{
"msg_contents": "> So my question is - why does the create trigger function fail when the\n> function does in\n> fact exist ?\n\n In fact - it does NOT exist!\n\n First of all, the builtin 'sql' language cannot be used to\n create triggers. This must be done in C or one of the\n procedural languages PL/pgSQL and PL/Tcl.\n\n The reason why the function doesn't exist is because a\n trigger procedure is a function declared with no arguments\n and a return type of OPAQUE. Except for the C language,\n functions in PostgreSQL can be overloaded. Multiple different\n functions can have the same name as long as their arguments\n differ.\n\n In reality trigger procedures take arguments. They are\n defined at CREATE TRIGGER time. And they return one or no\n database row of the table they are actually fired for.\n\n The documentation how to create triggers is in chapters 11\n and 13 of the PostgreSQL programmers manual.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 20 May 1999 15:40:46 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] It is doing my head in"
}
] |
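A minimal working version of what Andrew was after, rewritten along the lines Jan describes; this sketch assumes the plpgsql language has already been installed in the database (see the createlang discussion elsewhere on this list), and the function name is illustrative:

    create function ttt_trig() returns opaque as '
    begin
        -- a trigger procedure sees the inserted row as NEW
        -- and must return a row (or NULL)
        return new;
    end;
    ' language 'plpgsql';

    create trigger trg1 after insert on smile
        for each row execute procedure ttt_trig();

Note the empty argument list and the OPAQUE return type -- that is the signature CREATE TRIGGER looks up.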
[
{
"msg_contents": "\nIt's not free, but Applixware works with postgresql via postodbc. I\nhave some detailed setup information (Thanks Tom!) at:\n\n\thttp://www.radix.net/~cobrien/applix\n\nThere is a query tool (Data) that lets you set up queries\nwithout using sql, and also update tables. Once queries are\nset up, you can import the queries either into documents or\nspreadsheets. Applixware is $99 for Linux, runs on intel, alpha,\nand powerpc. Pretty good support through the mailing list.\n\nMore at http://linux.applix.com. I don't work for them, just\na happy user.\n\nDrop me a line if you have any questions.\n\n--cary\n\nCary O'Brien\[email protected]\n\n\n\n",
"msg_date": "Thu, 20 May 1999 09:30:10 -0400 (EDT)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Off topic - ref spreadsheet"
}
] |
[
{
"msg_contents": "\n\n> postgres=> create trigger trg1 after insert on smile for each row\n> postgres-> execute procedure ttt() ;\n> ERROR: CreateTrigger: function ttt () does not exist\n> postgres=> \\q\n> \n> So my question is - why does the create trigger function fail when the\n> function does in\n> fact exist ?\n> \nThe procedure called from a trigger has to return opaque (the triggering\ntuple).\nThe elog could probably be modified to:\n> ERROR: CreateTrigger: function ttt () returning opaque does not exist\nto help find your error.\n\nAndreas\n",
"msg_date": "Thu, 20 May 1999 15:30:41 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] It is doing my head in"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> writes:\n> The procedure called from a trigger has to return opaque (the triggering\n> tuple).\n> The elog could probably be modified to:\n>> ERROR: CreateTrigger: function ttt () returning opaque does not exist\n> to help find your error.\n\nI think two separate messages would be better... will fix it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 09:57:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] It is doing my head in "
}
] |
[
{
"msg_contents": "\nOk, I'm trying to finish cleaning up libpq++ and with v6.4.0 and with \nthe snapshot I grabbed a few minutes ago I get the same thing.\n\nThe library builds and installs fine.\nThe examples build fine.\nThe examples run fine UNTIL they end. After the program ends and the\ndestructor finishes I get a core dump from a seg fault. Running it in\nthe debugger (the library and the example are both compiled with -g) I \nget this for a backtrace:\n\n(gdb) bt\n#0 0x1000000 in ?? ()\n#1 0x38000000 in ?? ()\nError accessing memory address 0x7ec1e: Invalid argument.\n(gdb) \n\nAny ideas where to look? This is on FreeBSD 2.2.6 (can't upgrade it \njust yet - it's too busy).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 20 May 1999 10:58:10 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Blowing core - anyone have any ideas?"
},
{
"msg_contents": ">\n>\n> Ok, I'm trying to finish cleaning up libpq++ and with v6.4.0 and with\n> the snapshot I grabbed a few minutes ago I get the same thing.\n>\n> The library builds and installs fine.\n> The examples build fine.\n> The examples run fine UNTIL they end. After the program ends and the\n> destructor finishes I get a core dump from a seg fault. Running it in\n> the debugger (the library and the example are both compiled with -g) I\n> get this for a backtrace:\n>\n> (gdb) bt\n> #0 0x1000000 in ?? ()\n> #1 0x38000000 in ?? ()\n> Error accessing memory address 0x7ec1e: Invalid argument.\n> (gdb)\n>\n> Any ideas where to look? This is on FreeBSD 2.2.6 (can't upgrade it\n> just yet - it's too busy).\n\n Unfortunately I can't compile the examples. I'm not very\n familiar with C++ and the cryptic error messages I get about\n \"undefined\" references surely result from some wrong shared\n lib setups here.\n\n Anyway - the above tells that something corrupted the stack\n and that the core is mostly useless (except for including it\n into replies to spam mail).\n\n The destructor eventually calls PQuntrace(), PQclear() and\n PQfinish(). Try setting breakpoints on them and then single\n step until shit happens.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 20 May 1999 20:05:45 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Blowing core - anyone have any ideas?"
},
{
"msg_contents": "\nOn 20-May-99 Jan Wieck wrote:\n>>\n>>\n>> Ok, I'm trying to finish cleaning up libpq++ and with v6.4.0 and with\n>> the snapshot I grabbed a few minutes ago I get the same thing.\n>>\n>> The library builds and installs fine.\n>> The examples build fine.\n>> The examples run fine UNTIL they end. After the program ends and the\n>> destructor finishes I get a core dump from a seg fault. Running it in\n>> the debugger (the library and the example are both compiled with -g) I\n>> get this for a backtrace:\n>>\n>> (gdb) bt\n>> #0 0x1000000 in ?? ()\n>> #1 0x38000000 in ?? ()\n>> Error accessing memory address 0x7ec1e: Invalid argument.\n>> (gdb)\n>>\n>> Any ideas where to look? This is on FreeBSD 2.2.6 (can't upgrade it\n>> just yet - it's too busy).\n> \n> Unfortunately I can't compile the examples. I'm not very\n> familiar with C++ and the cryptic error messages I get about\n> \"undefined\" references surely result from some wrong shared\n> lib setups here.\n> \n> Anyway - the above tells that something corrupted the stack\n> and that the core is mostly useless (except for including it\n> into replies to spam mail).\n> \n> The destructor eventually calls PQuntrace(), PQclear() and\n> PQfinish(). Try setting breakpoints on them and then single\n> step until shit happens.\n\nActually I ended up making some progress. It's not fixed, but I made\nsome progress.\n\nAll the examples compiled and ran before, the only difference was that\nI discovered they weren't much of an example since they didn't use the\nmain header file or the installed libraries. They used the libraries\nin the source tree and the headers of the individual files that make up\nlibpq++. So I took the original libpq++.H (horribly out of date) and \nreworked it to the current and individual headers. And I changed the\nmakefile to not look into the source tree but rather into the installed\nheader and library. That's when it started blowing core.\n\nRight before I stopped for the day (but may delve back into it tonite)\nI created a new header (called libpq+++.H) consisting of just pgdatabase.h\nand pgconnection.h (pgenv.h is gone now). Compiled and ran - did NOT\nblow up!!! So it looks like it's got something to do with that header. \nI'll probably re-create it one step at a time and see when things begin\nto go awry. I'm wondering if this is related to the comment in testlo.cc\nabout dumping core.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 20 May 1999 15:32:04 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Blowing core - anyone have any ideas?"
}
] |
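Jan's suggestion spelled out as a gdb session (the binary name is illustrative):

    $ gdb testlibpq0
    (gdb) break PQuntrace
    (gdb) break PQclear
    (gdb) break PQfinish
    (gdb) run
    (gdb) step

Single-stepping from whichever breakpoint fires last should show where the stack gets clobbered.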
[
{
"msg_contents": "REGARDING Postgres 6.4.2 connection problem\nWe configured an external drive with postgres 6.4.2 such that we could\nrun postgres/connect via psql, etc.\n\nUpon connecting the external drive to a different machine (same OS, Sun\nSolaris 2.5.1 and configuration/architecture), we changed the appropriate\nenvironment variables and updated the pg_hba.conf file with a new host\nentry with the IP of the machine. The postmaster process starts up on the new\nmachine however we are receiving the following error upon trying to\nconnect to it via psql:\n\nConnection to DB Template1 failed\nConnect DB(): getprotobyname failed\n\nAny idea what causes this error message?\n\nThanks,\nAndy\n\n",
"msg_date": "20 May 99 11:57:46 -0500",
"msg_from": "Andy Farrell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 6.4.2 connection problem"
},
{
"msg_contents": "Andy Farrell <[email protected]> writes:\n> Connect DB(): getprotobyname failed\n\n> Any idea what causes this error message?\n\nJust what it says: getprotobyname() failed --- there is only one\nplace in libpq that can generate that message.\n\nNow, *why* it failed is a more interesting question; that really\nshouldn't happen in a machine with functioning TCP/IP support.\n\nMy guess is that this new machine is not as close to being an\nidentical platform as you thought, and that you need to rebuild\nthe Postgres libraries and binaries from source.\n\nIt's also possible that there's something wrong with the\n/etc/protocols file on the new machine, but if that were the\ncase then very little would be working...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 13:13:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres 6.4.2 connection problem "
}
] |
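A standalone check of getprotobyname(), independent of Postgres, is only a few lines of C; if it prints a protocol number, /etc/protocols is readable and sane:

    #include <stdio.h>
    #include <netdb.h>

    int
    main(void)
    {
        struct protoent *pe = getprotobyname("tcp");

        if (pe == NULL)
        {
            fprintf(stderr, "getprotobyname(\"tcp\") failed\n");
            return 1;
        }
        printf("tcp is protocol %d\n", pe->p_proto);
        return 0;
    }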
[
{
"msg_contents": "While on the subjects of triggers, is this the \nproper behavior?\n\nt=> SELECT version();\nversion \n------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, \ncompiled by gcc 2.7.2.\n(1 row)\n\nt=> CREATE TABLE TEST1 (\nt-> id int4, \nt-> value text not null);\nCREATE\n\nt=> CREATE TABLE TEST2 (\nt-> value text not null);\nCREATE\n\nt=> CREATE SEQUENCE TESTSEQ;\nCREATE\n\nt=> CREATE TRIGGER T_TEST1 BEFORE INSERT ON TEST1 \nt-> FOR EACH ROW EXECUTE PROCEDURE \nt-> autoinc(id, TESTSEQ);\nCREATE\n\nt=> INSERT INTO TEST2 VALUES ('hello');\nINSERT 2497567 1\n\nt=> INSERT INTO TEST2 VALUES ('hello');\nINSERT 2497568 1\n\nt=> INSERT INTO TEST2 VALUES ('goodbye');\nINSERT 2497569 1\n\nt=> INSERT INTO TEST1 (value) \nt-> SELECT DISTINCT value FROM TEST2;\nNOTICE: testseq.nextval: sequence was re-created\nINSERT 0 3\n\nt=> SELECT * FROM TEST1;\nid|value \n--+-------\n 1|goodbye\n 2|hello \n 3|hello \n(3 rows)\n\nI guess I was expecting the DISTINCT in the \nSELECT to suppress the fetching of the second \n'hello' record, then the insert is performed, and, \nwhile the insert is performed, the trigger procedure\nis executed to fetch the sequence value for 2\nrows, not 3. Is this related to the same \nconditions which make the use of DISTINCT on VIEWS\nproblematic?\n\nThanks for any info, \n\nMarcus Mascari ([email protected])\n\nP.S. The autoinc() is the one from /contrib\n\n\n\n\n_____________________________________________________________\nDo You Yahoo!?\nFree instant messaging and more at http://messenger.yahoo.com\n",
"msg_date": "Thu, 20 May 1999 09:58:39 -0700 (PDT)",
"msg_from": "Marcus Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger - Rewrite question with 6.5beta"
},
{
"msg_contents": "Marcus Mascari wrote:\n\n>\n> While on the subjects of triggers, is this the\n> proper behavior?\n> [...]\n>\n> t=> INSERT INTO TEST1 (value)\n> t-> SELECT DISTINCT value FROM TEST2;\n> NOTICE: testseq.nextval: sequence was re-created\n> INSERT 0 3\n>\n> t=> SELECT * FROM TEST1;\n> id|value\n> --+-------\n> 1|goodbye\n> 2|hello\n> 3|hello\n> (3 rows)\n>\n> I guess I was expecting the DISTINCT in the\n> SELECT to suppress the fetching of the second\n> 'hello' record, then the insert is performed, and,\n> while the insert is performed, the trigger procedure\n> is executed to fetch the sequence value for 2\n> rows, not 3. Is this related to the same\n> conditions which make the use of DISTINCT on VIEWS\n> problematic?\n\n Similar - i guess. Must be the fact that the distinct clause\n doesn't specify the columns. Thus it is an empty list and\n treated as \"DISTINCT ON id,value\" because the targetlist got\n expanded to match the result tables schema.\n\n So at least in the case of INSERT ... SELECT the list of\n distinct columns must be set to the columns in the targetlist\n if it is empty (no columns specified by user) before\n targetlist expansion.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 20 May 1999 19:26:15 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trigger - Rewrite question with 6.5beta"
}
] |
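Until the targetlist expansion is fixed, one workaround is to materialize the DISTINCT result first, so the trigger fires only once per distinct value; the temp table name here is illustrative (SELECT ... INTO TEMP TABLE is the same idiom used elsewhere on this list):

    SELECT DISTINCT value INTO TEMP TABLE tmp_values FROM TEST2;
    INSERT INTO TEST1 (value) SELECT value FROM tmp_values;

For the data above this should consume only two sequence numbers instead of three.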
[
{
"msg_contents": "Hi,\n\n I've just removed the automatic installation of built\n procedural languages from initdb again.\n\n Instead there are two new commands in src/bin.\n\n createlang [options] [langname [dbname]]\n options are: --pglib path\n -a authsys\n -h host\n -p port\n\n destroylang [options] [langname [dbname]]\n options are: -a authsys\n -h host\n -p port\n\n Createlang checks if the language and call handler aren't\n installed already and if the shared object is in PGLIB before\n attempting to install the language.\n\n Destroylang refuses to remove the language if functions still\n exist in pg_proc that reference the language. There is no\n 'force' switch because a destroylang/createlang sequence\n would corrupt the functions (they still point to the old\n prolang OID) and I don't want those user questions flooding\n the lists.\n\n The required call of createlang is added to regress.sh.\n\n Thomas, I'm actually unable to do anything in the sgml area.\n Would you please be so kind to change the appropriate notes\n in the PL/pgSQL and PL/Tcl docs? And maybe add the new\n commands?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 20 May 1999 19:03:51 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "PL installation"
}
] |
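Typical usage of the new commands, assuming the shared objects live in the default /usr/local/pgsql/lib and the database name is illustrative:

    $ createlang --pglib /usr/local/pgsql/lib plpgsql mydb
    $ destroylang plpgsql mydb

Both commands take the language name first and the database name second, per the synopsis above.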
[
{
"msg_contents": " Reply to: Re: [HACKERS] Postgres 6.4.2 connection problem solved\nFYI,\n\n The machine we tried to run postgres on had the 'localhost' entry in its hosts file spelled incorrectly (i.e., 'localhosts'). After updating the hosts file, postgres ran fine.\n\n I would have thought we would have recieved an error other than 'getprotobyname failed'. I would have expected an error more like the error messages you recieve when trying to connect to postgres from a remote client without adding the client's IP in the pg_hba.conf file. In any case, the problem has been solved, now I can go play golf....\n\nThanks to those who replied-\n\n\n\n",
"msg_date": "20 May 99 14:21:43 -0500",
"msg_from": "Andy Farrell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Postgres 6.4.2 connection problem solved"
},
{
"msg_contents": "Andy Farrell <[email protected]> writes:\n> The machine we tried to run postgres on had the 'localhost' entry in\n> its hosts file spelled incorrectly (i.e., 'localhosts'). After\n> updating the hosts file, postgres ran fine.\n\nThat makes sense, if you were using TCP connection protocol rather\nthan a Unix-domain socket...\n\n> I would have thought we would have recieved an error other than\n> 'getprotobyname failed'.\n\nI'll say. How the heck did it manage to get through gethostbyname()\nand connect(), which are the routines that *should* have failed, and\nthen spit up at getprotobyname() (which should be nothing more than a\nsimple scan of /etc/protocols, and should certainly not care what is\nin /etc/hosts)?\n\nThere is more than meets the eye here. If you have time, would you\nrestore /etc/hosts to its broken condition and trace through connectDB\na little more carefully? I would like to know what *really* went\nwrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 17:10:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres 6.4.2 connection problem solved "
}
] |
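For reference, the line the broken machine was missing is the standard loopback entry; /etc/hosts should contain something like:

    127.0.0.1       localhost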
[
{
"msg_contents": "Has anyone been able to get 6.5beta to work with Suns compiler in 64bit mode?\nI have been able to hack enough to get it to bulild and install, but when\ntrying to connect with psql over unix domain socket, the server does not know\nthat it is a local connection and says there is no pg_hba.conf entry.\n\nI decided to try and make sure that unix domain sockets work in general on\nSolaris 6 64bit, and in my code examples they do.\n\nAnyone have any ideas? \n\nMatt\n----------\nMatthew C. Aycock\nOperating Systems Analyst/Admin, Senior\nDept Math/CS\nEmory University, Atlanta, GA \nInternet: [email protected] \t\t\n\n\n",
"msg_date": "Thu, 20 May 1999 15:54:10 -0400 (EDT)",
"msg_from": "\"Matthew C. Aycock\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "64 bit version on Solaris 7..."
},
{
"msg_contents": "\"Matthew C. Aycock\" <[email protected]> writes:\n> I have been able to hack enough to get it to bulild and install, but when\n> trying to connect with psql over unix domain socket, the server does not know\n> that it is a local connection and says there is no pg_hba.conf entry.\n\nSounds like you need to trace through src/backend/libpq/hba.c and figure\nout why it's failing to match the hba.conf entry. Probably some silly\nlittle bit of machine-dependent coding, but I don't see it offhand...\nshouldn't be too hard to narrow it down with a debugger attached to the\npostmaster process, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 1999 17:15:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64 bit version on Solaris 7... "
}
] |
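A bare-bones AF_UNIX sanity test along the lines of the code Matthew mentions; the socket path is illustrative. If this binds cleanly when compiled 64-bit, the problem is more likely in the backend's address matching (hba.c) than in the OS:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
        {
            perror("socket");
            return 1;
        }
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strcpy(addr.sun_path, "/tmp/.s.PGSQL.test");
        unlink(addr.sun_path);
        if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
        {
            perror("bind");
            return 1;
        }
        printf("bind ok, sizeof(sockaddr_un) = %lu\n",
               (unsigned long) sizeof(addr));
        unlink(addr.sun_path);
        close(fd);
        return 0;
    }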
[
{
"msg_contents": "To get the latest cvs snapshot (19990520) to be installed, I had to make\nthe following patch to the src/interfaces/Makefile\n\nvlad: diff -w3c ../../pgsql.old/src/interfaces/Makefile\ninterfaces/Makefile\n*** ../../pgsql.old/src/interfaces/Makefile Thu May 13 16:52:12 1999\n\n--- interfaces/Makefile Mon May 10 12:01:25 1999\n***************\n*** 52,58 ****\n $(MAKE) -C perl5 clean\n cd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n $(MAKE) -C perl5 all\n! @if [ -w `sed -n -e 's/^ *SITELIBEXP *= *//p' perl5/Makefile` ];\nthen \\\n $(MAKE) $(MFLAGS) -C perl5 install; \\\n rm -f perl5/Makefile; \\\n else \\\n--- 52,58 ----\n $(MAKE) -C perl5 clean\n cd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n $(MAKE) -C perl5 all\n! @if [ -w `sed -n -e 's/^ *INSTALLSITELIBEXP *= *//p'\nperl5/Makefile` ]; then \\\n $(MAKE) $(MFLAGS) -C perl5 install; \\\n rm -f perl5/Makefile; \\\n else \\\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n",
"msg_date": "Thu, 20 May 1999 16:53:43 -0500",
"msg_from": "Brian P Millett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Makefile Patch"
},
{
"msg_contents": "\nBut I applied this a few weeks ago, and my copy has it. Are you sure\nyou are updated?\n\n\n> To get the latest cvs snapshot (19990520) to be installed, I had to make\n> the following patch to the src/interfaces/Makefile\n> \n> vlad: diff -w3c ../../pgsql.old/src/interfaces/Makefile\n> interfaces/Makefile\n> *** ../../pgsql.old/src/interfaces/Makefile Thu May 13 16:52:12 1999\n> \n> --- interfaces/Makefile Mon May 10 12:01:25 1999\n> ***************\n> *** 52,58 ****\n> $(MAKE) -C perl5 clean\n> cd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n> $(MAKE) -C perl5 all\n> ! @if [ -w `sed -n -e 's/^ *SITELIBEXP *= *//p' perl5/Makefile` ];\n> then \\\n> $(MAKE) $(MFLAGS) -C perl5 install; \\\n> rm -f perl5/Makefile; \\\n> else \\\n> --- 52,58 ----\n> $(MAKE) -C perl5 clean\n> cd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n> $(MAKE) -C perl5 all\n> ! @if [ -w `sed -n -e 's/^ *INSTALLSITELIBEXP *= *//p'\n> perl5/Makefile` ]; then \\\n> $(MAKE) $(MFLAGS) -C perl5 install; \\\n> rm -f perl5/Makefile; \\\n> else \\\n> \n> \n> --\n> Brian Millett\n> Enterprise Consulting Group \"Heaven can not exist,\n> (314) 205-9030 If the family is not eternal\"\n> [email protected] F. Ballard Washburn\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 May 1999 18:18:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Makefile Patch"
}
] |
[
{
"msg_contents": "I have created a new type and by struggling with the instructions at\nhttp://www.postgresql.org/docs/programmer/xindex.htm (I'm preparing some\ncorrections) I di everything that I can to set this up fully. I am able\nto create a table using my type as the primary key and insert data into\nit fine. However, when I try to update or select using a WHERE clause I\nget the following.\n\nERROR: fmgr_info: function 0: cache lookup failed\n\nHere is the SQL to create the type. Any ideas?\n\n--\n--\tPostgreSQL code for GLACCOUNTs.\n--\n--\t$Id$\n--\n\nload '/usr/local/pgsql/modules/glaccount.so';\n\n--\n--\tInput and output functions and the type itself:\n--\n\ncreate function glaccount_in(opaque)\n\treturns opaque\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_out(opaque)\n\treturns opaque\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate type glaccount (\n\tinternallength = 16,\n\texternallength = 13,\n\tinput = glaccount_in,\n\toutput = glaccount_out\n);\n\n--\n-- Some extra functions\n--\n\ncreate function glaccount_major(glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_minor(glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_cmp(glaccount, glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\n--\n--\tThe various boolean tests:\n--\n\ncreate function glaccount_eq(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_ne(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_lt(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_gt(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_le(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_ge(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\n--\n--\tNow the operators. Note how some of the parameters to some\n--\tof the 'create operator' commands are commented out. 
This\n--\tis because they reference as yet undefined operators, and\n--\twill be implicitly defined when those are, further down.\n--\n\ncreate operator < (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n--\tnegator = >=,\n\tprocedure = glaccount_lt\n);\n\ncreate operator <= (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n--\tnegator = >,\n\tprocedure = glaccount_le\n);\n\ncreate operator = (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tcommutator = =,\n--\tnegator = <>,\n\tprocedure = glaccount_eq\n);\n\ncreate operator >= (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = <,\n\tprocedure = glaccount_ge\n);\n\ncreate operator > (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = <=,\n\tprocedure = glaccount_gt\n);\n\ncreate operator <> (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = =,\n\tprocedure = glaccount_ne\n);\n\n-- Now, let's see if we can set it up for indexing\n\nINSERT INTO pg_opclass (opcname, opcdeftype) \n\tSELECT 'glaccount_ops', oid FROM pg_type WHERE typname = 'glaccount';\n\nSELECT o.oid AS opoid, o.oprname\n\tINTO TEMP TABLE glaccount_ops_tmp\n\tFROM pg_operator o, pg_type t\n\tWHERE o.oprleft = t.oid AND\n\t\to.oprright = t.oid AND\n\t\tt.typname = 'glaccount';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 1,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '<';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 2,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '<=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 3,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 4,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '>=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 5,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '>';\n\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n\tSELECT a.oid, b.oid, c.oid, 1\n\t\tFROM pg_am a, pg_opclass b, pg_proc c\n\t\tWHERE a.amname = 'btree' AND\n\t\t\tb.opcname = 'glaccount_ops' AND\n\t\t\tc.proname = 'glaccount_cmp';\n\nINSERT INTO pg_description (objoid, description)\n\tSELECT oid, 'Two part G/L account'\n\t\tFROM pg_type WHERE typname = 'glaccount';\n\n--\n--\teof\n--\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 20 May 1999 22:51:44 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cache lookup failed"
}
] |
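One thing that stands out in the definitions above: none of the operators specify RESTRICT or JOIN selectivity estimators, and the planner consults those when the operator appears in a WHERE clause on an indexed column -- an empty oprrest can end up as a call to procedure 0. A guess at the fix, reusing the standard estimators (shown for '=' only; '<' and '>' would use intltsel/intgtsel and their join counterparts):

    create operator = (
        leftarg = glaccount,
        rightarg = glaccount,
        commutator = =,
        procedure = glaccount_eq,
        restrict = eqsel,
        join = eqjoinsel
    );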
[
{
"msg_contents": "I get above message from the backend while trying to update same raw\nfrom different transactions (I guess). Is this normal?\n\nFYI, if I change the transaction isolation level to serializable, no\nerro occurs.\n---\nTatsuo Ishii\n",
"msg_date": "Fri, 21 May 1999 18:58:18 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: WaitOnLock: error on wakeup - Aborting this transaction"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> I get above message from the backend while trying to update same raw\n> from different transactions (I guess). Is this normal?\n\n1=>begin;\n2=>begin;\n1=>update t set a = 1 where c = 1;\n2=>update t set a = 1 where c = 2;\n1=>update t set a = 2 where c = 2; -- blocked by 2\n2=>update t set a = 2 where c = 1; --> deadlock\n\n\nOr you didn't use BEGIN/END ?\n\nVadim\n",
"msg_date": "Fri, 21 May 1999 19:20:03 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": "> > I get above message from the backend while trying to update same raw\n> > from different transactions (I guess). Is this normal?\n> \n> 1=>begin;\n> 2=>begin;\n> 1=>update t set a = 1 where c = 1;\n> 2=>update t set a = 1 where c = 2;\n> 1=>update t set a = 2 where c = 2; -- blocked by 2\n> 2=>update t set a = 2 where c = 1; --> deadlock\n\nMy sessions look like:\n\nbegin;\nupdate t set a = 1 where c = 1;\nselect * from t where c = 1;\nend;\n\nSo I think there is no possibility of a deadlock. Note that the error\nhappens with relatively large number of concurrent transactions\nrunning. I don't see the error at # of transactions = 1~32 while I get\nerrors at 63 (I didn't try 33~62). In each session which raw gets\nupdated is decided by a random generator, so increasing # of\ntransactions might also increases the chance of conflicts, or 63 might\nhit some threshold of certain resources, I don't know. The interesting\nthing is the error never happen if I set the transaction isolation\nmode to \"serializable.\"\n\nIf I have time, I would do more test cases.\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 23 May 1999 23:44:00 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > > I get above message from the backend while trying to update same raw\n> > > from different transactions (I guess). Is this normal?\n> \n> My sessions look like:\n> \n> begin;\n> update t set a = 1 where c = 1;\n> select * from t where c = 1;\n> end;\n\nOps. Do you have indices over table t?\nBtree-s are still using page-level locking and don't release\nlocks when leave index page to fetch row from relation.\nSeems that this causes deadlocks more often than I thought -:(\n\nMarc? I can fix this today and I'll be very careful...\nOk?\n\nVadim\n",
"msg_date": "Mon, 24 May 1999 11:56:43 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": ">> My sessions look like:\n>> \n>> begin;\n>> update t set a = 1 where c = 1;\n>> select * from t where c = 1;\n>> end;\n>\n>Ops. Do you have indices over table t?\n\nYes. It has the primary key, so has an btree index.\n\n>Btree-s are still using page-level locking and don't release\n>locks when leave index page to fetch row from relation.\n>Seems that this causes deadlocks more often than I thought -:(\n>\n>Marc? I can fix this today and I'll be very careful...\n>Ok?\n\nPlease let me know if you fix it. I will run the test again.\n---\nTatsuo Ishii\n\n\n",
"msg_date": "Mon, 24 May 1999 13:10:37 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": "> Tatsuo Ishii wrote:\n> > \n> > > > I get above message from the backend while trying to update same raw\n> > > > from different transactions (I guess). Is this normal?\n> > \n> > My sessions look like:\n> > \n> > begin;\n> > update t set a = 1 where c = 1;\n> > select * from t where c = 1;\n> > end;\n> \n> Ops. Do you have indices over table t?\n> Btree-s are still using page-level locking and don't release\n> locks when leave index page to fetch row from relation.\n> Seems that this causes deadlocks more often than I thought -:(\n> \n> Marc? I can fix this today and I'll be very careful...\n> Ok?\n\n\nIf you don't, seems like our MVCC isn't going to be much good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 09:40:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> >Btree-s are still using page-level locking and don't release\n> >locks when leave index page to fetch row from relation.\n> >Seems that this causes deadlocks more often than I thought -:(\n> >\n> >Marc? I can fix this today and I'll be very careful...\n> >Ok?\n> \n> Please let me know if you fix it. I will run the test again.\n\nFixed.\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 02:36:43 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": ">> >Btree-s are still using page-level locking and don't release\n>> >locks when leave index page to fetch row from relation.\n>> >Seems that this causes deadlocks more often than I thought -:(\n>> >\n>> >Marc? I can fix this today and I'll be very careful...\n>> >Ok?\n>> \n>> Please let me know if you fix it. I will run the test again.\n>\n>Fixed.\n\nThanks, but I now have another problem. I got backend abortings. Stack \ntrace shows followings (Sorry I' writing this by hand rather than\ncut&paste, so there may be an error):\n\ns_lock_stuck\ns_lock\nSpinAcquire\nLockAcquire\nLockRelation\nheap_beginscan\nindex_info\nfind_secondary_index\nfind_relation_indices\nset_base_rel_pathlist\nmake_one_rel\nsubPlanner\nquery_planner\nunion_planner\nplanner\npg_parse_and_plan\npg_exec_query_dest\npg_exec_quer\n:\n:\n\nNote that this happend in both read committed/serializable levels.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 26 May 1999 15:07:33 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> Thanks, but I now have another problem. I got backend abortings. Stack\n> trace shows followings (Sorry I' writing this by hand rather than\n> cut&paste, so there may be an error):\n> \n> s_lock_stuck\n> s_lock\n> SpinAcquire\n> LockAcquire\n> LockRelation\n> heap_beginscan\n\nTry to re-compile with -DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG\nand run postmaster with -o -K 3 to see what's going on in\nlmgr.\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 14:34:15 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: WaitOnLock: error on wakeup - Aborting this\n\ttransaction"
}
] |
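Vadim's debugging recipe as shell commands; the COPT variable is one common way to pass extra defines into this build, but treat the exact invocation as an assumption for your tree:

    $ cd pgsql/src
    $ make clean
    $ make COPT='-DLOCK_MGR_DEBUG -DDEADLOCK_DEBUG' all
    $ make install
    $ postmaster -i -o "-K 3" -D /usr/local/pgsql/data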
[
{
"msg_contents": "Hallo,\n\nfirst of all, thank you for fixing the ´hash table overflow problem´, I´ve tested it on a join between a 2GB and a 3MB table, it works!\n\nNow, I tried to use the numeric type, and hit the following problem:\n\ncreate table a (val numeric(10,2));\nCREATE\ninsert into a values('123.45');\nINSERT 2402633 1\n\ncreate table b (val numeric(10,2));\nCREATE\n\ninsert into b select sum(val) into b;\nERROR: transformExpr: does not know how to transform node 107\n\ninsert into b select float8(sum(val)) into b;\nINSERT 2402643 1\n\nOne other (SQL-) question: How can I cast results to numeric directly? Using float8 cast and the following implicit cast is not very nice.\n\nKind regards,\n\nMichael Contzen\nDohle Systemberatung, Germany\nEmail: [email protected]\n\n\n\n\n\n\n\nHallo,\n \nfirst of all, thank you for fixing the ´hash table \noverflow problem´, I´ve tested it on a join between a 2GB and a 3MB table, it \nworks!\n \nNow, I tried to use the numeric type, and hit the \nfollowing problem:\n \ncreate table a (val \nnumeric(10,2));CREATEinsert into a values('123.45');INSERT 2402633 \n1\n \ncreate table b (val \nnumeric(10,2));CREATE\ninsert into b select sum(val) into b;ERROR: \n transformExpr: does not know how to transform node 107\ninsert into b select float8(sum(val)) into \nb;INSERT 2402643 1\n \nOne other (SQL-) question: How can I cast results \nto numeric directly? Using float8 cast and the following implicit cast is not \nvery nice.\n \nKind regards,\n \nMichael Contzen\nDohle Systemberatung, Germany\nEmail: [email protected]",
"msg_date": "Fri, 21 May 1999 13:45:43 +0100",
"msg_from": "Michael Contzen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Numeric and Aggregate: transform node 107"
},
{
"msg_contents": "Michael Contzen <[email protected]> writes:\n> insert into b select sum(val) into b;\n> ERROR: transformExpr: does not know how to transform node 107\n\nI fixed that error a few days ago (the parser wasn't coping with\nimplicit type coercion of aggregates). It works here:\n\nregression=> insert into b select sum(val) into b;\nINSERT 540179 1\n\nI'm a little confused about why this statement is allowed, though;\nshouldn't it read 'insert into b select sum(val) FROM a'?\n\nI'm not sure what the parser thought it was doing --- there's no\nneed for a type coercion if it's computing a sum() of a numeric\ncolumn and putting the result in another numeric column, so I have\na suspicion that the parser read the statement as meaning something\nelse entirely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 May 1999 10:01:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric and Aggregate: transform node 107 "
}
] |
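For the archives: with the coercion fix in, the plain statement works, and Michael's direct-cast question is answered by an explicit CAST; the CAST spelling below is believed to work on current sources but is untested here:

    insert into b select sum(val) from a;
    select cast(sum(val) as numeric(10,2)) from a;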
[
{
"msg_contents": "While looking for tables related to default ops I came across the following\ntables that are not mentioned in the programmer docs system catalogues\nsection and are (in my database anyway) empty. Perhaps someone who is\ncloser to the catalogues can confirm that these are still used. If so,\nperhaps we can update the docs to describe them.\n\n pg_inheritproc\n pg_listener\n pg_relcheck\n\nThe following tables are used but are not listed in the docs with the\nrest. How about a short description of each? Maybe even add some to\nthe diagram.\n\n pg_attrdef\n pg_description\n pg_inherits\n pg_ipl\n pg_rewrite\n pg_shadow\n pg_statistic\n pg_trigger\n\nI'm still trying to figure out why I get that fmgr error when I use my\nuser defined type in a where clause. Can anyone point me to the tables\nthat I need to modify? Here is what I have modified so far.\n\n pg_opclass\n pg_amop\n pg_amproc\n pg_description\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 21 May 1999 09:06:57 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Empty system tables"
},
{
"msg_contents": "[email protected] (\"D'Arcy\" \"J.M.\" Cain) writes:\n> perhaps we can update the docs to describe them.\n\n> pg_listener\n\npg_listener records which backends are actively listening for NOTIFY\nconditions, and is used to transmit NOTIFY signals from one backend to\nanother. It will be empty if you do not have any active LISTEN\ncommands.\n\n> pg_shadow\n\nShadow user table, with the real passwords. (pg_user is actually\nonly a view of pg_shadow.)\n\n> pg_statistic\n\nPer-column stats collected by VACUUM ANALYZE and used by the query\noptimizer.\n\nDunno about the rest.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 May 1999 11:07:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Empty system tables "
}
] |
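A quick way to enumerate the catalogs, so the docs list can be checked for completeness:

    select relname from pg_class
        where relname ~ '^pg_' and relkind = 'r'
        order by relname;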
[
{
"msg_contents": "I just checked the problem with views using current cvs and it's\nstell here.\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Thu, 13 May 1999 19:46:50 +0400 (MSD)\nFrom: Oleg Bartunov <[email protected]>\nTo: [email protected]\nSubject: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump\n\nAfter dumping (by pg_dump) and restoring views becomes a tables\n\nHere is a simple scenario:\n1. createdb tview\n\n2. create table t1 (a int4, b int4);\n create view v1 as select a from t1;\n\n3. pg_dump -z tview > tview.dump\n4. destroydb tview\n\n createdb tview\n\n5. psql -e tview < tview.dump\n............................\nQUERY: COPY \"t1\" FROM stdin;\nCREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\nQUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\nERROR: parser: parse error at or near \"do\"\nEOF\n\n6. psql tview\n\ntview=> \\dt\nDatabase = tview\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | megera | t1 | table |\n | megera | v1 | table |\n +------------------+----------------------------------+----------+\n\ntview=>\n\n view t1 now becomes table v1 !\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Fri, 21 May 1999 17:29:04 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)"
},
{
"msg_contents": "\nLooks like this was fixed in 6.5.\n\n\n> I just checked the problem with views using current cvs and it's\n> stell here.\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------- Forwarded message ----------\n> Date: Thu, 13 May 1999 19:46:50 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump\n> \n> After dumping (by pg_dump) and restoring views becomes a tables\n> \n> Here is a simple scenario:\n> 1. createdb tview\n> \n> 2. create table t1 (a int4, b int4);\n> create view v1 as select a from t1;\n> \n> 3. pg_dump -z tview > tview.dump\n> 4. destroydb tview\n> \n> createdb tview\n> \n> 5. psql -e tview < tview.dump\n> ............................\n> QUERY: COPY \"t1\" FROM stdin;\n> CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\n> QUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \"t1\";\n> ERROR: parser: parse error at or near \"do\"\n> EOF\n> \n> 6. psql tview\n> \n> tview=> \\dt\n> Database = tview\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | t1 | table |\n> | megera | v1 | table |\n> +------------------+----------------------------------+----------+\n> \n> tview=>\n> \n> view t1 now becomes table v1 !\n> \n> \tRegards,\n> \n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 15:03:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)"
}
] |
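For anyone restoring such a dump by hand before the fix: the rule loads once the spurious empty WHERE clause is removed, and the table v1 then behaves as a view again:

    CREATE RULE "_RETv1" AS ON SELECT TO "v1" DO INSTEAD SELECT "a" FROM "t1";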
[
{
"msg_contents": "Current CVS doesn't compile on Redhat Linux 6.0.\n\nbison 1.27\nflex 2.5.4a\n\nOle Gjerde\n\n--- src/backend/parser/gram.y 1999/05/21 15:47:13 2.81\n+++ src/backend/parser/gram.y 1999/05/21 17:48:46\n@@ -5365,8 +5365,8 @@\n for (pos = 1; n->val.val.str[pos]; pos++)\n {\n if (n->val.val.str[pos] == '|' ||\n- if (n->val.val.str[pos] == '{' ||\n- if (n->val.val.str[pos] == '}')\n+ n->val.val.str[pos] == '{' ||\n+ n->val.val.str[pos] == '}')\n {\n found_special = true;\n break;\n\n\n\n",
"msg_date": "Fri, 21 May 1999 12:38:37 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doesn't compile"
},
{
"msg_contents": "> Current CVS doesn't compile on Redhat Linux 6.0.\n> \n> bison 1.27\n> flex 2.5.4a\n\nJust fixed. Try again. Sorry,\n\n> \n> Ole Gjerde\n> \n> --- src/backend/parser/gram.y 1999/05/21 15:47:13 2.81\n> +++ src/backend/parser/gram.y 1999/05/21 17:48:46\n> @@ -5365,8 +5365,8 @@\n> for (pos = 1; n->val.val.str[pos]; pos++)\n> {\n> if (n->val.val.str[pos] == '|' ||\n> - if (n->val.val.str[pos] == '{' ||\n> - if (n->val.val.str[pos] == '}')\n> + n->val.val.str[pos] == '{' ||\n> + n->val.val.str[pos] == '}')\n> {\n> found_special = true;\n> break;\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 14:57:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Doesn't compile"
}
] |
[
{
"msg_contents": "Hi,\n\nrecently I tried to reproduce some benchmark results\nwhen I discovered a very strange behavior. I did\nmy tests with the current snapshot of last week,\nbut other people who have performed the same bench-\nmark with postgresql-6.4-2 reported the same problems.\n\nThe setup is pretty simple: one table with 13\ninteger and 7 char(20) columns. For every column\nan index is created. The postmaster is started with\n-o -F and before each query a 'vacuum analyze' is \nperformed.\n\nWhen loading 100.000 rows into the table \neverything works ok. Selects and updates \nare reasonable fast. But when loading\n1.000.000 rows the select statements still \nwork, but a simple update statement\nshows this strange behavior. A never ending\ndisk-activity starts. Memory consumption\nincreases up to the physical limit (384 MB)\nwhereas the postmaster uses only a few % \nof CPU time. After 1 hour I killed the post-\nmaster.\n\nIt would be nice, if this could be fixed.\nPeople from the german UNIX magazine IX\nbenchmarked Oracle, Informix and Sybase on Linux\nand they claimed, that Postgres is totally unusable\nbecause of this problem.\n\nIf you need some additional info, just let me know.\n\n\nEdmund\n\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Fri, 21 May 1999 22:34:15 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange behavior of UPDATE"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> When loading 100.000 rows into the table \n> everything works ok. Selects and updates \n> are reasonable fast. But when loading\n> 1.000.000 rows the select statements still \n> work, but a simple update statement\n> shows this strange behavior.\n\nCan you provide a script or something to reproduce this behavior?\n\nThere are a number of people using Postgres with large databases\nand not reporting any such problem, so I think there has to be some\nspecial triggering condition; it's not just a matter of things\nbreaking at a million rows. Before digging into it, I'd like to\neliminate variables like whether I have the right test case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 May 1999 17:00:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "OK, can you attach to the running process and tell us what functions it\nis running. That would help.\n\n\n[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> recently I tried to reproduce some benchmark results\n> when I discovered a very strange behavior. I did\n> my tests with the current snapshot of last week,\n> but other people who have performed the same bench-\n> mark with postgresql-6.4-2 reported the same problems.\n> \n> The setup is pretty simple: one table with 13\n> integer and 7 char(20) columns. For every column\n> an index is created. The postmaster is started with\n> -o -F and before each query a 'vacuum analyze' is \n> performed.\n> \n> When loading 100.000 rows into the table \n> everything works ok. Selects and updates \n> are reasonable fast. But when loading\n> 1.000.000 rows the select statements still \n> work, but a simple update statement\n> shows this strange behavior. A never ending\n> disk-activity starts. Memory consumption\n> increases up to the physical limit (384 MB)\n> whereas the postmaster uses only a few % \n> of CPU time. After 1 hour I killed the post-\n> master.\n> \n> It would be nice, if this could be fixed.\n> People from the german UNIX magazine IX\n> benchmarked Oracle, Informix and Sybase on Linux\n> and they claimed, that Postgres is totally unusable\n> because of this problem.\n> \n> If you need some additional info, just let me know.\n> \n> \n> Edmund\n> \n> \n> -- \n> Edmund Mergl mailto:[email protected]\n> Im Haldenhau 9 http://www.bawue.de/~mergl\n> 70565 Stuttgart fon: +49 711 747503\n> Germany\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 17:14:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Edmund Mergl <[email protected]> writes:\n> > When loading 100.000 rows into the table\n> > everything works ok. Selects and updates\n> > are reasonable fast. But when loading\n> > 1.000.000 rows the select statements still\n> > work, but a simple update statement\n> > shows this strange behavior.\n> \n> Can you provide a script or something to reproduce this behavior?\n> \n> There are a number of people using Postgres with large databases\n> and not reporting any such problem, so I think there has to be some\n> special triggering condition; it's not just a matter of things\n> breaking at a million rows. Before digging into it, I'd like to\n> eliminate variables like whether I have the right test case.\n> \n> regards, tom lane\n\n\nthe original benchmark can be found at\n\n ftp://ftp.heise.de/pub/ix/benches/sqlb-21.tar\n\nfor a stripped-down version see the attachment.\nFor loading the database and running the first\nand second part (selects and updates) just do\nthe following: \n\n createdb test\n ./make_wnt 1000000 pgsql >make.out 2>&1 &\n\nThis needs about 700 MB of diskspace.\nOn a PII-400 it takes about 40 minutes to\nload the database, 20 minutes to create the indeces\nand 20 minutes to run the first part of the\nbenchmark (make_sqs). For running the benchmark\nin 20 minutes (without swapping) one needs 384 MB RAM.\n\nThe second part (make_nqs) contains update\nstatements which can not be performed properly\nusing PostgreSQL.\n\nFor testing it is sufficient to initialize the\ndatabase and then to perform a query like\n\n update bench set k500k = k500k + 1 where k100 = 30\n\n\nEdmund\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany",
"msg_date": "Sat, 22 May 1999 06:39:25 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
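A cut-down sketch of the table shape Edmund describes (13 integer and 7 char(20) columns, one index per column), enough to reproduce the pattern at smaller scale; only k100 and k500k are real column names taken from the query above, the rest are illustrative:

    create table bench (
        k100   int,
        k500k  int,
        s1     char(20)
        -- ... remaining integer and char(20) columns
    );
    create index bench_k100_idx on bench (k100);
    create index bench_k500k_idx on bench (k500k);
    vacuum analyze;
    update bench set k500k = k500k + 1 where k100 = 30;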
{
"msg_contents": "Bruce Momjian wrote:\n> \n> OK, can you attach to the running process and tell us what functions it\n> is running. That would help.\n> \n> [Charset iso-8859-2 unsupported, filtering to ASCII...]\n> > Hi,\n> >\n> > recently I tried to reproduce some benchmark results\n> > when I discovered a very strange behavior. I did\n> > my tests with the current snapshot of last week,\n> > but other people who have performed the same bench-\n> > mark with postgresql-6.4-2 reported the same problems.\n> >\n> > The setup is pretty simple: one table with 13\n> > integer and 7 char(20) columns. For every column\n> > an index is created. The postmaster is started with\n> > -o -F and before each query a 'vacuum analyze' is\n> > performed.\n> >\n> > When loading 100.000 rows into the table\n> > everything works ok. Selects and updates\n> > are reasonable fast. But when loading\n> > 1.000.000 rows the select statements still\n> > work, but a simple update statement\n> > shows this strange behavior. A never ending\n> > disk-activity starts. Memory consumption\n> > increases up to the physical limit (384 MB)\n> > whereas the postmaster uses only a few %\n> > of CPU time. After 1 hour I killed the post-\n> > master.\n> >\n> > It would be nice, if this could be fixed.\n> > People from the german UNIX magazine IX\n> > benchmarked Oracle, Informix and Sybase on Linux\n> > and they claimed, that Postgres is totally unusable\n> > because of this problem.\n> >\n> > If you need some additional info, just let me know.\n> >\n> >\n> > Edmund\n\n\nI can attach to the backend and print a backtrace. \nIs this what you expect ?\n\n\nEdmund\n\n\n(gdb) bt\n#0 0x40186534 in __libc_read ()\n#1 0x360 in ?? ()\n#2 0x80de019 in mdread (reln=0x8222790, blocknum=1671, \n buffer=0x40215c40 \"ďż˝\\a\\024\\bďż˝\\037\") at md.c:413\n#3 0x80dead5 in smgrread (which=0, reln=0x8222790, blocknum=1671, \n buffer=0x40215c40 \"ďż˝\\a\\024\\bďż˝\\037\") at smgr.c:231\n#4 0x80d5863 in ReadBufferWithBufferLock (reln=0x8222790, blockNum=1671, \n bufferLockHeld=0) at bufmgr.c:292\n#5 0x80d5758 in ReadBuffer (reln=0x8222790, blockNum=1671) at bufmgr.c:170\n#6 0x8073153 in _bt_getbuf (rel=0x8222790, blkno=1671, access=0)\n at nbtpage.c:337\n#7 0x8074659 in _bt_searchr (rel=0x8222790, keysz=1, scankey=0x8236470, \n bufP=0xbfffa36c, stack_in=0x849e498) at nbtsearch.c:116\n#8 0x8074547 in _bt_search (rel=0x8222790, keysz=1, scankey=0x8236470, \n bufP=0xbfffa36c) at nbtsearch.c:52\n#9 0x8070d31 in _bt_doinsert (rel=0x8222790, btitem=0x849e468, \n index_is_unique=0 '\\000', heapRel=0x8218c40) at nbtinsert.c:65\n#10 0x8073aea in btinsert (rel=0x8222790, datum=0x849e420, \n nulls=0x849e408 \" \", ht_ctid=0x82367d4, heapRel=0x8218c40) at nbtree.c:369\n#11 0x8109a13 in fmgr_c (finfo=0xbfffa40c, values=0xbfffa41c, \n isNull=0xbfffa40b \"\") at fmgr.c:154\n#12 0x8109cb7 in fmgr (procedureId=331) at fmgr.c:338\n#13 0x806d540 in index_insert (relation=0x8222790, datum=0x849e420, \n nulls=0x849e408 \" \", heap_t_ctid=0x82367d4, heapRel=0x8218c40)\n at indexam.c:190\n#14 0x80953a2 in ExecInsertIndexTuples (slot=0x8233368, tupleid=0x82367d4, \n estate=0x8231740, is_update=1) at execUtils.c:1210\n#15 0x809293c in ExecReplace (slot=0x8233368, tupleid=0xbfffa540, \n estate=0x8231740) at execMain.c:1472\n#16 0x809255e in ExecutePlan (estate=0x8231740, plan=0x8231280, \n operation=CMD_UPDATE, offsetTuples=0, numberTuples=0, \n direction=ForwardScanDirection, destfunc=0x8236350) at execMain.c:1086\n#17 0x8091cae in ExecutorRun (queryDesc=0x8231728, 
estate=0x8231740, \n feature=3, limoffset=0x0, limcount=0x0) at execMain.c:359\n#18 0x80e1098 in ProcessQueryDesc (queryDesc=0x8231728, limoffset=0x0, \n limcount=0x0) at pquery.c:333\n#19 0x80e10fe in ProcessQuery (parsetree=0x8220998, plan=0x8231280, \n dest=Remote) at pquery.c:376\n#20 0x80dfbb6 in pg_exec_query_dest (\n query_string=0xbfffa67c \"update bench set k500k = k500k + 1 where k100 = 30;\n\", dest=Remote, aclOverride=0) at postgres.c:742\n#21 0x80dfab7 in pg_exec_query (\n query_string=0xbfffa67c \"update bench set k500k = k500k + 1 where k100 = 30;\n\") at postgres.c:642\n#22 0x80e0abc in PostgresMain (argc=6, argv=0xbfffe704, real_argc=6, \n real_argv=0xbffffcf4) at postgres.c:1610\n---Type <return> to continue, or q <return> to quit---\n#23 0x80cacaa in DoBackend (port=0x81c3c38) at postmaster.c:1584\n#24 0x80ca77a in BackendStartup (port=0x81c3c38) at postmaster.c:1351\n#25 0x80c9ec9 in ServerLoop () at postmaster.c:802\n#26 0x80c9a0e in PostmasterMain (argc=6, argv=0xbffffcf4) at postmaster.c:596\n#27 0x80a1836 in main (argc=6, argv=0xbffffcf4) at main.c:97\n#28 0x400fbcb3 in __libc_start_main (main=0x80a17d0 <main>, argc=6, \n argv=0xbffffcf4, init=0x8061878 <_init>, fini=0x810f13c <_fini>, \n rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffcec)\n at ../sysdeps/generic/libc-start.c:78\n\n\n\n\n\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Sat, 22 May 1999 09:00:03 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "> > > The setup is pretty simple: one table with 13\n> > > integer and 7 char(20) columns. For every column\n> > > an index is created. The postmaster is started with\n> > > -o -F and before each query a 'vacuum analyze' is\n> > > performed.\n\n\nYes, this is what I wanted. Does the test use the DEFAULT clause. If\nso, I may have just fixed the problem. If not, it may be another\nproblem with char() length not being padded properly.\n\n\n> > >\n> > > When loading 100.000 rows into the table\n> > > everything works ok. Selects and updates\n> > > are reasonable fast. But when loading\n> > > 1.000.000 rows the select statements still\n> > > work, but a simple update statement\n> > > shows this strange behavior. A never ending\n> > > disk-activity starts. Memory consumption\n> > > increases up to the physical limit (384 MB)\n> > > whereas the postmaster uses only a few %\n> > > of CPU time. After 1 hour I killed the post-\n> > > master.\n> > >\n> > > It would be nice, if this could be fixed.\n> > > People from the german UNIX magazine IX\n> > > benchmarked Oracle, Informix and Sybase on Linux\n> > > and they claimed, that Postgres is totally unusable\n> > > because of this problem.\n> > >\n> > > If you need some additional info, just let me know.\n> > >\n> > >\n> > > Edmund\n> \n> \n> I can attach to the backend and print a backtrace. \n> Is this what you expect ?\n> \n> \n> Edmund\n> \n> \n> (gdb) bt\n> #0 0x40186534 in __libc_read ()\n> #1 0x360 in ?? ()\n> #2 0x80de019 in mdread (reln=0x8222790, blocknum=1671, \n> buffer=0x40215c40 \"_\\a\\024\\b_\\037\") at md.c:413\n> #3 0x80dead5 in smgrread (which=0, reln=0x8222790, blocknum=1671, \n> buffer=0x40215c40 \"_\\a\\024\\b_\\037\") at smgr.c:231\n> #4 0x80d5863 in ReadBufferWithBufferLock (reln=0x8222790, blockNum=1671, \n> bufferLockHeld=0) at bufmgr.c:292\n> #5 0x80d5758 in ReadBuffer (reln=0x8222790, blockNum=1671) at bufmgr.c:170\n> #6 0x8073153 in _bt_getbuf (rel=0x8222790, blkno=1671, access=0)\n> at nbtpage.c:337\n> #7 0x8074659 in _bt_searchr (rel=0x8222790, keysz=1, scankey=0x8236470, \n> bufP=0xbfffa36c, stack_in=0x849e498) at nbtsearch.c:116\n> #8 0x8074547 in _bt_search (rel=0x8222790, keysz=1, scankey=0x8236470, \n> bufP=0xbfffa36c) at nbtsearch.c:52\n> #9 0x8070d31 in _bt_doinsert (rel=0x8222790, btitem=0x849e468, \n> index_is_unique=0 '\\000', heapRel=0x8218c40) at nbtinsert.c:65\n> #10 0x8073aea in btinsert (rel=0x8222790, datum=0x849e420, \n> nulls=0x849e408 \" \", ht_ctid=0x82367d4, heapRel=0x8218c40) at nbtree.c:369\n> #11 0x8109a13 in fmgr_c (finfo=0xbfffa40c, values=0xbfffa41c, \n> isNull=0xbfffa40b \"\") at fmgr.c:154\n> #12 0x8109cb7 in fmgr (procedureId=331) at fmgr.c:338\n> #13 0x806d540 in index_insert (relation=0x8222790, datum=0x849e420, \n> nulls=0x849e408 \" \", heap_t_ctid=0x82367d4, heapRel=0x8218c40)\n> at indexam.c:190\n> #14 0x80953a2 in ExecInsertIndexTuples (slot=0x8233368, tupleid=0x82367d4, \n> estate=0x8231740, is_update=1) at execUtils.c:1210\n> #15 0x809293c in ExecReplace (slot=0x8233368, tupleid=0xbfffa540, \n> estate=0x8231740) at execMain.c:1472\n> #16 0x809255e in ExecutePlan (estate=0x8231740, plan=0x8231280, \n> operation=CMD_UPDATE, offsetTuples=0, numberTuples=0, \n> direction=ForwardScanDirection, destfunc=0x8236350) at execMain.c:1086\n> #17 0x8091cae in ExecutorRun (queryDesc=0x8231728, estate=0x8231740, \n> feature=3, limoffset=0x0, limcount=0x0) at execMain.c:359\n> #18 0x80e1098 in ProcessQueryDesc (queryDesc=0x8231728, limoffset=0x0, \n> limcount=0x0) at 
pquery.c:333\n> #19 0x80e10fe in ProcessQuery (parsetree=0x8220998, plan=0x8231280, \n> dest=Remote) at pquery.c:376\n> #20 0x80dfbb6 in pg_exec_query_dest (\n> query_string=0xbfffa67c \"update bench set k500k = k500k + 1 where k100 = 30;\n> \", dest=Remote, aclOverride=0) at postgres.c:742\n> #21 0x80dfab7 in pg_exec_query (\n> query_string=0xbfffa67c \"update bench set k500k = k500k + 1 where k100 = 30;\n> \") at postgres.c:642\n> #22 0x80e0abc in PostgresMain (argc=6, argv=0xbfffe704, real_argc=6, \n> real_argv=0xbffffcf4) at postgres.c:1610\n> ---Type <return> to continue, or q <return> to quit---\n> #23 0x80cacaa in DoBackend (port=0x81c3c38) at postmaster.c:1584\n> #24 0x80ca77a in BackendStartup (port=0x81c3c38) at postmaster.c:1351\n> #25 0x80c9ec9 in ServerLoop () at postmaster.c:802\n> #26 0x80c9a0e in PostmasterMain (argc=6, argv=0xbffffcf4) at postmaster.c:596\n> #27 0x80a1836 in main (argc=6, argv=0xbffffcf4) at main.c:97\n> #28 0x400fbcb3 in __libc_start_main (main=0x80a17d0 <main>, argc=6, \n> argv=0xbffffcf4, init=0x8061878 <_init>, fini=0x810f13c <_fini>, \n> rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffcec)\n> at ../sysdeps/generic/libc-start.c:78\n> \n> \n> \n> \n> \n> \n> -- \n> Edmund Mergl mailto:[email protected]\n> Im Haldenhau 9 http://www.bawue.de/~mergl\n> 70565 Stuttgart fon: +49 711 747503\n> Germany\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 May 1999 03:10:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
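For reference, here is a minimal sketch of the benchmark table under discussion, cut down from the description above (13 integer and 7 char(20) columns, one index per column). Only the two columns named in the problem query (k100, k500k) are taken from the thread; the other names are hypothetical stand-ins:

    -- cut-down sketch of the IX benchmark table
    CREATE TABLE bench (
        k100   int4,     -- low-cardinality key used in the WHERE clause
        k500k  int4,     -- the column being updated
        s1     char(20)  -- one of the 7 char(20) columns
        -- ... 11 more int4 columns and 6 more char(20) columns
    );
    CREATE INDEX bench_k100  ON bench (k100);
    CREATE INDEX bench_k500k ON bench (k500k);
    CREATE INDEX bench_s1    ON bench (s1);

    -- the problem statement: all 20 indexes must be maintained
    -- for every row this touches
    UPDATE bench SET k500k = k500k + 1 WHERE k100 = 30;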
{
"msg_contents": "Edmund,\n\nHere is what I got running that test on DUAL PII 350Mhz, 256 RAM,\nFreeBSD-3.1 elf release, current 6.5 cvs:\n\nStart of inserting 1000000 rows: SB 22 MAJ 1999 13:14:58 MSD\nStart of indexing 1000000 rows: SB 22 MAJ 1999 14:47:06 MSD\nStart of SetQuery single user: SB 22 MAJ 1999 18:18:05 MSD\nStart of NewQuery single user: SB 22 MAJ 1999 19:38:06 MSD\nEnd of iX SQLBench 2.1: WS 23 MAJ 1999 11:05:25 MSD\n\nIt was so slow, does FreeBSD has slow IO operation or it's swapping \nproblem ? \nSB 22 MAJ 1999 22:07:26 MSD\nQ8A\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible.\n Terminating.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible.\n Terminating.\nWS 23 MAJ 1999 11:05:24 MSD\nQ8B\nConnection to database 'test' failed.\n\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nI did upgrade of postgres and accidentally stopped testing - I didn't expect\nit's still running !\n\nAfter reinstalling of postgres I did \ntest=> select count(*) from bench;\nanf it takes forever ! Anybody tried that test ?\nHere is a bt from gdb:\n0x8078b8d in TransactionIdDidAbort ()\n(gdb) bt \n#0 0x8078b8d in TransactionIdDidAbort ()\n#1 0x806b460 in heapgettup ()\n#2 0x806bf3f in heap_getnext ()\n#3 0x809c816 in SeqNext ()\n#4 0x8097609 in ExecScan ()\n#5 0x809c8cb in ExecSeqScan ()\n#6 0x8095c56 in ExecProcNode ()\n#7 0x80993dd in ExecAgg ()\n#8 0x8095cd6 in ExecProcNode ()\n#9 0x8094c10 in ExecutePlan ()\n#10 0x809450b in ExecutorRun ()\n#11 0x80ec121 in ProcessQueryDesc ()\n#12 0x80ec19e in ProcessQuery ()\n#13 0x80eaab3 in pg_exec_query_dest ()\n#14 0x80ea994 in pg_exec_query ()\n#15 0x80ebb28 in PostgresMain ()\n#16 0x80d3608 in DoBackend ()\n#17 0x80d30f3 in BackendStartup ()\n#18 0x80d2716 in ServerLoop ()\n#19 0x80d226f in PostmasterMain ()\n#20 0x80a5517 in main ()\n#21 0x80611fd in _start ()\n(gdb)\n\nI don't know what exactly this test did but I shocked from such a\npoor performance. We really need to find out what's the problem.\nI'm in a way to open a new very big Web+DB project and I'm a little bit\nscare to do this with postgres :-)\n\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Sat, 22 May 1999, Edmund Mergl wrote:\n\n> Date: Sat, 22 May 1999 06:39:25 +0200\n> From: Edmund Mergl <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: PostgreSQL Hackers Mailinglist <[email protected]>\n> Subject: Re: [HACKERS] strange behavior of UPDATE\n> \n> Tom Lane wrote:\n> > \n> > Edmund Mergl <[email protected]> writes:\n> > > When loading 100.000 rows into the table\n> > > everything works ok. Selects and updates\n> > > are reasonable fast. But when loading\n> > > 1.000.000 rows the select statements still\n> > > work, but a simple update statement\n> > > shows this strange behavior.\n> > \n> > Can you provide a script or something to reproduce this behavior?\n> > \n> > There are a number of people using Postgres with large databases\n> > and not reporting any such problem, so I think there has to be some\n> > special triggering condition; it's not just a matter of things\n> > breaking at a million rows. 
Before digging into it, I'd like to\n> > eliminate variables like whether I have the right test case.\n> > \n> > regards, tom lane\n> \n> \n> the original benchmark can be found at\n> \n> ftp://ftp.heise.de/pub/ix/benches/sqlb-21.tar\n> \n> for a stripped-down version see the attachment.\n> For loading the database and running the first\n> and second part (selects and updates) just do\n> the following: \n> \n> createdb test\n> ./make_wnt 1000000 pgsql >make.out 2>&1 &\n> \n> This needs about 700 MB of diskspace.\n> On a PII-400 it takes about 40 minutes to\n> load the database, 20 minutes to create the indeces\n> and 20 minutes to run the first part of the\n> benchmark (make_sqs). For running the benchmark\n> in 20 minutes (without swapping) one needs 384 MB RAM.\n> \n> The second part (make_nqs) contains update\n> statements which can not be performed properly\n> using PostgreSQL.\n> \n> For testing it is sufficient to initialize the\n> database and then to perform a query like\n> \n> update bench set k500k = k500k + 1 where k100 = 30\n> \n> \n> Edmund\n> \n> -- \n> Edmund Mergl mailto:[email protected]\n> Im Haldenhau 9 http://www.bawue.de/~mergl\n> 70565 Stuttgart fon: +49 711 747503\n> Germany\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 23 May 1999 11:35:09 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Going through the documentation I can only find little about outer\njoins. One statement is in the Changes doc about including syntax for\nouter joins, but there doesn't seem to be implemented any code after\nthat.\n\nIs it true that there's no outer joins yet? Any plans? Btw. what is the\nsyntax for outer joins. I know only Oracle's (+) operator.\n\n",
"msg_date": "Sun, 23 May 1999 09:58:59 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Outer joins"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> When loading 100.000 rows into the table everything works ok. Selects\n> and updates are reasonable fast. But when loading 1.000.000 rows the\n> select statements still work, but a simple update statement shows this\n> strange behavior. A never ending disk-activity starts. Memory\n> consumption increases up to the physical limit (384 MB) whereas the\n> postmaster uses only a few % of CPU time. After 1 hour I killed the\n> post-master.\n\nI tried to reproduce this with current sources on a rather underpowered\nLinux box (64Mb of memory, about 40Mb of which is locked down by a\nhigh-priority data collection process). It took a *long* time, but\nas far as I could see it was all disk activity, and that's hardly\nsurprising given the drastic shortage of buffer cache memory.\nIn particular I did not see any dramatic growth in the size of the\nbackend process. The test case\n\nupdate bench set k500k = k500k + 1 where k100 = 30;\n\nrequired a maximum of 10Mb.\n\nPerhaps you could try it again with a current 6.5 snapshot and see\nwhether things are any better?\n\nAlso, I suspect that increasing the postmaster -B setting beyond its\ndefault of 64 would be quite helpful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 20:43:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
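One cheap check along these lines (a sketch, reusing the hypothetical index names from the table sketch earlier in the thread): run the benchmark's own VACUUM ANALYZE, then EXPLAIN the statement to confirm the planner uses the k100 index rather than scanning the whole million rows:

    VACUUM ANALYZE bench;
    EXPLAIN UPDATE bench SET k500k = k500k + 1 WHERE k100 = 30;
    -- hope for: Index Scan using bench_k100 on bench ...
    -- a Seq Scan over 1,000,000 rows would by itself account
    -- for a lot of the observed disk activity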
{
"msg_contents": "Tom,\n\ndid you wait until test finished.\nI also tried to reproduce test with current 6.5 cvs, Linux 2.0.36,\nDUAL PPRO 256Mb. It's still running, it's extremely slow, but\nmemory usage was about 10-11Mb, CPU usage about 5-9%. \nI use -B 1024 option. No surprize people won't use Postgres \nfor large application.\n\n9:12[postgres@zeus]:~/test/sqlbench> cat cat L9905232104.txt\n\npostgresql-6.5pre on linux-2.2.7\nStart of inserting 1000000 rows: Sun May 23 21:04:32 MSD 1999\nStart of indexing 1000000 rows: Mon May 24 00:09:47 MSD 1999\nStart of SetQuery single user: Mon May 24 03:24:01 MSD 1999\nStart of NewQuery single user: Mon May 24 05:23:41 MSD 1999\n\n9:15[postgres@zeus]:~/test/sqlbench>gdb /usr/local/pgsql.65/bin/postgres 10130\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux),\nCopyright 1996 Free Software Foundation, Inc...\n\n/usr2/u/postgres/test/sqlbench/10130: No such file or directory.\nAttaching to program /usr/local/pgsql.65/bin/postgres', process 10130\nReading symbols from /lib/libdl.so.1...done.\nReading symbols from /lib/libm.so.5...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /lib/libncurses.so.3.0...done.\nReading symbols from /lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\n0x400c0564 in __read ()\n(gdb) bt\n#0 0x400c0564 in __read ()\n#1 0x80e5abb in FileRead ()\n#2 0x80ec793 in mdread ()\n#3 0x80ed3b5 in smgrread ()\n#4 0x80e34d2 in ReadBufferWithBufferLock ()\n#5 0x80e33b2 in ReadBuffer ()\n#6 0x806ff28 in heap_fetch ()\n#7 0x809ec19 in IndexNext ()\n#8 0x809b3e9 in ExecScan ()\n#9 0x809ed61 in ExecIndexScan ()\n#10 0x8099a46 in ExecProcNode ()\n#11 0x809d1bd in ExecAgg ()\n#12 0x8099ab6 in ExecProcNode ()\n#13 0x80989f0 in ExecutePlan ()\n#14 0x80982eb in ExecutorRun ()\n#15 0x80eff54 in ProcessQueryDesc ()\n#16 0x80effce in ProcessQuery ()\n#17 0x80ee783 in pg_exec_query_dest ()\n#18 0x80ee664 in pg_exec_query ()\n#19 0x80ef8d8 in PostgresMain ()\n#20 0x80d7290 in DoBackend ()\n#21 0x80d6dd3 in BackendStartup ()\n#22 0x80d6496 in ServerLoop ()\n---Type <return> to continue, or q <return> to quit---\n#23 0x80d603c in PostmasterMain ()\n#24 0x80a9287 in main ()\n#25 0x806502e in _start ()\n(gdb) \n\nTop shows:\n\n10130 postgres 7 0 11020 10M 9680 D 0 5.9 4.0 5:04 postmaster\n\n\n\tRegards,\n\n\t\tOleg\n\nOn Sun, 23 May 1999, Tom Lane wrote:\n\n> Date: Sun, 23 May 1999 20:43:33 -0400\n> From: Tom Lane <[email protected]>\n> To: Edmund Mergl <[email protected]>\n> Cc: PostgreSQL Hackers Mailinglist <[email protected]>\n> Subject: Re: [HACKERS] strange behavior of UPDATE \n> \n> Edmund Mergl <[email protected]> writes:\n> > When loading 100.000 rows into the table everything works ok. Selects\n> > and updates are reasonable fast. But when loading 1.000.000 rows the\n> > select statements still work, but a simple update statement shows this\n> > strange behavior. A never ending disk-activity starts. Memory\n> > consumption increases up to the physical limit (384 MB) whereas the\n> > postmaster uses only a few % of CPU time. After 1 hour I killed the\n> > post-master.\n> \n> I tried to reproduce this with current sources on a rather underpowered\n> Linux box (64Mb of memory, about 40Mb of which is locked down by a\n> high-priority data collection process). 
It took a *long* time, but\n> as far as I could see it was all disk activity, and that's hardly\n> surprising given the drastic shortage of buffer cache memory.\n> In particular I did not see any dramatic growth in the size of the\n> backend process. The test case\n> \n> update bench set k500k = k500k + 1 where k100 = 30;\n> \n> required a maximum of 10Mb.\n> \n> Perhaps you could try it again with a current 6.5 snapshot and see\n> whether things are any better?\n> \n> Also, I suspect that increasing the postmaster -B setting beyond its\n> default of 64 would be quite helpful.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 24 May 1999 09:17:53 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "> Going through the documentation I can only find little about outer\n> joins. One statement is in the Changes doc about including syntax for\n> outer joins, but there doesn't seem to be implemented any code after\n> that.\n> Is it true that there's no outer joins yet? Any plans? Btw. what is the\n> syntax for outer joins. I know only Oracle's (+) operator.\n\nThere is a small amount of code inside of #ifdef ENABLE_OUTER_JOINS\nbut it is not even close to what needs to be present for anything to\nrun. Bruce and I were talking about an implementation, but it is\ndefinitely not coming for v6.5.\n\n - Thomas\n\nOh, the syntax has lots of variants, but the basic one is:\n\nselect * from t1 left|right|full outer join t2 on t1.x = t2.x;\n\nor\n\nselect * from t1 left|right|full outer join t2 using (x);\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 24 May 1999 14:06:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
},
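A small illustration of the null-fill behavior described above (hypothetical tables and data; since outer joins are not implemented yet, this shows only the intended SQL92 semantics, not anything runnable today):

    CREATE TABLE t1 (x int4, a text);
    CREATE TABLE t2 (x int4, b text);
    INSERT INTO t1 VALUES (1, 'one');
    INSERT INTO t1 VALUES (2, 'two');
    INSERT INTO t2 VALUES (1, 'uno');

    SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x;
    -- x | a   | x | b
    -- 1 | one | 1 | uno
    -- 2 | two |   |      <- no match in t2, so t2's columns are NULL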
{
"msg_contents": "> select * from t1 left|right|full outer join t2 on t1.x = t2.x;\n\nWill this be correct?\n\nSELECT * FROM t1, t2, t3, t4 LEFT OUTER JOIN ON t1.x = t2.x, \n t1.x = t3.x, t1.x = t4.x;\n\n",
"msg_date": "Sat, 29 May 1999 08:38:15 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
},
{
"msg_contents": "> > select * from t1 left|right|full outer join t2 on t1.x = t2.x;\n> Will this be correct?\n> SELECT * FROM t1, t2, t3, t4 LEFT OUTER JOIN ON t1.x = t2.x,\n> t1.x = t3.x, t1.x = t4.x;\n\nLeft outer joins will take the left-side table and null-fill entries\nwhich do not have a corresponding match on the right-side table. If\nyour example is trying to get an output row for at least every input\nrow from t1, then perhaps the query would be\n\nselect * from t1 left join t2 using (x)\n left join t3 using (x)\n left join t4 using (x);\n\nBut since I haven't implemented it yet I don't have much experience\nwith the outer join syntax...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 01 Jun 1999 15:40:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
},
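Extending the same hypothetical data with a third table shows how the chained form accumulates (again, semantics only):

    CREATE TABLE t3 (x int4, c text);
    INSERT INTO t3 VALUES (2, 'due');

    SELECT * FROM t1 LEFT JOIN t2 USING (x)
                     LEFT JOIN t3 USING (x);
    -- x | a   | b   | c
    -- 1 | one | uno |
    -- 2 | two |     | due
    -- every t1 row survives both joins; each right-hand table
    -- null-fills independently where it has no match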
{
"msg_contents": "> Left outer joins will take the left-side table and null-fill entries\n> which do not have a corresponding match on the right-side table. If\n> your example is trying to get an output row for at least every input\n> row from t1, then perhaps the query would be\n> \n> select * from t1 left join t2 using (x)\n> left join t3 using (x)\n> left join t4 using (x);\n> \n> But since I haven't implemented it yet I don't have much experience\n> with the outer join syntax...\n\nYou miss at least two points: The keyword OUTER and the column name\nfrom t1. As I know, LEFT is the default, so it could be omitted.\n\nMaybe\nSELECT * FROM t1 USING (X) OUTER JOIN t2 USING (Y)\n OUTER JOIN t3 USING (Z)\n OUTER JOIN t4 using (t);\n\nIt should be possible to boil it down to\nSELECT * FROM t1 USING (X) OUTER JOIN t2 USING (Y), t3 USING (Z), t4 using (t);\n\n\n\n",
"msg_date": "Sat, 5 Jun 1999 08:58:05 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
},
{
"msg_contents": "> > Left outer joins will take the left-side table and null-fill entries\n> > which do not have a corresponding match on the right-side table. If\n> > your example is trying to get an output row for at least every input\n> > row from t1, then perhaps the query would be\n> > select * from t1 left join t2 using (x)\n> > left join t3 using (x)\n> > left join t4 using (x);\n> > But since I haven't implemented it yet I don't have much experience\n> > with the outer join syntax...\n> You miss at least two points: The keyword OUTER and the column name\n> from t1. As I know, LEFT is the default, so it could be omitted.\n\n\"OUTER\" conveys no additional information, and can be omitted. My copy\nof Date and Darwen indicates that \"LEFT JOIN\" is the minimum required\nto get a left outer join (i.e. the \"LEFT\" can not be omitted).\n\nI'm not sure what you mean about missing something about \"the column\nname for t1\". My hypothetical query is referring to column \"x\",\npresent in all four tables. Was there some other place a column for t1\nshould be mentioned?\n\n> Maybe\n> SELECT * FROM t1 USING (X) OUTER JOIN t2 USING (Y)\n> OUTER JOIN t3 USING (Z)\n> OUTER JOIN t4 using (t);\n> It should be possible to boil it down to\n> SELECT * FROM t1 USING (X) OUTER JOIN t2 USING (Y), t3 USING (Z), t4 using (t);\n\nThis doesn't resemble SQL92, but may have some similarity to outer\njoin syntaxes in Oracle, Sybase, etc. Don't know myself.\n\nA (hypothetical) simple two table outer join can be written as\n\n select * from t1 left join t2 using (x);\n\nIntroducing a third table to be \"left outer joined\" to this\nintermediate result can be done as\n\n select * from t1 left join t2 using (x)\n left join t3 using (x);\n\nwhere the second \"x\" refers to the column named \"x\" from the first\nouter join, and the column named \"x\" from t3.\n\nAn alternate equivalent query would be\n\n select * from t1 left join t2 on t1.x = t2.x\n left join t3 on x = t3.x;\n\nHope this helps (and that I've got the details right now that I've\nspouted off... :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 06:37:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
},
{
"msg_contents": "> \"OUTER\" conveys no additional information, and can be omitted. My copy\n\nSorry. You're right. Just as long as you accept it.\n\n> I'm not sure what you mean about missing something about \"the column\n> name for t1\". My hypothetical query is referring to column \"x\",\n> present in all four tables. Was there some other place a column for t1\n> should be mentioned?\n\nWhat if the column is named order_id in one table and ord_id in another?\n\n> select * from t1 left join t2 on t1.x = t2.x\n> left join t3 on x = t3.x;\n\nOK, this will do it. You can have a t1.x = t2.y.\n\n\n",
"msg_date": "Mon, 7 Jun 1999 23:10:53 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Outer joins"
}
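To tie off the order_id/ord_id question with a concrete (hypothetical) example: USING requires the same column name on both sides, so differently named join columns need the ON form:

    -- not expressible with USING, since the names differ:
    SELECT * FROM orders o LEFT JOIN order_lines l
        ON o.order_id = l.ord_id;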
] |
[
{
"msg_contents": "Oleg Bartunov <[email protected]>\n> After dumping (by pg_dump) and restoring views becomes a tables\n> \n\nThe problem is that views are dumped with anm extraneous \"WHERE\"\n\n> ............................\n> QUERY: COPY \"t1\" FROM stdin;\n> CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \n\"t1\";\n> QUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" \nFROM \"t1\";\n\n...................................................++++++\n\n> ERROR: parser: parse error at or near \"do\"\n> EOF\n\nWhich causes this error and the rule (View) is not Created.\n\nI don't know how the where clause gets in there but if you\nedit the dump before restoring all is OK.\n\nKeith.\n\n",
"msg_date": "Fri, 21 May 1999 22:34:50 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)"
}
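For anyone hitting this before a fix lands, the hand edit described above is just deleting the stray WHERE from the dumped rule (a sketch based on the statements quoted in the message):

    -- as dumped (broken):
    --   CREATE RULE "_RETv1" AS ON SELECT TO "v1" WHERE DO INSTEAD
    --       SELECT "a" FROM "t1";
    -- after editing (loads fine):
    CREATE RULE "_RETv1" AS ON SELECT TO "v1" DO INSTEAD
        SELECT "a" FROM "t1";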
] |
[
{
"msg_contents": "I am now using ddd to do debugging. Very nice.\n\nHowever, the \"> \" prompt is confusing ddd. Can I change it to something\nelse, so that me and other ddd users will be OK too. This is only the\nprompt you get when you run the postgres backend manually from the\ncommand line. Perhaps I can change it to \"backend>\". I tried the fix,\nand it worked great.\n\nI am going to commit the change, and reverse it out if anyone complains.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 22:54:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "ddd and postgres prompt"
},
{
"msg_contents": "> I am now using ddd to do debugging. Very nice.\n> \n> However, the \"> \" prompt is confusing ddd. Can I change it to something\n> else, so that me and other ddd users will be OK too. This is only the\n> prompt you get when you run the postgres backend manually from the\n> command line. Perhaps I can change it to \"backend>\". I tried the fix,\n> and it worked great.\n> \n> I am going to commit the change, and reverse it out if anyone complains.\n\nddd is too cool. If you put your mouse pointer on a variable name in\nthe source code, a bubble comes up that displays the variable's value. \nYou can get information about ddd (and lesstiff) at www.xshare.com,\namoung other software.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 May 1999 23:22:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ddd and postgres prompt"
}
] |
[
{
"msg_contents": "I have fixed the problem with DEFAULT ''.\n\n\ttest=> create table t1 (str1 char(2) default '',str2 text default\n\t'',str3 text default '' ); \n\tCREATE\n\ttest=> insert into t1 values ('aa', 'string2', 'string3');\n\tINSERT 18830 1\n\ttest=> insert into t1 (str3) values ('string3');\n\tINSERT 18831 1\n\ttest=> select * from t1;\n\tstr1|str2 |str3 \n\t----+-------+-------\n\taa |string2|string3\n\t | |string3\n\t(2 rows)\n\nThe fix is to pass atttypmod into parse_coerce(), and when a bpchar type\nis the output type, we pass the atttypmod value into bpcharin, so the\ntype is properly padded.\n\nThe bad news is that many other calls to parse_coerce do not have access\nto the column's atttypmod value, so they don't properly pad the\ncoersion.\n\nDoes anyone have an opinion on this? Why does only DEFAULT have this\nproblem? Does anyone know how inserts of '' into char() fields get\npadded with the proper atttypmod value? Do I need to pass atttypmod to\nall the functions that call parse_coerce, so I can pass a value for all\ncases?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? src/Makefile.custom\n? src/config.log\n? src/log\n? src/config.cache\n? src/config.status\n? src/GNUmakefile\n? src/Makefile.global\n? src/backend/fmgr.h\n? src/backend/parse.h\n? src/backend/postgres\n? src/backend/global1.bki.source\n? src/backend/local1_template1.bki.source\n? src/backend/global1.description\n? src/backend/local1_template1.description\n? src/backend/bootstrap/bootparse.c\n? src/backend/bootstrap/bootstrap_tokens.h\n? src/backend/bootstrap/bootscanner.c\n? src/backend/catalog/genbki.sh\n? src/backend/catalog/global1.bki.source\n? src/backend/catalog/global1.description\n? src/backend/catalog/local1_template1.bki.source\n? src/backend/catalog/local1_template1.description\n? src/backend/port/Makefile\n? src/backend/utils/Gen_fmgrtab.sh\n? src/backend/utils/fmgr.h\n? src/backend/utils/fmgrtab.c\n? src/bin/cleardbdir/cleardbdir\n? src/bin/createdb/createdb\n? src/bin/createlang/createlang\n? src/bin/createuser/createuser\n? src/bin/destroydb/destroydb\n? src/bin/destroylang/destroylang\n? src/bin/destroyuser/destroyuser\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_dump/Makefile\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pg_version/Makefile\n? src/bin/pg_version/pg_version\n? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/Makefile\n? src/bin/psql/psql\n? src/include/version.h\n? src/include/config.h\n? src/interfaces/ecpg/lib/Makefile\n? src/interfaces/ecpg/lib/libecpg.so.3.0.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgtcl/Makefile\n? src/interfaces/libpgtcl/libpgtcl.so.2.0\n? src/interfaces/libpq/Makefile\n? src/interfaces/libpq/libpq.so.2.0\n? src/interfaces/libpq++/Makefile\n? src/interfaces/libpq++/libpq++.so.2.0\n? src/interfaces/odbc/GNUmakefile\n? src/interfaces/odbc/Makefile.global\n? src/lextest/lex.yy.c\n? src/lextest/lextest\n? src/pl/plpgsql/src/Makefile\n? src/pl/plpgsql/src/mklang.sql\n? src/pl/plpgsql/src/pl_gram.c\n? src/pl/plpgsql/src/pl.tab.h\n? src/pl/plpgsql/src/pl_scan.c\n? src/pl/tcl/mkMakefile.tcldefs.sh\n? 
src/pl/tcl/Makefile.tcldefs\n? src/test/regress/log\n? src/test/regress/log2\nIndex: src/backend/catalog/heap.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/heap.c,v\nretrieving revision 1.83\ndiff -c -r1.83 heap.c\n*** src/backend/catalog/heap.c\t1999/05/21 18:33:12\t1.83\n--- src/backend/catalog/heap.c\t1999/05/22 04:05:41\n***************\n*** 1538,1563 ****\n \n \tif (type != atp->atttypid)\n \t{\n! \t\t/*\n! \t\t *\tThough these types are binary compatible, bpchar has a fixed\n! \t\t *\tlength on the disk, requiring non-bpchar types to be padded\n! \t\t *\tbefore storage in the default table. bjm 1999/05/18\n! \t\t */\n! \t\tif (1==0 && atp->atttypid == BPCHAROID &&\n! \t\t\t(type == TEXTOID || type == BPCHAROID || type == UNKNOWNOID))\n! \t\t{\n! \n! \t\t\tFuncCall *n = makeNode(FuncCall);\n! \n! \t\t\tn->funcname = typeidTypeName(atp->atttypid);\n! \t\t\tn->args = lcons((Node *)expr, NIL);\n! \t\t\texpr = transformExpr(NULL, (Node *) n, EXPR_COLUMN_FIRST);\n! \n! \t\t}\n! \t\telse if (IS_BINARY_COMPATIBLE(type, atp->atttypid))\n \t\t\t; /* use without change */\n \t\telse if (can_coerce_type(1, &(type), &(atp->atttypid)))\n! \t\t\texpr = coerce_type(NULL, (Node *)expr, type, atp->atttypid);\n \t\telse if (IsA(expr, Const))\n \t\t{\n \t\t\tif (*cast != 0)\n--- 1538,1548 ----\n \n \tif (type != atp->atttypid)\n \t{\n! \t\tif (IS_BINARY_COMPATIBLE(type, atp->atttypid))\n \t\t\t; /* use without change */\n \t\telse if (can_coerce_type(1, &(type), &(atp->atttypid)))\n! \t\t\texpr = coerce_type(NULL, (Node *)expr, type, atp->atttypid,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t atp->atttypmod);\n \t\telse if (IsA(expr, Const))\n \t\t{\n \t\t\tif (*cast != 0)\nIndex: src/backend/parser/parse_coerce.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_coerce.c,v\nretrieving revision 2.14\ndiff -c -r2.14 parse_coerce.c\n*** src/backend/parser/parse_coerce.c\t1999/05/22 02:55:57\t2.14\n--- src/backend/parser/parse_coerce.c\t1999/05/22 04:05:45\n***************\n*** 35,41 ****\n * Convert a function argument to a different type.\n */\n Node *\n! coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId)\n {\n \tNode\t *result = NULL;\n \tOid\t\t\tinfunc;\n--- 35,42 ----\n * Convert a function argument to a different type.\n */\n Node *\n! coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId,\n! \t\tint32 atttypmod)\n {\n \tNode\t *result = NULL;\n \tOid\t\t\tinfunc;\n***************\n*** 82,92 ****\n \t\t\t\tcon->consttype = targetTypeId;\n \t\t\t\tcon->constlen = typeLen(typeidType(targetTypeId));\n \n! \t\t\t\t/* use \"-1\" for varchar() type */\n \t\t\t\tcon->constvalue = (Datum) fmgr(infunc,\n \t\t\t\t\t\t\t\t\t\t\t val,\n \t\t\t\t\t\t\t\t\t\t\t typeidTypElem(targetTypeId),\n! \t\t\t\t\t\t\t\t\t\t\t -1);\n \t\t\t\tcon->constisnull = false;\n \t\t\t\tcon->constbyval = typeByVal(typeidType(targetTypeId));\n \t\t\t\tcon->constisset = false;\n--- 83,98 ----\n \t\t\t\tcon->consttype = targetTypeId;\n \t\t\t\tcon->constlen = typeLen(typeidType(targetTypeId));\n \n! \t\t\t\t/*\n! \t\t\t\t *\tUse \"-1\" for varchar() type.\n! \t\t\t\t *\tFor char(), we need to pad out the type with the proper\n! \t\t\t\t *\tnumber of spaces. This was a major problem for\n! \t\t\t\t * DEFAULT string constants to char() types.\n! 
\t\t\t\t */\n \t\t\t\tcon->constvalue = (Datum) fmgr(infunc,\n \t\t\t\t\t\t\t\t\t\t\t val,\n \t\t\t\t\t\t\t\t\t\t\t typeidTypElem(targetTypeId),\n! \t\t\t\t\t\t\t\t(targetTypeId != BPCHAROID) ? -1 : atttypmod);\n \t\t\t\tcon->constisnull = false;\n \t\t\t\tcon->constbyval = typeByVal(typeidType(targetTypeId));\n \t\t\t\tcon->constisset = false;\n***************\n*** 100,106 ****\n \t\tresult = node;\n \n \treturn result;\n! }\t/* coerce_type() */\n \n \n /* can_coerce_type()\n--- 106,112 ----\n \t\tresult = node;\n \n \treturn result;\n! }\n \n \n /* can_coerce_type()\n***************\n*** 178,184 ****\n \t}\n \n \treturn true;\n! }\t/* can_coerce_type() */\n \n \n /* TypeCategory()\n--- 184,190 ----\n \t}\n \n \treturn true;\n! }\n \n \n /* TypeCategory()\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.46\ndiff -c -r1.46 parse_expr.c\n*** src/backend/parser/parse_expr.c\t1999/05/18 23:40:05\t1.46\n--- src/backend/parser/parse_expr.c\t1999/05/22 04:05:45\n***************\n*** 417,423 ****\n \t\t\t\t\t}\n \t\t\t\t\telse if (can_coerce_type(1, &c->casetype, &ptype))\n \t\t\t\t\t{\n! \t\t\t\t\t\tc->defresult = coerce_type(pstate, c->defresult, c->casetype, ptype);\n \t\t\t\t\t\tc->casetype = ptype;\n \t\t\t\t\t}\n \t\t\t\t\telse\n--- 417,424 ----\n \t\t\t\t\t}\n \t\t\t\t\telse if (can_coerce_type(1, &c->casetype, &ptype))\n \t\t\t\t\t{\n! \t\t\t\t\t\tc->defresult = coerce_type(pstate, c->defresult,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tc->casetype, ptype, -1);\n \t\t\t\t\t\tc->casetype = ptype;\n \t\t\t\t\t}\n \t\t\t\t\telse\n***************\n*** 439,445 ****\n \t\t\t\t\t{\n \t\t\t\t\t\tif (can_coerce_type(1, &wtype, &ptype))\n \t\t\t\t\t\t{\n! \t\t\t\t\t\t\tw->result = coerce_type(pstate, w->result, wtype, ptype);\n \t\t\t\t\t\t}\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\n--- 440,447 ----\n \t\t\t\t\t{\n \t\t\t\t\t\tif (can_coerce_type(1, &wtype, &ptype))\n \t\t\t\t\t\t{\n! \t\t\t\t\t\t\tw->result = coerce_type(pstate, w->result, wtype,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tptype, -1);\n \t\t\t\t\t\t}\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\nIndex: src/backend/parser/parse_func.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_func.c,v\nretrieving revision 1.44\ndiff -c -r1.44 parse_func.c\n*** src/backend/parser/parse_func.c\t1999/05/17 17:03:33\t1.44\n--- src/backend/parser/parse_func.c\t1999/05/22 04:05:53\n***************\n*** 352,358 ****\n \t\t}\n \t\telse\n \t\t{\n- \n \t\t\t/*\n \t\t\t * Parsing aggregates.\n \t\t\t */\n--- 352,357 ----\n***************\n*** 361,367 ****\n \t\t\tint\t\t\t\tncandidates;\n \t\t\tCandidateList\tcandidates;\n \n- \n \t\t\t/*\n \t\t\t * the aggregate COUNT is a special case, ignore its base\n \t\t\t * type. Treat it as zero\n--- 360,365 ----\n***************\n*** 392,398 ****\n \t\t\t\ttype = agg_select_candidate(basetype, candidates);\n \t\t\t\tif (OidIsValid(type))\n \t\t\t\t{\n! \t\t\t\t\tlfirst(fargs) = coerce_type(pstate",
"msg_date": "Sat, 22 May 1999 00:09:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "DEFAULT fixed"
},
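As a catalog-level cross-check of what the fix hands to bpcharin (a sketch; the n+4 encoding is my reading of the convention that atttypmod for char(n)/varchar(n) stores the declared length plus VARHDRSZ):

    SELECT a.attname, a.atttypmod
    FROM pg_attribute a, pg_class c
    WHERE c.relname = 't1' AND a.attrelid = c.oid AND a.attnum > 0;
    -- expect str1 (char(2)) to show atttypmod = 6, i.e. 2 + VARHDRSZ;
    -- the text columns show -1, meaning "no typmod"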
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Does anyone have an opinion on this? Why does only DEFAULT have this\n> problem? Does anyone know how inserts of '' into char() fields get\n> padded with the proper atttypmod value? Do I need to pass atttypmod to\n> all the functions that call parse_coerce, so I can pass a value for all\n> cases?\n\nPossibly DEFAULT is the only case where the constant value created by\nthe parser will get shoved directly into a tuple with no run-time\ncoercion? That's strictly a guess. I agree this issue needs to be\nlooked at more closely.\n\nNow that we know the problem comes from missing atttypmod info, it\nseems likely that related failures can occur for NUMERIC and other\ntypes that depend on atttypmod. (Are there any such types? Even\nif there aren't now, there will probably be more and more in future.)\nIt might be best to just bite the bullet and make the parser carry\naround both the type's OID and typmod at all times.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 May 1999 10:45:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEFAULT fixed "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Does anyone have an opinion on this? Why does only DEFAULT have this\n> > problem? Does anyone know how inserts of '' into char() fields get\n> > padded with the proper atttypmod value? Do I need to pass atttypmod to\n> > all the functions that call parse_coerce, so I can pass a value for all\n> > cases?\n> \n> Possibly DEFAULT is the only case where the constant value created by\n> the parser will get shoved directly into a tuple with no run-time\n> coercion? That's strictly a guess. I agree this issue needs to be\n> looked at more closely.\n> \n> Now that we know the problem comes from missing atttypmod info, it\n> seems likely that related failures can occur for NUMERIC and other\n> types that depend on atttypmod. (Are there any such types? Even\n> if there aren't now, there will probably be more and more in future.)\n> It might be best to just bite the bullet and make the parser carry\n> around both the type's OID and typmod at all times.\n\nThat was my guess too, that atttypmod would become more important.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 May 1999 20:55:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
},
{
"msg_contents": "> Now that we know the problem comes from missing atttypmod info, it\n> seems likely that related failures can occur for NUMERIC and other\n> types that depend on atttypmod. (Are there any such types? Even\n> if there aren't now, there will probably be more and more in future.)\n> It might be best to just bite the bullet and make the parser carry\n> around both the type's OID and typmod at all times.\n\nI will try to add it, but I must not that there are some cases where I\ndon't have access to the atttypmod of the result, so it may not be\npossible to do it in every case. Perhaps I should just leave this for\npost 6.5, because we don't have any other bug reports on it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 May 1999 21:12:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It might be best to just bite the bullet and make the parser carry\n>> around both the type's OID and typmod at all times.\n\n> I will try to add it, but I must not that there are some cases where I\n> don't have access to the atttypmod of the result, so it may not be\n> possible to do it in every case. Perhaps I should just leave this for\n> post 6.5, because we don't have any other bug reports on it.\n\nAfter further thought, I think this may be a more difficult and subtle\nissue than we've realized. In the current state of the system, there\nare many places where you have a value that you can only know the type\nOID for, not atttypmod --- specifically, any intermediate expression\nresult. Barring reworking the entire function-call mechanism to pass\natttypmod around, that's not going to be possible to change.\n\nThe only context where you really know atttypmod is where you have\njust fetched a value out of a table column or are about to store a\nvalue into a table column. When storing, you need to be prepared to\ncoerce the given value to the right type if *either* type OID or\natttypmod is different --- but, in general, you don't know atttypmod\nfor the given value. (In the cases I know of, you can deduce it by\nexamining the value itself, but this requires type-specific knowledge.)\n\nSo on the whole I think this is something that has to be dealt with\nat the point of storing data into a tuple. Maybe we need a new\nfundamental operation for types that pay attention to atttypmod:\n\"make this value match the typmod of the target column, which is\nthus-and-so\". Trying to attack the problem from the source side by\npropagating typmod all around the parser is probably doomed to failure,\nbecause there will be many contexts where there's no way to know it.\n\nSince you have a fix for the only symptom reported to date, I'm\ninclined to agree that we should leave well enough alone for now;\nthere are other, bigger, problems that we need to address for 6.5.\nBut I think we'll have to come back to this issue later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 12:32:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEFAULT fixed "
},
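A concrete (hypothetical) instance of the intermediate-result point: a function result carries only its type OID, so its typmod is simply -1, and only the store into the column knows the target width:

    CREATE TABLE t5 (a char(5));
    -- upper('ab') is an intermediate result: type text, typmod -1
    INSERT INTO t5 VALUES (upper('ab'));
    -- whether 'AB' ends up blank-padded to 5 characters depends on
    -- a coercion applied at tuple-store time, which is exactly the
    -- new fundamental operation proposed above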
{
"msg_contents": "Actually, it's not as fixed as all that...\n\ncreate table foo1 (a char(5) default '', b int4);\ninsert into foo1 (b) values (334);\nselect * from foo1;\na | b\n-----+---\n |334\n(1 row)\n\nGood, the basic case is fixed, but:\n\ncreate table foo2 (a char(5) default text '', b int4);\ninsert into foo2 (b) values (334);\nselect * from foo2;\na| b\n-+--\n |16\n(1 row)\n\nOoops.\n\nWhat you seem to have done is twiddle the handling of DEFAULT clauses\nso that the value stored for the default expression is pre-coerced to the\ncolumn type. That's good as far as it goes, but it fails in cases where\nthe stored value has to be of a different type.\n\nMy guess is that what *really* ought to happen here is that\ntransformInsertStmt should check the type of the value it's gotten from\nthe default clause and apply coerce_type if necessary.\n\nUnless someone can come up with a less artificial example than the one\nabove, I'm inclined to leave it alone for 6.5. This is the same code\narea that will have to be redone to fix the INSERT ... SELECT problem\nI was chasing earlier today: coercion of the values produced by SELECT\nwill have to wait until the tail end of transformInsertStmt, and we\nmight as well make wrong-type default constants get fixed in the same\nplace. So I'm not eager to write some throwaway code to patch a problem\nthat no one is likely to see in practice. What's your feeling about it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 18:58:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEFAULT fixed "
},
{
"msg_contents": "> What's your feeling about it?\n\n(Back from the weekend). \nI'd vote for simple-fix-for-now...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 24 May 1999 13:50:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
},
{
"msg_contents": "> After further thought, I think this may be a more difficult and subtle\n> issue than we've realized. In the current state of the system, there\n> are many places where you have a value that you can only know the type\n> OID for, not atttypmod --- specifically, any intermediate expression\n> result. Barring reworking the entire function-call mechanism to pass\n> atttypmod around, that's not going to be possible to change.\n\nYes, I agree this is true, and I am not really sure how we can handle\nthis. The return of a char() field can probably assume that whatever\nthe length identified in the VARLENA length field is the proper length. \nvarchar() is much more complicated. Though the on-disk length doesn't\nhave to match the maximum length specified in the table creation\nstatement, we do need to truncate any overly long strings returned by\nfunctions.\n\nHowever, the good news is that there are only a few cases where we care\nabout atttypmod. If we are returning rows to the user from a function,\nwe don't care. They get whatever we produce. The only cases we care\nare in an INSERT, UPDATE, and now, as we have discovered, a DEFAULT\nclause on a CREATE TABLE. In all other cases, the atttypmod is not\nneeded.\n\nI still need to figure out where INSERT into a char() gets the string\npadded properly. Time to fire up the ddd debugger, now that I have the\n6.5 HISTORY file completed.\n\n> The only context where you really know atttypmod is where you have\n> just fetched a value out of a table column or are about to store a\n> value into a table column. When storing, you need to be prepared to\n> coerce the given value to the right type if *either* type OID or\n> atttypmod is different --- but, in general, you don't know atttypmod\n> for the given value. (In the cases I know of, you can deduce it by\n> examining the value itself, but this requires type-specific knowledge.)\n\nYes, I see what you mean.\n\n> So on the whole I think this is something that has to be dealt with\n> at the point of storing data into a tuple. Maybe we need a new\n> fundamental operation for types that pay attention to atttypmod:\n> \"make this value match the typmod of the target column, which is\n> thus-and-so\". Trying to attack the problem from the source side by\n> propagating typmod all around the parser is probably doomed to failure,\n> because there will be many contexts where there's no way to know it.\n\nExcellent idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:24:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
},
{
"msg_contents": "> Actually, it's not as fixed as all that...\n> \n> create table foo1 (a char(5) default '', b int4);\n> insert into foo1 (b) values (334);\n> select * from foo1;\n> a | b\n> -----+---\n> |334\n> (1 row)\n> \n> Good, the basic case is fixed, but:\n> \n> create table foo2 (a char(5) default text '', b int4);\n> insert into foo2 (b) values (334);\n> select * from foo2;\n> a| b\n> -+--\n> |16\n> (1 row)\n> \n> Ooops.\n> \n> What you seem to have done is twiddle the handling of DEFAULT clauses\n> so that the value stored for the default expression is pre-coerced to the\n> column type. That's good as far as it goes, but it fails in cases where\n> the stored value has to be of a different type.\n\nGee, I didn't know you could specify the type of the default. Look at\nthis:\n\n\tcreate table kk( x char(20) default bpchar '');\n\nThis is going to bypass the coerce_type, so I think this would fail too.\n\n> My guess is that what *really* ought to happen here is that\n> transformInsertStmt should check the type of the value it's gotten from\n> the default clause and apply coerce_type if necessary.\n\nAgain, in my example, it will not even get coerced.\n\n\n> Unless someone can come up with a less artificial example than the one\n> above, I'm inclined to leave it alone for 6.5. This is the same code\n> area that will have to be redone to fix the INSERT ... SELECT problem\n> I was chasing earlier today: coercion of the values produced by SELECT\n> will have to wait until the tail end of transformInsertStmt, and we\n> might as well make wrong-type default constants get fixed in the same\n> place. So I'm not eager to write some throwaway code to patch a problem\n> that no one is likely to see in practice. What's your feeling about it?\n\nYes, I think we will just wait on this one, and add it to our TODO list\nfor the next release. We still have some big items on the list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:31:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
},
{
"msg_contents": "Added to TODO:\n\n\t* CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on INSERT\n\n\n> Actually, it's not as fixed as all that...\n> \n> create table foo1 (a char(5) default '', b int4);\n> insert into foo1 (b) values (334);\n> select * from foo1;\n> a | b\n> -----+---\n> |334\n> (1 row)\n> \n> Good, the basic case is fixed, but:\n> \n> create table foo2 (a char(5) default text '', b int4);\n> insert into foo2 (b) values (334);\n> select * from foo2;\n> a| b\n> -+--\n> |16\n> (1 row)\n> \n> Ooops.\n> \n> What you seem to have done is twiddle the handling of DEFAULT clauses\n> so that the value stored for the default expression is pre-coerced to the\n> column type. That's good as far as it goes, but it fails in cases where\n> the stored value has to be of a different type.\n> \n> My guess is that what *really* ought to happen here is that\n> transformInsertStmt should check the type of the value it's gotten from\n> the default clause and apply coerce_type if necessary.\n> \n> Unless someone can come up with a less artificial example than the one\n> above, I'm inclined to leave it alone for 6.5. This is the same code\n> area that will have to be redone to fix the INSERT ... SELECT problem\n> I was chasing earlier today: coercion of the values produced by SELECT\n> will have to wait until the tail end of transformInsertStmt, and we\n> might as well make wrong-type default constants get fixed in the same\n> place. So I'm not eager to write some throwaway code to patch a problem\n> that no one is likely to see in practice. What's your feeling about it?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 16:56:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DEFAULT fixed"
}
] |
[
{
"msg_contents": "Here is the proper patch. The previous one was truncated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? src/Makefile.custom\n? src/config.log\n? src/log\n? src/config.cache\n? src/config.status\n? src/GNUmakefile\n? src/Makefile.global\n? src/backend/fmgr.h\n? src/backend/parse.h\n? src/backend/postgres\n? src/backend/global1.bki.source\n? src/backend/local1_template1.bki.source\n? src/backend/global1.description\n? src/backend/local1_template1.description\n? src/backend/bootstrap/bootparse.c\n? src/backend/bootstrap/bootstrap_tokens.h\n? src/backend/bootstrap/bootscanner.c\n? src/backend/catalog/genbki.sh\n? src/backend/catalog/global1.bki.source\n? src/backend/catalog/global1.description\n? src/backend/catalog/local1_template1.bki.source\n? src/backend/catalog/local1_template1.description\n? src/backend/port/Makefile\n? src/backend/utils/Gen_fmgrtab.sh\n? src/backend/utils/fmgr.h\n? src/backend/utils/fmgrtab.c\n? src/bin/cleardbdir/cleardbdir\n? src/bin/createdb/createdb\n? src/bin/createlang/createlang\n? src/bin/createuser/createuser\n? src/bin/destroydb/destroydb\n? src/bin/destroylang/destroylang\n? src/bin/destroyuser/destroyuser\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_dump/Makefile\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pg_version/Makefile\n? src/bin/pg_version/pg_version\n? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/Makefile\n? src/bin/psql/psql\n? src/include/version.h\n? src/include/config.h\n? src/interfaces/ecpg/lib/Makefile\n? src/interfaces/ecpg/lib/libecpg.so.3.0.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgtcl/Makefile\n? src/interfaces/libpgtcl/libpgtcl.so.2.0\n? src/interfaces/libpq/Makefile\n? src/interfaces/libpq/libpq.so.2.0\n? src/interfaces/libpq++/Makefile\n? src/interfaces/libpq++/libpq++.so.2.0\n? src/interfaces/odbc/GNUmakefile\n? src/interfaces/odbc/Makefile.global\n? src/lextest/lex.yy.c\n? src/lextest/lextest\n? src/pl/plpgsql/src/Makefile\n? src/pl/plpgsql/src/mklang.sql\n? src/pl/plpgsql/src/pl_gram.c\n? src/pl/plpgsql/src/pl.tab.h\n? src/pl/plpgsql/src/pl_scan.c\n? src/pl/tcl/mkMakefile.tcldefs.sh\n? src/pl/tcl/Makefile.tcldefs\n? src/test/regress/log\n? src/test/regress/log2\nIndex: src/backend/catalog/heap.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/heap.c,v\nretrieving revision 1.83\ndiff -c -r1.83 heap.c\n*** src/backend/catalog/heap.c\t1999/05/21 18:33:12\t1.83\n--- src/backend/catalog/heap.c\t1999/05/22 04:05:41\n***************\n*** 1538,1563 ****\n \n \tif (type != atp->atttypid)\n \t{\n! \t\t/*\n! \t\t *\tThough these types are binary compatible, bpchar has a fixed\n! \t\t *\tlength on the disk, requiring non-bpchar types to be padded\n! \t\t *\tbefore storage in the default table. bjm 1999/05/18\n! \t\t */\n! \t\tif (1==0 && atp->atttypid == BPCHAROID &&\n! \t\t\t(type == TEXTOID || type == BPCHAROID || type == UNKNOWNOID))\n! \t\t{\n! \n! \t\t\tFuncCall *n = makeNode(FuncCall);\n! \n! \t\t\tn->funcname = typeidTypeName(atp->atttypid);\n! \t\t\tn->args = lcons((Node *)expr, NIL);\n! 
\t\t\texpr = transformExpr(NULL, (Node *) n, EXPR_COLUMN_FIRST);\n! \n! \t\t}\n! \t\telse if (IS_BINARY_COMPATIBLE(type, atp->atttypid))\n \t\t\t; /* use without change */\n \t\telse if (can_coerce_type(1, &(type), &(atp->atttypid)))\n! \t\t\texpr = coerce_type(NULL, (Node *)expr, type, atp->atttypid);\n \t\telse if (IsA(expr, Const))\n \t\t{\n \t\t\tif (*cast != 0)\n--- 1538,1548 ----\n \n \tif (type != atp->atttypid)\n \t{\n! \t\tif (IS_BINARY_COMPATIBLE(type, atp->atttypid))\n \t\t\t; /* use without change */\n \t\telse if (can_coerce_type(1, &(type), &(atp->atttypid)))\n! \t\t\texpr = coerce_type(NULL, (Node *)expr, type, atp->atttypid,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t atp->atttypmod);\n \t\telse if (IsA(expr, Const))\n \t\t{\n \t\t\tif (*cast != 0)\nIndex: src/backend/parser/parse_coerce.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_coerce.c,v\nretrieving revision 2.14\ndiff -c -r2.14 parse_coerce.c\n*** src/backend/parser/parse_coerce.c\t1999/05/22 02:55:57\t2.14\n--- src/backend/parser/parse_coerce.c\t1999/05/22 04:05:45\n***************\n*** 35,41 ****\n * Convert a function argument to a different type.\n */\n Node *\n! coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId)\n {\n \tNode\t *result = NULL;\n \tOid\t\t\tinfunc;\n--- 35,42 ----\n * Convert a function argument to a different type.\n */\n Node *\n! coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId,\n! \t\tint32 atttypmod)\n {\n \tNode\t *result = NULL;\n \tOid\t\t\tinfunc;\n***************\n*** 82,92 ****\n \t\t\t\tcon->consttype = targetTypeId;\n \t\t\t\tcon->constlen = typeLen(typeidType(targetTypeId));\n \n! \t\t\t\t/* use \"-1\" for varchar() type */\n \t\t\t\tcon->constvalue = (Datum) fmgr(infunc,\n \t\t\t\t\t\t\t\t\t\t\t val,\n \t\t\t\t\t\t\t\t\t\t\t typeidTypElem(targetTypeId),\n! \t\t\t\t\t\t\t\t\t\t\t -1);\n \t\t\t\tcon->constisnull = false;\n \t\t\t\tcon->constbyval = typeByVal(typeidType(targetTypeId));\n \t\t\t\tcon->constisset = false;\n--- 83,98 ----\n \t\t\t\tcon->consttype = targetTypeId;\n \t\t\t\tcon->constlen = typeLen(typeidType(targetTypeId));\n \n! \t\t\t\t/*\n! \t\t\t\t *\tUse \"-1\" for varchar() type.\n! \t\t\t\t *\tFor char(), we need to pad out the type with the proper\n! \t\t\t\t *\tnumber of spaces. This was a major problem for\n! \t\t\t\t * DEFAULT string constants to char() types.\n! \t\t\t\t */\n \t\t\t\tcon->constvalue = (Datum) fmgr(infunc,\n \t\t\t\t\t\t\t\t\t\t\t val,\n \t\t\t\t\t\t\t\t\t\t\t typeidTypElem(targetTypeId),\n! \t\t\t\t\t\t\t\t(targetTypeId != BPCHAROID) ? -1 : atttypmod);\n \t\t\t\tcon->constisnull = false;\n \t\t\t\tcon->constbyval = typeByVal(typeidType(targetTypeId));\n \t\t\t\tcon->constisset = false;\n***************\n*** 100,106 ****\n \t\tresult = node;\n \n \treturn result;\n! }\t/* coerce_type() */\n \n \n /* can_coerce_type()\n--- 106,112 ----\n \t\tresult = node;\n \n \treturn result;\n! }\n \n \n /* can_coerce_type()\n***************\n*** 178,184 ****\n \t}\n \n \treturn true;\n! }\t/* can_coerce_type() */\n \n \n /* TypeCategory()\n--- 184,190 ----\n \t}\n \n \treturn true;\n! 
}\n \n \n /* TypeCategory()\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.46\ndiff -c -r1.46 parse_expr.c\n*** src/backend/parser/parse_expr.c\t1999/05/18 23:40:05\t1.46\n--- src/backend/parser/parse_expr.c\t1999/05/22 04:05:45\n***************\n*** 417,423 ****\n \t\t\t\t\t}\n \t\t\t\t\telse if (can_coerce_type(1, &c->casetype, &ptype))\n \t\t\t\t\t{\n! \t\t\t\t\t\tc->defresult = coerce_type(pstate, c->defresult, c->casetype, ptype);\n \t\t\t\t\t\tc->casetype = ptype;\n \t\t\t\t\t}\n \t\t\t\t\telse\n--- 417,424 ----\n \t\t\t\t\t}\n \t\t\t\t\telse if (can_coerce_type(1, &c->casetype, &ptype))\n \t\t\t\t\t{\n! \t\t\t\t\t\tc->defresult = coerce_type(pstate, c->defresult,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tc->casetype, ptype, -1);\n \t\t\t\t\t\tc->casetype = ptype;\n \t\t\t\t\t}\n \t\t\t\t\telse\n***************\n*** 439,445 ****\n \t\t\t\t\t{\n \t\t\t\t\t\tif (can_coerce_type(1, &wtype, &ptype))\n \t\t\t\t\t\t{\n! \t\t\t\t\t\t\tw->result = coerce_type(pstate, w->result, wtype, ptype);\n \t\t\t\t\t\t}\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\n--- 440,447 ----\n \t\t\t\t\t{\n \t\t\t\t\t\tif (can_coerce_type(1, &wtype, &ptype))\n \t\t\t\t\t\t{\n! \t\t\t\t\t\t\tw->result = coerce_type(pstate, w->result, wtype,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tptype, -1);\n \t\t\t\t\t\t}\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\nIndex: src/backend/parser/parse_func.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_func.c,v\nretrieving revision 1.44\ndiff -c -r1.44 parse_func.c\n*** src/backend/parser/parse_func.c\t1999/05/17 17:03:33\t1.44\n--- src/backend/parser/parse_func.c\t1999/05/22 04:05:53\n***************\n*** 352,358 ****\n \t\t}\n \t\telse\n \t\t{\n- \n \t\t\t/*\n \t\t\t * Parsing aggregates.\n \t\t\t */\n--- 352,357 ----\n***************\n*** 361,367 ****\n \t\t\tint\t\t\t\tncandidates;\n \t\t\tCandidateList\tcandidates;\n \n- \n \t\t\t/*\n \t\t\t * the aggregate COUNT is a special case, ignore its base\n \t\t\t * type. Treat it as zero\n--- 360,365 ----\n***************\n*** 392,398 ****\n \t\t\t\ttype = agg_select_candidate(basetype, candidates);\n \t\t\t\tif (OidIsValid(type))\n \t\t\t\t{\n! \t\t\t\t\tlfirst(fargs) = coerce_type(pstate, lfirst(fargs), basetype, type);\n \t\t\t\t\tbasetype = type;\n \n \t\t\t\t\treturn (Node *) ParseAgg(pstate, funcname, basetype,\n--- 390,397 ----\n \t\t\t\ttype = agg_select_candidate(basetype, candidates);\n \t\t\t\tif (OidIsValid(type))\n \t\t\t\t{\n! \t\t\t\t\tlfirst(fargs) = coerce_type(pstate, lfirst(fargs),\n! \t\t\t\t\t\t\t\t\t\t\t\tbasetype, type, -1);\n \t\t\t\t\tbasetype = type;\n \n \t\t\t\t\treturn (Node *) ParseAgg(pstate, funcname, basetype,\n***************\n*** 1316,1322 ****\n \t\t\tlfirst(current_fargs) = coerce_type(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\tlfirst(current_fargs),\n \t\t\t\t\t\t\t\t\t\t\t\tinput_typeids[i],\n! \t\t\t\t\t\t\t\t\t\t\t\tfunction_typeids[i]);\n \t\t}\n \t}\n }\n--- 1315,1321 ----\n \t\t\tlfirst(current_fargs) = coerce_type(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\tlfirst(current_fargs),\n \t\t\t\t\t\t\t\t\t\t\t\tinput_typeids[i],\n! 
\t\t\t\t\t\t\t\t\t\t\t\tfunction_typeids[i], -1);\n \t\t}\n \t}\n }\nIndex: src/backend/parser/parse_node.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_node.c,v\nretrieving revision 1.25\ndiff -c -r1.25 parse_node.c\n*** src/backend/parser/parse_node.c\t1999/05/10 00:45:28\t1.25\n--- src/backend/parser/parse_node.c\t1999/05/22 04:05:55\n***************\n*** 75,83 ****\n \n \t\t/* must coerce? */\n \t\tif (true_typeId != orig_typeId)\n! \t\t{\n! \t\t\tresult = coerce_type(NULL, tree, orig_typeId, true_typeId);\n! \t\t}\n \t}\n \t/* otherwise, this is a NULL value */\n \telse\n--- 75,81 ----\n \n \t\t/* must coerce? */\n \t\tif (true_typeId != orig_typeId)\n! \t\t\tresult = coerce_type(NULL, tree, orig_typeId, true_typeId, -1);\n \t}\n \t/* otherwise, this is a NULL value */\n \telse\nIndex: src/backend/parser/parse_relation.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_relation.c,v\nretrieving revision 1.20\ndiff -c -r1.20 parse_relation.c\n*** src/backend/parser/parse_relation.c\t1999/05/17 17:03:34\t1.20\n--- src/backend/parser/parse_relation.c\t1999/05/22 04:05:58\n***************\n*** 445,451 ****\n \t{\n \t\tif (can_coerce_type(1, &attrtype_id, &attrtype_target))\n \t\t{\n! \t\t\tNode\t *expr = coerce_type(pstate, expr, attrtype_id, attrtype_target);\n \n \t\t\telog(ERROR, \"Type %s(%d) can be coerced to match target column %s(%d)\",\n \t\t\t\t colname, get_atttypmod(rte->relid, resdomno_id),\n--- 445,453 ----\n \t{\n \t\tif (can_coerce_type(1, &attrtype_id, &attrtype_target))\n \t\t{\n! \t\t\tNode\t *expr = coerce_type(pstate, expr, attrtype_id,\n! \t\t\t\t\t\t\t\t\t\t\tattrtype_target,\n! \t\t\tget_atttypmod(pstate->p_target_relation->rd_id, resdomno_target));\n \n \t\t\telog(ERROR, \"Type %s(%d) can be coerced to match target column %s(%d)\",\n \t\t\t\t colname, get_atttypmod(rte->relid, resdomno_id),\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.37\ndiff -c -r1.37 parse_target.c\n*** src/backend/parser/parse_target.c\t1999/05/17 17:03:35\t1.37\n--- src/backend/parser/parse_target.c\t1999/05/22 04:06:00\n***************\n*** 121,127 ****\n \t\t{\n \t\t\tif (can_coerce_type(1, &attrtype_id, &attrtype_target))\n \t\t\t{\n! \t\t\t\texpr = coerce_type(pstate, node, attrtype_id, attrtype_target);\n \t\t\t\texpr = transformExpr(pstate, expr, EXPR_COLUMN_FIRST);\n \t\t\t\ttent = MakeTargetEntryExpr(pstate, *resname, expr, false, false);\n \t\t\t\texpr = tent->expr;\n--- 121,129 ----\n \t\t{\n \t\t\tif (can_coerce_type(1, &attrtype_id, &attrtype_target))\n \t\t\t{\n! \t\t\t\texpr = coerce_type(pstate, node, attrtype_id,\n! \t\t\t\t\t\t\t\t\tattrtype_target,\n! \t\t\tget_atttypmod(pstate->p_target_relation->rd_id, resdomno_target));\n \t\t\t\texpr = transformExpr(pstate, expr, EXPR_COLUMN_FIRST);\n \t\t\t\ttent = MakeTargetEntryExpr(pstate, *resname, expr, false, false);\n \t\t\t\texpr = tent->expr;\n***************\n*** 666,672 ****\n {\n \tif (can_coerce_type(1, &type_id, &attrtype))\n \t{\n! \t\texpr = coerce_type(pstate, expr, type_id, attrtype);\n \t}\n \n #ifndef DISABLE_STRING_HACKS\n--- 668,674 ----\n {\n \tif (can_coerce_type(1, &type_id, &attrtype))\n \t{\n! 
\t\texpr = coerce_type(pstate, expr, type_id, attrtype, -1);\n \t}\n \n #ifndef DISABLE_STRING_HACKS\n***************\n*** 683,689 ****\n \t\t{\n \t\t}\n \t\telse if (can_coerce_type(1, &type_id, &text_id))\n! \t\t\texpr = coerce_type(pstate, expr, type_id, text_id);\n \t\telse\n \t\t\texpr = NULL;\n \t}\n--- 685,691 ----\n \t\t{\n \t\t}\n \t\telse if (can_coerce_type(1, &type_id, &text_id))\n! \t\t\texpr = coerce_type(pstate, expr, type_id, text_id, -1);\n \t\telse\n \t\t\texpr = NULL;\n \t}\nIndex: src/include/parser/parse_coerce.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/parser/parse_coerce.h,v\nretrieving revision 1.9\ndiff -c -r1.9 parse_coerce.h\n*** src/include/parser/parse_coerce.h\t1999/03/10 05:05:58\t1.9\n--- src/include/parser/parse_coerce.h\t1999/05/22 04:06:19\n***************\n*** 121,126 ****\n extern CATEGORY TypeCategory(Oid type);\n \n extern bool can_coerce_type(int nargs, Oid *input_typeids, Oid *func_typeids);\n! extern Node *coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId);\n \n #endif\t /* PARSE_COERCE_H */\n--- 121,127 ----\n extern CATEGORY TypeCategory(Oid type);\n \n extern bool can_coerce_type(int nargs, Oid *input_typeids, Oid *func_typeids);\n! extern Node *coerce_type(ParseState *pstate, Node *node, Oid inputTypeId,\n! \t\t\t\t\t\t Oid targetTypeId, int32 atttypmod);\n \n #endif\t /* PARSE_COERCE_H */",
"msg_date": "Sat, 22 May 1999 00:10:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "DEFAULT '' fixed"
}
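A minimal sketch of the problem class this patch addresses, with hypothetical table and column names: before the fix, a string DEFAULT on a char(n) column was run through the type's input function with a typmod of -1, so the stored default was not blank-padded to the declared width and would not compare equal to properly padded values.

    CREATE TABLE widgets (id int4, code char(4) DEFAULT 'ab');

    INSERT INTO widgets (id) VALUES (1);   -- code filled in from the DEFAULT
    INSERT INTO widgets VALUES (2, 'ab');  -- literal is padded to 'ab  ' on insert

    -- With coerce_type() now receiving atttypmod, the default constant is
    -- padded the same way, so both rows satisfy:
    SELECT * FROM widgets WHERE code = 'ab';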
] |
[
{
"msg_contents": "I have fixed:\n\n\tSELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\n\nOur grammer was way too overly complex in this area, and is much\nclearer, with no shift/reduce conflicts. To test, run postgres from the\ncommand line. psql already breaks up the commands.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 May 1999 01:07:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "fix for SELECT 1;\n\tSELECT 2 fails when sent not via psql, semicolon problem"
}
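A minimal reproduction of the case described above, assuming the two statements are delivered to the backend in a single query string (psql splits commands on semicolons client-side, which is why the bug did not show up there):

    SELECT 1; SELECT 2;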
] |
[
{
"msg_contents": "CVS as of right now (18:40 pm Central), I can't make(use) sequences, and\ninitdb puts part of pg_proc in the wrong place.\n\nRedhat Linux 6.0\nLinux 2.2.7\nglibc 2.1\n\n(SQL below directly from a pg_dump -z):\n\\connect - postgres\nCREATE SEQUENCE \"cagedata_id_seq\" start 165437 increment 1 maxvalue 2147483647 minvalue 1 cache 1 ;\nSELECT nextval ('cagedata_id_seq');\nERROR: No such function 'nextval' with the specified attributes\n\nI also just ran initdb on a clean install, and pg_proc.1 and\npg_proc_proname_narg_type_index.1 gets put in /home/postgres as well as in\n/home/postgres/data/base/template1\n\n-rw------- 1 postgres postgres 131072 May 22 18:38 data/base/template1/pg_proc\n-rw------- 1 postgres postgres 8192 May 22 18:40 data/base/template1/pg_proc.1\n-rw------- 1 postgres postgres 40960 May 22 18:40 data/base/template1/pg_proc_oid_index\n-rw------- 1 postgres postgres 131072 May 22 18:40 data/base/template1/pg_proc_proname_narg_type_index\n-rw------- 1 postgres postgres 57344 May 22 18:40 data/base/template1/pg_proc_prosrc_index\n-rw------- 1 postgres postgres 40960 May 22 18:38 pg_proc.1\n-rw------- 1 postgres postgres 8192 May 22 18:38 pg_proc_proname_narg_type_index.1\n\nThanks,\nOle Gjerde\n\n\n",
"msg_date": "Sat, 22 May 1999 19:30:44 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequence nexvtal() and initdb/pg_proc problem"
},
{
"msg_contents": "Ole Gjerde <[email protected]> writes:\n> CREATE SEQUENCE \"cagedata_id_seq\" start 165437 increment 1 maxvalue 2147483647 minvalue 1 cache 1 ;\n> SELECT nextval ('cagedata_id_seq');\n> ERROR: No such function 'nextval' with the specified attributes\n\nCan't duplicate that here --- but it might well be related to your\nbusted pg_proc table ...\n\n> I also just ran initdb on a clean install, and pg_proc.1 and\n> pg_proc_proname_narg_type_index.1 gets put in /home/postgres as well as in\n> /home/postgres/data/base/template1\n\nHmm, it sounds like something is being sloppy about attaching the full\ndatabase path to the names of relation extension files. During normal\nbackend operation, the backend is cd'd into the database directory,\nso it doesn't really matter whether you prepend the path or not.\nBut evidently that's not always true during initdb. You must be running\nwith a very low value of RELSEG_SIZE to have precipitated such a\nproblem, however.\n\nReasonable fixes would be either to force the appropriate cd during\ninitdb, or to find and fix the place that's touching extension segments\nusing a relative pathname. But I can't get excited about spending much\ntime on it, since the problem will never arise at realistic RELSEG_SIZE\nsettings...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 18:16:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sequence nexvtal() and initdb/pg_proc problem "
},
{
"msg_contents": "On Sun, 23 May 1999, Tom Lane wrote:\n[snip - nextval problem]\n> Can't duplicate that here --- but it might well be related to your\n> busted pg_proc table ...\n\nIndeed that was the problem.\n\n> But evidently that's not always true during initdb. You must be running\n> with a very low value of RELSEG_SIZE to have precipitated such a\n> problem, however.\n\nYes, I removed one too many 0's from RELSEG_SIZE to do some testing.\nI usually set it to 0x200000 / BLCKSZ for testing segment related things.\n\n> Reasonable fixes would be either to force the appropriate cd during\n> initdb, or to find and fix the place that's touching extension segments\n> using a relative pathname. But I can't get excited about spending much\n> time on it, since the problem will never arise at realistic RELSEG_SIZE\n> settings...\n\nIt's definately not worth the time right now. I will probably take a\nlook at this in couple of weeks, since it probably should be checked.\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Mon, 24 May 1999 01:48:07 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Sequence nexvtal() and initdb/pg_proc problem "
}
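For readers following the numbers: RELSEG_SIZE counts blocks, not bytes, so Ole's test value works out to tiny 2 MB segments (assuming the default 8 kB BLCKSZ), far below a normal production setting:

    RELSEG_SIZE = 0x200000 / BLCKSZ = 2097152 / 8192 = 256 blocks = 2 MB per segment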
] |
[
{
"msg_contents": "I thought I had already posted this query but now I can't remember. If\nI have please excuse the repeat. But as I can't remember the answer\neither can someone please comment.\n\nIs there likely to be any attempt to allow a table to be keyed. It\nseems that by default a table is created as a heap and in order to\nimprove access speed, one must create indices on that table.\n\nI use Ingres at work and quite like the ability to do a 'modify table to\nbtree' type of command. When the table concerned is basically only a\nkey plus value, it seems rather inefficient to have to have both the\nheap and then an index when supposedly one could simply make the table\ninto a btree in the first place.\n\n--\n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nGlen Eustace, on behalf of\nGodZone Internet Services, a division of AGRE Enterprises Limited.\n176 Te Awe Awe St, Palmerston North, New Zealand\nPh: +64 6 356 2562, Fax: +64 6 357 0271, Mobile: 025 416 184,\nhttp://WWW.GodZone.Net.NZ\n\n",
"msg_date": "Sun, 23 May 1999 17:04:50 +1200",
"msg_from": "Glen and Rosanne Eustace <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keyed Tables"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> I thought I had already posted this query but now I can't remember. If\n> I have please excuse the repeat. But as I can't remember the answer\n> either can someone please comment.\n> \n> Is there likely to be any attempt to allow a table to be keyed. It\n> seems that by default a table is created as a heap and in order to\n> improve access speed, one must create indices on that table.\n> \n> I use Ingres at work and quite like the ability to do a 'modify table to\n> btree' type of command. When the table concerned is basically only a\n> key plus value, it seems rather inefficient to have to have both the\n> heap and then an index when supposedly one could simply make the table\n> into a btree in the first place.\n\nYes, it is a nice feature, but we don't support it. We do have CLUSTER,\nbut that is not as nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 May 1999 02:32:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Keyed Tables"
},
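For anyone wanting the closest PostgreSQL analogue to Ingres' 'modify table to btree', a hedged sketch using the CLUSTER command Bruce mentions (table and index names are invented; note that CLUSTER reorders the heap once, and the ordering is not maintained by later inserts):

    CREATE TABLE kv (k int4, v text);
    CREATE INDEX kv_k_idx ON kv (k);
    CLUSTER kv_k_idx ON kv;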
{
"msg_contents": "> Yes, it is a nice feature, but we don't support it. We do \n> have CLUSTER, but that is not as nice.\n\nAny chance of adding it to the list of possible enhancements ?\n\n",
"msg_date": "Mon, 24 May 1999 07:30:27 +1200",
"msg_from": "Glen and Rosanne Eustace <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [GENERAL] Keyed Tables"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Yes, it is a nice feature, but we don't support it. We do \n> > have CLUSTER, but that is not as nice.\n> \n> Any chance of adding it to the list of possible enhancements ?\n\nNot sure it is do-able for us. It would require so much work, that I\nhesitate to add it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 May 1999 17:03:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Keyed Tables"
},
{
"msg_contents": "Whats the possibility of having full text searches added to text fields?\n\nThat would be awesome.....\n\nAndy\n\n",
"msg_date": "Sun, 23 May 1999 19:21:08 -0500 (CDT)",
"msg_from": "Andy Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Full Text Searches"
},
{
"msg_contents": "On Sun, 23 May 1999, Andy Lewis wrote:\n\n> Whats the possibility of having full text searches added to text fields?\n> \n> That would be awesome.....\n\nUnfortunately, full text indexing is a different issue than the kind of\nindexing performed on table columns, and if you want to do any kind of\nefficient full text searching, you have to index the individual words in\nthe text or it'd be so slow as to be hardly useful (especially if you're\ntalking about 600,000 records with 2K of text in each text field).\n\nExcalibur, for instance, creates its own internal indexing for full text\nrecords, but uses an underlying SQL database for regular fielded data, and\nwhen you design your database, you have to make the distinction about what\nkind of indexing you want, stop words (words you don't want indexed, like\n'the' and 'of'), and the way certain fields can or will be searched.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy/\n-----------------------------------------------------------------------\nThe six great gifts of an Irish girl are beauty, soft voice, sweet speech,\nwisdom, needlework, and chastity.\n\t\t-- Theodore Roosevelt, 1907\n\n\n",
"msg_date": "Sun, 23 May 1999 22:26:08 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
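A hedged sketch of the word-level indexing Brett describes -- the schema here is invented for illustration and is not the contrib module's actual layout:

    -- One row per indexed word turns a word search into an indexed join
    -- instead of a scan over every 2K text field:
    CREATE TABLE docs (id int4, body text);
    CREATE TABLE doc_words (word text, doc_id int4);
    CREATE INDEX doc_words_word_idx ON doc_words (word);

    SELECT d.id, d.body
    FROM docs d, doc_words w
    WHERE w.word = 'needlework' AND w.doc_id = d.id;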
{
"msg_contents": "We have a fulltext stuff in the contrib directory.\n\n\n> On Sun, 23 May 1999, Andy Lewis wrote:\n> \n> > Whats the possibility of having full text searches added to text fields?\n> > \n> > That would be awesome.....\n> \n> Unfortunately, full text indexing is a different issue than the kind of\n> indexing performed on table columns, and if you want to do any kind of\n> efficient full text searching, you have to index the individual words in\n> the text or it'd be so slow as to be hardly useful (especially if you're\n> talking about 600,000 records with 2K of text in each text field).\n> \n> Excalibur, for instance, creates its own internal indexing for full text\n> records, but uses an underlying SQL database for regular fielded data, and\n> when you design your database, you have to make the distinction about what\n> kind of indexing you want, stop words (words you don't want indexed, like\n> 'the' and 'of'), and the way certain fields can or will be searched.\n> \n> Brett W. McCoy \n> http://www.lan2wan.com/~bmccoy/\n> -----------------------------------------------------------------------\n> The six great gifts of an Irish girl are beauty, soft voice, sweet speech,\n> wisdom, needlework, and chastity.\n> \t\t-- Theodore Roosevelt, 1907\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 May 1999 23:30:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "On Sun, 23 May 1999, Bruce Momjian wrote:\n\n> We have a fulltext stuff in the contrib directory.\n\nWhat's it called? I only see some tcl frontend stuff. Despite my\npessimism form the prior message, I am interested in a full text retrieval\nengine.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy/\n-----------------------------------------------------------------------\nLonely is a man without love.\n\t\t-- Englebert Humperdinck\n\n",
"msg_date": "Mon, 24 May 1999 07:01:36 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "Its not really, really explanitory.....\n\nOn Mon, 24 May 1999, Brett W. McCoy wrote:\n\n> On Sun, 23 May 1999, Bruce Momjian wrote:\n> \n> > We have a fulltext stuff in the contrib directory.\n> \n> What's it called? I only see some tcl frontend stuff. Despite my\n> pessimism form the prior message, I am interested in a full text retrieval\n> engine.\n> \n> Brett W. McCoy \n> http://www.lan2wan.com/~bmccoy/\n> -----------------------------------------------------------------------\n> Lonely is a man without love.\n> \t\t-- Englebert Humperdinck\n> \n\n",
"msg_date": "Mon, 24 May 1999 08:05:22 -0500 (CDT)",
"msg_from": "Andy Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "He means the contrib directory in the source tree not the one on the ftp site.\n\nOn Mon, 24 May 1999, [email protected] wrote:\n> On Mon, 24 May 1999, Bruce Momjian wrote:\n> \n> > > What's it called? I only see some tcl frontend stuff. Despite my\n> > > pessimism form the prior message, I am interested in a full text retrieval\n> > > engine.\n> > \n> > It is called contrib/fulltextindex. Does someone want to suggest a\n> > better name?\n> \n> I didn't see it on the ftp site. I only saw pgv and tcldb in the contrib \n> directory.\n> \n> Brett W. McCoy \n> http://www.lan2wan.com/~bmccoy\n> -----------------------------------------------------------------------\n> Cabbage, n.:\n> \tA familiar kitchen-garden vegetable about as large and wise as\n> a man's head.\n> \t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n--\n------------------------------------------------------------------------------\n\nLincoln Spiteri\n\nManufacturing Systems\nSTMicroelectronics, Malta\n\ne-mail: [email protected]\n\n------------------------------------------------------------------------------\n",
"msg_date": "Mon, 24 May 1999 15:16:39 +0200",
"msg_from": "Lincoln Spiteri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "> On Sun, 23 May 1999, Bruce Momjian wrote:\n> \n> > We have a fulltext stuff in the contrib directory.\n> \n> What's it called? I only see some tcl frontend stuff. Despite my\n> pessimism form the prior message, I am interested in a full text retrieval\n> engine.\n\nIt is called contrib/fulltextindex. Does someone want to suggest a\nbetter name?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 09:45:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "Going through the documentation I can only find little about outer\njoins. One statement is in the Changes doc about including syntax for\nouter joins, but there doesn't seem to be implemented any code after\nthat.\n\nIs it true that there's no outer joins yet? Any plans? Btw. what is the\nsyntax for outer joins. I know only Oracle's (+) operator.\n \n\n",
"msg_date": "Mon, 24 May 1999 15:57:13 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Outer joins "
},
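For reference, a hedged sketch of the SQL92 form being asked about (table names invented; PostgreSQL did not yet accept this syntax at the time), next to the Oracle notation mentioned:

    -- SQL92:
    SELECT e.name, d.dname
    FROM emp e LEFT OUTER JOIN dept d ON e.deptno = d.deptno;

    -- Oracle marks the optional side with (+) in the WHERE clause instead:
    -- SELECT e.name, d.dname FROM emp e, dept d WHERE e.deptno = d.deptno (+);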
{
"msg_contents": "> On Mon, 24 May 1999, Bruce Momjian wrote:\n> \n> > > What's it called? I only see some tcl frontend stuff. Despite my\n> > > pessimism form the prior message, I am interested in a full text retrieval\n> > > engine.\n> > \n> > It is called contrib/fulltextindex. Does someone want to suggest a\n> > better name?\n> \n> I didn't see it on the ftp site. I only saw pgv and tcldb in the contrib \n> directory.\n\nSorry, I meant in the distribution's contrib directory, not the ftp\nsite. I didn't even know we had a contrib directory on the ftp site.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:32:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "> On Mon, 24 May 1999, Bruce Momjian wrote:\n> \n> > > What's it called? I only see some tcl frontend stuff. Despite my\n> > > pessimism form the prior message, I am interested in a full text retrieval\n> > > engine.\n> > \n> > It is called contrib/fulltextindex. Does someone want to suggest a\n> > better name?\n> \n> I didn't see it on the ftp site. I only saw pgv and tcldb in the contrib \n> directory.\n\nAh, here's the problem. Bruce means the contrib directory in the\nsource distribution, which is at the top level, right beside src\n(were the core of postgresql lives). It's pgsql/contrib, if you\ndo a CVS checkout. I'm not sure where it ends up in various binary\npackages. (/usr/lib/postgresql/contrib on my Debian Linux install,\nfor example, has parts of it,m but not the whole thing)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 24 May 1999 09:42:53 -0500 (CDT)",
"msg_from": "[email protected] (Ross J. Reedstrom)",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "On Mon, 24 May 1999, Bruce Momjian wrote:\n\n> > What's it called? I only see some tcl frontend stuff. Despite my\n> > pessimism form the prior message, I am interested in a full text retrieval\n> > engine.\n> \n> It is called contrib/fulltextindex. Does someone want to suggest a\n> better name?\n\nI didn't see it on the ftp site. I only saw pgv and tcldb in the contrib \ndirectory.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\nCabbage, n.:\n\tA familiar kitchen-garden vegetable about as large and wise as\na man's head.\n\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n",
"msg_date": "Mon, 24 May 1999 13:34:59 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "On Mon, 24 May 1999, Bruce Momjian wrote:\n\n> Sorry, I meant in the distribution's contrib directory, not the ftp\n> site. I didn't even know we had a contrib directory on the ftp site.\n\nWel, you do now! Thanks! I'll check it out!\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\nOnce, adv.:\n\tEnough.\n\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n",
"msg_date": "Mon, 24 May 1999 14:31:36 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "Hey, found the module. Looks pretty interesting -- even has the \ncapability of ignoring stopwords. This is just what I am looking for!\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\n\"What's the use of a good quotation if you can't change it?\"\n\t\t-- Dr. Who\n\n",
"msg_date": "Mon, 24 May 1999 14:39:59 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "Sorry, I got this in my mail this morning,\n\n Due to a problem in the European Internet Gateway yesterday (solved this\n morning by xxxxx ), outgoing internet emails will suffer huge delays.\n \n The queue should be fully processed during next european night.\n\nRegards\n\nLincoln\n\nOn Tue, 25 May 1999, [email protected] wrote:\n> On Mon, 24 May 1999, Lincoln Spiteri wrote:\n> \n> > He means the contrib directory in the source tree not the one on the ftp site.\n> \n> Yeah, we got that cleared up yesterday.\n> \n> Brett W. McCoy \n> http://www.lan2wan.com/~bmccoy\n> -----------------------------------------------------------------------\n> \"Now this is a totally brain damaged algorithm. Gag me with a\n> smurfette.\"\n> \t\t-- P. Buhr, Computer Science 354\n--\n------------------------------------------------------------------------------\n\nLincoln Spiteri\n\nManufacturing Systems\nSTMicroelectronics, Malta\n\ne-mail: [email protected]\n\n------------------------------------------------------------------------------\n",
"msg_date": "Tue, 25 May 1999 17:57:47 +0200",
"msg_from": "Lincoln Spiteri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
},
{
"msg_contents": "On Mon, 24 May 1999, Lincoln Spiteri wrote:\n\n> He means the contrib directory in the source tree not the one on the ftp site.\n\nYeah, we got that cleared up yesterday.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\n\"Now this is a totally brain damaged algorithm. Gag me with a\nsmurfette.\"\n\t\t-- P. Buhr, Computer Science 354\n\n",
"msg_date": "Tue, 25 May 1999 15:06:48 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Full Text Searches"
}
] |
[
{
"msg_contents": "CURRENT/pgsql/src/test/regress > gmake runtest\nMULTIBYTE=;export MULTIBYTE; \\\n/bin/sh ./regress.sh freebsd 2>&1 | tee regress.out\n=============== Notes... =================\npostmaster must already be running for the regression tests to succeed.\nThe time zone is now set to PST8PDT explicitly by this regression test\n client frontend. Please report any apparent problems to\n [email protected]\nSee regress/README for more information.\n\n=============== destroying old regression database... =================\n=============== creating new regression database... =================\n=============== installing PL/pgSQL... =================\ncreatelang: not found\n^^^^^^^^^^^^^^^^^^^^^\ncreatelang failed\nACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n\nVadim\n",
"msg_date": "Sun, 23 May 1999 20:04:16 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "createlang - ?"
},
{
"msg_contents": ">\n> CURRENT/pgsql/src/test/regress > gmake runtest\n> MULTIBYTE=;export MULTIBYTE; \\\n> /bin/sh ./regress.sh freebsd 2>&1 | tee regress.out\n> =============== Notes... =================\n> postmaster must already be running for the regression tests to succeed.\n> The time zone is now set to PST8PDT explicitly by this regression test\n> client frontend. Please report any apparent problems to\n> [email protected]\n> See regress/README for more information.\n>\n> =============== destroying old regression database... =================\n> =============== creating new regression database... =================\n> =============== installing PL/pgSQL... =================\n> createlang: not found\n> ^^^^^^^^^^^^^^^^^^^^^\n> createlang failed\n> ACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n\n Did you do a complete \"make clean install\" of the bin tree as\n well? It's a new utility script since I've taken out the\n installation of PL/pgSQL from initdb.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 09:18:31 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createlang - ?"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> > createlang: not found\n> > ^^^^^^^^^^^^^^^^^^^^^\n> > createlang failed\n> > ACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n> \n> Did you do a complete \"make clean install\" of the bin tree as\n> well? It's a new utility script since I've taken out the\n> installation of PL/pgSQL from initdb.\n\nI don't see createlang dir in bin tree after cvs update !\nSomething wrong in my CVS setup ?\n\nVadim\nP.S. BTW, shouldn't PL/pgSQL be installed by default ?\n",
"msg_date": "Tue, 25 May 1999 15:27:19 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] createlang - ?"
},
{
"msg_contents": ">\n> Jan Wieck wrote:\n> >\n> > > createlang: not found\n> > > ^^^^^^^^^^^^^^^^^^^^^\n> > > createlang failed\n> > > ACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n> >\n> > Did you do a complete \"make clean install\" of the bin tree as\n> > well? It's a new utility script since I've taken out the\n> > installation of PL/pgSQL from initdb.\n>\n> I don't see createlang dir in bin tree after cvs update !\n> Something wrong in my CVS setup ?\n\n Seems so:\n\ncvs server: Logging createlang\n\nRCS file: /usr/local/cvsroot/pgsql/src/bin/createlang/Makefile,v\nWorking file: createlang/Makefile\nhead: 1.1\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\nkeyword substitution: kv\ntotal revisions: 1; selected revisions: 1\ndescription:\n----------------------------\nrevision 1.1\ndate: 1999/05/20 16:50:00; author: wieck; state: Exp;\nRemoved the automatic installation of built procedural languages\nfrom initdb again.\n\nAdded two new commands, createlang and destroylang to bin. These\nhopefully end this damned mklang.sql discussion.\n\nJan\n=============================================================================\n\nRCS file: /usr/local/cvsroot/pgsql/src/bin/createlang/createlang.sh,v\nWorking file: createlang/createlang.sh\nhead: 1.1\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\nkeyword substitution: kv\ntotal revisions: 1; selected revisions: 1\ndescription:\n----------------------------\nrevision 1.1\ndate: 1999/05/20 16:50:00; author: wieck; state: Exp;\nRemoved the automatic installation of built procedural languages\nfrom initdb again.\n\nAdded two new commands, createlang and destroylang to bin. These\nhopefully end this damned mklang.sql discussion.\n\nJan\n=============================================================================\n\n>\n> Vadim\n> P.S. BTW, shouldn't PL/pgSQL be installed by default ?\n>\n\n Some want (me too) - some don't want it to be installed by\n default. But I can live with installing it after initdb into\n template1. In that case any subsequent createdb automagically\n installs it in the new databases.\n\n It's IMHO the most flexible setup that meets most needs.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 09:44:14 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createlang - ?"
},
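A hedged sketch of the workflow Jan describes, assuming the new script takes the language name followed by the database name (check the script's usage message for the exact form):

    $ createlang plpgsql template1   # install once into template1
    $ createdb mydb                  # new databases now inherit PL/pgSQL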
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> I don't see createlang dir in bin tree after cvs update !\n\nYou won't until you run cvs update with -d switch (which allows it\nto create new subdirectories).\n\nI think cvs' default behavior for subdirectories is pretty brain-dead,\nso I keep the following entries in ~/.cvsrc:\n\nupdate -d -P\ncheckout -P\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 09:54:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createlang - ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > I don't see createlang dir in bin tree after cvs update !\n> \n> You won't until you run cvs update with -d switch (which allows it\n> to create new subdirectories).\n\nThanks! Got it now.\n\nVadim\n",
"msg_date": "Tue, 25 May 1999 23:05:36 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] createlang - ?"
}
] |
[
{
"msg_contents": "\nhacking database?!!?!!\n\n???\n\n\n",
"msg_date": "Sun, 23 May 1999 19:44:44 +0300",
"msg_from": "\" eV!L (John)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Could anyone tell me what tis news-group is about?"
}
] |
[
{
"msg_contents": "I have removed the parameter at Tom Lane's request.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 May 1999 14:50:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wisconsin -B parameter"
}
] |
[
{
"msg_contents": "I have committed some fixes that prevent resjunk targets from being\nassigned to output columns in an INSERT/SELECT. This partially fixes\nthe problem Michael Davis reported a few weeks ago. However, there's\nstill a bug with confusion about column names. Given\n\ncreate table foo (a int4, b int4);\nCREATE\ncreate table bar (c int4, d int4);\nCREATE\n\nwe can do\n\nselect c, sum(d) from bar group by c;\n\nbut not\n\ninsert into foo select c, sum(d) from bar group by c;\nERROR: Illegal use of aggregates or non-group column in target list\n\nThe problem here is that the target expressions of the select have\nbeen relabeled with foo's column names before GROUP BY is processed.\nIf you refer to them by the output column names then it works:\n\ninsert into foo select c, sum(d) from bar group by a;\nINSERT 279412 1\n\nYou can think of the query as having been rewritten to\n\ninsert into foo select c AS a, sum(d) AS b from bar group by a;\n\nin which case the behavior makes some kind of sense. However,\nI think that this behavior is neither intuitive nor in conformance\nwith SQL92's scoping rules. As far as I can tell, the definition\nof the result of \"select c, sum(d) from bar group by c\" is independent\nof whether it is inside an INSERT or not.\n\nFixing this appears to require a substantial rearrangement of code\ninside the parser, which I'm real hesitant to do with only a week to go\ntill 6.5 release. I propose leaving this issue on the \"to fix\" list for\n6.6. Comments?\n\nBTW, although Davis claimed this was broken sometime during April, 6.4.2\nshows the same bugs ... I think it's been wrong for a long time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 17:46:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partial fix for INSERT...SELECT problems"
},
{
"msg_contents": "Tom Lane wrote:\n\n>\n> I have committed some fixes that prevent resjunk targets from being\n> assigned to output columns in an INSERT/SELECT. This partially fixes\n> the problem Michael Davis reported a few weeks ago. However, there's\n> still a bug with confusion about column names. Given\n>\n> create table foo (a int4, b int4);\n> CREATE\n> create table bar (c int4, d int4);\n> CREATE\n>\n> we can do\n>\n> select c, sum(d) from bar group by c;\n>\n> but not\n>\n> insert into foo select c, sum(d) from bar group by c;\n> ERROR: Illegal use of aggregates or non-group column in target list\n>\n> The problem here is that the target expressions of the select have\n> been relabeled with foo's column names before GROUP BY is processed.\n> If you refer to them by the output column names then it works:\n>\n> insert into foo select c, sum(d) from bar group by a;\n> INSERT 279412 1\n>\n> You can think of the query as having been rewritten to\n>\n> insert into foo select c AS a, sum(d) AS b from bar group by a;\n>\n> in which case the behavior makes some kind of sense. However,\n> I think that this behavior is neither intuitive nor in conformance\n> with SQL92's scoping rules. As far as I can tell, the definition\n> of the result of \"select c, sum(d) from bar group by c\" is independent\n> of whether it is inside an INSERT or not.\n>\n> Fixing this appears to require a substantial rearrangement of code\n> inside the parser, which I'm real hesitant to do with only a week to go\n> till 6.5 release. I propose leaving this issue on the \"to fix\" list for\n> 6.6. Comments?\n\n Does it really require that substantial rearrangement? Looks\n to me that the renaming of the target columns is only done a\n little too early. Could the per Query unique ID\n Resno.resgroupref <-> GroupClause.tleGroupref help here?\n\n I wonder if the renaming of the target columns during parse\n is required at all. I think in the case of an INSERT this is\n done allways in the planner again at preprocess_targetlist().\n\n I agree that changing it that close to release isn't a good\n idea, but we should move this item to the top ten of TODO\n after v6.5.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 11:08:13 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Partial fix for INSERT...SELECT problems"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> Fixing this appears to require a substantial rearrangement of code\n>> inside the parser, which I'm real hesitant to do with only a week to go\n>> till 6.5 release. I propose leaving this issue on the \"to fix\" list for\n>> 6.6. Comments?\n\n> Does it really require that substantial rearrangement? Looks\n> to me that the renaming of the target columns is only done a\n> little too early.\n\nYeah, what I wanted to do was move both renaming and type-coercion\nof target columns down to the end of transformInsertStmt (ditto for\nUPDATE I suppose). However there is a lot of crufty code in that\narea, including some array stuff that I am pretty sure has bugs of\nits own; and the DEFAULT issue needs to be fixed right in that same\nroutine, as well. So I'd rather punt for now and tackle all these\nissues in an unhurried manner after 6.5 release, rather than take a\nrisk of breaking things worse for the release. Most of these bugs\nhave been around for quite a while, so I think we can live with 'em\nfor one more release cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 10:02:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Partial fix for INSERT...SELECT problems "
},
{
"msg_contents": "Tom, is this fixed?\n\n> I have committed some fixes that prevent resjunk targets from being\n> assigned to output columns in an INSERT/SELECT. This partially fixes\n> the problem Michael Davis reported a few weeks ago. However, there's\n> still a bug with confusion about column names. Given\n> \n> create table foo (a int4, b int4);\n> CREATE\n> create table bar (c int4, d int4);\n> CREATE\n> \n> we can do\n> \n> select c, sum(d) from bar group by c;\n> \n> but not\n> \n> insert into foo select c, sum(d) from bar group by c;\n> ERROR: Illegal use of aggregates or non-group column in target list\n> \n> The problem here is that the target expressions of the select have\n> been relabeled with foo's column names before GROUP BY is processed.\n> If you refer to them by the output column names then it works:\n> \n> insert into foo select c, sum(d) from bar group by a;\n> INSERT 279412 1\n> \n> You can think of the query as having been rewritten to\n> \n> insert into foo select c AS a, sum(d) AS b from bar group by a;\n> \n> in which case the behavior makes some kind of sense. However,\n> I think that this behavior is neither intuitive nor in conformance\n> with SQL92's scoping rules. As far as I can tell, the definition\n> of the result of \"select c, sum(d) from bar group by c\" is independent\n> of whether it is inside an INSERT or not.\n> \n> Fixing this appears to require a substantial rearrangement of code\n> inside the parser, which I'm real hesitant to do with only a week to go\n> till 6.5 release. I propose leaving this issue on the \"to fix\" list for\n> 6.6. Comments?\n> \n> BTW, although Davis claimed this was broken sometime during April, 6.4.2\n> shows the same bugs ... I think it's been wrong for a long time.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 15:40:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Partial fix for INSERT...SELECT problems"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, is this fixed?\n\nYes, for 6.6.\n\n>> I have committed some fixes that prevent resjunk targets from being\n>> assigned to output columns in an INSERT/SELECT. This partially fixes\n>> the problem Michael Davis reported a few weeks ago. However, there's\n>> still a bug with confusion about column names. Given\n>> \n>> create table foo (a int4, b int4);\n>> CREATE\n>> create table bar (c int4, d int4);\n>> CREATE\n>> \n>> we can do\n>> \n>> select c, sum(d) from bar group by c;\n>> \n>> but not\n>> \n>> insert into foo select c, sum(d) from bar group by c;\n>> ERROR: Illegal use of aggregates or non-group column in target list\n>> \n>> The problem here is that the target expressions of the select have\n>> been relabeled with foo's column names before GROUP BY is processed.\n>> If you refer to them by the output column names then it works:\n>> \n>> insert into foo select c, sum(d) from bar group by a;\n>> INSERT 279412 1\n>> \n>> You can think of the query as having been rewritten to\n>> \n>> insert into foo select c AS a, sum(d) AS b from bar group by a;\n>> \n>> in which case the behavior makes some kind of sense. However,\n>> I think that this behavior is neither intuitive nor in conformance\n>> with SQL92's scoping rules. As far as I can tell, the definition\n>> of the result of \"select c, sum(d) from bar group by c\" is independent\n>> of whether it is inside an INSERT or not.\n>> \n>> Fixing this appears to require a substantial rearrangement of code\n>> inside the parser, which I'm real hesitant to do with only a week to go\n>> till 6.5 release. I propose leaving this issue on the \"to fix\" list for\n>> 6.6. Comments?\n>> \n>> BTW, although Davis claimed this was broken sometime during April, 6.4.2\n>> shows the same bugs ... I think it's been wrong for a long time.\n",
"msg_date": "Tue, 21 Sep 1999 21:11:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Partial fix for INSERT...SELECT problems "
}
] |
[
{
"msg_contents": "Jan,\n\nHave you any ideas on this?\n\nWe get a rule output by pg_dump like :-\n\nCREATE RULE \"_RETsongs\" AS \n ON SELECT TO \"songs\" \n WHERE DO INSTEAD \n SELECT \"t\".\"artist\", \"t\".\"song\", \"t\".\"trackno\", \"d\".\"cdname\" \n FROM \"disks\" \"d\", \"tracks\" \"t\" \n WHERE \"d\".\"diskid\" = \"t\".\"diskid\"; \n\nfrom a view defined like so:-\n\nCREATE VIEW songs AS\n SELECT t.artist, t.song, t.trackno, d.cdname\n FROM disks d, tracks t\n WHERE d.diskid = t.diskid;\n\nNote the WHERE keyword in line 3 of the rule define.\n\n>From \"./src/backend/utils/adt/ruleutils.c\" line 662 of 1814\n\n /* If the rule has an event qualification, add it */\n if (ev_qual == NULL)\n ev_qual = \"\";\n if (strlen(ev_qual) > 0)\n {\n Node *qual;\n Query *query;\n QryHier qh; \n.\n.\n strcat(buf, \" WHERE \");\n strcat(buf, get_rule_expr(&qh, 0, qual, TRUE));\n }\n\n strcat(buf, \" DO \");\n\n /* The INSTEAD keyword (if so) */\n if (is_instead)\n strcat(buf, \"INSTEAD \"); \n\nWe put the WHERE in if strlen(ev_qual) > 0\n\nI've not yet followed this back any further.\n\nKeith.\n \n\n------------ Begin Forwarded Message -------------\n\nX-Authentication-Warning: hub.org: majordom set sender to \[email protected] using -f\nDate: Fri, 21 May 1999 22:34:50 +0100 (BST)\nFrom: Keith Parks <[email protected]>\nSubject: Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)\nTo: [email protected], [email protected]\nMIME-Version: 1.0\nContent-MD5: 34XqWKKsmVlyonlE1gsMzw==\n\nOleg Bartunov < [email protected]>\n> After dumping (by pg_dump) and restoring views becomes a tables\n> \n\nThe problem is that views are dumped with anm extraneous \"WHERE\"\n\n> ............................\n> QUERY: COPY \"t1\" FROM stdin;\n> CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" FROM \n\"t1\";\n> QUERY: CREATE RULE \"_RETv1\" AS ON SELECT TO \"v1\" WHERE DO INSTEAD SELECT \"a\" \nFROM \"t1\";\n\n...................................................++++++\n\n> ERROR: parser: parse error at or near \"do\"\n> EOF\n\nWhich causes this error and the rule (View) is not Created.\n\nI don't know how the where clause gets in there but if you\nedit the dump before restoring all is OK.\n\nKeith.\n\n\n\n------------- End Forwarded Message -------------\n\n\n",
"msg_date": "Mon, 24 May 1999 14:21:07 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)"
},
{
"msg_contents": ">\n> Jan,\n>\n> Have you any ideas on this?\n\n Yepp\n\n>\n> We get a rule output by pg_dump like :-\n>\n> CREATE RULE \"_RETsongs\" AS\n> ON SELECT TO \"songs\"\n> WHERE DO INSTEAD\n> SELECT \"t\".\"artist\", \"t\".\"song\", \"t\".\"trackno\", \"d\".\"cdname\"\n> FROM \"disks\" \"d\", \"tracks\" \"t\"\n> WHERE \"d\".\"diskid\" = \"t\".\"diskid\";\n>\n> from a view defined like so:-\n>\n> CREATE VIEW songs AS\n> SELECT t.artist, t.song, t.trackno, d.cdname\n> FROM disks d, tracks t\n> WHERE d.diskid = t.diskid;\n>\n> Note the WHERE keyword in line 3 of the rule define.\n>\n> >From \"./src/backend/utils/adt/ruleutils.c\" line 662 of 1814\n>\n> /* If the rule has an event qualification, add it */\n> if (ev_qual == NULL)\n> ev_qual = \"\";\n> if (strlen(ev_qual) > 0)\n> {\n> Node *qual;\n> Query *query;\n> QryHier qh;\n> .\n> .\n\n That's exactly the location AFAICS. The problem was\n introduced when the storage of rules changed in that the\n event qualification is now stored as \"<>\" (the output of the\n node print functions for NULL) instead of a NULL attribute.\n\n I'll fix it soon - thanks.\n\n>\n> We put the WHERE in if strlen(ev_qual) > 0\n>\n> I've not yet followed this back any further.\n>\n> Keith.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 10:14:22 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: views doesn't survives after pg_dump (fwd)"
}
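For comparison, the dump output once the spurious qualification is suppressed -- the WHERE keyword before DO simply disappears when the rule has no event qualification:

    CREATE RULE "_RETsongs" AS
        ON SELECT TO "songs" DO INSTEAD
        SELECT "t"."artist", "t"."song", "t"."trackno", "d"."cdname"
        FROM "disks" "d", "tracks" "t"
        WHERE "d"."diskid" = "t"."diskid";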
] |
[
{
"msg_contents": "Here is the updated list of changes in 6.5. I will roll them into the\nsgml and HISTORY files, and put it up on the web site soon. Note the\nmarker in the list that shows the new items since I last posted this\nlist in March. Please review and suggest changes. Thanks.\n\n---------------------------------------------------------------------------\n\n POSTGRESQL 6.5\n\nThis release marks the development team's final mastery of the source\ncode we inherited from Berkeley. You will see we are now easily adding\nmajor features, thanks to the increasing size and experience of our\nworld-wide development team:\n\nMulti-version concurrency control(MVCC): This removes our old\ntable-level locking, and replaces it with a locking system that is\nsuperior to most commercial database systems. In a traditional system,\neach row that is modified is locked until committed, preventing reads by\nother users. MVCC uses the natural multi-version nature of PostgreSQL\nto allow readers to continue reading consistent data during writer\nactivity. Writers continue to use the compact pg_log transaction\nsystem. This is all preformed without having to allocate a lock for\nevery row like traditional database systems. So, basically, we no\nlonger have table-level locking, we have something better than row-level\nlocking.\n\nNumeric data type: We now have a true numeric data type, with\nuser-specified precision.\n\nTemporary tables: Temporary tables are guaranteed to have unique names\nwithin a database session, and are destroyed on session exit.\n\nNew SQL features: We now have CASE, INTERSECT, and EXCEPT statement\nsupport. We have new LIMIT/OFFSET, SET TRANSACTION ISOLATION LEVEL,\nSELECT ... FOR UPDATE, and an improved LOCK command.\n\nSpeedups: We continue to speed up PostgreSQL, thanks to the variety of\ntalents within our team. We have sped up memory allocation,\noptimization, table joins, and row transfers routines.\n\nOther: We continue to expand our port list, this time including\nWin32/NT. 
Most interfaces have new versions, and existing functionality\nhas been improved.\n\nPlease look through the list to see the full extent of our changes in\nthis PostgreSQL 6.5 release.\n\n---------------------------------------------------------------------------\n\nAdd \"vacuumdb\" utility\nFix text<->float8 and text<->float4 conversion functions(Thomas)\nFix for creating tables with mixed-case constraints(Billy)\nSpeed up libpq by allocating memory better(Tom)\nEXPLAIN all indices used(Thomas)\nImprove port matching(Tom)\nPortability fixes for SunOS\nChange exp()/pow() behavior to generate error on underflow/overflow(Jan)\nImplement CASE expression(Thomas)\nFix bug in pg_dump -z\nNew pg_dump table output format(Constantin)\nAdd string min()/max() functions(Thomas)\nExtend new type coersion techniques to aggregates(Thomas)\nNew moddatetime contrib(Terry)\nUpdate to pgaccess(Constantin)\nFix problems in the muti-byte code(Tatsuo)\nFix case where executor evaluates functions twice(Tatsuo)\nMemory overrun cleanups(Tatsuo)\nFix for lo_import crash(Tatsuo)\nAdjust handling of data type names to suppress double quotes(Thomas)\nAdd routines for single-byte \"char\" type(Thomas)\nImproved substr() function(Thomas)\nUse type coersion for matching columns and DEFAULT(Thomas)\nAdd CASE statement support(Thomas)\nImproved multi-byte handling(Tatsuo)\nAdd NT/Win32 backend port and enable dynamic loading(Magnus and Daniel Horak)\nMulti-version concurrency control/MVCC(Vadim)\nNew Serialized mode(Vadim)\nFix fix for tables over 2gigs(Peter)\nUpgrade to Pygress(D'Arcy)\nNew SET TRANSACTION ISOLATION LEVEL(Vadim)\nNew LOCK TABLE IN ... MODE(Vadim)\nNew port to Cobalt Qube(Mips) running Linux(Tatsuo)\nFix deadlock so it only checks once after one second of sleep(Bruce)\nPort to NetBSD/m68k(Mr. Mutsuki Nakajima)\nPort to NetBSD/sun3(Mr. 
Mutsuki Nakajima)\nPort to NetBSD/macppc(Toshimi Aoki)\nUpdate odbc version\nNew NUMERIC data type(Jan)\nNew SELECT FOR UPDATE(Vadim)\nHandle \"NaN\" and \"Infinity\" for input values(Jan)\nBetter date/year handling(Thomas)\nImproved handling of backend connections(Magnus)\nNew options ELOG_TIMESTAMPS and USE_SYSLOG options for log files(Massimo)\nNew TCL_ARRAYS option(Massimo)\nNew INTERSECT and EXCEPT(Stefan)\nNew pg_index.indisprimary for primary key tracking(D'Arcy)\nNew pg_dump option to allow dropping of tables before creation(Brook)\nFixes for aggregates and PL/pgsql(Hiroshi)\nSpeedup of row output routines(Tom)\nJDBC improvements(Peter)\nFix for subquery crash(Vadim)\nNew READ COMMITTED isolation level(Vadim)\nNew TEMP tables/indexes(Bruce)\nPrevent sorting if result is already sorted(Jan)\nFix for libpq function PQfnumber and case-insensitive names(Bahman Rafatjoo)\nFix for large object write-into-middle, remove extra block(Tatsuo)\nNew memory allocation optimization(Jan)\nAllow psql to do \\p\\g(Bruce)\nAllow multiple rule actions(Jan)\nFix for pg_dump -d or -D and quote special characters in INSERT\nAdded LIMIT/OFFSET functionality(Jan)\nRemoved CURRENT keyword for rule queries(Jan)\nImprove optimizer when joining a large number of tables(Bruce)\nAddition of Stefan Simkovics' Master's Thesis to docs(Stefan)\nImproved int8 support(Thomas, Marc)\nNew routines to convert between int8 and text/varchar types(Thomas)\nNew bushy plans, where meta-tables are joined(Bruce)\nEnable right-hand queries by default(Bruce)\nAllow reliable maximum number of backends to be set at configure time\n (--with-maxbackends and postmaster switch (-N backends))(Tom)\nRepair serious problems with dynahash(Tom)\nFix INET/CIDR portability problems\nFix problem with selectivity error in ALTER TABLE ADD COLUMN(Bruce)\nFix executor so mergejoin of different column types works(Tom)\nGEQO default now 11 tables because of optimizer speedups(Tom)\nFix for Alpha OR selectivity bug\nFix OR index selectivity problem(Bruce)\nAllow btree/hash index on the int8 type(Ryan)\nAllow Var = NULL for MS-SQL portability(Michael)\nFix so \\d shows proper length for char()/varchar()(Ryan)\nFix tutorial code(Clark)\nImprove destroyuser checking(Oliver)\nFix for Kerberos(Rodney McDuff)\nModify contrib check_primary_key() so either \"automatic\" or \"dependent\"(Anand)\nAllow psql \\d on a view show query(Ryan)\nSpeedup for LIKE(Bruce)\nFix for dropping database while dirty buffers(Bruce)\nFix so sequence nextval() can be case-sensitive(Bruce)\nFix for tcl/tk configuration(Vince)\nEcpg fixes/features, see src/interfaces/ecpg/ChangeLog file(Michael)\nJdbc fixes/features, see src/interfaces/jdbc/CHANGELOG(Peter)\n\n--new since 1999/03/15-----------------------------------------------\n\nHave psql \\d on a view show the query\nFix !!= operator\nDrop buffers before destroying database files(Bruce)\nAllow sequence nextval actions to be case-sensitive(Bruce)\ntcl/tk configure improvements\nMake % operator have precedence like /.\nAdd new postgres -O option to allow system table structure changes(Bruce)\nFix optimizer indexing not working for negative numbers(Bruce)\nFix for memory leak in executor with fjIsNull\nAllow WHERE specification of NULL = Var(Bruce)\nFix for aggregate memory leaks(Erik Riedel)\nAllow username containing a dash grant permissions\nCleanup of NULL in inet types\nNT dynamic loading now works(Daniel Horak)\nUpdate contrib/pginterface/findoidjoins script(Tom)\nClean up system�table bugs(Tom)\nMajor speedup in vacuum of 
deleted rows(Vadim) \nAllow non-SQL functions to run different versions based on arguments(Tom)\nFix problems of PAGER and \\? command(Masaaki Sakaida)\nAdd -E option that shows actual queries sent by \\dt and friends(Masaaki Sakaida)\nAdd version number in startup banners for psql(Masaaki Sakaida)\nReduce default multi-segment file size limit to 1GB(Peter)\nNew contrib/vacuumlo removes large objects not referenced(Peter)\nFix for dumping of CREATE OPERATOR(Tom)\nFix for backward scanning of cursors(Hiroshi Inoue)\nAdd ARM32 support(Andrew McMurry)\nDate/time fixes(Thomas)\nFix for COPY FROM STDIN when using \\i(Tom)\nNew initialization for table sizes so non-vacuumed tables perform better(Tom)\nImprove error messages when a connection is rejected(Tom)\nFix for subselect is compared inside an expression(Jan)\nFixes for HPUX 11 and Unixware\nFix handling of error reporting while returning rows(Tom)\nFix problems with reference to array types(Tom,Jan)\nPrevent UPDATE SET oid(Jan)\nBetter optimization statistics for system table access(Tom)\nSupport for arrays of char() and varchar() fields(Massimo)\nBetter handling of non-default block sizes(Massimo)\nFix pg_dump so -t option can handle case-sensitive tablenames\nOverhaul of hash code to increase reliability and performance(Tom)\nImprove file handling to be more uniform, prevent file descriptor leak(Tom)\nLarge object fixes for overlapping writes and memory consumption(Tatsuo)\nUpdate to PyGreSQL 2.4(D'Arcy)\nUpdate to JDBC(Peter)\nChanged debug options so -d4 and -d5 produce different node displays(Jan)\nNew pg_options: pretty_plan, pretty_parse, pretty_rewritten(Jan)\nFixes for GROUP BY in special cases(Tom, Jan)\nFix for memory leak in failed queries(Tom)\nDEFAULT now supports mixed-case identifiers(Tom)\nFix for multi-segment uses of DROP/RENAME table, indexes(Ole Gjerde)\nImprove GEQO optimizer memory consumption(Tom)\nUNION now suppports ORDER BY of columns not in target list(Tom)\nNew install commands for plpgsql(Jan)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:11:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Updated 6.5 HISTORY"
},
{
"msg_contents": "> Here is the updated list of changes in 6.5. I will roll them into the\n> sgml and HISTORY files, and put it up on the web site soon. Note the\n> marker in the list that shows the new items since I last posted this\n> list in March. Please review and suggest changes. Thanks.\n\nI'm pretty sure that I can generate HISTORY from the sgml, just before\nrelease. Also, it looks like we could freeze the release notes pretty\nsoon, since we know what *features* are going into this release. OK?\n\nWe should mention the NetBSD/arm32 port in the introduction.\n\nAlso, let's move bug fix notes to below feature notes...\n\n<snip>\n\n> EXPLAIN all indices used(Thomas)\n\nNot me, must be someone else...\n\n> Implement CASE expression(Thomas)\n\nImplement CASE, COALESCE, NULLIF expressions\n\n> Add CASE statement support(Thomas)\n\nThis is a duplicate mention of CASE...\n\n> Upgrade to Pygress(D'Arcy)\n\nPyGres\n\n> Update odbc version\n\nUpdate ODBC driver(Byron)\n\n> Better date/year handling(Thomas)\n\nImprove date/time handling(Thomas)\n\n> Addition of Stefan Simkovics' Master's Thesis to docs(Stefan)\n\nNew intro to SQL from S. Simkovics' Master's Thesis (Stefan, Thomas)\nNew intro to backend processing from \" \" \" \" ...\n\n> Improved int8 support(Thomas, Marc)\n\nImproved int8 support(Ryan, Thomas, Tom)\n\nMarc, did you do something with int8s?? Ryan is Ryan Bradetich, if he\ndoesn't show up yet in the contributor's list...\n\n> Allow btree/hash index on the int8 type(Ryan)\n\nConsolidate this with above? Ryan's contribution was probably the\nbiggest feature for int8 support in this release.\n\n> --new since 1999/03/15-----------------------------------------------\n> Date/time fixes(Thomas)\n\nDuplicate of earlier entries.\n\n> Update to PyGreSQL 2.4(D'Arcy)\n\nDuplicate (though you may want to keep this one instead)\n\n> Update to JDBC(Peter)\n\nDuplicate.\n\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 24 May 1999 15:08:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated 6.5 HISTORY"
},
{
"msg_contents": "1.\nToday I tried my test script (many tables joining) and got a crash.\nIt works several days before. CPU usage shows only 5% loading during this test.\nwhat postgres is doing ? Hmm, just rerun the test and it works :-)\nOK, one more again and it crashed. Again - works. Again - crashed, crashed,\ncrashed. Very unstable situation.\n\nI attached perl script to generate date set and sql commands\n\nmkjoindata.pl | psql test\n\n2.\nDoes anybody tried sqlbench, posted by\nEdmund ? I tried several times on Linux box, FreeBSD box with current cvs\nand it never finished. RAM usage during the test was about 11-13 Mb \nand CPU usage was extremly low - 5-9% only !\n\n\n\tRegards,\n\n\t\tOleg\n\n\nselect t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7\n.a as t7,t8.a as t8,t9.a as t9,t10.a as t10,t11.a as t11,t12.a as t12,t13.a as t\n13,t14.a as t14\n from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14\n where t1.a_id = t0.a_t1_id and t2.a_id=t0.a_t2_id and t3.a_id=t0.a_t3_id and t4\n.a_id=t0.a_t4_id and t5.a_id=t0.a_t5_id and t6.a_id=t0.a_t6_id and t7.a_id=t0.a_\nt7_id and t8.a_id=t0.a_t8_id and t9.a_id=t0.a_t9_id and t10.a_id=t0.a_t10_id and\n t11.a_id=t0.a_t11_id and t12.a_id=t0.a_t12_id and t13.a_id=t0.a_t13_id and t14.\na_id=t0.a_t14_id ;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible.\n Terminating.\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Mon, 24 May 1999 19:46:51 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated 6.5 HISTORY"
},
{
"msg_contents": "Thus spake Thomas Lockhart\n> > Upgrade to Pygress(D'Arcy)\n> PyGres\n\nPyGreSQL\n\n> > Update to PyGreSQL 2.4(D'Arcy)\n> \n> Duplicate (though you may want to keep this one instead)\n\nI agree.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 24 May 1999 12:55:47 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated 6.5 HISTORY"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Today I tried my test script (many tables joining) and got a crash.\n> It works several days before. CPU usage shows only 5% loading during\n> this test. what postgres is doing ? Hmm, just rerun the test and it\n> works :-) OK, one more again and it crashed. Again - works. Again -\n> crashed, crashed, crashed. Very unstable situation.\n\nOh dear ... can you provide a backtrace from one of the crashes?\n\n> Does anybody tried sqlbench, posted by\n> Edmund ? I tried several times on Linux box, FreeBSD box with current cvs\n> and it never finished. RAM usage during the test was about 11-13 Mb \n> and CPU usage was extremly low - 5-9% only !\n\nIt ran through to completion for me, but took hours. (Unfortunately\nI only have access to one machine that has 700Mb of free disk space,\nand I mustn't shut off the data collection task that is its primary load.\nSo I'm stuck with taking a long time to try the sqlbench stuff...\nit does seem to work, but I can't say much about performance...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 May 1999 20:26:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated 6.5 HISTORY "
}
] |
[
{
"msg_contents": "SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nMake sure pg_internal.init concurrent generation can't cause unreliability\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\nFix function pointer calls to take Datum args for char and int2 args(ecgs)\n\nDo we want pg_dump -z to be the default?\npg_dump of groups fails\npg_dump -o -D does not work, and can not work currently, generate error?\npsql \\d should show precision\ndumping out sequences should not be counted in pg_dump display\n\nMake psql \\help, man pages, and sgml reflect changes in grammar\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\nFuture TODO items\n-----------------\nMake Serial its own type\nAdd support for & operator\nstore binary-compatible type information in the system somewhere \nadd ability to add comments to system tables using table/colname combination\nprocess const=const parts of OR clause in separate pass\nmake oid use oidin/oidout not int4in/int4out in pg_type.h, make oid use\n\tunsigned int more reliably, pg_atoi()\nCREATE VIEW ignores DISTINCT\nMove LIKE index optimization handling to the optimizer?\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nFix leak for expressions?, aggregates?\nImprove LIMIT processing by using index to limit rows processed\nCLUSTER failure if vacuum has not been performed in a while\nCREATE OPERATOR *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nUpdate reltuples from COPY command\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\nImprove NULL parameter passing into functions\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:31:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> Table with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\n\nIs that an error? From the discussions with Paul Vixie, I think that\nthat is the correct way to output it. Note that you can always use\nhost() to get the full string for the host part at least.\n\n> When creating a table with either type inet or type cidr as a primary,unique\n> key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n\nI guess I'll take a stab at it. I just need to know which of the following\nis true.\n\n 198.68.123.0/24 < 198.68.123.0/27\n 198.68.123.0/24 > 198.68.123.0/27\n\nAlso, is it possible that the current behaviour is what we want? It seems\nto me that if you make a network a primary key, you probably want to prevent\noverlap. What we have does that.\n\n\n> add ability to add comments to system tables using table/colname combination\n\nWhy not just add comments to pg_description?\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 24 May 1999 12:51:20 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > Table with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\n> \n> Is that an error? From the discussions with Paul Vixie, I think that\n> that is the correct way to output it. Note that you can always use\n> host() to get the full string for the host part at least.\n\n\nWell, if you say it OK, that's good enough for me. Item removed. It\njust looked strange, the 00/0. Can you explain why it should look like\nthat. Just curious.\n\n\n> > When creating a table with either type inet or type cidr as a primary,unique\n> > key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n> \n> I guess I'll take a stab at it. I just need to know which of the following\n> is true.\n> \n> 198.68.123.0/24 < 198.68.123.0/27\n> 198.68.123.0/24 > 198.68.123.0/27\n> \n> Also, is it possible that the current behaviour is what we want? It seems\n> to me that if you make a network a primary key, you probably want to prevent\n> overlap. What we have does that.\n\nGood question. If we decide the current behaviour is OK, that is fine\nwith me. Someone know understand this just needs to say so.\n\n> > add ability to add comments to system tables using table/colname combination\n> \n> Why not just add comments to pg_description?\n\nAdding to pg_description requires table creation, then oid retrieval,\nthen inserts into pg_description. At one time, I toyed with the idea of\nmaking this more automatic, but obviously at this point, I will just\nadd it into the TODO list, if is not there already.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 17:20:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Thus spake Bruce Momjian\n> > > Table with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\n> > \n> > Is that an error? From the discussions with Paul Vixie, I think that\n> > that is the correct way to output it. Note that you can always use\n> > host() to get the full string for the host part at least.\n> \n> \n> Well, if you say it OK, that's good enough for me. Item removed. It\n> just looked strange, the 00/0. Can you explain why it should look like\n> that. Just curious.\n\nI'm not sure why \"00/0\" rather than \"0/0\" but basically you don't have\nto show more octets than necessary based on the netmask. For example,\na netmask of 32 bits requires all 4, 24 bits requires 3, 16 needs 2\nand 8 (and less) needs 1. Technically you don't even need 1 octet for\n0 bits I suppose but \"/0\" doesn't make much sense.\n\nThis is all based on my understanding. RFC lawyers, please feel free to\ncorrect me.\n\n> > > When creating a table with either type inet or type cidr as a primary,unique\n> > > key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n> > \n> > I guess I'll take a stab at it. I just need to know which of the following\n> > is true.\n> > \n> > 198.68.123.0/24 < 198.68.123.0/27\n> > 198.68.123.0/24 > 198.68.123.0/27\n> > \n> > Also, is it possible that the current behaviour is what we want? It seems\n> > to me that if you make a network a primary key, you probably want to prevent\n> > overlap. What we have does that.\n> \n> Good question. If we decide the current behaviour is OK, that is fine\n> with me. Someone know understand this just needs to say so.\n\nAnd if not, what is the answer to the above question.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 24 May 1999 23:17:23 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
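
A small sketch of the abbreviation rule D'Arcy describes above: display only as many octets as the netmask length requires, never fewer than one. This illustrates the rule from the discussion, not the actual inet/cidr output code in the backend; the function name octets_needed is an assumption.

    #include <stdio.h>

    /* Octets needed to display a cidr value: 32 bits -> 4, 24 -> 3,
     * 16 -> 2, 8 (and less) -> 1, with a floor of one octet for /0. */
    static int
    octets_needed(int bits)
    {
        int n = (bits + 7) / 8;
        return n > 0 ? n : 1;
    }

    int
    main(void)
    {
        int b;

        for (b = 0; b <= 32; b += 8)
            printf("/%d -> %d octet(s)\n", b, octets_needed(b));
        return 0;
    }
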
{
"msg_contents": "> I'm not sure why \"00/0\" rather than \"0/0\" but basically you don't have\n> to show more octets than necessary based on the netmask. For example,\n> a netmask of 32 bits requires all 4, 24 bits requires 3, 16 needs 2\n> and 8 (and less) needs 1. Technically you don't even need 1 octet for\n> 0 bits I suppose but \"/0\" doesn't make much sense.\n\nWell, then it is a bug. It should show 00/0.\n\n> \n> This is all based on my understanding. RFC lawyers, please feel free to\n> correct me.\n> \n> > > > When creating a table with either type inet or type cidr as a primary,unique\n> > > > key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n> > > \n> > > I guess I'll take a stab at it. I just need to know which of the following\n> > > is true.\n> > > \n> > > 198.68.123.0/24 < 198.68.123.0/27\n> > > 198.68.123.0/24 > 198.68.123.0/27\n> > > \n> > > Also, is it possible that the current behaviour is what we want? It seems\n> > > to me that if you make a network a primary key, you probably want to prevent\n> > > overlap. What we have does that.\n> > \n> > Good question. If we decide the current behaviour is OK, that is fine\n> > with me. Someone know understand this just needs to say so.\n> \n> And if not, what is the answer to the above question.\n\nDon't know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 23:34:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
] |
[
{
"msg_contents": "I have contacted Daemon News, and they say they are interested in an\narticle about PostgreSQL development. I sent them the attached. I will\nwork on the article, and have everyone review it before senting it to\nthem. I should finish it in about a week.\n\n---------------------------------------------------------------------------\n\nWould you be interested in an article about the history of PostgreSQL by\na team of Internet developers? I have been involved for three years,\nand can discuss the history of the project, managing of developers,\nsource code management, user community relations, technical challenges,\netc. I think it would make an interesting story.\n\nPostgreSQL is at www.postgresql.org.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 10:34:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Article for Daemon News"
},
{
"msg_contents": "\nGreat!\nLooking forward to seeing it. Did you send the same letter to any\nother publications, bsdzine.org, linux journal or the likes around the \nnet?\n\n/Daniel\n\nBruce Momjian writes:\n > I have contacted Daemon News, and they say they are interested in an\n > article about PostgreSQL development. I sent them the attached. I will\n > work on the article, and have everyone review it before senting it to\n > them. I should finish it in about a week.\n\n_______________________________________________________________ /\\__ \n Daniel Lundin - MediaCenter, UNIX and BeOS Developer \\/\n http://www.umc.se/~daniel/\n",
"msg_date": "Mon, 24 May 1999 20:37:56 +0200 (CEST)",
"msg_from": "Daniel Lundin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Article for Daemon News"
},
{
"msg_contents": "> \n> Great!\n> Looking forward to seeing it. Did you send the same letter to any\n> other publications, bsdzine.org, linux journal or the likes around the \n> net?\n> \n\nNo. I certainly could. I just chose Daemon News because we were\noriginally BSD based, and I run BSDI, and Marc runs FreeBSD. I know we\nhave tons of Linux users, and would be glad to send it there too. How\ndo I choose which one?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 17:23:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Article for Daemon News"
},
{
"msg_contents": "Bruce,\n I have also contacted linux journal, they are sending me an\nad package, i could also contact the regarding an article, or \nyou could.\n\n Jeff.\n [email protected]\n\nOn Mon, 24 May 1999, Bruce Momjian wrote:\n\n> > \n> > Great!\n> > Looking forward to seeing it. Did you send the same letter to any\n> > other publications, bsdzine.org, linux journal or the likes around the \n> > net?\n> > \n> \n> No. I certainly could. I just chose Daemon News because we were\n> originally BSD based, and I run BSDI, and Marc runs FreeBSD. I know we\n> have tons of Linux users, and would be glad to send it there too. How\n> do I choose which one?\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Mon, 24 May 1999 18:44:02 -0300 (ADT)",
"msg_from": "Jeff MacDonald <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Article for Daemon News"
},
{
"msg_contents": "> I have also contacted linux journal, they are sending me an\n> ad package, i could also contact the regarding an article, or\n> you could.\n\nLJ had a nice review of Postgres a few issues ago. I'm sure any number\nof specific topics, rather than an overview, would be welcome.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 25 May 1999 01:47:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Article for Daemon News"
}
] |
[
{
"msg_contents": "I'm experiencing a strange problem when playing with the\nbench scripts. (No postmaster running).\n\nPlatform SPARC Linux 2.0.36, latest CVS.\n\nIf I do:-\n\n[postgres@sparclinux bench]$ rm -rf /usr/local/pgsql/data/\n[postgres@sparclinux bench]$ initdb\n\nWe are initializing the database system with username postgres (uid=900).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /usr/local/pgsql/data\n\nCreating Postgres database system directory /usr/local/pgsql/data/base\n\nCreating template database in /usr/local/pgsql/data/base/template1\n\nCreating global classes in /usr/local/pgsql/data/base\n\nAdding template1 database to pg_database...\n\nVacuuming template1\nCreating public pg_user view\nCreating view pg_rules\nCreating view pg_views\nCreating view pg_tables\nCreating view pg_indexes\nLoading pg_description\n[postgres@sparclinux bench]$ postgres -D/usr/local/pgsql/data template1\n\nPOSTGRES backend interactive interface\n$Revision: 1.115 $ $Date: 1999/05/22 17:47:49 $\n\nbackend> create database bench\n\nThe command just hangs.\n\nIf I run under a debugger I can see the following backtrace:-\n\n(gdb) cont\nContinuing.\n\nProgram received signal SIGINT, Interrupt.\nSpinAcquire (lockid=5) at spin.c:109\n109 S_LOCK(&(slckP->locklock));\n(gdb) bt\n#0 SpinAcquire (lockid=5) at spin.c:109\n#1 0xd0b80 in LockAcquire (lockmethod=1, locktag=0xefff6e00, lockmode=1) at lock.c:530\n#2 0xd02c4 in LockRelation (relation=0x2c9b18, lockmode=1) at lmgr.c:185\n#3 0x104d8c in scan_pg_rel_ind (buildinfo={infotype = 2, i = {info_id = 1400920, info_name = 0x156058 \"pg_type_oid_index\"}})\n at relcache.c:394\n#4 0x104c68 in ScanPgRelation (buildinfo={infotype = 2, i = {info_id = 1400920, info_name = 0x156058 \"pg_type_oid_index\"}})\n at relcache.c:316\n#5 0x1059f0 in RelationBuildDesc (buildinfo={infotype = 2, i = {info_id = 1400920, info_name = 0x156058 \"pg_type_oid_index\"}})\n at relcache.c:802\n#6 0x1061cc in RelationNameGetRelation (relationName=0x156058 \"pg_type_oid_index\") at relcache.c:1220\n#7 0x3d0d4 in index_openr (relationName=0x156058 \"pg_type_oid_index\") at indexam.c:153\n#8 0x102d5c in CatalogCacheInitializeCache (cache=0x2ec018, relation=0x2cadb0) at catcache.c:247\n#9 0x103c1c in SearchSysCache (cache=0x2ec018, v1=705, v2=0, v3=0, v4=0) at catcache.c:834\n#10 0x107d8c in SearchSysCacheTuple (cacheId=13, key1=705, key2=0, key3=0, key4=0) at syscache.c:507\n#11 0x96db4 in typeidType (id=705) at parse_type.c:67\n#12 0x955a8 in make_const (value=0x30e390) at parse_node.c:421\n#13 0x91cdc in transformExpr (pstate=0x30e590, expr=0x30e388, precedence=1) at parse_expr.c:105\n#14 0x97ecc in MakeTargetEntryComplex (pstate=0x30e590, res=0x30e3b0) at parse_target.c:367\n#15 0x98430 in transformTargetList (pstate=0x30e590, targetlist=0x30e3c8) at parse_target.c:574\n#16 0x88078 in transformInsertStmt (pstate=0x30e590, stmt=0x30e530) at analyze.c:267\n#17 0x87ee8 in transformStmt (pstate=0x30e590, parseTree=0x30e530) at analyze.c:180\n#18 0x87c60 in parse_analyze (pl=0x30e578, parentParseState=0x0) at analyze.c:72\n#19 0x905b4 in parser (str=0x1d7c00 \"\", typev=0x0, nargs=0) at parser.c:62\n#20 0xd6f58 in pg_parse_and_plan (\n query_string=0xefffb8e0 \"insert into pg_database (datname, datdba, encoding, datpath) values ('bench', '900', '0', 'bench');\",\n typev=0x0, nargs=0, queryListP=0xefffb65c, dest=Debug, aclOverride=0 '\\000') at postgres.c:454\n#21 0xd7350 in pg_exec_query_dest (\n 
query_string=0xefffb8e0 \"insert into pg_database (datname, datdba, encoding, datpath) values ('bench', '900', '0', 'bench');\",\n dest=Debug, aclOverride=0 '\\000') at postgres.c:664\n#22 0x72228 in createdb (dbname=0x30e040 \"bench\", dbpath=0xefffb6e0 \"bench\", encoding=0, dest=Debug) at dbcommands.c:91\n#23 0xd9e68 in ProcessUtility (parsetree=0x30e058, dest=Debug) at utility.c:560\n#24 0xd7400 in pg_exec_query_dest (query_string=0xefffbcc8 \"create database bench\\n\", dest=Debug, aclOverride=0 '\\000') at \npostgres.c:704\n#25 0xd7304 in pg_exec_query (query_string=0xefffbcc8 \"create database bench\\n\") at postgres.c:642\n#26 0xd86fc in PostgresMain (argc=3, argv=0xeffffd94, real_argc=3, real_argv=0xeffffd94) at postgres.c:1626\n#27 0x87c24 in main (argc=3, argv=0xeffffd94) at main.c:103\n(gdb) \n\n",
"msg_date": "Mon, 24 May 1999 17:11:01 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem in S_LOCK?"
},
{
"msg_contents": "I just tried initdb and:\n\n\tVacuuming template1\n\tCreating public pg_user view\n\tCreating view pg_rules\n\tCreating view pg_views\n\tCreating view pg_tables\n\tCreating view pg_indexes\n\tLoading pg_description\n\t#$ aspg postgres -D /u/pg/data template1\n\t\n\tPOSTGRES backend interactive interface \n\t$Revision: 1.115 $ $Date: 1999/05/22 17:47:49 $\n\t\n\tbackend> create database bench \n\tblank\n\t 1: datname (typeid = 19, len = 32, typmod = -1, byval = f)\n\t 2: datdba (typeid = 23, len = 4, typmod = -1, byval = t)\n\t 3: encoding (typeid = 23, len = 4, typmod = -1, byval = t)\n\t 4: datpath (typeid = 25, len = -1, typmod = -1, byval = f)\n\t ----\n\tbackend> \n\nand it worked.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 17:07:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem in S_LOCK?"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> Platform SPARC Linux 2.0.36, latest CVS.\n\nSPARC Linux? Didn't know there was such a thing. You should look at\nthe machine-dependent assembly coding in s_lock.h and s_lock.c. Perhaps\nthe #ifdefs are messed up such that the wrong bit of code is being\nselected for your platform. (We do have spinlock code for SPARC, IIRC,\nbut I wonder whether it gets selected if the platform name is linux ...)\n\nIf the failure just started appearing recently then this probably ain't\nthe answer :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 May 1999 20:30:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem in S_LOCK? "
}
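
A sketch of the platform-specific test-and-set selection Tom suggests checking in s_lock.h. On SPARC the classic primitive is ldstub, which atomically loads a byte and stores 0xff into it; if the surrounding #ifdefs key on the OS rather than the CPU (or vice versa), a SPARC Linux build could pick the wrong implementation and spin forever, as in the backtrace above. The guard and typedef here are illustrative assumptions, not the exact contents of s_lock.h.

    #if defined(__sparc__)
    typedef unsigned char slock_t;

    /* Returns nonzero if the lock was already held, zero if we got it. */
    static int
    tas(volatile slock_t *lock)
    {
        register slock_t _res;

        __asm__ __volatile__(
            "ldstub [%1], %0"   /* atomically: _res = *lock; *lock = 0xff */
            : "=r" (_res)
            : "r" (lock)
            : "memory");
        return (int) _res;
    }
    #endif
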
] |
[
{
"msg_contents": "Again time for me to run tools/pgindent to make our source code more\nuniform. Is anyone sitting on patches? Running this may make your\npatches hard to apply to the new pgindented source tree.\n\nI may run it tonight, so people can run it for a few days before final\nrelease.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 18:06:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgindent"
}
] |
[
{
"msg_contents": "Here is the current 6.5 HISTORY file. I have broken the items up into\nsections. It is much easier to review.\n\n---------------------------------------------------------------------------\n\n POSTGRESQL 6.5\n\nThis release marks the development team's final mastery of the source\ncode we inherited from Berkeley. You will see we are now easily adding\nmajor features, thanks to the increasing size and experience of our\nworld-wide development team:\n\nMulti-version concurrency control(MVCC): This removes our old\ntable-level locking, and replaces it with a locking system that is\nsuperior to most commercial database systems. In a traditional system,\neach row that is modified is locked until committed, preventing reads by\nother users. MVCC uses the natural multi-version nature of PostgreSQL\nto allow readers to continue reading consistent data during writer\nactivity. Writers continue to use the compact pg_log transaction\nsystem. This is all preformed without having to allocate a lock for\nevery row like traditional database systems. So, basically, we no\nlonger have table-level locking, we have something better than row-level\nlocking.\n\nNumeric data type: We now have a true numeric data type, with\nuser-specified precision.\n\nTemporary tables: Temporary tables are guaranteed to have unique names\nwithin a database session, and are destroyed on session exit.\n\nNew SQL features: We now have CASE, INTERSECT, and EXCEPT statement\nsupport. We have new LIMIT/OFFSET, SET TRANSACTION ISOLATION LEVEL,\nSELECT ... FOR UPDATE, and an improved LOCK command.\n\nSpeedups: We continue to speed up PostgreSQL, thanks to the variety of\ntalents within our team. We have sped up memory allocation,\noptimization, table joins, and row transfers routines.\n\nOther: We continue to expand our port list, this time including\nWin32/NT. 
Most interfaces have new versions, and existing functionality\nhas been improved.\n\nPlease look through the list to see the full extent of our changes in\nthis PostgreSQL 6.5 release.\n\n---------------------------------------------------------------------------\n\nBug Fixes\n---------\nFix text<->float8 and text<->float4 conversion functions(Thomas)\nFix for creating tables with mixed-case constraints(Billy)\nChange exp()/pow() behavior to generate error on underflow/overflow(Jan)\nFix bug in pg_dump -z\nMemory overrun cleanups(Tatsuo)\nFix for lo_import crash(Tatsuo)\nAdjust handling of data type names to suppress double quotes(Thomas)\nUse type coersion for matching columns and DEFAULT(Thomas)\nFix deadlock so it only checks once after one second of sleep(Bruce)\nFixes for aggregates and PL/pgsql(Hiroshi)\nFix for subquery crash(Vadim)\nFix for libpq function PQfnumber and case-insensitive names(Bahman Rafatjoo)\nFix for large object write-in-middle, no extra block, memory consumption(Tatsuo)\nFix for pg_dump -d or -D and quote special characters in INSERT\nRepair serious problems with dynahash(Tom)\nFix INET/CIDR portability problems\nFix problem with selectivity error in ALTER TABLE ADD COLUMN(Bruce)\nFix executor so mergejoin of different column types works(Tom)\nFix for Alpha OR selectivity bug\nFix OR index selectivity problem(Bruce)\nFix so \\d shows proper length for char()/varchar()(Ryan)\nFix tutorial code(Clark)\nImprove destroyuser checking(Oliver)\nFix for Kerberos(Rodney McDuff)\nFix for dropping database while dirty buffers(Bruce)\nFix so sequence nextval() can be case-sensitive(Bruce)\nFix !!= operator\nDrop buffers before destroying database files(Bruce)\nFix case where executor evaluates functions twice(Tatsuo)\nAllow sequence nextval actions to be case-sensitive(Bruce)\nFix optimizer indexing not working for negative numbers(Bruce)\nFix for memory leak in executor with fjIsNull\nFix for aggregate memory leaks(Erik Riedel)\nAllow username containing a dash GRANT permissions\nCleanup of NULL in inet types\nClean up system�table bugs(Tom)\nFix problems of PAGER and \\? command(Masaaki Sakaida)\nReduce default multi-segment file size limit to 1GB(Peter)\nFix for dumping of CREATE OPERATOR(Tom)\nFix for backward scanning of cursors(Hiroshi Inoue)\nFix for COPY FROM STDIN when using \\i(Tom)\nFix for subselect is compared inside an expression(Jan)\nFix handling of error reporting while returning rows(Tom)\nFix problems with reference to array types(Tom,Jan)\nPrevent UPDATE SET oid(Jan)\nFix pg_dump so -t option can handle case-sensitive tablenames\nFixes for GROUP BY in special cases(Tom, Jan)\nFix for memory leak in failed queries(Tom)\nDEFAULT now supports mixed-case identifiers(Tom)\nFix for multi-segment uses of DROP/RENAME table, indexes(Ole Gjerde)\n\nEnhancements\n------------\nAdd \"vacuumdb\" utility\nSpeed up libpq by allocating memory better(Tom)\nEXPLAIN all indices used(Tom)\nImplement CASE, COALESCE, NULLIF expression(Thomas)\nNew pg_dump table output format(Constantin)\nAdd string min()/max() functions(Thomas)\nExtend new type coersion techniques to aggregates(Thomas)\nNew moddatetime contrib(Terry)\nUpdate to pgaccess 0.96(Constantin)\nAdd routines for single-byte \"char\" type(Thomas)\nImproved substr() function(Thomas)\nImproved multi-byte handling(Tatsuo)\nMulti-version concurrency control/MVCC(Vadim)\nNew Serialized mode(Vadim)\nFix for tables over 2gigs(Peter)\nNew SET TRANSACTION ISOLATION LEVEL(Vadim)\nNew LOCK TABLE IN ... 
MODE(Vadim)\nUpdate ODBC driver(Byron)\nNew NUMERIC data type(Jan)\nNew SELECT FOR UPDATE(Vadim)\nHandle \"NaN\" and \"Infinity\" for input values(Jan)\nImproved date/year handling(Thomas)\nImproved handling of backend connections(Magnus)\nNew options ELOG_TIMESTAMPS and USE_SYSLOG options for log files(Massimo)\nNew TCL_ARRAYS option(Massimo)\nNew INTERSECT and EXCEPT(Stefan)\nNew pg_index.indisprimary for primary key tracking(D'Arcy)\nNew pg_dump option to allow dropping of tables before creation(Brook)\nSpeedup of row output routines(Tom)\nNew READ COMMITTED isolation level(Vadim)\nNew TEMP tables/indexes(Bruce)\nPrevent sorting if result is already sorted(Jan)\nNew memory allocation optimization(Jan)\nAllow psql to do \\p\\g(Bruce)\nAllow multiple rule actions(Jan)\nAdded LIMIT/OFFSET functionality(Jan)\nImprove optimizer when joining a large number of tables(Bruce)\nNew intro to SQL from S. Simkovics' Master's Thesis (Stefan, Thomas)\nNew intro to backend processing from S. Simkovics' Master's Thesis (Stefan)\nImproved int8 support(Ryan Bradetich, Thomas, Tom)\nNew routines to convert between int8 and text/varchar types(Thomas)\nNew bushy plans, where meta-tables are joined(Bruce)\nEnable right-hand queries by default(Bruce)\nAllow reliable maximum number of backends to be set at configure time\n (--with-maxbackends and postmaster switch (-N backends))(Tom)\nGEQO default now 10 tables because of optimizer speedups(Tom)\nAllow NULL=Var for MS-SQL portability(Michael, Bruce)\nModify contrib check_primary_key() so either \"automatic\" or \"dependent\"(Anand)\nAllow psql \\d on a view show query(Ryan)\nSpeedup for LIKE(Bruce)\nEcpg fixes/features, see src/interfaces/ecpg/ChangeLog file(Michael)\nJDBC fixes/features, see src/interfaces/jdbc/CHANGELOG(Peter)\nMake % operator have precedence like /(Bruce)\nAdd new postgres -O option to allow system table structure changes(Bruce)\nUpdate contrib/pginterface/findoidjoins script(Tom)\nMajor speedup in vacuum of deleted rows with indexes(Vadim) \nAllow non-SQL functions to run different versions based on arguments(Tom)\nAdd -E option that shows actual queries sent by \\dt and friends(Masaaki Sakaida)\nAdd version number in startup banners for psql(Masaaki Sakaida)\nNew contrib/vacuumlo removes large objects not referenced(Peter)\nNew initialization for table sizes so non-vacuumed tables perform better(Tom)\nImprove error messages when a connection is rejected(Tom)\nSupport for arrays of char() and varchar() fields(Massimo)\nOverhaul of hash code to increase reliability and performance(Tom)\nUpdate to PyGreSQL 2.4(D'Arcy)\nChanged debug options so -d4 and -d5 produce different node displays(Jan)\nNew pg_options: pretty_plan, pretty_parse, pretty_rewritten(Jan)\nBetter optimization statistics for system table access(Tom)\nBetter handling of non-default block sizes(Massimo)\nImprove GEQO optimizer memory consumption(Tom)\nUNION now suppports ORDER BY of columns not in target list(Tom)\n\nSource Tree Changes\n-------------------\nImprove port matching(Tom)\nPortability fixes for SunOS\nAdd NT/Win32 backend port and enable dynamic loading(Magnus and Daniel Horak)\nNew port to Cobalt Qube(Mips) running Linux(Tatsuo)\nPort to NetBSD/m68k(Mr. Mutsuki Nakajima)\nPort to NetBSD/sun3(Mr. 
Mutsuki Nakajima)\nPort to NetBSD/macppc(Toshimi Aoki)\nFix for tcl/tk configuration(Vince)\nRemoved CURRENT keyword for rule queries(Jan)\nNT dynamic loading now works(Daniel Horak)\nAdd ARM32 support(Andrew McMurry)\nBetter support for HPUX 11 and Unixware\nImprove file handling to be more uniform, prevent file descriptor leak(Tom)\nNew install commands for plpgsql(Jan)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 18:10:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5 HISTORY file"
}
] |
[
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > Not really. It required changing function calls all over, and probably\n> > > relies on 6.5 changes too. It is in the 6.5 beta right now. Are you\n> > > using that?\n> >\n> >\n> > I did all the tests with the current snapshot of May 18th.\n> >\n> \n> Fix was installed yesterday.\n> \n\nsorry, but the problem still persists. I used \n\n -rw-r--r-- 1 1005 root 5954900 May 24 03:03 postgresql.snapshot.tar.gz\n\nI started the query \n\n update bench set k500k = k500k + 1 where k100 = 30;\n\nand I killed it after half an hour of havy disk activity.\nThe same query on the same machine with the same setup,\nbut using sybase-ase-11.0.3.3-1 takes less than 1 minute.\n\nThe table bench looks like the following:\n\nTable = bench\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| kseq | int4 not null | 4 |\n| k500k | int4 not null | 4 |\n| k250k | int4 not null | 4 |\n| k100k | int4 not null | 4 |\n| k40k | int4 not null | 4 |\n| k10k | int4 not null | 4 |\n| k1k | int4 not null | 4 |\n| k100 | int4 not null | 4 |\n| k25 | int4 not null | 4 |\n| k10 | int4 not null | 4 |\n| k5 | int4 not null | 4 |\n| k4 | int4 not null | 4 |\n| k2 | int4 not null | 4 |\n| s1 | char() not null | 8 |\n| s2 | char() not null | 20 |\n| s3 | char() not null | 20 |\n| s4 | char() not null | 20 |\n| s5 | char() not null | 20 |\n| s6 | char() not null | 20 |\n| s7 | char() not null | 20 |\n| s8 | char() not null | 20 |\n+----------------------------------+----------------------------------+-------+\n\n\nThe table is filled with 1.000.000 rows of random data\nand on every field an index is created.\n\n\nEdmund\n\n\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 25 May 1999 00:13:33 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> The table is filled with 1.000.000 rows of random data\n> and on every field an index is created.\n\nBTW, do you happen to know just how random the data actually is?\nI noticed that the update query\n\tupdate bench set k500k = k500k + 1 where k100 = 30;\nupdates 10,000 rows. If this \"random\" data actually consists of\n10,000 repetitions of only 100 distinct values in every column,\nthen a possible explanation for the problem would be that our\nbtree index code isn't very fast when there are large numbers of\nidentical keys. (Mind you, I have no idea if that's true or not,\nI'm just trying to think of likely trouble spots. Anyone know\nbtree well enough to say whether that is likely to be a problem?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 May 1999 20:20:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "I wrote:\n> then a possible explanation for the problem would be that our\n> btree index code isn't very fast when there are large numbers of\n> identical keys.\n\nAh-hah, a lucky guess! I went back and looked at the profile stats\nI'd extracted from Edmund's \"update\" example. This Linux box has\nthe same semi-functional gprof as someone else was using a while\nago --- the timings are bogus, but the call counts seem right.\nAnd what I find are entries like this:\n\n 0.00 0.00 284977/284977 _bt_binsrch [3174]\n[3177] 0.0 0.00 0.00 284977 _bt_firsteq [3177]\n 0.00 0.00 21784948/24713758 _bt_compare [3169]\n\n 0.00 0.00 426/35632 _bt_split [53]\n 0.00 0.00 35206/35632 _bt_insertonpg [45]\n[3185] 0.0 0.00 0.00 35632 _bt_findsplitloc [3185]\n 0.00 0.00 5093972/8907411 _bt_itemcmp [3171]\n\nIn other words, _bt_firsteq is averaging almost 100 comparisons per\ncall, _bt_findsplitloc well over that. Both of these routines are\nevidently designed on the assumption that there will be relatively\nfew duplicate keys --- they reduce to linear scans when there are\nmany duplicates.\n\n_bt_firsteq shouldn't exist at all; the only routine that calls it\nis _bt_binsrch, which does a fast binary search of the index page.\n_bt_binsrch should be fixed so that the binary search logic does the\nright thing for equal-key cases, rather than degrading to a linear\nsearch. I am less confident that I understand _bt_findsplitloc's place\nin the great scheme of things, but it could certainly be revised to use\na binary search rather than linear scan.\n\nThis benchmark is clearly overemphasizing the equal-key case, but\nI think it ought to be fixed independently of whether we want to\nlook good on a dubious benchmark ... equal keys are not uncommon in\nreal-world scenarios after all.\n\nNext question is do we want to risk twiddling this code so soon before\n6.5, or wait till after?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 May 1999 22:16:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
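
A standalone sketch of the repair Tom outlines above: make the binary search itself converge on the first item greater than or equal to the key, so no linear back-scan over duplicates (the job _bt_firsteq does today) is needed. This runs over a plain int array for illustration; it is not the btree page layout that _bt_binsrch() actually works with.

    #include <stdio.h>

    /* Return the index of the first element >= key in sorted items[0..n),
     * or n if there is none.  O(log n) even when every key is equal. */
    static int
    first_geq(const int *items, int n, int key)
    {
        int low = 0, high = n;

        while (low < high)
        {
            int mid = low + (high - low) / 2;

            if (items[mid] < key)
                low = mid + 1;      /* first match lies right of mid */
            else
                high = mid;         /* mid may be the first match */
        }
        return low;
    }

    int
    main(void)
    {
        int page[] = {1, 3, 3, 3, 3, 3, 3, 7, 9};

        /* lands on offset 1 directly, without walking the duplicates */
        printf("%d\n", first_geq(page, 9, 3));
        return 0;
    }
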
{
"msg_contents": "Tom Lane wrote:\n> \n> Edmund Mergl <[email protected]> writes:\n> > The table is filled with 1.000.000 rows of random data\n> > and on every field an index is created.\n> \n> BTW, do you happen to know just how random the data actually is?\n> I noticed that the update query\n> update bench set k500k = k500k + 1 where k100 = 30;\n> updates 10,000 rows. If this \"random\" data actually consists of\n> 10,000 repetitions of only 100 distinct values in every column,\n> then a possible explanation for the problem would be that our\n> btree index code isn't very fast when there are large numbers of\n> identical keys. (Mind you, I have no idea if that's true or not,\n> I'm just trying to think of likely trouble spots. Anyone know\n> btree well enough to say whether that is likely to be a problem?)\n> \n> regards, tom lane\n\nthe query:\n\n update bench set k500k = k500k + 1 where k100 = 30;\n\naffects about 10.000 rows. This can be determined by running \nthe query:\n\n select k500k from bench where k100 = 30;\n\nwhich takes about half a minute. That's the reason I \nwas talking about the strange UPDATE behavior of \nPostgreSQL. If it can determine a specific number\nof rows in a reasonable time, it should be able to\nupdate these rows in the same time frame.\n\nEdmund\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 25 May 1999 07:11:12 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> ... That's the reason I \n> was talking about the strange UPDATE behavior of \n> PostgreSQL. If it can determine a specific number\n> of rows in a reasonable time, it should be able to\n> update these rows in the same time frame.\n\nNot necessarily --- this table has a remarkably large number of indexes,\nand all of them have to be updated when a tuple is replaced. So the\namount of work is significantly greater than simply finding the tuples\nwill require.\n\nAs I posted later, I think that much of the problem comes from poor\nhandling of equal-key cases in our btree index routines...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 09:51:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Edmund Mergl <[email protected]> writes:\n> > ... That's the reason I\n> > was talking about the strange UPDATE behavior of\n> > PostgreSQL. If it can determine a specific number\n> > of rows in a reasonable time, it should be able to\n> > update these rows in the same time frame.\n> \n> Not necessarily --- this table has a remarkably large number of indexes,\n> and all of them have to be updated when a tuple is replaced. So the\n> amount of work is significantly greater than simply finding the tuples\n> will require.\n> \n> As I posted later, I think that much of the problem comes from poor\n> handling of equal-key cases in our btree index routines...\n> \n> regards, tom lane\n\n\nif this is the case, these routines must be very poor.\nAgain some numbers:\n\n1.000.000 rows:\n\n- select * from bench where k100 = 30\n with indeces 10 seconds\n without indeces 28 seconds\n\n- update bench set k500k = k500k + 1 where k100 = 30\n with indeces unknown\n without indeces 36 seconds\n\n\nStill the poor update routines do not explain the\nstrange behavior, that the postmaster runs for\nhours using at most 10% CPU, and all the time\nheavy disk activity is observed. According to\ntop, there are over 80MB free Mem and the postmaster\nhas been started with -o -F. Hence this disk activity\ncan not be simple swapping.\n\n\nSome more numbers:\n\n database #rows inserts create make_sqs make_nqs\n index selects updates\n----------------------------------------------------------------------------\n pgsql 10.000 00:24 00:09 00:16 00:25\n pgsql 100.000 04:01 01:29 01:06 49:45\n pgsql 1.000.000 39:24 20:49 23:42 ???\n\n\nwhereas the increase of elapsed time is somewhat proportional\nto the number of rows, for the update-case it is rather\nexponential.\n\n\nEdmund\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 25 May 1999 20:46:41 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> Some more numbers:\n\n> database #rows inserts create make_sqs make_nqs\n> index selects updates\n> ----------------------------------------------------------------------------\n> pgsql 10.000 00:24 00:09 00:16 00:25\n> pgsql 100.000 04:01 01:29 01:06 49:45\n> pgsql 1.000.000 39:24 20:49 23:42 ???\n\n> whereas the increase of elapsed time is somewhat proportional\n> to the number of rows, for the update-case it is rather\n> exponential.\n\nThose are attention-getting numbers, all right. I think that the two\nequal-key problems I found last night might partially explain them;\nI suspect there are more that I have not found, too. I will look into\nit some more.\n\nCould you try the same queries with no indexes in place, and see what\nthe time scaling is like then? That would confirm or deny the theory\nthat it's an index-update problem.\n\nQuestion for the hackers list: are we prepared to install purely\nperformance-related bug fixes at this late stage of the 6.5 beta cycle?\nBad as the above numbers are, I hesitate to twiddle the btree code and\nrisk breaking things with only a week of testing time to go...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 15:16:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Edmund Mergl <[email protected]> writes:\n> > Some more numbers:\n> \n> > database #rows inserts create make_sqs make_nqs\n> > index selects updates\n> > ----------------------------------------------------------------------------\n> > pgsql 10.000 00:24 00:09 00:16 00:25\n> > pgsql 100.000 04:01 01:29 01:06 49:45\n> > pgsql 1.000.000 39:24 20:49 23:42 ???\n> \n> > whereas the increase of elapsed time is somewhat proportional\n> > to the number of rows, for the update-case it is rather\n> > exponential.\n> \n> Those are attention-getting numbers, all right. I think that the two\n> equal-key problems I found last night might partially explain them;\n> I suspect there are more that I have not found, too. I will look into\n> it some more.\n> \n> Could you try the same queries with no indexes in place, and see what\n> the time scaling is like then? That would confirm or deny the theory\n> that it's an index-update problem.\n\n\nhere they are, and yes, I double-checked them twice !\n\n database #rows inserts create make_sqs make_nqs\n index selects updates\n----------------------------------------------------------------------------\n pgsql 10.000 00:24 - 00:13 00:05\n pgsql 100.000 04:01 - 00:83 00:32\n pgsql 1.000.000 39:24 - 26:36 26:52\n\n\n> \n> Question for the hackers list: are we prepared to install purely\n> performance-related bug fixes at this late stage of the 6.5 beta cycle?\n> Bad as the above numbers are, I hesitate to twiddle the btree code and\n> risk breaking things with only a week of testing time to go...\n> \n> regards, tom lane\n\n\nif there is anything else I can do, just let me know.\n\nEdmund\n\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 25 May 1999 23:31:37 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Those are attention-getting numbers, all right. I think that the two\n> equal-key problems I found last night might partially explain them;\n> I suspect there are more that I have not found, too. I will look into\n> it some more.\n\nAm I correct that update takes ~ 10% CPU with high disk activity?\n(Unfortunately, no list archive after May 13, so I'm not able\nto re-read thread).\nRemember that update inserts new index tuples and most likely\nindex scan will see these tuples and fetch just inserted\nheap tuples.\n\n> Could you try the same queries with no indexes in place, and see what\n> the time scaling is like then? That would confirm or deny the theory\n> that it's an index-update problem.\n> \n> Question for the hackers list: are we prepared to install purely\n> performance-related bug fixes at this late stage of the 6.5 beta cycle?\n> Bad as the above numbers are, I hesitate to twiddle the btree code and\n> risk breaking things with only a week of testing time to go...\n\nTry to fix problems and run Edmund scripts to see are things\nbetter than now. We can apply fixes after 6.5.\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 10:45:30 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n>> Could you try the same queries with no indexes in place, and see what\n>> the time scaling is like then? That would confirm or deny the theory\n>> that it's an index-update problem.\n\n> here they are, and yes, I double-checked them twice !\n\n> database #rows inserts create make_sqs make_nqs\n> index selects updates\n> ----------------------------------------------------------------------------\n> pgsql 10.000 00:24 - 00:13 00:05\n> pgsql 100.000 04:01 - 00:83 00:32\n> pgsql 1.000.000 39:24 - 26:36 26:52\n\nOh dear ... so much for my theory that index updates are to blame for\nthe scaling problem. Back to the drawing board ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 09:42:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
},
{
"msg_contents": "Added to TODO:\n\n* Improve _bt_binsrch() to handle equal keys better, remove _bt_firsteq()(Tom)\n\n\n> I wrote:\n> > then a possible explanation for the problem would be that our\n> > btree index code isn't very fast when there are large numbers of\n> > identical keys.\n> \n> Ah-hah, a lucky guess! I went back and looked at the profile stats\n> I'd extracted from Edmund's \"update\" example. This Linux box has\n> the same semi-functional gprof as someone else was using a while\n> ago --- the timings are bogus, but the call counts seem right.\n> And what I find are entries like this:\n> \n> 0.00 0.00 284977/284977 _bt_binsrch [3174]\n> [3177] 0.0 0.00 0.00 284977 _bt_firsteq [3177]\n> 0.00 0.00 21784948/24713758 _bt_compare [3169]\n> \n> 0.00 0.00 426/35632 _bt_split [53]\n> 0.00 0.00 35206/35632 _bt_insertonpg [45]\n> [3185] 0.0 0.00 0.00 35632 _bt_findsplitloc [3185]\n> 0.00 0.00 5093972/8907411 _bt_itemcmp [3171]\n> \n> In other words, _bt_firsteq is averaging almost 100 comparisons per\n> call, _bt_findsplitloc well over that. Both of these routines are\n> evidently designed on the assumption that there will be relatively\n> few duplicate keys --- they reduce to linear scans when there are\n> many duplicates.\n> \n> _bt_firsteq shouldn't exist at all; the only routine that calls it\n> is _bt_binsrch, which does a fast binary search of the index page.\n> _bt_binsrch should be fixed so that the binary search logic does the\n> right thing for equal-key cases, rather than degrading to a linear\n> search. I am less confident that I understand _bt_findsplitloc's place\n> in the great scheme of things, but it could certainly be revised to use\n> a binary search rather than linear scan.\n> \n> This benchmark is clearly overemphasizing the equal-key case, but\n> I think it ought to be fixed independently of whether we want to\n> look good on a dubious benchmark ... equal keys are not uncommon in\n> real-world scenarios after all.\n> \n> Next question is do we want to risk twiddling this code so soon before\n> 6.5, or wait till after?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 17:13:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Tom, I assume this is done, right?\n\n\n> I wrote:\n> > then a possible explanation for the problem would be that our\n> > btree index code isn't very fast when there are large numbers of\n> > identical keys.\n> \n> Ah-hah, a lucky guess! I went back and looked at the profile stats\n> I'd extracted from Edmund's \"update\" example. This Linux box has\n> the same semi-functional gprof as someone else was using a while\n> ago --- the timings are bogus, but the call counts seem right.\n> And what I find are entries like this:\n> \n> 0.00 0.00 284977/284977 _bt_binsrch [3174]\n> [3177] 0.0 0.00 0.00 284977 _bt_firsteq [3177]\n> 0.00 0.00 21784948/24713758 _bt_compare [3169]\n> \n> 0.00 0.00 426/35632 _bt_split [53]\n> 0.00 0.00 35206/35632 _bt_insertonpg [45]\n> [3185] 0.0 0.00 0.00 35632 _bt_findsplitloc [3185]\n> 0.00 0.00 5093972/8907411 _bt_itemcmp [3171]\n> \n> In other words, _bt_firsteq is averaging almost 100 comparisons per\n> call, _bt_findsplitloc well over that. Both of these routines are\n> evidently designed on the assumption that there will be relatively\n> few duplicate keys --- they reduce to linear scans when there are\n> many duplicates.\n> \n> _bt_firsteq shouldn't exist at all; the only routine that calls it\n> is _bt_binsrch, which does a fast binary search of the index page.\n> _bt_binsrch should be fixed so that the binary search logic does the\n> right thing for equal-key cases, rather than degrading to a linear\n> search. I am less confident that I understand _bt_findsplitloc's place\n> in the great scheme of things, but it could certainly be revised to use\n> a binary search rather than linear scan.\n> \n> This benchmark is clearly overemphasizing the equal-key case, but\n> I think it ought to be fixed independently of whether we want to\n> look good on a dubious benchmark ... equal keys are not uncommon in\n> real-world scenarios after all.\n> \n> Next question is do we want to risk twiddling this code so soon before\n> 6.5, or wait till after?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 15:42:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I assume this is done, right?\n\nOnly partly. I changed _bt_binsrch to use a pure binary search,\nbut didn't yet get around to fixing _bt_findsplitloc (it's more\ncomplicated :-().\n\nVadim seemed to think that a better solution would be to fix the\ncomparison rules so that no two entries are ever considered to have\nequal keys anyway. So I put my change on the back burner ... but if\nhis change doesn't happen, mine ought to get done.\n\n\n>> I wrote:\n>>>> then a possible explanation for the problem would be that our\n>>>> btree index code isn't very fast when there are large numbers of\n>>>> identical keys.\n>> \n>> Ah-hah, a lucky guess! I went back and looked at the profile stats\n>> I'd extracted from Edmund's \"update\" example. This Linux box has\n>> the same semi-functional gprof as someone else was using a while\n>> ago --- the timings are bogus, but the call counts seem right.\n>> And what I find are entries like this:\n>> \n>> 0.00 0.00 284977/284977 _bt_binsrch [3174]\n>> [3177] 0.0 0.00 0.00 284977 _bt_firsteq [3177]\n>> 0.00 0.00 21784948/24713758 _bt_compare [3169]\n>> \n>> 0.00 0.00 426/35632 _bt_split [53]\n>> 0.00 0.00 35206/35632 _bt_insertonpg [45]\n>> [3185] 0.0 0.00 0.00 35632 _bt_findsplitloc [3185]\n>> 0.00 0.00 5093972/8907411 _bt_itemcmp [3171]\n>> \n>> In other words, _bt_firsteq is averaging almost 100 comparisons per\n>> call, _bt_findsplitloc well over that. Both of these routines are\n>> evidently designed on the assumption that there will be relatively\n>> few duplicate keys --- they reduce to linear scans when there are\n>> many duplicates.\n>> \n>> _bt_firsteq shouldn't exist at all; the only routine that calls it\n>> is _bt_binsrch, which does a fast binary search of the index page.\n>> _bt_binsrch should be fixed so that the binary search logic does the\n>> right thing for equal-key cases, rather than degrading to a linear\n>> search. I am less confident that I understand _bt_findsplitloc's place\n>> in the great scheme of things, but it could certainly be revised to use\n>> a binary search rather than linear scan.\n>> \n>> This benchmark is clearly overemphasizing the equal-key case, but\n>> I think it ought to be fixed independently of whether we want to\n>> look good on a dubious benchmark ... equal keys are not uncommon in\n>> real-world scenarios after all.\n>> \n>> Next question is do we want to risk twiddling this code so soon before\n>> 6.5, or wait till after?\n",
"msg_date": "Tue, 21 Sep 1999 21:16:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
}
] |
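[Editor's note] The equal-key fix discussed in the thread above amounts to making the per-page binary search land on the leftmost duplicate directly, instead of falling back to a linear scan (the _bt_firsteq behavior Tom profiled). The following C sketch only illustrates that search shape with hypothetical names -- it is not the actual PostgreSQL _bt_binsrch code.

/*
 * Editor's illustration (not the actual PostgreSQL _bt_binsrch code):
 * a binary search that returns the FIRST index whose key equals the
 * search key, without degrading to a linear scan when many keys are
 * equal.  All names here are hypothetical.
 */
#include <stdio.h>

/* Return the smallest index i in keys[0..n-1] with keys[i] >= target,
 * or n if every key is smaller.  With duplicates this lands on the
 * leftmost equal key in O(log n) comparisons. */
static int
first_equal_or_greater(const int *keys, int n, int target)
{
    int lo = 0, hi = n;          /* search the half-open range [lo, hi) */

    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (keys[mid] < target)
            lo = mid + 1;        /* leftmost match must be right of mid */
        else
            hi = mid;            /* mid could itself be the answer */
    }
    return lo;
}

int
main(void)
{
    int keys[] = {1, 3, 3, 3, 3, 3, 7, 9};
    int n = (int) (sizeof keys / sizeof keys[0]);

    /* Lands on index 1, the first of the five equal '3' keys. */
    printf("first 3 at index %d\n", first_equal_or_greater(keys, n, 3));
    return 0;
}

This is the classic "lower bound" variant: when keys[mid] equals the target, the search keeps narrowing to the left, so a run of duplicates costs O(log n) comparisons instead of the O(n) scan the profile showed.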
[
{
"msg_contents": "Why is this line here, the size-- line? It does not appear in the\ninet_cidr_ntop_ipv4() function. This is maybe the cause of our 00/0.\n\n---------------------------------------------------------------------------\n\n\nstatic char *\ninet_net_ntop_ipv4(const u_char *src, int bits, char *dst, size_t size)\n{\n char *odst = dst;\n char *t;\n size_t len = 4;\n int b, tb;\n\n if (bits < 0 || bits > 32)\n {\n errno = EINVAL;\n return (NULL);\n }\n if (bits == 0)\n {\n if (size < sizeof \"0\")\n goto emsgsize;\n *dst++ = '0';\n size--;\n *dst = '\\0';\n }\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 23:57:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "INET type and 00/0"
}
] |
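[Editor's note] The size-- asked about above follows the usual idiom of these ntop routines: size always holds the space remaining in dst and is decremented as each byte is emitted, so later writes can be bounds-checked against it. A minimal hedged sketch of the pattern follows; emit_zero is a hypothetical name, and this is not the actual library code.

/* Editor's sketch of the buffer-accounting idiom used throughout the
 * inet_*_ntop routines: 'size' tracks the space remaining in 'dst'.
 * In the real routine the decremented size bounds all later writes;
 * here we only emit the special-case '0'.  Hypothetical simplification. */
#include <stddef.h>
#include <stdio.h>

static char *
emit_zero(char *dst, size_t size)
{
    if (size < sizeof "0")      /* need room for '0' plus the NUL */
        return NULL;            /* would overflow: caller reports EMSGSIZE */
    *dst++ = '0';
    size--;                     /* one byte of 'dst' consumed */
    *dst = '\0';                /* NUL uses the space checked above */
    return dst;
}

int
main(void)
{
    char buf[4];

    if (emit_zero(buf, sizeof buf) != NULL)
        printf("wrote \"%s\"\n", buf);
    return 0;
}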
[
{
"msg_contents": "This seems like the reasonable fix for this problem, and I am going to\napply it. Any comments?\n\n---------------------------------------------------------------------------\n\n\nIndex: src/backend/utils/adt/inet_net_ntop.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_ntop.c,v\nretrieving revision 1.4\ndiff -c -r1.4 inet_net_ntop.c\n*** src/backend/utils/adt/inet_net_ntop.c\t1999/01/01 04:17:13\t1.4\n--- src/backend/utils/adt/inet_net_ntop.c\t1999/05/25 05:17:40\n***************\n*** 207,213 ****\n \n \t/* Format whole octets plus nonzero trailing octets. */\n \ttb = (bits == 32) ? 31 : bits;\n! \tfor (b = 0; b <= (tb / 8) || (b < len && *src != 0); b++)\n \t{\n \t\tif (size < sizeof \"255.\")\n \t\t\tgoto emsgsize;\n--- 207,213 ----\n \n \t/* Format whole octets plus nonzero trailing octets. */\n \ttb = (bits == 32) ? 31 : bits;\n! \tfor (b = 0; bits != 0 && (b <= (tb / 8) || (b < len && *src != 0)); b++)\n \t{\n \t\tif (size < sizeof \"255.\")\n \t\t\tgoto emsgsize;\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 01:28:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix for INET 00/0"
}
] |
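[Editor's note] The guard added in the patch above (bits != 0) stops the octet loop from running after the special-cased '0' has already been emitted for a zero-bit prefix, which is apparently what produced "00/0" instead of "0/0". On systems that ship Vixie's inet_net_ntop() -- many BSD libcs, and glibc when linked with -lresolv -- the zero-bit case can be exercised directly; this harness is a hedged illustration against the system function, not part of the Postgres patch.

/* Editor's test sketch: exercising the zero-bit case through the
 * system's inet_net_ntop(), from which the Postgres copy appears to be
 * derived.  Availability varies (BSD libc; on glibc link with -lresolv). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(void)
{
    unsigned char any[4] = {0, 0, 0, 0};
    char buf[32];

    /* A correct implementation prints "0/0" here; the pre-patch
     * Postgres loop emitted the zero octet a second time, "00/0". */
    if (inet_net_ntop(AF_INET, any, 0, buf, sizeof buf) != NULL)
        printf("%s\n", buf);
    else
        perror("inet_net_ntop");
    return 0;
}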
[
{
"msg_contents": "Does anybody know of a URL to a good (easy comprehended) reference to\nSQL92?\n\nI know \"easy\" may be too much, but there are levels of unreadability...\n\n",
"msg_date": "Tue, 25 May 1999 07:29:09 +0200 (CEST)",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL92"
}
] |
[
{
"msg_contents": "Hi,\n\ntoday I did some tests with current 6.5 from cvs and multiple joins.\nI did unpredictable server crashes, i.e. sometimes query works\nsometimes crashes. After about a hour of my experiments I can't drop table in\nmy test database:\n\n13:55[mira]:~/app/sql>mkjoindata.pl --joins 10 --rows 20 | psql test\n\nmkjoindata.pl - is my test script specially rewritten to get parameters\nfrom command line. It generates test data, sql commands and automatize\nprocess of postgres crashing :-) I attach this new version to my post.\n\n\tRegards,\n\n\t\tOleg\n\nPS.\nTom (Lane), sometimes I got an old behaivour of postgres on big joins - \nall memory (ram+swap) exhausted. I remember a week ago that was fixed\nand I certainly did the same tests without any problem.\n\ndrop table t0;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\nBacktrace:\nmira:/usr/local/pgsql/data/base/test# gdb /usr/local/pgsql/bin/postmaster core\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux), \nCopyright 1996 Free Software Foundation, Inc...\nCore was generated by /usr/local/pgsql/bin/postgres localhost megera test DROP \n '.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libdl.so.1...done.\nReading symbols from /lib/libm.so.5...done.\nReading symbols from /usr/lib/libreadline.so...done.\nReading symbols from /usr/lib/libhistory.so...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /lib/libncurses.so.3.0...done.\nReading symbols from /usr/lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\n#0 0x806aa2b in heapgettup ()\n(gdb) bt\n#0 0x806aa2b in heapgettup ()\n#1 0x806b7d1 in heap_getnext ()\n#2 0x807ca56 in DeleteTypeTuple ()\n#3 0x807cbb5 in heap_destroy_with_catalog ()\n#4 0x8083128 in RemoveRelation ()\n#5 0x80e41ef in ProcessUtility ()\n#6 0x80e2486 in pg_exec_query_dest ()\n#7 0x80e23cc in pg_exec_query ()\n#8 0x80e3518 in PostgresMain ()\n#9 0x80cc72c in DoBackend ()\n#10 0x80cc26b in BackendStartup ()\n#11 0x80cb9e7 in ServerLoop ()\n#12 0x80cb573 in PostmasterMain ()\n#13 0x80a2999 in main ()\n#14 0x806131e in _start ()\n(gdb) \n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Tue, 25 May 1999 13:53:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5 cvs: can't drop table"
},
{
"msg_contents": "I had to destroy test database to continue my tests.\nHere is backtrace from postgres crashing on 10 tables joining:\n\n14:10[mira]:~/app/sql>mkjoindata.pl --joins 10 --rows 20 | psql test\n\n\n ...............\nselect t0.a,t1.a as t1,t2.a as t2,t3.a as t3,t4.a as t4,t5.a as t5,t6.a as t6,t7.a as t7,t8.a as t8,t9.a as t9,t10.a as t10\n from t0 ,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10\n where t1.id = t0.id and t2.id=t0.id and t3.id=t0.id and t4.id=t0.id and t5.id=t0.id and t6.id=t0.id and t7.id=t0.id and t8.id=t0.id and t9.id=t0.id and t10.id=t0.id ;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\n\ngdb /usr/local/pgsql/bin/postmaster core\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux), \nCopyright 1996 Free Software Foundation, Inc...\nCore was generated by /usr/local/pgsql/bin/postgres localhost megera test idle \n '.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libdl.so.1...done.\nReading symbols from /lib/libm.so.5...done.\nReading symbols from /usr/lib/libreadline.so...done.\nReading symbols from /usr/lib/libhistory.so...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /lib/libncurses.so.3.0...done.\nReading symbols from /usr/lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\n#0 0x80b6065 in equal ()\n(gdb) bt\n#0 0x80b6065 in equal ()\n#1 0x80c7652 in pathorder_match ()\n#2 0x80c7900 in better_path ()\n#3 0x80c7863 in add_pathlist ()\n#4 0x80bf551 in update_rels_pathlist_for_joins ()\n#5 0x80c8b8b in gimme_tree ()\n#6 0x80c8ae7 in geqo_eval ()\n#7 0x80c99f2 in random_init_pool ()\n#8 0x80c8c7b in geqo ()\n#9 0x80bd6e6 in make_one_rel_by_joins ()\n#10 0x80bd5ee in make_one_rel ()\n#11 0x80c1e81 in subplanner ()\n#12 0x80c1dff in query_planner ()\n#13 0x80c2173 in union_planner ()\n#14 0x80c1f55 in planner ()\n#15 0x80e22e7 in pg_parse_and_plan ()\n#16 0x80e240b in pg_exec_query_dest ()\n#17 0x80e23cc in pg_exec_query ()\n#18 0x80e3518 in PostgresMain ()\n#19 0x80cc72c in DoBackend ()\n#20 0x80cc26b in BackendStartup ()\n#21 0x80cb9e7 in ServerLoop ()\n#22 0x80cb573 in PostmasterMain ()\n---Type <return> to continue, or q <return> to quit---\n#23 0x80a2999 in main ()\n#24 0x806131e in _start ()\n(gdb) \n\n\n\tRegards,\n\n\t\tOleg\n\nOn Tue, 25 May 1999, Oleg Bartunov wrote:\n\n> Date: Tue, 25 May 1999 13:53:19 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Subject: [HACKERS] 6.5 cvs: can't drop table\n> \n> Hi,\n> \n> today I did some tests with current 6.5 from cvs and multiple joins.\n> I did unpredictable server crashes, i.e. sometimes query works\n> sometimes crashes. After about a hour of my experiments I can't drop table in\n> my test database:\n> \n> 13:55[mira]:~/app/sql>mkjoindata.pl --joins 10 --rows 20 | psql test\n> \n> mkjoindata.pl - is my test script specially rewritten to get parameters\n> from command line. 
It generates test data, sql commands and automatize\n> process of postgres crashing :-) I attach this new version to my post.\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> PS.\n> Tom (Lane), sometimes I got an old behaivour of postgres on big joins - \n> all memory (ram+swap) exhausted. I remember a week ago that was fixed\n> and I certainly did the same tests without any problem.\n> \n> drop table t0;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible. Terminating.\n> \n> Backtrace:\n> mira:/usr/local/pgsql/data/base/test# gdb /usr/local/pgsql/bin/postmaster core\n> GDB is free software and you are welcome to distribute copies of it\n> under certain conditions; type \"show copying\" to see the conditions.\n> There is absolutely no warranty for GDB; type \"show warranty\" for details.\n> GDB 4.16 (i486-slackware-linux), \n> Copyright 1996 Free Software Foundation, Inc...\n> Core was generated by /usr/local/pgsql/bin/postgres localhost megera test DROP \n> '.\n> Program terminated with signal 11, Segmentation fault.\n> Reading symbols from /lib/libdl.so.1...done.\n> Reading symbols from /lib/libm.so.5...done.\n> Reading symbols from /usr/lib/libreadline.so...done.\n> Reading symbols from /usr/lib/libhistory.so...done.\n> Reading symbols from /lib/libtermcap.so.2...done.\n> Reading symbols from /lib/libncurses.so.3.0...done.\n> Reading symbols from /usr/lib/libc.so.5...done.\n> Reading symbols from /lib/ld-linux.so.1...done.\n> #0 0x806aa2b in heapgettup ()\n> (gdb) bt\n> #0 0x806aa2b in heapgettup ()\n> #1 0x806b7d1 in heap_getnext ()\n> #2 0x807ca56 in DeleteTypeTuple ()\n> #3 0x807cbb5 in heap_destroy_with_catalog ()\n> #4 0x8083128 in RemoveRelation ()\n> #5 0x80e41ef in ProcessUtility ()\n> #6 0x80e2486 in pg_exec_query_dest ()\n> #7 0x80e23cc in pg_exec_query ()\n> #8 0x80e3518 in PostgresMain ()\n> #9 0x80cc72c in DoBackend ()\n> #10 0x80cc26b in BackendStartup ()\n> #11 0x80cb9e7 in ServerLoop ()\n> #12 0x80cb573 in PostmasterMain ()\n> #13 0x80a2999 in main ()\n> #14 0x806131e in _start ()\n> (gdb) \n> \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 25 May 1999 14:21:22 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> today I did some tests with current 6.5 from cvs and multiple joins.\n> I did unpredictable server crashes, i.e. sometimes query works\n> sometimes crashes.\n\nI have a theory about why the results are random: the GEQO planner\ndeliberately uses random numbers to generate plans, so you don't\nalways get the same plan out of it. Whatever bug you are seeing\noccurs only for a particular plan path. (I haven't had any luck\nrepeating your crash here, so the bug may be platform-specific.)\n\nIt bothers me that the GEQO results are not reliably reproducible\nacross platforms; that complicates debugging. I have been thinking\nabout suggesting that we ought to change GEQO to use a fixed random\nseed value by default, with the variable random seed being available\nonly as a *non default* option. Comments anyone?\n\nIn the meantime, you could try setting up a pgsql/data/pg_geqo file\nwith a specific Random_Seed NNN line, and try different NNN values\nuntil you find one that will reliably trigger the failure. That\nwould help in reproducing the problem elsewhere.\n\n> After about a hour of my experiments I can't drop table in\n> my test database:\n\nIf you crash the backend enough times, you shouldn't be too surprised\nthat your database gets corrupted ... I think this is just collateral\ndamage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 10:15:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table "
},
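[Editor's note] A hedged sketch of the seeding policy Tom proposes in the message above -- a fixed default seed so GEQO plans are reproducible, with a truly random seed only as an explicit opt-in. All names here are hypothetical; this is not the actual pg_geqo configuration code.

/* Editor's sketch of the proposed policy: deterministic GEQO by
 * default, randomness only on request.  Hypothetical names throughout. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define GEQO_DEFAULT_SEED 1L   /* any fixed constant gives repeatable plans */

static void
geqo_seed_rng(long configured_seed, int want_random)
{
    long seed = want_random ? (long) time(NULL) : configured_seed;

    srandom((unsigned int) seed);
    /* Logging the seed lets a crash like Oleg's be replayed exactly. */
    fprintf(stderr, "GEQO: random seed = %ld\n", seed);
}

int
main(void)
{
    geqo_seed_rng(GEQO_DEFAULT_SEED, 0);   /* reproducible by default */
    printf("first random() = %ld\n", random());
    return 0;
}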
{
"msg_contents": "On Tue, 25 May 1999, Tom Lane wrote:\n\n> Date: Tue, 25 May 1999 10:15:43 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] 6.5 cvs: can't drop table \n> \n> Oleg Bartunov <[email protected]> writes:\n> > today I did some tests with current 6.5 from cvs and multiple joins.\n> > I did unpredictable server crashes, i.e. sometimes query works\n> > sometimes crashes.\n> \n> I have a theory about why the results are random: the GEQO planner\n> deliberately uses random numbers to generate plans, so you don't\n> always get the same plan out of it. Whatever bug you are seeing\n> occurs only for a particular plan path. (I haven't had any luck\n> repeating your crash here, so the bug may be platform-specific.)\n> \n> It bothers me that the GEQO results are not reliably reproducible\n> across platforms; that complicates debugging. I have been thinking\n> about suggesting that we ought to change GEQO to use a fixed random\n> seed value by default, with the variable random seed being available\n> only as a *non default* option. Comments anyone?\n> \n> In the meantime, you could try setting up a pgsql/data/pg_geqo file\n> with a specific Random_Seed NNN line, and try different NNN values\n> until you find one that will reliably trigger the failure. That\n> would help in reproducing the problem elsewhere.\n\nI have rather stable crash under 2.0.37, see below\n\n> \n> > After about a hour of my experiments I can't drop table in\n> > my test database:\n> \n> If you crash the backend enough times, you shouldn't be too surprised\n> that your database gets corrupted ... I think this is just collateral\n> damage.\n\nGot cvs update, reinstall pgsql, run my test and after several\nsuccess got the same crash :-) You probably right - this could be\nconnected with OS - Linux 2.0.37, I installed new kernel (old one was 2.0.36)\nseveral days ago. I'll move back to 2.0.36 and will see what happens.\nInteresting that I never get a crash on the same test (even 20 tables)\non my home machine which is running 2.2.9 ! I also run test under\nFreeBSD 3.1 release (elf) and also no problems.\n\nAs usual, here is a backtrace :-)\n\n\tRegards,\n\t\tOleg\n\nPS. btw, it seems Jan fixed the bug with pg_dump and view !\n\n where t1.id = t0.id and t2.id=t0.id and t3.id=t0.id and t4.id=t0.id and t5.id=t0.id and t6.id=t0.id and t7.id=t0.id and t8.id=t0.id and t9.id=t0.id and t10.id=t0.id and t11.id=t0.id and t12.id=t0.id and t13.id=t0.id and t14.id=t0.id and t15.id=t0.id and \nt16.id=t0.id and t17.id=t0.id and t18.id=t0.id and t19.id=t0.id and t20.id=t0.id ;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. 
Terminating.\nmira:/usr/local/pgsql/data/base/test$ l core\n-rw------- 1 postgres users 11784192 May 25 19:07 core\nmira:/usr/local/pgsql/data/base/test$ gdb /usr/local/pgsql/bin/postmaster core\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux), \nCopyright 1996 Free Software Foundation, Inc...\nCore was generated by /usr/local/pgsql/bin/postgres localhost megera test idle '.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libdl.so.1...done.\nReading symbols from /lib/libm.so.5...done.\nReading symbols from /usr/lib/libreadline.so...done.\nReading symbols from /usr/lib/libhistory.so...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /lib/libncurses.so.3.0...done.\nReading symbols from /usr/lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\nReading symbols from /usr/lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\n#0 0x80c76af in pathorder_match ()\n(gdb) bt\n#0 0x80c76af in pathorder_match ()\n#1 0x80c7900 in better_path ()\n#2 0x80c7863 in add_pathlist ()\n#3 0x80bf515 in update_rels_pathlist_for_joins ()\n#4 0x80c8b8b in gimme_tree ()\n#5 0x80c8ae7 in geqo_eval ()\n#6 0x80c8d12 in geqo ()\n#7 0x80bd6e6 in make_one_rel_by_joins ()\n#8 0x80bd5ee in make_one_rel ()\n#9 0x80c1e81 in subplanner ()\n#10 0x80c1dff in query_planner ()\n#11 0x80c2173 in union_planner ()\n#12 0x80c1f55 in planner ()\n#13 0x80e2497 in pg_parse_and_plan ()\n#14 0x80e25bb in pg_exec_query_dest ()\n#15 0x80e257c in pg_exec_query ()\n#16 0x80e36c8 in PostgresMain ()\n#17 0x80cc72c in DoBackend ()\n#18 0x80cc26b in BackendStartup ()\n#19 0x80cb9e7 in ServerLoop ()\n#20 0x80cb573 in PostmasterMain ()\n#21 0x80a2999 in main ()\n#22 0x806131e in _start ()\n(gdb) \n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 25 May 1999 19:05:45 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table "
},
{
"msg_contents": "I hate to say this, but I think we need postgres compiled with debugging\nsymbols, -g, so we can see the parameters being passed to the functions.\n\n\n> #0 0x80b6065 in equal ()\n> #1 0x80c7652 in pathorder_match ()\n> #2 0x80c7900 in better_path ()\n> #3 0x80c7863 in add_pathlist ()\n> #4 0x80bf551 in update_rels_pathlist_for_joins ()\n> #5 0x80c8b8b in gimme_tree ()\n> #6 0x80c8ae7 in geqo_eval ()\n> #7 0x80c99f2 in random_init_pool ()\n> #8 0x80c8c7b in geqo ()\n> #9 0x80bd6e6 in make_one_rel_by_joins ()\n> #10 0x80bd5ee in make_one_rel ()\n> #11 0x80c1e81 in subplanner ()\n> #12 0x80c1dff in query_planner ()\n> #13 0x80c2173 in union_planner ()\n> #14 0x80c1f55 in planner ()\n> #15 0x80e22e7 in pg_parse_and_plan ()\n> #16 0x80e240b in pg_exec_query_dest ()\n> #17 0x80e23cc in pg_exec_query ()\n> #18 0x80e3518 in PostgresMain ()\n> #19 0x80cc72c in DoBackend ()\n> #20 0x80cc26b in BackendStartup ()\n> #21 0x80cb9e7 in ServerLoop ()\n> #22 0x80cb573 in PostmasterMain ()\n> ---Type <return> to continue, or q <return> to quit---\n> #23 0x80a2999 in main ()\n> #24 0x806131e in _start ()\n> (gdb) \n> \n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 11:06:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table"
},
{
"msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > today I did some tests with current 6.5 from cvs and multiple joins.\n> > I did unpredictable server crashes, i.e. sometimes query works\n> > sometimes crashes.\n> \n> I have a theory about why the results are random: the GEQO planner\n> deliberately uses random numbers to generate plans, so you don't\n> always get the same plan out of it. Whatever bug you are seeing\n> occurs only for a particular plan path. (I haven't had any luck\n> repeating your crash here, so the bug may be platform-specific.)\n> \n> It bothers me that the GEQO results are not reliably reproducible\n> across platforms; that complicates debugging. I have been thinking\n> about suggesting that we ought to change GEQO to use a fixed random\n> seed value by default, with the variable random seed being available\n> only as a *non default* option. Comments anyone?\n\nI would leave the random alone. There may be some advantages to having\nit random.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 11:10:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table"
},
{
"msg_contents": "On Tue, 25 May 1999, Bruce Momjian wrote:\n\n> Date: Tue, 25 May 1999 11:06:32 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [HACKERS] 6.5 cvs: can't drop table\n> \n> I hate to say this, but I think we need postgres compiled with debugging\n> symbols, -g, so we can see the parameters being passed to the functions.\n> \n\nNo crashes with -O0 -g :-) I have egcs 1.12 release installed at this\nmachine as well as at home. I used the same optimization -O2 -mpentium\nbut at work postgres crashes while at home works like a charm.\n\nbtw,\n\njust got new updates from cvs - a bunch of files !\n\nTwo problems:\n1. at work:\n\nmake[3]: Entering directory /home/postgres/cvs/pgsql/src/backend/utils/mb'\ngcc -I../../../include -I../../../backend -O2 -mpentium -Wall -Wmissing-prototypes -I../.. -DMULTIBYTE=KOI8 -c conv.c -o conv.o\nconv.c: In function \u0002ig52mic':\nconv.c:387: parse error before \u0015'\nconv.c:392: parse error before \u0005lse'\nconv.c:378: warning: \u00031' might be used uninitialized in this function\nconv.c: At top level:\nconv.c:422: parse error before }'\nconv.c:423: warning: type defaults to \tnt' in declaration of \u0010'\nconv.c:423: warning: data definition has no type or storage class\nconv.c:424: parse error before }'\nmake[3]: *** [conv.o] Error 1\nmake[3]: Leaving directory /home/postgres/cvs/pgsql/src/backend/utils/mb'\nmake[2]: *** [submake] Error 2\n\n2. at home:\n\npostmaster.c: In function \u0013erverLoop':\npostmaster.c:668: too few arguments to function \u0007ettimeofday'\npostmaster.c:707: too few arguments to function \u0007ettimeofday'\npostmaster.c:666: warning: unused variable \u0014z'\npostmaster.c: In function \u0004oBackend':\npostmaster.c:1512: too few arguments to function \u0007ettimeofday'\npostmaster.c:1463: warning: unused variable \u0014z'\nmake[2]: *** [postmaster.o] Error 1\nmake[2]: Leaving directory /u/postgres/cvs/pgsql/src/backend/postmaster'\n\nThis problem I've seen already !\n\n> \n> > #0 0x80b6065 in equal ()\n> > #1 0x80c7652 in pathorder_match ()\n> > #2 0x80c7900 in better_path ()\n> > #3 0x80c7863 in add_pathlist ()\n> > #4 0x80bf551 in update_rels_pathlist_for_joins ()\n> > #5 0x80c8b8b in gimme_tree ()\n> > #6 0x80c8ae7 in geqo_eval ()\n> > #7 0x80c99f2 in random_init_pool ()\n> > #8 0x80c8c7b in geqo ()\n> > #9 0x80bd6e6 in make_one_rel_by_joins ()\n> > #10 0x80bd5ee in make_one_rel ()\n> > #11 0x80c1e81 in subplanner ()\n> > #12 0x80c1dff in query_planner ()\n> > #13 0x80c2173 in union_planner ()\n> > #14 0x80c1f55 in planner ()\n> > #15 0x80e22e7 in pg_parse_and_plan ()\n> > #16 0x80e240b in pg_exec_query_dest ()\n> > #17 0x80e23cc in pg_exec_query ()\n> > #18 0x80e3518 in PostgresMain ()\n> > #19 0x80cc72c in DoBackend ()\n> > #20 0x80cc26b in BackendStartup ()\n> > #21 0x80cb9e7 in ServerLoop ()\n> > #22 0x80cb573 in PostmasterMain ()\n> > ---Type <return> to continue, or q <return> to quit---\n> > #23 0x80a2999 in main ()\n> > #24 0x806131e in _start ()\n> > (gdb) \n> > \n> > \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Tue, 25 May 1999 20:51:20 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 cvs: can't drop table"
}
] |
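[Editor's note] The "too few arguments to function `gettimeofday'" errors quoted in the thread above are a classic portability trap: on Linux (and most other platforms) the function is prototyped with two arguments, and passing NULL for the obsolete struct timezone pointer is the portable form of the call. A minimal sketch follows; it is not the actual postmaster.c fix.

/* Editor's sketch of the portable two-argument gettimeofday() call.
 * The second argument (struct timezone *) is obsolete; NULL is the
 * safe, portable choice. */
#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
    struct timeval tv;

    if (gettimeofday(&tv, NULL) == 0)   /* NULL: we don't want the tz */
        printf("%ld.%06ld\n", (long) tv.tv_sec, (long) tv.tv_usec);
    return 0;
}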
[
{
"msg_contents": "If anyone has gotten a clean compile on AIX 432, send me what you did.\n\nThanks.\n",
"msg_date": "Tue, 25 May 1999 05:59:24 -0500",
"msg_from": "\"David R. Favor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compile woes on AIX 432 for v6.5"
}
] |
[
{
"msg_contents": "\nI'm being asked the same question every so often, about the following\nline:\n\n\tClass.forName(postgresql.Driver);\n\nPeople are asking why this isn't working. Obviously there should be quotes\nin there, and I'm thinking that they are missing from the docs.\n\n\tClass.forName(\"postgresql.Driver\");\n\nI haven't the time to check at the moment, but can someone (Tom?) check.\nThey may have vanished when we converted them to sgml\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 25 May 1999 13:07:35 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "JDBC problem in the docs"
},
{
"msg_contents": "> I'm being asked the same question every so often, about the following\n> line:\n> Class.forName(postgresql.Driver);\n> People are asking why this isn't working. Obviously there should be quotes\n> in there, and I'm thinking that they are missing from the docs.\n> Class.forName(\"postgresql.Driver\");\n\ngolem> grep -i Class.forName *.sgml\njdbc.sgml:Class.forName() method. For\n<application>Postgres</application>, you would use:\njdbc.sgml:Class.forName(\"postgresql.Driver\");\n\nAnd from the generated html:\n\n<snip>\nIn the first method, your code implicitly loads the driver using the\nClass.forName() method. For Postgres, you would use: \n\nClass.forName(\"postgresql.Driver\");\n</snip>\n\nNot sure if it could be mentioned somewhere else? Or maybe we can make\na general statement about how well people using Java follow directions\n:)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 25 May 1999 13:27:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC problem in the docs"
}
] |
[
{
"msg_contents": "Hi. I'd like to update the ports list in the docs to include\nreferences to v6.5 for the various platforms for which PostgreSQL-6.5b\nhas been tested.\n\nThe list is at:\n\n http://www.postgresql.org/docs/admin/ports.htm\n\nLet me know what you are running or if you are running on a platform\nnot mentioned in the list. I've already gotten a report for\nNetBSD/arm32 and have patches to move it into the \"supported list\".\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 25 May 1999 13:18:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Call for updates!"
},
{
"msg_contents": "BSDI 4.01.\n\n\n> Hi. I'd like to update the ports list in the docs to include\n> references to v6.5 for the various platforms for which PostgreSQL-6.5b\n> has been tested.\n> \n> The list is at:\n> \n> http://www.postgresql.org/docs/admin/ports.htm\n> \n> Let me know what you are running or if you are running on a platform\n> not mentioned in the list. I've already gotten a report for\n> NetBSD/arm32 and have patches to move it into the \"supported list\".\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 11:07:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Call for updates!"
},
{
"msg_contents": "\nWill re-test tonight, but after the last round of patches for FreeBSD I\napplied, all appeared well...\n\n\nOn Tue, 25 May 1999, Bruce Momjian wrote:\n\n> BSDI 4.01.\n> \n> \n> > Hi. I'd like to update the ports list in the docs to include\n> > references to v6.5 for the various platforms for which PostgreSQL-6.5b\n> > has been tested.\n> > \n> > The list is at:\n> > \n> > http://www.postgresql.org/docs/admin/ports.htm\n> > \n> > Let me know what you are running or if you are running on a platform\n> > not mentioned in the list. I've already gotten a report for\n> > NetBSD/arm32 and have patches to move it into the \"supported list\".\n> > \n> > - Thomas\n> > \n> > -- \n> > Thomas Lockhart\t\t\t\[email protected]\n> > South Pasadena, California\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 25 May 1999 14:54:35 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Call for updates!"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> Hi. I'd like to update the ports list in the docs to include\n> references to v6.5 for the various platforms for which PostgreSQL-6.5b\n> has been tested.\n>\n> The list is at:\n>\n> http://www.postgresql.org/docs/admin/ports.htm\n\nHas PostgreSQL ever worked on Linux/ALPHA ?\n\nI'm very likely to get one soon, and knowing that it works on Digital\nUnix \nand also on several Linuxes, it should not be too hard to make it work\non \nALPHA, but knowing that it already did would be even better.\n\nI have also heard that Linux ALPHA can run Digital Unix binaries, so i\nmight \njust ask for some friendly soul for precompiled binaries\n\nAlso, would the table splitting at at least 2GB be needed on 64bit \narchitectures ? If I'm not mistaken, then Linux 2.2 has a small patch\nthat \ncan make all off_t-s to long longs.\n\n---------------\n Hannu Krosing\n",
"msg_date": "Tue, 25 May 1999 23:20:54 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Call for updates!"
},
{
"msg_contents": "Hannu Krosing wrote:\n >The Hermit Hacker wrote:\n >> Hi. I'd like to update the ports list in the docs to include\n >> references to v6.5 for the various platforms for which PostgreSQL-6.5b\n >> has been tested.\n >>\n >> The list is at:\n >>\n >> http://www.postgresql.org/docs/admin/ports.htm\n >\n >Has PostgreSQL ever worked on Linux/ALPHA ?\n \nAs far as I am aware, the Debian packages of postgresql (which I release\nfor Intel architecture) have been built for Alpha by Debian's Alpha\ndevelopment team. They should be available from the Debian ftp site and \nmirrors. (See www.debian.org)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And Jesus answering said unto them, They that are\n whole need not a physician; but they that are sick. I\n come not to call the righteous, but sinners to\n repentance.\" Luke 5:31,32\n\n\n",
"msg_date": "Tue, 25 May 1999 23:50:56 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Call for updates! "
}
] |
[
{
"msg_contents": "unsubscribe pgsql-hackers\n",
"msg_date": "Tue, 25 May 1999 10:16:24 -0400",
"msg_from": "\"Wheeler, Alfred\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "unsubscribe pgsql-hackers"
}
] |