[ { "msg_contents": "\nin src/interfaces/libpq/libpq.rc there is an obvious typo in \nfirst line:\n\n v#include <winver.h>\n\n\nhence linking with Visual Studio fails with the error-message:\n\n file not found: VS_VERSION_INFO\n\n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n", "msg_date": "Sun, 26 Sep 1999 17:56:55 +0200", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "libpq.rc for Win32" }, { "msg_contents": "Fixed. Thanks.\n\n[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> \n> in src/interfaces/libpq/libpq.rc there is an obvious typo in \n> first line:\n> \n> v#include <winver.h>\n> \n> \n> hence linking with Visual Studio fails with the error-message:\n> \n> file not found: VS_VERSION_INFO\n> \n> \n> Edmund\n> \n> -- \n> Edmund Mergl\n> mailto:[email protected]\n> http://www.bawue.de/~mergl\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Sep 1999 20:31:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq.rc for Win32" } ]
[ { "msg_contents": "I am interested in people's opinions on this patch. Not sure what it is\nsupposed to do.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n�b Sun, 25 Jul 1999 15:51:18 -0400 (EDT) ���ѮɡABruce Momjian <[email protected]> �g�D:\n> First, the attached patch was zero length. Second, I am not sure what\n> this patch was supposed to do. I am not sure we could distribute a\n> patch for GNU C library as part of PostgreSQL.\n\noh~~ This is a mistake for me, say sorry to everyone~~ :b\n\nAnd the attachment was be sent via this email again. \t:>\n\n--\n.....=======............................. Cd Chen, (���X��)\n..// �s �s |............................ ===========================\n..|| �� �� <............................ What's Cynix? Cyber Linux.\n..|< > |............................ mailto:[email protected]\n..| | \\___/ |............................ http://www.cynix.com.tw/~cdchen\n.. |\\______/............................. ICQ UIN:3345768", "msg_date": "Sun, 26 Sep 1999 23:26:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] A multi-lang patch for psql 6.5.1 (fwd)" } ]
[ { "msg_contents": "hi there,\n\ni'm wondering if postgres has, or is currently developing, the ability to\ncreate in memory tables similar to hash tables but sql'ised. the reason i ask\nis i am looking to write a library to link my c/c++ against that allows me to\nuse sql statements to manage internal hash tables by requesting the statement\nto be parsed by a local function as opposed to a sql server.\n\nim not on the pgsql-hackers list, so please reply cc'd to me directly.\n\nRM\n\n", "msg_date": "Mon, 27 Sep 1999 15:03:19 +1000", "msg_from": "Ryan Mills <[email protected]>", "msg_from_op": true, "msg_subject": "question." } ]
[ { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I only see one entry in TODO for this:\n> \t* -Fix memory leak for aggregates?\n\n>> ----------------------------- Log Message -----------------------------\n>> Modify nodeAgg.c so that no rows are returned for a GROUP BY\n>> with no input rows, per pghackers discussions around 7/22/99. Clean up\n>> a bunch of ugly coding while at it; remove redundant re-lookup of\n>> aggregate info at start of each new GROUP. Arrange to pfree intermediate\n>> values when they are pass-by-ref types, so that aggregates on pass-by-ref\n>> types no longer eat memory. This takes care of a couple of TODO items...\n\nHmm, you are right --- I thought that discussion about changing the\nsemantics of aggregates with GROUP BY had made it to the TODO list,\nbut apparently it never did. It should have, however:\n * When using aggregates + GROUP BY, no rows in should yield no rows out\n\n\nThis motivated me to grovel through the TODO list, which I hadn't done\nfor a while, and I have some updates/comments.\n\n\nPARSER\n------\n\n* Select a[1] FROM test fails, it needs test.a[1]\n\nFixed for 6.6 --- actually same thing as next item,\n* -Array index references without table name cause problems [array]\n\n* Update table SET table.value = 3 fails\n\nAFAICS, the SQL92 syntax allows only a bare <column name> as the\ntarget of a SET clause. Not sure it's worth expending any effort\non this one...\n\nENHANCEMENTS\n------------\n\nCOMMANDS\n\n* Generate error on CREATE OPERATOR of ~~, ~ and and ~*\n\n\"Error\" seems a little strong, maybe a \"NOTICE\" along the lines of\n\"We trust you know that ~~ defines the behavior of the LIKE keyword\".\n\nI believe the original motivation for this entry was that the parser\nwould do the wrong thing for arbitrary operators named ~~ etc, because\nit would try to apply optimizations that were only suitable for the\nstandard ops of those names (textlike etc). That's no longer a problem,\nbecause those optimizations are now triggered off matching of the\noperator OID; they will not cause a problem if Joe User invents an\noperator named ~~ for his spiffy new datatype. But perhaps Joe should\nbe reminded that he just made LIKE applicable to his datatype. Or maybe\nthat's not worth worrying about...\n\n* Move LIKE index optimization handling to the optimizer\n\nThis is basically done, although I have a couple of cleanup issues\nto take care of.\n\nCLIENTS\n\n* PQrequestCancel() be able to terminate backend waiting for lock\n\nThere is an equivalent item under MISC, and it doesn't seem like it\nbelongs under CLIENTS --- the necessary code change is in the backend.\n\nMISC\n\n* Do autocommit so always in a transaction block(?)\n\nHuh? 
What is this supposed to mean?\n\nPERFORMANCE\n-----------\n\nINDEXES\n\n* Convert function(constant) into a constant for index use\n\nDone as of now; see Frankpitt's constant-expression simplifier.\nWe might have some lingering bugs with simplifying things that\nought not be simplified, however...\n\n* Allow SELECT * FROM tab WHERE int2col = 4 use int2col index, int8 too\n\t[optimizer]\n\nI believe float4 columns have the same sort of problem, since a numeric\nconstant will be taken as float8 not float4 if not explicitly casted.\nFor that matter, numeric/decimal columns do too, or would if we had\nindexing support for them...\n\n* Allow optimizer to prefer plans that match ORDER BY\n\nThis is done, although we now have the opposite problem: the darn thing\nis too eager to pick an indexscan plan :-(. Need to make the cost\nestimates for indexscan vs explicit sort more accurate.\n\nMISC\n\n* Update pg_statistic table to remove operator column\n\nI do not believe we should do this. It's true that right now we have\nno use for the operator column, because only the default '<' ordering\nwill ever be used by VACUUM, but we should keep the column in the name\nof datatype extensibility. Someday VACUUM might compute stats with\nrespect to more than one ordering, for datatypes that have more than one.\n\n* -Fix memory exhaustion when using many OR's [cnfify]\n\ncnfify is still pretty slow with many subclauses --- the behavior\nis now O(N^2) rather than O(2^N), but that just means it's \nslow rather than intolerable. I'm not sure what to do about it.\nWe probably need to be using heuristics instead of an unconditional\nconvert-to-normal-form-or-bust algorithm, but what should the\nheuristic conditions be? Am thinking about it, could use suggestions.\n\n* Process const = const parts of OR clause in separate pass\n\nDone --- Frankpitt's const simplifier handles this.\n\n* change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n\nDidn't we decide this probably wasn't worth doing?\n\nSOURCE CODE\n-----------\n\n* Remove SET KSQO option if OR processing is improved\n\nYou can put my name on this one --- I'm not quite ready to pull KSQO\nbut I think we are close.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 01:31:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "TODO items (was Re: [COMMITTERS] pgsql/src/include/nodes\n\t(execnodes.h))" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, I only see one entry in TODO for this:\n> > \t* -Fix memory leak for aggregates?\n> \n> >> ----------------------------- Log Message -----------------------------\n> >> Modify nodeAgg.c so that no rows are returned for a GROUP BY\n> >> with no input rows, per pghackers discussions around 7/22/99. Clean up\n> >> a bunch of ugly coding while at it; remove redundant re-lookup of\n> >> aggregate info at start of each new GROUP. Arrange to pfree intermediate\n> >> values when they are pass-by-ref types, so that aggregates on pass-by-ref\n> >> types no longer eat memory. This takes care of a couple of TODO items...\n> \n> Hmm, you are right --- I thought that discussion about changing the\n> semantics of aggregates with GROUP BY had made it to the TODO list,\n> but apparently it never did. 
It should have, however:\n> * When using aggregates + GROUP BY, no rows in should yield no rows out\n\nAdded to TODO, with a completion mark.\n\n> This motivated me to grovel through the TODO list, which I hadn't done\n> for a while, and I have some updates/comments.\n\nGood. It is a long list.\n\n> PARSER\n> ------\n> \n> * Select a[1] FROM test fails, it needs test.a[1]\n> \n> Fixed for 6.6 --- actually same thing as next item,\n> * -Array index references without table name cause problems [array]\n\nDone.\n\n> \n> * Update table SET table.value = 3 fails\n> \n> AFAICS, the SQL92 syntax allows only a bare <column name> as the\n> target of a SET clause. Not sure it's worth expending any effort\n> on this one...\n\nMarked now as:\n\n\t* Update table SET table.value = 3 fails(SQL standard says this is OK)\n\n> \n> ENHANCEMENTS\n> ------------\n> \n> COMMANDS\n> \n> * Generate error on CREATE OPERATOR of ~~, ~ and and ~*\n> \n> \"Error\" seems a little strong, maybe a \"NOTICE\" along the lines of\n> \"We trust you know that ~~ defines the behavior of the LIKE keyword\".\n> \n> I believe the original motivation for this entry was that the parser\n> would do the wrong thing for arbitrary operators named ~~ etc, because\n> it would try to apply optimizations that were only suitable for the\n> standard ops of those names (textlike etc). That's no longer a problem,\n> because those optimizations are now triggered off matching of the\n> operator OID; they will not cause a problem if Joe User invents an\n> operator named ~~ for his spiffy new datatype. But perhaps Joe should\n> be reminded that he just made LIKE applicable to his datatype. Or maybe\n> that's not worth worrying about...\n\nRemoved. You are correct that the message describes the old LIKE\noptimization of user ~~ functions. This item is removed.\n\n> \n> * Move LIKE index optimization handling to the optimizer\n> \n> This is basically done, although I have a couple of cleanup issues\n> to take care of.\n\nMarked as done.\n\n> \n> CLIENTS\n> \n> * PQrequestCancel() be able to terminate backend waiting for lock\n> \n> There is an equivalent item under MISC, and it doesn't seem like it\n> belongs under CLIENTS --- the necessary code change is in the backend.\n\nRemoved. Already present, as you mentioned.\n\n> \n> MISC\n> \n> * Do autocommit so always in a transaction block(?)\n> \n> Huh? What is this supposed to mean?\n\nSome people want the SQL session to start inside a transaction, and you\nhave to explicity use COMMIT, at which point you are in a new\ntransaction that lasts until the next commit. Ingres SQL does this, and\nit is a pain, I think.\n\n> \n> PERFORMANCE\n> -----------\n> \n> INDEXES\n> \n> * Convert function(constant) into a constant for index use\n> \n> Done as of now; see Frankpitt's constant-expression simplifier.\n> We might have some lingering bugs with simplifying things that\n> ought not be simplified, however...\n\nMarked as done.\n\n> \n> * Allow SELECT * FROM tab WHERE int2col = 4 use int2col index, int8 too\n> \t[optimizer]\n> \n> I believe float4 columns have the same sort of problem, since a numeric\n> constant will be taken as float8 not float4 if not explicitly casted.\n> For that matter, numeric/decimal columns do too, or would if we had\n> indexing support for them...\n\nAdded new types to list.\n\n> \n> * Allow optimizer to prefer plans that match ORDER BY\n> \n> This is done, although we now have the opposite problem: the darn thing\n> is too eager to pick an indexscan plan :-(. 
Need to make the cost\n> estimates for indexscan vs explicit sort more accurate.\n\nThat is amusing. Marked as done.\n\n> \n> MISC\n> \n> * Update pg_statistic table to remove operator column\n> \n> I do not believe we should do this. It's true that right now we have\n> no use for the operator column, because only the default '<' ordering\n> will ever be used by VACUUM, but we should keep the column in the name\n> of datatype extensibility. Someday VACUUM might compute stats with\n> respect to more than one ordering, for datatypes that have more than one.\n\nRemoved from the list.\n\n> \n> * -Fix memory exhaustion when using many OR's [cnfify]\n> \n> cnfify is still pretty slow with many subclauses --- the behavior\n> is now O(N^2) rather than O(2^N), but that just means it's \n> slow rather than intolerable. I'm not sure what to do about it.\n> We probably need to be using heuristics instead of an unconditional\n> convert-to-normal-form-or-bust algorithm, but what should the\n> heuristic conditions be? Am thinking about it, could use suggestions.\n\nMarked as done. Let's see if people complain.\n\n\n> \n> * Process const = const parts of OR clause in separate pass\n> \n> Done --- Frankpitt's const simplifier handles this.\n\nMarked as done.\n\n> \n> * change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n> \n> Didn't we decide this probably wasn't worth doing?\n> \n\nYes. Removed.\n\n\n> SOURCE CODE\n> -----------\n> \n> * Remove SET KSQO option if OR processing is improved\n> \n> You can put my name on this one --- I'm not quite ready to pull KSQO\n> but I think we are close.\n\nMarked for you. New TODO copy installed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 11:19:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TODO items (was Re: [COMMITTERS] pgsql/src/include/nodes\n\t(execnodes.h))" }, { "msg_contents": "Just my 0.02 kronor . . .\n\nOn Sep 27, Bruce Momjian noted:\n\n> > * Update table SET table.value = 3 fails\n> > \n> > AFAICS, the SQL92 syntax allows only a bare <column name> as the\n> > target of a SET clause. Not sure it's worth expending any effort\n> > on this one...\n> \n> Marked now as:\n> \n> \t* Update table SET table.value = 3 fails(SQL standard says this is OK)\n\nIn my opinion this should definitely _not_ be allowed. Let's be glad the\nUPDATE command is so conceptually simple (cf. SELECT). The next thing they\nwant is ALTER TABLE foo RENAME foo.colum [ TO bar.something ??? -- moving\ncolumns between tables, why not :) ] and then CREATE TABLE foo (foo.a int,\n...); and it won't stop :)\n\n\n> > MISC\n> > \n> > * Do autocommit so always in a transaction block(?)\n> > \n> > Huh? What is this supposed to mean?\n> \n> Some people want the SQL session to start inside a transaction, and you\n> have to explicity use COMMIT, at which point you are in a new\n> transaction that lasts until the next commit. Ingres SQL does this, and\n> it is a pain, I think.\n\nI have been wondering about this, too. Oracle does this as well. This is\nalso how they taught me SQL in university, so it is probably not out of\nthe blue. What do the standards say?\n\nThen again, while I think that client programmers won't die if they type\nan extra BEGIN here or there, this might be useful as a psql feature. 
Too\nmany times I've seen people type DELETE FROM <table>; by accident.\n\nWhat do y'all think? (Besides the fact that this might be a pain to\nimplement.)\n\n\nPeter\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Wed, 29 Sep 1999 18:28:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TODO items" }, { "msg_contents": "> Just my 0.02 kronor . . .\n> \n> On Sep 27, Bruce Momjian noted:\n> \n> > > * Update table SET table.value = 3 fails\n> > > \n> > > AFAICS, the SQL92 syntax allows only a bare <column name> as the\n> > > target of a SET clause. Not sure it's worth expending any effort\n> > > on this one...\n> > \n> > Marked now as:\n> > \n> > \t* Update table SET table.value = 3 fails(SQL standard says this is OK)\n> \n> In my opinion this should definitely _not_ be allowed. Let's be glad the\n> UPDATE command is so conceptually simple (cf. SELECT). The next thing they\n> want is ALTER TABLE foo RENAME foo.colum [ TO bar.something ??? -- moving\n> columns between tables, why not :) ] and then CREATE TABLE foo (foo.a int,\n> ...); and it won't stop :)\n\nOK, let's leave it in so people know it is not implemented.\n\n> > Some people want the SQL session to start inside a transaction, and you\n> > have to explicity use COMMIT, at which point you are in a new\n> > transaction that lasts until the next commit. Ingres SQL does this, and\n> > it is a pain, I think.\n> \n> I have been wondering about this, too. Oracle does this as well. This is\n> also how they taught me SQL in university, so it is probably not out of\n> the blue. What do the standards say?\n> \n> Then again, while I think that client programmers won't die if they type\n> an extra BEGIN here or there, this might be useful as a psql feature. Too\n> many times I've seen people type DELETE FROM <table>; by accident.\n\nNo one has really been passionate about it either way.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 29 Sep 1999 13:21:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TODO items" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: 25 September 1999 23:58\n> To: Peter Eisentraut\n> Cc: [email protected]\n> Subject: Re: [HACKERS] psql code to be obducted by alien (me) \n\n> Another part of psql that should be made as independent as possible\n> is the support for \\copy. I recall a number of people asking in the\n> past how they can read and write tables to files in their own apps.\n> There's not that much code involved, but psql is such a mess that it's\n> hard to point to a chunk of code they can borrow.\n\nThis is a common request for JDBC (which I'm targetting for 6.6)\n\n> BTW, something closely related to \\copy that's languishing on the TODO\n> list is the ability to load the contents of a local file into a Large\n> Object or write the data out again. This would be the \n> equivalent of the\n> server-side operations lo_import and lo_export, but reading \n> or writing a\n> file in psql's environment instead of the backend's. \n> Basically a wrapper\n> around lo_read/lo_write, not much to it but it needs done...\n\nIf I remember, one of the tests/examples for libpq does this (although\nthat code is commented out) - testlo & testlo2 ?\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n", "msg_date": "Mon, 27 Sep 1999 09:07:18 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] psql code to be obducted by alien (me) " } ]
[ { "msg_contents": "Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n---------- Forwarded message ----------\nDate: Mon, 27 Sep 1999 14:07:37 +0200 (MET DST)\nFrom: Mail Delivery Subsystem <[email protected]>\nTo: [email protected]\nSubject: Returned mail: Host unknown (Name server: postgreql.org: host not found)\n\nThe original message was received at Mon, 27 Sep 1999 14:07:34 +0200 (MET DST)\nfrom localhost [127.0.0.1]\n\n ----- The following addresses had permanent fatal errors -----\n<[email protected]>\n\n ----- Transcript of session follows -----\n550 <[email protected]>... Host unknown (Name server: postgreql.org: host not found)\n\nHi all,\n\nIs there any simple way to crypt a text before inserting it in a column.\n\nI could'nt see any crypt function in the docs. So if someone has already\nwirttent such function, please share!\n\nThanks in advance to you all.\n\nRegards,\n--\nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)", "msg_date": "Mon, 27 Sep 1999 14:09:46 +0200 (MET DST)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": true, "msg_subject": "Column crypt" } ]
[ { "msg_contents": "\n Hi,\n\n I have small problem with text array in union query.. \tsee:\n\n\nabil=> select 5 union select 5;\n?column?\n--------\n 5\n(1 row)\n\nabil=> select 5 union select 6;\n?column?\n--------\n 5\n 6\n(2 rows)\n\nabil=> select '{\"aaa\"}'::_text union select '{\"aaa\"}'::_text;\n?column?\n--------\n{\"aaa\"} \n(1 row)\n\nabil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\nERROR: Unable to identify an ordering operator '<' for type '_text'\n Use an explicit ordering operator or modify the query\nabil=> \n\n\n... hmm, any suggestion?\n\n\t\t\t\t\t\tZakkr\n\n\n", "msg_date": "Mon, 27 Sep 1999 15:11:10 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "_text problem in union" }, { "msg_contents": "> \n> Hi,\n> \n> I have small problem with text array in union query.. \tsee:\n> \n> \n> abil=> select 5 union select 5;\n> ?column?\n> --------\n> 5\n> (1 row)\n> \n> abil=> select 5 union select 6;\n> ?column?\n> --------\n> 5\n> 6\n> (2 rows)\n> \n> abil=> select '{\"aaa\"}'::_text union select '{\"aaa\"}'::_text;\n> ?column?\n> --------\n> {\"aaa\"} \n> (1 row)\n> \n> abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> ERROR: Unable to identify an ordering operator '<' for type '_text'\n> Use an explicit ordering operator or modify the query\n> abil=> \n\nGood problem description. Seems we can't compare arrays of text fields.\n\nSeems if we have an array of text fields, we could compare each element\none-by-one using the base type until we get a comparison result.\n\nNot sure if this should make the TODO list or not.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 11:42:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> > > ERROR: Unable to identify an ordering operator '<' for type '_text'\n> > > Use an explicit ordering operator or modify the query\n> > > abil=>\n> > \n> > Good problem description. Seems we can't compare arrays of text fields.\n> > \n> > Seems if we have an array of text fields, we could compare each element\n> > one-by-one using the base type until we get a comparison result.\n> > \n> > Not sure if this should make the TODO list or not.\n> \n> It woulf be better to have a generic array compare op, that just\n> traverses \n> both arrays comparing them with the \"<\" for base type\n\nYes, that was my idea. Is this a worthy TODO item?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 12:51:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> > ERROR: Unable to identify an ordering operator '<' for type '_text'\n> > Use an explicit ordering operator or modify the query\n> > abil=>\n> \n> Good problem description. 
Seems we can't compare arrays of text fields.\n> \n> Seems if we have an array of text fields, we could compare each element\n> one-by-one using the base type until we get a comparison result.\n> \n> Not sure if this should make the TODO list or not.\n\nIt woulf be better to have a generic array compare op, that just\ntraverses \nboth arrays comparing them with the \"<\" for base type\n\n--------------\nHannu\n", "msg_date": "Mon, 27 Sep 1999 19:56:42 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union" }, { "msg_contents": "Zakkr <[email protected]> writes:\n> abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> ERROR: Unable to identify an ordering operator '<' for type '_text'\n> Use an explicit ordering operator or modify the query\n\nDepending on what you're trying to do, UNION ALL might be an adequate\nworkaround. UNION is defined to remove duplicates, so it has to sort\nthe results of the union'ed queries, which requires an ordering\noperator. UNION ALL just appends the two query results together...\n\nIn the long run we probably ought to think about providing ordering\noperators for array types.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 19:30:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union " }, { "msg_contents": "> > It woulf be better to have a generic array compare op, that just\n> > traverses both arrays comparing them with the \"<\" for base type\n> Yes, that was my idea. Is this a worthy TODO item?\n\nSure. There should be a fairly large list of things for arrays, which\nhave not quite gotten the same attention as other Postgres features.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 28 Sep 1999 01:32:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union" }, { "msg_contents": "> Zakkr <[email protected]> writes:\n> > abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> > ERROR: Unable to identify an ordering operator '<' for type '_text'\n> > Use an explicit ordering operator or modify the query\n> \n> Depending on what you're trying to do, UNION ALL might be an adequate\n> workaround. UNION is defined to remove duplicates, so it has to sort\n> the results of the union'ed queries, which requires an ordering\n> operator. UNION ALL just appends the two query results together...\n> \n> In the long run we probably ought to think about providing ordering\n> operators for array types.\n> \n\nAdded to TODO:\n\n\t* Allow arrays to be ORDER'ed\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 21:38:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] _text problem in union" }, { "msg_contents": "\n\nOn Mon, 27 Sep 1999, Tom Lane wrote:\n\n> Zakkr <[email protected]> writes:\n> > abil=> select '{\"aaa\"}'::_text union select '{\"bbb\"}'::_text;\n> > ERROR: Unable to identify an ordering operator '<' for type '_text'\n> > Use an explicit ordering operator or modify the query\n> \n> Depending on what you're trying to do, UNION ALL might be an adequate\n> workaround. 
UNION is defined to remove duplicates, so it has to sort\n> the results of the union'ed queries, which requires an ordering\n> operator. UNION ALL just appends the two query results together...\n> \n\nYes, UNION ALL is good resolution for me. Thank Tom.\n\n> In the long run we probably ought to think about providing ordering\n> operators for array types.\n\n ..hmm :-))\n\n", "msg_date": "Tue, 28 Sep 1999 09:26:12 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] _text problem in union " } ]
[ { "msg_contents": "[cc:ing the hackers mailing list, as this exposes an issue I hadn't\nthought about]\n[for the benefit of the hackers list, Dale Lovelace is testing the rpm\nupgrading for potential inclusion in RedHat 6.1. Read his transcript\ncarefully. I comment below it on what the problem is.]\n[version 6.0 had postgresql-6.4.2; version 6.1 potentially 6.5.1]\n\nDale Lovelace wrote:\n> Here is a test of upgrading a database with your method:\n> \n> [root@test144 i386]# cd /mnt/redhat/comps/dist/6.0/i386/\n> [root@test144 i386]# rpm -Uvh postgresql-*\n> postgresql ##################################################\n> postgresql-clients ##################################################\n> postgresql-devel ##################################################\n> [root@test144 i386]# su postgres\n> [postgres@test144 i386]$ initdb --pglib=/usr/lib/pgsql/\n> --pgdata=/var/lib/pgsql/\n> We are initializing the database system with username postgres (uid=215).\n> This user will own all the files and must also own the server process.\n> \n> Creating Postgres database system directory /var/lib/pgsql//base\n> \n> Creating template database in /var/lib/pgsql//base/template1\n> \n> Creating global classes in /var/lib/pgsql//base\n> \n> Adding template1 database to pg_database...\n> \n> Vacuuming template1\n> Creating public pg_user view\n> Creating view pg_rules\n> Creating view pg_views\n> Creating view pg_tables\n> Creating view pg_indexes\n> Loading pg_description\n> [postgres@test144 i386]$ exit\n> \n> [root@test144 i386]# /etc/rc.d/init.d/postgresql start\n> Starting postgresql service: postmaster [1481]\n> [root@test144 i386]# su postgres\n> [postgres@test144 i386]$ psql -d template1\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: template1\n\n-----NOTE THIS SESSION! 
WHAT'S MISSING?-------\n> \n> template1=> create table dale (row1 text, row2 text, row3 text);\n> CREATE\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17515 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17516 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17517 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17518 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17519 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17520 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17521 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17522 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17523 1\n> template1=> insert into dale (row1, row2, row3) values ('this', 'is', 'text');\n> INSERT 17524 1\n> template1=> select * from dale;\n> row1|row2|row3\n> ----+----+----\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> this|is |text\n> (10 rows)\n> \n> template1=> \\q\n> [postgres@test144 i386]$ exit\n> [root@test144 i386]# /etc/rc.d/init.d/postgresql stop\n> Stopping postgresql service: [ OK ]\n> [root@test144 i386]# cd /mnt/redhat/comps/dist/6.1/i386/\n> [root@test144 i386]# rpm -Uvh postgresql-*\n> postgresql ##################################################\n> cannot remove /var/lib/pgsql - directory not empty\n> cannot remove /usr/lib/pgsql - directory not empty\n> postgresql-devel ##################################################\n> postgresql-jdbc ##################################################\n> postgresql-odbc ##################################################\n> postgresql-perl ##################################################\n> postgresql-python ##################################################\n> postgresql-server ##################################################\n> postgresql-tcl ##################################################\n> postgresql-test ##################################################\n> [root@test144 i386]# /etc/rc.d/init.d/postgresql start\n> Checking postgresql installation: old version. 
Need to Upgrade.\n> See /usr/doc/postgresql-6.5.2/README.rpm for more information.\n> [root@test144 i386]# su postgres\n> [postgres@test144 i386]$ postgresql-dump -t /usr/lib/pgsql/backup/db.bak -p\n> /usr/lib/pgsql/backup/old -d\n> /usr/bin/postgresql-dump: [: /usr/lib/pgsql/backup/db.bak: unary operator\n> expected\n> /usr/bin/postgresql-dump: [: /usr/lib/pgsql/backup: unary operator expected\n> /usr/bin/postgresql-dump: [: /usr/lib/pgsql: unary operator expected\n> /usr/bin/postgresql-dump: [: /usr/lib: unary operator expected\n> /usr/bin/postgresql-dump: [: /usr: unary operator expected\n> This is the ASCII output of the dump for you to check:\n> \n> -- postgresql-dump on Sat Sep 25 19:36:51 EDT 1999 from version 6.4\n> \\connect template1\n> select datdba into table tmp_pg_shadow from pg_database where datname =\n> 'template1';\n> delete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\n> drop table tmp_pg_shadow;\n> copy pg_shadow from stdin;\n> \\.\n> -- postgresql-dump completed on Sat Sep 25 19:36:51 EDT 1999\n> On the basis of this dump, is it OK to delete the old database? [y/n] y\n> Destroying old database...\n> [postgres@test144 i386]$ exit\n> [root@test144 i386]# /etc/rc.d/init.d/postgresql start\n> Checking postgresql installation: no database files found.\n> \n> We are initializing the database system with username postgres (uid=215).\n> This user will own all the files and must also own the server process.\n> \n> Creating Postgres database system directory /var/lib/pgsql/base\n> \n> Creating template database in /var/lib/pgsql/base/template1\n> \n> Creating global classes in /var/lib/pgsql/base\n> \n> Adding template1 database to pg_database...\n> \n> Vacuuming template1\n> Creating public pg_user view\n> Creating view pg_rules\n> Creating view pg_views\n> Creating view pg_tables\n> Creating view pg_indexes\n> Loading pg_description\n> Starting postgresql service: postmaster [1828]\n> [root@test144 i386]# su postgres\n> [postgres@test144 i386]$ psql -e template1 </usr/lib/pgsql/backup/db.bak\n> -- postgresql-dump on Sat Sep 25 19:36:51 EDT 1999 from version 6.4\n> \\connect template1\n> connecting to new database: template1\n> select datdba into table tmp_pg_shadow from pg_database where datname =\n> 'template1';\n> QUERY: select datdba into table tmp_pg_shadow from pg_database where\n> datname = 'template1';\n> SELECT\n> delete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\n> QUERY: delete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\n> DELETE 0\n> drop table tmp_pg_shadow;\n> QUERY: drop table tmp_pg_shadow;\n> DROP\n> copy pg_shadow from stdin;\n> QUERY: copy pg_shadow from stdin;\n> -- postgresql-dump completed on Sat Sep 25 19:36:51 EDT 1999\n> EOF\n> [postgres@test144 i386]$ psql -d template1\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> [PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: template1\n> \n> template1=> \\d\n> Couldn't find any tables, sequences or indices!\n> template1=> select * from dale;\n> ERROR: dale: Table does not exist.\n> template1=>\n\n\n> \n> I'm not really sure what is going on. I really don't have time to delve into\n> it :-) If you could point me in the right direction I would sure appreciate\n> it! 
I am wondering if those unary operator errors while running\n> postgresql-dump are the root of this?\n\nThe unary operator errors are red herrings. The core issue is an\nundocumented assumption of pg_dumpall (a modified version of which is\nuse by postgresql-dump) -- template1 is assumed to always be empty.\n[hackers -- is this an ACCURATE ASSUMPTION???]\n\nTo test the upgrading with this assumption in place, do this:\n\n1.)\tDowngrade to 6.4.2\n2.)\tInitdb\n3.)\tSu to postgres, and type the following command: createdb dale\n4.)\tPerform the same psql session as you did above.\n5.)\tUpgrade just as you did above.\n6.)\tWhen checking for the existance of you data, issue a psql -d dale\ninstead of psql -d template1\n7.)\tThe data should be there.\n\nWow, Dale, you are exposing some serious assumptions made in PostgreSQL.\n\nAlso, unless you guys are releasing 6.5.2, then you'll need to replace\nall the '6.5.2's in postgresql.init with '6.5.1'. Of course, if you're\nshipping 6.5.2, ignore that... ;-)\n\nHackers: should pg_dumpall dump template1?? If not, why not? What\nEXACTLY does template1 do in the larger scheme of things? If dumping\ntemplate1 is desired -- it can be arranged in the upgrade by modifying\npg_dumpall.\n\nAs it stands now, any data the user might, whether mistakenly or not,\nplace in tables in template1 will not get dumped by pg_dumpall. Dale,\nI'll get back to you ASAIC with a patch to pg_dumpall_new that will\naddress this, if I don't hear otherwise from the hackers list. \nOrdinarily, template1 is not used for user data storage and is normally\nempty.\n\n> Thanks for your help with this! I am totally braindead, going home. Will be\n> working on this tommorrow. If you get a chance to look at it, wouldbe great!\n\nGlad to be of help....\n", "msg_date": "Mon, 27 Sep 1999 11:30:12 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New init script and upgrade attempt: failed" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Hackers: should pg_dumpall dump template1?? If not, why not? What\n> EXACTLY does template1 do in the larger scheme of things? If dumping\n> template1 is desired -- it can be arranged in the upgrade by modifying\n> pg_dumpall.\n\ntemplate1 is copied verbatim by CREATE DATABASE to produce the initial\nstate of any new database. So, people might reasonably put stuff in\nit that they want copied to new DB's. The most common example is\ndoing a createlang to create non-default PLs (plpgsql etc); you can do\nit just once in template1 instead of every time you make a DB, assuming\nthat you want plpgsql in all your DBs. I guess there could be reasons\nfor making a user table in template1 to be copied to each new DB.\n\nIf pg_dumpall doesn't dump the (user-added) contents of template1,\nI think that's an oversight...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 19:35:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New init script and upgrade attempt: failed " }, { "msg_contents": "On Mon, 27 Sep 1999, Tom Lane wrote:\n\n> If pg_dumpall doesn't dump the (user-added) contents of template1,\n> I think that's an oversight...\n\nThanks, Tom -- that's what I thought. I'll tackle making and verifying changes\nto pg_dumpall to do just that. 
It's not a showstopper -- but it is a nuisance.\n\n-----------------------------------------------------------------------------\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 27 Sep 1999 19:40:54 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New init script and upgrade attempt: failed" } ]
[ { "msg_contents": "Yes, exactly. I know that you have been saying this from the word go, and\nthis adds to the chorus.\nThe check is pretty simple - if the previous token was not a value or right\nparenthesis, then it's a unary, not binary, minus. Of course, we would have\nto expand 'value' to mean constant or column name. Is there anything else\nthat defines the binary minus that I've left out? We would, I suppose, also\nhave to check that the minus isn't part of a user-defined operator. What is\nthe BNF for our operators? In fact, do we have a BNF diagram for our\nflavour of SQL? Perhaps that is where the problem lies. I must admit, I\nhave hardly been into the compiler.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Thursday, September 23, 1999 5:03 PM\n>> To: Ansley, Michael\n>> Subject: Re: [HACKERS] Lexxing and yaccing... \n>> \n>> \n>> \"Ansley, Michael\" <[email protected]> writes:\n>> > However, the most interesting part that I noticed is on \n>> the second page,\n>> > under the 'Other Titles' section. It's called 'Operator-Precedence\n>> > Parsing'. I haven't yet managed to get to it, because the \n>> web server (or my\n>> > browser, I'm not sure yet) keeps hooching over the page, \n>> however, I'll put\n>> > money on the fact that it will provide us with some \n>> insight into solving the\n>> > current operator problem(s?) that we have (see previous \n>> postings titled\n>> > 'Status Report: long query string changes' and \"Postgres' lexer\").\n>> \n>> I doubt it. Operator precedence is a grammar-level technique; it\n>> doesn't have anything to do with lexical analysis...\n>> \n>> \t\t\tregards, tom lane\n>> \n", "msg_date": "Mon, 27 Sep 1999 17:35:16 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Lexxing and yaccing... " } ]
[ { "msg_contents": "In the course of building and testing the rpm's for 6.5.2, unexpected\nresults were found in the regression testing. I am curious as to what\nthe results for 'float8' mean (geometry also failed, but it's obvious as\nto why):\n\n> *** expected/float8.out Sat Jan 23 19:12:59 1999\n> --- results/float8.out Mon Sep 27 11:01:13 1999\n> ***************\n> *** 189,201 ****\n> QUERY: SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ERROR: Bad float8 input format -- overflow\n> QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ! ERROR: pow() result is out of range\n> QUERY: SELECT '' AS bad, (; (f.f1)) from FLOAT8_TBL f where f.f1 = '0.0' ;\n> ERROR: can't take log of zero\n> QUERY: SELECT '' AS bad, (; (f.f1)) from FLOAT8_TBL f where f.f1 < '0.0' ;\n> ERROR: can't take log of a negative number\n> QUERY: SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\n> ! ERROR: exp() result is out of range\n> QUERY: SELECT '' AS bad, f.f1 / '0.0' from FLOAT8_TBL f;\n> ERROR: float8div: divide by zero error\n> QUERY: SELECT '' AS five, FLOAT8_TBL.*;\n> --- 189,217 ----\n> QUERY: SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ERROR: Bad float8 input format -- overflow\n> QUERY: SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ! bad|?column?\n> ! ---+--------\n> ! |0 \n> ! |NaN \n> ! |NaN \n> ! |NaN \n> ! |NaN \n> ! (5 rows)\n> ! \n> QUERY: SELECT '' AS bad, (; (f.f1)) from FLOAT8_TBL f where f.f1 = '0.0' ;\n> ERROR: can't take log of zero\n> QUERY: SELECT '' AS bad, (; (f.f1)) from FLOAT8_TBL f where f.f1 < '0.0' ;\n> ERROR: can't take log of a negative number\n> QUERY: SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\n> ! bad| ?column?\n> ! ---+--------------------\n> ! | 1\n> ! |7.39912306090513e-16\n> ! | 0\n> ! | 0\n> ! | 1\n> ! (5 rows)\n> ! \n> QUERY: SELECT '' AS bad, f.f1 / '0.0' from FLOAT8_TBL f;\n> ERROR: float8div: divide by zero error\n> QUERY: SELECT '' AS five, FLOAT8_TBL.*;\n> \n> ----------------------\n \n\nTIA\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Mon, 27 Sep 1999 12:02:26 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Regression tests on intel for 6.5.2" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> In the course of building and testing the rpm's for 6.5.2, unexpected\n> results were found in the regression testing. I am curious as to what\n> the results for 'float8' mean (geometry also failed, but it's obvious as\n> to why):\n\nI saw similar results with older Postgres releases on HPUX. The problem\nis failure to detect an invalid result from the exp() library function.\nUnfortunately there's not complete uniformity about how to test that\non different platforms.\n\nWhat's currently in dexp() in backend/utils/adt/float.c is\n\n#ifndef finite\n\terrno = 0;\n#endif\n\t*result = (float64data) exp(tmp);\n#ifndef finite\n\tif (errno == ERANGE)\n#else\n\t/* infinity implies overflow, zero implies underflow */\n\tif (!finite(*result) || *result == 0.0)\n#endif\n\t\telog(ERROR, \"exp() result is out of range\");\n\nwhich is evidently doing the wrong thing on your platform. What does\nyour man page for exp() say about error return conventions?\n\nI suspect the assumption that finite() is always implemented as a macro\nif it's present at all is the weak spot ... 
or it might be that your\nmath lib returns some other error code like EDOM ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 19:46:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2 " }, { "msg_contents": "On Mon, 27 Sep 1999, Tom Lane wrote:\n> which is evidently doing the wrong thing on your platform. What does\n> your man page for exp() say about error return conventions?\n\nPlatform is Intel Linux -- specifically:\nRedHat Linux 6.0/Intel (glibc 2.1.1):\n\nMan page for exp(3)...\n-------------------\nThe log() and log10() functions can return the following errors: \n\nEDOM \nThe argument x is negative. \n\nERANGE The argument x is zero. The log of zero is not defined. \n\nThe pow() function can return the following error: \n\nEDOM \nThe argument x is negative and y is not an integral value. This would result in a complex number. \n-------------------------------\n\n> I suspect the assumption that finite() is always implemented as a macro\n> if it's present at all is the weak spot ... or it might be that your\n> math lib returns some other error code like EDOM ...\n\nMan page finite(3)\n-------------------------------\nThe finite() function returns a non-zero value if value is neither infinite nor a \n�not-a-number� (NaN) value, and 0 otherwise. \n-------------------------------\n\nSeems that there was a table in those regression test results populated by\nNaN....\n\n-----------------------------------------------------------------------------\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 27 Sep 1999 19:52:07 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2" }, { "msg_contents": "Lamar Owen wrote:\n\n> On Mon, 27 Sep 1999, Tom Lane wrote:\n> > which is evidently doing the wrong thing on your platform. What does\n> > your man page for exp() say about error return conventions?\n>\n\nI checked it twice, I can't find any error in the current sources. I even wrote a test program:\n#include <math.h>\n#include <stdio.h>\n#include <errno.h>\n\nint main()\n{ double e;\n errno=0;\n e=pow(100,200);\n if (errno) perror(\"pow\");\n if (!finite(e)) puts(\"!finite\\n\");\n else printf(\"%f\\n\",e);\n}\n\nOutput:\npow: Numerical result out of range\n!finite\n\nSo both methods seem to work. (finite is a function on glibc-2.1 systems)\n\nPerhaps (strange thoughts come in to my mind ...) the compiler optimizes the function call into a\nmachine instruction ...\n/tmp> cc -O2 -o test test.c -lm\n/tmp> ./test\n!finite\n\nLooks like this is the case. So (I use gcc-2.95) what to do? Complain about a compiler/library bug\n(doesn't set errno)? I would propose another autoconf test. (I could easily do it.)\n\n Christof\n\nPS: I found the offending inline routines in /usr/include/bits/mathinline.h\n\n\n", "msg_date": "Wed, 29 Sep 1999 17:05:34 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2" }, { "msg_contents": "> > > which is evidently doing the wrong thing on your platform. What does\n> > > your man page for exp() say about error return conventions?\n> I checked it twice, I can't find any error in the current sources. I even wrote a test \n> program...\n> So both methods seem to work. (finite is a function on glibc-2.1 systems)\n\nAnd that is the problem. 
I didn't have enough platforms to test on, so\nwhen I improved the code I did so in a way that I would get a better\nresult on at least my platform (probably RH4.2 or earlier) without\nbreaking the behavior on other platforms.\n\nSo, I test locally for finite() being defined as a macro! But on newer\nglibc systems it is a real function, so you are seeing the old\nbehavior.\n\nA better thing to do would be to define HAVE_FINITE, and to have a\n./configure test for it. That should be easy enough; do you have time\nto look at it? Then code like\n\n#ifndef finite\n if (errno == ERANGE)\n#else\n /* infinity implies overflow, zero implies underflow */\n if (!finite(*result) || *result == 0.0)\n#endif\n\nCould become\n\n...\n#if HAVE_FINITE\n...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Sep 1999 06:17:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2" }, { "msg_contents": "Christof Petig <[email protected]> writes:\n> Perhaps (strange thoughts come in to my mind ...) the compiler\n> optimizes the function call into a machine instruction ...\n> /tmp> cc -O2 -o test test.c -lm\n> /tmp> ./test\n> !finite\n\n> Looks like this is the case.\n\nBingo! I think you've got it.\n\n> I would propose another autoconf test. (I could easily do it.)\n\nYes, we should not be assuming that finite() is a macro, which is what\nthat #ifdef coding does. We need a HAVE_FINITE configuration test.\nIf you have time to prepare the diffs it'd be great.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Sep 1999 09:25:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2 " }, { "msg_contents": "Tom Lane wrote:\n\n> Yes, we should not be assuming that finite() is a macro, which is what\n> that #ifdef coding does. We need a HAVE_FINITE configuration test.\n> If you have time to prepare the diffs it'd be great.\n\nHere they are\n Christof", "msg_date": "Fri, 01 Oct 1999 07:36:26 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2" }, { "msg_contents": "Christof Petig <[email protected]> writes:\n>> Yes, we should not be assuming that finite() is a macro, which is what\n>> that #ifdef coding does. We need a HAVE_FINITE configuration test.\n>> If you have time to prepare the diffs it'd be great.\n\n> Here they are\n\nChecked, applied to current. Thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 13:47:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests on intel for 6.5.2 " } ]
[ { "msg_contents": "Hi,\n\nthis select produces error message:\ntest=> select test2(NULL);\nERROR: typeidTypeRelid: Invalid type - oid = 0\n\ntest2:\nCREATE FUNCTION test2 (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\nBegin\n Update hits set count = count +1 where msg_id = keyval;\n return cnt; \nEnd;\n' LANGUAGE 'plpgsql';\n\nWhen I do manually update\nUpdate hits set count = count +1 where msg_id = NULL;\nit works fine. What's the problem ?\n\n\tRegards,\n\t\n\t\tOleg\n\n\ntest=> \\d hits\nTable = hits\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| msg_id | int4 | 4 |\n| count | int4 | 4 |\n+----------------------------------+----------------------------------+-------+\ntest=> select version();\nversion \n------------------------------------------------------------------\nPostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n(1 row)\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 27 Sep 1999 21:59:50 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "NULL as an argument in plpgsql functions" }, { "msg_contents": "> Hi,\n> \n> this select produces error message:\n> test=> select test2(NULL);\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n> \n\nNot sure how to pass NULL's into functions.\n\n\n> test2:\n> CREATE FUNCTION test2 (int4) RETURNS int4 AS '\n> Declare\n> keyval Alias For $1;\n> cnt int4;\n> Begin\n> Update hits set count = count +1 where msg_id = keyval;\n> return cnt; \n> End;\n> ' LANGUAGE 'plpgsql';\n> \n> When I do manually update\n> Update hits set count = count +1 where msg_id = NULL;\n> it works fine. What's the problem ?\n> \n> \tRegards,\n> \t\n> \t\tOleg\n> \n> \n> test=> \\d hits\n> Table = hits\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | msg_id | int4 | 4 |\n> | count | int4 | 4 |\n> +----------------------------------+----------------------------------+-------+\n> test=> select version();\n> version \n> ------------------------------------------------------------------\n> PostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n> (1 row)\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 15:26:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions" }, { "msg_contents": "On Mon, 27 Sep 1999, Bruce Momjian wrote:\n\n> Date: Mon, 27 Sep 1999 15:26:08 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] NULL as an argument in plpgsql functions\n> \n> > Hi,\n> > \n> > this select produces error message:\n> > test=> select test2(NULL);\n> > ERROR: typeidTypeRelid: Invalid type - oid = 0\n> > \n> \n> Not sure how to pass NULL's into functions.\n\nI'm unable to pass NULL also to sql function not only\nto plpgsql one. I don't see any reason for this :-)\nI'm wondering if I'm the only have this problem.\n\n\tRegards,\n\t\n\t\tOleg\n\n> \n> \n> > test2:\n> > CREATE FUNCTION test2 (int4) RETURNS int4 AS '\n> > Declare\n> > keyval Alias For $1;\n> > cnt int4;\n> > Begin\n> > Update hits set count = count +1 where msg_id = keyval;\n> > return cnt; \n> > End;\n> > ' LANGUAGE 'plpgsql';\n> > \n> > When I do manually update\n> > Update hits set count = count +1 where msg_id = NULL;\n> > it works fine. What's the problem ?\n> > \n> > \tRegards,\n> > \t\n> > \t\tOleg\n> > \n> > \n> > test=> \\d hits\n> > Table = hits\n> > +----------------------------------+----------------------------------+-------+\n> > | Field | Type | Length|\n> > +----------------------------------+----------------------------------+-------+\n> > | msg_id | int4 | 4 |\n> > | count | int4 | 4 |\n> > +----------------------------------+----------------------------------+-------+\n> > test=> select version();\n> > version \n> > ------------------------------------------------------------------\n> > PostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n> > (1 row)\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 28 Sep 1999 12:19:22 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> this select produces error message:\n> test=> select test2(NULL);\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n\n[ where test2 is a plpgsql function ]\n\nActually this is not a plpgsql issue; with current sources you get the\nsame error with any function, for example\n\nregression=> select int4fac(NULL);\nERROR: typeidTypeRelid: Invalid type - oid = 0\n\nDigging into this, I find that (a) make_const() in parse_node.c produces\na Const node for the NULL that has consttype = 0; (b) ParseFuncOrColumn\napplies ISCOMPLEX() which tries to get the type tuple for the argument\nof the function; (c) that fails because the type ID is 0.\n\nI am not sure whether there are two bugs here or only one. It would\nprobably be better to mark the Const node as having type UNKNOWN instead\nof type 0 (but make_const is not the only place that makes null\nconstants this way! we'd need to find all the others...). But I am not\nsure whether ParseFuncOrColumn would then do the right thing in terms of\nresolving the type of the function; for that matter I'm not real sure\nwhat the right thing for it to do is.\n\nThomas, this stuff is mostly your bailiwick; what do you think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 14:19:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions " }, { "msg_contents": "> probably be better to mark the Const node as having type UNKNOWN instead\n> of type 0 (but make_const is not the only place that makes null\n> constants this way! we'd need to find all the others...). But I am not\n> sure whether ParseFuncOrColumn would then do the right thing in terms of\n> resolving the type of the function; for that matter I'm not real sure\n> what the right thing for it to do is.\n> Thomas, this stuff is mostly your bailiwick; what do you think?\n\nMy recollection is that UNKNOWN usually applies to strings of\nunspecified type, while \"0\" applies to NULL fields. I can put this on\nmy list to look at later.\n\nAnother side issue; any function called with a null parameter will\nactually not get called at all! Postgres assumes that a function\ncalled with null must return null, so doesn't bother calling the\nroutine...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 03 Oct 1999 06:14:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions" }, { "msg_contents": "Thus spake Thomas Lockhart\n> Another side issue; any function called with a null parameter will\n> actually not get called at all! Postgres assumes that a function\n> called with null must return null, so doesn't bother calling the\n> routine...\n\nDid this get changed recently? AFAIK the routine gets called. It's just\nthat the result is ignored and null is then returned. 
This bit me in the\nass when I was working on the inet stuff. If I didn't check for NULL and\nreturn something my function would dump core but if I tried to deal with\nthe NULL and return something sensible, the function returned NULL anyway.\n\nThere was a discussion at the time about fixing this so that the function\nnever got called as investigation showed that there were existing ones\nthat would also crash if given null inputs. Did this ever happen?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 3 Oct 1999 04:54:38 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> probably be better to mark the Const node as having type UNKNOWN instead\n>> of type 0 (but make_const is not the only place that makes null\n>> constants this way! we'd need to find all the others...). But I am not\n>> sure whether ParseFuncOrColumn would then do the right thing in terms of\n>> resolving the type of the function; for that matter I'm not real sure\n>> what the right thing for it to do is.\n>> Thomas, this stuff is mostly your bailiwick; what do you think?\n\n> My recollection is that UNKNOWN usually applies to strings of\n> unspecified type, while \"0\" applies to NULL fields. I can put this on\n> my list to look at later.\n\nOK, but after mulling it over it seems that UNKNOWN is pretty much what\nwe want for an explicit null constant. If you want to consider NULL\nas having a type different from UNKNOWN, then most of the places that\ncurrently check for UNKNOWN would have to check for both, no?\n\n> Another side issue; any function called with a null parameter will\n> actually not get called at all! Postgres assumes that a function\n> called with null must return null, so doesn't bother calling the\n> routine...\n\nActually, it's even sillier than that: the function *is* called, but\nthen the OR of the input values' nullflags is attached to the output,\nso you get back a null no matter what the function did. (This is why\nall the functions that take pass-by-ref args have to be careful about\ngetting null pointers.)\n\nIn any case, I hope to see that fixed before 6.6/7.0/whatever our\nnext release is. So we do need a fix for the parser issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Oct 1999 12:32:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions " }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> There was a discussion at the time about fixing this so that the function\n> never got called as investigation showed that there were existing ones\n> that would also crash if given null inputs. Did this ever happen?\n\nNothing's changed yet, but you are right that one of the many problems\nwith the existing fmgr interface is that checking for null inputs is\nboth necessary and tedious (= frequently omitted).\n\nI have a rough proposal on the table for cleaning this up so that null\nhandling is done properly, ie, a function can see *which* of its inputs\nare null and can choose whether to return null or not. 
The most common\ncase of a \"strict\" function (any null input -> null result) would be\npainless, but we wouldn't force all functions into that straitjacket.\nSee my pghackers message of 14 Jun 99.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Oct 1999 12:41:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions " }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n\n> Thus spake Thomas Lockhart\n> > Another side issue; any function called with a null parameter will\n> > actually not get called at all! Postgres assumes that a function\n> > called with null must return null, so doesn't bother calling the\n> > routine...\n>\n> Did this get changed recently? AFAIK the routine gets called. It's just\n> that the result is ignored and null is then returned. This bit me in the\n> ass when I was working on the inet stuff. If I didn't check for NULL and\n> return something my function would dump core but if I tried to deal with\n> the NULL and return something sensible, the function returned NULL anyway.\n>\n> There was a discussion at the time about fixing this so that the function\n> never got called as investigation showed that there were existing ones\n> that would also crash if given null inputs. Did this ever happen?\n\n It wasn't changed. But the isNull bool pointer (in-/out-\n param) is only handed down as the second call argument if a\n function is called via fmgr_c() and has exactly one argument\n as defined in pg_proc.\n\n Handling NULL on a per argument/return value base is one of\n the long standing TODO's.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 4 Oct 1999 13:16:01 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions" } ]
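For reference, a minimal sketch of the null-aware behaviour this thread is asking for, reusing the hits table and the shape of test2 from the original report. The function name is invented, and the sketch is hypothetical: under the 6.5-era call convention described above, a NULL argument forces a NULL result before the body ever runs, and the bare NULL literal itself trips the typeidTypeRelid error in the parser, so the IS NULL branch below cannot actually be reached until the fmgr interface is redesigned.

CREATE FUNCTION test2_null_aware (int4) RETURNS int4 AS '
DECLARE
    keyval ALIAS FOR $1;
BEGIN
    IF keyval IS NULL THEN
        -- unreachable under the old convention: any NULL input
        -- yields a NULL result without the body being consulted
        RETURN 0;
    END IF;
    UPDATE hits SET count = count + 1 WHERE msg_id = keyval;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

-- select test2_null_aware(NULL);    still fails at parse time (oid = 0)
-- select test2_null_aware(7);       increments the row for msg_id = 7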
[ { "msg_contents": "Hi all,\n\n just to give anyone a chance to complain, I'd like to\n describe what I plan to implement for v6.6 in the referential\n integrity (RI) corner.\n\n 1. What will be supported\n\n I'm concentrating on FOREIGN KEY support. From the general\n definition (thanks to Vadim for the SQL3 draft):\n\n [ CONSTRAINT constraint-name ] FOREIGN KEY ( column [, ...] )\n REFERENCES [ PENDANT ] table-name [ ( column [, ...] ) ]\n [ MATCH { FULL | PARTIAL } ]\n [ ON DELETE <referential-action> ]\n [ ON UPDATE <referential-action> ]\n [ [ NOT ] DEFERRABLE ]\n [ INITIALLY { IMMEDIATE | DEFERRED } ]\n\n <referential-action> ::=\n CASCADE\n | SET NULL\n | SET DEFAULT\n | RESTRICT\n | NO ACTION\n\n I'll omit the following parts on the first go:\n\n PENDANT\n MATCH (match type is allways FULL)\n\n The implementation of referential-actions will require\n that the columns in the referenced table build a unique\n key. It will not be guaranteed, that an appropriate unique\n index exists.\n\n The support for the SET DEFAULT action depends on the\n smartness of the generic trigger procedure described later\n - so that detail might be left unsupported too in v6.6.\n\n 2. Implementation\n\n As previous discussions turned out, the rule system isn't\n adequate for implementing deferred constraints with\n respect to all their side effects. Thus, RI constraints\n will be implemented by specialized trigger procedures.\n\n Therefore, a bunch of new attributes and some indices are\n added to the pg_trigger system catalog. These are required\n to tell constraints from real triggers, automatically drop\n referential-action constraints from referenced tables if\n the referencing table is dropped and to hold information\n about deferrability and initially deferred states for the\n constraints.\n\n The procedures will finally get implemented as builtin, C\n language, generic functions.\n\n 3. What I have so far\n\n I've added the following attributes to pg_trigger:\n\n tgenabled A bool that is designed to switch off\n a regular trigger with the ALTER\n TRIGGER command. This is not related\n to RI and I'm not actually planning on\n implementing the parser/utility stuff.\n\n tgisconstraint A bool that tells a constraint from a\n trigger.\n\n tgconstrname The NAME of the constraint. RI\n constraint triggers will be\n automatically inserted during CREATE\n TABLE with trigger names\n _RI_Fkey_Constraint_<n> so that they\n are unique \"triggers\" per table. This\n attribute (indexed) holds the real\n constraint name for SET CONSTRAINT.\n\n tgconstrrelid The OID of the opposite table\n (indexed). In the constraints that\n check foreign key existance, it's the\n Oid of the referenced table. In the\n constraints that do the referential-\n actions, it's the Oid of the\n referenced table. This Oid is used to\n quickly drop triggers from the\n opposite table in the case of DROP\n TABLE.\n\n tgdeferrable A bool telling if the constraint can\n be set to DEFERRED checking.\n\n tginitdeferred A bool telling if the constraint is in\n DEFERRED state by default.\n\n To commands/trigger.c I've added a few hundred lines of\n code. All AFTER ROW IMMEDIATE triggers are executed after\n the entire query. DEFERRED triggers are executed at SET\n CONSTRAINTS ... 
IMMEDIATE or at COMMIT.\n\n To the time qualification code I've added SnapshotAny.\n Since I know the exact CTID of the tuple WHICH IS OLD/NEW\n for the event in question, this new snapshot completely\n ignores any time qualification and fetches it.\n\n What I see so far from the tests, anything (except for the\n damned funny_dup17 reported earlier) still works. And\n setting triggers to deferred execution solves the cyclic\n integrity problems which are the reason for deferred\n execution. So I assume I'm on the right track.\n\n 4. Next steps\n\n First I need to implement the SET CONSTRAINTS command in\n the parser and utility stuff now.\n\n Second I'll write the generic trigger procs in PL/Tcl that\n don't use prepared SPI plans (it's easiest to do it this\n way). When they work as needed by the implementation, I'll\n write down the specs and ask the co-developers to\n implement their high quality, plan saving C-language\n equivalents. Sorry, but all co-developers therefore should\n at least compile in PL/Tcl support.\n\n All the parser/utility stuff must be written that handles\n constraint trigger creation/dropping during CREATE/DROP\n table. And all the new features/definitions must be\n adapted to pg_dump and psql.\n\n Finally the deferred trigger manager must buffer huge\n amounts of trigger events (actually collected in memory)\n out onto disk.\n\n Most of the activities after \"Second\" next step can be\n done parallel. I'll commit my changes after that, because\n then I'm able to run a full test of deferred constraints\n to be sure I'm really on the right track. All co-\n developers can join then using the CURRENT tree.\n\n Any comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 27 Sep 1999 20:42:07 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "RI status report #1" }, { "msg_contents": "> Most of the activities after \"Second\" next step can be\n> done parallel. I'll commit my changes after that, because\n> then I'm able to run a full test of deferred constraints\n> to be sure I'm really on the right track. All co-\n> developers can join then using the CURRENT tree.\n> \n> Any comments?\n\nGreat. How's that for a comment? :-)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 15:31:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #1" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> just to give anyone a chance to complain, I'd like to\n> describe what I plan to implement for v6.6 in the referential\n> integrity (RI) corner.\n\nJan, I have no comments about the RI features, but I am a little worried\nabout not creating big headaches in merging different changes. 
Can we\nwork out a schedule that will minimize tromping on each others' toes?\n\nI am in the middle of some fairly extensive revisions in\nrewriteManip.c, rewriteHandler.c, and ruleutils.c --- basically,\nfixing all the routines that recurse through expression trees to use\nexpression_tree_walker and expression_tree_mutator, for a big space\nsavings and elimination of a bunch of routine-X-doesn't-handle-node-\ntype-Y bugs. Also I'm going to fix the rule deparser to use a\nstringinfo buffer so it doesn't have any hardwired limits on the textual\nlength of a rule. And I think I know how to fix some of the problems\nwith aggregates in subselects, like this one:\n\ncreate table t1 (name text, value float8);\nCREATE\nselect name from t1 where name IN\n(select name from t1 group by name having 2 = count(*));\nERROR: SELECT/HAVING requires aggregates to be valid\n\n(It looks to me like some of the routines recurse into subselects when\nthey shouldn't. It's a lot easier to see that sort of issue when\nthere's no code left except the actual Var manipulation and the\nnonstandard recursion decisions ;-).)\n\nI intended to finish these up in the next few days and commit them,\nbut if you've already started major hacking in these files then maybe\nwe should find another way.\n\nAlso, I believe Thomas is in the middle of wide-ranging revisions in\nthe parser, so you'd better coordinate with him on touching that area.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 20:06:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #1 " }, { "msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > just to give anyone a chance to complain, I'd like to\n> > describe what I plan to implement for v6.6 in the referential\n> > integrity (RI) corner.\n>\n> Jan, I have no comments about the RI features, but I am a little worried\n> about not creating big headaches in merging different changes. Can we\n> work out a schedule that will minimize tromping on each others' toes?\n\n To avoid this kind of trouble is one of my reasons for the\n status report. So others can see which areas will be affected\n and we can coordinate a little.\n\n> I am in the middle of some fairly extensive revisions in\n> rewriteManip.c, rewriteHandler.c, and ruleutils.c --- basically,\n\n My changes absolutely don't touch the rule system. Except for\n a few lines in tcop, transam and tqual all the work is done\n in trigger.c.\n\n> I intended to finish these up in the next few days and commit them,\n> but if you've already started major hacking in these files then maybe\n> we should find another way.\n\n Do it - I'll wait for you (would you please give me a sign\n then). But I'm 97.5% sure our work has no collision areas.\n\n> Also, I believe Thomas is in the middle of wide-ranging revisions in\n> the parser, so you'd better coordinate with him on touching that area.\n\n Ah - that's more critical. I just began to add the SET\n CONSTRAINTS command and am through with thinking about the\n CREATE CONSTRAINT TRIGGER too. We all know that our parser is\n a very delicate peace of software. Thomas, could you please\n comment on this?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 28 Sep 1999 13:58:09 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RI status report #1" }, { "msg_contents": "> > Also, I believe Thomas is in the middle of wide-ranging revisions in\n> > the parser, so you'd better coordinate with him on touching that area.\n> Ah - that's more critical. I just began to add the SET\n> CONSTRAINTS command and am through with thinking about the\n> CREATE CONSTRAINT TRIGGER too. We all know that our parser is\n> a very delicate peace of software. Thomas, could you please\n> comment on this?\n\nAt the moment I am working on join *syntax*, so my changes are\nisolated to gram.y, analyze.c, parse_clause.c, and parse_target.c.\nDon't wait for me; I'll bet that we don't collide much, and if we do I\ndon't mind doing the merge.\n\nSometime later, once I understand the syntax and have it coded for\ninner joins, I'll want to start modifying the parser and planner to\nhandle outer joins. At that point, I'll be asking for help and advice,\nand look forward to your input. But I'm not there yet.\n\nI'm hoping to have some updates committed in a week or so, but things\nhave been going very slowly with little time to work on this :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 28 Sep 1999 13:43:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #1" }, { "msg_contents": "On Mon, 27 Sep 1999, Bruce Momjian wrote:\n\n> > Most of the activities after \"Second\" next step can be\n> > done parallel. I'll commit my changes after that, because\n> > then I'm able to run a full test of deferred constraints\n> > to be sure I'm really on the right track. All co-\n> > developers can join then using the CURRENT tree.\n> > \n> > Any comments?\n> \n> Great. How's that for a comment? :-)\n\nDamn, I was gonna say that but figured it wasn't enough...:)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 29 Sep 1999 18:15:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #1" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> I am in the middle of some fairly extensive revisions in\n>> rewriteManip.c, rewriteHandler.c, and ruleutils.c --- ...\n>> I intended to finish these up in the next few days and commit them,\n>> but if you've already started major hacking in these files then maybe\n>> we should find another way.\n\n> Do it - I'll wait for you (would you please give me a sign\n> then). But I'm 97.5% sure our work has no collision areas.\n\nOK, I'm done with rewriter/ruleutils cleanups, at least for now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 14:24:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #1 " } ]
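For orientation, a hedged sketch of the kind of schema and transaction the planned FOREIGN KEY / SET CONSTRAINTS support is aimed at, assembled from the general definition quoted at the top of the thread. Table, column and constraint names are invented for illustration, MATCH PARTIAL and PENDANT are left out to match the first-cut limitations listed above, and none of this runs on an unmodified 6.5 backend.

CREATE TABLE departments (
    dept_id    int4 PRIMARY KEY,     -- referenced columns must form a unique key
    dept_name  text
);

CREATE TABLE employees (
    emp_id     int4 PRIMARY KEY,
    dept_id    int4,
    CONSTRAINT emp_dept_fk FOREIGN KEY (dept_id)
        REFERENCES departments (dept_id)
        MATCH FULL
        ON DELETE SET NULL
        ON UPDATE CASCADE
        DEFERRABLE INITIALLY DEFERRED
);

-- Deferred checking is what makes insertion order irrelevant inside a
-- transaction; the reference is validated at COMMIT, or earlier if the
-- constraint is explicitly switched to immediate mode.
BEGIN;
INSERT INTO employees VALUES (1, 42);            -- parent row does not exist yet
INSERT INTO departments VALUES (42, 'Research'); -- supplied before commit
SET CONSTRAINTS emp_dept_fk IMMEDIATE;           -- optional: run pending checks now
COMMIT;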
[ { "msg_contents": "Hi all,\n\nI have wondered that md.c handles incomplete block(page)s\ncorrectly.\nAm I mistaken ?\n\n1) _mdnblocks() takes the last incomplete block into account\n\n2) mdextend() doesn't care about the existence of incomplete\n block(page)s.\n\n3) In spite of 1)2),mdextend() does nothing when incomplete\n write occurs.\n\nComments ?\n\nIf I am right,I would provide a patch.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 28 Sep 1999 13:52:12 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Recovery on incomplete write" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I have wondered that md.c handles incomplete block(page)s\n> correctly.\n> Am I mistaken ?\n\nI think you are right, and there may be some other trouble spots in that\nfile too. I remember thinking that the code depended heavily on never\nhaving a partial block at the end of the file.\n\nBut is it worth fixing? The only way I can see for the file length\nto become funny is if we run out of disk space part way through writing\na page, which seems unlikely...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Sep 1999 10:39:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Recovery on incomplete write " }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I have wondered that md.c handles incomplete block(page)s\n> > correctly.\n> > Am I mistaken ?\n> \n> I think you are right, and there may be some other trouble spots in that\n> file too. I remember thinking that the code depended heavily on never\n> having a partial block at the end of the file.\n> \n> But is it worth fixing? The only way I can see for the file length\n> to become funny is if we run out of disk space part way through writing\n> a page, which seems unlikely...\n> \n\nThat is how he got started, the TODO item about running out of disk\nspace causing corrupted databases. I think it needs a fix, if we can.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 10:54:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Recovery on incomplete write" }, { "msg_contents": "\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, September 28, 1999 11:54 PM\n> To: Tom Lane\n> Cc: Hiroshi Inoue; pgsql-hackers\n> Subject: Re: [HACKERS] Recovery on incomplete write\n>\n>\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > I have wondered that md.c handles incomplete block(page)s\n> > > correctly.\n> > > Am I mistaken ?\n> >\n> > I think you are right, and there may be some other trouble spots in that\n> > file too. I remember thinking that the code depended heavily on never\n> > having a partial block at the end of the file.\n> >\n> > But is it worth fixing? The only way I can see for the file length\n> > to become funny is if we run out of disk space part way through writing\n> > a page, which seems unlikely...\n> >\n>\n> That is how he got started, the TODO item about running out of disk\n> space causing corrupted databases. 
I think it needs a fix, if we can.\n>\n\nMaybe it isn't so difficult to fix.\nI would provide a patch.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 29 Sep 1999 16:20:14 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Recovery on incomplete write" }, { "msg_contents": "Bruce Momjian wrote:\n\n> >\n> > I think you are right, and there may be some other trouble spots in that\n> > file too. I remember thinking that the code depended heavily on never\n> > having a partial block at the end of the file.\n> >\n> > But is it worth fixing? The only way I can see for the file length\n> > to become funny is if we run out of disk space part way through writing\n> > a page, which seems unlikely...\n> >\n>\n> That is how he got started, the TODO item about running out of disk\n> space causing corrupted databases. I think it needs a fix, if we can.\n\nIt does corrupt the database (happened twice to me last week, I'm using the\ncurrent CVS version!). You can't even pg_dump the database - it stops in the\nmiddle of a line.\nAnd this happened just because some process went amok and stdout was written\nto a file.\n\nChristof\n\n\n", "msg_date": "Wed, 29 Sep 1999 10:39:32 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Recovery on incomplete write" } ]
[ { "msg_contents": "Hi, all,\n\nWhoever is doing the SSL stuff:\nIn postmaster.c, at lines 995, and 1841, there is code that is not wrapped\nin the USE_SSL define.\n\nLast night I checked out the latest source, and couldn't get it to compile.\nIt seems that the function heap_openr() has a new parameter, and there are\ncalls that have not been updated yet. I was a little hesitant to go adding\nstuff, as the new parameter is a LOCKTYPE, and I wouldn't know what locks\nare required where, so I just left well alone. Any comments on this? There\nis another function which is paired up with it, but I forget the name, which\nalso has a new parameter, it seems, and also has calls which have not yet\nbeen updated.\n\n\nMikeA\n", "msg_date": "Tue, 28 Sep 1999 10:39:18 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Latest tree" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> Hi, all,\n> \n> Whoever is doing the SSL stuff:\n> In postmaster.c, at lines 995, and 1841, there is code that is not wrapped\n> in the USE_SSL define.\n> \n> Last night I checked out the latest source, and couldn't get it to compile.\n> It seems that the function heap_openr() has a new parameter, and there are\n> calls that have not been updated yet. I was a little hesitant to go adding\n> stuff, as the new parameter is a LOCKTYPE, and I wouldn't know what locks\n> are required where, so I just left well alone. Any comments on this? There\n> is another function which is paired up with it, but I forget the name, which\n> also has a new parameter, it seems, and also has calls which have not yet\n> been updated.\n\nI fixed all this yesterday while committing my WAL changes...\n\nVadim\n", "msg_date": "Tue, 28 Sep 1999 17:01:22 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Latest tree" }, { "msg_contents": ">I fixed all this yesterday while committing my WAL changes...\n>\n>Vadim\n\nDoes this mean that now we can enjoy the WAL?:-)\n\nBTW, your WAL implementation will allow database recovery from log\nfiles by using roll-forward or similar techniques?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 28 Sep 1999 18:54:48 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Latest tree " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> >I fixed all this yesterday while committing my WAL changes...\n> >\n> >Vadim\n> \n> Does this mean that now we can enjoy the WAL?:-)\n\nNo yet :))\n\n> \n> BTW, your WAL implementation will allow database recovery from log\n> files by using roll-forward or similar techniques?\n\nYes.\n\nVadim\n", "msg_date": "Tue, 28 Sep 1999 18:51:32 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Latest tree" } ]
[ { "msg_contents": "Hi, Vadim,\n\nI had a problem compiling, message:\n\nline: 1379\nIn function 'CreateCheckPoint'\nstorage size of 'delay' is unknown\n\nIt seems to hooch over the struct timeval. Ideas?\n\nMikeA\n\n>> -----Original Message-----\n>> From: Tatsuo Ishii [mailto:[email protected]]\n>> Sent: Tuesday, September 28, 1999 11:55 AM\n>> To: Vadim Mikheev\n>> Cc: Ansley, Michael; '[email protected]';\n>> '[email protected]'\n>> Subject: Re: [INTERFACES] Re: [HACKERS] Latest tree \n>> \n>> \n>> >I fixed all this yesterday while committing my WAL changes...\n>> >\n>> >Vadim\n>> \n>> Does this mean that now we can enjoy the WAL?:-)\n>> \n>> BTW, your WAL implementation will allow database recovery from log\n>> files by using roll-forward or similar techniques?\n>> --\n>> Tatsuo Ishii\n>> \n", "msg_date": "Tue, 28 Sep 1999 12:03:00 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Re: [HACKERS] Latest tree " }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> Hi, Vadim,\n> \n> I had a problem compiling, message:\n> \n> line: 1379\n> In function 'CreateCheckPoint'\n> storage size of 'delay' is unknown\n> \n> It seems to hooch over the struct timeval. Ideas?\n\nI don't know why but sys/time.h was not required under FreeBSD :)\n\nVadim\n", "msg_date": "Tue, 28 Sep 1999 18:50:32 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Latest tree" } ]
[ { "msg_contents": "Got it. sys/time wasn't #include'd in xlog.c. I probably should have\nmentioned the file the last time as well ;-) sorry.\n\n>> Hi, Vadim,\n>> \n>> I had a problem compiling, message:\n>> \n>> line: 1379\n>> In function 'CreateCheckPoint'\n>> storage size of 'delay' is unknown\n>> \n>> It seems to hooch over the struct timeval. Ideas?\n>> \n>> MikeA\n\n>> -----Original Message-----\n>> From: Tatsuo Ishii [mailto:[email protected]]\n>> Sent: Tuesday, September 28, 1999 11:55 AM\n>> To: Vadim Mikheev\n>> Cc: Ansley, Michael; '[email protected]';\n>> '[email protected]'\n>> Subject: Re: [INTERFACES] Re: [HACKERS] Latest tree \n>> \n>> \n>> >I fixed all this yesterday while committing my WAL changes...\n>> >\n>> >Vadim\n>> \n>> Does this mean that now we can enjoy the WAL?:-)\n>> \n>> BTW, your WAL implementation will allow database recovery from log\n>> files by using roll-forward or similar techniques?\n>> --\n>> Tatsuo Ishii\n>> \n", "msg_date": "Tue, 28 Sep 1999 12:10:35 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Re: [HACKERS] Latest tree " } ]
[ { "msg_contents": "Hmmmm...\n\nAnyway, I added it, and found another glitch in\nbackend/utils/fmgr/dfmgr.c:106\n\nI changed the function call to heap_close(rel, AccessShareLock);\nDo you want a patch for this, and any others that I find?\n\nMikeA\n\n>> -----Original Message-----\n>> From: Vadim Mikheev [mailto:[email protected]]\n>> Sent: Tuesday, September 28, 1999 12:51 PM\n>> To: '[email protected]'\n>> Subject: Re: [INTERFACES] Re: [HACKERS] Latest tree\n>> \n>> \n>> \"Ansley, Michael\" wrote:\n>> > \n>> > Hi, Vadim,\n>> > \n>> > I had a problem compiling, message:\n>> > \n>> > line: 1379\n>> > In function 'CreateCheckPoint'\n>> > storage size of 'delay' is unknown\n>> > \n>> > It seems to hooch over the struct timeval. Ideas?\n>> \n>> I don't know why but sys/time.h was not required under FreeBSD :)\n>> \n>> Vadim\n>> \n>> ************\n>> \n", "msg_date": "Tue, 28 Sep 1999 13:09:35 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Re: [HACKERS] Latest tree" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> Hmmmm...\n> \n> Anyway, I added it, and found another glitch in\n> backend/utils/fmgr/dfmgr.c:106\n> \n> I changed the function call to heap_close(rel, AccessShareLock);\n ^^^^^^^^^^^^^^^^^^\nIt was there yesterday but disappeared today, fixed.\n\nVadim\n", "msg_date": "Tue, 28 Sep 1999 19:28:34 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Latest tree" } ]
[ { "msg_contents": "> To coordinate with your work I've included my needs for the\n> SET CONSTRAINTS command below. I can wait a little with the\n> other (CREATE CONTRAINT TRIGGER) until you're done - except\n> you need to lock the parser for loooong time.\n\nI didn't look *carefully*, but I'm sure this is all just fine. If you\nhave a chance, could you please try adding every new keyword to the\nexisting alphabetical list in ColId and/or ColLabel? In many cases\nkeywords which appear in only a limited context can still be allowed\nin other places, and when we add new ones we tend to forget to update\nthis list.\n\nI can do this later if you like; send me a note to remind me after you\ncommit your changes.\n\nbtw, since I'd already done some work on gram.y for join syntax the\npatches to get it right aren't all that invasive in that file.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 28 Sep 1999 14:11:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > To coordinate with your work I've included my needs for the\n> > SET CONSTRAINTS command below. I can wait a little with the\n> > other (CREATE CONTRAINT TRIGGER) until you're done - except\n> > you need to lock the parser for loooong time.\n>\n> I didn't look *carefully*, but I'm sure this is all just fine. If you\n> have a chance, could you please try adding every new keyword to the\n> existing alphabetical list in ColId and/or ColLabel? In many cases\n> keywords which appear in only a limited context can still be allowed\n> in other places, and when we add new ones we tend to forget to update\n> this list.\n\n Just tell me which of these SQL3 \"reserved\" keywords\n (according to the SQL3 draft I got from Vadim) should be\n available for column ID or Label:\n\n CONSTRAINTS\n DEFERRABLE\n DEFERRED\n IMMEDIATE\n INITIALLY\n PENDANT\n RESTRICT\n\n Then I'll add them before committing. Overlooking the syntax\n of my new commands, it wouldn't hurt to add them all to these\n lists. But should SQL3 reserved words really be in them?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 28 Sep 1999 19:58:09 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": "> Just tell me which of these SQL3 \"reserved\" keywords\n> (according to the SQL3 draft I got from Vadim) should be\n> available for column ID or Label:\n> CONSTRAINTS\n> DEFERRABLE\n> DEFERRED\n> IMMEDIATE\n> INITIALLY\n> PENDANT\n> RESTRICT\n> Then I'll add them before committing. Overlooking the syntax\n> of my new commands, it wouldn't hurt to add them all to these\n> lists. But should SQL3 reserved words really be in them?\n\nWe have tried to allow as many keywords as possible for identifiers\n(for ColId, which includes ColLabel) or, as a more limited choice, for\ncolumn aliases (ColLabel only). This is particularly helpful as we\nimplement more and more of the standard, and take away previously\nallowed column and table names. 
The keywords, reserved, unreserved,\nand unused, are documented for Postgres in syntax.sgml, and the docs\npresent them wrt the SQL92 and SQL3 standards.\n\nWhat I usually do is try adding one or all of them to ColId, and if\nthat fails by giving shift/reduce conflicts I'll try moving the\noffenders to ColLabel. There aren't many places in the syntax where\nyacc/bison can't handle keywords at least as column labels.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 04:41:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": "> > CONSTRAINTS\n> > DEFERRABLE\n> > DEFERRED\n> > IMMEDIATE\n> > INITIALLY\n> > PENDANT\n> > RESTRICT\n>\n> [...]\n> allowed column and table names. The keywords, reserved, unreserved,\n> and unused, are documented for Postgres in syntax.sgml, and the docs\n> present them wrt the SQL92 and SQL3 standards.\n>\n> What I usually do is try adding one or all of them to ColId, and if\n> that fails by giving shift/reduce conflicts I'll try moving the\n> offenders to ColLabel. There aren't many places in the syntax where\n> yacc/bison can't handle keywords at least as column labels.\n\n O.K. - I was able to add them all to ColId without conflicts\n for now. Let's see what happens after adding the syntax for\n CREATE CONSTRAINT TRIGGER.\n\n I'm not sure which of them are SQL92 or SQL3, at least they\n are all SQL3 \"reserved\" words according to the SQL3 draft.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 11:13:04 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": "> > > CONSTRAINTS\n> > > DEFERRABLE\n> > > DEFERRED\n> > > IMMEDIATE\n> > > INITIALLY\n> > > PENDANT\n> > > RESTRICT\n> O.K. - I was able to add them all to ColId without conflicts\n> for now. Let's see what happens after adding the syntax for\n> CREATE CONSTRAINT TRIGGER.\n\nRight. Anything which causes trouble can be demoted to ColLabel.\n\n> I'm not sure which of them are SQL92 or SQL3, at least they\n> are all SQL3 \"reserved\" words according to the SQL3 draft.\n\nAccording to my Date and Darwen (which is mostly SQL92), all of these\nexcept \"PENDANT\" are SQL92 reserved words. PENDANT is not mentioned,\nso is presumably an SQL3-ism.\n\nDo you want me to update syntax.sgml?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 13:10:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": "Thomas Lockhart wrote:\n\n>\n> > > > CONSTRAINTS\n> > > > DEFERRABLE\n> > > > DEFERRED\n> > > > IMMEDIATE\n> > > > INITIALLY\n> > > > PENDANT\n> > > > RESTRICT\n> > O.K. - I was able to add them all to ColId without conflicts\n> > for now. Let's see what happens after adding the syntax for\n> > CREATE CONSTRAINT TRIGGER.\n>\n> Right. 
Anything which causes trouble can be demoted to ColLabel.\n>\n> > I'm not sure which of them are SQL92 or SQL3, at least they\n> > are all SQL3 \"reserved\" words according to the SQL3 draft.\n>\n> According to my Date and Darwen (which is mostly SQL92), all of these\n> except \"PENDANT\" are SQL92 reserved words. PENDANT is not mentioned,\n> so is presumably an SQL3-ism.\n>\n> Do you want me to update syntax.sgml?\n\n Please be so kind. CREATE CONSTRAINT TRIGGER did not mess up\n anything, so all these new reserved words appear in ColId and\n are still available.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 16:03:05 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" } ]
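To make the ColId point concrete: keeping the new keywords in the ColId list means schemas that already use them as plain identifiers keep parsing. The table below is a hypothetical illustration (names invented), limited to keywords that stay usable as column names under the change described in this thread; whether every one of the seven survives unreserved in later releases is a separate question.

-- DEFERRED, IMMEDIATE and RESTRICT remain acceptable as ordinary
-- column names because they are listed in ColId:
CREATE TABLE keyword_demo (
    deferred   bool,
    immediate  bool,
    restrict   int4
);

SELECT deferred, immediate, restrict FROM keyword_demo;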
[ { "msg_contents": "Hi,\n\nThere is a very handy wizard for upsizing MS Access\nto use MS SQL Server as a backend.\n\nHas anyone produced anything similar fro upsizing MS Access\nto use PostgreSQL as a backend?\n\nRegards\n\nJohn Ridout.\n\n", "msg_date": "Tue, 28 Sep 1999 18:31:19 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": true, "msg_subject": "MS Access upsizing" }, { "msg_contents": "On Tue, 28 Sep 1999, John Ridout wrote:\n\n> Hi,\n> \n> There is a very handy wizard for upsizing MS Access\n> to use MS SQL Server as a backend.\n> \n> Has anyone produced anything similar fro upsizing MS Access\n> to use PostgreSQL as a backend?\n\nODBC drivers? *raised eyebrow*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 28 Sep 1999 23:27:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MS Access upsizing" }, { "msg_contents": "Someone has already done the ODBC\nLook in the PostgreSQL Programmer's Guide under Interfaces. ;-)\n\nOr perhaps you mean it will be difficult over ODBC.\nI intend to pass the DDL to PostgreSQL to execute.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of The Hermit\n> Hacker\n> Sent: 29 September 1999 03:27\n> To: John Ridout\n> Cc: [email protected]\n> Subject: Re: [HACKERS] MS Access upsizing\n> \n> \n> On Tue, 28 Sep 1999, John Ridout wrote:\n> \n> > Hi,\n> > \n> > There is a very handy wizard for upsizing MS Access\n> > to use MS SQL Server as a backend.\n> > \n> > Has anyone produced anything similar fro upsizing MS Access\n> > to use PostgreSQL as a backend?\n> \n> ODBC drivers? *raised eyebrow*\n> \n> Marc G. Fournier ICQ#7615664 IRC \n> Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: \n> scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n", "msg_date": "Wed, 29 Sep 1999 10:26:48 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MS Access upsizing" } ]
[ { "msg_contents": "I am down to 52 messages in my PostgreSQL mailbox. That is amazingly\nsmall. \n\nThanks to Tom Lane for suggesting the new TODO.detail directory, so I\ncan get the bug reports out of my mailbox and into the CVS distribution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 16:54:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "My mailbox size" } ]
[ { "msg_contents": "Dear Sir,\n\t\n\tI got your email address from the file notice.txt of pub/odbc/\n\tI've problem with getting output from database via ODBC\nmy query is -->\n\tselect s.account,sum(dround(date_part('epoch',s.stop-s.start)/36.0)/100) as\nusage,sum(date_part('epoch',s.stop-s.start)/3600*s.rate/t.rate) as charge\nfrom session s,ticket t\nwhere s.stop >= '1999/06/10' and s.stop < '1999/06/21' and\ns.ticket = t.id and not t.free and s.nas = '203.151.66.7' group by\ns.account;\n\nI use MS Access 97 as frontend application and \nODBC : postodbc 6.4.7 \nDatabase : PostgreSQL 6.5.1 on i386-unknown-freebsd2.2.8, compiled by cc\n\n\tI sent the passthrough query to the postgres backend. it always\nreturn only the first field and the second (account,usage). and I noticed \nthat the backend always return only some other fields and only 1 sum field\n(it's wierd).\n\twith the same query I tried run it on the database promt it return\nthe complete result. \n\n\tPlease help.\n\nRegards,\n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n\n\n\n", "msg_date": "Wed, 29 Sep 1999 10:31:00 +0700 (GMT+0700)", "msg_from": "Nuchanach Klinjun <[email protected]>", "msg_from_op": true, "msg_subject": "Returned Result via ODBC!" } ]
[ { "msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Seems like good comments on these items. Anything\n> for TODO list here?\n> \n> Actually, the current state of play is that I\n> reduced the ERROR messages\n> to NOTICEs in DROP TABLE and DROP INDEX (\"NOTICE:\n> DROP TABLE cannot be\n> rolled back, so don't abort now\"), since there\n> seemed to be some\n> unhappiness about making them hard errors. I also\n> put similar messages\n> into RENAME TABLE and TRUNCATE TABLE.\n> \n> I have a personal TODO item to go and insert some\n> more checks: per the\n> discussions so far, CREATE/DROP DATABASE probably\n> need similar messages,\n> and I think we need to make VACUUM refuse to run\n> inside a transaction\n> block at all (since its internal commits will not do\n> the intended thing\n> if you do BEGIN; VACUUM). Also on my list is to\n> investigate these\n> reports that CREATE VIEW and ALTER TABLE don't roll\n> back cleanly ---\n> there may be bugs lurking there. If you want to add\n> those to the\n> public list, go ahead.\n> \n> \t\t\tregards, tom lane\n\nIf my TRUNCATE TABLE patch was applied as submitted,\n(I haven't downloaded a newer snapshot yet), then\nit falls into category #2...same as VACUUM. It \ncommits the current transaction before truncating\nthe specified relation, then begins a new transaction.\nQuite frankly, as Vadim pointed out in earlier posts,\nPostgreSQL attempts to go \"above and beyond\" with\nrespect to rolling back transactions which contain \nDDL statements.\n\n>From the ORACLE 7 Server Manual:\n\nTransaction \n\nA transaction (or a logical unit of work) is a\nsequence\nof SQL statements that ORACLE treats as a single unit.\nA transaction begins with the first executable SQL \nstatement after a COMMIT, ROLLBACK or connection to \nORACLE. A transaction ends with a COMMIT statement, a \nROLLBACK statement, or disconnection (intentional or \nunintentional) from ORACLE. ORACLE issues an implicit\n ^^^^^^^^^^^^^^^^^^^^^^^^^ \nCOMMIT before and after any Data Definition Language \n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nstatement. \n^^^^^^^^^\n\nAnyways, so did the TRUNCATE TABLE patch. \n\nFor what its worth,\n\nMike Mascari\n([email protected])\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n", "msg_date": "Tue, 28 Sep 1999 22:45:51 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block " }, { "msg_contents": "> If my TRUNCATE TABLE patch was applied as submitted,\n> (I haven't downloaded a newer snapshot yet), then\n\nYes, applied.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 09:02:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> If my TRUNCATE TABLE patch was applied as submitted,\n> (I haven't downloaded a newer snapshot yet), then\n> it falls into category #2...same as VACUUM. 
It \n> commits the current transaction before truncating\n> the specified relation, then begins a new transaction.\n\nI took all that out ;-) while updating it to compile against the current\nstate of heap_open et al. I see no need for multiple transactions in\nTRUNCATE. It's really on a par with RENAME TABLE, since both have to\nforce a buffer flush.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 09:33:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block " } ]
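A short illustration of the hazard behind the NOTICE quoted above. The table name is a placeholder and the session is a sketch rather than a transcript: the point is that the file backing the table is removed when DROP TABLE executes, not at commit, so a later ROLLBACK resurrects the catalog entries without the data.

BEGIN;
DROP TABLE some_table;
-- NOTICE:  DROP TABLE cannot be rolled back, so don't abort now
ROLLBACK;
-- Catalog entries for some_table come back, its heap file does not.
-- The same caveat covers DROP INDEX, RENAME TABLE and TRUNCATE TABLE,
-- and BEGIN; VACUUM; is unsafe for a related reason: VACUUM's internal
-- commits do not do the intended thing inside a transaction block.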
[ { "msg_contents": "I believe Dave Page is thinking of putting that into pgAdmin, but you'll\nneed to check that with him.\nCheck on the pgAdmin page for features as well ->\nhttp://www.pgadmin.freeserve.co.uk/\n\nMikeA\n\n>> -----Original Message-----\n>> From: John Ridout [mailto:[email protected]]\n>> Sent: Tuesday, September 28, 1999 7:31 PM\n>> To: [email protected]\n>> Subject: [HACKERS] MS Access upsizing\n>> \n>> \n>> Hi,\n>> \n>> There is a very handy wizard for upsizing MS Access\n>> to use MS SQL Server as a backend.\n>> \n>> Has anyone produced anything similar fro upsizing MS Access\n>> to use PostgreSQL as a backend?\n>> \n>> Regards\n>> \n>> John Ridout.\n>> \n>> \n>> ************\n>> \n", "msg_date": "Wed, 29 Sep 1999 09:59:05 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MS Access upsizing" }, { "msg_contents": "\nThanks for the link.\nUnfortunately the code is not open source.\nI'll put a little something together this\nweekend and stick it on my website.\nSee it we can't tempt a few MS Access users\nto go for PostgreSQL instead of MS SQLServer.\n\nRegards\n\nJohn.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Ansley, Michael\n> Sent: 29 September 1999 08:59\n> To: 'John Ridout'; [email protected]\n> Subject: RE: [HACKERS] MS Access upsizing\n> \n> \n> I believe Dave Page is thinking of putting that into pgAdmin, but you'll\n> need to check that with him.\n> Check on the pgAdmin page for features as well ->\n> http://www.pgadmin.freeserve.co.uk/\n> \n> MikeA\n> \n> >> -----Original Message-----\n> >> From: John Ridout [mailto:[email protected]]\n> >> Sent: Tuesday, September 28, 1999 7:31 PM\n> >> To: [email protected]\n> >> Subject: [HACKERS] MS Access upsizing\n> >> \n> >> \n> >> Hi,\n> >> \n> >> There is a very handy wizard for upsizing MS Access\n> >> to use MS SQL Server as a backend.\n> >> \n> >> Has anyone produced anything similar fro upsizing MS Access\n> >> to use PostgreSQL as a backend?\n> >> \n> >> Regards\n> >> \n> >> John Ridout.\n> >> \n> >> \n> >> ************\n> >> \n> \n> ************\n> \n", "msg_date": "Wed, 29 Sep 1999 10:19:15 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] MS Access upsizing" } ]
[ { "msg_contents": "Since the message didn't come back to me via the list (I sent it last\nTuesday), I resend it, this time to pgsql-hackers, because I think the\ndiscussion doesn't belong to PATCHES.\n\nBruce Momjian wrote:\n\n> Applied. Thanks. You can report any ecpg problems to the bugs list.\n\nOk, I'll flood the list ;-)\n\nWARNING: My patch breaks existing code!\nIf an ecpg program did not provide an indicator variable the library set \nthe variable to zero.\nNow it will return an error.\n\nI would also like to address a remaining problem:\nshould ecpglib touch the host variable if the result is NULL and an\nindicator\nvariable is given?\nEcpglib so far zeroed the variable, my patch doesn't touch the bool\nvariable.\nWhat does the standard say about this? (Adabas e.g. doesn't touch the\nvariable.)\n\nChristof\n", "msg_date": "Wed, 29 Sep 1999 10:56:07 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ECPGlib: NULL into bool, force indicator on NULL (2\n\tpatches)" } ]
[ { "msg_contents": "Excuse me for reposting,\n\nI just want to be sure my original posting doesn't lost.\nIs this a bug or feature ? \nAlso, it seems there is a limitation to a number of arguments.\n\n\tRegards,\n \tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Mon, 27 Sep 1999 15:26:08 -0400 (EDT)\nFrom: Bruce Momjian <[email protected]>\nTo: Oleg Bartunov <[email protected]>\nCc: [email protected]\nSubject: Re: [HACKERS] NULL as an argument in plpgsql functions\n\n> Hi,\n> \n> this select produces error message:\n> test=> select test2(NULL);\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n> \n\nNot sure how to pass NULL's into functions.\n\n\n> test2:\n> CREATE FUNCTION test2 (int4) RETURNS int4 AS '\n> Declare\n> keyval Alias For $1;\n> cnt int4;\n> Begin\n> Update hits set count = count +1 where msg_id = keyval;\n> return cnt; \n> End;\n> ' LANGUAGE 'plpgsql';\n> \n> When I do manually update\n> Update hits set count = count +1 where msg_id = NULL;\n> it works fine. What's the problem ?\n> \n> \tRegards,\n> \t\n> \t\tOleg\n> \n> \n> test=> \\d hits\n> Table = hits\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | msg_id | int4 | 4 |\n> | count | int4 | 4 |\n> +----------------------------------+----------------------------------+-------+\n> test=> select version();\n> version \n> ------------------------------------------------------------------\n> PostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n> (1 row)\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Wed, 29 Sep 1999 14:19:35 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions (fwd)" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Is this a bug or feature ? \n\nIt's a bug. Fixing it will require a wholesale code revision, however\n(see my prior postings about redesigning the function call interface).\n\nThis is something we need to do for 6.6, IMHO, not only because of\nthe NULL-argument issue but also because it will solve the portability\nproblems that are being created by the existing fmgr interface (Alpha\nbugs, need to dumb down to -O0 on some platforms, etc). I've been\ntrying to summon the will to get started on it, but other things keep\ngetting in the way...\n\n> Also, it seems there is a limitation to a number of arguments.\n\nYes, 8. 
I'm not planning to do anything about that in the near term.\nEven just making the limit configurable would be a lot of work :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 09:55:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions (fwd) " }, { "msg_contents": "Thanks Tom for explanation. I'm giving up :-)\nWill wait ... Just surprised how many things are covered from \ncasual glimpse.\nIt seems that Perl is the only panacea for all kind of problem.\n\nBTW, I think this bug must be written in documentation so people\nwill not spent time as me or at leaset in release notices.\n\n\tRegards,\n\n\t\tOleg\n\nOn Wed, 29 Sep 1999, Tom Lane wrote:\n\n> Date: Wed, 29 Sep 1999 09:55:15 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] NULL as an argument in plpgsql functions (fwd) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > Is this a bug or feature ? \n> \n> It's a bug. Fixing it will require a wholesale code revision, however\n> (see my prior postings about redesigning the function call interface).\n> \n> This is something we need to do for 6.6, IMHO, not only because of\n> the NULL-argument issue but also because it will solve the portability\n> problems that are being created by the existing fmgr interface (Alpha\n> bugs, need to dumb down to -O0 on some platforms, etc). I've been\n> trying to summon the will to get started on it, but other things keep\n> getting in the way...\n> \n> > Also, it seems there is a limitation to a number of arguments.\n> \n> Yes, 8. I'm not planning to do anything about that in the near term.\n> Even just making the limit configurable would be a lot of work :-(\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 29 Sep 1999 18:38:30 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NULL as an argument in plpgsql functions (fwd) " } ]
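[Editorial sketch] Until the function-manager rework lands, the practical consequence of the above is that NULL has to be handled before the plpgsql call rather than inside it. A minimal workaround, reusing the hits/test2 example from the thread (splitting the work into two statements is only an illustrative suggestion, not something from the original mails):

    -- Keep the NULL-key case in plain SQL, where it already works,
    -- and call the function only for keys known to be non-NULL.
    UPDATE hits SET count = count + 1 WHERE msg_id IS NULL;
    SELECT test2(42);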
[ { "msg_contents": "\n\nHi,\n\nI wrote new buildin functions (inspired with ...src/backend/utils/adt):\n\ntext *asterisk(text *string) - Returns string, with all letters is \n '*' and length of new string is equal \n to original string.\n\ntext *pgcrypt(text *string) - Returns string, cryped via DES crypt(3).\n\n\nDo somebody want this func. ? I try write more string func. if will\ninterest.. (in PSQL is not strcat(), md5() ...etc.).\n\n\t\t\t\t\tZakkr\n\nPS. sorry I new in PSQL development, but PSQL programing is very interest \nfor me :-))\n\n", "msg_date": "Wed, 29 Sep 1999 14:32:00 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "string function" }, { "msg_contents": "\n\nOn Wed, 29 Sep 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake Zakkr\n> > text *pgcrypt(text *string) - Returns string, cryped via DES crypt(3).\n> > \n> > \n> > Do somebody want this func. ? I try write more string func. if will\n> > interest.. (in PSQL is not strcat(), md5() ...etc.).\n> \n> Careful. This could affect the ability to distribute the code world wide.\n> I know that the server is in Canada but we have mirrors in the US.\n\nYes, I know unimaginable US restrictions... pgcrypt() is experiment for me\n(I need it in my project) - I want write other func. for world wide. \n\nTry somebody implemeny strftime(),strcat() to PSQL ? - \n(is any problem in PSQL, that not exist more string/date/..etc functions or \nis problem with programmer absence (only) ?)\n\n\t\t\t\t\t\t\tZakkr\n\n", "msg_date": "Wed, 29 Sep 1999 15:31:29 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "Thus spake Zakkr\n> text *pgcrypt(text *string) - Returns string, cryped via DES crypt(3).\n> \n> \n> Do somebody want this func. ? I try write more string func. if will\n> interest.. (in PSQL is not strcat(), md5() ...etc.).\n\nCareful. This could affect the ability to distribute the code world wide.\nI know that the server is in Canada but we have mirrors in the US.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 29 Sep 1999 10:07:14 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "> Try somebody implemeny strftime(),strcat() to PSQL ? -\n> (is any problem in PSQL, that not exist more string/date/..etc functions or\n> is problem with programmer absence (only) ?)\n\nIt is some combination of: 1) a mild reluctance to add too many\nspecialty functions, 2) no specific need, at least from a programmer\nwho can make it happen, 3) unclear tradeoffs between bigger hammers\ncausing more damage than they help, and 4) existing functionality\nwhich already does it. Oh, and 5) simply that no one has done it yet!\n\nIn (almost) all cases, extensions can be put into the contrib area,\nand that is a good way to test out new functionality. In some cases,\nnew functionality should go into the main tree directly. Of the cases\nyou just mentioned:\n\n1) strftime() allows arbitrary formatting of date/time strings.\nCertainly useful, though one can easily format a string that is no\nlonger recognizable to Postgres as a date which is one reason why I\ndidn't code it up previously. 
Perhaps we should focus on an\nOracle-compatible routine for this; I think it uses tochar() to do\nformatting. Someone recently volunteered to send in code which could\nbe used for this, but I haven't seen the code yet :(\n\n2) strcat() concatenates two strings? There is a full set of functions\nwhich do this, and they are used to support the SQL92 concatenation\noperator \"||\".\n\nBut in general, more functionality is A Good Thing, and discussing it\non the hackers list is a good way to get people used to a new idea, or\nto evolve a new idea into something people like even better.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 15:30:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "\n\n> 1) strftime() allows arbitrary formatting of date/time strings.\n> Certainly useful, though one can easily format a string that is no\n> longer recognizable to Postgres as a date which is one reason why I\n> didn't code it up previously. Perhaps we should focus on an\n> Oracle-compatible routine for this; I think it uses tochar() to do\n> formatting. Someone recently volunteered to send in code which could\n> be used for this, but I haven't seen the code yet :(\n\nIf I good understand you, you don't reject strftime idea. I try it..\n \n> 2) strcat() concatenates two strings? There is a full set of functions\n> which do this, and they are used to support the SQL92 concatenation\n> operator \"||\".\n\n :-)) yes '||' is good. I said it bad. I think exapmle inetcat() (I\nprogramming this for my project):\n\nselect inetcat('160.217.1.0/24', 50);\n inetcat\n------------\n160.217.1.50\n\nIn my prev. letter I said it generally. (I'am finding a more function \nin PSQL.. and I can try write it.).\n\n\t\t\t\t\t\tZakkr \n\n", "msg_date": "Wed, 29 Sep 1999 17:50:25 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "> If I good understand you, you don't reject strftime idea. I try it..\n\nBut I was trying to nudge you to look at a slightly different idea\nwhich would be compatible with Oracle, for no particularly good reason\nother than that would help some folks when they try to port apps over\nto Postgres.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Sep 1999 06:08:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "\n\nOn Thu, 30 Sep 1999, Thomas Lockhart wrote:\n\n> > If I good understand you, you don't reject strftime idea. I try it..\n> \n> But I was trying to nudge you to look at a slightly different idea\n> which would be compatible with Oracle, for no particularly good reason\n> other than that would help some folks when they try to port apps over\n> to Postgres.\n\n My first idea was write to PSQL strftime full compatible with 'C' \nstrftime(). Now I see Oracle documentation for TO_CHAR(), ..hmm it is not\neasy, it is very specific (unique) function, but I agree - your idea \n(compatible with Oracle) is better :-)). 
I try to_char()..\n\n Think for your time Thomas.\n\n\t\t\t\t\t\tZakkr\n\n", "msg_date": "Thu, 30 Sep 1999 10:19:00 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] string function" }, { "msg_contents": "\nHi,\n\nthis is my new (experimental) TO_CHAR() function (compatible with oracle), \nit is available on: ftp://ftp2.zf.jcu.cz/users/zakkr/pg/TO_CHAR-0.1.tar.gz.\n\nSee example:\n\n=======\nabil=> select to_char('now', 'HH:MI:SS Day MON CC');\nto_char\n------------------------\n20:12:02 Thursday Sep 19\n\nabil=> select to_char('now', 'MM MON Month MONTH YYYY Y,YYY YYY YY Y');\nto_char\n----------------------------------------------\n09 Sep September SEPTEMBER 1999 1,999 999 99 9\n\nabil=> select to_char('now', 'DDD D WW SSSS');\n to_char\n--------------\n273 4 39 72810\n\nabil=> select to_char('now', 'hello year YYYY');\nto_char\n---------------\nhello year 1999\n\n========\n\t\n\tAny comments ?\n\t\t\t\t\t\tZakkr\n\t\t\t\t\t\t \n\n", "msg_date": "Thu, 30 Sep 1999 19:13:12 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "TO_CHAR()" }, { "msg_contents": "> this is my new (experimental) TO_CHAR() function (compatible with oracle),\n> it is available on: ftp://ftp2.zf.jcu.cz/users/zakkr/pg/TO_CHAR-0.1.tar.gz.\n> Any comments ?\n\nNice! So, there is a routine to go the other way in Oracle\n(format()??) and if we have both then we're cookin'\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 01 Oct 1999 02:57:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TO_CHAR()" }, { "msg_contents": "\n\nOn Fri, 1 Oct 1999, Thomas Lockhart wrote:\n\n> > this is my new (experimental) TO_CHAR() function (compatible with oracle),\n> > it is available on: ftp://ftp2.zf.jcu.cz/users/zakkr/pg/TO_CHAR-0.1.tar.gz.\n> > Any comments ?\n> \n> Nice! So, there is a routine to go the other way in Oracle\n> (format()??) and if we have both then we're cookin'\n\nThank! But.. sorry, I don't understand you. What other way in the Oracle\n(format()??) ?\n\nWhat make with this code next? Is it interesting for developers (hmm, to this\ndiscussion join you (Thomas) and me only, but others probably needn't\nTO_CHAR, TO_NUMBER, TO_DATE) ..? \n\n\t\t\t\t\t\tZakkr\n\n", "msg_date": "Fri, 1 Oct 1999 09:43:14 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: TO_CHAR()" }, { "msg_contents": "> > > this is my new (experimental) TO_CHAR() function (compatible with oracle),\n> > > it is available on: ftp://ftp2.zf.jcu.cz/users/zakkr/pg/TO_CHAR-0.1.tar.gz.\n> > > Any comments ?\n> > Nice! So, there is a routine to go the other way in Oracle\n> > (format()??) and if we have both then we're cookin'\n> Thank! But.. sorry, I don't understand you. What other way in the Oracle\n> (format()??) ?\n\nAh, something to go from a random character string to an internal\ndate/time type. We have a fairly generic way to do this already, but\nsince to_char() can insert random garbage at the user's behest then it\nwould be nice to have a related routine which can be told how to\ndecode a string containing random garbage. I'm pretty sure that Oracle\nhas such a beast, but I don't have the docs. I would think that it\ncould be done as a thin layer on top of datetime_in().\n\n> What make with this code next? 
Is it interesting for developers (hmm, to this\n> discussion join you (Thomas) and me only, but others probably needn't\n> TO_CHAR, TO_NUMBER, TO_DATE) ..?\n\nPeople have requested to_char(), or at least inquired about it, though\nof course there are always ways to work around not having it. After\nall, it *is* non-standard ;) But we already have some Oracle\ncompatibility functions, and a few more won't hurt.\n\nThere are two possibilities:\n\n1) we incorporate it into the main tree\n2) we distribute it as a contrib package\n\nI'd prefer the former, though right now the code has problems since it\nconverts input to timestamp to take advantage of localtime(). Can you\nlook at, and perhaps use directly, datetime2tm()? That should get you\nthe structure you need to work with, and it is not limited to just\nUnix system time ranges. Just be aware that the year field contains a\nreal year, not year modulo 1900 as in a real Unix tm structure.\n\nLet me know if this is possible.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 01 Oct 1999 14:54:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TO_CHAR()" }, { "msg_contents": "> > discussion join you (Thomas) and me only, but others probably needn't\n> > TO_CHAR, TO_NUMBER, TO_DATE) ..?\n>\n> People have requested to_char(), or at least inquired about it, though\n> of course there are always ways to work around not having it. After\n> all, it *is* non-standard ;) But we already have some Oracle\n> compatibility functions, and a few more won't hurt.\n>\n> There are two possibilities:\n>\n> 1) we incorporate it into the main tree\n> 2) we distribute it as a contrib package\n\n If incorporating into main tree, don't forget that TO_CHAR()\n must also be capable to handle NUMERIC/DECIMAL/INTEGER with a\n rich set of fomatting styles. Actually I'm in doubt if you\n both are a little too much focusing on DATE/TIME.\n\n This means that there could be different input arguments\n (type and number!) to TO_CHAR().\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 1 Oct 1999 18:00:09 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TO_CHAR()" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Thomas\" == Thomas Lockhart <[email protected]> writes:\n\n Thomas> Ah, something to go from a random character string to an\n Thomas> internal date/time type. [...] I'm pretty sure that Oracle\n Thomas> has such a beast, but I don't have the docs.\n\nJust FYI, it's to_date().\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3a\nCharset: noconv\nComment: Processed by Mailcrypt 3.5.4, an Emacs/PGP interface\n\niQCVAwUBN/VmwOoW38lmvDvNAQElOQP/dFLgEyjpuKrtF9Ahu682joAegub4TbyW\nRJUT8oVoMgchw0iIhZ4d5y6X7PNYc0ynJfdd5DmIawJuCdw79fvmpQrl+XVkft33\n78mTJFkSyilqYfl/uT2zq5i+P/k6ARZYYJ+OpvUIJG0ttuDit5Xf/LRIM3N+UJ6l\nmATOFpUCn9E=\n=kmuG\n-----END PGP SIGNATURE-----\n", "msg_date": "01 Oct 1999 21:58:25 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TO_CHAR()" }, { "msg_contents": "> If incorporating into main tree, don't forget that TO_CHAR()\n> must also be capable to handle NUMERIC/DECIMAL/INTEGER with a\n> rich set of fomatting styles. Actually I'm in doubt if you\n> both are a little too much focusing on DATE/TIME.\n> This means that there could be different input arguments\n> (type and number!) to TO_CHAR().\n\nNot a problem. In some cases, we are only an alias away from having it\n(e.g. to_char(int) == text(int4)). Not sure about *all* of the others,\nbut the ugliest will be the to_char(datetime) and to_date(text,format)\nstuff, so that is a good place to start.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 02 Oct 1999 14:48:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: TO_CHAR()" } ]
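[Editorial sketch] To make the "other direction" concrete: the idea is that to_char() and a matching to_date() would share one pattern language. A sketch only — to_date() did not exist yet at this point, and the exact pattern set shown here is illustrative, not a statement of what the experimental code supports:

    select to_char('now', 'YYYY-MM-DD HH:MI:SS');
    select to_date('1999-10-01 14:30:00', 'YYYY-MM-DD HH:MI:SS');
    -- the second call would be the thin layer over datetime_in()
    -- suggested above, decoding a string by the same pattern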
[ { "msg_contents": "postgres=> select * from t4;\nNOTICE: Adding missing FROM-clause entry for table t4\nm|n\n-+-\n...\n\nI updated my current tree and now this message comes out on even\nsimple queries. Is it supposed to be there? If so, why??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 13:47:16 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "New notices?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> postgres=> select * from t4;\n> NOTICE: Adding missing FROM-clause entry for table t4\n\nHoo, boy. I think your change didn't quite work right, Bruce...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 10:03:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New notices? " }, { "msg_contents": "> postgres=> select * from t4;\n> NOTICE: Adding missing FROM-clause entry for table t4\n> m|n\n> -+-\n> ...\n> \n> I updated my current tree and now this message comes out on even\n> simple queries. Is it supposed to be there? If so, why??\n\nStrange. I don't get it:\n\ntest=> select * from pg_language;\nlanname |lanispl|lanpltrusted|lanplcallfoid|lancompiler\n--------+-------+------------+-------------+--------------\ninternal|f |f | 0|n/a\nlisp |f |f | 0|/usr/ucb/liszt\nC |f |f | 0|/bin/cc\nsql |f |f | 0|postgres\n(4 rows)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 10:14:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New notices?" }, { "msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > postgres=> select * from t4;\n> > NOTICE: Adding missing FROM-clause entry for table t4\n> \n> Hoo, boy. I think your change didn't quite work right, Bruce...\n\nDo you see it there too? I can't see it here. Let me cvs update.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 10:15:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New notices?" }, { "msg_contents": "> > I updated my current tree and now this message comes out on even\n> > simple queries. Is it supposed to be there? If so, why??\n> Strange. I don't get it:\n\nI'll bet it is coming from my using expandAll() to support the new\njoin syntax, and as a kludge I am feeding it a dummy parse state as a\nplaceholder. As I try to implement table and column aliases, I'll be\nmucking around in all of these areas. It isn't at all clear to me from\nthe notes or from the checks that there was some specific case this\nwas intended to catch...\n\nIn the meantime, I've bracketed my local copy of the code:\n\n#ifdef EMIT_ANNOYING_MESSAGES\n elog(NOTICE,\"Adding missing FROM-clause entry%s for table %s\",\n pstate->parentParseState != NULL ? 
\" in subquery\" : \"\",\n refname);\n#endif\n\n;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 15:07:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New notices?" }, { "msg_contents": "> > > I updated my current tree and now this message comes out on even\n> > > simple queries. Is it supposed to be there? If so, why??\n> > Strange. I don't get it:\n> \n> I'll bet it is coming from my using expandAll() to support the new\n> join syntax, and as a kludge I am feeding it a dummy parse state as a\n> placeholder. As I try to implement table and column aliases, I'll be\n> mucking around in all of these areas. It isn't at all clear to me from\n> the notes or from the checks that there was some specific case this\n> was intended to catch...\n\nThis was added to address the long-standing error reports of problems\nwhen addressing aliased and non-aliased columns in the same query. We\ndon't issue an error, but go ahead and auto-create a from entry, very\nnon-standard SQL:\n\n\tSELECT tab.* FROM tab t\n\nTom Lane and I agreed we need to issue a NOTICE for this type of\nauto-FROM creation.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 11:39:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New notices?" }, { "msg_contents": "> This was added to address the long-standing error reports of problems\n> when addressing aliased and non-aliased columns in the same query. We\n> don't issue an error, but go ahead and auto-create a from entry, very\n> non-standard SQL:\n> SELECT tab.* FROM tab t\n> Tom Lane and I agreed we need to issue a NOTICE for this type of\n> auto-FROM creation.\n\nOK, but I may happily break it when implementing table and column\naliases for join syntax. Don't know yet what the ramifications will\nbe...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 29 Sep 1999 15:57:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New notices?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Tom Lane and I agreed we need to issue a NOTICE for this type of\n>> auto-FROM creation.\n\n> OK, but I may happily break it when implementing table and column\n> aliases for join syntax. Don't know yet what the ramifications will\n> be...\n\nWell, it's certainly a second-order feature. How about you leave the\nmessage turned off until the dust has settled from JOIN, and then we\ncan see what it takes to make it work right...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 19:38:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New notices? " }, { "msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> >> Tom Lane and I agreed we need to issue a NOTICE for this type of\n> >> auto-FROM creation.\n> \n> > OK, but I may happily break it when implementing table and column\n> > aliases for join syntax. Don't know yet what the ramifications will\n> > be...\n> \n> Well, it's certainly a second-order feature. 
How about you leave the\n> message turned off until the dust has settled from JOIN, and then we\n> can see what it takes to make it work right...\n\nYes, I think that's the plan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 21:15:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New notices?" } ]
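[Editorial sketch] For reference, the kind of statement the NOTICE is meant to flag, using the tab/t example mentioned above:

    SELECT tab.* FROM tab t;   -- unaliased "tab" forces an implicit FROM entry,
                               -- effectively turning this into a self-join;
                               -- this is the case that should draw the NOTICE
    SELECT t.*   FROM tab t;   -- the intended, notice-free spelling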
[ { "msg_contents": "Whilst chasing down a few more aggregate-related bug reports,\nI realized that the planner is doing the Wrong Thing when\na query's FROM clause mentions tables that are not used\nelswhere in the query. For example, I make a table with\nthree rows:\n\nplay=> select x.f1 from x;\nf1\n--\n 1\n 2\n 3\n(3 rows)\n\nNow:\n\nplay=> select x.f1 from x, x as x2;\nf1\n--\n 1\n 2\n 3\n(3 rows)\n\nIt seems to me that the latter query must yield 9 rows (three\noccurrences of each value) to satisfy the SQL spec. The spec defines\nthe result of a two-query FROM clause to be the Cartesian product of the\ntwo tables, period. It doesn't say anything about \"only if one or more\ncolumns of each table are actually used somewhere\".\n\nThe particular case that led me into this was for an aggregate:\n\nplay=> select count(f1) from x;\ncount\n-----\n 3\n(1 row)\n\nplay=> select count(1) from x;\ncount\n-----\n 1\n(1 row)\n\nNow IMHO count(1) should yield the same count as for any other non-null\nexpression, ie, the number of rows in the source table, because the spec\neffectively says \"evaluate the expression for each row and count the\nnumber of non-null results\". The reason you get 1 here is that the\nplanner is dropping the \"unreferenced\" x, deciding that the query looks\nlike \"select 2+2;\", and generating a single-row Result plan.\n\nBefore I look into ways of fixing this, is there anyone who wants\nto argue that the current behavior is correct? It looks all wrong\nto me, but...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 10:34:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "Tom Lane wrote:\n\n> [...]\n>\n> Now:\n>\n> play=> select x.f1 from x, x as x2;\n> f1\n> --\n> 1\n> 2\n> 3\n> (3 rows)\n>\n> It seems to me that the latter query must yield 9 rows (three\n> occurrences of each value) to satisfy the SQL spec. The spec defines\n> the result of a two-query FROM clause to be the Cartesian product of the\n> two tables, period. It doesn't say anything about \"only if one or more\n> columns of each table are actually used somewhere\".\n\n Caution here!\n\n After rewriting there can be many unused rangetable entries\n floating around. Especially if you SELECT from a view, the\n view's relation is still mentioned in the rangetable.\n\n If you now build the cartesian product over all relations\n (including the EMPTY view relation), you'll allways get NO\n rows.\n\n So when touching this, make sure the rewriter removes\n properly all rewritten RTE's and those for NEW and OLD.\n Removing RTE's will then require changing varno's allover\n again. Are you sure you want to open this can of worms?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 16:52:04 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>> It seems to me that the latter query must yield 9 rows (three\n>> occurrences of each value) to satisfy the SQL spec. 
The spec defines\n>> the result of a two-query FROM clause to be the Cartesian product of the\n>> two tables, period. It doesn't say anything about \"only if one or more\n>> columns of each table are actually used somewhere\".\n\n> Caution here!\n\n> After rewriting there can be many unused rangetable entries\n> floating around. Especially if you SELECT from a view, the\n> view's relation is still mentioned in the rangetable.\n\nI was thinking of forcing rangetable entries that are marked as\n'inFromCl' to be included in the planner's target relation set,\nbut those not so marked would only get added if referenced, same as now.\nDo you think that will not work?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 11:04:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no? " }, { "msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> >> It seems to me that the latter query must yield 9 rows (three\n> >> occurrences of each value) to satisfy the SQL spec. The spec defines\n> >> the result of a two-query FROM clause to be the Cartesian product of the\n> >> two tables, period. It doesn't say anything about \"only if one or more\n> >> columns of each table are actually used somewhere\".\n>\n> > Caution here!\n>\n> > After rewriting there can be many unused rangetable entries\n> > floating around. Especially if you SELECT from a view, the\n> > view's relation is still mentioned in the rangetable.\n>\n> I was thinking of forcing rangetable entries that are marked as\n> 'inFromCl' to be included in the planner's target relation set,\n> but those not so marked would only get added if referenced, same as now.\n> Do you think that will not work?\n\n I'm not sure and don't have the time to dive into. Just\n wanted to point on an area of (maybe unexpected) side\n effects.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 17:16:18 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "At 10:34 AM 9/29/99 -0400, Tom Lane wrote:\n\n>play=> select x.f1 from x, x as x2;\n>f1\n>--\n> 1\n> 2\n> 3\n>(3 rows)\n>\n>It seems to me that the latter query must yield 9 rows (three\n>occurrences of each value) to satisfy the SQL spec. The spec defines\n>the result of a two-query FROM clause to be the Cartesian product of the\n>two tables, period. It doesn't say anything about \"only if one or more\n>columns of each table are actually used somewhere\".\n\nAFAIK, this is correct. 
For the heck of it, I tried it in \nOracle, and indeed the full cartesian product's returned:\n\n\nSQL> select x2.i from x, x x2;\n\n I\n----------\n 1\n 1\n 1\n 2\n 2\n 2\n 3\n 3\n 3\n\n9 rows selected.\n\n>play=> select count(1) from x;\n>count\n>-----\n> 1\n>(1 row)\n\nAgain, Oracle 8:\n\nSQL> select count(1) from x, x x2;\n\n COUNT(1)\n----------\n 9\n\nSQL> \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 29 Sep 1999 08:21:58 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "I wrote:\n>>>>> It seems to me that the latter query must yield 9 rows (three\n>>>>> occurrences of each value) to satisfy the SQL spec. The spec defines\n>>>>> the result of a two-query FROM clause to be the Cartesian product of the\n>>>>> two tables, period. It doesn't say anything about \"only if one or more\n>>>>> columns of each table are actually used somewhere\".\n\nOn further investigation, it turns out that there is actually code in\nthe planner that tries to do this right! (and at one time probably\ndid do it right...) add_missing_vars_to_tlist() in initsplan.c has the\nspecific mission of making sure any tables that are mentioned only in\nthe FROM clause get included into the planner's target relation set.\nUnfortunately, it's been dead code for a while, because there are two\ndifferent upstream routines that do the wrong thing.\n\n50% of the problem is in query_planner, which tries to short-circuit\nthe whole planning process if it doesn't see any vars at all in the\ntargetlist or qual. I think this code can simply be diked out as\nmisguided optimization, although there might be a couple of small\nchanges needed elsewhere to handle the case where the target relation\nset is truly empty. (But this error is not what's preventing\n\"SELECT foo.f1 FROM foo, bar\" from generating a join between foo and\nbar; add_missing_vars_to_tlist() could still fire, so long as at least\none var appears in the query.)\n\n\[email protected] (Jan Wieck) writes:\n>>>> Caution here!\n>> \n>>>> After rewriting there can be many unused rangetable entries\n>>>> floating around. Especially if you SELECT from a view, the\n>>>> view's relation is still mentioned in the rangetable.\n\nThe other 50% of the problem is that the rewriter is overly enthusiastic\nabout clearing the inFromCl flag in order to prevent views from being\ntaken as valid join targets. rewriteHandler.c has two different\nroutines that will clear inFromCl flags, and they're both bogus:\n\n1. fireRIRrules will zap a table's inFromCl flag if the table is not\nreferenced by any Var in the parsetree, *whether or not the table\nactually has any rules*. This is why add_missing_vars_to_tlist() is\ncurrently dead code in all cases. It's wrong in another way too: if\nthe table has no referencing vars, fireRIRrules doesn't look for rules\napplicable to the table. That's wrong for the same reasons that\nremoving the table from the join set is wrong: it can still affect\nthe results, so the lookup and substitution should still occur.\n\n2. If ApplyRetrieveRule is fired, it resets the inFromCl flag on *all*\ntables in the query, not only the one being substituted for. 
This is\njust plain wrong.\n\n\nI believe the right thing to do is remove ApplyRetrieveRule's change\nof inFromCl entirely, and to modify fireRIRrules so that it only clears\na table's inFromCl flag if it finds an ON SELECT DO INSTEAD rule for it.\nThat will remove views from the join set without side-effects on the\nbehavior for normal tables. (Also, fireRIRrules should try to apply\nrules for a table whether it finds references to it or not; which means\nthat rangeTableEntry_used() isn't needed at all.)\n\nJan, what do you think of this? In particular, what do you think should\nhappen in the following cases:\n 1. Table has an ON SELECT *not* INSTEAD rule.\n 2. There is an ON SELECT (with or without INSTEAD) rule for one or\n more fields of the table, but not for the whole table.\n\nI'm not at all clear on the semantics of those kinds of rules, so I\ndon't know if they should remove the original table from the join set\nor not. (I'm also confused about ON SELECT INSTEAD where the INSTEAD\nis not a select; is that even legal?)\n\nAlso, would it be a good idea to propagate a source view's inFromCl\nflag into the substituted tables? (That is, when firing a select rule\nfor a table that wasn't marked inFromCl to begin with, clear the\ninFromCl flags of all RTEs that it adds to the query.) I am not sure\nif this is appropriate or not.\n\n\nActually, it would probably be cleanest if we split the functions of\ninFromCl into two separate flags, say \"inFromCl\" and \"inJoinSet\".\nThe point of inFromCl is that when we add an implicit RTE using the\nPostquel extension, it shouldn't suddenly become part of the available\nnamespace for unqualified column names processed later in the query.\nSo inFromCl controls the use of the RTE to look up unqualified names.\nHowever, if we believe that the planner should subsequently treat that\nimplicit RTE as if it were a normal join target, then we need a separate\nflag that carries that information.\n\nThis didn't use to be an issue, because the implicit RTE could only be\nthere if there were a Var reference to it; add_missing_vars_to_tlist()\nshould never need to do anything for it, because the RTE would have been\nadded to the join set already because of its Var, right? Well, that\n*used* to be true up till last week. Now that we have a constant-\nexpression folder that understands about boolean and case expression\nshort-circuiting, it is possible for Var nodes to disappear from the\ntree during optimization. It would be a bad thing if that changed the\njoin semantics. So, I think we need a flag that can force RTEs\nto be included in the planner's join set regardless of whether their\nVars survive the optimizer. And that can't be the same as inFromCl,\nor there's no such thing as an implicit RTE.\n\nWith this split, inFromCl would be looked at only by the parser code\nthat resolves unqualified names, and inJoinSet would be looked at\nby add_missing_vars_to_tlist(). The rewriter's machinations would\nonly need to consider whether to set/clear inJoinSet or not.\n\n(Thomas, does any of this strike a chord with your inner/outer join\nstuff?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Oct 1999 21:46:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no? 
" }, { "msg_contents": "> Actually, it would probably be cleanest if we split the functions of\n> inFromCl into two separate flags, say \"inFromCl\" and \"inJoinSet\".\n> The point of inFromCl is that when we add an implicit RTE using the\n> Postquel extension, it shouldn't suddenly become part of the available\n> namespace for unqualified column names processed later in the query.\n> So inFromCl controls the use of the RTE to look up unqualified names.\n> However, if we believe that the planner should subsequently treat that\n> implicit RTE as if it were a normal join target, then we need a separate\n> flag that carries that information.\n\nTwo different flags seems like the perfect way. Let me know if you need\nany help adding the flag. I would be glad to do it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 00:04:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "Tom Lane wrote:\n\n> [...]\n>\n> [email protected] (Jan Wieck) writes:\n> >>>> Caution here!\n> >>\n> >>>> After rewriting there can be many unused rangetable entries\n> >>>> floating around. Especially if you SELECT from a view, the\n> >>>> view's relation is still mentioned in the rangetable.\n>\n> The other 50% of the problem is that the rewriter is overly enthusiastic\n> about clearing the inFromCl flag in order to prevent views from being\n> taken as valid join targets. rewriteHandler.c has two different\n> routines that will clear inFromCl flags, and they're both bogus:\n>\n> [...]\n>\n> Jan, what do you think of this? In particular, what do you think should\n> happen in the following cases:\n> 1. Table has an ON SELECT *not* INSTEAD rule.\n> 2. There is an ON SELECT (with or without INSTEAD) rule for one or\n> more fields of the table, but not for the whole table.\n>\n> I'm not at all clear on the semantics of those kinds of rules, so I\n> don't know if they should remove the original table from the join set\n> or not. (I'm also confused about ON SELECT INSTEAD where the INSTEAD\n> is not a select; is that even legal?)\n>\n> Also, would it be a good idea to propagate a source view's inFromCl\n> flag into the substituted tables? (That is, when firing a select rule\n> for a table that wasn't marked inFromCl to begin with, clear the\n> inFromCl flags of all RTEs that it adds to the query.) I am not sure\n> if this is appropriate or not.\n\n Don't worry about it, those rules cannot occur and I'm sure\n we'll never reincarnate them in the future.\n\n The only allowed rule ON SELECT is one that\n\n - IS INSTEAD\n - is named \"_RET<relation-name>\"\n - has one action which must be another SELECT with a\n targetlist producing exactly the relations attribute list.\n\n Again: If a relation has a rule ON SELECT, it IS A VIEW. No\n relation can have more that one rule ON SELECT.\n\n I've disabled all the other cases in RewriteDefine() on v6.4\n - I think - because of the unclear semantics. Rules ON SELECT\n where planned to have different actions or rewrite single\n attributes too. 
But ON SELECT rules must be applied on all\n relations which get scanned, so if there would be a rule ON\n SELECT that inserts some logging into another relation, this\n would actually occur ON UPDATE and ON DELETE to it's relation\n too because to do the UPDATE/DELETE it's relation has to be\n scanned.\n\n I think it's correct to MOVE the inFromCl from the relation\n rewritten to the join relations coming with the view's rule.\n Thus clear it on the RTE rewritten and on the first two of\n the rules (which are allways NEW and OLD for all rules). Then\n set all other RTE's which come from the view to the former\n inFromCl state of the rewritten RTE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 5 Oct 1999 11:43:25 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no?" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I think it's correct to MOVE the inFromCl from the relation\n> rewritten to the join relations coming with the view's rule.\n> Thus clear it on the RTE rewritten and on the first two of\n> the rules (which are allways NEW and OLD for all rules). Then\n> set all other RTE's which come from the view to the former\n> inFromCl state of the rewritten RTE.\n\nOK, I will do that. My first-cut code (attached, please look it over)\npasses regress test without it, but we know how much that's worth ;-).\n\nActually I think moving inJoinSet is now the important thing...\n\n\t\t\tregards, tom lane\n\n\nAll mention of inFromCl/inJoinSet removed from ApplyRetrieveRule;\nfireRIRrules loop looks like:\n\n rt_index = 0;\n while (rt_index < length(parsetree->rtable))\n {\n ++rt_index;\n\n rte = nth(rt_index - 1, parsetree->rtable);\n\n /*\n * If the table is not one named in the original FROM clause\n * then it must be referenced in the query, or we ignore it.\n * This prevents infinite expansion loop due to new rtable\n * entries inserted by expansion of a rule.\n */\n if (! rte->inFromCl && rt_index != parsetree->resultRelation &&\n ! rangeTableEntry_used((Node *) parsetree, rt_index, 0))\n {\n /* Make sure the planner ignores it too... */\n rte->inJoinSet = false;\n continue;\n }\n\n rel = heap_openr(rte->relname, AccessShareLock);\n rules = rel->rd_rules;\n if (rules == NULL)\n {\n heap_close(rel, AccessShareLock);\n continue;\n }\n\n locks = NIL;\n\n /*\n * Collect the RIR rules that we must apply\n */\n for (i = 0; i < rules->numLocks; i++)\n {\n rule = rules->rules[i];\n if (rule->event != CMD_SELECT)\n continue;\n\n if (rule->attrno > 0)\n {\n /* per-attr rule; do we need it? */\n if (! 
attribute_used((Node *) parsetree,\n rt_index,\n rule->attrno, 0))\n continue;\n }\n else\n {\n /* Rel-wide ON SELECT DO INSTEAD means this is a view.\n * Remove the view from the planner's join target set,\n * or we'll get no rows out because view itself is empty!\n */\n if (rule->isInstead)\n rte->inJoinSet = false;\n }\n\n locks = lappend(locks, rule);\n }\n\n /*\n * Check permissions\n */\n checkLockPerms(locks, parsetree, rt_index);\n\n /*\n * Now apply them\n */\n foreach(l, locks)\n {\n rule = lfirst(l);\n\n RIRonly.event = rule->event;\n RIRonly.attrno = rule->attrno;\n RIRonly.qual = rule->qual;\n RIRonly.actions = rule->actions;\n\n parsetree = ApplyRetrieveRule(parsetree,\n &RIRonly,\n rt_index,\n RIRonly.attrno == -1,\n rel,\n &modified);\n }\n\n heap_close(rel, AccessShareLock);\n }\n", "msg_date": "Tue, 05 Oct 1999 10:08:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Planner drops unreferenced tables --- bug, no? " } ]
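[Editorial sketch] A compact illustration of the two behaviours discussed in this thread (table and view names invented): after rewriting, a view's own empty relation must drop out of the planner's join set, while ordinary FROM-only tables must stay in it.

    CREATE TABLE x (f1 int4);        -- assume three rows, as in the example above
    CREATE VIEW  v AS SELECT f1 FROM x;

    SELECT f1 FROM v;                -- must plan as a scan of x only; if the empty
                                     -- view relation stayed in the join set, the
                                     -- Cartesian product would yield zero rows
    SELECT count(1) FROM x, x x2;    -- must keep both copies of x and return 9,
                                     -- matching the Oracle result quoted above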
[ { "msg_contents": "What does PENDANT imply/mean in terms of RI? I could figure out all of the\nother syntax.\n\n> -----Original Message-----\n> From:\[email protected] [SMTP:[email protected]]\n> Sent:\tWednesday, September 29, 1999 9:03 AM\n> To:\[email protected]\n> Cc:\[email protected]; [email protected]; [email protected]\n> Subject:\tRe: RI and PARSER (was: Re: [HACKERS] RI status report #1)\n> \n> Thomas Lockhart wrote:\n> \n> >\n> > > > > CONSTRAINTS\n> > > > > DEFERRABLE\n> > > > > DEFERRED\n> > > > > IMMEDIATE\n> > > > > INITIALLY\n> > > > > PENDANT\n> > > > > RESTRICT\n> > > O.K. - I was able to add them all to ColId without conflicts\n> > > for now. Let's see what happens after adding the syntax for\n> > > CREATE CONSTRAINT TRIGGER.\n> >\n> > Right. Anything which causes trouble can be demoted to ColLabel.\n> >\n> > > I'm not sure which of them are SQL92 or SQL3, at least they\n> > > are all SQL3 \"reserved\" words according to the SQL3 draft.\n> >\n> > According to my Date and Darwen (which is mostly SQL92), all of these\n> > except \"PENDANT\" are SQL92 reserved words. PENDANT is not mentioned,\n> > so is presumably an SQL3-ism.\n> >\n> > Do you want me to update syntax.sgml?\n> \n> Please be so kind. CREATE CONSTRAINT TRIGGER did not mess up\n> anything, so all these new reserved words appear in ColId and\n> are still available.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n> ************\n", "msg_date": "Wed, 29 Sep 1999 11:31:58 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RI and PARSER (was: Re: [HACKERS] RI status report #1)" }, { "msg_contents": ">\n> What does PENDANT imply/mean in terms of RI? I could figure out all of the\n> other syntax.\n\n As far as I understood it:\n\n CREATE TABLE t1 (a1 integer PRIMARY KEY NOT NULL, b1 text);\n\n CREATE TABLE t2 (a2 integer NOT NULL, b2 text,\n CONSTRAINT check_a2\n FOREIGN KEY (a2)\n REFERENCES t1 (a1)\n PENDANT\n ON DELETE CASCADE\n ON UPDATE CASCADE\n INITIALLY DEFERRED);\n\n This setup requires, that for each key in t1.a1 at least one\n reference from t2.a2 MUST exist. So this is a cyclic\n integrity check. I'm not sure if removing the last reference\n automatically removes the PK row from t1 or if it raises an\n error. Can someone clearify here?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 19:25:32 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: RI and PARSER (was: Re: [HACKERS] RI status report #1)" } ]
[ { "msg_contents": "ATTENTION: catalog changes - initdb required!\n\n General support for deferred constraint triggers is finished\n and committed to CURRENT tree.\n\n\n Implemented so far:\n\n CREATE CONSTRAINT TRIGGER <constraint_name>\n AFTER <event> ON <relation_name>\n [ FROM <referencing_relation_name> ]\n [ [ NOT ] DEFERRABLE ]\n [ INITIALLY { IMMEDIATE | DEFERRED } ]\n FOR EACH ROW EXECUTE PROCEDURE <procedure_name> ( <args> )\n\n SET CONSTRAINTS { <constraint_names> | ALL } { IMMEDIATE | DEFERRED }\n\n Details on CREATE CONSTRAINT TRIGGER:\n\n <constraint_name>\n\n Can be a usual identifier or \"\" for unnamed\n constraints. Since the same constraint can result in\n multiple pg_trigger entries for different tables,\n there's no check for duplicates. This is the name to\n later identify constraints in SET CONSTRAINTS.\n\n FROM <referencing_relation_name>\n\n If given, causes that this trigger are automatically\n removed when the referencing relation is dropped.\n This is useful for referential action triggers (like\n ON DELETE CASCADE), which are fired on changes to the\n PK table. Dropping the FK table without removing the\n triggers from the PK table would make it unusable.\n\n [ NOT ] DEFERRABLE\n\n Specifies if the trigger is deferrable or not.\n Defaults to NOT DEFERRABLE if INITIALLY is IMMEDIATE.\n Defaults to DEFERRABLE if INITIALLY is DEFERRED.\n\n INITIALLY { IMMEDIATE | DEFERRED }\n\n Specifies the deferred state of the trigger at\n session start. Defaults to IMMEDIATE.\n\n <procedure_name> ( <args> )\n\n The usual trigger procedure definition.\n\n The trigger itself in pg_trigger is created with a tgname\n of RI_ConstraintTrigger_<newoid>, which should be unique\n enough.\n\n Details on SET CONSTRAINTS:\n\n <constraint_names>\n\n A comma separated list of constraint identifiers. An\n attempt to set named constraints to DEFERRED where at\n least one of the pg_trigger entries with this name\n isn't deferrable raises an ERROR.\n\n Using ALL with DEFERRED sets all deferrable\n constraint triggers (named and unnamed) to deferred,\n leaving not deferrable ones immediate.\n\n If SET CONSTRAINTS is used outside of a transaction block\n (BEGIN/COMMIT), it sets the default behaviour on session\n level. All constraint triggers begin each transaction\n (explicit block or implicit single statement) in these\n states.\n\n All AFTER ROW triggers (regular ones) are treated like\n IMMEDIATE constraint triggers now so they are fired at\n the end of the entire statement instead of during it.\n This interfered with the funny_dup17 test in the\n regression suite which is commented out now.\n\n Trigger events for deferred triggers are condensed during\n a transaction. That means, that executing multiple\n UPDATE commands affecting the same row would finally\n invoke only one trigger call which receives the original\n tuple (before BEGIN) as OLD and the final tuple (after\n last UPDATE) as NEW. Similar INSERT/DELETE of same row\n will fire no trigger at all.\n\n There are checks done if IMMEDIATE or BEFORE ROW triggers\n have already been fired when a row is touched multiple\n times in the same transaction. In that case, an error is\n raised because this might violate referential integrity.\n\n Needless to say that COMMIT causes an implicit SET\n CONSTRAINTS ALL IMMEDIATE. All deferred triggers are run\n then, so COMMIT could raise trigger generated errors now!\n\n Next we need:\n\n 1. Generic trigger procs that are argument driven. I'll make\n a separate thread for this topic.\n\n 2. 
Support in CREATE TABLE that issues the appropriate\n CREATE CONSTRAINT TRIGGER statements for FOREIGN KEY in\n the same manner as CREATE INDEX for PRIMARY KEY is done.\n This must wait until we have an accepted call interface\n for the generic trigger procs from 1..\n\n 3. Support for pg_dump to emit the correct CREATE CONSTRAINT\n TRIGGER statements. Who wants to pick up this topic?\n\n 4. Add the ability to swap out huge amounts of deferred\n trigger events to disk (actually I'm collecting them in\n memory - so large transactions affecting millions of rows\n of a table where triggers are defined are likely to blow\n up the backend). This is my topic - sorry.\n\n 5. Write a regression test for the new FOREIGN KEY support.\n Surely an important thing but one of the last steps after\n anything else works properly.\n\n 6. Remove the \"not supported yet\" note for FOREIGN KEY from\n the docs along with correcting to the full syntax\n supported finally :-)\n\n Hmmmm - the more I work on it the longer the TODO becomes.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 19:10:31 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "RI status report #2" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> \n> Man, that's a heap of additions.\n\n Only the top of the iceberg :-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n", "msg_date": "Wed, 29 Sep 1999 19:30:50 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "\nMan, that's a heap of additions.\n\n> ATTENTION: catalog changes - initdb required!\n> \n> General support for deferred constraint triggers is finished\n> and committed to CURRENT tree.\n> \n> \n> Implemented so far:\n> \n> CREATE CONSTRAINT TRIGGER <constraint_name>\n> AFTER <event> ON <relation_name>\n> [ FROM <referencing_relation_name> ]\n> [ [ NOT ] DEFERRABLE ]\n> [ INITIALLY { IMMEDIATE | DEFERRED } ]\n> FOR EACH ROW EXECUTE PROCEDURE <procedure_name> ( <args> )\n> \n> SET CONSTRAINTS { <constraint_names> | ALL } { IMMEDIATE | DEFERRED }\n> \n> Details on CREATE CONSTRAINT TRIGGER:\n> \n> <constraint_name>\n> \n> Can be a usual identifier or \"\" for unnamed\n> constraints. Since the same constraint can result in\n> multiple pg_trigger entries for different tables,\n> there's no check for duplicates. This is the name to\n> later identify constraints in SET CONSTRAINTS.\n> \n> FROM <referencing_relation_name>\n> \n> If given, causes that this trigger are automatically\n> removed when the referencing relation is dropped.\n> This is useful for referential action triggers (like\n> ON DELETE CASCADE), which are fired on changes to the\n> PK table. 
Dropping the FK table without removing the\n> triggers from the PK table would make it unusable.\n> \n> [ NOT ] DEFERRABLE\n> \n> Specifies if the trigger is deferrable or not.\n> Defaults to NOT DEFERRABLE if INITIALLY is IMMEDIATE.\n> Defaults to DEFERRABLE if INITIALLY is DEFERRED.\n> \n> INITIALLY { IMMEDIATE | DEFERRED }\n> \n> Specifies the deferred state of the trigger at\n> session start. Defaults to IMMEDIATE.\n> \n> <procedure_name> ( <args> )\n> \n> The usual trigger procedure definition.\n> \n> The trigger itself in pg_trigger is created with a tgname\n> of RI_ConstraintTrigger_<newoid>, which should be unique\n> enough.\n> \n> Details on SET CONSTRAINTS:\n> \n> <constraint_names>\n> \n> A comma separated list of constraint identifiers. An\n> attempt to set named constraints to DEFERRED where at\n> least one of the pg_trigger entries with this name\n> isn't deferrable raises an ERROR.\n> \n> Using ALL with DEFERRED sets all deferrable\n> constraint triggers (named and unnamed) to deferred,\n> leaving not deferrable ones immediate.\n> \n> If SET CONSTRAINTS is used outside of a transaction block\n> (BEGIN/COMMIT), it sets the default behaviour on session\n> level. All constraint triggers begin each transaction\n> (explicit block or implicit single statement) in these\n> states.\n> \n> All AFTER ROW triggers (regular ones) are treated like\n> IMMEDIATE constraint triggers now so they are fired at\n> the end of the entire statement instead of during it.\n> This interfered with the funny_dup17 test in the\n> regression suite which is commented out now.\n> \n> Trigger events for deferred triggers are condensed during\n> a transaction. That means, that executing multiple\n> UPDATE commands affecting the same row would finally\n> invoke only one trigger call which receives the original\n> tuple (before BEGIN) as OLD and the final tuple (after\n> last UPDATE) as NEW. Similar INSERT/DELETE of same row\n> will fire no trigger at all.\n> \n> There are checks done if IMMEDIATE or BEFORE ROW triggers\n> have already been fired when a row is touched multiple\n> times in the same transaction. In that case, an error is\n> raised because this might violate referential integrity.\n> \n> Needless to say that COMMIT causes an implicit SET\n> CONSTRAINTS ALL IMMEDIATE. All deferred triggers are run\n> then, so COMMIT could raise trigger generated errors now!\n> \n> Next we need:\n> \n> 1. Generic trigger procs that are argument driven. I'll make\n> a separate thread for this topic.\n> \n> 2. Support in CREATE TABLE that issues the appropriate\n> CREATE CONSTRAINT TRIGGER statements for FOREIGN KEY in\n> the same manner as CREATE INDEX for PRIMARY KEY is done.\n> This must wait until we have an accepted call interface\n> for the generic trigger procs from 1..\n> \n> 3. Support for pg_dump to emit the correct CREATE CONSTRAINT\n> TRIGGER statements. Who wants to pick up this topic?\n> \n> 4. Add the ability to swap out huge amounts of deferred\n> trigger events to disk (actually I'm collecting them in\n> memory - so large transactions affecting millions of rows\n> of a table where triggers are defined are likely to blow\n> up the backend). This is my topic - sorry.\n> \n> 5. Write a regression test for the new FOREIGN KEY support.\n> Surely an important thing but one of the last steps after\n> anything else works properly.\n> \n> 6. 
Remove the \"not supported yet\" note for FOREIGN KEY from\n> the docs along with correcting to the full syntax\n> supported finally :-)\n> \n> Hmmmm - the more I work on it the longer the TODO becomes.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 13:30:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > Man, that's a heap of additions.\n> \n> Only the top of the iceberg :-)\n\nYikes. I was just talking to Thomas Lockhart by phone, and was saying\nthat I thought 6.6 would be a small, incremental release after the\nchanges in 6.5.*. Obviously, 6.6 is going to be as full-featured as\nearlier releases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 13:51:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": ">\n> > Bruce Momjian wrote:\n> > >\n> > >\n> > > Man, that's a heap of additions.\n> >\n> > Only the top of the iceberg :-)\n>\n> Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> that I thought 6.6 would be a small, incremental release after the\n> changes in 6.5.*. Obviously, 6.6 is going to be as full-featured as\n> earlier releases.\n\n Wasn't it YOU who asked ME to become active again? Your\n above thought is a little silly if ya really wanted to\n interrupt my sleep mode ;-)\n\n OTOH Vadim is close to WAL and I see activity on\n (outer/left/right?) join support too. Maybe there wouldn't be\n a v6.6 at all.\n\n WAL is IMHO the only real reason not to choose PostgreSQL for\n production. Beeing able to recover (roll forward) from a\n backup using transaction log is a required feature for\n mission critical data. Thus, having all this (WAL, FOREIGN\n KEY etc.) is a greater step forward that that between v6.4\n and v6.5.\n\n If all that really materializes in our next release, it's\n time to number it v7.0 - no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 21:02:28 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "Bruce Momjian wrote:\n \n> Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> that I thought 6.6 would be a small, incremental release after the\n> changes in 6.5.*. Obviously, 6.6 is going to be as full-featured as\n> earlier releases.\n\nAnd that surprises you?? 
Even in the short two years I've used\nPostgreSQL, I have grown accustomed to major changes every major\nversion. First there was the NOT NULL (and scads of other) features to\ncompel me to go from 6.1.1 to 6.2, then there were subselects (and\nvastly improved documentation) to get me up to 6.3, then there were\nviews, rules, and the new protocol to make 6.4 a must-cc event, then\nMVCC.... And now I'm maintaining RPM's so I can stay on the released\nbleeding edge without breaking my server policies. Whoda thunk it?\n\nOf course, my measly list above doesn't do the development justice -- as\none look at the changelog will show.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 29 Sep 1999 15:08:38 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "> >\n> > > Bruce Momjian wrote:\n> > > >\n> > > >\n> > > > Man, that's a heap of additions.\n> > >\n> > > Only the top of the iceberg :-)\n> >\n> > Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> > that I thought 6.6 would be a small, incremental release after the\n> > changes in 6.5.*. Obviously, 6.6 is going to be as full-featured as\n> > earlier releases.\n> \n> Wasn't it YOU who asked ME to become active again? Your\n> above thought is a little silly if ya really wanted to\n> interrupt my sleep mode ;-)\n\nI specialize in silly. :-)\n\nFull-featured is good, much better than small, incremental.\n\nI certainly interrupted your sleep mode.\n\n> \n> OTOH Vadim is close to WAL and I see activity on\n> (outer/left/right?) join support too. Maybe there wouldn't be\n> a v6.6 at all.\n\nDo I read 7.0 in there?\n\n> WAL is IMHO the only real reason not to choose PostgreSQL for\n> production. Beeing able to recover (roll forward) from a\n> backup using transaction log is a required feature for\n> mission critical data. Thus, having all this (WAL, FOREIGN\n> KEY etc.) is a greater step forward that that between v6.4\n> and v6.5.\n> \n> If all that really materializes in our next release, it's\n> time to number it v7.0 - no?\n\nYes, I am starting to see 7.0 too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 17:03:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> > that I thought 6.6 would be a small, incremental release after the\n> > changes in 6.5.*. Obviously, 6.6 is going to be as full-featured as\n> > earlier releases.\n> \n> And that surprises you?? Even in the short two years I've used\n> PostgreSQL, I have grown accustomed to major changes every major\n> version. First there was the NOT NULL (and scads of other) features to\n> compel me to go from 6.1.1 to 6.2, then there were subselects (and\n> vastly improved documentation) to get me up to 6.3, then there were\n> views, rules, and the new protocol to make 6.4 a must-cc event, then\n> MVCC.... And now I'm maintaining RPM's so I can stay on the released\n> bleeding edge without breaking my server policies. Whoda thunk it?\n> \n> Of course, my measly list above doesn't do the development justice -- as\n> one look at the changelog will show.\n\nYes, it still shocks me. 
I was telling Thomas, every release I think,\nman, this is so great, no reason anyone should be using a prior release.\nAnd then the next release is the same thing.\n\nThe basic issue for me is that each of the new features requsted looks\nso hard, I can't imagine how it could be done, but by release time, it\ndoes get done. Amazing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 17:06:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "Bruce Momjian wrote:\n> Lamr Owen wrote:\n> > And that surprises you?? Even in the short two years I've used\n> > PostgreSQL, I have grown accustomed to major changes every major\n[snip]\n \n> Yes, it still shocks me. I was telling Thomas, every release I think,\n> man, this is so great, no reason anyone should be using a prior release.\n> And then the next release is the same thing.\n> \n> The basic issue for me is that each of the new features requsted looks\n> so hard, I can't imagine how it could be done, but by release time, it\n> does get done. Amazing.\n\nI find the enthusiasm of this particular development quite infectious. \nWhile I'm only doing a very small part in packaging RPM's (thus far), I\nfeel quite good about it (it conjures back the same feeling that I had\nat 15 years old when my Z80 disassembler first correctly disassembled\nthe opcodes of three-quarters of the instruction set -- no operands at\nthat time, but the opcode logic was WORKING... It felt uniquely\ngratifying). \n\nJust reading the web page and the release notes doesn't do this\ndevelopment justice -- until I subscribed to this hackers list, I had no\nidea that PostgreSQL development was so dynamic.\n\nThis beats following the linux kernel development, IMO.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 30 Sep 1999 11:46:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "On Wed, 29 Sep 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> > > \n> > > \n> > > Man, that's a heap of additions.\n> > \n> > Only the top of the iceberg :-)\n> \n> Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> that I thought 6.6 would be a small, incremental release after the\n> changes in 6.5.*. Obviously, 6.6 is going to be as full-featured as\n> earlier releases.\n\nHave we ever had a \"small, incremental release\", other then the\nminor-minor releases? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 3 Oct 1999 16:54:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "On Wed, 29 Sep 1999, Jan Wieck wrote:\n\n> >\n> > > Bruce Momjian wrote:\n> > > >\n> > > >\n> > > > Man, that's a heap of additions.\n> > >\n> > > Only the top of the iceberg :-)\n> >\n> > Yikes. I was just talking to Thomas Lockhart by phone, and was saying\n> > that I thought 6.6 would be a small, incremental release after the\n> > changes in 6.5.*. 
Obviously, 6.6 is going to be as full-featured as\n> > earlier releases.\n> \n> Wasn't it YOU who asked ME to become active again? Your\n> above thought is a little silly if ya really wanted to\n> interrupt my sleep mode ;-)\n> \n> OTOH Vadim is close to WAL and I see activity on\n> (outer/left/right?) join support too. Maybe there wouldn't be\n> a v6.6 at all.\n> \n> WAL is IMHO the only real reason not to choose PostgreSQL for\n> production. Beeing able to recover (roll forward) from a\n> backup using transaction log is a required feature for\n> mission critical data. Thus, having all this (WAL, FOREIGN\n> KEY etc.) is a greater step forward that that between v6.4\n> and v6.5.\n> \n> If all that really materializes in our next release, it's\n> time to number it v7.0 - no?\n\nI was kinda starting to wonder that one myself... my feeling: its time\nguys.\n\nWe're still in no more rush to get it out the door...sometime 1st quarter\nof year 2000, but with everything that has changed up until now, I think\nits time we up'd the major version number ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 3 Oct 1999 16:56:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" }, { "msg_contents": "On Wed, 29 Sep 1999, Bruce Momjian wrote:\n\n> Yes, I am starting to see 7.0 too.\n\none motion, two second's...anyone disagree? *lifts mallet* \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 3 Oct 1999 16:57:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #2" } ]
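An illustrative sketch of the two statements described in the thread above. The table, constraint, and procedure names (emp, dept, emp_deptno_fk, check_emp_deptno) are invented for the example and are not objects shipped with the patch; only the statement syntax itself comes from the proposal.

    -- Hypothetical deferrable constraint trigger on the referencing table;
    -- check_emp_deptno is an invented trigger procedure.
    CREATE CONSTRAINT TRIGGER emp_deptno_fk
        AFTER INSERT ON emp
        DEFERRABLE INITIALLY IMMEDIATE
        FOR EACH ROW EXECUTE PROCEDURE
        check_emp_deptno ('emp_deptno_fk', 'emp', 'dept');

    -- Defer the check for one transaction; the trigger then runs at COMMIT,
    -- which is why COMMIT itself can now raise a trigger-generated error.
    BEGIN;
    SET CONSTRAINTS emp_deptno_fk DEFERRED;
    INSERT INTO emp VALUES (7839, 'KING', 50);    -- dept 50 does not exist yet
    INSERT INTO dept VALUES (50, 'RESEARCH');
    COMMIT;

Issued outside a transaction block, the same SET CONSTRAINTS statement would instead change the session-level default, as the thread notes.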
[ { "msg_contents": "So co-developers,\n\n let's go.\n\n I think the best is if all RI triggers share the same call\n interface. My favorite for now is this:\n\n RI_FKey_<operation>_<event> ( <args> )\n\n Where <operation> is one of check, cascade, restrict, setnull\n or setdefault and where <event> is one of ins, upd or del.\n Thus, the trigger proc to check foreign key on insert would\n be RI_FKey_check_ins() and the one to do cascaded deletes\n becomes RI_FKey_cascade_del().\n\n The <args> is allways '<constraint_name>', '<FK_table_name>',\n '<PK_table_name>' and a variable length list of\n '<FK_attribute_name>', '<PK_attribute_name>' pairs. In the\n case of an unnamed constraint, the <constraint_name> is given\n as '(unnamed)'. It is up to CREATE TABLE parsing/utility to\n build these arguments properly.\n\n All the procs should use prepared and saved SPI plans\n wherever possible.\n\n Any combination of attributes in a table referenced to by one\n or more FOREIGN KEY ... REFERENCES constraint of another\n table shall have a UNIQUE and NOT NULL constraint. I don't\n think that it's easy to assure this and to avoid that the\n underlying unique index isn't dropped later. First it shall\n be enough to document and leave it up to the\n creator/maintainer of the database schema. So we assume here\n that any PK is unique and cannot contain NULL's.\n\n The FOREIGN KEY itself might allow NULL's, but since we\n implement only MATCH FULL right now, either all or none of\n the FK attributes may contain NULL. We'll check this in\n RI_FKey_check_ins() and RI_FKey_check_upd().\n\n The behaviour of the procs in detail (this is what ya shall\n write - they don't exist):\n\n RI_FKey_check_ins()\n\n Implements \"FOREIGN KEY ... REFERENCES ...\" at insert\n time.\n\n Fired AFTER INSERT on FK table.\n\n First off, either all or none of the FK attributes in NEW\n must be NULL. If all are NULL, nothing is checked and\n operation goes through. Otherwise it raises an ERROR if\n the given key isn't present in the PK table.\n\n RI_FKey_check_upd()\n\n Implements \"FOREIGN KEY ... REFERENCES ...\" at update\n time.\n\n Fired AFTER UPDATE on FK table.\n\n If all FK attributes in OLD and NEW are the same, nothing\n is done. Otherwise, the operation is the same as for\n RI_FKey_check_ins().\n\n RI_FKey_cascade_del()\n\n Implements \"FOREIGN KEY ... ON DELETE CASCADE\".\n\n Fired AFTER DELETE on PK table.\n\n It deletes all occurences of the deleted key in the FK\n table.\n\n RI_FKey_cascade_upd()\n\n Implements \"FOREIGN KEY ... ON UPDATE CASCADE\".\n\n Fired AFTER UPDATE on PK table.\n\n Nothing happens if OLD and NEW keys in PK are identical.\n Otherwise it updates all occurences of the OLD key to the\n NEW key in the FK table.\n\n RI_FKey_restrict_del()\n\n Implements \"FOREIGN KEY ... ON DELETE RESTRICT\".\n\n Fired AFTER DELETE on PK table.\n\n Checks if the deleted key is still referenced from the FK\n table and raises an ERROR if so.\n\n RI_FKey_restrict_upd()\n\n Fired AFTER UPDATE on PK table.\n\n Nothing happens if OLD and NEW keys in PK are identical.\n Otherwise checks if the OLD key is still referenced from\n the FK table and raises an ERROR if so.\n\n RI_FKey_setnull_del()\n\n Implements \"FOREIGN KEY ... ON DELETE SET NULL\"\n\n Fired AFTER DELETE on PK table.\n\n Updates all occurences of the OLD key to NULL values in\n the FK table.\n\n RI_FKey_setnull_upd()\n\n Implements \"FOREIGN KEY ... 
ON UPDATE SET NULL\"\n\n Fired AFTER UPDATE on PK table.\n\n Nothing happens if OLD and NEW keys in PK are identical.\n Otherwise updates all occurences of the OLD key to NULL\n values in the FK table.\n\n RI_FKey_setdefault_del()\n\n Implements \"FOREIGN KEY ... ON DELETE SET DEFAULT\"\n\n Fired AFTER DELETE on PK table.\n\n Updates all occurences of the OLD key in FK table to the\n default values defined in the schema of the FK table.\n\n RI_FKey_setdefault_upd()\n\n Implements \"FOREIGN KEY ... ON UPDATE SET DEFAULT\"\n\n Fired AFTER UPDATE on PK table.\n\n Nothing happens if OLD and NEW keys in PK are identical.\n Otherwise updates all occurences of the OLD key in FK\n table to the default values defined in the schema of the\n FK table.\n\n This all is the behaviour of FOREIGN KEY ... MATCH FULL\n according to the SQL3 standard - as I understood it. I know\n that the above trigger procs aren't easy to implement. But\n after all, many of the referential action ones look very\n similar to each other.\n\n One general thing required is IMHO some hashtable(s) living\n in the cache context where any trigger once fired can cache\n information needed again and again (like the saved plans,\n functions for equality checks on OLD vs. NEW, etc.).\n\n The bazar is open - come in.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 20:47:18 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "RI generic trigger procs" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Any combination of attributes in a table referenced to by one\n> or more FOREIGN KEY ... REFERENCES constraint of another\n> table shall have a UNIQUE and NOT NULL constraint.\n...\n> So we assume here that any PK is unique and cannot contain NULL's.\n\nWhat is the reasoning behind requiring this ?\n\nI can't see anything that would mandate this -\n * NULLs are'nt equal anyway and ar even disregarded under your \n current description. \n Or are you just protecting yourself against the case where the \n foreign key field is set to null - could this be handled the \n same as deleting for cascaded constraints ?\n * UNIQUE would save us the check for existing other possible \n referenced values - is this mandated by SQL spec ?\n\n-------------\nHannu\n", "msg_date": "Wed, 29 Sep 1999 23:37:50 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI generic trigger procs" }, { "msg_contents": "Hannu Krosing wrote:\n>\n> Jan Wieck wrote:\n> >\n> > Any combination of attributes in a table referenced to by one\n> > or more FOREIGN KEY ... 
REFERENCES constraint of another\n> > table shall have a UNIQUE and NOT NULL constraint.\n> ...\n> > So we assume here that any PK is unique and cannot contain NULL's.\n>\n> What is the reasoning behind requiring this ?\n>\n> I can't see anything that would mandate this -\n> * NULLs are'nt equal anyway and ar even disregarded under your\n> current description.\n> Or are you just protecting yourself against the case where the\n> foreign key field is set to null - could this be handled the\n> same as deleting for cascaded constraints ?\n\n MATCH FULL (as I planned to implement for now) mandates that\n either none or all fields of foreign key are NULL.\n\n Of course, we could handle the UPDATE of referenced key (PK)\n from none NULL to some/all NULL as if the operation was\n DELETE. And similar treating UPDATE/DELETE where OLD had\n NULL(s) as nothing, since according to MATCH FULL absolutely\n no reference can exist.\n\n When looking ahead it's better to add one more argument to\n the trigger proc's specifying the MATCH type. That way we\n could add support for MATCH PARTIAL by only working on the\n trigger procs with no need to touch anything else in the\n system. This will be the 4th argument before the attribute\n name pairs and containts either 'FULL' or 'PARTIAL'.\n\n Support for MATCH PARTIAL is alot more complicated though -\n thus I left it for later. Let's see how fast we could get\n this all to work and then decide if it's something to include\n in this or one of the next releases.\n\n> * UNIQUE would save us the check for existing other possible\n> referenced values - is this mandated by SQL spec ?\n\n SQL3 specification X3H2-93-359 and MUN-003\n\n 11.9 <referential constraint definition>\n\n 2) Case:\n\n a) If the <referenced table and columns> specifies a <reference\n column list>, then the set of column names of that <refer-\n ence column list> shall be equal to the set of column names\n in the unique columns of a unique constraint of the refer-\n enced table. Let referenced columns be the column or columns\n identified by that <reference column list> and let refer-\n enced column be one such column. Each referenced column shall\n identify a column of the referenced table and the same column\n shall not be identified more than once.\n\n b) If the <referenced table and columns> does not specify a\n <reference column list>, then the table descriptor of the\n referenced table shall include a unique constraint that spec-\n ifies PRIMARY KEY. Let referenced columns be the column or\n columns identified by the unique columns in that unique con-\n straint and let referenced column be one such column. The\n <referenced table and columns> shall be considered to implic-\n itly specify a <reference column list> that is identical to\n that <unique column list>.\n\n So the UNIQUE constraint on the referenced columns of the\n referenced table is mandatory.\n\n And the spec also tells that the UNIQUE constrain on the\n referenced columns shall NOT be deferrable, so our (mis)usage\n of a unique index for uniqueness doesn't break the specs\n here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 23:43:16 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RI generic trigger procs" } ]
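A sketch of how the FOREIGN KEY shorthand could expand into these trigger procs under the calling convention proposed above (constraint name, FK table, PK table, match type, then FK/PK attribute pairs). The customers/orders tables, their columns, and the constraint name orders_custid_fk are invented for the example; only the RI_FKey_* names and the argument order come from the proposal.

    -- Referenced (PK) table: the referenced column must carry a UNIQUE or
    -- PRIMARY KEY constraint, per the SQL3 11.9 rules quoted above.
    CREATE TABLE customers (custid integer PRIMARY KEY, name text);

    -- Referencing (FK) table; the shorthand CREATE TABLE should later accept:
    --   custid integer REFERENCES customers (custid) ON DELETE CASCADE
    CREATE TABLE orders (orderid integer, custid integer);

    -- Roughly what that shorthand would have to emit for MATCH FULL:
    CREATE CONSTRAINT TRIGGER orders_custid_fk
        AFTER INSERT ON orders
        FOR EACH ROW EXECUTE PROCEDURE
        RI_FKey_check_ins ('orders_custid_fk', 'orders', 'customers',
                           'FULL', 'custid', 'custid');

    CREATE CONSTRAINT TRIGGER orders_custid_fk
        AFTER DELETE ON customers FROM orders
        FOR EACH ROW EXECUTE PROCEDURE
        RI_FKey_cascade_del ('orders_custid_fk', 'orders', 'customers',
                             'FULL', 'custid', 'custid');

An AFTER UPDATE trigger on orders calling RI_FKey_check_upd, plus an AFTER UPDATE trigger on customers calling whichever ON UPDATE action was chosen, would complete the set; the FROM orders clause on the PK-side trigger is what lets it be dropped automatically when the FK table is dropped.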
[ { "msg_contents": "I have, after testing by myself and others, verified the integrity of\nthe upgrade procedure I released with the 6.5.1-0.7lo and 6.5.2-0.2lo\nRPM sets -- after squashing some bugs and misconceptions (thanks\nprimarily goes to Dale at RedHat, as he saw what I didn't -- then I\nrelayed how to fix it).\n\nTHEREFORE, I am announcing NON-BETA UPGRADING RPM's for RedHat Linux 6.0\nand 5.2, available from http://www.ramifordistat.net. These RPMs carry\na release of 1, no lo appended. Upgrading from a previous release of\nPostgreSQL on RedHat Linux is as simple as typing 'rpm -Uvh postgresql*'\n, then reading the file /usr/doc/postgresql-6.5.2/README.rpm to get an\nidea of what you need to do next.\n\nThese RPMS include a fully functional pgaccess 0.98 -- as the 6.5.2\ntarball didn't. Also, Thomas' gram.y patches are incorporated at his\nurging.\n\nIn order to rebuild from the source RPM, you must be running at least\nRPM 3.0.2, and must have a full development environment installed --\nparticularly python-devel, which is not installed by default on RedHat\n6.0. \n\nI will be continuing to make improvements and bug fixes to the\npackaging, as well as keeping up with the latest and greatest PostgreSQL\nreleased version. If there is demand, I am willing to try to package the\nsnapshots, for those bleeding edge testers -- however, it is probably\nmore productive to only package the 'official' PostgreSQL betas. I am,\nas always, open to suggestions. \n\nI have, in this line, reorganized my postgres RPM information on\nramifordistat.net to reflect dual development tracks, with a released\nnon-beta rpm always available in parallel to the beta rpms. If Marc and\nthe rest of the group want to place the non-beta rpms on\nftp.postgresql.org, be my guest.\n\nMany thanks to those who, in this list particularly, who have helped\nwith testing these (and Thomas') RPMS and have provided patches. Most\nwere incorporated by me or had already by incorporated by the fine folks\nat RedHat (I particularly liked the init script modification to remove\nstale locks in /tmp). Many thanks to Thomas for starting this snowball\nrolling and for allowing me to be flattened by it (;-D). Very many\nthanks to Oliver Elphick, as he's already been down a similar road with\nthe Debian packages -- I learned alot from his work, and am using\nmodified copies of two of his scripts (one of which is a modified\npg_dumpall). And many many thanks to the fine folks at RedHat (in\nparticular, Cristian, Jeff and Dale), who helped a great deal in lots of\ndifferent ways!\n\nEnjoy!\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 29 Sep 1999 15:33:07 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Non-beta RPMS for RedHat Linux -- PostgreSQL 6.5.2" }, { "msg_contents": "> THEREFORE, I am announcing NON-BETA UPGRADING RPM's for RedHat Linux 6.0\n> and 5.2, available from http://www.ramifordistat.net.\n\nMarc, we should have these posted at ftp.postgresql.org. Is there a\nway to allow Lamar to deposit files into /pub/RPMS and /pub/SRPMS?\nafaik his web site only allows http transfers, so I don't find it\nconvenient to relay them through a remote machine (one at work). 
My\nhome machine has a narrow pipe, so that won't work either...\n\nI can help with the file transfers once I'm at work, but that isn't\nparticularly timely or convenient.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Sep 1999 05:53:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-beta RPMS for RedHat Linux -- PostgreSQL 6.5.2" } ]
[ { "msg_contents": "\nHow much shared memory do you need?\n\noptions SHMMNI=96 # number of shared memory identifiers default is 32\noptions SHMMAXPGS=20480 # number of pages a shgment can have and the total number of pages? default 1024\n # max seg size is 25meg\noptions SHMSEG=64 # number of segments per process\noptions SEMMNI=40 # number of semaphore identifiers\noptions SEMMNS=240 # number of semaphores in the system\noptions MSGSEG=4096 # max number of message segments is this\n\nWith Postgres set for 512 backends, I can't start it.\nI can run with 128, but at 10240 have been running out during large transactions.\nI am hoping I can get it at 20480.\n\n\n", "msg_date": "Wed, 29 Sep 1999 15:26:44 -0700", "msg_from": "Jason Venner <[email protected]>", "msg_from_op": true, "msg_subject": "shared memory 651, freebsd 2.2.7" }, { "msg_contents": "Jason Venner <[email protected]> writes:\n> How much shared memory do you need?\n\nWhat's probably getting you is the SEMMNS & SEMMNI limits --- we need a\nsemaphore per backend, and an identifier for every 16 semas.\n\nI thought there was a discussion of kernel resource settings\nsomewhere in the documentation, but I'm not sure where offhand.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 19:48:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shared memory 651, freebsd 2.2.7 " }, { "msg_contents": ">\n>\n> How much shared memory do you need?\n>\n> options SHMMNI=96 # number of shared memory identifiers default is 32\n> options SHMMAXPGS=20480 # number of pages a shgment can have and the total number of pages? default 1024\n> # max seg size is 25meg\n> options SHMSEG=64 # number of segments per process\n> options SEMMNI=40 # number of semaphore identifiers\n> options SEMMNS=240 # number of semaphores in the system\n> options MSGSEG=4096 # max number of message segments is this\n>\n> With Postgres set for 512 backends, I can't start it.\n> I can run with 128, but at 10240 have been running out during large transactions.\n> I am hoping I can get it at 20480.\n\n What do you need 512 backends for? Such a high concurrency\n doesn't make things better (locking, spin locks etc.).\n\n Would eventually some kind of a middle tear application with\n a limited number of work processes which connect to the\n database help?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 30 Sep 1999 03:16:22 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shared memory 651, freebsd 2.2.7" }, { "msg_contents": "At 03:16 AM 9/30/99 +0200, Jan Wieck wrote:\n\n> Would eventually some kind of a middle tear application with\n> a limited number of work processes which connect to the\n> database help?\n\nThis is exactly how some web servers, i.e. AOLServer (which\nI use) help to throttle traffic. 
This server lets you\nlimit the number of pooled backends, when the pool's exhausted,\nthe web server blocks new threads until there's a free back\nend.\n\nThis means you can configure your web server to service the\nnumber of concurrent back-ends you think it can deal with,\nbased on its hardware configuration and the load placed on\nit due to your specific queries and database contents, and\nweb load. \n\nAOLServer's been doing this for at least five years. For\nweb work, it's seems to be the right place to do it, because\nthere are other things that impact the load the server can\ndeal with. The db access throttle is just one of various\nthrottles one can imagine when trying to tune the entire\nsite.\n\nOf course, this is web specific in detail. Not necessarily\nin concept, though...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 29 Sep 1999 20:52:23 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shared memory 651, freebsd 2.2.7" } ]
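Working the rule of thumb quoted above through the original poster's numbers (the exact amount of headroom needed is a guess here; the authoritative constants live in the backend sources and kernel headers): 512 backends need at least one semaphore each, so SEMMNS must be at least 512 — well above the configured 240, which is why the postmaster refuses to start — and roughly one semaphore identifier per 16 semaphores, so SEMMNI should be at least 512 / 16 = 32 plus some slack. 128 backends fit inside SEMMNS=240, which matches the behaviour reported. A plausible, untested kernel config for 512 backends would therefore be along the lines of:

    options SEMMNI=48     # assumption: ceil(512/16) = 32, plus headroom
    options SEMMNS=560    # assumption: one semaphore per backend, plus headroom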
[ { "msg_contents": "Hi,\n\nI have made a small tool called \"pgbench,\" that may be useful for\nsome stress testings and performance measurements for PostgreSQL.\npgbench can be obtained from:\n\n\tftp://ftp.sra.co.jp/pub/cmd/postgres/pgbench/\n\nPgbench runs small transactions similar to TPC-B concurrently and\nreports the number of transactions actually done per second (tps).\nPgbench uses the asynchronous functions of libpq to simulate\nconcurrent clients. Example outputs from pgbench are shown below:\n\nnumber of clients: 4\nnumber of transactions per client: 100\nnumber of processed transactions: 400/400\ntps = 19.875015(including connections establishing)\ntps = 20.098827(excluding connections establishing)\n\n(above result was reported on my PowerBook with 603e CPU(180MHz), 80MB\nmem running PostgreSQL 6.5.2 with -F option, and Linux 2.2.1 kernel)\n\nPgbench does not require any special libraries other than libpq. It\ncomes with a configure script and should be very easy to build.\n\n*CAUTION*\npgbench will blow away tables named accounts, branches, history and\ntellers. It is best to create a new database for pgbench before\nrunning it.\n\nBTW, the greatest tps I have ever seen was around 260 on a Linux box\nrunning RedHat 6.0, having 2 Pentiumn III 600MHz CPUs and 512MB mem.\n\nEnjoy,\n---\nTatsuo Ishii\n", "msg_date": "Thu, 30 Sep 1999 10:15:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Announcement: pgbench-1.1 released" }, { "msg_contents": "Hi,\n\n With many cooperators, I have made a *bash built-in command* for\nPostgreSQL called \"pgbash\".\n\n The pgbash is the system which offers the 'direct SQL'/'embedded\nSQL' interface for PostgreSQL by being included in the bash-2.x \nshell. \n\n\nFeatures of pgbash\n-------------------\n\n1.The pgbash has a function which is equivalent to psql except for\n the interactive input processing function. \n\n2.It is possible that pgbash carries out the interactive input\n processing using the hysteresis editing function ( history, !, \n fc command ) of bash.\n\n3.An output of retrieval result and database information of pgbash \n uses PSprint() which improved PQprint(). By PSprint(), it is \n possible to output by plain table type, plain table + outer frame \n type and HTML table type. And, it is possible to display NULL \n value string(like '-NULL-') and bit zero string(like '-0-').\n\n4.It is possible that pgbash manipulates multiple databases using\n CONNECT, DISCONNECT and SET CONNECTION (or -d option ).\n\n5.The pgbash has a function which substitutes the retrieval result \n for the shell variable using FETCH INTO statement. \n\n6.It is possible to set CGI mode. In CGI mode, the pgbash switches \n the output to HTML, and read the datat by GET/POST method, and \n read the data of HTTP_COOKIE.\n\n7.The pgbash sets \"error code\", \"error message\", \"number of tuples\",\n etc to the shell variable. Therefore, it is possible to know the\n condition after the SQL execution. \n\n\n Details is as follows. \n http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\n# I am very glad, if many people will use the pgbash.\n\n--\nRegards.\n\nSAKAIDA Masaaki -- Osaka, Japan \n# Sorry, I am not good at English. 
\n\n", "msg_date": "Fri, 01 Oct 1999 17:24:57 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": false, "msg_subject": "pgbash-1.1.1 release" }, { "msg_contents": "That's really cool !\nI just install and played a little bit.\nI found a minor problem :\nI have to connect to any database to issue\nexec_sql -l database\nI have no my personal database\nHere is a log:\n\nbash-2.03$ exec_sql -l database\n(-402)FATAL 1: Database megera does not exist in pg_database\nbash-2.03$ exec_sql \"connect to discovery\"\n# PostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n# CONNECT TO discovery:5432 AS discovery USER megera\n\nbash-2.03$ exec_sql -l database\n# Databases list\n\ndatname |datdba|encoding|datpath \n---------+------+--------+---------\ntemplate1| 505| 16|template1\napod | 11| 16|apod \n\nI don't understand this requirements just to list all databases\n\n\tRegards,\n\n\t\t\tOleg\n\n\nOn Fri, 1 Oct 1999, SAKAIDA wrote:\n\n> Date: Fri, 01 Oct 1999 17:24:57 +0900\n> From: SAKAIDA <[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Subject: [HACKERS] pgbash-1.1.1 release\n> \n> Hi,\n> \n> With many cooperators, I have made a *bash built-in command* for\n> PostgreSQL called \"pgbash\".\n> \n> The pgbash is the system which offers the 'direct SQL'/'embedded\n> SQL' interface for PostgreSQL by being included in the bash-2.x \n> shell. \n> \n> \n> Features of pgbash\n> -------------------\n> \n> 1.The pgbash has a function which is equivalent to psql except for\n> the interactive input processing function. \n> \n> 2.It is possible that pgbash carries out the interactive input\n> processing using the hysteresis editing function ( history, !, \n> fc command ) of bash.\n> \n> 3.An output of retrieval result and database information of pgbash \n> uses PSprint() which improved PQprint(). By PSprint(), it is \n> possible to output by plain table type, plain table + outer frame \n> type and HTML table type. And, it is possible to display NULL \n> value string(like '-NULL-') and bit zero string(like '-0-').\n> \n> 4.It is possible that pgbash manipulates multiple databases using\n> CONNECT, DISCONNECT and SET CONNECTION (or -d option ).\n> \n> 5.The pgbash has a function which substitutes the retrieval result \n> for the shell variable using FETCH INTO statement. \n> \n> 6.It is possible to set CGI mode. In CGI mode, the pgbash switches \n> the output to HTML, and read the datat by GET/POST method, and \n> read the data of HTTP_COOKIE.\n> \n> 7.The pgbash sets \"error code\", \"error message\", \"number of tuples\",\n> etc to the shell variable. Therefore, it is possible to know the\n> condition after the SQL execution. \n> \n> \n> Details is as follows. \n> http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n> \n> # I am very glad, if many people will use the pgbash.\n> \n> --\n> Regards.\n> \n> SAKAIDA Masaaki -- Osaka, Japan\u001b$B!!\u001b(B\n> # Sorry, I am not good at English. 
\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 1 Oct 1999 15:38:11 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgbash-1.1.1 release" }, { "msg_contents": "\nOleg Bartunov <[email protected]> wrote:\n> That's really cool !\n\n Thank you.\n\n> I just install and played a little bit.\n> I found a minor problem :\n> I have to connect to any database to issue\n> exec_sql -l database\n> I have no my personal database\n> Here is a log:\n> \n> bash-2.03$ exec_sql -l database\n> (-402)FATAL 1: Database megera does not exist in pg_database\n\n If CONNECT have not been executed yet, \"CONNECT TO DEFAULT\" \nwill be automatically issued when -l option is executed.\n\n If user name is \"megera\", then \"CONNECT TO DEFAULT\" is equal\nto \"CONNECT TO megera USER megera\". \n\n\n> bash-2.03$ exec_sql \"connect to discovery\"\n> # PostgreSQL 6.5.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1\n> # CONNECT TO discovery:5432 AS discovery USER megera\n> \n> bash-2.03$ exec_sql -l database\n> # Databases list\n> \n> datname |datdba|encoding|datpath \n> ---------+------+--------+---------\n> template1| 505| 16|template1\n> apod | 11| 16|apod \n> \n> I don't understand this requirements just to list all databases\n\n This approach is equal to psql. \n\n# However, I consider that \"CONNECT TO template1\" may be better \n than \"CONNECT TO <User_naame>\" in the case of \"-l database\".\n\n\n--\nRegard.\n\nSAKAIDA Masaaki -- Osaka, Japan\n# Sorry, I am not good at English.\n\n", "msg_date": "Fri, 01 Oct 1999 23:08:10 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgbash-1.1.1 release" }, { "msg_contents": "\nOleg Bartunov <[email protected]> wrote:\n> Sakaida,\n> \n> sorry for bothering you but I didn't find\n> TODO file and wondering what new features (if any)\n> you plan to implement to pgbash.\n\n Please give me your opinion. My plan is not yet the concrete,\nbut a few of my new plan are:\n\n 1. Improvement of function of an HTML output.\n\n exec_sql -H -O \"<TABLE option>\"\n -O \"*:<TH option1>\" -O \"name:<TH option2>\"\n -O \"*:<TD option3>\" -O \"addr:<TD option4>\"\n -F \"name:<font color='0000ff' size=4>%-7.7s</font>\"\n -F \"addr:<fonr color='ff0000'>%s</font>\"\n \"select * from test where ...\"\n\n 2. Snapshot cursor operation. \n exec_sql -c cur \"select * from test where ...\"\n exec_sql \"open cur\"\n exec_sql \"fetch in cur into :r1, :r2\"\n echo \"r1=$r1 r2=$r2\"\n exec_sql \"close cur\"\n # Declare cursor operation is already implemented.\n\n\n> I'm a little boried to enclose SQL statements\n> into double quotes. Is't really need ?\n\n I think that it is necessary, because in the inside of the \ndouble quotes, it can be used $variable and it is possible to\ndescribe the SQL statement in the multiple line. 
\n\n ex) exec_sql \"select * from test \n where name='$DATA' and \n addr='$ADATA'\"\n\n> I'm doubt it's possible, because\n> SQL statement must begins from valid SQL word.\n> \n> If it's impossible to avoid probably pgbash\n> might have a possibility to redefine quote character,\n> so user could use\n> exec_sql [select * from test]\n> Notice, no need to press shift key !\n> I think with a little more effort this could be\n> achieved without explicit redefining of\n> quote character. But this is not a big problem,\n> I could use alias to define [] as a quote characters\n> just as an example:\n> alias sql='exec_sql -Q \"[]\"'\n> and then use sql instead of exec_sql.\n> \n\n I use alias too, but I do not know the method for aliasing \ndouble quotes.\n \n ex)\n alias E='exec_sql'\n\n (Shell program)\n -----------------------------------------------------\n #!/usr/local/bin/bash \n function E { exec_sql \"$@\" }\n -----------------------------------------------------\n (KUBO Takehiro taught me this method.)\n\n\n> Anyway, I'm just speculating about enhancement\n> after playing for several hours with pgbash.\n> I like it !\n \n I hope a new idea.\n\n--\nRegards.\n\nSAKAIDA Masaaki -- Osaka, Japan \n# Sorry, I am not good at English. \n\n", "msg_date": "Sat, 02 Oct 1999 12:39:47 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgbash-1.1.1 release" }, { "msg_contents": "\nOleg Bartunov <[email protected]> wrote:\n> Sakaida,\n> \n> sorry for bothering you but I didn't find\n> TODO file and wondering what new features (if any)\n> you plan to implement to pgbash.\n\n Please give me your opinion. My plan is not yet the concrete,\nbut a few of my new plan are:\n\n 1. Improvement of function of an HTML output.\n\n exec_sql -H -O \"<TABLE option>\"\n -O \"*:<TH option1>\" -O \"name:<TH option2>\"\n -O \"*:<TD option3>\" -O \"addr:<TD option4>\"\n -F \"name:<font color='0000ff' size=4>%-7.7s</font>\"\n -F \"addr:<fonr color='ff0000'>%s</font>\"\n \"select * from test where ...\"\n\n 2. Snapshot cursor operation. \n exec_sql -c cur \"select * from test where ...\"\n exec_sql \"open cur\"\n exec_sql \"fetch in cur into :r1, :r2\"\n echo \"r1=$r1 r2=$r2\"\n exec_sql \"close cur\"\n # Declare cursor operation is already implemented.\n\n\n> I'm a little boried to enclose SQL statements\n> into double quotes. Is't really need ?\n\n I think that it is necessary, because in the inside of the \ndouble quotes, it can be used $variable and it is possible to\ndescribe the SQL statement in the multiple line. \n\n ex) exec_sql \"select * from test \n where name='$DATA' and \n addr='$ADATA'\"\n\n> I'm doubt it's possible, because\n> SQL statement must begins from valid SQL word.\n> \n> If it's impossible to avoid probably pgbash\n> might have a possibility to redefine quote character,\n> so user could use\n> exec_sql [select * from test]\n> Notice, no need to press shift key !\n> I think with a little more effort this could be\n> achieved without explicit redefining of\n> quote character. 
But this is not a big problem,\n> I could use alias to define [] as a quote characters\n> just as an example:\n> alias sql='exec_sql -Q \"[]\"'\n> and then use sql instead of exec_sql.\n> \n\n I use alias too, but I do not know the method for aliasing \ndouble quotes.\n \n ex)\n alias E='exec_sql'\n\n (Shell program)\n -----------------------------------------------------\n #!/usr/local/bin/bash \n function E { exec_sql \"$@\" }\n -----------------------------------------------------\n (KUBO Takehiro taught me this method.)\n\n\n> Anyway, I'm just speculating about enhancement\n> after playing for several hours with pgbash.\n> I like it !\n \n I hope a new idea.\n\n--\nRegards.\n\nSAKAIDA Masaaki -- Osaka, Japan \n# Sorry, I am not good at English. \n\n", "msg_date": "Sat, 02 Oct 1999 13:00:49 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgbash-1.1.1 release" }, { "msg_contents": "\nThis works like a charm. Thanks tons!\n\n;) Clark\n\nOn Fri, 1 Oct 1999, SAKAIDA wrote:\n\n> Hi,\n> \n> With many cooperators, I have made a *bash built-in command* for\n> PostgreSQL called \"pgbash\".\n> \n> The pgbash is the system which offers the 'direct SQL'/'embedded\n> SQL' interface for PostgreSQL by being included in the bash-2.x \n> shell. \n> \n> \n> Features of pgbash\n> -------------------\n> \n> 1.The pgbash has a function which is equivalent to psql except for\n> the interactive input processing function. \n> \n> 2.It is possible that pgbash carries out the interactive input\n> processing using the hysteresis editing function ( history, !, \n> fc command ) of bash.\n> \n> 3.An output of retrieval result and database information of pgbash \n> uses PSprint() which improved PQprint(). By PSprint(), it is \n> possible to output by plain table type, plain table + outer frame \n> type and HTML table type. And, it is possible to display NULL \n> value string(like '-NULL-') and bit zero string(like '-0-').\n> \n> 4.It is possible that pgbash manipulates multiple databases using\n> CONNECT, DISCONNECT and SET CONNECTION (or -d option ).\n> \n> 5.The pgbash has a function which substitutes the retrieval result \n> for the shell variable using FETCH INTO statement. \n> \n> 6.It is possible to set CGI mode. In CGI mode, the pgbash switches \n> the output to HTML, and read the datat by GET/POST method, and \n> read the data of HTTP_COOKIE.\n> \n> 7.The pgbash sets \"error code\", \"error message\", \"number of tuples\",\n> etc to the shell variable. Therefore, it is possible to know the\n> condition after the SQL execution. \n> \n> \n> Details is as follows. \n> http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n> \n> # I am very glad, if many people will use the pgbash.\n> \n> --\n> Regards.\n> \n> SAKAIDA Masaaki -- Osaka, Japan\u001b$B!!\u001b(B\n> # Sorry, I am not good at English. \n> \n> \n> ************\n> \n\n", "msg_date": "Thu, 14 Oct 1999 21:48:58 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] pgbash-1.1.1 release" } ]
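For reference, the TPC-B-style transaction that pgbench runs against the accounts, branches, tellers and history tables listed above looks roughly like the following. The column names (aid, abalance, and so on) follow the usual TPC-B naming and are an assumption here rather than something copied from the pgbench 1.1 source; :aid, :bid, :tid and :delta stand for values each simulated client picks at random.

    BEGIN;
    -- column names below are assumed TPC-B-style names
    UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM accounts WHERE aid = :aid;
    UPDATE tellers  SET tbalance = tbalance + :delta WHERE tid = :tid;
    UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    INSERT INTO history (tid, bid, aid, delta, mtime)
        VALUES (:tid, :bid, :aid, :delta, 'now');
    COMMIT;

Each committed transaction of this shape counts toward the reported tps, which is why the figures vary with the -F (no-fsync) option and with the number of concurrent clients.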
[ { "msg_contents": "Someone had the bright idea that the postmaster's -i switch could\nbe redefined as\n\t-i\tsame as it ever was\n\t-is\taccept only SSL connections\n\nUnfortunately, implementing that requires a getopt() that understands\nthe GNU double-colon extension (\"i::\"). HPUX's getopt, which claims\nto be fully conformant to POSIX.2 and about six other standards,\ndoesn't grok it. Net result: postmaster is quitting on startup with\na \"usage\" message for me. Doubtless it will also fail on most other\nnon-GNU-libc platforms.\n\nUnless we want to get into the business of supplying a substitute\noptarg() library routine, we're going to have to pick a more portable\nswitch syntax for SSL. (I might also point out that \"-is\" used to\nhave a quite different interpretation, ie \"-i -s\", which could trip\nup someone somewhere.)\n\nI can see two reasonable choices: (a) pick a currently-unused\nswitch letter that you specify *in addition to* -i, if you want\nonly secure connections; (b) pick a currently-unused switch letter\nthat you specify *instead of* -i, if you want only secure connections.\n\nI'd lean towards (a) except that both of the obvious choices, -s and -S,\nare already taken. If we go with (b), -I is available and perhaps not\na totally off-the-wall choice, but I can't say I really like it.\n\nComments? Ideas? Is it time to give up on getopt and go to multiletter\nswitch names? (Of course that would break a lot of people's startup\nscripts... but we may someday be forced into it... maybe it's better\nto bite the bullet now.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Sep 1999 23:04:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster dead on startup from unportable SSL patch" }, { "msg_contents": "> Comments? Ideas? Is it time to give up on getopt and go to multiletter\n> switch names? (Of course that would break a lot of people's startup\n> scripts... but we may someday be forced into it... maybe it's better\n> to bite the bullet now.)\n\nBreak it ;)\n\nThe single-character switches are definitely non-intuitive. Implement\n-I for now if you want, but if/when we release v7.0 breakage should be\nOK. And with Jan's and Vadim's projects (among others) it looks like\nv7.0 is coming up soon :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 30 Sep 1999 06:03:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster dead on startup from unportable SSL patch" }, { "msg_contents": "> Someone had the bright idea that the postmaster's -i switch could\n> be redefined as\n> \t-i\tsame as it ever was\n> \t-is\taccept only SSL connections\n> \n> Unfortunately, implementing that requires a getopt() that understands\n> the GNU double-colon extension (\"i::\"). HPUX's getopt, which claims\n> to be fully conformant to POSIX.2 and about six other standards,\n> doesn't grok it. Net result: postmaster is quitting on startup with\n> a \"usage\" message for me. Doubtless it will also fail on most other\n> non-GNU-libc platforms.\n> \n> Unless we want to get into the business of supplying a substitute\n> optarg() library routine, we're going to have to pick a more portable\n> switch syntax for SSL. 
(I might also point out that \"-is\" used to\n> have a quite different interpretation, ie \"-i -s\", which could trip\n> up someone somewhere.)\n\n-is is a totally broken option flag.\n\n> \n> I can see two reasonable choices: (a) pick a currently-unused\n> switch letter that you specify *in addition to* -i, if you want\n> only secure connections; (b) pick a currently-unused switch letter\n> that you specify *instead of* -i, if you want only secure connections.\n> \n> I'd lean towards (a) except that both of the obvious choices, -s and -S,\n> are already taken. If we go with (b), -I is available and perhaps not\n> a totally off-the-wall choice, but I can't say I really like it.\n\nI like option (a). Just pick any letter for the additional SSL flag\n. It is SSL, you can use -L or -l. I would like to keep -i as\nrequired, so when we tell people they have to use -i, they really have\nto use -i for INET connection, not -i or -L.\n\n> \n> Comments? Ideas? Is it time to give up on getopt and go to multiletter\n> switch names? (Of course that would break a lot of people's startup\n> scripts... but we may someday be forced into it... maybe it's better\n> to bite the bullet now.)\n\nNo, I don't think so. long opt names are more a headache than just\npicking any new letter.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Thu, 30 Sep 1999 13:53:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster dead on startup from unportable SSL patch" }, { "msg_contents": "On Sep 30, Thomas Lockhart mentioned:\n\n> > Comments? Ideas? Is it time to give up on getopt and go to multiletter\n> > switch names? (Of course that would break a lot of people's startup\n> > scripts... but we may someday be forced into it... maybe it's better\n> > to bite the bullet now.)\n> \n> Break it ;)\n> \n> The single-character switches are definitely non-intuitive. Implement\n\nIt's a backend people! My man page shows 12 defined switches, so there are\nat least 44 character switches left. A little imagination please.\n\nI am implementing GNU style long options in psql but obviously that sort\nof thing won't help anybody that doesn't have a GNU'ish system, in\nparticular the people affected by the -is thing in the first place.\n\nOr do you *really* want to get into the business of writing your own\ngetopt replacement??? Then you are liable to end up with something even\nless intuitive.\n\nMeanwhile, how about something like -i for normal and -i SSL for what's\ndesired. (That is, change the \"i\" to \"i:\"). Then, if someone comes up with\nsomething related (accept only ssh, ipv6, latest pgsql protocol, etc.\nconnections), you save a switch.\n\nPeter\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Fri, 1 Oct 1999 02:49:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster dead on startup from unportable SSL patch" }, { "msg_contents": "Thus spake Peter Eisentraut\n> > The single-character switches are definitely non-intuitive. Implement\n> \n> It's a backend people! My man page shows 12 defined switches, so there are\n> at least 44 character switches left. 
A little imagination please.\n\nMy take is that on the CL I want single character flags for speed of\nentry and in startup scripts I can comment if neccesary.\n\n> Or do you *really* want to get into the business of writing your own\n> getopt replacement??? Then you are liable to end up with something even\n> less intuitive.\n\nWell, in fact I do and have. :-)\n\nSee http://www.druid.net/~darcy/files/getarg.c for a different type of\ngetopt. It uses a different programmer API to add some functionality\nbut looks the same to the user unless they know about the extras.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 2 Oct 1999 07:25:05 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster dead on startup from unportable SSL patch" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> The single-character switches are definitely non-intuitive. Implement\n\n> It's a backend people! My man page shows 12 defined switches, so there are\n> at least 44 character switches left. A little imagination please.\n\nIt's true that the postmaster isn't something you normally start by\nhand, but the other side of that coin is that startup scripts are\nusually made by people when they are new to Postgres, and it's not\nhard to make a mistake...\n\n> I am implementing GNU style long options in psql but obviously that sort\n> of thing won't help anybody that doesn't have a GNU'ish system, in\n> particular the people affected by the -is thing in the first place.\n>\n> Or do you *really* want to get into the business of writing your own\n> getopt replacement???\n\nEr, you had better be writing your own getopt replacement if you want\nto provide GNU-style options in psql. Or have you forgotten that the\ncode must be portable to non-GNU platforms? I don't think it would be\na good idea to support long options only on boxes with a suitable\ngetopt, either. That would create a documentation, scripting, and\nsupport nightmare (because the same psql command line would work for\nsome people and not others).\n\nIf it weren't for the license conflict between BSD and GPL, I'd suggest\njust dropping GNU getopt into the Postgres distribution, but having\nGPL'd code in the distribution is a can of worms we'd best not open.\n\n> Meanwhile, how about something like -i for normal and -i SSL for what's\n> desired. (That is, change the \"i\" to \"i:\").\n\nI tried that before I realized what the i:: was all about, but it still\nbreaks existing startup scripts, because i: means that there *must* be\nan argument to -i --- so if you write something like -i -o \"backend switches\"\nthe -o gets swallowed as the argument to -i, and chaos ensues.\n\nI do like the notion of specifying SSL as the argument of some new\nswitch letter, so that we have a way to add more connection\nmethods without using up more switch letters...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 11:49:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster dead on startup from unportable SSL patch " }, { "msg_contents": "On Oct 2, Tom Lane mentioned:\n\n> Er, you had better be writing your own getopt replacement if you want\n> to provide GNU-style options in psql. 
Or have you forgotten that the\n> code must be portable to non-GNU platforms? I don't think it would be\n> a good idea to support long options only on boxes with a suitable\n> getopt, either. That would create a documentation, scripting, and\n> support nightmare (because the same psql command line would work for\n> some people and not others).\n\nNaturally this whole thing will be #ifdef'ed out and depending on an\nautoconf test for getopt_long. Also there will be a short option for every\nlong one and vice versa.\n\nI also gave the documentation issue some thought and I agree that this\nmight not be fun to support. But then I don't really see too many support\nquestions regarding psql _invocation_.\n\nAt this point I'm just going to leave it undocumented, pending further\ncomplaints. I just like the self-documenting elegancy of\n\n$ psql --host=localhost --port=5432 --dbname=foo --username=joe\n--from-file=myfile.sql --out-file=result.txt \n\nBut you can also get (actual output):\n$ ./psql --list\nThis version of psql was compiled without support for long options.\nUse -? for help on invocation options.\n\nIt's not too hard to check for that: just include \"-\" in your options list\nfor the regular getopt.\n\n> If it weren't for the license conflict between BSD and GPL, I'd suggest\n\nOkay, while we're at it: Someone wrote me regarding readline (GPL) vs\nlibedit (BSD). If you look at the code, readline is pretty deeply\nintegrated. This is almost the same issue. But there has not been any\nsupport nightmare I was aware of. On the other hand there are even\nbackslash commands (\\s) that only work with readline.\n\nI even wrote an SQL-aware readline tab completion which I intend to\nincorporate in one form or another. This is true added functionality,\nwhile you might get away with saying that long options are just a toy.\n\nAnd of course we don't even want to talk about the requirements regarding\nGNU make, GNU flex, GNU tar, or this whole autoconf business. Of course,\nwe could write ./configure -e locale -w perl, but that's no fun . . .\n\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Sat, 2 Oct 1999 20:40:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "getopt_long (was Re: [HACKERS] postmaster dead on startup ...)" } ]
[ { "msg_contents": "Hi,\n\nin the last few days I compiled libpq on Windows NT\nusing MS Visual Studio 6.0. I followed the instructions\ngiven by Bob Kline <[email protected]> in his mail from\nFri, 3 Sep 1999.\nUnfortuanetely he sent his mail only to dbi-users, so I would\nlike to repeat one major problem on this list.\n\nHere is an excerpt from his mail:\n\n4. The DllMain function in src/interfaces/libpq/libpqdll.c of the\nPostgreSQL 6.5 sources, in which WSAStartup is invoked, is never called,\nwhich causes gethostbyname calls to fail. Solution (more properly,\n\"kludge\" -- I know there's a cleaner fix somewhere, but this works for\nnow): immediately after the local declarations for the connectDB function\nin src/interfaces/libpq/fe-connect.c:\n\n#ifdef WIN32\n static int WeHaveCalledWSAStartup;\n\n if (!WeHaveCalledWSAStartup) {\n WSADATA wsaData;\n if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n sprintf(conn->errorMessage,\n \"WSAStartup failed: errno=%d\\n\", h_errno);\n goto connect_errReturn;\n }\n WeHaveCalledWSAStartup = 1;\n }\n#endif\n\n\n\nBesides the effort to port the complete server om Win32\nusing the Cygnus environment, it would be nice to be able\nto compile at least the client part (libpq) with a standard\nMS-compiler.\n\nSo please apply this patch or an equivalent cleaner solution.\n\n\nthanks\nEdmund\n\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n", "msg_date": "Thu, 30 Sep 1999 07:24:34 +0200", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "Win32 =?iso-8859-1?Q?p=FCort?= of libpq" }, { "msg_contents": "\nOn 30-Sep-99 Edmund Mergl wrote:\n> Hi,\n> \n> in the last few days I compiled libpq on Windows NT\n> using MS Visual Studio 6.0. I followed the instructions\n> given by Bob Kline <[email protected]> in his mail from\n> Fri, 3 Sep 1999.\n> Unfortuanetely he sent his mail only to dbi-users, so I would\n> like to repeat one major problem on this list.\n> \n> Here is an excerpt from his mail:\n> \n> 4. The DllMain function in src/interfaces/libpq/libpqdll.c of the\n> PostgreSQL 6.5 sources, in which WSAStartup is invoked, is never called,\n> which causes gethostbyname calls to fail. Solution (more properly,\n> \"kludge\" -- I know there's a cleaner fix somewhere, but this works for\n> now): immediately after the local declarations for the connectDB function\n> in src/interfaces/libpq/fe-connect.c:\n> \n>#ifdef WIN32\n> static int WeHaveCalledWSAStartup;\n> \n> if (!WeHaveCalledWSAStartup) {\n> WSADATA wsaData;\n> if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n> sprintf(conn->errorMessage,\n> \"WSAStartup failed: errno=%d\\n\", h_errno);\n> goto connect_errReturn;\n> }\n> WeHaveCalledWSAStartup = 1;\n> }\n>#endif\n\nYou need not to take care wether WSAStartup is alredy called or not.\nWindows handle it automatically.\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 30 Sep 1999 15:43:37 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Win32=?KOI8-R?Q?_p=FCort?= of libpq" }, { "msg_contents": "On Thu, 30 Sep 1999, Dmitry Samersoff wrote:\n\n> \n> On 30-Sep-99 Edmund Mergl wrote:\n> > Hi,\n> > \n> > in the last few days I compiled libpq on Windows NT\n> > using MS Visual Studio 6.0. 
I followed the instructions\n> > given by Bob Kline <[email protected]> in his mail from\n> > Fri, 3 Sep 1999.\n> > Unfortuanetely he sent his mail only to dbi-users, so I would\n> > like to repeat one major problem on this list.\n> > \n> > Here is an excerpt from his mail:\n> > \n> > 4. The DllMain function in src/interfaces/libpq/libpqdll.c of the\n> > PostgreSQL 6.5 sources, in which WSAStartup is invoked, is never called,\n> > which causes gethostbyname calls to fail. Solution (more properly,\n> > \"kludge\" -- I know there's a cleaner fix somewhere, but this works for\n> > now): immediately after the local declarations for the connectDB function\n> > in src/interfaces/libpq/fe-connect.c:\n> > \n> >#ifdef WIN32\n> > static int WeHaveCalledWSAStartup;\n> > \n> > if (!WeHaveCalledWSAStartup) {\n> > WSADATA wsaData;\n> > if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n> > sprintf(conn->errorMessage,\n> > \"WSAStartup failed: errno=%d\\n\", h_errno);\n> > goto connect_errReturn;\n> > }\n> > WeHaveCalledWSAStartup = 1;\n> > }\n> >#endif\n> \n> You need not to take care wether WSAStartup is alredy called or not.\n> Windows handle it automatically.\n\nBy calling it yourself you have more control over which minimum version \nwill be loaded.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 30 Sep 1999 08:06:31 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Win32=?KOI8-R?Q?_p=FCort?= of libpq" }, { "msg_contents": "\nOn 30-Sep-99 Vince Vielhaber wrote:\n> On Thu, 30 Sep 1999, Dmitry Samersoff wrote:\n> \n>> \n>> On 30-Sep-99 Edmund Mergl wrote:\n>> > Hi,\n>> > \n>> > in the last few days I compiled libpq on Windows NT\n>> > using MS Visual Studio 6.0. I followed the instructions\n>> > given by Bob Kline <[email protected]> in his mail from\n>> > Fri, 3 Sep 1999.\n>> > Unfortuanetely he sent his mail only to dbi-users, so I would\n>> > like to repeat one major problem on this list.\n>> > \n>> > Here is an excerpt from his mail:\n>> > \n>> > 4. The DllMain function in src/interfaces/libpq/libpqdll.c of the\n>> > PostgreSQL 6.5 sources, in which WSAStartup is invoked, is never called,\n>> > which causes gethostbyname calls to fail. 
Solution (more properly,\n>> > \"kludge\" -- I know there's a cleaner fix somewhere, but this works for\n>> > now): immediately after the local declarations for the connectDB function\n>> > in src/interfaces/libpq/fe-connect.c:\n>> > \n>> >#ifdef WIN32\n>> > static int WeHaveCalledWSAStartup;\n>> > \n>> > if (!WeHaveCalledWSAStartup) {\n>> > WSADATA wsaData;\n>> > if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n>> > sprintf(conn->errorMessage,\n>> > \"WSAStartup failed: errno=%d\\n\", h_errno);\n>> > goto connect_errReturn;\n>> > }\n>> > WeHaveCalledWSAStartup = 1;\n>> > }\n>> >#endif\n>> \n>> You need not to take care wether WSAStartup is alredy called or not.\n>> Windows handle it automatically.\n> \n> By calling it yourself you have more control over which minimum version \n> will be loaded.\n\nYes, but you can just call\n\n WSADATA wsaData;\n if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n sprintf(conn->errorMessage,\n \"WSAStartup failed: errno=%d\\n\", h_errno);\n goto connect_errReturn;\n }\n\n\nwithout WeHaveCalledWSAStartup at all.\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 30 Sep 1999 16:52:04 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Win32=?KOI8-R?Q?_p=FCort?= of libpq" }, { "msg_contents": "On Thu, 30 Sep 1999, Dmitry Samersoff wrote:\n\n> \n> On 30-Sep-99 Vince Vielhaber wrote:\n> > On Thu, 30 Sep 1999, Dmitry Samersoff wrote:\n> > \n> >> \n> >> On 30-Sep-99 Edmund Mergl wrote:\n> >> > Hi,\n> >> > \n> >> > in the last few days I compiled libpq on Windows NT\n> >> > using MS Visual Studio 6.0. I followed the instructions\n> >> > given by Bob Kline <[email protected]> in his mail from\n> >> > Fri, 3 Sep 1999.\n> >> > Unfortuanetely he sent his mail only to dbi-users, so I would\n> >> > like to repeat one major problem on this list.\n> >> > \n> >> > Here is an excerpt from his mail:\n> >> > \n> >> > 4. The DllMain function in src/interfaces/libpq/libpqdll.c of the\n> >> > PostgreSQL 6.5 sources, in which WSAStartup is invoked, is never called,\n> >> > which causes gethostbyname calls to fail. Solution (more properly,\n> >> > \"kludge\" -- I know there's a cleaner fix somewhere, but this works for\n> >> > now): immediately after the local declarations for the connectDB function\n> >> > in src/interfaces/libpq/fe-connect.c:\n> >> > \n> >> >#ifdef WIN32\n> >> > static int WeHaveCalledWSAStartup;\n> >> > \n> >> > if (!WeHaveCalledWSAStartup) {\n> >> > WSADATA wsaData;\n> >> > if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n> >> > sprintf(conn->errorMessage,\n> >> > \"WSAStartup failed: errno=%d\\n\", h_errno);\n> >> > goto connect_errReturn;\n> >> > }\n> >> > WeHaveCalledWSAStartup = 1;\n> >> > }\n> >> >#endif\n> >> \n> >> You need not to take care wether WSAStartup is alredy called or not.\n> >> Windows handle it automatically.\n> > \n> > By calling it yourself you have more control over which minimum version \n> > will be loaded.\n> \n> Yes, but you can just call\n> \n> WSADATA wsaData;\n> if (WSAStartup(MAKEWORD(1, 1), &wsaData)) {\n> sprintf(conn->errorMessage,\n> \"WSAStartup failed: errno=%d\\n\", h_errno);\n> goto connect_errReturn;\n> }\n> \n> \n> without WeHaveCalledWSAStartup at all.\n\nAccroding to the 1.1 spec, you must call WSACleanup() for EVERY WSAStartup\ncall made. 
So if you call WSAStartup() three times, you must call\nWSACleanup() three times - the first two only decrement the internal \ncounter, the last one does the cleanup. This may have changed in versions\nof Winsock after 1.1 and my spec is a bit old (20 Jan 1993).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 30 Sep 1999 12:09:46 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Win32=?KOI8-R?Q?_p=FCort?= of libpq" } ]
[ { "msg_contents": "It's gone!\n\n I've just removed the pg_proc_prosrc_index and all it's\n related things like functions and the syscache for it.\n\n Actually, this index made problems (I was able to corrupt the\n index by using 2700 byte sized PL procedures) and it wasn't\n used at all. The only reference I found to it was when the\n system automagically creates a SET function from a\n specialized node - but first this code is #ifdef'd out\n (SETS_FIXED) and second I wasn't able to figure out which SQL\n construct would force this to happen.\n\n If someone in the future does this SETS_FIXED, the lookup in\n catalog/pg_proc.c must fallback to a heap scan or do\n something smarter (maybe a separate system catalog for these\n SET functions to quickly find them). For now it would throw\n an elog(ERROR) if ever hit.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 30 Sep 1999 12:36:15 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "prosrc index removed" } ]
[ { "msg_contents": "\nOK... doing some serious report hacking... and I decided I wanted to\ndo this:\n\nselect acct_id, \n sum(case when recd > ('now'::date - '30 days'::timespan)::date \n then amt else 0) as current \n from payable where not paid_p group by acct_id order by acct_id;\n\nbut pgsql gives me:\n\nERROR: parser: parse error at or near \")\"\n\nNow... I also thought I might be able to contruct \nsum(amt * <boolean>), but this also isn't allowed. I think that we\nshould make an int(boolean) function.\n\nDave.\n\n-- \n============================================================================\n|David Gilbert, Velocet Communications. | Two things can only be |\n|Mail: [email protected] | equal if and only if they |\n|http://www.velocet.net/~dgilbert | are precisely opposite. |\n=========================================================GLO================\n", "msg_date": "Thu, 30 Sep 1999 09:17:25 -0400 (EDT)", "msg_from": "David Gilbert <[email protected]>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "David Gilbert <[email protected]> writes:\n> select acct_id, \n> sum(case when recd > ('now'::date - '30 days'::timespan)::date \n> then amt else 0) as current \n> from payable where not paid_p group by acct_id order by acct_id;\n> but pgsql gives me:\n> ERROR: parser: parse error at or near \")\"\n\nThe case construct has to be terminated with an \"end\" keyword;\n\"... else 0 end)\" ought to work.\n\n> Now... I also thought I might be able to contruct \n> sum(amt * <boolean>), but this also isn't allowed. I think that we\n> should make an int(boolean) function.\n\nThat's been suggested before, and I agree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Sep 1999 09:33:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "None" } ]
[ { "msg_contents": "Use the stuff that just got sent for Access (Interfaces list, header ->\n\"ODBC-client->Linux-server: datatype boolean not recognized?\"). Same\nprinciple.\n\nMikeA\n\n>> -----Original Message-----\n>> From: David Gilbert [mailto:[email protected]]\n>> Sent: Thursday, September 30, 1999 3:17 PM\n>> To: [email protected]\n>> Subject: \n>> \n>> \n>> \n>> OK... doing some serious report hacking... and I decided I wanted to\n>> do this:\n>> \n>> select acct_id, \n>> sum(case when recd > ('now'::date - '30 days'::timespan)::date \n>> then amt else 0) as current \n>> from payable where not paid_p group by acct_id order by acct_id;\n>> \n>> but pgsql gives me:\n>> \n>> ERROR: parser: parse error at or near \")\"\n>> \n>> Now... I also thought I might be able to contruct \n>> sum(amt * <boolean>), but this also isn't allowed. I think that we\n>> should make an int(boolean) function.\n>> \n>> Dave.\n>> \n>> -- \n>> =============================================================\n>> ===============\n>> |David Gilbert, Velocet Communications. | Two things \n>> can only be |\n>> |Mail: [email protected] | equal if \n>> and only if they |\n>> |http://www.velocet.net/~dgilbert | are \n>> precisely opposite. |\n>> =========================================================GLO=\n>> ===============\n>> \n>> ************\n>> \n", "msg_date": "Thu, 30 Sep 1999 15:23:26 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: " } ]
[ { "msg_contents": "Hello all,\n\nI posted the following also to pgsql-interfaces/general, but got no \nworkable solution (yet).\n\nThe ODBC client-application I'm developping will be able to run on Mac \nand WIN/NT, uses booleans itself and has to be able to be a frontend for \na range of SQL-databases.\n\n\nThanks,\nJelle\n\nvvvv MESSAGE vvvv\n\nHello all,\n\nI'm trying to connect from a ODBC-client to PostgreSQL on a Linux-server. \nAll is going well, except that the ODBC-client seems not to recognize the \nPostgreSQL boolean-datatype.\n\nIs this really caused by PostgreSQL? If yes: is there a workaround, like \naltering the datatypename 'bool' in the proper places (pg_type?)? If no: \nthen it is another problem and can somebody give a hint?\n\nThanks,\nJelle.\n\n\nI tested and encountered it in the following testsituations:\n\nSituation 1:\nClient: Mac via OpenLink ODBC-Client\nServer: Linux RH 6.0 via OpenLink Requestbroker\nResult: it looks like PostgreSQL-boolean is converted to SQL_C_CHAR, the \ndatatype itself is called 'bool'\n\nSituation 2:\nClient: NT with psqlODBC\nServer: same Linux\nResult: in MS-Query:\n- viewing the table-definitions, it comes with the message that it \ndoesn't recognize datatype 'bool'\n- selecting values it looks like PostgreSQL-boolean is converted to a \nnumeric\n\nPostgreSQL: 6.5.1\n\n\n--------------------------------------------------------------\n NEROC Publishing Solutions\n\n Jelle Ruttenberg\n\nDe Run 1131, 5503 LB Veldhoven Phone : +31-(0)40-2586641\nP.O.Box 133, 5500 AC Veldhoven Fax : +31-(0)40-2541893\nThe Netherlands E-mail : [email protected]\n--------------------------------------------------------------\n\n", "msg_date": "Thu, 30 Sep 1999 16:46:16 +0200", "msg_from": "Jelle Ruttenberg <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC-client->Linux-server: datatype boolean not recognized?" } ]
[ { "msg_contents": "One more step forward,\n\n I've created function shells and the appropriate pg_proc\n entries for all the FOREIGN KEY constraint triggers.\n\n The builtin language is now also accepted by CREATE TRIGGER.\n\n The new trigger functions are in utils/adt/ri_triggers.c (new\n file). They have no functionality up to now - just put a\n NOTICE that they're called and that's it. Will work on some\n general local functions soon.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 30 Sep 1999 16:57:10 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "RI status report #3" } ]
[ { "msg_contents": "As you might recognize, this is supposed to be a psql \\d imitation in one \nshot:\n\nSELECT usename as \"Owner\", relname as \"Relation\",\n(CASE WHEN relkind='r' THEN \n\t(CASE WHEN 0<(select count(*) from pg_views where viewname = relname)\n\tTHEN 'view' ELSE 'table' END) \n WHEN relkind='i' THEN 'index'\n WHEN relkind='S' THEN 'sequence'\n ELSE 'other'\nEND) as \"Type\"\nFROM pg_class, pg_user\nWHERE usesysid = relowner AND\n( relkind = 'r' OR\n relkind = 'i' OR\n relkind = 'S') AND\nrelname !~ '^pg_' AND\n(relkind != 'i' OR relname !~ '^xinx') ORDER BY relname;\nERROR: flatten_tlistentry: Cannot handle node type 108\n\nHowever, if you do \n-\t(CASE WHEN 0<(select count(*) from pg_views where viewname = relname)\n-\tTHEN 'view' ELSE 'table' END) \n+\t'relation'\nif works fine. No nested CASE's?\n\nPostgreSQL 6.5.2 on i586-pc-linux-gnu, compiled by egcs\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Fri, 1 Oct 1999 03:03:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Tricky query, tricky response" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> However, if you do \n> -\t(CASE WHEN 0<(select count(*) from pg_views where viewname = relname)\n> -\tTHEN 'view' ELSE 'table' END) \n> +\t'relation'\n> if works fine. No nested CASE's?\n\nNope: no sub-selects in target list.\n\nI'm hoping to fix that soon, but if you want psql to continue to work\nwith pre-6.6 backends then you'll have to use a different approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 11:00:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " }, { "msg_contents": "> No nested CASE's?\n\nLooks like not. I would guess that it is fairly straightforward to\nfix, but am not sure. Tom Lane hunted down an killed most of the CASE\nproblems (thanks Tom!), and this is in an area he is working on now.\nMaybe you can get him to look at it??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 02 Oct 1999 15:00:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response" }, { "msg_contents": "On Oct 2, Tom Lane mentioned:\n\n> Nope: no sub-selects in target list.\n> \n> I'm hoping to fix that soon, but if you want psql to continue to work\n> with pre-6.6 backends then you'll have to use a different approach.\n\nQuestion number one is: Do I? Yeah, okay :)\n\nAnyway, I thought wouldn't a more, um, user-friendly error message like\nERROR: Subselects are not allowed in target list.\nbe more desirable than\nERROR: flatten_tlistentry: Cannot handle node type 108\n\nIf _I_ read the latter I can at least suspect that there is a problem in\nthe query tree, but Joe User that just learned about inodes the other day\nis going to think his system is broken is all sorts of ways.\n\nAnother example is\nFATAL 1: SetUserId: user 'joeschmoe' is not in 'pg_shadow'\nclearly not as nice as\nFATAL ERROR: 'joeschmoe' is not a valid database user.\n\n(Also, if you want to be really secure you wouldn't give them the\ninformation that 'joeschmoe' is not a valid user but rather just return\n\"Permission denied\" or \"Authentication failed\". -- cf. 
login(1) )\n\nI think that ought to be a TODO item, right above\n* Allow international error message support and add error codes\nPerhaps it's even the same in essence.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Sun, 3 Oct 1999 07:31:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Anyway, I thought wouldn't a more, um, user-friendly error message like\n> ERROR: Subselects are not allowed in target list.\n> be more desirable than\n> ERROR: flatten_tlistentry: Cannot handle node type 108\n\nYes, it would. Are you volunteering to try to make that happen?\n(Not for this one case, but for everything?)\n\nThere's been some discussion of trying to clean up the error reporting\nconventions, and in particular separate internal details (such as which\nroutine is reporting the error) from the \"user level\" information.\nBut a lot of the internal checks are pretty content-free from a user's\npoint of view, and there's little to be done about that. (Does\nflatten_tlistentry have a clue *why* it got a node type it never heard\nof? Nyet.) I do not think that a generic \"Internal error\" message\nwould be an improvement over what we have, messy though it is. It's\nnot a simple problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Oct 1999 12:53:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " }, { "msg_contents": "On Oct 3, Tom Lane mentioned:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Anyway, I thought wouldn't a more, um, user-friendly error message like\n> > ERROR: Subselects are not allowed in target list.\n> > be more desirable than\n> > ERROR: flatten_tlistentry: Cannot handle node type 108\n> \n> Yes, it would. Are you volunteering to try to make that happen?\n\nHmm, I'll put it on the List O' Things to consider right after I clear out\nall the crap that has accumulated in psql over the years which will take\nan unpredictable amount of time still.\n\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Tue, 5 Oct 1999 22:30:35 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Error messages (was Re: [HACKERS] Tricky query, tricky response)" }, { "msg_contents": "> > No nested CASE's?\n> \n> Looks like not. I would guess that it is fairly straightforward to\n> fix, but am not sure. Tom Lane hunted down an killed most of the CASE\n> problems (thanks Tom!), and this is in an area he is working on now.\n> Maybe you can get him to look at it??\n> \n\nIs this an item for the TODO list?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 18:27:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> No nested CASE's?\n>> \n>> Looks like not. I would guess that it is fairly straightforward to\n>> fix, but am not sure. 
Tom Lane hunted down an killed most of the CASE\n>> problems (thanks Tom!), and this is in an area he is working on now.\n>> Maybe you can get him to look at it??\n\n> Is this an item for the TODO list?\n\nFixed in current sources, I believe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Nov 1999 21:30:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " }, { "msg_contents": "On 1999-11-29, Tom Lane mentioned:\n\n> Bruce Momjian <[email protected]> writes:\n> >>>> No nested CASE's?\n> >> \n> >> Looks like not. I would guess that it is fairly straightforward to\n> >> fix, but am not sure. Tom Lane hunted down an killed most of the CASE\n> >> problems (thanks Tom!), and this is in an area he is working on now.\n> >> Maybe you can get him to look at it??\n> \n> > Is this an item for the TODO list?\n> \n> Fixed in current sources, I believe.\n\nThe problem (I sent it in) was actually no sub-selects in target list.\nNested cases work fine I believe.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 1 Dec 1999 01:27:30 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>>>> Is this an item for the TODO list?\n>> \n>> Fixed in current sources, I believe.\n\n> The problem (I sent it in) was actually no sub-selects in target list.\n\nStill fixed in current sources ;-) ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Dec 1999 00:17:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tricky query, tricky response " } ]
[ { "msg_contents": "Hello,\n\nis it correct that a nesting of subtransactions is not possible with\npostgres at the moment?\n(I didn't find a TODO entry for this (even at 'exotic' features))\n\nchristof=> begin;\nBEGIN\nchristof=> begin;\nNOTICE: BeginTransactionBlock and not in default state\nBEGIN\nchristof=> commit;\nEND\nchristof=> commit;\nNOTICE: EndTransactionBlock and not inprogress/abort state\nEND\n\nRegards\n Christof\n\n\n", "msg_date": "Fri, 01 Oct 1999 08:24:22 +0200", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": true, "msg_subject": "are subtransactions not nestable?" }, { "msg_contents": "> is it correct that a nesting of subtransactions is not possible with\n> postgres at the moment?\n> (I didn't find a TODO entry for this (even at 'exotic' features))\n\nThere has been some discussion of this. The fact that it is disallowed\nby SQL92 is *not* the reason we don't have the feature, but it may be\nresponsible for it not yet being done...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 01 Oct 1999 15:03:44 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] are subtransactions not nestable?" }, { "msg_contents": "> Hello,\n> \n> is it correct that a nesting of subtransactions is not possible with\n> postgres at the moment?\n> (I didn't find a TODO entry for this (even at 'exotic' features))\n\nWe don't support it, but will add to TODO now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Oct 1999 11:45:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] are subtransactions not nestable?" } ]
[ { "msg_contents": "I trried to compile today's current sources and got a problem:\n\nmake[2]: Entering directory /u/postgres/cvs/pgsql/src/interfaces/libpq++'\nc++ -I../../backend -I../../include -I../../interfaces/libpq -I../../include -I../../backend -O2 -mpentium -Wall -Wmissing-prototypes -fpic -c pgdatabase.cc -o pgdatabase.o\npgdatabase.cc: In method \t`int PgDatabase::CmdTuples()':\npgdatabase.cc:66: return to \t`nt' from `const char *' lacks a cast\nmake[2]: *** [pgdatabase.o] Error 1\nmake[2]: Leaving directory /u/postgres/cvs/pgsql/src/interfaces/libpq++'\n\nI'm using gcc 2.95.1 and have no problem to compile 6.5.2\non my Linux box 2.2.12\n\n\tregards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 1 Oct 1999 16:12:29 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "problem compiling current sources with gcc 2.95.1" } ]
[ { "msg_contents": "We need to estimate the number of distinct values of an attribute from\ninside optimizer, for an extension we are working on.\nShould we use stacommonfrac from pg_statistic, or attdisbursion, or what\nelse?\n", "msg_date": "Fri, 01 Oct 1999 18:11:51 +0200", "msg_from": "Roberto Cornacchia <[email protected]>", "msg_from_op": true, "msg_subject": "attribute distinct values estimate" } ]
[ { "msg_contents": "\nHi,\n\nIs anybody tried Rexx to work with postgres ?\nI found RexxSQL ( http://www.lightlink.com/hessling/)\nwhich claimed to work via ODBC but I didn't work \nwork with ODBC and would like to know if it's worth to play\nand what I need to install/\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Fri, 1 Oct 1999 22:16:44 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Rexx interface po postgres ?" } ]
[ { "msg_contents": "[email protected]\n>[pgsql@hot] ~/devel/src/test/regress > ./checkresults\n>====== int2 ======\n>10c10\n>< ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n>---\n>> ERROR: pg_atoi: error reading \"100000\": Math result not representable\n>====== int4 ======\n>10c10\n>< ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n>---\n>> ERROR: pg_atoi: error reading \"1000000000000\": Math result not representable\n>[pgsql@hot] ~/devel/src/test/regress >\n>\n>\n>\n> Such a regression result while we're in the middle of feature\n> development.\n>\n> I'm really impressed - if we only can keep it on this level!\n>\n\nI'm sure we could get rid of even those errors if we were to\nincorporate some test like the following and then mangle the\nexpected results accordingly.\n\nTrouble is I'm not sure how portable the code is:-\n\nSPARCLinux compiles and gives \"Math result not representable\"\nSolaris7 compiles and gives \"Result too large\"\n\nComments?\n\n#include <stdio.h>\n#include <errno.h>\n#include <string.h>\n#include <stdlib.h>\nint main(void)\n{\n char *s = \"10000000000000000000000000000000000000000000000000\";\n long l = 0;\n char *badp = (char *) NULL;\n\n errno = 0;\n\n l = strtol(s, &badp, 10);\n if (errno) {\n printf(\"%s\\n\",strerror(errno));\n exit(0);\n } else {\n printf(\"Error - No Error.\");\n exit(1);\n }\n} \n\n", "msg_date": "Fri, 1 Oct 1999 20:02:54 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2" }, { "msg_contents": "> [email protected]\n> >[pgsql@hot] ~/devel/src/test/regress > ./checkresults\n> >====== int2 ======\n> >10c10\n> >< ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n> >---\n> >> ERROR: pg_atoi: error reading \"100000\": Math result not representable\n> >====== int4 ======\n> >10c10\n> >< ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n> >---\n> >> ERROR: pg_atoi: error reading \"1000000000000\": Math result not representable\n> >[pgsql@hot] ~/devel/src/test/regress >\n\nI am inclined to strip off error messages after the second : and the do\nthe compare so:\n\n\tERROR: pg_atoi: error reading \"1000000000000\": Math result not representa..\n\n\nbecomes:\n\n\tERROR: pg_atoi: error reading \"1000000000000\":\n\nThey we just have to make sure all calls to perror have a colon before\nthem.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 1 Oct 1999 16:11:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2" }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> I'm sure we could get rid of even those errors if we were to\n> incorporate some test like the following and then mangle the\n> expected results accordingly.\n\nI don't see much value in getting rid of the discrepancies in strerror()\nmessages unless you have some proposal for getting rid of platform-\nspecific float roundoff differences. 
On my machine, the diffs in the\nfloat8 and geometry regress tests are *much* larger and much harder to\nvalidate by eyeball than the piddling little diffs in int2 and int4.\n(I suppose I should submit platform-specific expected files for HPUX,\nbut have never gotten round to it...)\n\nHowever, if people like this approach, why not just print out\n\"strerror(ERANGE)\" instead of fooling with strtol?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Oct 1999 17:32:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> They we just have to make sure all calls to perror have a colon before\n> them.\n\nIIRC, perror is defined to supply the colon for you. But this approach\nwon't really work unless you want to guarantee that colons never appear\nfor any other reason in the regress outputs... horology, for one, is\ngoing to have a problem with that...\n\nIt does seem that the majority of the platform-specific expected files\nwe have are for this one issue in int2/int4, so maybe Keith's idea of\nsubstituting the correct error message as a localization string is not a\nbad one. We already have all the mechanism for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Oct 1999 17:38:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 " } ]
[ { "msg_contents": "I have a new job, Marc, sorry for the inconvenience, but I was on almost all\nof the lists so I thought I'd make sure I unsubscribed from them all rather\nthan have the bounced messages going back to senders or my box here filling\nup.\n\nJUST IGNORE THIS, thanks\n DEJ\n", "msg_date": "Fri, 1 Oct 1999 17:27:41 -0500 ", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "testing unsubscribe" } ]
[ { "msg_contents": "\n>From: Tom Lane <[email protected]>\n>\n>Keith Parks <[email protected]> writes:\n>> I'm sure we could get rid of even those errors if we were to\n>> incorporate some test like the following and then mangle the\n>> expected results accordingly.\n>\n>I don't see much value in getting rid of the discrepancies in strerror()\n>messages unless you have some proposal for getting rid of platform-\n>specific float roundoff differences. On my machine, the diffs in the\n>float8 and geometry regress tests are *much* larger and much harder to\n>validate by eyeball than the piddling little diffs in int2 and int4.\n>(I suppose I should submit platform-specific expected files for HPUX,\n>but have never gotten round to it...)\n>\n>However, if people like this approach, why not just print out\n>\"strerror(ERANGE)\" instead of fooling with strtol?\n\nTrust me to make things over complex!!\n\nKeith.\n\n", "msg_date": "Fri, 1 Oct 1999 23:55:00 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 " } ]
[ { "msg_contents": "I have finished installing the code changes associated with marking\nfunctions 'iscachable' or not. I had hoped that this change would\neliminate the problems we have with premature coercion of datetime\nconstants in column defaults and rules. It turns out it doesn't :-(.\nThat's because there isn't any good way to postpone the evaluation of\na typinput function. Since the argument of a typinput function is a\nnull-terminated C string, and not a 'text' or any other full-fledged\nPostgres type, there is no way to construct an expression tree\nrepresenting runtime evaluation of the typinput function. So, even\nthough the system knows it shouldn't evaluate the typinput function\nbefore runtime, it has little choice.\n\nWe have talked about making 'C string' a genuine Postgres type, at\nleast to the extent of giving it an OID and making it representable\nas a Const node. If we did that then we could represent a typinput\nfunction call by an expression tree and make this problem go away.\nI'm not going to tackle that right now, though, since there are\nhigher-priority problems to deal with.\n\nThe current state of affairs is that if you write a constant of UNKNOWN\ntype (ie, an unadorned quoted constant), it'll get coerced to the\ndestination type just as soon as the system can figure out what that\ntype is. So, it's still necessary to write \"'now'::text\" (or one of the\nother syntaxes for type-casting) or \"now()\" as the default value for\na datetime column --- if you write unadorned 'now' then you will get\nthe time of table creation, same as before.\n\nI am about to rip out and redo the crufty implementation of default and\nconstraint expressions, and I think that I can arrange for UNKNOWN\nconstants to remain UNKNOWN when they are stored into the pg_attrdef\ntable. This would mean that what gets into pg_attrdef is just the\nunadorned string 'now', and then the coercion of this to a particular\ntimestamp will occur when an INSERT statement that uses the default\nis parsed. So the right thing (approximately, anyway) should happen for\na typical run-of-the-mill INSERT. The wrong thing will still happen\nfor an INSERT written in a rule --- its default will be established when\nthe rule is created.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Oct 1999 19:57:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "'iscachable' only partially solves premature constant coercion" }, { "msg_contents": "I wrote:\n> I am about to rip out and redo the crufty implementation of default and\n> constraint expressions, and I think that I can arrange for UNKNOWN\n> constants to remain UNKNOWN when they are stored into the pg_attrdef\n> table. This would mean that what gets into pg_attrdef is just the\n> unadorned string 'now', and then the coercion of this to a particular\n> timestamp will occur when an INSERT statement that uses the default\n> is parsed. So the right thing (approximately, anyway) should happen for\n> a typical run-of-the-mill INSERT. The wrong thing will still happen\n> for an INSERT written in a rule --- its default will be established when\n> the rule is created.\n\nI did this, and that's how it works now. 
Unless we choose to do\nsomething about making C strings and typinput functions fit into the\nPostgres type scheme, that's how it will continue to work.\n\nTo summarize: in current sources, \"default 'now'\" works as expected in\nsimple cases:\n\nplay=> create table valplustimestamp (val int, stamp datetime default 'now');\nCREATE\nplay=> insert into valplustimestamp values(1);\nINSERT 653323 1\nplay=> insert into valplustimestamp values(2);\nINSERT 653324 1\nplay=> select * from valplustimestamp;\nval|stamp\n---+----------------------------\n 1|Mon Oct 04 10:58:47 1999 EDT\n 2|Mon Oct 04 10:58:49 1999 EDT\n(2 rows)\n\nbut it still has a subtle failure mode:\n\nplay=> create view val as select val from valplustimestamp;\nCREATE\nplay=> create rule val_ins as on insert to val do instead\nplay-> insert into valplustimestamp values(new.val);\nCREATE\nplay=> insert into val values(3);\nINSERT 653336 1\nplay=> insert into val values(4);\nINSERT 653337 1\nplay=> select * from valplustimestamp;\nval|stamp\n---+----------------------------\n 1|Mon Oct 04 10:58:47 1999 EDT\n 2|Mon Oct 04 10:58:49 1999 EDT\n 3|Mon Oct 04 10:59:48 1999 EDT\n 4|Mon Oct 04 10:59:48 1999 EDT\n(4 rows)\n\nThe default value inserted by the rule got frozen when the rule was\nparsed, as can be seen by inspecting the back-parsing of the rule:\n\nplay=> select * from pg_rules;\ntablename|rulename|definition\n---------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------\nval |val_ins |CREATE RULE val_ins AS ON INSERT TO val DO INSTEAD INSERT INTO valplustimestamp (val, stamp) VALUES (new.val, 'Mon Oct 04 10:59:48 1999 EDT'::datetime);\n(1 row)\n\n\nSo, we should still recommend \"DEFAULT now()\" rather than \"DEFAULT 'now'\"\nas the most reliable way of setting up a current-time default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Oct 1999 11:11:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Status of 'now' column defaults" } ]
[ { "msg_contents": "I see a todo item\n* Views with spaces in view name fail when referenced\n\nI have another one for you:\n* Databases with spaces in name fail to be created and destroyed despite\nresponses to the contrary.\n\nA sample session:\ntemplate1=> create database \"with space\";\nCREATEDB\ntemplate1=> \\q\n$ psql -d \"with space\"\nConnection to database 'with space' failed.\nFATAL 1: InitPostgres could not validate that the database version is\ncompatible with this level of Postgres\n even though the database system as a whole appears to be at a\ncompatible level.\n You may need to recreate the database with SQL commands DROP\nDATABASE and CREATE DATABASE.\n File '/usr/local/pgsql/data/base/with space/PG_VERSION' does not\nexist or no read permission.\n\n(You can't do \\c with space or \\c \"with space\" yet. That will be (is) in\nthe new version.)\n\nFurther investigation shows that the directory\n/usr/local/pgsql/data/base/with space is totally empty.\n\nBut:\ntemplate1=> select * from pg_database;\ndatname |datdba|encoding|datpath\n----------+------+--------+----------\ntemplate1 | 100| 0|template1\n . . .\nwith space| 101| 0|with space\n(4 rows)\n\ntemplate1=> drop database \"with space\";\nDESTROYDB\n\nYet, the mysterious empty directory is still there.\n\nBUG?\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Mon, 4 Oct 1999 16:03:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Database names with spaces" }, { "msg_contents": "\nLooks like bug to me.\n\n\n> I see a todo item\n> * Views with spaces in view name fail when referenced\n> \n> I have another one for you:\n> * Databases with spaces in name fail to be created and destroyed despite\n> responses to the contrary.\n> \n> A sample session:\n> template1=> create database \"with space\";\n> CREATEDB\n> template1=> \\q\n> $ psql -d \"with space\"\n> Connection to database 'with space' failed.\n> FATAL 1: InitPostgres could not validate that the database version is\n> compatible with this level of Postgres\n> even though the database system as a whole appears to be at a\n> compatible level.\n> You may need to recreate the database with SQL commands DROP\n> DATABASE and CREATE DATABASE.\n> File '/usr/local/pgsql/data/base/with space/PG_VERSION' does not\n> exist or no read permission.\n> \n> (You can't do \\c with space or \\c \"with space\" yet. That will be (is) in\n> the new version.)\n> \n> Further investigation shows that the directory\n> /usr/local/pgsql/data/base/with space is totally empty.\n> \n> But:\n> template1=> select * from pg_database;\n> datname |datdba|encoding|datpath\n> ----------+------+--------+----------\n> template1 | 100| 0|template1\n> . . .\n> with space| 101| 0|with space\n> (4 rows)\n> \n> template1=> drop database \"with space\";\n> DESTROYDB\n> \n> Yet, the mysterious empty directory is still there.\n> \n> BUG?\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 17:11:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Database names with spaces" }, { "msg_contents": "\nLooks like a bug. 
Added to TODO list.\n\n\n\n> I see a todo item\n> * Views with spaces in view name fail when referenced\n> \n> I have another one for you:\n> * Databases with spaces in name fail to be created and destroyed despite\n> responses to the contrary.\n> \n> A sample session:\n> template1=> create database \"with space\";\n> CREATEDB\n> template1=> \\q\n> $ psql -d \"with space\"\n> Connection to database 'with space' failed.\n> FATAL 1: InitPostgres could not validate that the database version is\n> compatible with this level of Postgres\n> even though the database system as a whole appears to be at a\n> compatible level.\n> You may need to recreate the database with SQL commands DROP\n> DATABASE and CREATE DATABASE.\n> File '/usr/local/pgsql/data/base/with space/PG_VERSION' does not\n> exist or no read permission.\n> \n> (You can't do \\c with space or \\c \"with space\" yet. That will be (is) in\n> the new version.)\n> \n> Further investigation shows that the directory\n> /usr/local/pgsql/data/base/with space is totally empty.\n> \n> But:\n> template1=> select * from pg_database;\n> datname |datdba|encoding|datpath\n> ----------+------+--------+----------\n> template1 | 100| 0|template1\n> . . .\n> with space| 101| 0|with space\n> (4 rows)\n> \n> template1=> drop database \"with space\";\n> DESTROYDB\n> \n> Yet, the mysterious empty directory is still there.\n> \n> BUG?\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 17:11:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Database names with spaces" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Looks like a bug. Added to TODO list.\n\n>> I see a todo item\n>> * Views with spaces in view name fail when referenced\n>> \n>> I have another one for you:\n>> * Databases with spaces in name fail to be created and destroyed despite\n>> responses to the contrary.\n\nIIRC, createdb and destroydb use \"cp -r\" and \"rm -r\" respectively.\nLack of careful quoting in the system calls is probably what's\ncausing the problem here.\n\nHowever, I wonder if it wouldn't be a better idea to forbid funny\ncharacters in things that will become Unix filenames. In particular,\nsomething like CREATE DATABASE \"../../../something\" could have real\nbad consequences...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Oct 1999 18:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Database names with spaces " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Looks like a bug. Added to TODO list.\n> \n> >> I see a todo item\n> >> * Views with spaces in view name fail when referenced\n> >> \n> >> I have another one for you:\n> >> * Databases with spaces in name fail to be created and destroyed despite\n> >> responses to the contrary.\n> \n> IIRC, createdb and destroydb use \"cp -r\" and \"rm -r\" respectively.\n> Lack of careful quoting in the system calls is probably what's\n> causing the problem here.\n> \n> However, I wonder if it wouldn't be a better idea to forbid funny\n> characters in things that will become Unix filenames. 
In particular,\n> something like CREATE DATABASE \"../../../something\" could have real\n> bad consequences...\n\nI just tried it:\n\n\ttest=> create database \"../../pg_hba.conf\"\n\ttest-> \\g\n\tERROR: Unable to locate path '../../pg_hba.conf'\n\t This may be due to a missing environment variable in the server\n\nSeems we are safe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 23:08:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Database names with spaces" } ]
[ { "msg_contents": "I've written a little C mex file for MATLAB to interface to the my\nPostgreSQL database. Currently, I have a compiled version for SGI IRIX\n6.5.4, but I think this should be simple enough to port to any machine.\n\nCurrently, the mex file just brings data back as ASCII. I hope to\neventually add some ability for binary cursors and possibly even\nlower-level functions like PQexec and PQntuples. But considering my list\nof things to do that may take a while.\n\nI've tried to compile it for Linux, however MATLAB was compiled on Linux\n4.2 and so the mex compiler won't work properly under my RH 6.0. There\nis a workaround by getting the version 5.0 C libraries for Linux and\ncompiling it with them. However, I am hoping that the Mathworks will\neventually compile the program on 6.0 so that I don't have to do the\nworkaround.\n\nI hope people get some use out of it. Let me know if you find ways to\nimprove it. I think it would be a nice little interface to add to the\nPostgreSQL distribution someday.\n\n-Tony", "msg_date": "Mon, 04 Oct 1999 11:59:32 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "MATLAB mex file for PostgreSQL" }, { "msg_contents": "> I've written a little C mex file for MATLAB to interface to the my\n> PostgreSQL database.\n\nI'll add it to the contrib area, unless someone else beats me to it.\nThanks.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 05 Oct 1999 01:48:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MATLAB mex file for PostgreSQL" } ]
[ { "msg_contents": "Hi..\n\nI have installed postgreSQL 6.42 on Solaris sparc as per the instructions \ngiven in readmesolaris.htm.\nNow after installing successfully when I run initdb command I am getting\nfollowing error\n\n$ initdb --username=pgsql --pglib=/usr/local/pgsql/lib\n\nld.so.1: pg_id: fatal: libgen.so.1: open failed: No such file or directory\nUnable to determine a valid username. If you are running\ninitdb without an explicit username specified, then there\nmay be a problem with finding the Postgres shared library\nand/or the pg_id utility.\n\nPlease guide ..\n\nRegards\nRajesh\n\n\n\n\n\n\n\n\nHi..I have installed postgreSQL 6.42 on Solaris sparc \nas per the instructions given in readmesolaris.htm.\nNow after installing successfully when I run initdb command I \nam gettingfollowing error\n$ initdb --username=pgsql \n--pglib=/usr/local/pgsql/lib\n \nld.so.1: pg_id: fatal: libgen.so.1: open failed: No such file \nor directoryUnable to determine a valid username.  If you are \nrunninginitdb without an explicit username specified, then theremay be a \nproblem with finding the Postgres shared libraryand/or the pg_id \nutility.\n \nPlease guide ..\n \nRegards\nRajesh", "msg_date": "Tue, 5 Oct 1999 14:10:03 +0530", "msg_from": "\"Geeta Mahesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "initdb problem on sun solaris sparc" } ]
[ { "msg_contents": "Hi!\n\nI am trying to implement a user type for PostgreSQL 6.5 whose size can\nbe arbitrarely large, so I am implementing it over a large oject.\nHowever, I am getting in trobles with accessing to such large object\nfrom the code of the data type functions (for example the in() and out()\nfucntions) when this code is executed in the server side (as it happens\nwhen the new type is registered in PostgreSQL). If I execute such\nfunctions from a stand alone program, accessing to PostgreSQL as a\nclient aplication, and I add the command Qexec(\"begin\") and Qexec(\"end\")\nto these functions (to access to the large object within a transaction,\notherwise I know it would not work), then I have no trouble accesing to\nthe large object ised by the data type. However, If I execute this code\nin the server side (registering the new data type in PostgreSQL), I can\nnot execute the Qexec(\"begin\") call (because both it would be wrong and\nit fails), so as a result I get an error when trying to open the large\nobject.\n\nThe documentation of PostgreSQL says that for data types of more thatn\n8Kb (even less) one needs to use large objects, but it says nothing\nabout how, so I assume the interface is the same than for stand alone\nprograms. But I do not get it working that way.\n\nIs it a bug related with adding in PostgreSQL 6.5 the explicit\nconstraint that the access to a large object must be inside a\ntransaction, and forgetting something about the case of accessing to the\nlarge object from the server side, or perhaps am I missing something?\n\nI have not installed PostgreSQL 6.4.2, so I can not check whether before\nadding the explicit constraint it was possible to access to the large\nobject from the server side with the code I have (This could discard my\nhypotesis of a problem with the new constraint, if with previous\nversions my code does not work either).\n\nAny clue about the subject?\n\nThanks in advance,\n\n\tTony.\n\n-- \nJose Antonio Cotelo Lema. | [email protected]\nPraktische Informatik IV. Fernuniversitaet Hagen.\nD-58084 Hagen. Germany.\n", "msg_date": "Tue, 05 Oct 1999 10:44:03 +0200", "msg_from": "Jose Antonio Cotelo lema <[email protected]>", "msg_from_op": true, "msg_subject": "User types using large objects. Is it really possible?" } ]
[ { "msg_contents": ">\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Tuesday, September 28, 1999 11:54 PM\n> > To: Tom Lane\n> > Cc: Hiroshi Inoue; pgsql-hackers\n> > Subject: Re: [HACKERS] Recovery on incomplete write\n> >\n> >\n> > > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > > I have wondered that md.c handles incomplete block(page)s\n> > > > correctly.\n> > > > Am I mistaken ?\n> > >\n> > > I think you are right, and there may be some other trouble\n> spots in that\n> > > file too. I remember thinking that the code depended heavily on never\n> > > having a partial block at the end of the file.\n> > >\n> > > But is it worth fixing? The only way I can see for the file length\n> > > to become funny is if we run out of disk space part way\n> through writing\n> > > a page, which seems unlikely...\n> > >\n> >\n> > That is how he got started, the TODO item about running out of disk\n> > space causing corrupted databases. I think it needs a fix, if we can.\n> >\n>\n> Maybe it isn't so difficult to fix.\n> I would provide a patch.\n>\n\nHere is a patch.\n\n1) mdnblocks() ignores a partial block at the end of relation files.\n2) mdread() ignores a partial block of block number 0.\n3) mdextend() adjusts its position to a multiple of BLCKSZ\n before writing.\n4) mdextend() truncates extra bytes in case of incomplete write.\n\nIf there's no objection,I would commit this change to the current\ntree.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** storage/smgr/md.c.orig\tThu Sep 30 10:50:58 1999\n--- storage/smgr/md.c\tTue Oct 5 13:30:55 1999\n***************\n*** 233,239 ****\n int\n mdextend(Relation reln, char *buffer)\n {\n! \tlong\t\tpos;\n \tint\t\t\tnblocks;\n \tMdfdVec *v;\n\n--- 233,239 ----\n int\n mdextend(Relation reln, char *buffer)\n {\n! \tlong\t\tpos, nbytes;\n \tint\t\t\tnblocks;\n \tMdfdVec *v;\n\n***************\n*** 243,250 ****\n \tif ((pos = FileSeek(v->mdfd_vfd, 0L, SEEK_END)) < 0)\n \t\treturn SM_FAIL;\n\n! \tif (FileWrite(v->mdfd_vfd, buffer, BLCKSZ) != BLCKSZ)\n \t\treturn SM_FAIL;\n\n \t/* remember that we did a write, so we can sync at xact commit */\n \tv->mdfd_flags |= MDFD_DIRTY;\n--- 243,264 ----\n \tif ((pos = FileSeek(v->mdfd_vfd, 0L, SEEK_END)) < 0)\n \t\treturn SM_FAIL;\n\n! \tif (pos % BLCKSZ != 0) /* the last block is incomplete */\n! \t{\n! \t\tpos = BLCKSZ * (long)(pos / BLCKSZ);\n! \t\tif (FileSeek(v->mdfd_vfd, pos, SEEK_SET) < 0)\n! \t\t\treturn SM_FAIL;\n! \t}\n!\n! \tif ((nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ)) != BLCKSZ)\n! \t{\n! \t\tif (nbytes > 0)\n! \t\t{\n! \t\t\tFileTruncate(v->mdfd_vfd, pos);\n! \t\t\tFileSeek(v->mdfd_vfd, pos, SEEK_SET);\n! \t\t}\n \t\treturn SM_FAIL;\n+ \t}\n\n \t/* remember that we did a write, so we can sync at xact commit */\n \tv->mdfd_flags |= MDFD_DIRTY;\n***************\n*** 432,437 ****\n--- 446,453 ----\n \t{\n \t\tif (nbytes == 0)\n \t\t\tMemSet(buffer, 0, BLCKSZ);\n+ \t\telse if (blocknum == 0 && nbytes > 0 && mdnblocks(reln) == 0)\n+ \t\t\tMemSet(buffer, 0, BLCKSZ);\n \t\telse\n \t\t\tstatus = SM_FAIL;\n \t}\n***************\n*** 1067,1072 ****\n {\n \tlong\t\tlen;\n\n! \tlen = FileSeek(file, 0L, SEEK_END) - 1;\n! \treturn (BlockNumber) ((len < 0) ? 0 : 1 + len / blcksz);\n }\n--- 1083,1088 ----\n {\n \tlong\t\tlen;\n\n! \tlen = FileSeek(file, 0L, SEEK_END);\n! 
\treturn (BlockNumber) (len / blcksz);\n }\n\n\n", "msg_date": "Tue, 5 Oct 1999 18:25:48 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Recovery on incomplete write" } ]
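The behavioral change is easiest to see with concrete numbers. The following standalone illustration (not part of the patch) assumes BLCKSZ is 8192 and a relation file left at 20000 bytes by an interrupted write; it only restates the arithmetic of the old and new mdnblocks()/mdextend() expressions.

/*
 * Illustration only: compare the old and new block-count expressions
 * from mdnblocks(), and the write position mdextend() now rounds back
 * to, for a file of 20000 bytes with 8192-byte blocks.
 */
#include <stdio.h>

int
main(void)
{
	long		blcksz = 8192;
	long		len = 20000;	/* 2 full blocks plus 3616 stray bytes */

	/* old: len = FileSeek(...) - 1; (len < 0) ? 0 : 1 + len / blcksz */
	long		old_nblocks = (len - 1 < 0) ? 0 : 1 + (len - 1) / blcksz;	/* 3 */

	/* new: the trailing partial block is simply ignored */
	long		new_nblocks = len / blcksz;		/* 2 */

	/* mdextend() now rewinds to the last block boundary before writing */
	long		pos = blcksz * (len / blcksz);	/* 16384 */

	printf("old=%ld new=%ld extend-at=%ld\n", old_nblocks, new_nblocks, pos);
	return 0;
}

So the half-written block at the end is overwritten by the next extend instead of being counted as a valid page.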
[ { "msg_contents": "Hi all,\n\nI'm trying to fix a TODO item\n* spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n.\nBut it's more difficult than I have thought.\nIO_ERROR stuff in bufmgr.c has never been executed before\nbecause of spinlock stuck abort.\nAs far as I see,I would have to change it.\nPlease help me.\n\nNow I have a question about IO_IN_PROGRESS handling.\n\nIO_IN_PROGRESS mask and io_in_progress_lock spinlock\nare held while BufferAlloc() reads disk pages into buffer.\n\nBut seems they aren't held while writing buffers to disk,\nWe couldn't detect writing_IO_IN_PROGRESS and simultaneous\nwriting to a same page may occur.\nNo problem ?\n\n\nAnd I have other questions which are irrevalent to the TODO item.\n\n1. Why does BufferReplace() call smgrflush()(not smgrwrite()) ?\n Are there any reasons that we couldn't postpone fsync() until\n commit ?\n\n2. Why does FlushRelationBuffers() call FlushBuffer() ?\n Isn't it a overhead to call fsync() per page ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 5 Oct 1999 18:32:44 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Questions about bufmgr" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> I'm trying to fix a TODO item\n> * spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n\n...\n\n> And I have other questions which are irrevalent to the TODO item.\n> \n> 1. Why does BufferReplace() call smgrflush()(not smgrwrite()) ?\n> Are there any reasons that we couldn't postpone fsync() until\n> commit ?\n> \n> 2. Why does FlushRelationBuffers() call FlushBuffer() ?\n> Isn't it a overhead to call fsync() per page ?\n\nPleeease don't touch bufmgr for the moment - it will be\nchanged due to WAL implementation. Currently I do data \nbase startup/shutdown stuff but will switch to bufmgr\nin 1-2 days.\n\nVadim\n", "msg_date": "Tue, 05 Oct 1999 17:39:50 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Questions about bufmgr" } ]
[ { "msg_contents": "Hi,\n\nI'm planning to implement a new type of scan,scan by TID.\nIt's on TODO * Allow WHERE restriction on ctid.\n\nFirst,I want to define an equal operator between TID.\nSeems OID's 1700-1799 are reserved for numeric type.\nCan I use 1800 as its OID ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 5 Oct 1999 18:54:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to add a new build-in operator" }, { "msg_contents": "> I'm planning to implement a new type of scan,scan by TID.\n> It's on TODO * Allow WHERE restriction on ctid.\n> First,I want to define an equal operator between TID.\n> Seems OID's 1700-1799 are reserved for numeric type.\n> Can I use 1800 as its OID ?\n\nCertainly, or perhaps it would be better to recycle an OID from\nfarther down? We have some open values, and if you only need a few it\nwould work well.\n\nYou probably already know this, but just in case,\n\ncd src/include/catalog\n./unused_oids\n\nwill help you find some OIDs to use.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 05 Oct 1999 13:22:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to add a new build-in operator" }, { "msg_contents": "> I'm planning to implement a new type of scan,scan by TID.\n> It's on TODO * Allow WHERE restriction on ctid.\n> \n> First,I want to define an equal operator between TID.\n> Seems OID's 1700-1799 are reserved for numeric type.\n> Can I use 1800 as its OID ?\n> \n\nYou can use any unused oid for your purposes.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 11:28:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to add a new build-in operator" }, { "msg_contents": "> > I'm planning to implement a new type of scan,scan by TID.\n> > It's on TODO * Allow WHERE restriction on ctid.\n> > First,I want to define an equal operator between TID.\n> \n> Certainly, or perhaps it would be better to recycle an OID from\n> farther down? We have some open values, and if you only need a few it\n> would work well.\n> \n> You probably already know this, but just in case,\n> \n> cd src/include/catalog\n> ./unused_oids\n>\n\nI didn't know it.\nThanks.\nI would use OIDs for '=' operator between TIDs as follows.\n\t387\tfor = (tid, tid)\n\t1292\tfor tideq(tid, tid)\n\n\nUnfortunately,TIDs are changed by UPDATE operations.\nSo we would need some functions in order to get the latest\nTID of a specified tuple such as\n\t currtid(relationid/name, tid) which returns tid.\nI would provide functions for both relid and relname and\nuse 1293-1294 for OIDs of these functions.\n\nComments ?\nIf there's no objection,I would commit them to the current tree.\n\nMoreover,we would need to know TIDs of inserted tuples.\nWhat is a reasonable way to do so ?\n1. Add TID to return_info of INSERT commands.\n2. Provide a function to get TID of the last inserted tuple\n of the session(backend).\n... 
\n\nAny ideas ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 7 Oct 1999 18:56:52 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Scan by TID (was RE: [HACKERS] How to add a new build-in operator)" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > I'm planning to implement a new type of scan,scan by TID.\n> > > It's on TODO * Allow WHERE restriction on ctid.\n> > > First,I want to define an equal operator between TID.\n> > \n> > Certainly, or perhaps it would be better to recycle an OID from\n> > farther down? We have some open values, and if you only need a few it\n> > would work well.\n> > \n> > You probably already know this, but just in case,\n> > \n> > cd src/include/catalog\n> > ./unused_oids\n> >\n> \n> I didn't know it.\n> Thanks.\n\nOops, no mention of that in the developers FAQ. Let me do that now.\n\n\n> I would use OIDs for '=' operator between TIDs as follows.\n> \t387\tfor = (tid, tid)\n> \t1292\tfor tideq(tid, tid)\n> \n> \n> Unfortunately,TIDs are changed by UPDATE operations.\n> So we would need some functions in order to get the latest\n> TID of a specified tuple such as\n> \t currtid(relationid/name, tid) which returns tid.\n> I would provide functions for both relid and relname and\n> use 1293-1294 for OIDs of these functions.\n> \n> Comments ?\n> If there's no objection,I would commit them to the current tree.\n\nSounds good.\n\n> \n> Moreover,we would need to know TIDs of inserted tuples.\n> What is a reasonable way to do so ?\n> 1. Add TID to return_info of INSERT commands.\n> 2. Provide a function to get TID of the last inserted tuple\n> of the session(backend).\n\nEither sounds good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:35:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "> \n> > > > I'm planning to implement a new type of scan,scan by TID.\n> > > > It's on TODO * Allow WHERE restriction on ctid.\n> > > > First,I want to define an equal operator between TID.\n> > >\n\n[snip] \n\n> \n> > I would use OIDs for '=' operator between TIDs as follows.\n> > \t387\tfor = (tid, tid)\n> > \t1292\tfor tideq(tid, tid)\n> > \n> > \n> > Unfortunately,TIDs are changed by UPDATE operations.\n> > So we would need some functions in order to get the latest\n> > TID of a specified tuple such as\n> > \t currtid(relationid/name, tid) which returns tid.\n> > I would provide functions for both relid and relname and\n> > use 1293-1294 for OIDs of these functions.\n> > \n> > Comments ?\n> > If there's no objection,I would commit them to the current tree.\n> \n> Sounds good.\n>\n\nI have committed them to the current tree.\nNeeds initdb.\n\nNow we could enjoy WHERE restriction on ctid as follows.\nUnfortunately,the scan is still sequential. 
\n\n=> create table t1 (dt text);\nCREATE\n=> insert into t1 values ('data inserted');\nINSERT 45833 1\n=> select ctid,* from t1;\nctid |dt\n-----+----------\n(0,1)|data inserted\n(1 row)\n\n=> select * from t1 where ctid='(0,1)';\ndt\n----------\ndata inserted\n(1 row)\n\n=> update t1 set dt='data updated';\nUPDATE 1\n=> select * from t1 where ctid='(0,1)';\ndt\n--\n(0 rows)\n\n=> select ctid,* from t1 where ctid=currtid2('t1', '(0,1)');\nctid |dt\n-----+------------\n(0,2)|data updated\n(1 row) \n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 11 Oct 1999 19:12:06 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "> I have committed them to the current tree.\n> Needs initdb.\n\nTODO list updated. Seems the tid could be accessed directly, by\nchecking the page list, and if it is valid, just going to the tid. I\nassume you are working on that issue, or do you need assistance?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 09:45:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, October 11, 1999 10:45 PM\n> To: Hiroshi Inoue\n> Cc: Thomas Lockhart; pgsql-hackers\n> Subject: Re: Scan by TID (was RE: [HACKERS] How to add a new \n> build-in operator)\n> \n> \n> > I have committed them to the current tree.\n> > Needs initdb.\n> \n> TODO list updated. Seems the tid could be accessed directly, by\n> checking the page list, and if it is valid, just going to the tid. I\n> assume you are working on that issue, or do you need assistance?\n>\n\nYes,I have done a part of my story.\nI would add new type of path and scan by which we are able to access\ntids directly.\n\nI don't understand planner/executor stage well.\nSo I'm happy if someone could check my story.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 12 Oct 1999 00:30:54 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Yes,I have done a part of my story.\n> I would add new type of path and scan by which we are able to access\n> tids directly.\n\nYes, new path type, new plan type, probably a new access method if\nyou want to keep things clean in the executor, cost-estimation routines\nin the planner, etc. etc.\n\nLooks like a lot of work, and a lot of added code bulk that will\nhave to be maintained. I haven't figured out why you think it's\nworth it... tids are so transient that I don't see much need for\nfinding tuples by them...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Oct 1999 15:07:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n\toperator)" }, { "msg_contents": "> > TODO list updated. Seems the tid could be accessed directly, by\n> > checking the page list, and if it is valid, just going to the tid. 
I\n> > assume you are working on that issue, or do you need assistance?\n> >\n> \n> Yes,I have done a part of my story.\n> I would add new type of path and scan by which we are able to access\n> tids directly.\n> \n> I don't understand planner/executor stage well.\n> So I'm happy if someone could check my story.\n\nMy guess is that you are going to have to hard-code something into the\nbackend to handle non-scan lookup of this type specially.\n\nNormally, either there is an index lookup, or a sequential scan. In\nyour case, you \"know\" the actual location of the row, or at least a\nrequest for a possible valid location.\n\nYou could create a fake index that really doesn't exist, but returns a\ntid that exactly matches the requested tid, or you could have the code\ncheck for a specific TID type, and heap_fetch the desired row directly,\nrather than performing a sequential scan. See the developers FAQ on how\nto do a heap_fetch with a tid.\n\n\tYou can also use <I>heap_fetch()</I> to fetch rows by block\n\tnumber/offset.\n\nThe block number/offset is the tid. Of course, you have to make sure\nthe tid is valid.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 15:10:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n\toperator)]" }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Yes,I have done a part of my story.\n> > I would add new type of path and scan by which we are able to access\n> > tids directly.\n> \n> Yes, new path type, new plan type, probably a new access method if\n> you want to keep things clean in the executor, cost-estimation routines\n> in the planner, etc. etc.\n> \n> Looks like a lot of work, and a lot of added code bulk that will\n> have to be maintained. I haven't figured out why you think it's\n> worth it... tids are so transient that I don't see much need for\n> finding tuples by them...\n\nThat's why I just suggested a more short-circuited option of snatching\ntid oids from expressions, and doing a heap_fetch directly at that point\nto avoid the index scan. Seems it could be done in just one file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 15:14:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Yes,I have done a part of my story.\n> > I would add new type of path and scan by which we are able to access\n> > tids directly.\n> \n> Yes, new path type, new plan type, probably a new access method if\n> you want to keep things clean in the executor, cost-estimation routines\n> in the planner, etc. etc.\n> \n> Looks like a lot of work, and a lot of added code bulk that will\n> have to be maintained. I haven't figured out why you think it's\n> worth it... 
tids are so transient that I don't see much need for\n> finding tuples by them...\n\nIngres has access by tid, and it does come in handy for quick-and-dirty\nuses where you just want to snag a bunch of rows and operate on them\nwithout too much fuss. I believe if we had it, some things internally\nmay prove to be easier with them accessible in a query.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 15:15:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Tuesday, October 12, 1999 4:08 AM\n> To: Hiroshi Inoue\n> Cc: Bruce Momjian; pgsql-hackers\n> Subject: Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n> operator) \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Yes,I have done a part of my story.\n> > I would add new type of path and scan by which we are able to access\n> > tids directly.\n> \n> Yes, new path type, new plan type, probably a new access method if\n> you want to keep things clean in the executor, cost-estimation routines\n> in the planner, etc. etc.\n> \n> Looks like a lot of work, and a lot of added code bulk that will\n> have to be maintained.\n\nYou are right and It's the reason I have announced this issue\nmany times. \n\n> I haven't figured out why you think it's\n> worth it... tids are so transient that I don't see much need for\n> finding tuples by them...\n>\n\nAs far as I know,many DBMSs have means to access tuples\ndirectly.\nI have been wondering why PostgreSQL doesn't support such\nmeans.\n\nPostgreSQL isn't perfect and this kind of means are necessary\nto make up for the deficiency,I think. \nFor example how do we implement advanced features of ODBC\n/JDBC drivers etc without this kind of support ? \n\nOIDs are preferable for such means because they are\ninvariant. But unfortunately OIDs have no guarantee that\nhave indexes and even with indexes it isn't the fastest way.\n\nTIDs are transient,so I have provided built-in functions \ncurrtid() to follow update chain of links.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Tue, 12 Oct 1999 09:23:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Scan by TID (was RE: [HACKERS] How to add a new build-in\n\toperator)" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, October 12, 1999 4:14 AM\n> To: Tom Lane\n> Cc: Hiroshi Inoue; pgsql-hackers\n> Subject: Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n> operator)\n> \n> \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Yes,I have done a part of my story.\n> > > I would add new type of path and scan by which we are able to access\n> > > tids directly.\n> > \n> > Yes, new path type, new plan type, probably a new access method if\n> > you want to keep things clean in the executor, cost-estimation routines\n> > in the planner, etc. etc.\n> > \n> > Looks like a lot of work, and a lot of added code bulk that will\n> > have to be maintained. I haven't figured out why you think it's\n> > worth it... 
tids are so transient that I don't see much need for\n> > finding tuples by them...\n> \n> That's why I just suggested a more short-circuited option of snatching\n> tid oids from expressions, and doing a heap_fetch directly at that point\n> to avoid the index scan. Seems it could be done in just one file.\n>\n\nI have thought the way as Tom says and I have a prospect to do it.\nBut it would take a lot of work.\n\nWhere to snatch and return to(or exit from) planner/executor \nin your story ?\n\nHiroshi Inoue\[email protected].\n\n\n", "msg_date": "Tue, 12 Oct 1999 11:17:50 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "> > > Looks like a lot of work, and a lot of added code bulk that will\n> > > have to be maintained. I haven't figured out why you think it's\n> > > worth it... tids are so transient that I don't see much need for\n> > > finding tuples by them...\n> > \n> > That's why I just suggested a more short-circuited option of snatching\n> > tid oids from expressions, and doing a heap_fetch directly at that point\n> > to avoid the index scan. Seems it could be done in just one file.\n> >\n> \n> I have thought the way as Tom says and I have a prospect to do it.\n> But it would take a lot of work.\n> \n> Where to snatch and return to(or exit from) planner/executor \n> in your story ?\n\nBasically, if I remember, in the executor, access to a table either\nopens the table for sequential scan, does an index scan, or has the\nvalue it needs already in a result of a previous join result.\n\nIf we put something in the executor so when a sequential/index scan is\nrequested on a table that has a restriction on tid, you could just do a\nheap_fetch and return the single row, rather than putting the query\nthrough the whole scan process for every row checking to see if it\nmatches the WHERE restriction.\n\nSeems like a few lines in the executor could do the entire job of\nfetching by tid by short-circuiting the sequential/index scan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 23:22:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> If we put something in the executor so when a sequential/index scan is\n> requested on a table that has a restriction on tid, you could just do a\n> heap_fetch and return the single row, rather than putting the query\n> through the whole scan process for every row checking to see if it\n> matches the WHERE restriction.\n\n> Seems like a few lines in the executor could do the entire job of\n> fetching by tid by short-circuiting the sequential/index scan.\n\nIf I understand what you're proposing here, it would be a horrible\nmangling of the system structure and doubtless a fruitful source\nof bugs. 
I don't think we should be taking shortcuts with this issue.\nIf we think fast access by TID is worth supporting at all, we should\nexpend the work to do it properly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 11:02:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n\toperator)" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > If we put something in the executor so when a sequential/index scan is\n> > requested on a table that has a restriction on tid, you could just do a\n> > heap_fetch and return the single row, rather than putting the query\n> > through the whole scan process for every row checking to see if it\n> > matches the WHERE restriction.\n> \n> > Seems like a few lines in the executor could do the entire job of\n> > fetching by tid by short-circuiting the sequential/index scan.\n> \n> If I understand what you're proposing here, it would be a horrible\n> mangling of the system structure and doubtless a fruitful source\n> of bugs. I don't think we should be taking shortcuts with this issue.\n> If we think fast access by TID is worth supporting at all, we should\n> expend the work to do it properly.\n\nBut to do that whole thing properly, you are adding tons of complexity\nin access methods and stuff just to support one type that by definition\nis very internal to the backend.\n\nJust my ideas. I understand you concern. I just thought a few\nwell-placed lines could do the trick without adding tons of other stuff.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 11:10:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "Interesting. I see what you mean. We have a pyrotechnic API already\ninstalled.\n\n\n> Bruce,\n> \tI think that an index interface would be simpler than you think. \n> The index does not need any disk storage which takes out virtually all\n> the complexity in implementation. All that you really need to implement\n> is the scan interface, and the only state that the scan needs is a\n> single flag that indicates when getnext has already been called once.\n> All that getnext need do is return the ctid, and flip the flag so that\n> it knows to return null on the next call. You also need to ensure that\n> the access method functions used by the optimizer return appropriate\n> values to ensure that the cost of an index search is always zero. I\n> have some suitable functions for that.\n> \n> \n> With all due respect to people who I am sure know a lot more about this\n> than I do, it seems to me that extensive use of TIDs in user code might\n> place an unwelcome restraint on the internal database design. If you\n> follow the arguments of the reiserfs people, the whole idea of a\n> buffered cache with fix size blocks is a necessary hack to cope with a\n> less than optimal underlying filesystem. In the ideal world that\n> reiserfs promises (:-)) disk access efficiency would be independent of\n> file-size, and it would be feasible to construct the buffered cache from\n> raw tuples of variable size. The files on disk would be identified by\n> OID. 
reiserfs uses a B-tree varient to cope with very large name\n> spaces.\n> \n> Similar considerations would seem to apply if the storage layer of the\n> database is separated from the rest of the backend by a high-speed\n> qnetwork interface on something like a hard-disk farm. ( See for\n> example some of the Mariposa work ).\n> \n> Until things like that actually happen (Version 10.* perhaps) I can see\n> that TIDs are a useful addition, but you might want to fasten them in\n> with a pyrotechnic interface so that you can blow them away if need be.\n> \n> I have a URL for the reiserfs stuff at home, if anyone is interested\n> email me and I will dig it up and post it.\n> \n> Bernie Frankpitt\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 12:42:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "Bruce,\n\tI think that an index interface would be simpler than you think. \nThe index does not need any disk storage which takes out virtually all\nthe complexity in implementation. All that you really need to implement\nis the scan interface, and the only state that the scan needs is a\nsingle flag that indicates when getnext has already been called once.\nAll that getnext need do is return the ctid, and flip the flag so that\nit knows to return null on the next call. You also need to ensure that\nthe access method functions used by the optimizer return appropriate\nvalues to ensure that the cost of an index search is always zero. I\nhave some suitable functions for that.\n\n\nWith all due respect to people who I am sure know a lot more about this\nthan I do, it seems to me that extensive use of TIDs in user code might\nplace an unwelcome restraint on the internal database design. If you\nfollow the arguments of the reiserfs people, the whole idea of a\nbuffered cache with fix size blocks is a necessary hack to cope with a\nless than optimal underlying filesystem. In the ideal world that\nreiserfs promises (:-)) disk access efficiency would be independent of\nfile-size, and it would be feasible to construct the buffered cache from\nraw tuples of variable size. The files on disk would be identified by\nOID. reiserfs uses a B-tree varient to cope with very large name\nspaces.\n\nSimilar considerations would seem to apply if the storage layer of the\ndatabase is separated from the rest of the backend by a high-speed\nqnetwork interface on something like a hard-disk farm. 
( See for\nexample some of the Mariposa work ).\n\nUntil things like that actually happen (Version 10.* perhaps) I can see\nthat TIDs are a useful addition, but you might want to fasten them in\nwith a pyrotechnic interface so that you can blow them away if need be.\n\nI have a URL for the reiserfs stuff at home, if anyone is interested\nemail me and I will dig it up and post it.\n\nBernie Frankpitt\n", "msg_date": "Tue, 12 Oct 1999 17:04:13 +0000", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "Bernard Frankpitt <[email protected]> writes:\n> With all due respect to people who I am sure know a lot more about this\n> than I do, it seems to me that extensive use of TIDs in user code might\n> place an unwelcome restraint on the internal database design.\n\nYes, we'd certainly have to label it as an implementation-dependent\nfeature that might change or vanish in the future. But as long as\npeople understand that they are tying themselves to a particular\nimplementation, I can see the usefulness of making this feature\naccessible. I'm still dubious that it's actually worth the work ...\nbut as long as I'm not the one doing the work, I can hardly object ;-).\n\nI just want to be sure that we don't create a maintenance headache\nfor ourselves by corrupting the system structure. We've spent a\nlot of time cleaning up after past shortcuts, and still have many\nmore to deal with; introducing new ones doesn't seem good.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 14:32:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n\toperator)" }, { "msg_contents": "> Bernard Frankpitt <[email protected]> writes:\n> > With all due respect to people who I am sure know a lot more about this\n> > than I do, it seems to me that extensive use of TIDs in user code might\n> > place an unwelcome restraint on the internal database design.\n> \n> Yes, we'd certainly have to label it as an implementation-dependent\n> feature that might change or vanish in the future. But as long as\n> people understand that they are tying themselves to a particular\n> implementation, I can see the usefulness of making this feature\n> accessible. I'm still dubious that it's actually worth the work ...\n> but as long as I'm not the one doing the work, I can hardly object ;-).\n> \n> I just want to be sure that we don't create a maintenance headache\n> for ourselves by corrupting the system structure. We've spent a\n> lot of time cleaning up after past shortcuts, and still have many\n> more to deal with; introducing new ones doesn't seem good.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 14:40:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" }, { "msg_contents": "> -----Original Message-----\n> \n> > Bernard Frankpitt <[email protected]> writes:\n> > > With all due respect to people who I am sure know a lot more \n> about this\n> > > than I do, it seems to me that extensive use of TIDs in user \n> code might\n> > > place an unwelcome restraint on the internal database design.\n> > \n> > Yes, we'd certainly have to label it as an implementation-dependent\n> > feature that might change or vanish in the future. But as long as\n> > people understand that they are tying themselves to a particular\n> > implementation, I can see the usefulness of making this feature\n> > accessible. I'm still dubious that it's actually worth the work ...\n> > but as long as I'm not the one doing the work, I can hardly object ;-).\n> > \n> > I just want to be sure that we don't create a maintenance headache\n> > for ourselves by corrupting the system structure. We've spent a\n> > lot of time cleaning up after past shortcuts, and still have many\n> > more to deal with; introducing new ones doesn't seem good.\n> \n> Agreed.\n>\n\nI think it isn't so difficult to implement a new type of scan\non trial. But I'm not sure my story is right and I'm afraid\nto invite a maintenance headache like intersexcept ....\nMay I proceed the work ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 13 Oct 1999 19:14:47 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Scan by TID (was RE: [HACKERS] How to add a new build-in\n operator)" } ]
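Both Bruce's short-circuit idea and the developers-FAQ note he cites come down to building an ItemPointer and handing it to heap_fetch(). A hedged sketch follows; the heap_fetch() argument list used here (relation, snapshot, tuple, userbuf) is assumed from the 6.5-era tree and may differ in other releases, and fetch_by_ctid() itself is made up for the example.

/*
 * Hypothetical helper, not code from the thread: fetch one tuple
 * directly by its TID.  The heap_fetch() signature below is assumed
 * from the 6.5-era sources; treat it as an illustration of the idea
 * rather than a drop-in implementation.
 */
#include "postgres.h"
#include "access/heapam.h"
#include "storage/bufmgr.h"
#include "utils/tqual.h"

static HeapTuple
fetch_by_ctid(Relation rel, BlockNumber blkno, OffsetNumber offnum)
{
	HeapTupleData tuple;
	Buffer		buffer;
	HeapTuple	result = NULL;

	/* point t_self at the requested (block, offset) pair */
	ItemPointerSet(&tuple.t_self, blkno, offnum);

	heap_fetch(rel, SnapshotNow, &tuple, &buffer);

	if (tuple.t_data != NULL)
	{
		/* copy it out before letting go of the buffer pin */
		result = heap_copytuple(&tuple);
		ReleaseBuffer(buffer);
	}

	return result;				/* NULL if the TID points at nothing valid */
}

A real TID scan node would wrap essentially this call, plus cost estimation so the planner knows such a fetch is nearly free.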
[ { "msg_contents": "\nAnyone want to comment on this one? Just tested with v6.5.0 and it still\nexists there...\n\nvhosts=> create table test ( a int, b char );\nCREATE\nvhosts=> insert into test values ( 1, 'a' );\nINSERT 149258 1\nvhosts=> select a from test group by a having a > 0;\nERROR: SELECT/HAVING requires aggregates to be valid\n\n\n\n\nOn Tue, 5 Oct 1999, Luuk de Boer wrote:\n\n> On 4 Oct 99, at 21:18, Bruce Momjian wrote:\n> \n> > > <cut>\n> > > \n> > > > However, this is an old recollection, and I see on the current page that\n> > > > this is no longer the case. The current page looks much better, though\n> > > > somehow you show PostgreSQL doesn't have HAVING or support -- comments. \n> > > > However, I realize such a test is a major project, and you are not going\n> > > > to get everything right.\n> > > \n> > > ps. I removed all the mailnglists to discuss some little things ...\n> > > \n> > > hmmm do you mean having is now supported in postgresql. The \n> > > latest run of crash-me which I watched (last week I believe with \n> > > version 6.5.1) I believe I saw the message HAVING still not \n> > > supported in postgresql. Is that correct or did I do something wrong \n> > > with compiling postgres (just followed the normal procedure as \n> > > stated in the INSTALL file.\n> > \n> > We have had HAVING since 6.3.* as I remember.\n> \n> I looked into it this morning and found the following thing why crash-\n> me is saying that having is not supported.\n> We have a table (crash_me) with two columns (a (int)and b (char)) \n> which are filled with one entry (1 and 'a').\n> The following thing is comming back to me ...\n> query3: select a from crash_me group by a having \n> a > 0 \n> \n> Got error from query: 'select a from crash_me \n> group by a having a > 0'\n> ERROR: SELECT/HAVING requires aggregates to be \n> valid\n> \n> Checking connection\n> Having: no\n> Having with group function: query1: select a \n> from crash_me group by a having count(*) = 1 \n> ...(53)\n> yes\n> Order by alias: yes\n> Having on alias: query1: select a as ab from \n> crash_me group by a having ab > 0 \n> ...(53)\n> \n> Got error from query: 'select a as ab from \n> crash_me group by a having ab > 0'\n> ERROR: attribute 'ab' not found\n> \n> Checking connection\n> no \n> \n> We had an if structure around testing having with group function if \n> having was supported and that if structure I removed.\n> Could you explain to me what's wrong to the above queries?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 Oct 1999 08:52:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql comparison" }, { "msg_contents": "> Anyone want to comment on this one? Just tested with v6.5.0 and it still\n> exists there...\n> vhosts=> create table test ( a int, b char );\n> CREATE\n> vhosts=> insert into test values ( 1, 'a' );\n> INSERT 149258 1\n> vhosts=> select a from test group by a having a > 0;\n> ERROR: SELECT/HAVING requires aggregates to be valid\n\nOh, don't get me started again on crashme :(\n\nWhat is the purpose of the previous query? It seems to be equivalent\nto\n\n select distinct a where a > 0;\n\nWe do support the HAVING clause, but apparently disallow some\ndegenerate cases. 
If MySQL weren't just a toy db, perhaps they would\nstart putting real queries into their garbage crashme. There, I feel\nbetter now ;)\n\npostgres=> select b, avg(a) from test group by b having avg(a) > 0;\nb|avg\n-+---\na| 1\n(1 row)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 05 Oct 1999 13:59:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Anyone want to comment on this one? Just tested with v6.5.0 and it still\n> exists there...\n\n> vhosts=> create table test ( a int, b char );\n> CREATE\n> vhosts=> insert into test values ( 1, 'a' );\n> INSERT 149258 1\n> vhosts=> select a from test group by a having a > 0;\n> ERROR: SELECT/HAVING requires aggregates to be valid\n\nThat's not a bug, it means what it says: HAVING clauses should contain\naggregate functions. Otherwise they might as well be WHERE clauses.\n(In this example, flushing rows with negative a before the group step,\nrather than after, is obviously a win, not least because it would\nallow the use of an index on a.)\n\nHowever, I can't see anything in the SQL92 spec that requires you to\nuse HAVING intelligently, so maybe this error should be downgraded to\na notice? \"HAVING with no aggregates would be faster as a WHERE\"\n(but we'll do it anyway to satisfy pedants...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Oct 1999 10:46:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> That's not a bug, it means what it says: HAVING clauses should contain\n> aggregate functions. Otherwise they might as well be WHERE clauses.\n> (In this example, flushing rows with negative a before the group step,\n> rather than after, is obviously a win, not least because it would\n> allow the use of an index on a.)\n> \n> However, I can't see anything in the SQL92 spec that requires you to\n> use HAVING intelligently, so maybe this error should be downgraded to\n> a notice? \"HAVING with no aggregates would be faster as a WHERE\"\n> (but we'll do it anyway to satisfy pedants...)\n\nIf we allow them, then people can do things like:\n\n\tHAVING max(a) > b\n\nwhich seems strange. Would we handle that?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 11:50:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On Oct 5, Tom Lane mentioned:\n\n> However, I can't see anything in the SQL92 spec that requires you to\n> use HAVING intelligently, so maybe this error should be downgraded to\n> a notice? \"HAVING with no aggregates would be faster as a WHERE\"\n> (but we'll do it anyway to satisfy pedants...)\n\nOh please God, NO! The next thing they want is SELECT FROM HAVING to\nreplace WHERE. That is merely the reverse case of what you so humbly\nsuggested. 
HAVING doesn't stand after GROUP BY for no reason, AFAIC.\n\nOf course personally, I would love to kill SQL altogether and invent\nsomething better, but not by the end of this day . . .\n\nPeter\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Tue, 5 Oct 1999 22:24:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> However, I can't see anything in the SQL92 spec that requires you to\n>> use HAVING intelligently, so maybe this error should be downgraded to\n>> a notice? \"HAVING with no aggregates would be faster as a WHERE\"\n>> (but we'll do it anyway to satisfy pedants...)\n\n> If we allow them, then people can do things like:\n> \tHAVING max(a) > b\n\nEr ... what's wrong with that? Assuming b is a group by column,\nof course...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Oct 1999 18:05:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> However, I can't see anything in the SQL92 spec that requires you to\n> >> use HAVING intelligently, so maybe this error should be downgraded to\n> >> a notice? \"HAVING with no aggregates would be faster as a WHERE\"\n> >> (but we'll do it anyway to satisfy pedants...)\n> \n> > If we allow them, then people can do things like:\n> > \tHAVING max(a) > b\n> \n> Er ... what's wrong with that? Assuming b is a group by column,\n> of course...\n\nBut can we compare aggs and non-aggs? I see now that our code is fine:\n\n\tselect relowner \n\tfrom pg_class \n\tgroup by relowner \n\thaving max(relowner) = relowner;\n\nThis returns the proper result, namely the relowner _having_ the max\nid.\n\nHaving is using an aggregate and non-aggregate, so when I said we only\nsupport aggregates in the HAVING clause, I was wrong. Looks fine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 18:16:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> But can we compare aggs and non-aggs? I see now that our code is fine:\n\nNo, you're barking up the wrong tree. The issue is whether a HAVING\nclause that doesn't contain *any* aggregates is legal/reasonable.\nIt can contain non-aggregated references to GROUP BY columns in\nany case. But without aggregates, there's no semantic difference\nfrom putting the same condition in WHERE.\n\nI believe that planner.c currently has an implementation assumption\nthat HAVING must have an aggregate (because it hangs the HAVING clause\nonto the Agg plan node as a qual clause --- if no Agg node, no place to\nperform the HAVING test). 
This could be fixed if we felt it was worth\ndoing.\n\nI can't get excited about changing this from the standpoint of\nfunctionality, because AFAICS there is no added functionality.\nBut if we're looking bad on a recognized benchmark maybe we\nshould do something about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Oct 1999 18:29:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > But can we compare aggs and non-aggs? I see now that our code is fine:\n> \n> No, you're barking up the wrong tree. The issue is whether a HAVING\n> clause that doesn't contain *any* aggregates is legal/reasonable.\n> It can contain non-aggregated references to GROUP BY columns in\n> any case. But without aggregates, there's no semantic difference\n> from putting the same condition in WHERE.\n> \n> I believe that planner.c currently has an implementation assumption\n> that HAVING must have an aggregate (because it hangs the HAVING clause\n> onto the Agg plan node as a qual clause --- if no Agg node, no place to\n> perform the HAVING test). This could be fixed if we felt it was worth\n> doing.\n> \n> I can't get excited about changing this from the standpoint of\n> functionality, because AFAICS there is no added functionality.\n> But if we're looking bad on a recognized benchmark maybe we\n> should do something about it.\n\nAgreed. I think there are too many people who get HAVING confused to\nallow it. Better that we should prevent it and make them do it right.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 18:34:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "\nLuuk...\n\n\tI brought this up with the -hackers list, and, in generally, it\nappears to be felt that the query, which you use in the crashme test to\ntest HAVING, isn't necessarily valid ...\n\n\tBasically:\n\n\tselect a from test group by a having a > 0;\n\n\tcould be more efficiently written as:\n\n\tselect a from test where a > 0 group by a;\n\n\tI'm personally curious, though...how does Oracle/Informix and\nother RDBMS systems handle this? Do they let it pass, or do they give an\nerror also?\n\n\tI think the general concensus, at this time, is to change the\nERROR to a NOTICE, with a comment that using a WHERE would be more\nefficient then the HAVING...and, unless someone can come up with an\ninstance that would make sense (ie. why you'd do it with HAVING vs WHERE),\nI'm in agreement with them...\n\n\tSince we obviously do support HAVING, and, I believe, follow the\nSQL92 spec on it, is there any way of getting the crashme test fixed to\nnot use the above query as a basis for whether an RDBMS supports HAVING or\nnot?\n\nthanks...\n\n On Tue, 5 Oct 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Anyone want to comment on this one? 
Just tested with v6.5.0 and it still\n> > exists there...\n> \n> > vhosts=> create table test ( a int, b char );\n> > CREATE\n> > vhosts=> insert into test values ( 1, 'a' );\n> > INSERT 149258 1\n> > vhosts=> select a from test group by a having a > 0;\n> > ERROR: SELECT/HAVING requires aggregates to be valid\n> \n> That's not a bug, it means what it says: HAVING clauses should contain\n> aggregate functions. Otherwise they might as well be WHERE clauses.\n> (In this example, flushing rows with negative a before the group step,\n> rather than after, is obviously a win, not least because it would\n> allow the use of an index on a.)\n> \n> However, I can't see anything in the SQL92 spec that requires you to\n> use HAVING intelligently, so maybe this error should be downgraded to\n> a notice? \"HAVING with no aggregates would be faster as a WHERE\"\n> (but we'll do it anyway to satisfy pedants...)\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Tue, 5 Oct 1999 22:23:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Thanks bruce and hermit for all the comments,\n> I looked into the book \"The SQL Standard\" fourth edition of Date \n> and in the appendixes page 439 they have an example which they \n> discuss. The example is: select count(*) as x from mt having 0 = 0; \n> with an empty table they say logically correct it should return one \n> column and no rows but sql gives a table of one column and one \n> row. So I think it's true that HAVING has to have an aggregation \n> but it will also be possible use a non-aggregation.\n> \n> If I look in our crash-me output page (this is a handy thing for this \n> kind of questions) and look for all the other db's to see what they \n> do I can say the following thing:\n> Informix,Access,Adabas,db2,empress,ms-sql,oracle,solid and \n> sybase are all supporting non-aggregation in having clause.\n> At this moment everyone except postgres is supporting it.\n> \n> The change which I can made is to remove the if structure around \n> the having tests so that having with group functions will also be \n> tested in the crash-me test.\n> \n> I will try the patch of bruce for the comment part. It shouldn't be the \n> way that the perl module is stripping the comments of the querie \n> but it is possible and if it is possible it will be a bug in the DBD \n> postgresql perl module.\n\nMaybe we should support the HAVING without aggregates. What do others\nthink?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 01:53:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On 5 Oct 99, at 22:23, The Hermit Hacker wrote:\n\n> \n> Luuk...\n> \n> \tI brought this up with the -hackers list, and, in generally, it\n> appears to be felt that the query, which you use in the crashme test to\n> test HAVING, isn't necessarily valid ...\n> \n> \tBasically:\n> \n> \tselect a from test group by a having a > 0;\n> \n> \tcould be more efficiently written as:\n> \n> \tselect a from test where a > 0 group by a;\n> \n> \tI'm personally curious, though...how does Oracle/Informix and\n> other RDBMS systems handle this? Do they let it pass, or do they give an\n> error also?\n> \n> \tI think the general concensus, at this time, is to change the\n> ERROR to a NOTICE, with a comment that using a WHERE would be more\n> efficient then the HAVING...and, unless someone can come up with an\n> instance that would make sense (ie. why you'd do it with HAVING vs WHERE),\n> I'm in agreement with them...\n> \n> \tSince we obviously do support HAVING, and, I believe, follow the\n> SQL92 spec on it, is there any way of getting the crashme test fixed to\n> not use the above query as a basis for whether an RDBMS supports HAVING or\n> not?\n\nThanks bruce and hermit for all the comments,\nI looked into the book \"The SQL Standard\" fourth edition of Date \nand in the appendixes page 439 they have an example which they \ndiscuss. The example is: select count(*) as x from mt having 0 = 0; \nwith an empty table they say logically correct it should return one \ncolumn and no rows but sql gives a table of one column and one \nrow. So I think it's true that HAVING has to have an aggregation \nbut it will also be possible use a non-aggregation.\n\nIf I look in our crash-me output page (this is a handy thing for this \nkind of questions) and look for all the other db's to see what they \ndo I can say the following thing:\nInformix,Access,Adabas,db2,empress,ms-sql,oracle,solid and \nsybase are all supporting non-aggregation in having clause.\nAt this moment everyone except postgres is supporting it.\n\nThe change which I can made is to remove the if structure around \nthe having tests so that having with group functions will also be \ntested in the crash-me test.\n\nI will try the patch of bruce for the comment part. It shouldn't be the \nway that the perl module is stripping the comments of the querie \nbut it is possible and if it is possible it will be a bug in the DBD \npostgresql perl module.\n\nPS. the benchmark results of postgres 6.5.2 are also added to the \nbenchmark result page.\n\nGreetz...\n\nLuuk\n", "msg_date": "Wed, 6 Oct 1999 07:51:27 +1.00", "msg_from": "\"Luuk de Boer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On Wed, 6 Oct 1999, Bruce Momjian wrote:\n\n> > Thanks bruce and hermit for all the comments,\n> > I looked into the book \"The SQL Standard\" fourth edition of Date \n> > and in the appendixes page 439 they have an example which they \n> > discuss. The example is: select count(*) as x from mt having 0 = 0; \n> > with an empty table they say logically correct it should return one \n> > column and no rows but sql gives a table of one column and one \n> > row. 
So I think it's true that HAVING has to have an aggregation \n> > but it will also be possible use a non-aggregation.\n> > \n> > If I look in our crash-me output page (this is a handy thing for this \n> > kind of questions) and look for all the other db's to see what they \n> > do I can say the following thing:\n> > Informix,Access,Adabas,db2,empress,ms-sql,oracle,solid and \n> > sybase are all supporting non-aggregation in having clause.\n> > At this moment everyone except postgres is supporting it.\n> > \n> > The change which I can made is to remove the if structure around \n> > the having tests so that having with group functions will also be \n> > tested in the crash-me test.\n> > \n> > I will try the patch of bruce for the comment part. It shouldn't be the \n> > way that the perl module is stripping the comments of the querie \n> > but it is possible and if it is possible it will be a bug in the DBD \n> > postgresql perl module.\n> \n> Maybe we should support the HAVING without aggregates. What do others\n> think?\n\nIf we are the only one that doesn't, it just makes it harder for those\nmoving from Oracle/Informix/etc if they happen to be using such queries...\n\nHow hard would it be to implement?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 6 Oct 1999 10:43:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "\nCan someone remind me where these benchmark pages are again? :)\n\n\nOn Wed, 6 Oct 1999, Luuk de Boer wrote:\n\n> On 5 Oct 99, at 22:23, The Hermit Hacker wrote:\n> \n> > \n> > Luuk...\n> > \n> > \tI brought this up with the -hackers list, and, in generally, it\n> > appears to be felt that the query, which you use in the crashme test to\n> > test HAVING, isn't necessarily valid ...\n> > \n> > \tBasically:\n> > \n> > \tselect a from test group by a having a > 0;\n> > \n> > \tcould be more efficiently written as:\n> > \n> > \tselect a from test where a > 0 group by a;\n> > \n> > \tI'm personally curious, though...how does Oracle/Informix and\n> > other RDBMS systems handle this? Do they let it pass, or do they give an\n> > error also?\n> > \n> > \tI think the general concensus, at this time, is to change the\n> > ERROR to a NOTICE, with a comment that using a WHERE would be more\n> > efficient then the HAVING...and, unless someone can come up with an\n> > instance that would make sense (ie. why you'd do it with HAVING vs WHERE),\n> > I'm in agreement with them...\n> > \n> > \tSince we obviously do support HAVING, and, I believe, follow the\n> > SQL92 spec on it, is there any way of getting the crashme test fixed to\n> > not use the above query as a basis for whether an RDBMS supports HAVING or\n> > not?\n> \n> Thanks bruce and hermit for all the comments,\n> I looked into the book \"The SQL Standard\" fourth edition of Date \n> and in the appendixes page 439 they have an example which they \n> discuss. The example is: select count(*) as x from mt having 0 = 0; \n> with an empty table they say logically correct it should return one \n> column and no rows but sql gives a table of one column and one \n> row. 
So I think it's true that HAVING has to have an aggregation \n> but it will also be possible use a non-aggregation.\n> \n> If I look in our crash-me output page (this is a handy thing for this \n> kind of questions) and look for all the other db's to see what they \n> do I can say the following thing:\n> Informix,Access,Adabas,db2,empress,ms-sql,oracle,solid and \n> sybase are all supporting non-aggregation in having clause.\n> At this moment everyone except postgres is supporting it.\n> \n> The change which I can made is to remove the if structure around \n> the having tests so that having with group functions will also be \n> tested in the crash-me test.\n> \n> I will try the patch of bruce for the comment part. It shouldn't be the \n> way that the perl module is stripping the comments of the querie \n> but it is possible and if it is possible it will be a bug in the DBD \n> postgresql perl module.\n> \n> PS. the benchmark results of postgres 6.5.2 are also added to the \n> benchmark result page.\n> \n> Greetz...\n> \n> Luuk\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 6 Oct 1999 10:43:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> > > I will try the patch of bruce for the comment part. It shouldn't be the \n> > > way that the perl module is stripping the comments of the querie \n> > > but it is possible and if it is possible it will be a bug in the DBD \n> > > postgresql perl module.\n> > \n> > Maybe we should support the HAVING without aggregates. What do others\n> > think?\n> \n> If we are the only one that doesn't, it just makes it harder for those\n> moving from Oracle/Informix/etc if they happen to be using such queries...\n> \n> How hard would it be to implement?\n\nNot hard. I will add it to the TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 09:45:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> I can't get excited about changing this from the standpoint of\n> functionality, because AFAICS there is no added functionality.\n> But if we're looking bad on a recognized benchmark maybe we\n> should do something about it.\n\nWe are looking bad on a benchmark designed to show MySQL in the best\npossible light, and to show other DBs at their worst. The maintainers\nof that benchmark have no interest in changing that emphasis (e.g. we\nare still reported as not supporting HAVING, even though we have\ndemonstrated to them that we do; this is the same pattern we have seen\nearlier).\n\nThe last time I looked at it, there were ~30% factual errors in the\nreported results for Postgres; no telling what errors are there for\nother products. imho it is a waste of time to address a bogus\nbenchmark, unless someone wants to take it up as a hobby. 
I'm a bit\nbusy right now ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 06 Oct 1999 13:47:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> > I can't get excited about changing this from the standpoint of\n> > functionality, because AFAICS there is no added functionality.\n> > But if we're looking bad on a recognized benchmark maybe we\n> > should do something about it.\n> \n> We are looking bad on a benchmark designed to show MySQL in the best\n> possible light, and to show other DBs at their worst. The maintainers\n> of that benchmark have no interest in changing that emphasis (e.g. we\n> are still reported as not supporting HAVING, even though we have\n> demonstrated to them that we do; this is the same pattern we have seen\n> earlier).\n> \n> The last time I looked at it, there were ~30% factual errors in the\n> reported results for Postgres; no telling what errors are there for\n> other products. imho it is a waste of time to address a bogus\n> benchmark, unless someone wants to take it up as a hobby. I'm a bit\n> busy right now ;)\n\nOn a separate note, should we support HAVING without any aggregates?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 09:54:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On Wed, 6 Oct 1999, The Hermit Hacker wrote:\n\n> > Maybe we should support the HAVING without aggregates. What do others\n> > think?\n> \n> If we are the only one that doesn't, it just makes it harder for those\n> moving from Oracle/Informix/etc if they happen to be using such queries...\n\nI just tried it on a very old Sybase (ver 4 something, before ODBC was\navailable for it) and it works on that. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 6 Oct 1999 10:15:12 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": ">> If I look in our crash-me output page (this is a handy thing for this \n>> kind of questions) and look for all the other db's to see what they \n>> do I can say the following thing:\n>> Informix,Access,Adabas,db2,empress,ms-sql,oracle,solid and \n>> sybase are all supporting non-aggregation in having clause.\n>> At this moment everyone except postgres is supporting it.\n\n> Maybe we should support the HAVING without aggregates. What do others\n> think?\n\nKinda looks like we gotta, just for compatibility reasons. 
Also, if I\nread the SQL spec correctly, it does not forbid HAVING w/out aggregates,\nso those guys are adhering to the spec.\n\nI'll put it on my todo list --- I'm busy making some other fixes in that\ngeneral area anyway.\n\nNext question is should we emit a NOTICE or just silently do it?\n(For that matter, should we go so far as to push the HAVING condition\nover to become part of WHERE when it has no agg? Then the speed issue\ngoes away.) I kind of like emitting a NOTICE on the grounds of helping\nto educate users about the difference between WHERE and HAVING, but\nmaybe people would just see it as obnoxious.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Oct 1999 10:17:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Next question is should we emit a NOTICE or just silently do it?\n> (For that matter, should we go so far as to push the HAVING condition\n> over to become part of WHERE when it has no agg? Then the speed issue\n> goes away.) I kind of like emitting a NOTICE on the grounds of helping\n> to educate users about the difference between WHERE and HAVING, but\n> maybe people would just see it as obnoxious.\n\nThat is a tough call. My personal vote is that HAVING is misunderstood\nenough to emit a warning.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 10:39:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "At 10:17 AM 10/6/99 -0400, Tom Lane wrote:\n\n>Next question is should we emit a NOTICE or just silently do it?\n>(For that matter, should we go so far as to push the HAVING condition\n>over to become part of WHERE when it has no agg? Then the speed issue\n>goes away.) I kind of like emitting a NOTICE on the grounds of helping\n>to educate users about the difference between WHERE and HAVING, but\n>maybe people would just see it as obnoxious.\n\nPeople used to commercial servers like Oracle would just see it as\nbeing obnoxious, I suspect.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 06 Oct 1999 08:11:49 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On Wed, 6 Oct 1999, Bruce Momjian wrote:\n\n> > Next question is should we emit a NOTICE or just silently do it?\n> > (For that matter, should we go so far as to push the HAVING condition\n> > over to become part of WHERE when it has no agg? Then the speed issue\n> > goes away.) I kind of like emitting a NOTICE on the grounds of helping\n> > to educate users about the difference between WHERE and HAVING, but\n> > maybe people would just see it as obnoxious.\n> \n> That is a tough call. My personal vote is that HAVING is misunderstood\n> enough to emit a warning.\n\nAgreed from here...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 6 Oct 1999 12:29:43 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "On Wed, 6 Oct 1999, Thomas Lockhart wrote:\n\n> > I can't get excited about changing this from the standpoint of\n> > functionality, because AFAICS there is no added functionality.\n> > But if we're looking bad on a recognized benchmark maybe we\n> > should do something about it.\n> \n> We are looking bad on a benchmark designed to show MySQL in the best\n> possible light, and to show other DBs at their worst. The maintainers\n> of that benchmark have no interest in changing that emphasis (e.g. we\n> are still reported as not supporting HAVING, even though we have\n> demonstrated to them that we do; this is the same pattern we have seen\n> earlier).\n> \n> The last time I looked at it, there were ~30% factual errors in the\n> reported results for Postgres; no telling what errors are there for\n> other products. imho it is a waste of time to address a bogus\n> benchmark, unless someone wants to take it up as a hobby. I'm a bit\n> busy right now ;)\n\nMy opinion on this tends to be that, in the HAVING case, we are the only\none that doesn't support it w/o aggregates, so we altho we do follow the\nspec, we are making it slightly more difficult to migrate from 'the\nothers' to us...\n\nSo far, Luuk has appeared to be relatively open as far as investigating\nthe discrepencies in the report...but, since he doesn't *know* PostgreSQL,\nhe has no way of knowing what is wrong, and that is where, I think, we\nshould be trying to help support our end of things...\n\nIf Luuk were to come back and tell us that he absolutely won't change\nanything, then, IMHO, there is a problem...but, thanks to his test, Bruce\nmade some changes to how we handle our comments to fix a bug...and Luuk\ntold us that he fixed the HAVING test such that HAVING w/o aggregates\ndoesn't fail the test...\n\nBenchmarks, IMHO, always try to favor the 'base product' that is being\nadvertised...but, more often then not, its because the person doing the\nbenchmarking knows that product well enough to be able to 'tweak' it to\nperform better...Luuk, so far as I believe, is willing to be \"educated in\nPostgreSQL\"...I don't think its right for us to stifle that, is it?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 6 Oct 1999 12:41:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> My opinion on this tends to be that, in the HAVING case, we are the only\n> one that doesn't support it w/o aggregates, so we altho we do follow the\n> spec, we are making it slightly more difficult to migrate from 'the\n> others' to us...\n\nWe follow the spec in what we support, but the spec *does* allow\nHAVING w/o aggregates (and w/o any GROUP BY clause).\n\nTom, imho we absolutely should *not* emit warnings for unusual but\nlegal constructs. 
Our chapter on \"syntax\" can start addressing these\nkinds of topics, but the backend probably isn't the place to teach SQL\nstyle...\n\n> Benchmarks, IMHO, always try to favor the 'base product' that is being\n> advertised...but, more often then not, its because the person doing the\n> benchmarking knows that product well enough to be able to 'tweak' it to\n> perform better...Luuk, so far as I believe, is willing to be \"educated in\n> PostgreSQL\"...I don't think its right for us to stifle that, is it?\n\nRight. Sorry Luuk for going off on ya...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 07 Oct 1999 13:15:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> We follow the spec in what we support, but the spec *does* allow\n> HAVING w/o aggregates (and w/o any GROUP BY clause).\n\n> Tom, imho we absolutely should *not* emit warnings for unusual but\n> legal constructs.\n\nYeah, I came to the same conclusion while I was working on it last\nnight. What I committed will still complain about HAVING that\nreferences an ungrouped variable --- that *is* incorrect per spec ---\nbut otherwise it will take degenerate cases like\n\tselect 2+2 having 1<2;\nwithout complaint.\n\nHmm... here is a boundary condition that may or may not be right yet:\n\nregression=> select f1 from int4_tbl having 1 < 2;\nERROR: Illegal use of aggregates or non-group column in target list\n\nIs this query legal, or not? The spec sez about HAVING:\n\n 1) If neither a <where clause> nor a <group by clause> is speci-\n fied, then let T be the result of the preceding <from clause>;\n\t [snip]\n\n 1) Let T be the result of the preceding <from clause>, <where\n clause>, or <group by clause>. If that clause is not a <group\n by clause>, then T consists of a single group and does not have\n a grouping column.\n\t [snip]\n\n 2) Each <column reference> contained in a <subquery> in the <search\n condition> that references a column of T shall reference a\n grouping column of T or shall be specified within a <set func-\n tion specification>.\n\nIn the absence of a GROUP BY clause, it's clearly illegal for the HAVING\ncondition to reference any columns of the source table except via\naggregates. It's not quite so clear whether the target list has the same\nrestriction --- my just-committed code assumes so, but is that right?\n\nI guess the real question here is whether a query like the above should\ndeliver one row or N. AFAICS the spec defines the result of this query\nas a \"grouped table\" with one group, and in every other context\ninvolving grouped tables you get only one output row per group; but\nI don't see that spelled out for this case.\n\nComments? Anyone want to opine on the legality of this, or try it on\nsome other DBMSes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 1999 10:34:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> On a separate note, should we support HAVING without any aggregates?\n\nSure, it is allowed by the SQL92 spec (as are various other\ncombinations with and without GROUP and HAVING). 
But it adds no real\nfunctionality, and this is the first report of anyone even trying it,\nsince the same behavior is covered by simpler, more common queries.\nDoesn't seem to be a high priority...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 07 Oct 1999 15:27:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql \n\tcomparison" }, { "msg_contents": "> > My opinion on this tends to be that, in the HAVING case, we are the only\n> > one that doesn't support it w/o aggregates, so we altho we do follow the\n> > spec, we are making it slightly more difficult to migrate from 'the\n> > others' to us...\n> \n> We follow the spec in what we support, but the spec *does* allow\n> HAVING w/o aggregates (and w/o any GROUP BY clause).\n> \n> Tom, imho we absolutely should *not* emit warnings for unusual but\n> legal constructs. Our chapter on \"syntax\" can start addressing these\n> kinds of topics, but the backend probably isn't the place to teach SQL\n> style...\n> \n\nOK. Agreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:41:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" } ]
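To make the WHERE-versus-HAVING distinction in the thread above concrete, here is a minimal SQL sketch; the table and column are hypothetical (assume something like test(a int)):

    -- HAVING filtering on an aggregate computed per group (the common case):
    SELECT a, count(*) FROM test GROUP BY a HAVING count(*) > 1;

    -- HAVING without an aggregate, the form the crash-me test issues:
    SELECT a FROM test GROUP BY a HAVING a > 0;

    -- Equivalent rewrite that filters rows before grouping instead of after,
    -- which is what the suggested NOTICE would steer users toward:
    SELECT a FROM test WHERE a > 0 GROUP BY a;
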
[ { "msg_contents": "Hi,\nI am having problems with the PostgreSQL port to Windows.\n\nIf I run \"psql -p 5432 dataname\" the psql client connects fine.\n\nBut \"telnet 192.168.100.2:5432\" gets no answer.\n\nThe \"ping\" utility works fine.\n\nThe pg_hba.conf file has the following line:\n\nhost 192.168.100.10 255.255.255.0 trust\n\n\nHow can I reach port 5432???\n\nthanks\n\nHenry Molina\nBogota, Colombia\n\n______________________________________________________\nGet Your Private, Free Email at http://www.hotmail.com\n", "msg_date": "Tue, 05 Oct 1999 14:05:31 PDT", "msg_from": "\"Henry Molina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Request information" } ]
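Two things may be worth checking in a report like the one above; this is a guess from the symptoms, not a confirmed diagnosis. First, telnet takes the port as a separate argument rather than host:port. Second, a postmaster of this era only accepts TCP/IP connections when started with -i, and the host records in that release's pg_hba.conf normally carry a database-name field as well (the data directory and addresses below are only examples):

    # telnet wants host and port as separate arguments:
    telnet 192.168.100.2 5432

    # the postmaster must be started with -i to listen for TCP/IP connections:
    postmaster -i -D /usr/local/pgsql/data

    # pg_hba.conf host record with the database field included:
    # host  all  192.168.100.0  255.255.255.0  trust
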
[ { "msg_contents": "Hi,\n\nI have a cron job which vaccuming database every hour\n(say for testing purposes) and sometimes I get following messages:\n\nNOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (10003) IS NOT THE SAME AS \nHEAP' (10004)\nNOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (10003) IS NOT THE SAME AS \nHEAP' (10004)\n\nThis happens on Linux 2.0.37, postgresql 6.5.2\n\nWhat does it means ? Why it's happens not every time script runs ?\nWhat's the best way to get rid off this problem except dump/reload ?\n\nThe script is here:\n\n/usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql\n\nvacuum_hits.sql:\n\nbegin work;\nvacuum analyze hits(msg_id);\ndrop index hits_pkey;\ncreate unique index hits_pkey on hits(msg_id);\nend work;\n\n\n\tRegards,\n\n\t\tOleg\n \n\n\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 5 Oct 1999 18:58:00 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.2 vacuum NOTICE messages" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> Hi,\n> \n> I have a cron job which vaccuming database every hour\n> (say for testing purposes) and sometimes I get following messages:\n> \n> NOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (10003) IS NOT THE SAME AS\n> HEAP' (10004)\n> NOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (10003) IS NOT THE SAME AS\n> HEAP' (10004)\n> \n> This happens on Linux 2.0.37, postgresql 6.5.2\n> \n> What does it means ? Why it's happens not every time script runs ?\n> What's the best way to get rid off this problem except dump/reload ?\n\nRe-build indices.\n\n> \n> The script is here:\n> \n> /usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql\n> \n> vacuum_hits.sql:\n> \n> begin work;\n> vacuum analyze hits(msg_id);\n\nYou MUST NOT run vacuum inside BEGIN/END!\n\n> drop index hits_pkey;\n> create unique index hits_pkey on hits(msg_id);\n> end work;\n\nVadim\n", "msg_date": "Wed, 06 Oct 1999 09:16:36 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2 vacuum NOTICE messages" } ]
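Following Vadim's advice above, one way to rework the cron script is simply to move the VACUUM out of the transaction block, leaving the index rebuild as it was:

    -- vacuum_hits.sql, with VACUUM run outside any BEGIN/END block:
    VACUUM ANALYZE hits (msg_id);

    BEGIN WORK;
    DROP INDEX hits_pkey;
    CREATE UNIQUE INDEX hits_pkey ON hits (msg_id);
    END WORK;
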
[ { "msg_contents": "There are a couple of gripes in the pgsql-sql list this morning about\nbeing unable to enter CREATE RULE commands that specify multiple\nactions --- those folk are getting parser: parse error at or near \"\"\nfrom what appears to be perfectly valid syntax. I suppose they are\ngetting burnt by a broken vendor-supplied yacc. But while looking at\nthis, I couldn't help noticing how crufty the syntax is:\n\nRuleActionList: NOTHING { $$ = NIL; }\n | SelectStmt { $$ = lcons($1, NIL); }\n | RuleActionStmt { $$ = lcons($1, NIL); }\n | '[' RuleActionBlock ']' { $$ = $2; }\n | '(' RuleActionBlock ')' { $$ = $2; } \n ;\n\nRuleActionBlock: RuleActionMulti { $$ = $1; }\n | RuleActionStmt { $$ = lcons($1, NIL); }\n ;\n\nRuleActionMulti: RuleActionMulti RuleActionStmt\n { $$ = lappend($1, $2); }\n | RuleActionMulti RuleActionStmt ';'\n { $$ = lappend($1, $2); }\n | RuleActionStmt ';'\n { $$ = lcons($1, NIL); }\n ;\n\nRuleActionStmt: InsertStmt\n | UpdateStmt\n | DeleteStmt\n | NotifyStmt\n ;\n\nWhat's wrong with that you say? Well, it allows a RuleActionBlock to\nbe made up of RuleActionStmts that aren't separated by semicolons.\nSpecifically\n\t\tstmt1 ; stmt2 stmt3 stmt4\nhas a production sequence.\n\nI don't know if that was intentional or not, but it sure looks like\na shift/reduce conflict waiting to happen, as soon as the possible\nRuleActionStmts get any more complicated. In any case, it's pretty\nbizarre that a semi is only required after the first statement.\n\nI suggest that we require separating semicolons and simplify the\nRuleActionBlock production to a more conventional list style,\n\nRuleActionBlock: RuleActionStmt\n\t\t| RuleActionBlock ';' RuleActionStmt\n\t\t| RuleActionBlock ';'\n\n(the last alternative isn't normal list style, but it allows a trailing\nsemicolon which is accepted by the existing grammar).\n\nAside from forestalling future trouble with syntax extensions, this\nmight make us work a little better with old versions of yacc. I don't\nknow exactly why this area is a trouble spot for vendor yaccs, but it\nseems to be. Simplifying the syntax may well help.\n\nComments? Anyone really in love with semicolon-less rule lists?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Oct 1999 11:12:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE RULE syntax simplification" } ]
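For reference, the multi-action rules that are tripping over vendor-supplied yaccs look roughly like this; the table, rule, and audit names are invented for illustration. The proposed grammar change only makes the semicolon separating the two actions mandatory:

    CREATE RULE log_account_update AS ON UPDATE TO accounts
        DO ( INSERT INTO account_audit VALUES (old.id, 'updated');
             NOTIFY account_audit );
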
[ { "msg_contents": "Here's the summary of my wheelings and dealings. No code yet, but trust\nme, you don't want it.\n\n* Changed one file into many\n* Changed few functions into countless\n* Added GNU long option support (added test to configure.in,\n definition to config.h.in)\n [ This feature is under protest by Tom L. ]\n* Extra arguments are now taken as dbname and username\n (it used to fail if you have more than one extra)\n* Added switch -V (--version) to display client and server version and\n warn about possible incompatibilities\n* Added \\copyright command. Changed welcome message accordingly.\n* Added a few long slash commands equivalent to short ones (e.g.,\n \\list = \\l)\n* Rewrote backslash command parser from scratch. Can now quote\n arguments. (Only doublequotes. Single quotes to come.)\n* Added message output channel as alternative to query output and\n stderr. Might be useful to funnel output in scripts.\n* SQL command help is now generated directly from the SGML sources at\n build time.\n [ Must have perl. Might be a problem on Windows. Perhaps package\n preparsed version as well. ]\n* \\connect now asks for password when appropriate\n* Added switch -U allowing you to specify username on cmd line\n* -? prints out default username, host, port, etc. in help screen\n* PSQLRC env variable can override the name of your startup script\n* PSQL_EDITOR can set your prefered editor for \\e and \\E (overriding\n EDITOR and VISUAL)\n* Fixed flat tire on bike ...\n* when \\connect fails, it now keeps the previous connection (formerly\n aborted program)\n* Custom prompts in tcsh style\n (No, I am not partial to tcsh. In fact, I don't even use it. But\n using % as escape character rather than \\ saves a lot of headaches.)\n* Increased abstraction of input routines.\n [ Cheers to Matthew D.! ]\n* Started to clean up \\copy. Can now specify delimiters and \"with\n oids\". Still needs some work though, especially regarding binary and\n quoting/escaping. I'll probably end up writing a better strtok()\n before this is all over.\n\nMore in a week . . .\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n", "msg_date": "Tue, 5 Oct 1999 20:12:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "psql Week 1" }, { "msg_contents": "Peter,\n\nwhat I miss on pgsql is ability to keep commands I issued in history file\nas bash does for example. Say, ~/.pgsqlhistory\nOf course, not to save passwords.\n\n\tRegards,\n\t\tOleg\n\nOn Tue, 5 Oct 1999, Peter Eisentraut wrote:\n\n> Date: Tue, 5 Oct 1999 20:12:29 +0200 (CEST)\n> From: Peter Eisentraut <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] psql Week 1\n> \n> Here's the summary of my wheelings and dealings. No code yet, but trust\n> me, you don't want it.\n> \n> * Changed one file into many\n> * Changed few functions into countless\n> * Added GNU long option support (added test to configure.in,\n> definition to config.h.in)\n> [ This feature is under protest by Tom L. ]\n> * Extra arguments are now taken as dbname and username\n> (it used to fail if you have more than one extra)\n> * Added switch -V (--version) to display client and server version and\n> warn about possible incompatibilities\n> * Added \\copyright command. Changed welcome message accordingly.\n> * Added a few long slash commands equivalent to short ones (e.g.,\n> \\list = \\l)\n> * Rewrote backslash command parser from scratch. Can now quote\n> arguments. (Only doublequotes. 
Single quotes to come.)\n> * Added message output channel as alternative to query output and\n> stderr. Might be useful to funnel output in scripts.\n> * SQL command help is now generated directly from the SGML sources at\n> build time.\n> [ Must have perl. Might be a problem on Windows. Perhaps package\n> preparsed version as well. ]\n> * \\connect now asks for password when appropriate\n> * Added switch -U allowing you to specify username on cmd line\n> * -? prints out default username, host, port, etc. in help screen\n> * PSQLRC env variable can override the name of your startup script\n> * PSQL_EDITOR can set your prefered editor for \\e and \\E (overriding\n> EDITOR and VISUAL)\n> * Fixed flat tire on bike ...\n> * when \\connect fails, it now keeps the previous connection (formerly\n> aborted program)\n> * Custom prompts in tcsh style\n> (No, I am not partial to tcsh. In fact, I don't even use it. But\n> using % as escape character rather than \\ saves a lot of headaches.)\n> * Increased abstraction of input routines.\n> [ Cheers to Matthew D.! ]\n> * Started to clean up \\copy. Can now specify delimiters and \"with\n> oids\". Still needs some work though, especially regarding binary and\n> quoting/escaping. I'll probably end up writing a better strtok()\n> before this is all over.\n> \n> More in a week . . .\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\n> \n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Oct 1999 01:10:21 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "> Peter,\n> \n> what I miss on pgsql is ability to keep commands I issued in history file\n> as bash does for example. Say, ~/.pgsqlhistory\n> Of course, not to save passwords.\n\nreadline has the capability. We would just need to enable it. Good\nidea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 5 Oct 1999 17:30:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n\n Peter> * \\connect now asks for password when appropriate\n\nDoes this include the initial connect? I has password authentication\nenabled and think it would be nice if psql just prompted me rather\nthan failed....\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3a\nCharset: noconv\nComment: Processed by Mailcrypt 3.5.4, an Emacs/PGP interface\n\niQCVAwUBN/q/B+oW38lmvDvNAQHUwwP/dLhX5AP05+v1lcpVEzx3gSK+9vWxySfx\nlK521D3fsMrWmUQOYn0mqEtLPv/bVUcYgAT+rL3L6dPdvXAHNg1rz64dmZwcsHhy\nErm7GSH4OfCh6msNAhlF0vwgJQams+uRTbYf9AZ3UA6OBgTUCrQ3zR7Q4PSVM9O+\n+v2y2Y+FJlo=\n=hgOk\n-----END PGP SIGNATURE-----\n", "msg_date": "05 Oct 1999 23:16:30 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "On Tue, 5 Oct 1999, Peter Eisentraut wrote:\n> * when \\connect fails, it now keeps the previous connection (formerly\n> aborted program)\n\nThis is bad.\n\nConsider some process that connects to various DBs and does stuff like\ndrop foo.\n\nIf connect fails assume that the user no longer wanted to be connected to\nthe old connection.\n\n-- \n| Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| [email protected] | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | This Space For Rent | ISO8802.5 4ever |\n\n", "msg_date": "Wed, 6 Oct 1999 00:59:21 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "On Oct 6, Oleg Bartunov mentioned:\n\n> what I miss on pgsql is ability to keep commands I issued in history file\n> as bash does for example. Say, ~/.pgsqlhistory\n\nConsider it done. (Because it is done.)\n\n> Of course, not to save passwords.\n\nUsernames and passwords are entered through a different channel. This will\nnot be a problem.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Wed, 6 Oct 1999 18:26:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "On Oct 6, Matthew N. Dodd mentioned:\n\n> On Tue, 5 Oct 1999, Peter Eisentraut wrote:\n> > * when \\connect fails, it now keeps the previous connection (formerly\n> > aborted program)\n> \n> This is bad.\n> \n> Consider some process that connects to various DBs and does stuff like\n> drop foo.\n> \n> If connect fails assume that the user no longer wanted to be connected to\n> the old connection.\n\nI forgot to mention that this only happens in interactive mode for the\nvery reason you cited. I do not see a problem there.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Wed, 6 Oct 1999 18:29:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 1" } ]
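A hypothetical session exercising a few of the switches and environment variables named in the summary above (the option and variable names are as given there; the values, user, and database are made up):

    export PSQLRC=$HOME/.psqlrc-test    # alternative startup script
    export PSQL_EDITOR=vi               # editor used by \e and \E
    psql -U webuser mydb                # -U supplies the username on the command line
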
[ { "msg_contents": "> > hmmm that's strange. I tested this also this morning with a pretty \n> > simple program. I will attach the program to this email and place \n> > the output here. It's a quiet simple program. Just create a table, fill \n> > it and do select * from test1 -- comment ...and see what it does. \n> > This is the output:\n> > \n> > DBD::Pg::st execute failed: ERROR: parser: \n> > parse error at or near \"-\"\n> > DBD::Pg::st fetchrow_array failed: no statement \n> > executing\n> > can't execute -- comment: ERROR: parser: parse \n> > error at or near \"-\"\n> > \n> > this is why I thought it was done in the front end and not in the \n> > backend. Is there a way to solve this. PS the comment with /* */ is \n> > going okay this way.\n> \n> I just connected to the backend and did it directly, and it worked:\n> \n> \tpostgres -D /u/pg/data test\n> \t\n> \tPOSTGRES backend interactive interface \n> \t$Revision: 1.130 $ $Date: 1999/09/29 16:06:10 $\n> \t\n> \tbackend> select * from test1 -- comment\n> \tblank\n> \t 1: x (typeid = 23, len = 4, typmod = -1, byval = t)\n> \t ----\n> \tbackend> \n> \n> So, there must be something in the perl interface that is causing the\n> problem. I don't have pgperl defined here, so I am not sure, but I can\n> record it as a bug.\n> \n> Now, if I do this:\n> \t\n> \tbackend> -- testssdf\n> \tERROR: parser: parse error at or near \"\"\n> \tERROR: parser: parse error at or near \"\"\n> \n> it shouldn't throw an error, but it does. psql doesn't mind it, though.\n> Strange. Same with /* lkjas;ldfjk */. Seems we have a bug there\n> because non-psql interfaces can send these queries.\n\nOK, I am applying the following patch to fix the above problem. The old\ncode, which I wrote, took care of trailing semicolons, but did not\nreally fix the problems of comment-only lines, and lines containing many\nsemicolons next to each other. This code is much cleaner, and\nregression tests seem to like it.\n\nI know this problem was reported before, and Thomas had made some\ncomment about needing it fixed.\n\nThis will be applied to 6.6 only. Seems to dangerous for 6.5.*.\n\n[Not sure if the perl test is going to be OK after this fix. Looks like\nsomething inside perl may be the problem. Maybe there is some code in\nthe perl interface to strip out -- comments? ]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\necho: cannot create /dev/ttyp5: permission denied\n\nIndex: gram.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.106\ndiff -c -r2.106 gram.y\n*** gram.y\t1999/10/03 23:55:30\t2.106\n--- gram.y\t1999/10/05 18:01:41\n***************\n*** 365,384 ****\n %left\t\tUNION INTERSECT EXCEPT\n %%\n \n! stmtblock: stmtmulti opt_semi\n \t\t\t\t{ parsetree = $1; }\n \t\t;\n \n stmtmulti: stmtmulti ';' stmt\n! \t\t\t\t{ $$ = lappend($1, $3); }\n \t\t| stmt\n! \t\t\t\t{ $$ = lcons($1,NIL); }\n \t\t;\n \n- opt_semi:\t';'\n- \t\t|\t/*EMPTY*/\n- \t\t;\n- \t\t\n stmt :\t AddAttrStmt\n \t\t| AlterUserStmt\n \t\t| ClosePortalStmt\n--- 365,393 ----\n %left\t\tUNION INTERSECT EXCEPT\n %%\n \n! /*\n! *\tHandle comment-only lines, and ;; SELECT * FROM pg_class ;;;\n! *\tpsql already handles such cases, but other interfaces don't.\n! *\tbjm 1999/10/05\n! */\n! 
stmtblock: stmtmulti\n \t\t\t\t{ parsetree = $1; }\n \t\t;\n \n stmtmulti: stmtmulti ';' stmt\n! \t\t\t\t{ if ($3 != (Node *)NIL)\n! \t\t\t\t\t$$ = lappend($1, $3);\n! \t\t\t\t else\n! \t\t\t\t\t$$ = $1;\n! \t\t\t\t}\n \t\t| stmt\n! \t\t\t\t{ if ($1 != (Node *)NIL)\n! \t\t\t\t\t$$ = lcons($1,NIL);\n! \t\t\t\t else\n! \t\t\t\t\t$$ = (Node *)NIL;\n! \t\t\t\t}\n \t\t;\n \n stmt :\t AddAttrStmt\n \t\t| AlterUserStmt\n \t\t| ClosePortalStmt\n***************\n*** 423,428 ****\n--- 432,439 ----\n \t\t| VariableShowStmt\n \t\t| VariableResetStmt\n \t\t| ConstraintsSetStmt\n+ \t\t|\t/*EMPTY*/\n+ \t\t\t\t{ $$ = (Node *)NIL; }\n \t\t;\n \n /*****************************************************************************", "msg_date": "Tue, 5 Oct 1999 14:13:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql comparison" } ]
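The query strings at issue, collected from the thread above; with the patch applied the backend treats them as empty statements (or runs the one real statement) instead of raising a parse error when they arrive from interfaces that, unlike psql, do not strip comments first:

    -- just a comment, nothing else on the line
    ;; SELECT * FROM pg_class ;;;
    SELECT * FROM test1 -- trailing comment
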
[ { "msg_contents": "Tom, I suspect that this is your work:\n./psql: error in loading shared libraries : undefined symbol:\ncreatePQExpBuffer\n\nwhich happens if you run several versions of everything all at once and\nforget to set all your lib paths right.\n\nHow about version numbering libpq properly? It has been 2.0 ever since I\ncan remember (not very long :). At least do ++0.0.1 when you change\nsomething. Is there any particular reason why this is not done?\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Tue, 5 Oct 1999 22:45:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> How about version numbering libpq properly? It has been 2.0 ever since I\n> can remember (not very long :). At least do ++0.0.1 when you change\n> something. Is there any particular reason why this is not done?\n\nWe've been pretty lax about version numbering during development cycles.\nIt could be a problem if you are keeping several versions around,\nI suppose. But I think what you are asking for is a major-version bump\nanytime a subroutine gets added (else it's not going to help a dynamic\nlinker distinguish two versions anyway). That seems not very workable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Oct 1999 19:00:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "None" } ]
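When several builds are installed side by side like this, one quick way to see which libpq a given psql binary resolves at run time is the standard dynamic-linker tooling; the library path below is only an example:

    ldd ./psql                                      # shows which libpq.so gets picked up
    LD_LIBRARY_PATH=/usr/local/pgsql/lib ./psql     # force the intended copy for one run
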
[ { "msg_contents": "\nI'm beginning to think I created my own error! After looking over the\nsources I'm thinking I did the cut-n-paste from the same source but a\ndifferent function (if that makes sense).\n\nI just sent patches for libpq++ to return an int (I'll also submit changes\nto the sgml docs) and return a -1 if PQcmdTuples() returns a NULL pointer.\nI just haven't been able to convince it to return a NULL.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 05 Oct 1999 22:47:42 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "libpq++ doc error?" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> I just sent patches for libpq++ to return an int (I'll also submit changes\n> to the sgml docs) and return a -1 if PQcmdTuples() returns a NULL pointer.\n> I just haven't been able to convince it to return a NULL.\n\nIt won't. If you look at the source, it's quite clear that it never\nreturns NULL. It will return an empty string (\"\") if it notices a\nproblem.\n\nAlso, if you wanted to be really paranoid you'd check that what it\nreturns actually looks like it is a number, because PQcmdTuples doesn't\ncheck that there is a number in the right spot in the command status\nreturned by the backend.\n\nThese two points together are why I suggested testing for an initial\ndigit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Oct 1999 10:10:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq++ doc error? " }, { "msg_contents": "On Wed, 6 Oct 1999, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > I just sent patches for libpq++ to return an int (I'll also submit changes\n> > to the sgml docs) and return a -1 if PQcmdTuples() returns a NULL pointer.\n> > I just haven't been able to convince it to return a NULL.\n> \n> It won't. If you look at the source, it's quite clear that it never\n> returns NULL. It will return an empty string (\"\") if it notices a\n> problem.\n> \n> Also, if you wanted to be really paranoid you'd check that what it\n> returns actually looks like it is a number, because PQcmdTuples doesn't\n> check that there is a number in the right spot in the command status\n> returned by the backend.\n> \n> These two points together are why I suggested testing for an initial\n> digit...\n\nOk, I'm looking for an empty string now and that will return a -1. The\nother return possibilities *should* be handled correctly by atoi() since\nit doesn't care if there's any leading blanks/spaces, as long as it's\nnot a non-numeric non-space character. If it's going to get that crazy\nwith differing possibilities then we'd be further ahead to fix it in \neither the backend or in libpq (by adding a new function and letting\nthis one fade away). Is there an atoi() out there that would not return\na 123 if passed the string \" 123 \" ?? 
It does the right thing in\nhpux8 and in FreeBSD 3.2.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 6 Oct 1999 10:52:34 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpq++ doc error? " } ]
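A minimal C sketch of the guard Tom is suggesting: wrap PQcmdTuples() so that an empty or non-numeric command status yields -1 rather than whatever atoi() makes of it. The helper name and placement are mine, not the actual libpq++ patch:

    #include <ctype.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* Return the affected-row count from a command result, or -1 when the
     * status string carries no leading digit (e.g. after a SELECT). */
    static int cmd_tuples(PGresult *res)
    {
        const char *s = PQcmdTuples(res);

        if (s == NULL || !isdigit((unsigned char) s[0]))
            return -1;
        return atoi(s);
    }
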
[ { "msg_contents": "Perhaps an option for the command:\n\\connect dbname /nofail\nor something\n\n>> -----Original Message-----\n>> From: Matthew N. Dodd [mailto:[email protected]]\n>> Sent: Wednesday, October 06, 1999 6:59 AM\n>> To: Peter Eisentraut\n>> Cc: [email protected]\n>> Subject: Re: [HACKERS] psql Week 1\n>> \n>> \n>> On Tue, 5 Oct 1999, Peter Eisentraut wrote:\n>> > * when \\connect fails, it now keeps the previous \n>> connection (formerly\n>> > aborted program)\n>> \n>> This is bad.\n>> \n>> Consider some process that connects to various DBs and does \n>> stuff like\n>> drop foo.\n>> \n>> If connect fails assume that the user no longer wanted to be \n>> connected to\n>> the old connection.\n>> \n>> -- \n>> | Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | \n>> FreeBSD/NetBSD |\n>> | [email protected] | 2 x '84 Volvo 245DL | \n>> ix86,sparc,pmax |\n>> | http://www.jurai.net/~winter | This Space For Rent | \n>> ISO8802.5 4ever |\n>> \n>> \n>> ************\n>> \n", "msg_date": "Wed, 6 Oct 1999 09:46:23 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] psql Week 1" } ]
[ { "msg_contents": "> * when \\connect fails, it now keeps the previous connection (formerly\n> aborted program)\n\nThis was the behavior at one time, but it was regarded as very dangerous\nand thus changed to the current behavior.\n\nThe problem was, that scripts would do things to the wrong database if a\nconnect failed.\n\nThe ultimate solution would be an unconnected state, where only\n\\connect or help commands would be accepted.\n\nAndreas\n", "msg_date": "Wed, 6 Oct 1999 10:33:00 +0200 ", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] psql Week 1" }, { "msg_contents": "On Oct 6, Zeugswetter Andreas IZ5 mentioned:\n\n> > * when \\connect fails, it now keeps the previous connection (formerly\n> > aborted program)\n> \n> This was the behavior at one time, but it was regarded as very dangerous\n> and thus changed to the current behavior.\n> \n> The problem was, that scripts would do things to the wrong database if a\n> connect failed.\n\nSee earlier response. Only in interactive mode. The idea was that a typo\nshould not bomb you out of the program.\n\n> The ultimate solution would be an unconnected state, where only\n> \\connect or help commands would be accepted.\n\nI had that idea, too, but that would open a whole new can of worms\ncode-wise. Can you guys out there comment if this feature would be\ndesirable?\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Wed, 6 Oct 1999 18:34:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] psql Week 1" } ]
[ { "msg_contents": "While you're at it...\n\nThe following example shows psql correctly clearing its input buffer\nwhen a line containing *only* a comment is seen, but not completely\nclearing the buffer (or not realizing that it is cleared; note the\nchanged prompt) if the comment is at the end of a valid query.\n\npostgres=> -- comment\npostgres=> select 'hi'; -- comment\n?column?\n--------\nhi \n(1 row)\n\npostgres->\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 06 Oct 1999 14:07:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "psql and comments" }, { "msg_contents": "> While you're at it...\n> \n> The following example shows psql correctly clearing its input buffer\n> when a line containing *only* a comment is seen, but not completely\n> clearing the buffer (or not realizing that it is cleared; note the\n> changed prompt) if the comment is at the end of a valid query.\n> \n> postgres=> -- comment\n> postgres=> select 'hi'; -- comment\n> ?column?\n> --------\n> hi \n> (1 row)\n> \n> postgres->\n\nBut aren't they _in_ a new statement, that begins with '--'?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 14:17:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "On Oct 6, Thomas Lockhart mentioned:\n\n> The following example shows psql correctly clearing its input buffer\n> when a line containing *only* a comment is seen, but not completely\n> clearing the buffer (or not realizing that it is cleared; note the\n> changed prompt) if the comment is at the end of a valid query.\n> \n> postgres=> -- comment\n> postgres=> select 'hi'; -- comment\n> ?column?\n> --------\n> hi \n> (1 row)\n> \n> postgres->\n\nThat has been noted by me as well. From looking at the code I see that\nsomeone intended to do something quite different in this case, like print\nthe comment on top of the query being echoed, I think. But I couldn't\nreally follow that.\n\nAnyway, I'm going to end up rewriting that parser anyway, so that will be\ntaken care of. I was almost about to use flex but the Windows crowd\nprobably wouldn't find that too funny. (The Windows crowd won't find this\nthing funny anyway, since I have no clue what #ifdef's I need for that.\nSomeone else will have to do a looong compile&fix session.)\n\nThe question I have though is, is there a reason, besides efficiency, that\npsql doesn't just send the comment to the backend with the query? The\nbackend does accept comments last time I checked. Perhaps someone will one\nday write something that makes some use of those comments on the backend\n(thus conflicting with the very definition of \"comment\", but maybe a\nlogger) and it would remove some load out of psql.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Wed, 6 Oct 1999 21:46:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql and comments" }, { "msg_contents": "> Anyway, I'm going to end up rewriting that parser anyway, so that will be\n> taken care of. I was almost about to use flex but the Windows crowd\n> probably wouldn't find that too funny. 
(The Windows crowd won't find this\n> thing funny anyway, since I have no clue what #ifdef's I need for that.\n> Someone else will have to do a looong compile&fix session.)\n> \n> The question I have though is, is there a reason, besides efficiency, that\n> psql doesn't just send the comment to the backend with the query? The\n> backend does accept comments last time I checked. Perhaps someone will one\n> day write something that makes some use of those comments on the backend\n> (thus conflicting with the very definition of \"comment\", but maybe a\n> logger) and it would remove some load out of psql.\n\nRemove it. Send it to the backend.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 18:22:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments" }, { "msg_contents": "> > The following example shows psql correctly clearing its input buffer\n> > when a line containing *only* a comment is seen, but not completely\n> > clearing the buffer (or not realizing that it is cleared; note the\n> > changed prompt) if the comment is at the end of a valid query.\n> >\n> > postgres=> -- comment\n> > postgres=> select 'hi'; -- comment\n> > ?column?\n> > --------\n> > hi\n> > (1 row)\n> >\n> > postgres->\n> But aren't they _in_ a new statement, that begins with '--'?\n\n?? Sure, that's what psql thinks. But the first case shown above\nshould also begin a new statement, changing the prompt (it doesn't,\nbecause after stripping the comment there are zero blanks in the\nline). I don't think that is the right behavior though.\n\nThings aren't a big problem the way they stand, but istm that a\ncompletely blank line (after stripping single-line comments) may as\nwell be the same as an empty line,and that psql could figure that out.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 07 Oct 1999 13:30:44 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "On Oct 6, Bruce Momjian mentioned:\n\n> > postgres=> select 'hi'; -- comment\n\n> But aren't they _in_ a new statement, that begins with '--'?\n\nGood point. But it's still kind of counterintuitive, isn't it? Especially\nsince something that begins with a '--' can't ever become a useful\nstatement. The problem seems to be in the parsing stages: the query is\nsend off before the comment is encountered.\n\nThe alternative solution of putting query and comment on the same line\nwould be\n=> select 'hi' -- comment ;\nbut that doesn't work at all obviously.\n\nMeanwhile it might be worth pondering if\n=> select 'hi' -- comment \\g\nshould be allowed, since\n=> select 'hi' \\g -- comment\ndoes something different. (And try removing that file if you're an\nunexperienced user.)\n\nRegarding that last line, I just discovered a possible incompatibility\nbetween the official and my current version. The official version creates\na file \"-- comment\" whereas mine makes a file \"--\" because I now have word\nsplitting and quoting rules and the like (so \\g '-- comment' would\nwork). 
Something to else ponder.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n", "msg_date": "Thu, 7 Oct 1999 16:57:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "> The question I have though is, is there a reason, besides efficiency, that\n> psql doesn't just send the comment to the backend with the query? The\n> backend does accept comments last time I checked. Perhaps someone will one\n> day write something that makes some use of those comments on the backend\n> (thus conflicting with the very definition of \"comment\", but maybe a\n> logger) and it would remove some load out of psql.\n\nEfficiency is all, along with (probably) the backend being unhappy\ngetting *only* a comment and no query.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 07 Oct 1999 15:30:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql and comments" }, { "msg_contents": "> > > The following example shows psql correctly clearing its input buffer\n> > > when a line containing *only* a comment is seen, but not completely\n> > > clearing the buffer (or not realizing that it is cleared; note the\n> > > changed prompt) if the comment is at the end of a valid query.\n> > >\n> > > postgres=> -- comment\n> > > postgres=> select 'hi'; -- comment\n> > > ?column?\n> > > --------\n> > > hi\n> > > (1 row)\n> > >\n> > > postgres->\n> > But aren't they _in_ a new statement, that begins with '--'?\n> \n> ?? Sure, that's what psql thinks. But the first case shown above\n> should also begin a new statement, changing the prompt (it doesn't,\n> because after stripping the comment there are zero blanks in the\n> line). I don't think that is the right behavior though.\n> \n> Things aren't a big problem the way they stand, but istm that a\n> completely blank line (after stripping single-line comments) may as\n> well be the same as an empty line,and that psql could figure that out.\n\nI see your point in the above example. I will wait for the psql/libpq\ncleaner-upper to finish, and take a look at it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:43:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "> > The question I have though is, is there a reason, besides efficiency, that\n> > psql doesn't just send the comment to the backend with the query? The\n> > backend does accept comments last time I checked. Perhaps someone will one\n> > day write something that makes some use of those comments on the backend\n> > (thus conflicting with the very definition of \"comment\", but maybe a\n> > logger) and it would remove some load out of psql.\n> \n> Efficiency is all, along with (probably) the backend being unhappy\n> getting *only* a comment and no query.\n> \n\nThat is fixed now. External interfaces showed problems, as the perl\nMySQL test showed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:54:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments" }, { "msg_contents": ">> Things aren't a big problem the way they stand, but istm that a\n>> completely blank line (after stripping single-line comments) may as\n>> well be the same as an empty line,and that psql could figure that out.\n\nThere was talk earlier of changing the behavior so that psql would\nforward comments to the backend, rather than stripping them. One\npotential annoyance if we do that is that (I think) all the regress\ntest expected outputs will change because comments will then appear\nin them.\n\nI'd be inclined to maintain the current behavior. psql has to have a\nsimple parser in it anyway to know when it has a complete query it can\nsend to the backend --- so it must know what is a comment and what is\nnot. Removing the comments is not really going to add much complexity\nAFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 1999 13:35:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Efficiency is all, along with (probably) the backend being unhappy\n>> getting *only* a comment and no query.\n\n> That is fixed now.\n\nIs it? postgres.c treats an all-whitespace input as an empty query,\nbut if you pass it a comment and nothing else it will cycle the parser/\nplanner/executor, and I'm not sure every phase of that process behaves\nreasonably on empty input. Also, that path will not produce the\n\"empty query\" response code that you get from all-whitespace input.\nI *think* libpq doesn't depend on that anymore, but other frontend\nlibraries might...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 1999 13:45:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Efficiency is all, along with (probably) the backend being unhappy\n> >> getting *only* a comment and no query.\n> \n> > That is fixed now.\n> \n> Is it? postgres.c treats an all-whitespace input as an empty query,\n> but if you pass it a comment and nothing else it will cycle the parser/\n> planner/executor, and I'm not sure every phase of that process behaves\n> reasonably on empty input. Also, that path will not produce the\n> \"empty query\" response code that you get from all-whitespace input.\n> I *think* libpq doesn't depend on that anymore, but other frontend\n> libraries might...\n\n\tpostgres -D /u/pg/data test\n\t\n\tPOSTGRES backend interactive interface \n\t$Revision: 1.130 $ $Date: 1999/09/29 16:06:10 $\n\t\n\tbackend> -- test\n\tbackend> \n\nIs that what you mean?\t\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 13:50:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments" }, { "msg_contents": "On Oct 7, Bruce Momjian mentioned:\n\n> > Things aren't a big problem the way they stand, but istm that a\n> > completely blank line (after stripping single-line comments) may as\n> > well be the same as an empty line,and that psql could figure that out.\n> \n> I see your point in the above example. I will wait for the psql/libpq\n> cleaner-upper to finish, and take a look at it.\n\nOh, now I'm cleaning up libpq as well??? 8-}\n\nWell anyway, by a vote of 1 1/2 to 1 psql will strip all comments before\nsending a query, probably in a C pre-processor kind of way.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 7 Oct 1999 23:47:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "On Oct 7, Tom Lane mentioned:\n\n> There was talk earlier of changing the behavior so that psql would\n> forward comments to the backend, rather than stripping them. One\n> potential annoyance if we do that is that (I think) all the regress\n> test expected outputs will change because comments will then appear\n> in them.\n\nThat is a somewhat separate issue, but good that you bring it up. In my\ncleaning ways I noticed that the -e vs. -E switches weren't applied\ncorrectly so I set that straight to an extent. The regression tests rely\non -e to echo the query back correctly, not the one actually sent to the\nbackend, so that could be tweaked.\n\nLuckily, the regression tests don't make extensive use of the backslash\ncommands, the issue being that their output might change. I only found\nthree backslash commands in the whole regression tests. One occurence does\nsomething like this:\n\nsome query;\n*** comment\n*** comment\n\\p\n\\r\nmore queries;\n\nwhich should probably be changed anyway to something like\n\n-- comment\n-- comment\n\nThe other case is\nCREATE TEMP TABLE temptest(col int); \n-- test temp table deletion \n\\c regression \nSELECT * FROM temptest;\nwhich still works as I just confirmed, and the output of \\c gets eaten in\n-q mode anyway.\n\nSeems it's still safe.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Fri, 8 Oct 1999 00:09:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> That is fixed now.\n>> \n>> Is it? postgres.c treats an all-whitespace input as an empty query,\n>> but if you pass it a comment and nothing else it will cycle the parser/\n>> planner/executor, and I'm not sure every phase of that process behaves\n>> reasonably on empty input. Also, that path will not produce the\n>> \"empty query\" response code that you get from all-whitespace input.\n>> I *think* libpq doesn't depend on that anymore, but other frontend\n>> libraries might...\n\n> \tpostgres -D /u/pg/data test\n\t\n> \tPOSTGRES backend interactive interface \n> \t$Revision: 1.130 $ $Date: 1999/09/29 16:06:10 $\n\t\nbackend> -- test\nbackend> \n\n> Is that what you mean?\t\n\nOK, so the parser/planner/executor can cope with dummy input. That's\ngood. 
There's still the problem of returning an 'empty query' response\nto the frontend. I think you'd probably need to hack up postgres.c\nso that when the querytree list produced by the parser is NIL, the\nIsEmptyQuery flag gets set --- this could be done instead of, rather\nthan in addition to, the current test for an all-whitespace input\nbuffer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 1999 19:28:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments " }, { "msg_contents": "> OK, so the parser/planner/executor can cope with dummy input. That's\n> good. There's still the problem of returning an 'empty query' response\n> to the frontend. I think you'd probably need to hack up postgres.c\n> so that when the querytree list produced by the parser is NIL, the\n> IsEmptyQuery flag gets set --- this could be done instead of, rather\n> than in addition to, the current test for an all-whitespace input\n> buffer.\n\nThe system may already return that. My gram.y code tests for empty\nqueries, and should be doing the right thing. Not sure, because I am\nnot sure what to check for.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 19:54:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: psql and comments" }, { "msg_contents": "> On Oct 7, Bruce Momjian mentioned:\n> \n> > > Things aren't a big problem the way they stand, but istm that a\n> > > completely blank line (after stripping single-line comments) may as\n> > > well be the same as an empty line,and that psql could figure that out.\n> > \n> > I see your point in the above example. I will wait for the psql/libpq\n> > cleaner-upper to finish, and take a look at it.\n> \n> Oh, now I'm cleaning up libpq as well??? 8-}\n> \n> Well anyway, by a vote of 1 1/2 to 1 psql will strip all comments before\n> sending a query, probably in a C pre-processor kind of way.\n\nLooks like we are going for a 7.0 next, so feel free to remove very old\nfunctions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Oct 1999 08:50:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "> Luckily, the regression tests don't make extensive use of the backslash\n> commands, the issue being that their output might change. I only found\n> three backslash commands in the whole regression tests. One occurence does\n> something like this:\n> some query;\n> *** comment\n> *** comment\n> \\p\n> \\r\n> more queries;\n> which should probably be changed anyway to something like\n> -- comment\n> -- comment\n\nActually, this is probably testing that the buffer reset actually\nclears the lines, which wouldn't be as obvious if there were only\nlegal SQL preceeding it. 
Maybe leave it as-is??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 11 Oct 1999 14:57:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "On Oct 11, Thomas Lockhart mentioned:\n\n> > some query;\n> > *** comment\n> > *** comment\n> > \\p\n> > \\r\n> > more queries;\n> > which should probably be changed anyway to something like\n> > -- comment\n> > -- comment\n> \n> Actually, this is probably testing that the buffer reset actually\n> clears the lines, which wouldn't be as obvious if there were only\n> legal SQL preceeding it. Maybe leave it as-is??\n\nI think I figured that out: the *** comments actually show up in the\noutput of the regression tests as an aid to a person glancing at the\nresults. If the regression tests wanted to test psql then they could\ncertainly do a lot more fun things than that.\n\nI was planning on implementing an \\echo command which could easily be\ndropped in there as a more elegant solution.\n\nWhich makes me think. The server regression tests should certainly not\nrely on some particular psql functionality, just as a matter of principle.\nBut if I'm ever really bored I could write a separate psql regression\ntest. No promise though.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 11 Oct 1999 22:31:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" }, { "msg_contents": "Um, I hate to break this to you folks, but, barring an AI diff being\navailable, the regression tests might need to be adjusted anyway, since\nthey rely on a particular table output format which I cannot guarantee to\n(or even want to) reproduce exactly.\n\nBut I hear the tests were going to be revised for 7.0 anyway, right? ;)\n\nSince I got you into this mess I would certainly try to help to resolve\nit, but that's a future issue.\n\n\t-Peter\n\nOn Oct 11, Peter Eisentraut mentioned:\n\n> On Oct 11, Thomas Lockhart mentioned:\n> \n> > > some query;\n> > > *** comment\n> > > *** comment\n> > > \\p\n> > > \\r\n> > > more queries;\n> > > which should probably be changed anyway to something like\n> > > -- comment\n> > > -- comment\n> > \n> > Actually, this is probably testing that the buffer reset actually\n> > clears the lines, which wouldn't be as obvious if there were only\n> > legal SQL preceeding it. Maybe leave it as-is??\n> \n> I think I figured that out: the *** comments actually show up in the\n> output of the regression tests as an aid to a person glancing at the\n> results. If the regression tests wanted to test psql then they could\n> certainly do a lot more fun things than that.\n> \n> I was planning on implementing an \\echo command which could easily be\n> dropped in there as a more elegant solution.\n> \n> Which makes me think. The server regression tests should certainly not\n> rely on some particular psql functionality, just as a matter of principle.\n> But if I'm ever really bored I could write a separate psql regression\n> test. 
No promise though.\n> \n> \t-Peter\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 11 Oct 1999 23:07:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Regression tests (was Re: [HACKERS] psql and comments)" }, { "msg_contents": "> Um, I hate to break this to you folks, but, barring an AI diff being\n> available, the regression tests might need to be adjusted anyway, since\n> they rely on a particular table output format which I cannot guarantee to\n> (or even want to) reproduce exactly.\n> \n> But I hear the tests were going to be revised for 7.0 anyway, right? ;)\n\nYes, Thomas does it, but doesn't like to do it too often. He will\nalmost certainly do it for 7.0. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 18:25:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests (was Re: [HACKERS] psql and comments)" }, { "msg_contents": "> Yes, Thomas does it, but doesn't like to do it too often. He will\n> almost certainly do it for 7.0. :-)\n\nI can regenerate the tests (usually) pretty easily. I've got the\nparser spread across the computer room floor at the moment, so it will\nbe a few weeks...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 12 Oct 1999 06:00:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests (was Re: [HACKERS] psql and comments)" }, { "msg_contents": "> > Yes, Thomas does it, but doesn't like to do it too often. He will\n> > almost certainly do it for 7.0. :-)\n> \n> I can regenerate the tests (usually) pretty easily. I've got the\n> parser spread across the computer room floor at the moment, so it will\n> be a few weeks...\n> \n\nYes, Thomas has to check the current regression tests to see that he has\nno errors, then change the backend display, and regenerate the\nregression tests with the new output style. That way, he knows that the\nold and new regression tests are the same.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 10:01:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests (was Re: [HACKERS] psql and comments)" }, { "msg_contents": "\nThis is fixed in the current source tree.\n\n> > > The following example shows psql correctly clearing its input buffer\n> > > when a line containing *only* a comment is seen, but not completely\n> > > clearing the buffer (or not realizing that it is cleared; note the\n> > > changed prompt) if the comment is at the end of a valid query.\n> > >\n> > > postgres=> -- comment\n> > > postgres=> select 'hi'; -- comment\n> > > ?column?\n> > > --------\n> > > hi\n> > > (1 row)\n> > >\n> > > postgres->\n> > But aren't they _in_ a new statement, that begins with '--'?\n> \n> ?? Sure, that's what psql thinks. 
But the first case shown above\n> should also begin a new statement, changing the prompt (it doesn't,\n> because after stripping the comment there are zero blanks in the\n> line). I don't think that is the right behavior though.\n> \n> Things aren't a big problem the way they stand, but istm that a\n> completely blank line (after stripping single-line comments) may as\n> well be the same as an empty line,and that psql could figure that out.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 19:31:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and comments" } ]
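The thread above settles on psql stripping "--" comments on the client side before a query is sent, which means psql's mini-parser has to leave comment markers inside string literals alone while still noticing when a line becomes completely blank. Below is a rough C sketch of that idea; the function name and buffer handling are made up for illustration and are not the actual psql code (escaped quotes inside literals are ignored, and dst is assumed large enough).

#include <stdio.h>

/*
 * strip_sql_comments -- drop "--" line comments from a query string,
 * leaving anything inside single-quoted literals untouched.
 * Illustrative sketch only, not the real psql lexer.
 */
static void
strip_sql_comments(const char *src, char *dst)
{
	int			in_literal = 0;

	while (*src)
	{
		if (*src == '\'')
		{
			in_literal = !in_literal;
			*dst++ = *src++;
		}
		else if (!in_literal && src[0] == '-' && src[1] == '-')
		{
			/* skip to end of line; the newline itself is kept */
			while (*src && *src != '\n')
				src++;
		}
		else
			*dst++ = *src++;
	}
	*dst = '\0';
}

int
main(void)
{
	char		buf[256];

	strip_sql_comments("select 'hi'; -- comment\n", buf);
	/* buf now holds "select 'hi'; " followed by the newline */
	printf("%s", buf);

	strip_sql_comments("select '--not a comment' -- but this is\n", buf);
	printf("%s", buf);
	return 0;
}

A real implementation also has to notice when the stripped line is entirely blank, which is exactly the prompt problem discussed in the messages above.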
[ { "msg_contents": "Save yourself (left hand on forehead, right hand on back of head, elbows up)\nThomas, save yourself ;-)\n\n>> -----Original Message-----\n>> From: Thomas Lockhart [mailto:[email protected]]\n>> Sent: Wednesday, October 06, 1999 3:47 PM\n>> To: Tom Lane\n>> Cc: Bruce Momjian; [email protected]; [email protected]\n>> Subject: Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: \n>> PostgreSQL vs Mysql\n>> comparison\n>> \n>> \n>> > I can't get excited about changing this from the standpoint of\n>> > functionality, because AFAICS there is no added functionality.\n>> > But if we're looking bad on a recognized benchmark maybe we\n>> > should do something about it.\n>> \n>> We are looking bad on a benchmark designed to show MySQL in the best\n>> possible light, and to show other DBs at their worst. The maintainers\n>> of that benchmark have no interest in changing that emphasis (e.g. we\n>> are still reported as not supporting HAVING, even though we have\n>> demonstrated to them that we do; this is the same pattern we \n>> have seen\n>> earlier).\n>> \n>> The last time I looked at it, there were ~30% factual errors in the\n>> reported results for Postgres; no telling what errors are there for\n>> other products. imho it is a waste of time to address a bogus\n>> benchmark, unless someone wants to take it up as a hobby. I'm a bit\n>> busy right now ;)\n>> \n>> - Thomas\n>> \n>> -- \n>> Thomas Lockhart\t\t\t\t\n>> [email protected]\n>> South Pasadena, California\n>> \n>> ************\n>> \n", "msg_date": "Wed, 6 Oct 1999 16:16:21 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql c\n\tomparison" } ]
[ { "msg_contents": "Does anybody know how to use UNION and LIMIT together ?\nI want to get 10 rows from publications and 10 rows \nfrom keys.\n\nselect msg_id from publications limit 10 union\nselect key_id from keys limit 10 \nproduces \nERROR: parser: parse error at or near \"union\n\nselect msg_id from publications union \nselect key_id from keys limit 10\nproduces something I wasn't expected\n\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Oct 1999 23:06:26 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "union and LIMIT problem" }, { "msg_contents": "> Does anybody know how to use UNION and LIMIT together ?\n> I want to get 10 rows from publications and 10 rows \n> from keys.\n> \n> select msg_id from publications limit 10 union\n> select key_id from keys limit 10 \n> produces \n> ERROR: parser: parse error at or near \"union\n> \n> select msg_id from publications union \n> select key_id from keys limit 10\n> produces something I wasn't expected\n\nI have on the TODO list:\n\n\t* UNION with LIMIT fails\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Oct 1999 15:23:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n> > Does anybody know how to use UNION and LIMIT together ?\n> > I want to get 10 rows from publications and 10 rows\n> > from keys.\n> >\n> > select msg_id from publications limit 10 union\n> > select key_id from keys limit 10\n> > produces\n> > ERROR: parser: parse error at or near \"union\n> >\n> > select msg_id from publications union\n> > select key_id from keys limit 10\n> > produces something I wasn't expected\n>\n> I have on the TODO list:\n>\n> * UNION with LIMIT fails\n\n and must fail by it's implementation. LIMIT is handled by the\n outermost executor loop, suppressing OFFSET result tuples and\n stopping execution after LIMIT results sent to the client.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 7 Oct 1999 12:00:22 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> >\n> > > Does anybody know how to use UNION and LIMIT together ?\n> > > I want to get 10 rows from publications and 10 rows\n> > > from keys.\n> > >\n> > > select msg_id from publications limit 10 union\n> > > select key_id from keys limit 10\n> > > produces\n> > > ERROR: parser: parse error at or near \"union\n> > >\n> > > select msg_id from publications union\n> > > select key_id from keys limit 10\n> > > produces something I wasn't expected\n> >\n> > I have on the TODO list:\n> >\n> > * UNION with LIMIT fails\n> \n> and must fail by it's implementation. LIMIT is handled by the\n> outermost executor loop, suppressing OFFSET result tuples and\n> stopping execution after LIMIT results sent to the client.\n\nAh, but it works sometimes:\n\n test=> select * from pg_language union select * from pg_language limit 1;\n lanname|lanispl|lanpltrusted|lanplcallfoid|lancompiler\n -------+-------+------------+-------------+-----------\n |t |f |f | 0|/bin/cc \n (1 row)\n\nso we would need to get it working, or disable it from happening.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:36:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": ">>>> * UNION with LIMIT fails\n>> \n>> and must fail by it's implementation. LIMIT is handled by the\n>> outermost executor loop, suppressing OFFSET result tuples and\n>> stopping execution after LIMIT results sent to the client.\n\n> Ah, but it works sometimes:\n\nWell, the real question is what do you mean by \"works\" or \"fails\".\nIn particular, do you think that LIMIT applies to the overall result\nof the whole query, or to any one sub-select?\n\nIIRC, ORDER BY is supposed to apply to the end result (and you can\nonly write it at the very end of the query, not after a sub-select),\nand I'd vote for making LIMIT work the same. In which case the\nexecutor should be fine, and we probably just have a problem with\nthe parser hanging the info on the wrong node of the querytree...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 1999 13:29:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem " }, { "msg_contents": "> >>>> * UNION with LIMIT fails\n> >> \n> >> and must fail by it's implementation. LIMIT is handled by the\n> >> outermost executor loop, suppressing OFFSET result tuples and\n> >> stopping execution after LIMIT results sent to the client.\n> \n> > Ah, but it works sometimes:\n> \n> Well, the real question is what do you mean by \"works\" or \"fails\".\n> In particular, do you think that LIMIT applies to the overall result\n> of the whole query, or to any one sub-select?\n\nShould apply to overall result, like ORDER BY.\n\n> \n> IIRC, ORDER BY is supposed to apply to the end result (and you can\n> only write it at the very end of the query, not after a sub-select),\n> and I'd vote for making LIMIT work the same. 
In which case the\n> executor should be fine, and we probably just have a problem with\n> the parser hanging the info on the wrong node of the querytree...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 13:39:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": "\nCan I assume this is fixed? I see it marked on the TODO list.\n\n> Does anybody know how to use UNION and LIMIT together ?\n> I want to get 10 rows from publications and 10 rows \n> from keys.\n> \n> select msg_id from publications limit 10 union\n> select key_id from keys limit 10 \n> produces \n> ERROR: parser: parse error at or near \"union\n> \n> select msg_id from publications union \n> select key_id from keys limit 10\n> produces something I wasn't expected\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 19:29:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can I assume this is fixed? I see it marked on the TODO list.\n\nYes, I think it is (barring a counterexample from someone ... the\nUNION rewriter is awfully crufty ...).\n\nIt might be nice to allow LIMIT to be attached to subselects rather\nthan just the top level, but I have no idea what it would take in the\nexecutor to implement that. I could handle fixing the parser & planner\nif someone else wants to fix it in the executor.\n\n>> Does anybody know how to use UNION and LIMIT together ?\n>> \n>> select msg_id from publications union \n>> select key_id from keys limit 10\n>> produces something I wasn't expected\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Nov 1999 21:34:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Can I assume this is fixed? I see it marked on the TODO list.\n> \n> Yes, I think it is (barring a counterexample from someone ... the\n> UNION rewriter is awfully crufty ...).\n> \n> It might be nice to allow LIMIT to be attached to subselects rather\n> than just the top level, but I have no idea what it would take in the\n> executor to implement that. I could handle fixing the parser & planner\n> if someone else wants to fix it in the executor.\n\nLet's wait for someone to ask for it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 21:41:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] union and LIMIT problem" }, { "msg_contents": "On Mon, 29 Nov 1999, Bruce Momjian wrote:\n\n> Date: Mon, 29 Nov 1999 19:29:05 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] union and LIMIT problem\n> \n> \n> Can I assume this is fixed? I see it marked on the TODO list.\n> \n\nYes, it is fixed in 6.5.3 by Tom Lane. \n\n\tRegards,\n\n\t\tOleg\n\n> > Does anybody know how to use UNION and LIMIT together ?\n> > I want to get 10 rows from publications and 10 rows \n> > from keys.\n> > \n> > select msg_id from publications limit 10 union\n> > select key_id from keys limit 10 \n> > produces \n> > ERROR: parser: parse error at or near \"union\n> > \n> > select msg_id from publications union \n> > select key_id from keys limit 10\n> > produces something I wasn't expected\n> > \n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 30 Nov 1999 13:31:22 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] union and LIMIT problem" } ]
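As resolved above, LIMIT applies to the overall result of a UNION, the same way ORDER BY does; attaching a LIMIT to an individual branch is not supported at this point. A minimal libpq sketch of the working form follows, assuming a reachable server and the publications/keys tables from the original question (the connection parameters are placeholders).

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	/* connection parameters are placeholders -- adjust for a real setup */
	PGconn	   *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "test", NULL, NULL);
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	/*
	 * The LIMIT belongs to the whole UNION, just as an ORDER BY would:
	 * at most 10 rows come back in total, not 10 per branch.
	 */
	res = PQexec(conn,
				 "SELECT msg_id FROM publications "
				 "UNION "
				 "SELECT key_id FROM keys "
				 "LIMIT 10");

	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("rows returned: %d\n", PQntuples(res));
	else
		fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}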
[ { "msg_contents": "On Oct 5, Roland Roberts mentioned:\n\n> Peter> * \\connect now asks for password when appropriate\n> \n> Does this include the initial connect? I has password authentication\n> enabled and think it would be nice if psql just prompted me rather\n> than failed....\n\nThere was a design flaw in psql in that the -u switch always asked for\nusername *and* password. Those are essentially two separate things: Do you\nwant a different username than the default? and Do you need to enter a\npassword because you use that as authentication?\n\nI resolved that by adding a switch -U to specifiy username and -P to\nrequest a password prompt. If and only if you start psql with -P you will\nget a password prompt any time you reconnect. (This is still not ideal\nsince the new database might not require a password, but there is no way\nto read the pg_hba.conf from the front end obviously.) For backward\ncompatibility the -u switch essentially simulates \"-U ? -P\". (Username \"?\"\nmeans prompt for username. You guys don't use question marks as username,\ndo you?)\n\nI hope that solves it.\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e/\n\n\n", "msg_date": "Wed, 6 Oct 1999 21:53:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Peter\" == Peter Eisentraut <[email protected]> writes:\n\n Peter> (This is still not ideal since the new database might not\n Peter> require a password, but there is no way to read the\n Peter> pg_hba.conf from the front end obviously.)\n\nCouldn't you just prompt for the password *after* the backend\ncomplains about needing it?\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3a\nCharset: noconv\nComment: Processed by Mailcrypt 3.5.4, an Emacs/PGP interface\n\niQCVAwUBN/wSyuoW38lmvDvNAQFa9gP/YMZ9yN7OgR1N+2O2wkSAfmqHRGBocwis\nzw5qg+U/mJop+1OWX6bujY3oOk2GypQGSppCkWgvV5j7sDeBLJ5cNczQLepqZxHB\nABYMAaRr6jE7JPLHua1lWxmp58CIPGz9wp1niLRap2UeFE0jgCUa3z3TsnzkgcmS\nUCaMV9kLlVE=\n=p2+8\n-----END PGP SIGNATURE-----\n", "msg_date": "06 Oct 1999 23:26:03 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 1" } ]
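Roland's suggestion in this thread -- only ask for a password once the server has actually refused the connection -- can be sketched from the client side with libpq roughly as below. This is only an illustration: prompt_password is a stand-in (it echoes the password), the user and database names are placeholders, and a real client would inspect the error before deciding that a missing password was the cause.

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

/* stand-in for a real no-echo password prompt */
static void
prompt_password(char *buf, size_t len)
{
	printf("Password: ");
	fflush(stdout);
	if (fgets(buf, (int) len, stdin) == NULL)
		buf[0] = '\0';
	buf[strcspn(buf, "\n")] = '\0';
}

int
main(void)
{
	char		password[100];
	PGconn	   *conn;

	/* first try without a password; "someuser" and "test" are placeholders */
	conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "test", "someuser", NULL);

	/* only bother the user once the server has turned us away */
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "first attempt failed: %s", PQerrorMessage(conn));
		PQfinish(conn);

		prompt_password(password, sizeof(password));
		conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "test", "someuser", password);
	}

	if (PQstatus(conn) == CONNECTION_OK)
		printf("connected\n");
	else
		fprintf(stderr, "still refused: %s", PQerrorMessage(conn));

	PQfinish(conn);
	return 0;
}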
[ { "msg_contents": "\nBruce Momjian <[email protected]> writes:\n>> > Looks like a bug. Added to TODO list.\n>> \n>> >> I see a todo item\n>> >> * Views with spaces in view name fail when referenced\n>> >> \n>> >> I have another one for you:\n>> >> * Databases with spaces in name fail to be created and destroyed despite\n>> >> responses to the contrary.\n> \n>> IIRC, createdb and destroydb use \"cp -r\" and \"rm -r\" respectively.\n>> Lack of careful quoting in the system calls is probably what's\n>> causing the problem here.\n>> \n>> However, I wonder if it wouldn't be a better idea to forbid funny\n>> characters in things that will become Unix filenames. In particular,\n>> something like CREATE DATABASE \"../../../something\" could have real\n>> bad consequences...\n>\n>I just tried it:\n>\n> test=> create database \"../../pg_hba.conf\"\n> test-> \\g\n> ERROR: Unable to locate path '../../pg_hba.conf'\n> This may be due to a missing environment variable in the server\n>\n>Seems we are safe.\n\n(This is my first time going through the code, so I could be getting alot of\nthings wrong, but here's what I see...)\n\nThe function createdb in backend/commands/dbcommands.c (I assume this is right \nbecause it seems to do the correct thing and actually define the above error\nmessage) tries to get the path to the database using ExpandDatabasePath on\neither dbpath/dbname or just dbname depending on whether dbpath is null (or\nsame as dbname).\n\nExpandDatabasePath in backend/utils/misc/database.c seems to assume that anything\nthat has the separator character ('/' i would assume) is of the form\nenvironmentvariable/<rest> which seems to be rewritten into\n<value of environment variable>/base/<rest>\nSo with your example it fails because it can't find the environment variable\n'..' (although if you had one, it might actually attempt to put it wherever\nthat would point)\n\nIt then makes a command and systems:\nCOPY_CMD DataDir/base/template1/ <return from ExpandDatabasePath>\n\nWhen i tried to do a:\n create database \"`a.sh`\" on a little shell script in the PATH, it decided\nto copy the stuff from template1 into my data/base directory. The shell\nscript touches a file in tmp, which was touched.\n\nIt seems like the current implementation would let someone run a command in\nbackticks that is in the postgres user's path as long as there are no\ndirectory name separators in the command.\n\nStephan Szabo\n", "msg_date": "Wed, 06 Oct 1999 16:17:39 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Database names with spaces" }, { "msg_contents": "Can someone comment on this?\n\n\n> \n> Bruce Momjian <[email protected]> writes:\n> >> > Looks like a bug. Added to TODO list.\n> >> \n> >> >> I see a todo item\n> >> >> * Views with spaces in view name fail when referenced\n> >> >> \n> >> >> I have another one for you:\n> >> >> * Databases with spaces in name fail to be created and destroyed despite\n> >> >> responses to the contrary.\n> > \n> >> IIRC, createdb and destroydb use \"cp -r\" and \"rm -r\" respectively.\n> >> Lack of careful quoting in the system calls is probably what's\n> >> causing the problem here.\n> >> \n> >> However, I wonder if it wouldn't be a better idea to forbid funny\n> >> characters in things that will become Unix filenames. 
In particular,\n> >> something like CREATE DATABASE \"../../../something\" could have real\n> >> bad consequences...\n> >\n> >I just tried it:\n> >\n> > test=> create database \"../../pg_hba.conf\"\n> > test-> \\g\n> > ERROR: Unable to locate path '../../pg_hba.conf'\n> > This may be due to a missing environment variable in the server\n> >\n> >Seems we are safe.\n> \n> (This is my first time going through the code, so I could be getting alot of\n> things wrong, but here's what I see...)\n> \n> The function createdb in backend/commands/dbcommands.c (I assume this is right \n> because it seems to do the correct thing and actually define the above error\n> message) tries to get the path to the database using ExpandDatabasePath on\n> either dbpath/dbname or just dbname depending on whether dbpath is null (or\n> same as dbname).\n> \n> ExpandDatabasePath in backend/utils/misc/database.c seems to assume that anything\n> that has the separator character ('/' i would assume) is of the form\n> environmentvariable/<rest> which seems to be rewritten into\n> <value of environment variable>/base/<rest>\n> So with your example it fails because it can't find the environment variable\n> '..' (although if you had one, it might actually attempt to put it wherever\n> that would point)\n> \n> It then makes a command and systems:\n> COPY_CMD DataDir/base/template1/ <return from ExpandDatabasePath>\n> \n> When i tried to do a:\n> create database \"`a.sh`\" on a little shell script in the PATH, it decided\n> to copy the stuff from template1 into my data/base directory. The shell\n> script touches a file in tmp, which was touched.\n> \n> It seems like the current implementation would let someone run a command in\n> backticks that is in the postgres user's path as long as there are no\n> directory name separators in the command.\n> \n> Stephan Szabo\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 17:07:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database names with spaces" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this?\n\nPeter fixed all that stuff for 7.0. No backticks allowed in database\npaths anymore ;-). I do recall having tested a DB name with a space\nin it just recently...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 May 2000 19:08:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database names with spaces " }, { "msg_contents": "On Wed, May 31, 2000 at 07:08:57PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Can someone comment on this?\n> \n> Peter fixed all that stuff for 7.0. No backticks allowed in database\n> paths anymore ;-). \n\nHey, Peter does a _lot_ but I get credit for excluding backticks from\nDB path, on March 7. (Or is there seperate code to deal with the dbname\nitself? That'd be Peter, then)\n\nRoss\n(Hmm, why do I suddenly feel like the younger brother, who's trophy\nfor 'honorable mention: state bowling championship' is completely\novershadowed on the fireplace mantel by his older brother's gazillion\ntrack and football trophies? ;-)\n", "msg_date": "Thu, 1 Jun 2000 00:09:20 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database names with spaces" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n>> Peter fixed all that stuff for 7.0. No backticks allowed in database\n>> paths anymore ;-). \n\n> Hey, Peter does a _lot_ but I get credit for excluding backticks from\n> DB path, on March 7.\n\nHumblest apologies --- I remembered Peter complaining about the space-\nin-DB-name issue, and thought he'd done all the related work.\n\nCredit is a slippery thing when there are so many people working on\nthe code... but I don't think anyone wants to slight anyone else's\ncontribution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jun 2000 01:38:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database names with spaces " }, { "msg_contents": "On Thu, Jun 01, 2000 at 01:38:30AM -0400, Tom Lane wrote:\n> \n> > Hey, Peter does a _lot_ but I get credit for excluding backticks from\n> > DB path, on March 7.\n> \n> Humblest apologies --- I remembered Peter complaining about the space-\n> in-DB-name issue, and thought he'd done all the related work.\n> \n> Credit is a slippery thing when there are so many people working on\n> the code... but I don't think anyone wants to slight anyone else's\n> contribution.\n\nHey, it's truly minor, no slight felt. I hoped to be humorous, so as\nnot to appear petty, but it's too late at night, as well. To bed!\n\nLater,\nRoss\nWell, it helps to be a petty nitpicker when dealing with security issues,\nright?\n", "msg_date": "Thu, 1 Jun 2000 01:05:30 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database names with spaces" } ]
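The hole behind this thread was that the database name was pasted into a shell command ("cp -r" / "rm -r") without any checking, so backticks in the name were executed by the shell, and ".." could climb out of the data directory. The later fix rejects dangerous characters before anything reaches system(). The C sketch below shows that kind of check; the function name and the exact character list are illustrative, not the actual backend code. Note that a plain space passes, since names with spaces are legitimate and only need proper quoting elsewhere.

#include <stdio.h>
#include <string.h>

/*
 * Reject database names that could be abused once they are pasted into
 * a shell command or a filesystem path.  Illustrative only.
 */
static int
dbname_is_safe(const char *name)
{
	const char *forbidden = "`'\"\\$;&|<>(){}";

	if (name[0] == '\0')
		return 0;
	if (strstr(name, "..") != NULL)			/* no climbing out of the data dir */
		return 0;
	if (strchr(name, '/') != NULL)			/* no path separators */
		return 0;
	if (strpbrk(name, forbidden) != NULL)	/* no shell metacharacters */
		return 0;
	return 1;
}

int
main(void)
{
	const char *tests[] = {"mydb", "my db", "`a.sh`", "../../pg_hba.conf"};
	size_t		i;

	for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		printf("%-20s %s\n", tests[i],
			   dbname_is_safe(tests[i]) ? "ok" : "rejected");
	return 0;
}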
[ { "msg_contents": "\nI forgot to mention that I was looking at 6.5.2 source, so\nsomeone might have changed it, and if they have I'm sorry\nfor hitting the list. (I have a more recent snapshot on\nmy machine at home, but I can't seem to get in right now\nto check).\n\nStephan Szabo\n\n", "msg_date": "Wed, 06 Oct 1999 16:19:43 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Database names with space (II)" } ]
[ { "msg_contents": "gmake clean + initdb (also changed) required.\n\nWAL still doesn't do anything yet, but it eats 16Mb of disk space on bootstrap -:)\n\nDatabase system shutdown has changed!\n\nNow, after receiving SIGTERM, the postmaster disallows new\nconnections but lets active backends finish their work,\nand shuts the database down only after all of them have terminated\n(by client request) - Smart Shutdown.\n\nSIGINT: the postmaster disallows new connections,\nsends all active backends SIGTERM (abort+exit),\nwaits for the children to exit and shuts the database down\n- Fast Shutdown.\n\nSIGQUIT: the postmaster terminates its children with SIGUSR1\nand exits (without shutting the database down)\n- Immediate Shutdown (results in recovery on startup).\n\nI have started to clean up the backend initialization code: it MUST use\nlocking when reading catalog relations, and must set up MyProc before\nacquiring any spinlocks (except for ProcStructLock).\n\nAlso, FATAL is now ERROR + exit.\n\nVadim\n", "msg_date": "Thu, 07 Oct 1999 06:28:32 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "WAL Bootstrap/Startup/Shutdown committed..." } ]
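Vadim's three shutdown modes map directly onto signals: SIGTERM for a smart shutdown, SIGINT for a fast one, SIGQUIT for an immediate exit with recovery on the next startup. The small C helper below just sends the chosen signal to the postmaster, assuming its PID is already known; it is only a sketch of the interface described above (the pg_ctl wrapper does the same job where it is available).

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char **argv)
{
	pid_t		pid;
	int			sig = SIGTERM;		/* default: smart shutdown */

	if (argc < 2)
	{
		fprintf(stderr, "usage: %s postmaster-pid [smart|fast|immediate]\n",
				argv[0]);
		return 1;
	}
	pid = (pid_t) atoi(argv[1]);

	if (argc > 2 && strcmp(argv[2], "fast") == 0)
		sig = SIGINT;				/* abort active backends, then shut down */
	else if (argc > 2 && strcmp(argv[2], "immediate") == 0)
		sig = SIGQUIT;				/* exit at once; recovery runs on restart */

	if (kill(pid, sig) != 0)
	{
		perror("kill");
		return 1;
	}
	return 0;
}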
[ { "msg_contents": "\n\nIs all in next example good? See:\n\nabil=> CREATE USER myname WITH PASSWORD BuBuBuBu;\nCREATE USER ^^^^^^^^^^ \n\nabil=> select * from pg_shadow;\nusename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil\nmyname | 5808|f |f |f |f |bubububu |\n ^^^^^^^^ \n(4 rows)\n\n\nWhy is in pg_shadow.passwd low case only?\n\n\t\t\t\t\t\t\tZakkr\n\n", "msg_date": "Thu, 7 Oct 1999 11:54:42 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "password in pg_shadow" }, { "msg_contents": "> \n> \n> Is all in next example good? See:\n> \n> abil=> CREATE USER myname WITH PASSWORD BuBuBuBu;\n> CREATE USER ^^^^^^^^^^ \n> \n> abil=> select * from pg_shadow;\n> usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil\n> myname | 5808|f |f |f |f |bubububu |\n> ^^^^^^^^ \n> (4 rows)\n> \n> \n> Why is in pg_shadow.passwd low case only?\n\nTry putting quotes aroud the password.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:17:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] password in pg_shadow" }, { "msg_contents": "> \n> \n> Is all in next example good? See:\n> \n> abil=> CREATE USER myname WITH PASSWORD BuBuBuBu;\n> CREATE USER ^^^^^^^^^^ \n> \n> abil=> select * from pg_shadow;\n> usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil\n> myname | 5808|f |f |f |f |bubububu |\n> ^^^^^^^^ \n> (4 rows)\n> \n> \n> Why is in pg_shadow.passwd low case only?\n\nSorry, try putting _double_ quotes aroud the password.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:17:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] password in pg_shadow" } ]
[ { "msg_contents": "\n\nOn Thu, 7 Oct 1999, Peter Mount wrote:\n\n> I think its because the parser forces everything outside of quotes to\n> being lowercase.\n> \n\n Yes, it is right, but in tutorial is nothing about it, In tutorial is\nthis bad example:\n\n\tCREATE USER davide WITH PASSWORD jw8s0F4 \n\nPassword is very important....\n\n\t\t\t\t\t\tZakkr\n\n", "msg_date": "Thu, 7 Oct 1999 12:48:11 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] password in pg_shadow" }, { "msg_contents": "I think its because the parser forces everything outside of quotes to\nbeing lowercase.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Zakkr [mailto:[email protected]]\nSent: 07 October 1999 10:55\nTo: pgsql-hackers\nSubject: [HACKERS] password in pg_shadow\n\n\n\n\nIs all in next example good? See:\n\nabil=> CREATE USER myname WITH PASSWORD BuBuBuBu;\nCREATE USER ^^^^^^^^^^ \n\nabil=> select * from pg_shadow;\nusename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd\n|valuntil\nmyname | 5808|f |f |f |f |bubububu |\n ^^^^^^^^ \n(4 rows)\n\n\nWhy is in pg_shadow.passwd low case only?\n\n\t\t\t\t\t\t\tZakkr\n\n\n************\n", "msg_date": "Thu, 7 Oct 1999 12:14:05 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] password in pg_shadow" }, { "msg_contents": "> \n> \n> On Thu, 7 Oct 1999, Peter Mount wrote:\n> \n> > I think its because the parser forces everything outside of quotes to\n> > being lowercase.\n> > \n> \n> Yes, it is right, but in tutorial is nothing about it, In tutorial is\n> this bad example:\n> \n> \tCREATE USER davide WITH PASSWORD jw8s0F4 \n> \n> Password is very important....\n> \n> \t\t\t\t\t\tZakkr\n\nFixed documentation. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Oct 1999 12:38:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] password in pg_shadow" } ]
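The fix in this thread is simply quoting: an unquoted word is folded to lower case like any other identifier, so a mixed-case password has to be written in double quotes with this CREATE USER syntax. A short libpq sketch of the corrected statement follows; the connection parameters are placeholders, the double-quote syntax follows Bruce's suggestion for this era, and later releases take the password as a single-quoted string literal instead.

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	/* connection parameters are placeholders */
	PGconn	   *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "abil", NULL, NULL);
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}

	/*
	 * The double quotes keep the mixed case; without them the word is
	 * folded to lower case and "bubububu" ends up in pg_shadow.
	 */
	res = PQexec(conn, "CREATE USER myname WITH PASSWORD \"BuBuBuBu\"");

	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		fprintf(stderr, "CREATE USER failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}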
[ { "msg_contents": "subscribe\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://staff.dsnet.it/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\nYou are welcome, sir, to Cyprus. -- Shakespeare's \"Othello\"\n", "msg_date": "Thu, 07 Oct 1999 14:43:30 +0300", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "\nHi,\n\nDoes anybody have a trigger for setting privileges on attributes (columns)? Or does some other\nsolution exist? (Please) - I need it for my project, and if it doesn't exist I\nwill have to write it myself :-((\n\n\t\t\t\t\t\tZakkr\n\n", "msg_date": "Thu, 7 Oct 1999 15:53:07 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Privilege for attribute (columns)" } ]
[ { "msg_contents": ">\n> Dear Jan,\n\n Don't know why you're asking me - I'm not involved in JDBC\n nor am I in the multibyte stuff.\n\n I've included the hackers list into this response - maybe\n someone else can comment on it?\n\n\nJan\n\n>\n> I am using Java with JDBC (from a PC using VisualCafe Database Edition)\n> to run an application using PostgreSQL 6.2.1 (on a Sparc). Everything\n> is running fine except for a string problem. The Postmaster says (when\n> loading data from the database into the text fields of the GUI):\n> \"ERROR: MultiByte strings (MB) must be enabled to use this function\"\n>\n> The DOS prompt (where the Visual Cafe development environment runs an\n> Applet) tells me:\n> \"The maximum width size for column 3 is: 17\"\n>\n> And this is for a varchar(43) field. When I enter a string into the\n> column from my Java text field I get the diagnostic:\n> \"Invalid value for the column: product_name\"\n>\n> I figure that all I have to do is enable MB but I can't find it. If you\n> have not encountered this problem before, do you know of a user group\n> where I can post a question.\n>\n> Sincerely,\n>\n> Allan in Belgique\n>\n> [email protected]\n>\n>\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 7 Oct 1999 16:11:15 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Help" } ]
[ { "msg_contents": "The column width bug was fixed in 6.5.2, so it should return the correct\nresult. However, the 6.5.2 driver won't work with 6.2.1, as they use a\ndifferent protocol.\n\nAs for MultiByte strings, you need to compile the backend to accept them\n(someone correct me if I'm wrong here).\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: 07 October 1999 15:11\nTo: [email protected]\nCc: [email protected]\nSubject: [HACKERS] Re: PostgreSQL Help\n\n\n>\n> Dear Jan,\n\n Don't know why you're asking me - I'm not involved in JDBC\n nor am I in the multibyte stuff.\n\n I've included the hackers list into this response - maybe\n someone else can comment on it?\n\n\nJan\n\n>\n> I am using Java with JDBC (from a PC using VisualCafe Database\nEdition)\n> to run an application using PostgreSQL 6.2.1 (on a Sparc). Everything\n> is running fine except for a string problem. The Postmaster says (when\n> loading data from the database into the text fields of the GUI):\n> \"ERROR: MultiByte strings (MB) must be enabled to use this function\"\n>\n> The DOS prompt (where the Visual Cafe development environment runs an\n> Applet) tells me:\n> \"The maximum width size for column 3 is: 17\"\n>\n> And this is for a varchar(43) field. When I enter a string into the\n> column from my Java text field I get the diagnostic:\n> \"Invalid value for the column: product_name\"\n>\n> I figure that all I have to do is enable MB but I can't find it. If\nyou\n> have not encountered this problem before, do you know of a user group\n> where I can post a question.\n>\n> Sincerely,\n>\n> Allan in Belgique\n>\n> [email protected]\n>\n>\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n************\n", "msg_date": "Thu, 7 Oct 1999 15:50:25 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: PostgreSQL Help" }, { "msg_contents": "Peter,\n\n> As for MultiByte strings, you need to compile the backend to accept them\n> (someone correct me if I'm wrong here).\n\nI suspect he is not running PostgreSQL 6.2.1 becasue the multibyte\ncapability has been introduced since 6.3.2. Anyway, the particular\nmessage:\n\n\t\"ERROR: MultiByte strings (MB) must be enabled to use this function\"\n\nis raised if getdatabaseencoding() is called and the backend is not\ncompiled with MB option as you said. But the question is: does\nthe standard PostgreSQL JDBC driver call getdatabaseencoding()?\n---\nTatsuo Ishii\n", "msg_date": "Fri, 08 Oct 1999 10:34:39 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: PostgreSQL Help " } ]
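Following Tatsuo's point, a client or driver can find out whether the backend was built with multibyte support by probing getdatabaseencoding() and treating the error case as "no multibyte". A hedged libpq sketch follows; the connection parameters are placeholders, and lumping every error into "not available" is a simplification.

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	/* connection parameters are placeholders */
	PGconn	   *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "template1", NULL, NULL);
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}

	/* backends compiled without MULTIBYTE reject this function */
	res = PQexec(conn, "SELECT getdatabaseencoding()");

	if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
		printf("server encoding: %s\n", PQgetvalue(res, 0, 0));
	else
		printf("multibyte support not available: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}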
[ { "msg_contents": "> tested your patch and there was no change in result. I think it \n> wouldn't be nice if this will point out a bug in the perl pg driver \n> because I can't imagine that you would like to do such things in \n> there ...\n> \n> the new crash-me tests results are sent to monty so I think he will \n> put them online tomorrow (today for you I think). I also did a test \n> run on oracle and on a microsoft sql 7 server on windows nt (oracle \n> on linux).\n\nEnclosed is a patch that shows our perl interface can't handle '--'\ncomments, even though psql and the backend directly can handle them.\n\nTo add complexity to this, the backend -d3 log from the perl test\nsession shows the same query that works perfectly in a direct backend\nconnection.\n\nCan anyone suggest a cause for this?\n\n---------------------------------------------------------------------------\n\t\n\tStartTransactionCommand\n\tquery: CREATE TABLE person (id int4, name char(16)) --test\n\tERROR: parser: parse error at or near \"--\"\n\tAbortCurrentTransaction\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n*** ./interfaces/perl5/test.pl.orig\tThu Oct 7 20:25:13 1999\n--- ./interfaces/perl5/test.pl\tThu Oct 7 20:41:40 1999\n***************\n*** 147,153 ****\n \n ######################### create and insert into table\n \n! $result = $conn->exec(\"CREATE TABLE person (id int4, name char(16))\");\n die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;\n my $cmd = $result->cmdStatus;\n ( \"CREATE\" eq $cmd )\n--- 147,153 ----\n \n ######################### create and insert into table\n \n! $result = $conn->exec(\"CREATE TABLE person (id int4, name char(16)) -- /* test*/\");\n die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;\n my $cmd = $result->cmdStatus;\n ( \"CREATE\" eq $cmd )", "msg_date": "Thu, 7 Oct 1999 21:47:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql comparison" } ]
[ { "msg_contents": "I have cleaned up the SSL -is mess. Flag is no -l, with check that if\nthey use -l, they also must use -i or they get an error message on\nstartup.\n\nI fixed some -d debug handling that was broken in postmaster and\npostgres programs.\n\nAnother PERL variable name fix.\n\nDocumentation updates.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 00:14:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Cleanup of debugging flags, SSL" }, { "msg_contents": "\n> I have cleaned up the SSL -is mess. Flag is no -l, with check that if\n ^^\n\t\t\t\t\t now a -l flag for SSL\n\nSorry.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 00:26:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Cleanup of debugging flags, SSL" } ]
[ { "msg_contents": "> > tested your patch and there was no change in result. I think it \n> > wouldn't be nice if this will point out a bug in the perl pg driver \n> > because I can't imagine that you would like to do such things in \n> > there ...\n> > \n> > the new crash-me tests results are sent to monty so I think he will \n> > put them online tomorrow (today for you I think). I also did a test \n> > run on oracle and on a microsoft sql 7 server on windows nt (oracle \n> > on linux).\n> \n> Enclosed is a patch that shows our perl interface can't handle '--'\n> comments, even though psql and the backend directly can handle them.\n> \n> To add complexity to this, the backend -d3 log from the perl test\n> session shows the same query that works perfectly in a direct backend\n> connection.\n> \n> Can anyone suggest a cause for this?\n\nOK, fix attached. Seems our \"--\" comments required a newline on the\nend, which was not being done in interfaces like Perl. Added a test in\nthe perl code for the trailing comments, and patched scan.l.\n\nSeems this should only be applied to 6.6. Applied to 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? doc/src/sgml/install.htm\n? src/log\n? src/config.log\n? src/config.cache\n? src/config.status\n? src/GNUmakefile\n? src/Makefile.global\n? src/Makefile.custom\n? src/backend/fmgr.h\n? src/backend/parse.h\n? src/backend/postgres\n? src/backend/global1.bki.source\n? src/backend/local1_template1.bki.source\n? src/backend/global1.description\n? src/backend/local1_template1.description\n? src/backend/bootstrap/bootparse.c\n? src/backend/bootstrap/bootstrap_tokens.h\n? src/backend/bootstrap/bootscanner.c\n? src/backend/catalog/genbki.sh\n? src/backend/catalog/global1.bki.source\n? src/backend/catalog/global1.description\n? src/backend/catalog/local1_template1.bki.source\n? src/backend/catalog/local1_template1.description\n? src/backend/port/Makefile\n? src/backend/utils/Gen_fmgrtab.sh\n? src/backend/utils/fmgr.h\n? src/backend/utils/fmgrtab.c\n? src/bin/cleardbdir/cleardbdir\n? src/bin/createdb/createdb\n? src/bin/createlang/createlang\n? src/bin/createuser/createuser\n? src/bin/destroydb/destroydb\n? src/bin/destroylang/destroylang\n? src/bin/destroyuser/destroyuser\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_dump/Makefile\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pg_version/Makefile\n? src/bin/pg_version/pg_version\n? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/Makefile\n? src/bin/psql/psql\n? src/include/version.h\n? src/include/config.h\n? src/interfaces/ecpg/lib/Makefile\n? src/interfaces/ecpg/lib/libecpg.so.3.0.1\n? src/interfaces/ecpg/lib/libecpg.so.3.0.3\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgtcl/Makefile\n? src/interfaces/libpgtcl/libpgtcl.so.2.0\n? src/interfaces/libpq/Makefile\n? src/interfaces/libpq/libpq.so.2.0\n? src/interfaces/libpq++/Makefile\n? src/interfaces/libpq++/libpq++.so.3.0\n? src/interfaces/odbc/GNUmakefile\n? src/interfaces/odbc/Makefile.global\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? 
src/interfaces/perl5/Makefile\n? src/lextest/lex.yy.c\n? src/lextest/lextest\n? src/pl/plpgsql/src/Makefile\n? src/pl/plpgsql/src/mklang.sql\n? src/pl/plpgsql/src/pl_gram.c\n? src/pl/plpgsql/src/pl.tab.h\n? src/pl/plpgsql/src/pl_scan.c\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/mkMakefile.tcldefs.sh\n? src/pl/tcl/Makefile.tcldefs\n? src/test/regress/regress.out\n? src/test/regress/regression.diffs\n? src/test/regress/expected/copy.out\n? src/test/regress/expected/create_function_1.out\n? src/test/regress/expected/create_function_2.out\n? src/test/regress/expected/misc.out\n? src/test/regress/expected/constraints.out\n? src/test/regress/expected/install_plpgsql.out\n? src/test/regress/results/boolean.out\n? src/test/regress/results/char.out\n? src/test/regress/results/name.out\n? src/test/regress/results/varchar.out\n? src/test/regress/results/text.out\n? src/test/regress/results/strings.out\n? src/test/regress/results/int2.out\n? src/test/regress/results/int4.out\n? src/test/regress/results/int8.out\n? src/test/regress/results/oid.out\n? src/test/regress/results/float4.out\n? src/test/regress/results/float8.out\n? src/test/regress/results/numerology.out\n? src/test/regress/results/point.out\n? src/test/regress/results/lseg.out\n? src/test/regress/results/box.out\n? src/test/regress/results/path.out\n? src/test/regress/results/polygon.out\n? src/test/regress/results/circle.out\n? src/test/regress/results/geometry.out\n? src/test/regress/results/timespan.out\n? src/test/regress/results/datetime.out\n? src/test/regress/results/reltime.out\n? src/test/regress/results/abstime.out\n? src/test/regress/results/tinterval.out\n? src/test/regress/results/horology.out\n? src/test/regress/results/inet.out\n? src/test/regress/results/comments.out\n? src/test/regress/results/oidjoins.out\n? src/test/regress/results/type_sanity.out\n? src/test/regress/results/opr_sanity.out\n? src/test/regress/results/create_function_1.out\n? src/test/regress/results/create_type.out\n? src/test/regress/results/create_table.out\n? src/test/regress/results/create_function_2.out\n? src/test/regress/results/constraints.out\n? src/test/regress/results/triggers.out\n? src/test/regress/results/copy.out\n? src/test/regress/results/onek.data\n? src/test/regress/sql/copy.sql\n? src/test/regress/sql/create_function_1.sql\n? src/test/regress/sql/create_function_2.sql\n? src/test/regress/sql/misc.sql\n? src/test/regress/sql/constraints.sql\n? src/test/regress/sql/install_plpgsql.sql\n? src/tools/backend/flow.eps\n? src/tools/backend/flow.ps\n? src/tools/backend/flow.png\n? src/tools/backend/flow.tif\nIndex: src/backend/parser/scan.l\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/scan.l,v\nretrieving revision 1.57\ndiff -c -r1.57 scan.l\n*** src/backend/parser/scan.l\t1999/09/28 03:41:36\t1.57\n--- src/backend/parser/scan.l\t1999/10/08 04:58:23\n***************\n*** 167,173 ****\n \n param\t\t\t\\${integer}\n \n! comment\t\t\t(\"--\"|\"//\").*\\n\n \n space\t\t\t[ \\t\\n\\f]\n other\t\t\t.\n--- 167,173 ----\n \n param\t\t\t\\${integer}\n \n! 
comment\t\t\t(\"--\"|\"//\").*\n \n space\t\t\t[ \\t\\n\\f]\n other\t\t\t.\nIndex: src/interfaces/perl5/test.pl\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/interfaces/perl5/test.pl,v\nretrieving revision 1.9\ndiff -c -r1.9 test.pl\n*** src/interfaces/perl5/test.pl\t1998/09/27 19:12:26\t1.9\n--- src/interfaces/perl5/test.pl\t1999/10/08 04:58:30\n***************\n*** 147,153 ****\n \n ######################### create and insert into table\n \n! $result = $conn->exec(\"CREATE TABLE person (id int4, name char(16))\");\n die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;\n my $cmd = $result->cmdStatus;\n ( \"CREATE\" eq $cmd )\n--- 147,153 ----\n \n ######################### create and insert into table\n \n! $result = $conn->exec(\"CREATE TABLE person (id int4, name char(16)) -- test\");\n die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;\n my $cmd = $result->cmdStatus;\n ( \"CREATE\" eq $cmd )", "msg_date": "Fri, 8 Oct 1999 01:03:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql comparison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> *** src/backend/parser/scan.l\t1999/09/28 03:41:36\t1.57\n> --- src/backend/parser/scan.l\t1999/10/08 04:58:23\n> ***************\n> *** 167,173 ****\n> \n> param\t\t\t\\${integer}\n> \n> ! comment\t\t\t(\"--\"|\"//\").*\\n\n> \n> space\t\t\t[ \\t\\n\\f]\n> other\t\t\t.\n> --- 167,173 ----\n> \n> param\t\t\t\\${integer}\n> \n> ! comment\t\t\t(\"--\"|\"//\").*\n> \n> space\t\t\t[ \\t\\n\\f]\n> other\t\t\t.\n\nAh, so the problem was that the perl interface didn't append a newline?\nGood catch. I don't like this fix, however, since I fear it will\nalter behavior for the case where there is an embedded newline in the\nquery buffer. For example\n\tCREATE TABLE mytab -- comment \\n (f1 int)\ncan be sent to the backend as one string (though not via psql). With\nthe above change in scan.l I think the comment will be taken to include\neverything from -- to the end of the buffer, which is wrong.\n\nA better solution IMHO is to leave scan.l as it was and instead\nalways append a \\n to the presented query string before we parse.\n\nBTW, might be a good idea to add \\r to that list of \"space\" characters\nso we don't mess up on DOS-style newlines (\\r\\n).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Oct 1999 09:47:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > *** src/backend/parser/scan.l\t1999/09/28 03:41:36\t1.57\n> > --- src/backend/parser/scan.l\t1999/10/08 04:58:23\n> > ***************\n> > *** 167,173 ****\n> > \n> > param\t\t\t\\${integer}\n> > \n> > ! comment\t\t\t(\"--\"|\"//\").*\\n\n> > \n> > space\t\t\t[ \\t\\n\\f]\n> > other\t\t\t.\n> > --- 167,173 ----\n> > \n> > param\t\t\t\\${integer}\n> > \n> > ! comment\t\t\t(\"--\"|\"//\").*\n> > \n> > space\t\t\t[ \\t\\n\\f]\n> > other\t\t\t.\n> \n> Ah, so the problem was that the perl interface didn't append a newline?\n> Good catch. I don't like this fix, however, since I fear it will\n> alter behavior for the case where there is an embedded newline in the\n> query buffer. For example\n> \tCREATE TABLE mytab -- comment \\n (f1 int)\n\nNo problem. 
I just added test code to see if it works, and it does:\n\n\t$result = $conn->exec(\n\t\"CREATE TABLE person (id int4, -- test\\n name char(16)) -- test\"); \n\nTests embedded newline, and comment without newline.\n\nI will commit this so it will always be tested by the perl test code.\n\n\n> can be sent to the backend as one string (though not via psql). With\n> the above change in scan.l I think the comment will be taken to include\n> everything from -- to the end of the buffer, which is wrong.\n\nNo, seems lex only goes the end-of-line unless you specifically say \\n.\n\n> \n> A better solution IMHO is to leave scan.l as it was and instead\n> always append a \\n to the presented query string before we parse.\n\nProblem here is that perl is not the only interface that would have this\nproblem. In fact, I am not sure why libpq doesn't have this problem. \nMaybe it does. Anyway, changing all the interfaces would be a pain, and\nnon-portable to older releases.\n\n> \n> BTW, might be a good idea to add \\r to that list of \"space\" characters\n> so we don't mess up on DOS-style newlines (\\r\\n).\n\nInteresting idea. I tried that, but the problem is things like this:\n\n\txqliteral [\\\\](.|\\n)\n\nIf I change it to:\n\n\txqliteral [\\\\](.|\\n|\\r)\n\nthen \\r\\n is not going to work, and if I change it to:\n\n\txqliteral [\\\\](.|\\n|\\r)+\n\nThen \\n\\n is going to be accepted when it shouldn't. Seems I will have\nto leave it alone for now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 12:30:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Ah, so the problem was that the perl interface didn't append a newline?\n>> Good catch. I don't like this fix, however, since I fear it will\n>> alter behavior for the case where there is an embedded newline in the\n>> query buffer.\n\n> I will commit this so it will always be tested by the perl test code.\n\nBut how often do we run that?\n\n> No, seems lex only goes the end-of-line unless you specifically say \\n.\n\nOK, I see in the flex manual that \".\" matches everything except newline,\nso I guess it will work. At least with flex. But \".*\" patterns with\nno clearly defined terminator always make me itch --- it doesn't take\nmuch change to get the wrong result.\n\n>> A better solution IMHO is to leave scan.l as it was and instead\n>> always append a \\n to the presented query string before we parse.\n\n> Problem here is that perl is not the only interface that would have this\n> problem. In fact, I am not sure why libpq doesn't have this problem. \n\nNo, I wasn't suggesting patching the perl interface; I was suggesting\nchanging the backend, ie, adding the \\n to the received query in\npostgres.c just before we hand it off to the parser.\n\n>> BTW, might be a good idea to add \\r to that list of \"space\" characters\n>> so we don't mess up on DOS-style newlines (\\r\\n).\n\n> Interesting idea. I tried that, but the problem is things like this:\n> \txqliteral [\\\\](.|\\n)\n\nHmm, didn't think about what to do with \\r inside literals. I agree,\nit's not worth trying to be smart about those, so I suppose ignoring\nthem outside literals would be inconsistent. 
Still, how many people\ntry to enter newlines within literals? Adding \\r to the whitespace\nset and nothing else might still be a useful compatibility gain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Oct 1999 18:53:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Ah, so the problem was that the perl interface didn't append a newline?\n> >> Good catch. I don't like this fix, however, since I fear it will\n> >> alter behavior for the case where there is an embedded newline in the\n> >> query buffer.\n> \n> > I will commit this so it will always be tested by the perl test code.\n> \n> But how often do we run that?\n\nWell, at least it is there now, and I will do --with-perl here, so it\nwill be run.\n\n> > No, seems lex only goes the end-of-line unless you specifically say \\n.\n> \n> OK, I see in the flex manual that \".\" matches everything except newline,\n> so I guess it will work. At least with flex. But \".*\" patterns with\n> no clearly defined terminator always make me itch --- it doesn't take\n> much change to get the wrong result.\n\nTrue, but it fixes the problem.\n\n> \n> >> A better solution IMHO is to leave scan.l as it was and instead\n> >> always append a \\n to the presented query string before we parse.\n> \n> > Problem here is that perl is not the only interface that would have this\n> > problem. In fact, I am not sure why libpq doesn't have this problem. \n> \n> No, I wasn't suggesting patching the perl interface; I was suggesting\n> changing the backend, ie, adding the \\n to the received query in\n> postgres.c just before we hand it off to the parser.\n\nI try to avoid hacks like that if I can. Removing \\n from the comment\ntermination is much clearer and more limited.\n\n> \n> >> BTW, might be a good idea to add \\r to that list of \"space\" characters\n> >> so we don't mess up on DOS-style newlines (\\r\\n).\n> \n> > Interesting idea. I tried that, but the problem is things like this:\n> > \txqliteral [\\\\](.|\\n)\n> \n> Hmm, didn't think about what to do with \\r inside literals. I agree,\n> it's not worth trying to be smart about those, so I suppose ignoring\n> them outside literals would be inconsistent. Still, how many people\n> try to enter newlines within literals? Adding \\r to the whitespace\n> set and nothing else might still be a useful compatibility gain.\n\nAdded \\r to the {space} pattern.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 21:34:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [PHP3] Re: PostgreSQL vs Mysql\n\tcomparison" } ]
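The scan.l change in this thread makes the terminating newline optional for "--" comments, which is what lets an interface that does not append a final newline (the Perl driver here) send commented queries. A small SQL illustration of the two cases the patched test.pl exercises, reusing the same person table from the test: a comment followed by an embedded newline, and a trailing comment with nothing after it in the query string.

    CREATE TABLE person (id int4,        -- comment, embedded newline follows
                         name char(16)); -- trailing comment, no newline after it

    SELECT id, name
      FROM person                        -- comments inside a query body parse the same way
     WHERE id = 1;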
[ { "msg_contents": "\nI'm putting together a comparison chart for mysql vs PostgreSQL. The\nbeginnings are at: http://hub.org/~vev/pgsql-my.html. Take a look \nand tell me what's incorrect, missing, etc for the PostgreSQL stuff \nfor version 6.5.2. I've set up a seperate mailbox for it so none of\nit gets lost. Send changes to: [email protected]. The address is\nalso on the web page. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 8 Oct 1999 07:16:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "mysql-PostgreSQL comparisons" } ]
[ { "msg_contents": "Co-developers,\n\n I've prepared some things and it's time now to start\n contributing to this subproject.\n\n What's done so far (I've included a little SQL script at the\n end that show's what's working):\n\n - The parser recognizes the new syntax for constraint\n triggers and hands down all the new attributes into the\n utility function for CREATE TRIGGER.\n\n - The utility function for CREATE TRIGGER handles all the\n new attributes so constraints can be defined with a bunch\n of\n\n CREATE CONSTRAINT TRIGGER ...\n\n statements after CREATE TABLE.\n\n - The parser recognizes the new SET CONSTRAINTS command.\n\n - The trigger manager handles trigger deferred states\n correctly so that circular constraint checks would be\n possible by deferring trigger invocation until COMMIT.\n Also it traces multiple operations on the same row and\n invokes only that trigger that is defined for the\n resulting operation if all operations during a transaction\n are condensed.\n\n - In backend/utils/adt/ri_triggers.c are some support\n routines and the first real trigger procedures that\n implement:\n\n FOREIGN KEY ... REFERENCES ... MATCH FULL\n (checks for FK existance in PK table on INSERT and\n UPDATE)\n\n FOREIGN KEY ... MATCH FULL ... ON DELETE CASCADE\n (constraint deletes references from FK table on\n DELETE of PK row)\n\n I hope that's enough example implementation to get started\n for you. If not, ask, ask, ask.\n\n What we need next (what y'all shall do) is:\n\n 1. Add all functionality to ri_triggers.c required for\n\n ON UPDATE CASCADE\n ON DELETE SET NULL\n ON UPDATE SET NULL\n ON DELETE SET DEFAULT\n ON UPDATE SET DEFAULT\n\n 2. Add full FOREIGN KEY syntax to the parser and arrange\n that the appropriate CREATE CONSTRAINT TRIGGER statements\n are executed at CREATE TABLE just like the CREATE INDEX\n is done for PRIMARY KEY.\n\n 3. Building a test suite for FOREIGN KEY ... MATCH FULL\n support.\n\n Anyone who wants to contribute to this should at least drop\n us a note on which detail he's starting to work - just to\n avoid frustration. Patches should be sent to me directly and\n I'll incorporate them into the CVS tree.\n\n I'll keep my hands off from all the above now and continue to\n work on the deferred trigger manager (the disk buffering\n during huge transactions).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\ndrop table t1;\ndrop table t2;\n\n\n-- **********\n-- * Create a PK and an FK table.\n-- **********\ncreate table t1 (a1 int4, b1 int4, c1 text, PRIMARY KEY (a1, b1));\ncreate table t2 (a2 int4, b2 int4, c2 text);\n\n\n-- **********\n-- * Manually setup constraint triggers for t2 as if\n-- *\n-- *\tCONSTRAINT check_t2_key \n-- *\t\tFOREIGN KEY (a2, b2) REFERENCES t1 (a1, b1)\n-- *\t\tMATCH FULL\n-- *\t\tON DELETE CASCADE\n-- *\n-- * was specified in the table schema. 
These are the commands\n-- * which should later be executed automatically during CREATE TABLE\n-- * like done for the index t1_pkey due to the PRIMARY KEY constraint.\n-- **********\ncreate constraint trigger \"check_t2_key\" after insert on t2\n\tdeferrable initially immediate\n\tfor each row execute procedure\n\t\"RI_FKey_check_ins\" ('check_t2_key', 't2', 't1', 'FULL', \n\t\t\t\t\t\t'a2', 'a1', 'b2', 'b1');\ncreate constraint trigger \"check_t2_key\" after update on t2\n\tdeferrable initially immediate\n\tfor each row execute procedure\n\t\"RI_FKey_check_upd\" ('check_t2_key', 't2', 't1', 'FULL', \n\t\t\t\t\t\t'a2', 'a1', 'b2', 'b1');\ncreate constraint trigger \"check_t2_key\" after delete on t1\n\tdeferrable initially immediate\n\tfor each row execute procedure\n\t\"RI_FKey_cascade_del\" ('check_t2_key', 't2', 't1', 'FULL', \n\t\t\t\t\t\t'a2', 'a1', 'b2', 'b1');\n\n-- **********\n-- * Insert some PK values\n-- **********\ninsert into t1 values (1, 1, 'key 1');\ninsert into t1 values (2, 2, 'key 2');\ninsert into t1 values (3, 3, 'key 3');\n\n-- **********\n-- * Check FK on insert\n-- **********\n-- The first two are O.K.\ninsert into t2 values (1, 1, 'ref 1');\ninsert into t2 values (2, 2, 'ref 2');\n-- This one must fail\ninsert into t2 values (4, 3, 'ref 4');\n-- The following one is O.K. again since all FK attributes are NULL\ninsert into t2 (c2) values ('null');\n-- This one not - MATCH FULL does not allow mixing of NULL/notNULL\ninsert into t2 (a2, c2) values (1, 'full violation');\n\n-- **********\n-- * Check FK on update\n-- **********\n-- These two should fail\nupdate t2 set a2 = 4 where a2 = 1;\nupdate t2 set a2 = 3 where a2 = 2;\n-- These two should succeed\nupdate t2 set a2 = 3, b2 = 3 where a2 = 2;\nupdate t2 set c2 = '' where a2 = 1;\n\n-- **********\n-- * Check the cascaded delete\n-- **********\nselect * from t2;\ndelete from t1 where a1 = 1 and b1 = 1;\nselect * from t2;\n\n-- **********\n-- * Now for deferred constraint checks\n-- **********\n-- First the case that doesn't work\nbegin;\ninsert into t2 values (6, 6, 'ref 6');\ninsert into t1 values (6, 6, 'key 6');\ncommit;\n-- But it must work this way\nbegin;\nset constraints check_t2_key deferred;\ninsert into t2 values (7, 7, 'ref 7');\ninsert into t1 values (7, 7, 'key 7');\ncommit;\n\n", "msg_date": "Fri, 8 Oct 1999 14:32:32 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "RI status report #4 (come and join)" }, { "msg_contents": " What we need next (what y'all shall do) is:\n\n4.? Add glue so that these contraints can be recreated via pgdump?\n\nCheers,\nBrook\n", "msg_date": "Fri, 8 Oct 1999 08:03:18 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RI status report #4 (come and join)" }, { "msg_contents": "> \n> What we need next (what y'all shall do) is:\n> \n> 4.? Add glue so that these contraints can be recreated via pgdump?\n\n Yepp - forgot that and\n\n\t5. Write documentation.\n\n\ttoo.\n\n\nThanks, Jan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n", "msg_date": "Fri, 8 Oct 1999 16:05:18 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RI status report #4 (come and join)" } ]
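Item 2 on Jan's list is the declarative syntax that should eventually expand into the CREATE CONSTRAINT TRIGGER statements shown in the script above. As a sketch only (the parser did not yet accept this at the time of the thread), the same t1/t2 example written in SQL92 foreign key form would look like:

    CREATE TABLE t1 (a1 int4, b1 int4, c1 text,
                     PRIMARY KEY (a1, b1));

    CREATE TABLE t2 (a2 int4, b2 int4, c2 text,
                     CONSTRAINT check_t2_key
                         FOREIGN KEY (a2, b2) REFERENCES t1 (a1, b1)
                         MATCH FULL
                         ON DELETE CASCADE
                         DEFERRABLE INITIALLY IMMEDIATE);

CREATE TABLE would then generate the three RI_FKey_* triggers itself, just as it already creates the t1_pkey index for the PRIMARY KEY constraint.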
[ { "msg_contents": "Surely we could store more information (using vacuum) about each table, to\nbe able to produce good stats relatively quickly? This would mean that the\nestimates would be dependent on vacuum, but there are worse options. Also,\ncan't we do something similar to what Oracle does, where you can define your\noptimisation to be rule-based, or stats-based. If it's rule based, the\noptimizer looks only at the schema to decide how to optimize. If\nstats-based, then it has a huge amount of information at its disposal to\ndetermine how to optimise. However, those stats are compiled by something\nlike vacuum.\n\nMikeA\n\n\n\n>> -----Original Message-----\n>> From: Roberto Cornacchia [mailto:[email protected]]\n>> Sent: Friday, October 08, 1999 3:19 PM\n>> To: Bruce Momjian\n>> Cc: Tom Lane; [email protected]\n>> Subject: Re: [HACKERS] Re: Top N queries and disbursion\n>> \n>> \n>> Bruce Momjian wrote:\n>> > \n>> > > No, it's certainly not the right thing. To my \n>> understanding, disbursion\n>> > > is a measure of the frequency of the most common value \n>> of an attribute;\n>> > > but that tells you very little about how many other \n>> values there are.\n>> > > 1/disbursion is a lower bound on the number of values, \n>> but it wouldn't\n>> > > be a good estimate unless you had reason to think that \n>> the values were\n>> > > pretty evenly distributed. There could be a *lot* of \n>> very-infrequent\n>> > > values.\n>> > >\n>> > > > with 100 distinct values of an attribute uniformly \n>> distribuited in a\n>> > > > relation of 10000 tuples, disbursion was estimated as \n>> 0.002275, giving\n>> > > > us 440 distinct values.\n>> > >\n>> > > This is an illustration of the fact that Postgres' \n>> disbursion-estimator\n>> > > is pretty bad :-(. It usually underestimates the \n>> frequency of the most\n>> > > common value, unless the most common value is really frequent\n>> > > (probability > 0.2 or so). I've been trying to think of \n>> a more accurate\n>> > > way of figuring the statistic that wouldn't be unreasonably slow.\n>> > > Or, perhaps, we should forget all about disbursion and \n>> adopt some other\n>> > > statistic(s).\n>> > \n>> > Yes, you have the crux of the issue. I wrote it because \n>> it was the best\n>> > thing I could think of, but it is non-optimimal. Because all the\n>> > optimal solutions seemed too slow to me, I couldn't think \n>> of a better\n>> > one.\n>> \n>> Thank you, Tom and Bruce.\n>> This is not a good news for us :-(. In any case, is 1/disbursion the\n>> best estimate we can have by now, even if not optimal?\n>> \n>> Roberto Cornacchia\n>> Andrea Ghidini\n>> \n>> \n>> ************\n>> \n", "msg_date": "Fri, 8 Oct 1999 15:38:07 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: Top N queries and disbursion" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> can't we do something similar to what Oracle does, where you can define your\n> optimisation to be rule-based, or stats-based. If it's rule based, the\n> optimizer looks only at the schema to decide how to optimize. If\n> stats-based, then it has a huge amount of information at its disposal to\n> determine how to optimise. 
However, those stats are compiled by something\n> like vacuum.\n\nWe pretty much do that already; the \"rules\" are embodied in the default\ncost estimates that get used if there's no statistical data from VACUUM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Oct 1999 10:29:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion " }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Surely we could store more information (using vacuum) about each table, to\n> be able to produce good stats relatively quickly? This would mean that the\n> estimates would be dependent on vacuum, but there are worse options. Also,\n> can't we do something similar to what Oracle does, where you can define your\n> optimisation to be rule-based, or stats-based. If it's rule based, the\n> optimizer looks only at the schema to decide how to optimize. If\n> stats-based, then it has a huge amount of information at its disposal to\n> determine how to optimise. However, those stats are compiled by something\n> like vacuum.\n\nStats are compiled by vacuum analyze, and every column is analyzed the\nsame way.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 12:20:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion" } ]
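For reference, the estimate under discussion comes straight from the statistics gathered by VACUUM ANALYZE. A hedged sketch of how to compare it with the true value, assuming the 6.5-era pg_attribute.attdisbursion column and placeholder table/column names mytab/myattr:

    -- exact number of distinct values (can be slow on a big table):
    SELECT count(DISTINCT myattr) FROM mytab;

    -- estimate from the stored statistic; in theory 1/disbursion is only a
    -- lower bound on the distinct count, but because the estimator tends to
    -- underestimate the most common value's frequency it overshot in the
    -- example above (0.002275 -> roughly 440 for 100 real values):
    SELECT 1.0 / a.attdisbursion AS estimated_distinct
      FROM pg_class c, pg_attribute a
     WHERE c.relname = 'mytab'
       AND a.attrelid = c.oid
       AND a.attname = 'myattr'
       AND a.attdisbursion > 0;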
[ { "msg_contents": "Hi All, \n\nI'm seeing a funny with the new WAL aware postmaster.\n\nAfter doing a \"kill\" on the postmaster process I get another \npostmaster process which appears to be spinning on a spinlock.\n\nI left it for almost an hour whilst I did something else.\n\nI believe the spinlock code on my system, SPARCLinux, is OK \nfrom running the s_lock_test, see below.\n\nAny ideas what's happening?\n\nKeith.\n\n[postgres@sparclinux pgsql]$ ps -aux| grep post\npostgres 19364 0.0 1.9 1936 760 p0 S Sep 26 0:00 login -p -h sparc2 -f\npostgres 19365 0.0 2.2 1484 880 p0 S Sep 26 0:10 -bash\npostgres 27333 0.0 2.4 4012 948 p0 S 22:24 0:00 postmaster -N 16 -B 3\npostgres 27385 99.6 2.9 4076 1124 p0 R 19:19 51:49 postmaster -N 16 -B 3\npostgres 27409 0.0 1.3 1116 512 p0 R 20:11 0:00 ps -aux\npostgres 27410 0.0 1.1 1176 448 p0 R 20:11 0:00 grep post\n[postgres@sparclinux pgsql]$ gdb /usr/local/pgsql/bin/postmaster 27385\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (sparc-unknown-linux), Copyright 1996 Free Software Foundation, Inc...\n\n/usr/local/pgsql/27385: No such file or directory.\nAttaching to program `/usr/local/pgsql/bin/postmaster', process 27385\nReading symbols from /lib/libdl.so.1.8.3...done.\nReading symbols from /lib/libm.so.5.0.6...done.\nReading symbols from /usr/lib/libreadline.so.2.0...done.\nReading symbols from /lib/libtermcap.so.2.0.8...done.\nReading symbols from /usr/lib/libncurses.so.3.0...done.\nReading symbols from /lib/libc.so.5.3.12...done.\nReading symbols from /lib/ld-linux.so.1...done.\nCreateCheckPoint (shutdown=1 '\\001') at ../../../include/storage/s_lock.h:151\n151 __asm__(\"ldstub [%2], %0\" \\\n(gdb) bt\n#0 CreateCheckPoint (shutdown=1 '\\001') at \n../../../include/storage/s_lock.h:151\n#1 0x55f1c in ShutdownXLOG () at xlog.c:1426\n#2 0x587e4 in BootstrapMain (argc=5, argv=0xeffff5c8) at bootstrap.c:359\n#3 0xb63bc in SSDataBase (startup=0 '\\000') at postmaster.c:2026\n#4 0xb53f0 in pmdie (postgres_signal_arg=1469440) at postmaster.c:1254\n#5 0xeffff8f0 in ?? ()\n#6 0xb48cc in ServerLoop () at postmaster.c:745\n#7 0xb468c in PostmasterMain (argc=4620, argv=0x120c) at postmaster.c:640\n\n\n[postgres@sparclinux buffer]$ make s_lock_test\ngcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes \n-I../.. -DS_LOCK_TEST=1 s_lock.c -o\n s_lock_test\n./s_lock_test\nS_LOCK_TEST: this will hang for a few minutes and then abort\n with a 'stuck spinlock' message if S_LOCK()\n and TAS() are working.\n\nFATAL: s_lock(00020bf0) at s_lock.c:270, stuck spinlock. Aborting.\n\nFATAL: s_lock(00020bf0) at s_lock.c:270, stuck spinlock. Aborting.\nmake: *** [s_lock_test] IOT trap/Abort (core dumped)\nmake: *** Deleting file `s_lock_test'\n\n", "msg_date": "Fri, 8 Oct 1999 21:08:33 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "SPARC problem with WAL additions?" } ]
[ { "msg_contents": "Because the next release is going to probably be 7.0, if people have\nbackward compatability library code that they have been dying to remove,\nthis is the time for it.\n\nI don't recommend removing backward compatability with 6.4 or 6.5\nreleases, but if you have some code that is hanging around just to be\ncompatible with 6.1 or earlier, I think it can be removed now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Oct 1999 08:53:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Next release is 7.0(?)" }, { "msg_contents": "On Sat, 9 Oct 1999, Bruce Momjian wrote:\n\n> Because the next release is going to probably be 7.0, if people have\n> backward compatability library code that they have been dying to remove,\n> this is the time for it.\n> \n> I don't recommend removing backward compatability with 6.4 or 6.5\n> releases, but if you have some code that is hanging around just to be\n> compatible with 6.1 or earlier, I think it can be removed now.\n> \n> \n\nThen perhaps we can also overhaul the installation. I installed about \n5 times this past week and have it down to 4 or 5 steps (not including\nregression testing).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 9 Oct 1999 09:03:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "> On Sat, 9 Oct 1999, Bruce Momjian wrote:\n> \n> > Because the next release is going to probably be 7.0, if people have\n> > backward compatability library code that they have been dying to remove,\n> > this is the time for it.\n> > \n> > I don't recommend removing backward compatability with 6.4 or 6.5\n> > releases, but if you have some code that is hanging around just to be\n> > compatible with 6.1 or earlier, I think it can be removed now.\n> > \n> > \n> \n> Then perhaps we can also overhaul the installation. I installed about \n> 5 times this past week and have it down to 4 or 5 steps (not including\n> regression testing).\n\nGreat. That certainly needs a cleanup. My ideal would be to have a\nlist of short instructions, and then have footnotes people would go to\nwhen they had a problem with a certain item. Not sure how to do that in\nsgml.\n\nIf we did it in html, they could click on something when they had a\nproblem, but html instructions are hard if you don't have a browser.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Oct 1999 16:21:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Because the next release is going to probably be 7.0, if people have\n> backward compatability library code that they have been dying to remove,\n> this is the time for it.\n> \n> I don't recommend removing backward compatability with 6.4 or 6.5\n> releases, but if you have some code that is hanging around just to be\n> compatible with 6.1 or earlier, I think it can be removed now.\n\ncan the commands createuser and destroyuser be renamed to\npg_createuser and pg_destroyuser (as pg_dump) adding soft links to \npreserve backwards compatibility (declaring previous commands \ndeprecated) ?\n\nthe same can be done for createdb and destroydb ...\n\nI think is a much more nice namespace for Postgres wich is \nimportant when you have thousands of commands in this times.\n\n-- \n | Sergio A. Kessler http://sak.org.ar\n-O_O- You can have it Soon, Cheap, and Working; choose *two*.\n", "msg_date": "Sun, 10 Oct 1999 15:01:03 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "Yes, we have discussed this, and will add it to the TODO list.\n\n> Bruce Momjian wrote:\n> > \n> > Because the next release is going to probably be 7.0, if people have\n> > backward compatability library code that they have been dying to remove,\n> > this is the time for it.\n> > \n> > I don't recommend removing backward compatability with 6.4 or 6.5\n> > releases, but if you have some code that is hanging around just to be\n> > compatible with 6.1 or earlier, I think it can be removed now.\n> \n> can the commands createuser and destroyuser be renamed to\n> pg_createuser and pg_destroyuser (as pg_dump) adding soft links to \n> preserve backwards compatibility (declaring previous commands \n> deprecated) ?\n> \n> the same can be done for createdb and destroydb ...\n> \n> I think is a much more nice namespace for Postgres wich is \n> important when you have thousands of commands in this times.\n> \n> -- \n> | Sergio A. Kessler http://sak.org.ar\n> -O_O- You can have it Soon, Cheap, and Working; choose *two*.\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Oct 1999 16:39:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "> Yes, we have discussed this, and will add it to the TODO list.\n> > can the commands createuser and destroyuser be renamed to\n> > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > preserve backwards compatibility (declaring previous commands\n> > deprecated) ?\n> > the same can be done for createdb and destroydb ...\n\nI hope we don't have a consensus on this. 
Long commands with\nunderscores in them are certainly another sign of the coming\napocalypse ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 11 Oct 1999 15:17:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "> > Yes, we have discussed this, and will add it to the TODO list.\n> > > can the commands createuser and destroyuser be renamed to\n> > > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > > preserve backwards compatibility (declaring previous commands\n> > > deprecated) ?\n> > > the same can be done for createdb and destroydb ...\n> \n> I hope we don't have a consensus on this. Long commands with\n> underscores in them are certainly another sign of the coming\n> apocalypse ;)\n\nBut if we keep symlinks to the existing names, is that OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 11:43:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": " > I hope we don't have a consensus on this. Long commands with\n > underscores in them are certainly another sign of the coming\n > apocalypse ;)\n\n But if we keep symlinks to the existing names, is that OK?\n\nIsn't the point to avoid naming conflicts. Symlinks won't help that,\nsurely.\n\nI agree; underscores are a pain. If you must go this direction, I\nsuggest hyphens (-).\n\nCheers,\nBrook\n", "msg_date": "Mon, 11 Oct 1999 10:56:30 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": " > Then perhaps we can also overhaul the installation. I installed about \n > 5 times this past week and have it down to 4 or 5 steps (not including\n > regression testing).\n\nWould this also be a relevant time to get a regression test to run on\na non-installed system? Couldn't this be done by starting up the\ndatabase on a different port from within the newly compiled source\ntree for the purpose of testing?\n\nCheers,\nBrook\n", "msg_date": "Mon, 11 Oct 1999 10:58:14 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "> > I hope we don't have a consensus on this. Long commands with\n> > underscores in them are certainly another sign of the coming\n> > apocalypse ;)\n> \n> But if we keep symlinks to the existing names, is that OK?\n> \n> Isn't the point to avoid naming conflicts. Symlinks won't help that,\n> surely.\n> \n> I agree; underscores are a pain. If you must go this direction, I\n> suggest hyphens (-).\n\nYou could make the actual command pg_createuser, and make a symlink of\ncreateuser, but allow the symlink creation to fail. Best of both\nworlds, I think. I vote for underscore. That's what I normally use. \nDashes look too much like command arguments.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 13:04:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "> > Then perhaps we can also overhaul the installation. I installed about \n> > 5 times this past week and have it down to 4 or 5 steps (not including\n> > regression testing).\n> \n> Would this also be a relevant time to get a regression test to run on\n> a non-installed system? Couldn't this be done by starting up the\n> database on a different port from within the newly compiled source\n> tree for the purpose of testing?\n\nGee, I never even though of that. What advantage would there be for\nthat?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 13:05:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": " > Would this also be a relevant time to get a regression test to run on\n > a non-installed system? Couldn't this be done by starting up the\n > database on a different port from within the newly compiled source\n > tree for the purpose of testing?\n\n Gee, I never even though of that. What advantage would there be for\n that?\n\nWouldn't that be useful for developing/debugging/testing a new version\non a machine that runs some other version in production mode?\n\nOne example would be someone who wants to update, but wishes to verify\nthe functioning of the new version prior to blowing away the old\none. So, one could build the new version, run regression tests,\nresolve any issues, THEN deinstall the old version and install the new\nalready verified one.\n\nAnother example is that developers could use a production machine to\ntweak new code without having to actually install the test versions.\n\nI'm sure other applications of that flexibility are apparent also.\n\nCheers,\nBrook\n", "msg_date": "Mon, 11 Oct 1999 12:05:42 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "On Mon, 11 Oct 1999, Brook Milligan wrote:\n\n> Then perhaps we can also overhaul the installation. I installed about \n> 5 times this past week and have it down to 4 or 5 steps (not including\n> regression testing).\n> \n\nOne thing I thought would be nice would be a client only install. Say I'm\nrunning a server on Solaris and want to make psql available on linux\nstations which will access the Solaris server. Right now, unless I've\nmissed something, I'm stuck installing the whole package. No big problem\nbut if you're overhaulling the install..... :-)\n\nTake Care,\nJames\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n", "msg_date": "Mon, 11 Oct 1999 13:06:02 -0500 (CDT)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "James Thompson wrote:\n> \n> One thing I thought would be nice would be a client only install. 
Say I'm\n> running a server on Solaris and want to make psql available on linux\n> stations which will access the Solaris server. Right now, unless I've\n> missed something, I'm stuck installing the whole package. No big problem\n> but if you're overhaulling the install..... :-)\n\nYou can do that now with the RPM installation under RedHat Linux --\nhowever, it is an RPM feature, not part of the tarball.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 11 Oct 1999 14:32:26 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Would this also be a relevant time to get a regression test to run on\n>> a non-installed system?\n\n> Gee, I never even though of that. What advantage would there be for\n> that?\n\nIt has been suggested before, and I think it's a good idea. The\nadvantage is you can smoke-test a new compilation *before* you blow\naway your existing installation ;-)\n\nIt is, of course, possible to do that by installing into a nonstandard\nlocation/port and then running the regress tests there. But you have\nto know exactly what you're doing to do that. If we're going to\noverhaul install, we should make it easier to run the regress tests\nthat way, or even better with not-installed-at-all binaries from the\nsource tree.\n\nAnother thing I'd like to see would be full support for building in\na separate object-directory tree, leaving the source tree pristine\nrather than filled with configure and build output files.\nThis is a standard GNU practice and I think it's a good one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Oct 1999 14:57:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Next release is 7.0(?) " }, { "msg_contents": "Brook Milligan wrote:\n> \n> > I hope we don't have a consensus on this. Long commands with\n> > underscores in them are certainly another sign of the coming\n> > apocalypse ;)\n> \n> But if we keep symlinks to the existing names, is that OK?\n> \n> Isn't the point to avoid naming conflicts. Symlinks won't help that,\n> surely.\n\nmy suggestion was:\nin 7.0: provide symlinks for backward compatibility and make big\nwarnings 'don't use createuser, etc. they are deprecated'\nall over the place ...\nin 7.1: remove the symlinks.\n\n> I agree; underscores are a pain. If you must go this direction, I\n> suggest hyphens (-).\n\nyup, but maybe (-) can rise problems in some filesystems ...\n(I certainly don't know)\nand more importantly you have then to rename pg_dump and pg_dumpall\nwich is the more heavily used command in scripts ...\nmy idea was to leave this commands untouched.\n\nas for longnames that Thomas doesn't like (wich I don't agree because\nmy idea for upgrading scripts is just: 'prepend a pg_ to the commands')\nwhat about pg_adduser, pg_deluser, pg_adddb, etc ? ;)\n\n\n-- \n | Sergio A. Kessler http://sak.org.ar\n-O_O- You can have it Soon, Cheap, and Working; choose *two*.\n", "msg_date": "Mon, 11 Oct 1999 17:31:36 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "> my suggestion was:\n> in 7.0: provide symlinks for backward compatibility and make big\n> warnings 'don't use createuser, etc. 
they are deprecated'\n> all over the place ...\n> in 7.1: remove the symlinks.\n> \n> > I agree; underscores are a pain. If you must go this direction, I\n> > suggest hyphens (-).\n> \n> yup, but maybe (-) can rise problems in some filesystems ...\n> (I certainly don't know)\n> and more importantly you have then to rename pg_dump and pg_dumpall\n> wich is the more heavily used command in scripts ...\n> my idea was to leave this commands untouched.\n> \n> as for longnames that Thomas doesn't like (wich I don't agree because\n> my idea for upgrading scripts is just: 'prepend a pg_ to the commands')\n> what about pg_adduser, pg_deluser, pg_adddb, etc ? ;)\n\nThat's a compromise. The destroy* entries are from the old quel days. \nOf course, that would make pg_deluser into pg_dropuser, but adddb would\nbe createdb, and we are back were we started. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 16:51:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "On Oct 11, Bruce Momjian mentioned:\n\n> > > Yes, we have discussed this, and will add it to the TODO list.\n> > > > can the commands createuser and destroyuser be renamed to\n> > > > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > > > preserve backwards compatibility (declaring previous commands\n> > > > deprecated) ?\n> > > > the same can be done for createdb and destroydb ...\n> > \n> > I hope we don't have a consensus on this. Long commands with\n> > underscores in them are certainly another sign of the coming\n> > apocalypse ;)\n> \n> But if we keep symlinks to the existing names, is that OK?\n\nI think Thomas' primary problem was the underscore. (?)\n\nI was going to say that I can take of that, since I was going to adjust\nthe scripts to play well with the new psql anyway. (In particular, I just\nterminally removed the -a option, since it has gone unused for a while and\nmight be a very popular option switch in the future. Switches are becoming\nscarce these days.)\n\nI can offer the following plan (from my bin dir):\n\ncleardbdir\t--> (Remove. It's been a while.)\ncreatedb\t--> pgcreatedb\ncreatelang\t--> (In my excessively undereducated opinion, this should\n be removed. createdb and createuser I can see but\n this?)\ncreateuser --> pgcreateuser\ndestroydb --> pgdestroydb\ndestroylang --> (see above)\ndestroyuser --> pgdestroyuser\necpg\ninitdb --> pginitdb\ninitlocation --> pginitlocation\nipcclean --> pg_ipcclean\n(An underscore here to make it more complicated to type :)\n. . .\nvacuumdb\t--> pgvacuumdb\n\nAlternatively, there could also be shorter commands, now that the\nassociation with the PostgreSQL installation is clearer:\npgcrdb\npgcruser\npgdestdb\npgdestuser\npgvacuum\n\nThis might remove the mnemonic association with the related SQL commands\n(which some might find desirable). Some might also go for a set like this:\npguseradd\npguserdel\npgmkdb\npgrmdb\nin association to *nix commands. 
(Some might find that a bad idea).\n\nFurthermore I was thinking about a configure switch along the following\nlines:\n\n--enable-scripts=old|new|both|none\n(defaults to new)\n\nsince a while back there was some talk about removing the scripts\naltogether (which also died after Thomas protested, I think).\n\nWhile we're at it, perhaps the scripts can also be moved around in the\nsource tree, e.g., to bin/scripts or (if there will really be only 4 or 5)\neven into the psql subtree.\n\nWell, unless someone vetos, I would take a vote here and see what I can\ndo.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 11 Oct 1999 22:57:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?))" }, { "msg_contents": "> I think Thomas' primary problem was the underscore. (?)\n\nOK.\n\n> \n> I was going to say that I can take of that, since I was going to adjust\n> the scripts to play well with the new psql anyway. (In particular, I just\n> terminally removed the -a option, since it has gone unused for a while and\n> might be a very popular option switch in the future. Switches are becoming\n> scarce these days.)\n> \n> I can offer the following plan (from my bin dir):\n> \n> cleardbdir\t--> (Remove. It's been a while.)\n\nGood.\n\n> createdb\t--> pgcreatedb\n\nGood.\n\n> createlang\t--> (In my excessively undereducated opinion, this should\n> be removed. createdb and createuser I can see but\n> this?)\n\nYes, remove. What is that doing there. Jan's plpgsql doesn't use it. :-)\n\n> createuser --> pgcreateuser\n> destroydb --> pgdestroydb\n\nCan I recommend pgdropdb?\n\n> destroylang --> (see above)\n> destroyuser --> pgdestroyuser\n\npgdropuser?\n\n> ecpg\n> initdb --> pginitdb\n> initlocation --> pginitlocation\n> ipcclean --> pg_ipcclean\n> (An underscore here to make it more complicated to type :)\n> . . .\n\nNot sure about that.\n\n> vacuumdb\t--> pgvacuumdb\n\nOK.\n\n> Alternatively, there could also be shorter commands, now that the\n> association with the PostgreSQL installation is clearer:\n> pgcrdb\n> pgcruser\n> pgdestdb\n> pgdestuser\n> pgvacuum\n\nToo cryptic for me.\n\n> This might remove the mnemonic association with the related SQL commands\n> (which some might find desirable). Some might also go for a set like this:\n> pguseradd\n> pguserdel\n> pgmkdb\n> pgrmdb\n> in association to *nix commands. (Some might find that a bad idea).\n\nDoesn't grab me.\n\n> \n> Furthermore I was thinking about a configure switch along the following\n> lines:\n> \n> --enable-scripts=old|new|both|none\n> (defaults to new)\n\nToo complicated. Issue a warning if invoked with old args and remove\nold link in 8.x. You can test basename $0 and test to see how you were\ninvoked.\n\n\n> since a while back there was some talk about removing the scripts\n> altogether (which also died after Thomas protested, I think).\n\nI like the scripts too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 18:24:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is\n\t7.0(?))" }, { "msg_contents": "Bruce Momjian wrote:\n> > createlang --> (In my excessively undereducated opinion, this should\n> > be removed. createdb and createuser I can see but\n> > this?)\n> \n> Yes, remove. What is that doing there. Jan's plpgsql doesn't use it. :-)\n\nUsed by regression test script. No reason the script can't inline the\ncreatelang script's code, though.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 11 Oct 1999 18:59:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is\n\t7.0(?))" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > I think Thomas' primary problem was the underscore. (?)\n> \n> OK.\n\nhmmm, ok, not everyone can be totally happy ...\nplease, Thomas, let us use the underscore so there no need to \nrename pg_dump and pg_dumpall\n\npg_dump -> pg_dump\npg_dumpall -> pg_dumpall\n\ncreatedb -> pg_createdb\ndestroydb -> pg_dropdb\n\ncreateuser -> pg_createuser\ndestroyuser -> pg_dropuser\n\nand prepend a 'pg_' to the rest\n\nI think it can't be more consistent that this...\n\nThomas ?, please ?\n\n-- \n | Sergio A. Kessler http://sak.org.ar\n-O_O- You can have it Soon, Cheap, and Working; choose *two*.\n", "msg_date": "Tue, 12 Oct 1999 00:12:25 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is\n\t7.0(?))" }, { "msg_contents": "On Mon, 11 Oct 1999, Thomas Lockhart wrote:\n\n> > Yes, we have discussed this, and will add it to the TODO list.\n> > > can the commands createuser and destroyuser be renamed to\n> > > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > > preserve backwards compatibility (declaring previous commands\n> > > deprecated) ?\n> > > the same can be done for createdb and destroydb ...\n> \n> I hope we don't have a consensus on this. Long commands with\n> underscores in them are certainly another sign of the coming\n> apocalypse ;)\n\nYou'll get my agreement on this...but, then again, I'm advocating getting\nrid of them altoghther and forcing the DBA to be forced to learn teh\nproper commands...*shrug*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 01:16:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "On Mon, 11 Oct 1999, Bruce Momjian wrote:\n\n> > > Yes, we have discussed this, and will add it to the TODO list.\n> > > > can the commands createuser and destroyuser be renamed to\n> > > > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > > > preserve backwards compatibility (declaring previous commands\n> > > > deprecated) ?\n> > > > the same can be done for createdb and destroydb ...\n> > \n> > I hope we don't have a consensus on this. 
Long commands with\n> > underscores in them are certainly another sign of the coming\n> > apocalypse ;)\n> \n> But if we keep symlinks to the existing names, is that OK?\n\ncan we get rid of 'createdb/destroydb' and shorten them to:\n'pg_adddb/pg_deldb' ... ? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 01:17:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "On Mon, 11 Oct 1999, Bruce Momjian wrote:\n\n> > > Yes, we have discussed this, and will add it to the TODO list.\n> > > > can the commands createuser and destroyuser be renamed to\n> > > > pg_createuser and pg_destroyuser (as pg_dump) adding soft links to\n> > > > preserve backwards compatibility (declaring previous commands\n> > > > deprecated) ?\n> > > > the same can be done for createdb and destroydb ...\n> > \n> > I hope we don't have a consensus on this. Long commands with\n> > underscores in them are certainly another sign of the coming\n> > apocalypse ;)\n> \n> But if we keep symlinks to the existing names, is that OK?\n\nOh, and, IMHO...remove, don't create symlinks...its a major release, we\ndon't have to maintain backwards compatability...we aren't mIcrosoft, eh?\n:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 01:18:04 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "\nNote: I still don't like the scripts...they create lazy, uneducated DBAs,\nof which I was one, since I didn't even *know* there were internal\ncommands that did more then the external ones did, for the longest time...\n\nPersonally, those scripts should jsut become:\n\n\ttry 'create user' from psql interface\n\n*grump look*\n\nOn Mon, 11 Oct 1999, Bruce Momjian wrote:\n\n> > I think Thomas' primary problem was the underscore. (?)\n> \n> OK.\n> \n> > \n> > I was going to say that I can take of that, since I was going to adjust\n> > the scripts to play well with the new psql anyway. (In particular, I just\n> > terminally removed the -a option, since it has gone unused for a while and\n> > might be a very popular option switch in the future. Switches are becoming\n> > scarce these days.)\n> > \n> > I can offer the following plan (from my bin dir):\n> > \n> > cleardbdir\t--> (Remove. It's been a while.)\n> \n> Good.\n> \n> > createdb\t--> pgcreatedb\n> \n> Good.\n> \n> > createlang\t--> (In my excessively undereducated opinion, this should\n> > be removed. createdb and createuser I can see but\n> > this?)\n> \n> Yes, remove. What is that doing there. Jan's plpgsql doesn't use it. :-)\n> \n> > createuser --> pgcreateuser\n> > destroydb --> pgdestroydb\n> \n> Can I recommend pgdropdb?\n> \n> > destroylang --> (see above)\n> > destroyuser --> pgdestroyuser\n> \n> pgdropuser?\n> \n> > ecpg\n> > initdb --> pginitdb\n> > initlocation --> pginitlocation\n> > ipcclean --> pg_ipcclean\n> > (An underscore here to make it more complicated to type :)\n> > . . 
.\n> \n> Not sure about that.\n> \n> > vacuumdb\t--> pgvacuumdb\n> \n> OK.\n> \n> > Alternatively, there could also be shorter commands, now that the\n> > association with the PostgreSQL installation is clearer:\n> > pgcrdb\n> > pgcruser\n> > pgdestdb\n> > pgdestuser\n> > pgvacuum\n> \n> Too cryptic for me.\n> \n> > This might remove the mnemonic association with the related SQL commands\n> > (which some might find desirable). Some might also go for a set like this:\n> > pguseradd\n> > pguserdel\n> > pgmkdb\n> > pgrmdb\n> > in association to *nix commands. (Some might find that a bad idea).\n> \n> Doesn't grab me.\n> \n> > \n> > Furthermore I was thinking about a configure switch along the following\n> > lines:\n> > \n> > --enable-scripts=old|new|both|none\n> > (defaults to new)\n> \n> Too complicated. Issue a warning if invoked with old args and remove\n> old link in 8.x. You can test basename $0 and test to see how you were\n> invoked.\n> \n> \n> > since a while back there was some talk about removing the scripts\n> > altogether (which also died after Thomas protested, I think).\n> \n> I like the scripts too.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 01:22:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is\n\t7.0(?))" }, { "msg_contents": "> > > in association to *nix commands. (Some might find that a bad idea).\n\n Then it MUST be pgcreatuser!\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n", "msg_date": "Tue, 12 Oct 1999 11:12:11 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is" }, { "msg_contents": "> > > I hope we don't have a consensus on this. Long commands with\n> > > underscores in them are certainly another sign of the coming\n> > > apocalypse ;)\n> > \n> > But if we keep symlinks to the existing names, is that OK?\n> \n> Oh, and, IMHO...remove, don't create symlinks...its a major release, we\n> don't have to maintain backwards compatability...we aren't mIcrosoft, eh?\n> :)\n\nThat is a good point. We could remove them. They don't get called very\noften.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 09:58:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "\nOn 11-Oct-99 Bruce Momjian wrote:\n> You could make the actual command pg_createuser, and make a symlink of\n> createuser, but allow the symlink creation to fail. Best of both\n> worlds, I think. I vote for underscore. That's what I normally use. 
\n> Dashes look too much like command arguments.\n\n Agreed. I don't like underscores, but hyphens can be worse.\nI think the best idea is to use a name that identifies the script as\nbeing a part of PostgreSQL, and can somehow clearly define what the\nscript is designed to do, and then whatever you choose, _stick_with_it_!\nSystem administrators and programmers can make symbolic links if they\nneed them.\n\n----------------------------------\nDate: 12-Oct-99 Time: 16:10:10\n\nCraig Orsinger (email: <[email protected]>)\nLogicon RDA\nBldg. 8B28 \"Just another megalomaniac with ideas above his\n6th & F Streets station. The Universe is full of them.\"\nFt. Lewis, WA 98433 - The Doctor\n----------------------------------\n", "msg_date": "Tue, 12 Oct 1999 16:13:48 -0700 (PDT)", "msg_from": "Craig Orsinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Next release is 7.0(?)" }, { "msg_contents": "Lamar Owen wrote:\n\n>\n> Bruce Momjian wrote:\n> > > createlang --> (In my excessively undereducated opinion, this should\n> > > be removed. createdb and createuser I can see but\n> > > this?)\n> >\n> > Yes, remove. What is that doing there. Jan's plpgsql doesn't use it. :-)\n>\n> Used by regression test script. No reason the script can't inline the\n> createlang script's code, though.\n\n That script was the result of some longer discussion about\n \"installing PL/pgSQL by default (initdb) or not\" which\n resulted from some problems with the location of the language\n handler object.\n\n Some people like to have one or the other language in\n template1, so it will automatically be there after createdb.\n Others like to install individual PL's per database.\n\n What I see from your comments above is, that you don't use\n procedural languages at all. But that's not a good reason for\n making it harder for others to gain access to these\n languages.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 15 Oct 1999 13:54:11 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Scripts (was Re: [HACKERS] Re: [INTERFACES] Next release is\n\t7.0(?))" } ]
[ { "msg_contents": "Folks,\n\nI've been struggling to copy a large table (200 million\nrecords, 60GB) to tape using:\n\n psql -qc \"copy psc to STDOUT;\" Winter99 | dd of=/dev/st0 bs=32k\n\nAfter processing about 10 million records (this varies), I\nget:\n\n FATAL 1: Memory exhausted in AllocSetAlloc()\n\n(The tape drive is a DLT 7000, and the tape is not filled at\nthis point).\n\nThere is no evidence that the backend has really exhausted\navail memory (I have 256MB but 1GB swap and the postgres\nuser and database user both have unlimited memory usage).\n\nThis is 6.5.1 on Linux 2.2.11 (w/Debian 2.1) on a dual\n450Mhz Xeon box with a 128GB software Raid0 array. I've set\nSHMEM to 128MB am using \"-B 12288 -S 8192\". I've been\ntrying to figure this out for a few weeks but can't seem to\nget this table copied to tape. Can one of the developers\noffer a suggestion?\n\nBTW, this is a large astronomical database which will\neventually grow to about 500 million records. Besides this\ntape problem, pgsql is now working nicely for our\napplication.\n\nI've been following the mysql thread. You folks may want\nto add \"works with databases over 2GB\" to your plus column.\n\nWith thoughtful indexing, one can retrieve queries of\n<100000 records in 1 to 15 minutes which competes nicely\nwith our main data server, a bunch of Sun Enterprise 5000\nand 6000s running Informix. Of course, many people using\nthis large system simultaneously, but our goal for this\nproject is to recommend an alternative hardware/software\nsolution to the astronomical community for <$10K.\n\n--M\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n\n\n\n", "msg_date": "Sat, 9 Oct 1999 10:51:24 -0400", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "memory problems in copying large table" } ]
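For reference, the COPY stream in the report above can also be pulled directly through libpq's COPY OUT calls instead of piping psql into dd. This is only a minimal sketch, not something from the thread: the database and table names are taken from the quoted command, while the connection string, the 8 KB line buffer and the bare-bones error handling are placeholders.

    /*
     * Sketch only, not from the thread: pull "COPY psc TO STDOUT" through
     * libpq's COPY OUT interface (PQgetline/PQendcopy) instead of piping
     * psql into dd.  Connection string, buffer size and error handling are
     * placeholders.
     */
    #include <stdio.h>
    #include "libpq-fe.h"

    int main(void)
    {
        char      line[8192];
        PGconn   *conn = PQconnectdb("dbname=Winter99");
        PGresult *res;
        int       r;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        res = PQexec(conn, "COPY psc TO STDOUT;");
        if (PQresultStatus(res) != PGRES_COPY_OUT)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            return 1;
        }
        PQclear(res);

        for (;;)
        {
            r = PQgetline(conn, line, sizeof(line));
            if (r == EOF)
                break;
            if (line[0] == '\\' && line[1] == '.')   /* end-of-copy marker */
                break;
            fputs(line, stdout);        /* or write() straight to the tape device */
            if (r == 0)                 /* 0: whole line read, 1: buffer filled early */
                putchar('\n');
        }
        PQendcopy(conn);
        PQfinish(conn);
        return 0;
    }

The rows come out as the same tab-delimited text psql emits, so the program's output can be piped into dd bs=32k just as in the command quoted above.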
[ { "msg_contents": "Tom Lane wrote on Sat, 09 Oct 1999 11:54:34 EDT\n>Martin Weinberg <[email protected]> writes:\n>> I've been struggling to copy a large table (200 million\n>> records, 60GB) to tape using:\n>> psql -qc \"copy psc to STDOUT;\" Winter99 | dd of=/dev/st0 bs=32k\n>> After processing about 10 million records (this varies), I\n>> get:\n>> FATAL 1: Memory exhausted in AllocSetAlloc()\n>\n>Hmm. What is the exact declaration of the table?\n>\n>The only explanation I can think of offhand is that the output\n>conversion function for one of the column types is leaking memory...\n>copy.c itself looks to be pretty careful not to.\n>\n\nThe table def is:\n\nCREATE TABLE psc (\n\themis\t\ttext,\n\tdate\t\tdate,\n\tscan\t\tint2,\n\tid\t\tint4,\n\tra\t\tfloat4,\n\tdec\t\tfloat4,\n\tglon\t\tfloat4,\n\tglat\t\tfloat4,\n\terr_maj\t\tfloat4,\n\terr_min\t\tfloat4,\n\terr_ang\t\tint2,\n\txscan\t\tfloat4,\n\tcnf_flag\ttext,\n\tj_m\t\tfloat4,\n\th_m\t\tfloat4,\n\tk_m\t\tfloat4,\n\tj_msig\t\tfloat4,\n\th_msig\t\tfloat4,\n\tk_msig\t\tfloat4,\n\tj_m_psf\t\tfloat4,\n\th_m_psf\t\tfloat4,\n\tk_m_psf\t\tfloat4,\n\tj_psfchi\tfloat4,\n\th_psfchi\tfloat4,\n\tk_psfchi\tfloat4,\n\tj_skyval\tfloat4,\n\th_skyval\tfloat4,\n\tk_skyval\tfloat4,\n\tj_blend\t\tint2,\n\th_blend\t\tint2,\n\tk_blend\t\tint2,\n\tj_m_stdap\tfloat4,\n\th_m_stdap\tfloat4,\n\tk_m_stdap\tfloat4,\n\tj_msig_stdap\tfloat4,\n\th_msig_stdap\tfloat4,\n\tk_msig_stdap\tfloat4,\n\tj_prob_pers\tfloat4,\n\th_prob_pers\tfloat4,\n\tk_prob_pers\tfloat4,\n\tj_prg_flg\ttext,\n\th_prg_flg\ttext,\n\tk_prg_flg\ttext,\n\tj_mrg_flg\ttext,\n\th_mrg_flg\ttext,\n\tk_mrg_flg\ttext,\n\tj_pix_flg\ttext,\n\th_pix_flg\ttext,\n\tk_pix_flg\ttext,\n\tj_cal\t\tfloat4,\n\th_cal\t\tfloat4,\n\tk_cal\t\tfloat4,\n\tgal_contam\tint2,\n\tid_opt\t\ttext,\n\tdist_opt\tfloat4,\n\tb_m_opt\t\tfloat4,\n\tr_m_opt\t\tfloat4,\n\tj_h\t\tfloat4,\n\th_k\t\tfloat4,\n\tj_k\t\tfloat4,\n\tdup_src\t\tint2,\n\tuse_src\t\tint2,\n\text_key_1\tint4\n);\n\nThanks for taking a look,\n\n--Martin\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n", "msg_date": "Sat, 09 Oct 1999 12:02:27 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Folks,\n\nI've been struggling to copy a large table (200 million\nrecords, 60GB) to tape using:\n\n psql -qc \"copy psc to STDOUT;\" Winter99 | dd of=/dev/st0 bs=32k\n\nAfter processing about 10 million records (this varies), I\nget:\n\n FATAL 1: Memory exhausted in AllocSetAlloc()\n\n(The tape drive is a DLT 7000, and the tape is not filled at\nthis point).\n\nThere is no evidence that the backend has really exhausted\navail memory (I have 256MB but 1GB swap and the postgres\nuser and database user both have unlimited memory usage).\n\nThis is 6.5.1 on Linux 2.2.11 (w/Debian 2.1) on a dual\n450Mhz Xeon box with a 128GB software Raid0 array. I've set\nSHMEM to 128MB am using \"-B 12288 -S 8192\". I've been\ntrying to figure this out for a few weeks but can't seem to\nget this table copied to tape. Can one of the developers\noffer a suggestion?\n\nBTW, this is a large astronomical database which will\neventually grow to about 500 million records. 
Besides this\ntape problem, pgsql is now working nicely for our\napplication.\n\nI've been following the mysql thread. You folks may want\nto add \"works with databases over 2GB\" to your plus column.\n\nWith thoughtful indexing, one can retrieve queries of\n<100000 records in 1 to 15 minutes which competes nicely\nwith our main data server, a bunch of Sun Enterprise 5000\nand 6000s running Informix. Of course, many people using\nthis large system simultaneously, but our goal for this\nproject is to recommend an alternative hardware/software\nsolution to the astronomical community for <$10K.\n\n--M\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n", "msg_date": "Sat, 9 Oct 1999 11:04:14 -0400", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "memory problems in copying large table to STDOUT" }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> I've been struggling to copy a large table (200 million\n> records, 60GB) to tape using:\n> psql -qc \"copy psc to STDOUT;\" Winter99 | dd of=/dev/st0 bs=32k\n> After processing about 10 million records (this varies), I\n> get:\n> FATAL 1: Memory exhausted in AllocSetAlloc()\n\nHmm. What is the exact declaration of the table?\n\nThe only explanation I can think of offhand is that the output\nconversion function for one of the column types is leaking memory...\ncopy.c itself looks to be pretty careful not to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Oct 1999 11:54:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> Tom Lane wrote on Sat, 09 Oct 1999 11:54:34 EDT\n>>> FATAL 1: Memory exhausted in AllocSetAlloc()\n>> \n>> Hmm. What is the exact declaration of the table?\n>> \n>> The only explanation I can think of offhand is that the output\n>> conversion function for one of the column types is leaking memory...\n>> copy.c itself looks to be pretty careful not to.\n\n> The table def is:\n\n> CREATE TABLE psc (\n> \themis\t\ttext,\n> \tdate\t\tdate,\n> \tscan\t\tint2,\n> \tid\t\tint4,\n> \tra\t\tfloat4,\n> [ lots more fields of these same types ]\n\nHmm, nothing unusual there. I made up a dummy table containing these\ncolumn types, filled it with 16 meg of junk data, and copied in and\nout without observing any process memory usage growth at all, under\nboth current sources and 6.5.2. I also used gdb to set a breakpoint\nat AllocSetAlloc, and checked that the inner loop of the copy wasn't\nallocating anything it didn't free. So there's no obvious memory\nleakage bug here. (It'd be pretty surprising if there was, really,\nfor such commonly used data types.)\n\nI'm now thinking that there must be either a problem specific to your\nplatform, or some heretofore unnoticed problem with copying from a\nmulti-segment (ie, multi-gigabyte) table. I don't have enough disk\nspace to check the latter theory here...\n\nCan you prepare a debugger backtrace showing what the backend is doing\nwhen it gets the error? If you're not familiar with gdb, it'd go\nsomething like this:\n\n1. 
Build & install postgres with debugging symbols enabled\n (\"make CUSTOM_COPT=-g all\"). \n2. Start gdb on the postgres executable, eg\n \"gdb /usr/local/pgsql/bin/postgres\".\n3. Fire up the copy-out operation as usual. (I assume this takes long\n enough that you have plenty of time for the next two steps ;-))\n4. Use ps or top to find out the process number of the backend handling\n the session.\n5. Attach to that process number in gdb:\n\t(gdb)\tattach NNNN\n6. Set a breakpoint at elog, and let the backend continue running:\n\t(gdb)\tbreak elog\n\t(gdb)\tcontinue\n7. When the breakpoint is hit, get a backtrace:\n\tBreakpoint 1, elog ...\n\t(gdb)\tbt\n After copying & pasting the resulting printout, you can \"quit\" to\n get out of gdb.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Oct 1999 14:42:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Tom,\n\nI am attaching the backtrace. This one simultaneously generated\nthis kernel message from the md driver:\n\nraid0_map bug: hash->zone0==NULL for block 1132810879\nBad md_map in ll_rw_block\n\nDefinitely a problem but no longer sure if it's the same one . . .\nsigh. \n\nI inserted a pause() in the beginning of elog.c so that I\ncould attach remotely; I'm keeping the sleeping processing\naround in case there's anything else you would like me to\ncheck.\n\nGuess, I'm (foolishly?) pushing the envelope here with a 100GB \ndatabase on software raid.\n\nThanks!!!\n\n--Martin\n\nP.S. After the fact, I realized that my source is Oliver Elphick's\nDebian 6.5.1 source package rather than a pure vanilla source. \nHope this is not a problem . . .\n\n----------------------------------------------------------------------\n\n(gdb) bt\n#0 0x4012eb77 in pause ()\n#1 0x81160e9 in elog (lev=-1, fmt=0x8146a4b \"cannot read block %d of %s\")\n at elog.c:81\n#2 0x80e76ef in smgrread (which=0, reln=0x822af60, blocknum=2638753, \n buffer=0x44a85040 \"h\") at smgr.c:235\n#3 0x80dd7a2 in ReadBufferWithBufferLock (reln=0x822af60, blockNum=2638753, \n bufferLockHeld=0) at bufmgr.c:302\n#4 0x80dd682 in ReadBuffer (reln=0x822af60, blockNum=2638753) at bufmgr.c:180\n#5 0x80ddf1d in ReleaseAndReadBuffer (buffer=9175, relation=0x822af60, \n blockNum=2638753) at bufmgr.c:954\n#6 0x806ad13 in heapgettup (relation=0x822af60, tuple=0x8235374, dir=1, \n buffer=0x8235398, snapshot=0x8232af0, nkeys=0, key=0x0) at heapam.c:469\n#7 0x806b6bf in heap_getnext (scandesc=0x8235360, backw=0) at heapam.c:912\n#8 0x8084eb3 in CopyTo (rel=0x822af60, binary=0 '\\000', oids=0 '\\000', \n fp=0x0, delim=0x813c829 \"\\t\") at copy.c:405\n#9 0x8084ce4 in DoCopy (relname=0x82350c0 \"psc\", binary=0 '\\000', \n oids=0 '\\000', from=0 '\\000', pipe=1 '\\001', filename=0x0, \n delim=0x813c829 \"\\t\") at copy.c:323\n#10 0x80ea8c6 in ProcessUtility (parsetree=0x82350d8, dest=Remote)\n at utility.c:227\n#11 0x80e8a36 in pg_exec_query_dest (\n query_string=0xbfffaef4 \"copy psc to STDOUT;\", dest=Remote, aclOverride=0)\n at postgres.c:727\n#12 0x80e8944 in pg_exec_query (query_string=0xbfffaef4 \"copy psc to STDOUT;\")\n at postgres.c:656\n#13 0x80e9b88 in PostgresMain (argc=11, argv=0xbffff46c, real_argc=12, \n real_argv=0xbffff984) at postgres.c:1647\n#14 0x80d1adc in DoBackend (port=0x81ef748) at postmaster.c:1628\n#15 0x80d1613 in BackendStartup (port=0x81ef748) at postmaster.c:1373\n#16 0x80d0ca6 in ServerLoop () at postmaster.c:823\n#17 0x80d080c in 
PostmasterMain (argc=12, argv=0xbffff984) at postmaster.c:616\n#18 0x80a4597 in main (argc=12, argv=0xbffff984) at main.c:93\n\n\nTom Lane wrote on Sat, 09 Oct 1999 14:42:56 EDT\n>Martin Weinberg <[email protected]> writes:\n>> Tom Lane wrote on Sat, 09 Oct 1999 11:54:34 EDT\n>>>> FATAL 1: Memory exhausted in AllocSetAlloc()\n>>> \n>>> Hmm. What is the exact declaration of the table?\n>>> \n>>> The only explanation I can think of offhand is that the output\n>>> conversion function for one of the column types is leaking memory...\n>>> copy.c itself looks to be pretty careful not to.\n>\n>> The table def is:\n>\n>> CREATE TABLE psc (\n>> \themis\t\ttext,\n>> \tdate\t\tdate,\n>> \tscan\t\tint2,\n>> \tid\t\tint4,\n>> \tra\t\tfloat4,\n>> [ lots more fields of these same types ]\n>\n>Hmm, nothing unusual there. I made up a dummy table containing these\n>column types, filled it with 16 meg of junk data, and copied in and\n>out without observing any process memory usage growth at all, under\n>both current sources and 6.5.2. I also used gdb to set a breakpoint\n>at AllocSetAlloc, and checked that the inner loop of the copy wasn't\n>allocating anything it didn't free. So there's no obvious memory\n>leakage bug here. (It'd be pretty surprising if there was, really,\n>for such commonly used data types.)\n>\n>I'm now thinking that there must be either a problem specific to your\n>platform, or some heretofore unnoticed problem with copying from a\n>multi-segment (ie, multi-gigabyte) table. I don't have enough disk\n>space to check the latter theory here...\n>\n>Can you prepare a debugger backtrace showing what the backend is doing\n>when it gets the error? If you're not familiar with gdb, it'd go\n>something like this:\n>\n>1. Build & install postgres with debugging symbols enabled\n> (\"make CUSTOM_COPT=-g all\"). \n>2. Start gdb on the postgres executable, eg\n> \"gdb /usr/local/pgsql/bin/postgres\".\n>3. Fire up the copy-out operation as usual. (I assume this takes long\n> enough that you have plenty of time for the next two steps ;-))\n>4. Use ps or top to find out the process number of the backend handling\n> the session.\n>5. Attach to that process number in gdb:\n>\t(gdb)\tattach NNNN\n>6. Set a breakpoint at elog, and let the backend continue running:\n>\t(gdb)\tbreak elog\n>\t(gdb)\tcontinue\n>7. When the breakpoint is hit, get a backtrace:\n>\tBreakpoint 1, elog ...\n>\t(gdb)\tbt\n> After copying & pasting the resulting printout, you can \"quit\" to\n> get out of gdb.\n>\n>\t\t\tregards, tom lane\n>\n>************\n>\n", "msg_date": "Sun, 10 Oct 1999 10:38:43 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> I am attaching the backtrace. This one simultaneously generated\n> this kernel message from the md driver:\n\n> raid0_map bug: hash->zone0==NULL for block 1132810879\n> Bad md_map in ll_rw_block\n\n> Definitely a problem but no longer sure if it's the same one . . .\n> sigh. \n\nLooks like it is not the same. 
As you can see, the error message that\nelog is about to report is \"cannot read <block#> of <file>\", which isn't\ntoo surprising given the kernel notice:\n\n> #1 0x81160e9 in elog (lev=-1, fmt=0x8146a4b \"cannot read block %d of %s\")\n> at elog.c:81\n\nIf this read failure is reproducible then you will need to get that\ntaken care of before we can make any progress on the original problem.\nBut it might be a transient failure --- why don't you just start the\ncopy over again to see?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Oct 1999 11:59:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Hi Tom,\n\nI got the backtrace with \"\"Memory exhausted in\nAllocSetAlloc()\" this time. The process has a virtual image\nsize of 103840 which is consistent with SHMMAX + the text\nand stack size (in case this fact is of any use) . . .\n\nAgain, I've saved the process in case checking any symbols\nwould be helpful.\n\n--Martin\n\n----------------------------------------------------------------------\n\n(gdb) bt\n#0 0x4012eb77 in pause ()\n#1 0x81160e9 in elog (lev=1, \n fmt=0x814f05d \"Memory exhausted in AllocSetAlloc()\") at elog.c:81\n#2 0x811949e in AllocSetAlloc (set=0x8232fb0, size=875628846) at aset.c:273\n#3 0x8119a13 in PortalHeapMemoryAlloc (this=0x81efbd8, size=875628846)\n at portalmem.c:264\n#4 0x8119732 in MemoryContextAlloc (context=0x81efbd8, size=875628846)\n at mcxt.c:230\n#5 0x810ebf1 in textout (vlena=0x4106182c) at varlena.c:190\n#6 0x808508c in CopyTo (rel=0x822af08, binary=0 '\\000', oids=0 '\\000', \n fp=0x0, delim=0x813c829 \"\\t\") at copy.c:421\n#7 0x8084ce4 in DoCopy (relname=0x8235068 \"psc\", binary=0 '\\000', \n oids=0 '\\000', from=0 '\\000', pipe=1 '\\001', filename=0x0, \n delim=0x813c829 \"\\t\") at copy.c:323\n#8 0x80ea8c6 in ProcessUtility (parsetree=0x8235080, dest=Remote)\n at utility.c:227\n#9 0x80e8a36 in pg_exec_query_dest (\n query_string=0xbfffb274 \"copy psc to STDOUT;\", dest=Remote, aclOverride=0)\n at postgres.c:727\n#10 0x80e8944 in pg_exec_query (query_string=0xbfffb274 \"copy psc to STDOUT;\")\n at postgres.c:656\n#11 0x80e9b88 in PostgresMain (argc=11, argv=0xbffff7ec, real_argc=12, \n real_argv=0xbffffd04) at postgres.c:1647\n#12 0x80d1adc in DoBackend (port=0x81ef748) at postmaster.c:1628\n#13 0x80d1613 in BackendStartup (port=0x81ef748) at postmaster.c:1373\n#14 0x80d0ca6 in ServerLoop () at postmaster.c:823\n#15 0x80d080c in PostmasterMain (argc=12, argv=0xbffffd04) at postmaster.c:616\n#16 0x80a4597 in main (argc=12, argv=0xbffffd04) at main.c:93\n(gdb) \n\nTom Lane wrote on Sun, 10 Oct 1999 11:59:53 EDT\n>Martin Weinberg <[email protected]> writes:\n>> I am attaching the backtrace. This one simultaneously generated\n>> this kernel message from the md driver:\n>\n>> raid0_map bug: hash->zone0==NULL for block 1132810879\n>> Bad md_map in ll_rw_block\n>\n>> Definitely a problem but no longer sure if it's the same one . . .\n>> sigh. \n>\n>Looks like it is not the same. 
As you can see, the error message that\n>elog is about to report is \"cannot read <block#> of <file>\", which isn't\n>too surprising given the kernel notice:\n>\n>> #1 0x81160e9 in elog (lev=-1, fmt=0x8146a4b \"cannot read block %d of %s\")\n>> at elog.c:81\n>\n>If this read failure is reproducible then you will need to get that\n>taken care of before we can make any progress on the original problem.\n>But it might be a transient failure --- why don't you just start the\n>copy over again to see?\n>\n>\t\t\tregards, tom lane\n>\n>************\n>\n", "msg_date": "Sun, 10 Oct 1999 20:37:04 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> I got the backtrace with \"\"Memory exhausted in\n> AllocSetAlloc()\" this time.\n\n> #4 0x8119732 in MemoryContextAlloc (context=0x81efbd8, size=875628846)\n> at mcxt.c:230\n> #5 0x810ebf1 in textout (vlena=0x4106182c) at varlena.c:190\n> #6 0x808508c in CopyTo (rel=0x822af08, binary=0 '\\000', oids=0 '\\000', \n> fp=0x0, delim=0x813c829 \"\\t\") at copy.c:421\n\nOK, that shoots down the \"memory leak\" theory. It sure looks like\nwhat you've got is corrupt data: textout is reading a length word of\n875628846 (plus or minus a couple bytes) from what is supposed to be\na text datum. Obviously that's not right. Next question is how\nit got that way.\n\nI think it's pretty likely that the original cause is the kernel disk\ndriver or disk hardware flakiness that we already have evidence for.\nHowever, I hate passing the buck like that, so I'm willing to continue\ndigging if you are.\n\n> Again, I've saved the process in case checking any symbols\n> would be helpful.\n\nYou should look at the source tuple location info in CopyTo ---\nsomething like\n\t(gdb)\tf 6\t\t-- frame 6, ie, CopyTo\n\t(gdb)\tp i\t\t-- get column number\n\t(gdb) p *tuple\t-- print contents of HeapTupleData\n\t(gdb) p *tuple->t_data -- print contents of HeapTupleHeaderData\n\t\nThe last is mainly to find out the tuple's OID for possible future\nreference. What we want right now is the tuple location info,\ntuple->t_self, which will give us a block number (bi_hi and bi_lo in\nthat struct are the high and low 16 bits of the block number). Then,\nif you can use dd and od to get a hex dump of that block from the\nrelation's data files, we can see what's really on disk there.\n(Remember that the \"blocks\" are 8K each; also, if you get an offset\nbeyond 1 gig, then it's going to be in one of the continuation files\n\"psc.1\", \"psc.2\", etc --- one gig apiece.)\n\nIt would also be useful to look at the contents of the disk block as\nsitting in memory in the backend, to see if they are the same as what\nyou read using dd; I would not be too surprised to find they are not.\nThe t_data pointer should be pointing into a disk buffer in Postgres'\nshared memory block, but offhand I'm not sure what's the easiest way to\ndiscover the starting address of that buffer using gdb. 
(Can any other\nhackers lend a hand here?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Oct 1999 21:33:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Hi Tom,\n\nTom Lane wrote on Sun, 10 Oct 1999 21:33:00 EDT\n>\n>I think it's pretty likely that the original cause is the kernel disk\n>driver or disk hardware flakiness that we already have evidence for.\n>However, I hate passing the buck like that, so I'm willing to continue\n>digging if you are.\n\nHonestly, I'm inclined to agree but I'd like to know for sure before\ngo buy a hardware raid controller (or give up trying to build a\ndatabase this large with Linux altogether). The \"coincidence\" that's\nhaunting me here is that I used pgsql on a 12GB database of the same\nsort with no trouble for about 6 months. All of this started when I\ntried to load 60GB. And after loading the table, I index it and\nexercise it and everything is fine. Twice now, this problem has\narisen when I've tried to backup the table with a \"copy\". The first\ntime, I wiped out the raid array, got the latest kernel with the\nup-to-date raid patches and rebuilt (takes a while . . .). Same thing\nagain. Of course, this copy hits the disks pretty hard, so maybe it's\nnot so coincidental. Anyway, ruling out a pgsql problem would be\nprogress.\n\n>> Again, I've saved the process in case checking any symbols\n>> would be helpful.\n>\n>You should look at the source tuple location info in CopyTo ---\n>something like\n>\t(gdb)\tf 6\t\t-- frame 6, ie, CopyTo\n>\t(gdb)\tp i\t\t-- get column number\n>\t(gdb) p *tuple\t-- print contents of HeapTupleData\n>\t(gdb) p *tuple->t_data -- print contents of HeapTupleHeaderData\n>\t\n>The last is mainly to find out the tuple's OID for possible future\n>reference. What we want right now is the tuple location info,\n>tuple->t_self, which will give us a block number (bi_hi and bi_lo in\n>that struct are the high and low 16 bits of the block number). Then,\n>if you can use dd and od to get a hex dump of that block from the\n>relation's data files, we can see what's really on disk there.\n>(Remember that the \"blocks\" are 8K each; also, if you get an offset\n>beyond 1 gig, then it's going to be in one of the continuation files\n>\"psc.1\", \"psc.2\", etc --- one gig apiece.)\n>\n\nOk done. Here's what I find:\n\n(gdb) p i\n$1 = 48\n(gdb) p *tuple\n$2 = {t_len = 352, t_self = {ip_blkid = {bi_hi = 24, bi_lo = 26279}, \n ip_posid = 19}, t_data = 0x41061710}\n(gdb) p *tuple->t_data\n$3 = {t_oid = 37497689, t_cmin = 0, t_cmax = 0, t_xmin = 17943, t_xmax\n= 0, \n t_ctid = {ip_blkid = {bi_hi = 24, bi_lo = 26279}, ip_posid = 19}, \n t_natts = 63, t_infomask = 2307, t_hoff = 40 '(', t_bits = \"<FF><FF><FF><FF>\"}\n\nNow, check me to make sure I've followed you correctly:\n\nSince 1GB of blocks is 0x20000, this data in the 13th GB.\nThe offset into the 13th is 26279.\n\nSo I did:\n\ndd if=psc.12 skip=26279 count=1 bs=8k | od -t x > ~/dump.hex\n\nI'm not sure what I'm looking for in dump.hex. 
The first hand full\nof lines are:\n\n0000000 01840064 20002000 02c09ea0 02809d60\n0000020 02c09c00 02989ab4 02c09954 02989808\n0000040 028096c8 02809588 02c09428 02c092c8\n0000060 02c09168 02c09008 02c08ea8 02988d5c\n0000100 02c08bfc 02c08a9c 0280895c 02588830\n0000120 02c086d0 02c08570 02588444 02c082e4\n0000140 02c08184 00000000 00000000 00000000\n0000160 00000000 00000000 00000000 00000000\n*\n0000600 00000000 023c2b5d 00000000 00000000\n0000620 00004617 00000000 66a70018 003f0017\n0000640 ff280903 ffffffff 000fffff 00000005\n0000660 00000073 fffffd95 000000ac 00003d2b\n.\n.\n.\n\n--Martin\n\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\t [email protected]\nUniversity of Massachusetts\t http://www.astro.umass.edu/~weinberg/\nAmherst, MA 01003-4525\n", "msg_date": "Mon, 11 Oct 1999 00:17:21 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Tom Lane wrote on Mon, 11 Oct 1999 10:59:25 EDT\n>\n>Looks good. For fun you might try \"select * from psc where oid = 37497689\"\n>and see if it succeeds or not on a \"retail\" retrieval of the problem tuple.\n\nYes . . . the backend dies with status 11. Could this be that I\ncan't do this until \"release\" the other process since it seems to\nhave gobbled all the shared memory?\n\n>Could you send the whole dump? I can figure this stuff out by hand but\n>I'm not sure I can explain it to someone else. (The relevant data\n>structure declarations are in various files in src/include/storage/ and\n>src/include/access/ if you want to look for yourself.)\n\nHere it is:\n\n0000000 01840064 20002000 02c09ea0 02809d60\n0000020 02c09c00 02989ab4 02c09954 02989808\n0000040 028096c8 02809588 02c09428 02c092c8\n0000060 02c09168 02c09008 02c08ea8 02988d5c\n0000100 02c08bfc 02c08a9c 0280895c 02588830\n0000120 02c086d0 02c08570 02588444 02c082e4\n0000140 02c08184 00000000 00000000 00000000\n0000160 00000000 00000000 00000000 00000000\n*\n0000600 00000000 023c2b5d 00000000 00000000\n0000620 00004617 00000000 66a70018 003f0017\n0000640 ff280903 ffffffff 000fffff 00000005\n0000660 00000073 fffffd95 000000ac 00003d2b\n0000700 4375bc7b c142851c 4024fe3b 41cb436f\n0000720 3df5c28f 3df5c28f 0000002d c0e851ec\n0000740 0000000a 31313131 00003131 416ec49c\n0000760 41676873 41672b02 3cfdf3b6 3d1fbe77\n0001000 3da7ef9e 416ec49c 41676873 41672b02\n0001020 3fc8f5c3 3f866666 3f91eb85 438015c3\n0001040 445407ae 44b6f429 00010001 00000001\n0001060 416ecccd 41661893 416476c9 3d810625\n0001100 3de147ae 3e48b439 00000000 00000000\n0001120 00000000 00000007 00303030 00000007\n0001140 00303030 00000007 00303030 0000000a\n0001160 30303030 00003030 0000000a 30303030\n0001200 00003030 0000000a 30303030 00003030\n0001220 00000008 36363030 00000008 36363030\n0001240 00000008 36333030 bd2c0831 bd1ba5e3\n0001260 bced9168 00000000 00000011 35373055\n0001300 36393030 35373237 00000032 3ee147ae\n0001320 41873333 417e6666 3eeb851f 3c75c28f\n0001340 3ef33333 023c2b5c 00000000 00000000\n0001360 00004617 00000000 66a70018 003f0016\n0001400 ff280903 ffffffff 000fffff 00000005\n0001420 00000073 fffffd95 000000ac 00003d2c\n0001440 4375b28f c14283ee 4023590c 41cb80be\n0001460 3e851eb8 3e4ccccd 0000000b 43011eb8\n0001500 0000000a 31313131 00003131 418226e9\n0001520 417d4396 41764dd3 3db43958 3df9db23\n0001540 3e3851ec 
418226e9 417d4396 41764dd3\n0001560 3fb47ae1 3f4ccccd 3f6147ae 437fbae1\n0001600 445437ae 44b6e000 00010001 00000001\n0001620 418047ae 4181126f 4178ed91 3e49ba5e\n0001640 3e8ccccd 3e418937 00000000 00000000\n0001660 00000000 00000007 00303030 00000007\n0001700 00303030 00000007 00303030 0000000a\n0001720 30303030 00003030 0000000a 30303030\n0001740 00003030 0000000a 30303030 00003030\n0001760 00000008 36323030 00000008 36303030\n0002000 00000008 35303130 bd2c0831 bd1ba5e3\n0002020 bced9168 00000000 00000011 35373055\n0002040 36393030 37343037 00000034 3d8f5c29\n0002060 41980000 418d999a 3ee147ae 3edeb852\n0002100 3f600000 023c2b5b 00000000 00000000\n0002120 00004617 00000000 66a70018 003f0015\n0002140 ff280903 edbefbff 000e1fff 00000005\n0002160 00000073 fffffd95 000000ac 00003d2e\n0002200 4375af05 c1427b74 4022dec2 41cb9919\n0002220 3e570a3d 3e428f5c 0000005d 4331c51f\n0002240 0000000a 30313031 00003030 41871062\n0002260 4181645a 4180f5c3 3e126e98 3e1db22d\n0002300 41871062 4181645a 4185624e 3f7851ec\n0002320 3fa8f5c3 437fc7ae 445463d7 40d4cccd\n0002340 00010001 4188851f 417d1eb8 3e9a1cac\n0002360 3e656042 00000000 00000000 00000000\n0002400 00000007 00303030 00000007 00303030\n0002420 00000007 006c6966 0000000a 30303030\n0002440 00003030 0000000a 30303030 00003030\n0002460 0000000a 30303030 00003030 00000008\n0002500 36303030 00000008 36303030 00000005\n0002520 00000030 bd2c0831 bd1ba5e3 bced9168\n0002540 00000000 3f358106 3d5d2f1b 3f4353f8\n0002560 023c2b5a 00000000 00000000 00004617\n0002600 00000000 66a70018 003f0014 ff280903\n0002620 ffffffff 000fffff 00000005 00000073\n0002640 fffffd95 000000ac 00003d2f 4375c250\n0002660 c14279bb 40261fbc 41cb231d 3df5c28f\n0002700 3df5c28f 0000005a c2aee148 0000000a\n0002720 31313131 00003131 415c6666 4154c083\n0002740 41531aa0 3cc49ba6 3ccccccd 3d178d50\n0002760 415c6666 4154c083 41531aa0 3f666666\n0003000 3f933333 3f7851ec 43802000 4454299a\n0003020 44b6bbd7 00010001 00000001 415c7efa\n0003040 4154bc6a 41527ae1 3cbc6a7f 3d8d4fdf\n0003060 3d7df3b6 00000000 00000000 00000000\n0003100 00000007 00303030 00000007 00303030\n0003120 00000007 00303030 0000000a 30303030\n0003140 00003030 0000000a 30303030 00003030\n0003160 0000000a 30303030 00003030 00000008\n0003200 36363030 00000008 36363030 00000008\n0003220 36363030 bd2c0831 bd1ba5e3 bced9168\n0003240 00000000 00000011 35373055 36393030\n0003260 30313437 00000037 3f0a3d71 4180cccd\n0003300 41700000 3ef4bc6a 3dd2f1aa 3f14bc6a\n0003320 023c2b59 00000000 00000000 00004617\n0003340 00000000 66a70018 003f0013 ff280903\n0003360 ffffffff 000fffff 00000005 00000073\n0003400 fffffd95 000000ac 00003d31 4375c8ba\n0003420 c1427477 402744d4 41cafd50 3e800000\n0003440 3e428f5c 0000004f c32fa3d7 0000000a\n0003460 31313131 00003131 41810c4a 417b4bc7\n0003500 41769fbe 3d9db22d 3dd70a3d 3e395810\n0003520 41810c4a 417b4bc7 41769fbe 3f666666\n0003540 3f9c28f6 3f7851ec 437f8000 44541333\n0003560 44b6e7ae 00010001 00000001 4180126f\n0003600 41783958 41753f7d 3dc8b439 3e25e354\n0003620 3e1ba5e3 00000000 00000000 00000000\n0003640 00000007 00303030 00000007 00303030\n0003660 00000007 00303030 0000000a 30303030\n0003700 00003030 0000000a 30303030 00003030\n0003720 0000000a 30303030 00003030 00000008\n0003740 36313030 00000008 09310931 34310931\n0003760 3237382e 2e343109 09393035 342e3431\n0004000 00000000 00000011 35373055 36393030\n0004020 35353537 00000035 3f4ccccd 4195999a\n0004040 418c0000 3ed9999a 3e958106 3f378d50\n0004060 023c2b58 00000000 00000000 00004617\n0004100 00000000 66a70018 003f0012 ff280903\n0004120 edbefbff 
000e1fff 00000005 00000073\n0004140 fffffd95 000000ac 00003d32 4375abb4\n0004160 c1427133 40227432 41cbb0a2 3e570a3d\n0004200 3e428f5c 000000a2 435f599a 0000000a\n0004220 30313031 00003030 41860c4a 41820c4a\n0004240 4174b852 3e116873 3e3851ec 41860c4a\n0004260 41820c4a 4177ba5e 3f4f5c29 3fa00000\n0004300 43800000 44544b85 40df0a3d 00010001\n0004320 41861eb8 41818312 3e395810 3ebbe76d\n0004340 00000000 00000000 00000000 00000007\n0004360 00303030 00000007 00303030 00000007\n0004400 006c6966 0000000a 30303030 00003030\n0004420 0000000a 30303030 00003030 0000000a\n0004440 30303030 00003030 00000008 36303030\n0004460 00000008 36303030 00000005 00000030\n0004500 bd2c0831 bd1ba5e3 bced9168 00000000\n0004520 3f000000 3f760419 3fbb020c 023c2b57\n0004540 00000000 00000000 00004617 00000000\n0004560 66a70018 003f0011 ff280903 ffffffff\n0004600 000e1fff 00000005 00000073 fffffd95\n0004620 000000ac 00003d38 4375b2a2 c14260bd\n0004640 4023d6b6 41cb8b2a 3df5c28f 3df5c28f\n0004660 00000056 4300147b 0000000a 31313131\n0004700 00003131 4177a5e3 416f53f8 4168dd2f\n0004720 3d48b439 3d6d9168 3da9fbe7 4177a5e3\n0004740 416f53f8 4168dd2f 3f7ae148 3f9851ec\n0004760 3f70a3d7 437f547b 44543c29 44b6bd71\n0005000 00010001 00000001 4178c49c 416f4fdf\n0005020 41678d50 3d810625 3e2f1aa0 3dba5e35\n0005040 00000000 00000000 00000000 00000007\n0005060 00303030 00000007 00303030 00000007\n0005100 00303030 0000000a 30303030 00003030\n0005120 0000000a 30303030 00003030 0000000a\n0005140 30303030 00003030 00000008 36363030\n0005160 00000008 36333030 00000008 36303030\n0005200 bd2c0831 bd1ba5e3 bced9168 00000000\n0005220 3f051eb8 3eced917 3f6c8b44 023c2b56\n0005240 00000000 00000000 00004617 00000000\n0005260 66a70018 003f0010 ff280903 ffffffff\n0005300 000fffff 00000005 00000073 fffffd95\n0005320 000000ac 00003d3b 4375c079 c1425ae2\n0005340 40263c43 41cb37f5 3df5c28f 3df5c28f\n0005360 0000005a c278a3d7 0000000a 31313131\n0005400 00003131 413e0c4a 4133eb85 4131e354\n0005420 3cb43958 3cbc6a7f 3cd4fdf4 413e0c4a\n0005440 4133eb85 4131e354 3f19999a 3f400000\n0005460 3ee66666 4380ab85 445420a4 44b71614\n0005500 00010001 00000001 413da9fc 41341062\n0005520 41319db2 3c75c28f 3cf5c28f 3c9374bc\n0005540 00000000 00000000 00000000 00000007\n0005560 00303030 00000007 00303030 00000007\n0005600 00303030 0000000a 30303030 00003030\n0005620 0000000a 30303030 00003030 0000000a\n0005640 30303030 00003030 00000008 35353130\n0005660 00000008 36363030 00000008 35353130\n0005700 bd2c0831 bd1ba5e3 bced9168 00000000\n0005720 00000011 35373055 36393030 36363337\n0005740 00000038 3f170a3d 41700000 41566666\n0005760 3f220c4a 3e020c4a 3f428f5c 023c2b55\n0006000 00000000 00000000 00004617 00000000\n0006020 66a70018 003f000f ff280903 ffffffff\n0006040 000fffff 00000005 00000073 fffffd95\n0006060 000000ac 00003d3e 4375c67b c1425599\n0006100 40275009 41cb14aa 3df5c28f 3df5c28f\n0006120 0000002d c310c51f 0000000a 31313131\n0006140 00003131 41732f1b 41660000 41636042\n0006160 3d23d70a 3d178d50 3d89374c 41732f1b\n0006200 41660000 41636042 3f8b851f 3f4a3d71\n0006220 3f666666 437ff0a4 44541ccd 44b6f148\n0006240 00010001 00000001 4172624e 41668b44\n0006260 416147ae 3d03126f 3ddd2f1b 3ddf3b64\n0006300 00000000 00000000 00000000 00000007\n0006320 00303030 00000007 00303030 00000007\n0006340 00303030 0000000a 30303030 00003030\n0006360 0000000a 30303030 00003030 0000000a\n0006400 30303030 00003030 00000008 36363030\n0006420 00000008 35353130 00000008 36333030\n0006440 bd2c0831 bd1ba5e3 bced9168 00000000\n0006460 00000011 35373055 36393030 35303537\n0006500 00000030 
3faccccd 41973333 4189999a\n0006520 3f52f1aa 3e27ef9e 3f7ced91 023c2b54\n0006540 00000000 00000000 00004617 00000000\n0006560 66a70018 003f000e ff280903 edbefbff\n0006600 000fffff 00000005 00000073 fffffd95\n0006620 000000ac 00003d45 4375bb2e c142481a\n0006640 40259ad4 41cb5e49 3e19999a 3e19999a\n0006660 0000002d 412947ae 0000000a 30313031\n0006700 00003030 41826666 4179374c 4180872b\n0006720 3db851ec 3db22d0e 41826666 4179374c\n0006740 41850625 3f95c28f 3f9c28f6 437fb0a4\n0006760 44540333 40d33333 00010001 418320c5\n0007000 417ce560 3ea2d0e5 3ea8f5c3 00000000\n0007020 00000000 00000000 00000007 00303030\n0007040 00000007 00303030 00000007 006c6966\n0007060 0000000a 30303030 00003030 0000000a\n0007100 30303030 00003030 0000000a 30303030\n0007120 00003030 00000008 36313030 00000008\n0007140 36303030 00000005 00000030 bd2c0831\n0007160 bd1ba5e3 bced9168 00000000 00000011\n0007200 35373055 36393030 34343237 00000038\n0007220 3f3851ec 419a6666 418b3333 3f395810\n0007240 befae148 3e6f9db2 023c2b53 00000000\n0007260 00000000 00004617 00000000 66a70018\n0007300 003f000d ff280903 ffffffff 000fffff\n0007320 00000005 00000073 fffffd95 000000ac\n0007340 00003d48 4375c3cc c1424254 40272014\n0007360 41cb2b20 3df5c28f 3df5c28f 0000002d\n0007400 c2d7b852 0000000a 31313131 00003131\n0007420 41681893 415f9db2 415d374c 3ced9168\n0007440 3d03126f 3d3c6a7f 41681893 415f9db2\n0007460 415d374c 3f733333 3fb33333 3f947ae1\n0007500 437fae14 4453fb85 44b6f6b8 00010001\n0007520 00000001 41674fdf 415f851f 415bced9\n0007540 3c03126f 3d9374bc 3ddb22d1 00000000\n0007560 00000000 00000000 00000007 00303030\n0007600 00000007 00303030 00000007 00303030\n0007620 0000000a 30303030 00003030 0000000a\n0007640 30303030 00003030 0000000a 30303030\n0007660 00003030 00000008 36363030 00000008\n0007700 36363030 00000008 35343130 bd2c0831\n0007720 bd1ba5e3 bced9168 00000000 00000011\n0007740 35373055 36393030 35343437 00000031\n0007760 3f6147ae 41873333 417ccccd 3f07ae14\n0010000 3e19999a 3f2e147b 023c2b52 00000000\n0010020 00000000 00004617 00000000 66a70018\n0010040 003f000c ff280903 ffffffff 000fffff\n0010060 00000005 00000073 fffffd95 000000ac\n0010100 00003d4a 4375ad5e c14240f3 4023636f\n0010120 41cbb55a 3df5c28f 3df5c28f 0000005a\n0010140 43487ae1 0000000a 31313131 00003131\n0010160 41609ba6 4158dd2f 4156353f 3cdd2f1b\n0010200 3ce56042 3d23d70a 41609ba6 4158dd2f\n0010220 4156353f 3f88f5c3 3f8b851f 3f828f5c\n0010240 437fe666 44542852 44b6b5c3 00010001\n0010260 00000001 4160a3d7 415778d5 4155c6a8\n0010300 3cbc6a7f 3d872b02 3dc08312 00000000\n0010320 00000000 00000000 00000007 00303030\n0010340 00000007 00303030 00000007 00303030\n0010360 0000000a 30303030 00003030 0000000a\n0010400 30303030 00003030 0000000a 30303030\n0010420 00003030 00000008 36363030 00000008\n0010440 36363030 00000008 36363030 bd2c0831\n0010460 bd1ba5e3 bced9168 00000000 00000011\n0010500 35373055 36393030 38323936 00000036\n0010520 3f2b851f 4185999a 41766666 3ef7ced9\n0010540 3e29fbe7 3f266666 023c2b51 00000000\n0010560 00000000 00004617 00000000 66a70018\n0010600 003f000b ff280903 ffffffff 000fffff\n0010620 00000005 00000073 fffffd95 000000ac\n0010640 00003d4e 4375ab2d c142384d 40232374\n0010660 41cbc580 3df5c28f 3df5c28f 0000005a\n0010700 4366a148 0000000a 31313131 00003131\n0010720 4173126f 416b0625 41666a7f 3d408312\n0010740 3d5d2f1b 3da5e354 4173126f 416b0625\n0010760 41666a7f 3fae147b 3faa3d71 3f95c28f\n0011000 43800148 445438f6 44b6e3d7 00010001\n0011020 00000001 41711eb8 4168f1aa 41659db2\n0011040 3d591687 3df7ced9 3dba5e35 00000000\n0011060 00000000 
00000000 00000007 00303030\n0011100 00000007 00303030 00000007 00303030\n0011120 0000000a 30303030 00003030 0000000a\n0011140 30303030 00003030 0000000a 30303030\n0011160 00003030 00000008 36363030 00000008\n0011200 36353030 00000008 36323030 bd2c0831\n0011220 bd1ba5e3 bced9168 00000000 00000011\n0011240 35373055 36393030 35373836 00000035\n0011260 3f63d70a 418a6666 41833333 3f00c49c\n0011300 3e9374bc 3f4a7efa 023c2b50 00000000\n0011320 00000000 00004617 00000000 66a70018\n0011340 003f000a ff280903 ffffffff 000fffff\n0011360 00000005 00000073 fffffd95 000000ac\n0011400 00003d4f 4375c0b4 c1423820 4026bf0a\n0011420 41cb414c 3df5c28f 3df5c28f 0000005a\n0011440 c282a3d7 0000000a 31313131 00003131\n0011460 415ba1cb 415378d5 4153126f 3cc49ba6\n0011500 3ccccccd 3d1374bc 415ba1cb 415378d5\n0011520 4153126f 3f866666 3f570a3d 3f428f5c\n0011540 43805c29 445417ae 44b6eeb8 00010001\n0011560 00000001 415afdf4 4153f3b6 4154ac08\n0011600 3cdd2f1b 3d5d2f1b 3d0b4396 00000000\n0011620 00000000 00000000 00000007 00303030\n0011640 00000007 00303030 00000007 00303030\n0011660 0000000a 30303030 00003030 0000000a\n0011700 30303030 00003030 0000000a 30303030\n0011720 00003030 00000008 35353130 00000008\n0011740 36363030 00000008 36363030 bd2c0831\n0011760 bd1ba5e3 bced9168 00000000 00000011\n0012000 35373055 36393030 33373337 00000034\n0012020 3f88f5c3 417b3333 416ccccd 3f028f5c\n0012040 3ccccccd 3f08f5c3 023c2b4f 00000000\n0012060 00000000 00004617 00000000 66a70018\n0012100 003f0009 ff280903 ffffffff 000fffff\n0012120 00000005 00000073 fffffd95 000000ac\n0012140 00003d51 4375bd58 c1423676 402634e3\n0012160 41cb5674 3e051eb8 3e051eb8 0000002d\n0012200 c1993333 0000000a 31313131 00003131\n0012220 41780000 41706a7f 4172dd2f 3d48b439\n0012240 3d50e560 3e16872b 41780000 41706a7f\n0012260 4172dd2f 3f68f5c3 3fc28f5c 3f5c28f6\n0012300 437ff0a4 44540a3d 44b70a8f 00010001\n0012320 00000001 41770625 41779db2 416dbe77\n0012340 3db43958 3ded9168 3df9db23 00000000\n0012360 00000000 00000000 00000007 00303030\n0012400 00000007 00303030 00000007 00303030\n0012420 0000000a 30303030 00003030 0000000a\n0012440 30303030 00003030 0000000a 30303030\n0012460 00003030 00000008 36353030 00000008\n0012500 36303030 00000008 36303030 bd2c0831\n0012520 bd1ba5e3 bced9168 00000000 00000011\n0012540 35373055 36393030 37393237 00000036\n0012560 3f3851ec 418d999a 4184cccd 3ef2b021\n0012600 be1cac08 3ea45a1d 023c2b4e 00000000\n0012620 00000000 00004617 00000000 66a70018\n0012640 003f0008 ff280903 ffffffff 000e1fff\n0012660 00000005 00000073 fffffd95 000000ac\n0012700 00003d52 4375c128 c1423492 4026dedb\n0012720 41cb3f9a 3e8f5c29 3e851eb8 00000051\n0012740 c28f23d7 0000000a 31313131 00003131\n0012760 4182e76d 417a5604 41796042 3dba5e35\n0013000 3dced917 3e560419 4182e76d 417a5604\n0013020 41796042 3f9ae148 3f63d70a 3f428f5c\n0013040 437fc51f 44542d71 44b6dae1 00010001\n0013060 00000001 41839375 4178cccd 417a3d71\n0013100 3edcac08 3df1a9fc 3e818937 00000000\n0013120 00000000 00000000 00000007 00303030\n0013140 00000007 00303030 00000007 00303030\n0013160 0000000a 30303030 00003030 0000000a\n0013200 30303030 00003030 0000000a 30303030\n0013220 00003030 00000008 35313130 00000008\n0013240 36303030 00000008 36303030 bd2c0831\n0013260 bd1ba5e3 bced9168 00000000 3f378d50\n0013300 3d75c28f 3f46e979 023c2b4d 00000000\n0013320 00000000 00004617 00000000 66a70018\n0013340 003f0007 ff280903 ffffffff 000e1fff\n0013360 00000005 00000073 fffffd95 000000ac\n0013400 00003d55 4375cc2c c1423065 4028c522\n0013420 41cafd30 3df5c28f 3df5c28f 00000056\n0013440 c35f028f 
0000000a 31313131 00003131\n0013460 417ec49c 41735c29 416e5a1d 3d9374bc\n0013500 3d9374bc 3dced917 417ec49c 41735c29\n0013520 416e5a1d 3f733333 3f75c28f 3f95c28f\n0013540 43804148 4453feb8 44b6f0f6 00010001\n0013560 00000001 417be354 4172c8b4 4170624e\n0013600 3db43958 3e5c28f6 3e841893 00000000\n0013620 00000000 00000000 00000007 00303030\n0013640 00000007 00303030 00000007 00303030\n0013660 0000000a 30303030 00003030 0000000a\n0013700 30303030 00003030 0000000a 30303030\n0013720 00003030 00000008 36333030 00000008\n0013740 36323030 00000008 36303030 bd2c0831\n0013760 bd1ba5e3 bced9168 00000000 3f36872b\n0014000 3ea04189 3f8353f8 023c2b4c 00000000\n0014020 00000000 00004617 00000000 66a70018\n0014040 003f0006 ff280903 edbefbff 000fffff\n0014060 00000005 00000073 fffffd95 000000ac\n0014100 00003d56 4375c913 c1422f27 402844e1\n0014120 41cb109b 3e3851ec 3e2e147b 00000078\n0014140 c33470a4 0000000a 30313031 00003030\n0014160 4183d4fe 418076c9 417cf9db 3dced917\n0014200 3e116873 4183d4fe 418076c9 4182020c\n0014220 3f99999a 3f7851ec 437f2e14 445410a4\n0014240 40d9999a 00010001 4184c28f 4184c8b4\n0014260 3e820c4a 3e947ae1 00000000 00000000\n0014300 00000000 00000007 00303030 00000007\n0014320 00303030 00000007 006c6966 0000000a\n0014340 30303030 00003030 0000000a 30303030\n0014360 00003030 0000000a 30303030 00003030\n0014400 00000008 36313030 00000008 36303030\n0014420 00000005 00000030 bd2c0831 bd1ba5e3\n0014440 bced9168 00000000 00000011 35373055\n0014460 36393030 33363537 00000036 3e8a3d71\n0014500 4191999a 418f3333 3ed78d50 3e7ced91\n0014520 3f2b020c 023c2b4b 00000000 00000000\n0014540 00004617 00000000 66a70018 003f0005\n0014560 ff280903 ffffffff 000fffff 00000005\n0014600 00000073 fffffd95 000000ac 00003d58\n0014620 4375b5f4 c1422d49 4025180d 41cb86b3\n0014640 3df5c28f 3df5c28f 0000002d 42a4e666\n0014660 0000000a 31313131 00003131 416c28f6\n0014700 4164ac08 4164872b 3cf5c28f 3d1374bc\n0014720 3d8d4fdf 416c28f6 4164ac08 4164872b\n0014740 3f5eb852 3fa00000 3f47ae14 437fcf5c\n0014760 44542f5c 44b6fa3d 00010001 00000001\n0015000 416cf5c3 41643d71 416251ec 3cf5c28f\n0015020 3d79db23 3df5c28f 00000000 00000000\n0015040 00000000 00000007 00303030 00000007\n0015060 00303030 00000007 00303030 0000000a\n0015100 30303030 00003030 0000000a 30303030\n0015120 00003030 0000000a 30303030 00003030\n0015140 00000008 36363030 00000008 36363030\n0015160 00000008 35333130 bd2c0831 bd1ba5e3\n0015200 bced9168 00000000 00000011 35373055\n0015220 36393030 34323137 00000034 3ef5c28f\n0015240 41826666 417b3333 3eef9db2 3c1374bc\n0015260 3ef43958 023c2b4a 00000000 00000000\n0015300 00004617 00000000 66a70018 003f0004\n0015320 ff280903 edbefbff 000fffff 00000005\n0015340 00000073 fffffd95 000000ac 00003d5b\n0015360 4375c246 c1422b6b 40272e84 41cb3b94\n0015400 3e6147ae 3e4ccccd 00000049 c2add1ec\n0015420 0000000a 30313031 00003030 418570a4\n0015440 41812f1b 417e0419 3df7ced9 3e147ae1\n0015460 418570a4 41812f1b 4182999a 3f51eb85\n0015500 3f5eb852 438007ae 44540e14 40d051ec\n0015520 00010001 41859ba6 41802d0e 3e25e354\n0015540 3ec5a1cb 00000000 00000000 00000000\n0015560 00000007 00303030 00000007 00303030\n0015600 00000007 006c6966 0000000a 30303030\n0015620 00003030 0000000a 30303030 00003030\n0015640 0000000a 30303030 00003030 00000008\n0015660 36303030 00000008 36303030 00000005\n0015700 00000030 bd2c0831 bd1ba5e3 bced9168\n0015720 00000000 00000011 35373055 36393030\n0015740 39303437 00000039 3eb851ec 419b3333\n0015760 41926666 3f083127 3e8b4396 3f4dd2f2\n0016000 023c2b49 00000000 00000000 00004617\n0016020 00000000 
66a70018 003f0003 ff280903\n0016040 ffffffff 000fffff 00000005 00000073\n0016060 fffffd95 000000ac 00003d5f 4375c992\n0016100 c14225e5 40287a31 41cb1070 3f028f5c\n0016120 3f028f5c 0000005a c33b3852 0000000a\n0016140 31313131 00003131 4187147b 41835c29\n0016160 4180a3d7 3e189375 3e49ba5e 3e947ae1\n0016200 4187147b 41835c29 4180a3d7 3f9ae148\n0016220 3f451eb8 3f947ae1 437f75c3 44540ccd\n0016240 44b6e948 00010001 00000001 418af5c3\n0016260 418022d1 41a50000 3ea9fbe7 3e9a9fbe\n0016300 410e353f 00000000 00000000 00000000\n0016320 00000007 00303030 00000007 00303030\n0016340 00000007 00303030 0000000a 30303030\n0016360 00003030 0000000a 30303030 00003030\n0016400 0000000a 30303030 00003030 00000008\n0016420 36303030 00000008 36303030 00000008\n0016440 36303030 bd2c0831 bd1ba5e3 bced9168\n0016460 00000000 00000011 35373055 36393030\n0016500 34373537 00000039 3f266666 4198cccd\n0016520 418f3333 3eee147b 3eae147b 3f4e147b\n0016540 023c2b48 00000000 00000000 00004617\n0016560 00000000 66a70018 003f0002 ff280903\n0016600 ffffffff 000e1fff 00000005 00000073\n0016620 fffffd95 000000ac 00003d61 4375b6b0\n0016640 c1422463 40255671 41cb84f3 3e6b851f\n0016660 3e6147ae 00000047 4290bd71 0000000a\n0016700 31313131 00003131 41847ae1 4179d2f2\n0016720 4177a9fc 3de147ae 3dc28f5c 3e52f1aa\n0016740 41847ae1 4179d2f2 4177a9fc 3f400000\n0016760 3f91eb85 3f970a3d 437fbae1 44543852\n0017000 44b6e052 00010001 00000001 41839ba6\n0017020 41781062 41770625 3e52f1aa 3e818937\n0017040 3ed6872b 00000000 00000000 00000000\n0017060 00000007 00303030 00000007 00303030\n0017100 00000007 00303030 0000000a 30303030\n0017120 00003030 0000000a 30303030 00003030\n0017140 0000000a 30303030 00003030 00000008\n0017160 36303030 00000008 36313030 00000008\n0017200 36303030 bd2c0831 bd1ba5e3 bced9168\n0017220 00000000 3f722d0e 3e0a3d71 3f8a5e35\n0017240 023c2b47 00000000 00000000 00004617\n0017260 00000000 66a70018 003f0001 ff280903\n0017300 ffffffff 000fffff 00000005 00000073\n0017320 fffffd95 000000ac 00003d62 4375b331\n0017340 c14221b7 4024c9cd 41cb9b43 3ea8f5c3\n0017360 3e75c28f 0000004d 42f0e666 0000000a\n0017400 31313131 00003131 4182b22d 417ced91\n0017420 417bc28f 3dc8b439 3ddd2f1b 3e4ed917\n0017440 4182b22d 417ced91 417bc28f 3f7ae148\n0017460 3fa3d70a 3fa28f5c 437fc51f 44542c29\n0017500 44b6fd71 00010001 00000001 41808937\n0017520 4179f3b6 4173ef9e 3dcccccd 3e4ed917\n0017540 3e872b02 00000000 00000000 00000000\n0017560 00000007 00303030 00000007 00303030\n0017600 00000007 00303030 0000000a 30303030\n0017620 00003030 0000000a 30303030 00003030\n0017640 0000000a 30303030 00003030 00000008\n0017660 36303030 00000008 36313030 00000008\n0017700 36303030 bd2c0831 bd1ba5e3 bced9168\n0017720 00000000 00000011 35373055 36393030\n0017740 32363037 00000035 3e8f5c29 41966666\n0017760 418d999a 3f076c8b 3d958106 3f1a1cac\n0020000\n", "msg_date": "Mon, 11 Oct 1999 11:57:16 -0300", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " }, { "msg_contents": "Martin Weinberg <[email protected]> writes:\n> Ok done. 
Here's what I find:\n\n> (gdb) p i\n> $1 = 48\n> (gdb) p *tuple\n> $2 = {t_len = 352, t_self = {ip_blkid = {bi_hi = 24, bi_lo = 26279}, \n> ip_posid = 19}, t_data = 0x41061710}\n> (gdb) p *tuple->t_data\n> $3 = {t_oid = 37497689, t_cmin = 0, t_cmax = 0, t_xmin = 17943, t_xmax\n> = 0, \n> t_ctid = {ip_blkid = {bi_hi = 24, bi_lo = 26279}, ip_posid = 19}, \n> t_natts = 63, t_infomask = 2307, t_hoff = 40 '(', t_bits = \"<FF><FF><FF><FF>\"}\n\nLooks good. For fun you might try \"select * from psc where oid = 37497689\"\nand see if it succeeds or not on a \"retail\" retrieval of the problem tuple.\n\n> Now, check me to make sure I've followed you correctly:\n> Since 1GB of blocks is 0x20000, this data in the 13th GB.\n> The offset into the 13th is 26279.\n> So I did:\n> dd if=psc.12 skip=26279 count=1 bs=8k | od -t x > ~/dump.hex\n\nLooks right to me.\n\n> I'm not sure what I'm looking for in dump.hex.\n\nCould you send the whole dump? I can figure this stuff out by hand but\nI'm not sure I can explain it to someone else. (The relevant data\nstructure declarations are in various files in src/include/storage/ and\nsrc/include/access/ if you want to look for yourself.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Oct 1999 10:59:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] memory problems in copying large table to STDOUT " } ]
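The block arithmetic walked through by hand in this thread (bi_hi/bi_lo from t_self, 8 KB blocks, 1 GB segment files) can be written down as a small sketch. It is illustration only, not backend code: the constants assume a stock 6.5 build, and the values are the ones gdb printed above.

    /*
     * The same arithmetic done by hand in the thread: turn the t_self/t_ctid
     * printed by gdb into a segment file and an 8 KB block offset for dd.
     * Constants assume a stock 6.5 build; this is illustration, not backend code.
     */
    #include <stdio.h>

    #define BLCKSZ      8192
    #define RELSEG_SIZE 131072        /* 0x20000 blocks of 8 KB = 1 GB per segment */

    int main(void)
    {
        unsigned int  bi_hi = 24, bi_lo = 26279;   /* from "p *tuple" above */
        unsigned int  ip_posid = 19;               /* line pointer slot on the page */
        unsigned long block, segno, skip;

        block = ((unsigned long) bi_hi << 16) | bi_lo;
        segno = block / RELSEG_SIZE;               /* 0 would mean the bare "psc" file */
        skip  = block % RELSEG_SIZE;

        printf("block %lu -> psc.%lu, dd skip=%lu bs=%d (tuple slot %u)\n",
               block, segno, skip, BLCKSZ, ip_posid);
        return 0;
    }

With the values gdb printed (bi_hi = 24, bi_lo = 26279) this reports block 1599143, that is segment psc.12 at skip 26279, which matches the dd command used in the thread.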
[ { "msg_contents": "Hi,\n\nI'd like to announce the release of pgxml 1.0, a tool which outputs\nPostgreSQL queries in XML format.\n\nHere's a small example:\n\nRun\n\tpgxml -d books -c \"select * from books\" \\\n\t\t-t library,book -s books.css -o books.xml\n\nThe output might look like this\n\n\t<?xml version=\"1.0\"?>\n\t<?xml-stylesheet href=\"books.css\"type=\"text/css\"?>\n\t<!-- Generated by pgxml 1.0 -->\n\t<!DOCTYPE library [\n\t<!ELEMENT library (book)*>\n\t<!ELEMENT book (id?, title?, author?)>\n\t<!ELEMENT id (#PCDATA)>\n\t<!ELEMENT title (#PCDATA)>\n\t<!ELEMENT author (#PCDATA)>\n\t]>\n\t<library>\n\t<book>\n\t<id>1</id>\n\t<title>Hitchhiker's Guide to the Galaxy</title>\n\t<author>Douglas Adams</author>\n\t</book>\n\n\t[...]\n\n\t<book>\n\t<id>4</id>\n\t<title>The C Programming Language</title>\n\t<author>Brian W. Kernighan and Dennis M. Ritchie</author>\n\t</book>\n\t</library>\n\n\nCheck it out at http://www.morinel.demon.nl/pgxml/ or download it from\nhttp://www.morinel.demon.nl/pgxml/pgxml-1.0.tar.gz\n\nAs I make heavy use of stylesheets, my website is best viewed with Mozilla\nor M$ IE 5 :-(\n\n\nCheers,\n\nJeroen \n", "msg_date": "Sun, 10 Oct 1999 14:35:01 +0200", "msg_from": "Jeroen van Vianen <[email protected]>", "msg_from_op": true, "msg_subject": "pgxml 1.0 released" } ]
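The pgxml announcement above shows the tool's output but not its internals. As a rough illustration only (this is not pgxml's code), the core of such a query-to-XML converter is a nested loop over a PGresult; the tag names below are taken from the announcement's example, XML escaping and the DOCTYPE/stylesheet headers are left out, and the helper name print_xml is invented.

    /*
     * Rough sketch, not pgxml's source: walk a query result and print one
     * element per row with one child element per column.  Escaping of
     * &, < and > and the DTD/stylesheet header are omitted.
     */
    #include <stdio.h>
    #include "libpq-fe.h"

    static void print_xml(PGconn *conn, const char *query,
                          const char *set_tag, const char *row_tag)
    {
        PGresult *res = PQexec(conn, query);
        int       r, c;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return;
        }
        printf("<%s>\n", set_tag);
        for (r = 0; r < PQntuples(res); r++)
        {
            printf("<%s>\n", row_tag);
            for (c = 0; c < PQnfields(res); c++)
                printf("<%s>%s</%s>\n",
                       PQfname(res, c), PQgetvalue(res, r, c), PQfname(res, c));
            printf("</%s>\n", row_tag);
        }
        printf("</%s>\n", set_tag);
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=books");

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;
        print_xml(conn, "select * from books", "library", "book");
        PQfinish(conn);
        return 0;
    }

A real converter also has to escape &, < and > in the column values, which is skipped here.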
[ { "msg_contents": "We are continuing to get ipc/shared memory error reports from people who\nhave not looked at the FAQ and platform-specific FAQ's.\n\nThis is easily fixed by mentioning the FAQ locations in the error\nmessages, and have added that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Oct 1999 12:55:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Mention of FAQ in shared memory/ipc errors" } ]
[ { "msg_contents": "Gentlemen, pardon my non-hacker's intrusion, into this hacker zone.\nI have two projects - one rather urgent - for a pgSQL programmer(s)\nand thought this would be the right place to ask.\nBoth projects are for different types of retail businesses.\nI have a dBase-3 code of a working application to be used as a prototype\n\nfor one of them.\nMy business is located in New York City, so, some proximity - I imagine\n-\nwould be a convenience.\n If anyone of you is interested - please contact me at my email\naddress.\n\n Victor.\n\n", "msg_date": "Sun, 10 Oct 1999 17:59:19 -0400", "msg_from": "Victor Kane <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for pgsql programmer" } ]
[ { "msg_contents": "\nTo ./configure PostgreSQL 6.5.2 to run with JDBC 1.1.7 what encoding\nshould I use in the following:\n\n\"--with-mb-encoding enable multi-byte support\" (from\nwww.postgresql.org/docs/admin/config.htm)\n\nMany Thanks,\n\nAllan\n\[email protected]\n\nPS I found this:\n\nUnicode can now be used in operating systems from Microsoft, IBM, DEC,\nSun, and Apple. Microsoft Windows NT uses Unicode as the default\ncharacter encoding in the OS; Windows 95 has most of the Microsoft\nUnicode API in the MFC foundation class libraries. IBM, DEC, and Sun\noffer Unicode in varying degrees of implementation, ranging from a full\nUnicode development library provided with IBM AIX, to UTF-8 multi-byte\nand UCS-4 wide-character support in Solaris, following the POSIX model.\n\nat\n\nhttp://www.isoc.org/isoc/whatis/conferences/inet/97/proceedings/A8/A8_2.HTM\n\nso should I specify Unicode? Guess that it is worth a try.... Guess\nthat I should check out the Sun site too.\n\n\n", "msg_date": "Mon, 11 Oct 1999 10:30:44 +0200", "msg_from": "\"Allan Huffman\" <[email protected]>", "msg_from_op": true, "msg_subject": "--with-mb-encoding?" } ]
[ { "msg_contents": "Hi,\n\nI have a cron job to vacuum table which updates rather frequently\nand after month of work I'm getting NOTICE\n\nNOTICE: Index hits_pkey: NUMBER OF INDEX' TUPLES (10003) IS NOT THE SAME AS \nHEAP' (10006)\n\nI use \n\npsql -tq discovery < vacuum_hits.sql\nwhere vacuum_hits.sql is:\n\nvacuum analyze hits(msg_id);\nbegin work;\ndrop index hits_pkey;\ncreate unique index hits_pkey on hits(msg_id);\nend work;\n\nI rebuild index hits_pkey to avoid infinite grow - well, after Vadim's\npatch it still grows when table intensively updates.\n\nI've dumped and restored this table but I still get NOTICE message.\nThe site I developed is in alpha stage, so sometimes I restart http\nserver. I use persistent connection with postgres database so\neach http process has persistent connection with postgres database.\nCould be the problem If I just kill http processes and restart them ?\nOr I have to stop all postgres processes before ?\nWhat's the best way to manage persistent connections in 24*7 regime\nof http server ?\n\n\tRegards,\n\n\t\tOleg\n\nPS.\nForget to note: postgres 6.5.2, Linux 2.0.37, Apache 1.3.9,\n modperl 1.21, ApacheDBI 0.87\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Mon, 11 Oct 1999 13:32:47 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "vaccum problem" } ]
[ { "msg_contents": "> Dear Tatsuo (did I get your name right this time),\n\nYes:-)\n\n> After installing PostgreSQL 6.5.2\n> --with-mb=UNICODE\n> \n> I am still getting a truncation of some varchar columns. When accessing a table\n> in PostgreSQL I get this:\n> \u001b$(C\u001b(BERROR: Conversion between UNICODE and SQL_ASCII is not supported\u001b$(D\u001b(B\n\nWhat kind of client program are you using? If it's psql or something\nusing libpq, you are likely setting an environment variable\nPGCLIENTENCODING to SQL_ASCII. As the message said, Conversion between\nUNICODE and SQL_ASCII is not currently supported. You should unset the\nvariable and use a client program that understands UNICODE (UTF-8).\n\nIf your client program is a Java using JDBC, I have no idea at all\nsince I am not involved into the PostgreSQL JDBC driver.\n---\nTatsuo Ishii\n", "msg_date": "Mon, 11 Oct 1999 23:04:22 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: PostgreSQL Help " } ]
[ { "msg_contents": "I have moved my easy C interface to pgsql into the main tree, and\nrenamed the old 'pginterface' to 'pgeasy'. Should compile/install\ncleanly.\n\nThomas, can you convert the manual page to SGML and add it to the other\nmanuals? I have attached the troff manual page source. Should I try\nthe conversion myself?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n.\\\" This is -*-nroff-*-\n.\\\" XXX standard disclaimer belongs here....\n.\\\" $Header: /usr/local/cvsroot/pgsql/src/interfaces/libpgeasy/pgeasy.3,v 1.1 1999/10/11 18:03:00 momjian Exp $\n.TH PGEASY INTRO 08/08/98 PostgreSQL PostgreSQL\n.SH DESCRIPTION\nPgeasy allows you to cleanly interface to the libpq library,\nmore like a 4gl SQL interface.\n.PP\nIt consists of set of simplified C functions that encapsulate the\nfunctionality of libpq.\nThe functions are:\n\n.nf\nPGresult *doquery(char *query);\nPGconn *connectdb();\nvoid disconnectdb();\n\nint fetch(void *param,...);\nint fetchwithnulls(void *param,...);\nvoid reset_fetch();\n\nvoid on_error_continue();\nvoid on_error_stop();\n\nPGresult *get_result();\nvoid set_result(PGresult *newres);\nvoid unset_result(PGresult *oldres);\n.fi\n.PP\nMany functions return a structure or value, so you can do more work\nwith the result if required. \n.PP\nYou basically connect to the database with\n.BR connectdb ,\nissue your query with\n.BR doquery ,\nfetch the results with\n.BR fetch ,\nand finish with\n.BR disconnectdb .\n.PP\nFor\n.IR select\nqueries,\n.BR fetch \nallows you to pass pointers as parameters, and on return the variables\nare filled with data from the binary cursor you opened. These binary\ncursors can not be used if you are running the\n.BR pgeasy\nclient on a system with a different architecture than the database\nserver. If you pass a NULL pointer parameter, the column is skipped.\n.BR fetchwithnulls\nallows you to retieve the\n.IR null\nstatus of the field by passing an\n.IR int*\nafter each result pointer, which returns true or false if the field is null.\nYou can always use libpq functions on the PGresult pointer returned by\n.BR doquery .\n.BR reset_fetch\nstarts the fetch back at the beginning.\n.PP\n.BR get_result ,\n.BR set_result ,\nand\n.BR unset_result\nallow you to handle multiple result sets at the same time.\n.PP\nThere are a variety of demonstration programs in the\n.BR pgeasy\nsource directory.", "msg_date": "Mon, 11 Oct 1999 15:20:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "pginterface/pgeasy" } ]
[ { "msg_contents": "Here's the long-awaited next edition. For those who must get their hands\non some code I have a tarball at\nhttp://www.pathwaynet.com/~peter/psql.tar.bz2\nIt includes a file RELEASENOTES which should help you get started.\n\nCHANGELOG\n\n* Sorted out -e and -E: -e toggles query echoing in non-interactive\n mode (why would you want to echo queries in interactive mode?). -E\n enables showing of queries sent behind your back and is independent\n of -e.\n* Sorted out usernames and passwords: Switch -U specifies username or\n \"?\" for \"prompt me\". Switch -P is \"prompt for password\". Old -u is\n equivalent to -U ? -P. Equivalent changes to \\connect (e.g., \\c - ?\n = connect to current database and ask for new username (and prompting\n for password iff -u or -P mode)).\n* Command history is saved and loaded automatically. Also, history and\n readline are separately enabled in the code. (like anyone needs that)\n* Allow versioned startup files (like .psqlrc-6.6.0), at least during\n development.\n* psql now has internal variables. Use \\set and \\unset to\n set/unset/display them. The amount of internal state got out of hand,\n so this should help. Most of the settings will be moved to some\n variable. (The implementation is currently quite naive, but\n encapsulated well to change it later on.)\n* -q switch is now equivalent to setting variable quiet. Audited what\n messages should be quieted, what are errors, what is normal output,\n what is query output.\n* Copy-in now has customizable prompt as well.\n* Wrote a better strtok, with quoting capabilities, which is now used\n to parse the options of the slash commands. (This will break\n backward compatibility if you use filenames with spaces and quotes\n since they will now be interpreted differently. Too bad.)\n* Cleaned up various \\d<x> commands. The queries are now\n human-readable in -E mode. (Also they now show up in -E mode, and\n not in -e mode!)\n* -E mode is now the variable echo_secret. If you set the variable to\n 'noexec' then the \"backdoor\" queries will only be displayed, not\n executed. (Nice if you just want to study the queries.)\n* If a connection goes bad during query execution, PQreset is\n attempted. If it still is bad, then an _interactive_ session will be\n in unconnected mode. (Changed prompt to reflect that.)\n* Ctrl-C only sends cancel request if query in progress (otherwise\n default action = terminate program). This removes a major annoyance\n (I hope).\n* Refined \\c[onnect]'ion failures: Non-interactive scripts will\n terminate, even recursively. However, if the underlying session was\n an interactive one, it does not terminate. The database connection\n will be lost, however.\n* Password prompts are automatic (both startup and \\connect). Can\n still use -P switch, but that might prove unnecessary. [ Cheers to\n Roland R.! ]\n* Implemented \\lo_import, \\lo_export, \\lo_unlink, \\lo_list. (Still\n needs some refinement, though.)\n* Can now use \\copy with oids and delimiters. No binary, yet.\n\n\nTODO LIST\n\n* generalized backslash command handling (struct, no ..else if...)\n* new printing routines\n* rewrite mainloop parser, strip comments\n* single line mode doesn't take slash commands\n* make scripts bomb out (optionally) on query error\n* remove Rollback warnings in lo_ ops\n* \\default(s?) command\n* allow several \\ cmds on a line (add '\\' to strtokx delims?, windows?)\n\n\nSIDE NOTES\n\n1. 
Since the new animal is now probably going to be 7.0, let's provide a\n psql that's worthy of that name, er, number. I hope I can lay a\n framework with this, but there are still a few months I think, so ideas\n are welcome.\n\n1.a) On a related note, since the core developers have more important\n issues to worry about, I wouldn't mind maintaining/accompanying/taking\n care of/keeping an eye on/whatever psql until release (and possibly\n thereafter).\n \n2. What about including an snprintf() into the source tree similar what is\n done with strdup()? (No, don't look at me, it totally escapes me how to\n do that and I don't want to cheat and look at the GNU sources for\n obvious reasons.)\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 11 Oct 1999 21:43:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "psql Week 2" }, { "msg_contents": "> * Ctrl-C only sends cancel request if query in progress (otherwise\n> default action = terminate program). This removes a major annoyance\n> (I hope).\n\nYes, I always wanted that fixed. You have hit a number of other TODO\nitems. Nice.\n\n> * Refined \\c[onnect]'ion failures: Non-interactive scripts will\n> terminate, even recursively. However, if the underlying session was\n> an interactive one, it does not terminate. The database connection\n> will be lost, however.\n\nNice.\n\n> * Password prompts are automatic (both startup and \\connect). Can\n> still use -P switch, but that might prove unnecessary. [ Cheers to\n> Roland R.! ]\n\nNice\n\n> * Implemented \\lo_import, \\lo_export, \\lo_unlink, \\lo_list. (Still\n> needs some refinement, though.)\n\nAlso nice.\n> * Can now use \\copy with oids and delimiters. No binary, yet.\n\nYes, quite nice.\n\n> \n> \n> TODO LIST\n> \n> * generalized backslash command handling (struct, no ..else if...)\n> * new printing routines\n\nHow about a backslash command to print the current date/time. Good for\nperformance debugging.\n\n> * rewrite mainloop parser, strip comments\n> * single line mode doesn't take slash commands\n> * make scripts bomb out (optionally) on query error\n> * remove Rollback warnings in lo_ ops\n> * \\default(s?) command\n> * allow several \\ cmds on a line (add '\\' to strtokx delims?, windows?)\n> \n> \n> SIDE NOTES\n> \n> 1. Since the new animal is now probably going to be 7.0, let's provide a\n> psql that's worthy of that name, er, number. I hope I can lay a\n> framework with this, but there are still a few months I think, so ideas\n> are welcome.\n> \n> 1.a) On a related note, since the core developers have more important\n> issues to worry about, I wouldn't mind maintaining/accompanying/taking\n> care of/keeping an eye on/whatever psql until release (and possibly\n> thereafter).\n\nGood. It needs it.\n\n> \n> 2. What about including an snprintf() into the source tree similar what is\n> done with strdup()? (No, don't look at me, it totally escapes me how to\n> do that and I don't want to cheat and look at the GNU sources for\n> obvious reasons.)\n\nWe can do that. I thought we already did.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 16:29:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 2" }, { "msg_contents": "Will the new psql replace the old one in 7.0? If so, I need to work on\nit to incomporate the multi-byte capability. Peter, can you tell me\nwhen you will finish the work?\n---\nTatsuo Ishii\n\n", "msg_date": "Tue, 12 Oct 1999 09:55:21 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 2 " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 2. What about including an snprintf() into the source tree similar what is\n> done with strdup()?\n\nThere is one in the backend/port/ directory, along with some other\nimportant library routines that are missing on certain platforms.\nUp to now we haven't worried about including these into anything but\nthe backend, but I see no reason not to include them into psql if\nyou need 'em. (Probably would not be a good idea to put them into\nlibpq though, since that could cause conflicts with user apps that\nsupply their own versions.) See backend/port/Makefile.in for the\ntests that determine whether individual routines need to be included.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Oct 1999 21:28:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 2 " }, { "msg_contents": "On Oct 12, Tatsuo Ishii mentioned:\n\n> Will the new psql replace the old one in 7.0? If so, I need to work on\n> it to incomporate the multi-byte capability. Peter, can you tell me\n> when you will finish the work?\n\nIf I don't terminally mess it up I thunk that was the plan.\n\nI was initially planning on 4 weeks, but this week is really tight, so I\nmight need to finish with less. I don't want to occupy this thing forever\neither.\n\nMeanwhile I (think I) have been careful about multibyte stuff but it's\nprobably good if you take a look. Perhaps you can start with the tarball I\nposted. It should be working.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 13 Oct 1999 19:32:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 2 " }, { "msg_contents": "On Oct 11, Tom Lane mentioned:\n\n> Peter Eisentraut <[email protected]> writes:\n> > 2. What about including an snprintf() into the source tree similar what is\n> > done with strdup()?\n> \n> There is one in the backend/port/ directory, along with some other\n> important library routines that are missing on certain platforms.\n> Up to now we haven't worried about including these into anything but\n> the backend, but I see no reason not to include them into psql if\n> you need 'em. (Probably would not be a good idea to put them into\n> libpq though, since that could cause conflicts with user apps that\n> supply their own versions.) 
See backend/port/Makefile.in for the\n> tests that determine whether individual routines need to be included.\n\nOkay, I'm sorry, I guess I never dug that far into the backend for that.\nAll those things seem kind of useful, so for good measure they could\nperhaps be moved into the src/utils dir or a src/port dir.\n\nI was not talking about putting them into libpq as public functions, but if\nsomeone working on libpq needed them there, a way could surely be found.\nNot me though right now.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 13 Oct 1999 19:37:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 2 " }, { "msg_contents": "> I was initially planning on 4 weeks, but this week is really tight, so I\n> might need to finish with less. I don't want to occupy this thing forever\n> either.\n\nI'm not in a hurry. I just want to make sure that I have enough time\nbefore 7.0 is out. So keep your pace, please.\n\n> Meanwhile I (think I) have been careful about multibyte stuff but it's\n> probably good if you take a look. Perhaps you can start with the tarball I\n> posted. It should be working.\n\nThat's good news. I will check the tarball.\n---\nTatsuo Ishii\n", "msg_date": "Thu, 14 Oct 1999 10:33:41 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 2 " } ]
[ { "msg_contents": "You might want to include into the installation instructions (or whereever\nit will end up) that GNU bison 1.25 is required. (right after the flex\nstuff)\n\nI have encountered problems in particular with the ecpg interface where\nthe Solaris yacc generated syntactically messed-up C code and bison 1.24\ncouldn't even process one of the .y files. (In case the ecpg maintainer is\nunaware of this, let me know and I'll try to get a full problem\ndescription. This happened with the CVS sources of today.)\n\nThis is just in addition to other problems that pop up once in a while\nwith vendor-supplied yaccs on the actual backend parser, I think.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 11 Oct 1999 21:52:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "bison" }, { "msg_contents": "> You might want to include into the installation instructions (or whereever\n> it will end up) that GNU bison 1.25 is required. (right after the flex\n> stuff)\n\nWe haven't been careful about building and shipping the bison output\nin the tarball distribution, as we have for the main parser. It just\nneeds someone to look at it, as well as look at Jan's backend\nlanguages which suffer from the same symptom as I recall...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 12 Oct 1999 05:43:02 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bison" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> You might want to include into the installation instructions (or whereever\n>> it will end up) that GNU bison 1.25 is required. (right after the flex\n>> stuff)\n\n> We haven't been careful about building and shipping the bison output\n> in the tarball distribution, as we have for the main parser.\n\nHuh? src/tools/release_prep automatically builds both the main parser\nand ecpg bison output files. Is there other stuff it should be\nhandling too?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 10:36:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bison " }, { "msg_contents": "> >> You might want to include into the installation instructions (or whereever\n> >> it will end up) that GNU bison 1.25 is required. (right after the flex\n> >> stuff)\n> > We haven't been careful about building and shipping the bison output\n> > in the tarball distribution, as we have for the main parser.\n> Huh? src/tools/release_prep automatically builds both the main parser\n> and ecpg bison output files. Is there other stuff it should be\n> handling too?\n\nafaik some of Jan's language stuff uses yacc also...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 15 Oct 1999 04:33:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bison" }, { "msg_contents": ">\n> > >> You might want to include into the installation instructions (or whereever\n> > >> it will end up) that GNU bison 1.25 is required. (right after the flex\n> > >> stuff)\n> > > We haven't been careful about building and shipping the bison output\n> > > in the tarball distribution, as we have for the main parser.\n> > Huh? 
src/tools/release_prep automatically builds both the main parser\n> > and ecpg bison output files. Is there other stuff it should be\n> > handling too?\n>\n> afaik some of Jan's language stuff uses yacc also...\n\n Yepp - PL/pgSQL has it's own scanner/parser (i.e.\n flex/bison). The tricky part in this case is that the\n languages object file will be loaded at runtime into the\n backend, where the main scanner/parser is already present.\n Thus I'm mangling with sed(1) over the generated sources to\n avoid global symbol conflicts.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 15 Oct 1999 12:18:24 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bison" } ]
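To make Jan's point concrete: flex and bison emit a family of global symbols under the default yy prefix (yyparse, yylex, yylval, yytext and friends), and the backend already links one such family for the main SQL grammar, so loading an unmodified PL/pgSQL object would clash. Seen from the C side, the sed pass he describes amounts to a set of renames along these lines; the plpgsql_yy prefix and the exact symbol list are illustrative, not a quote of the real Makefile rules.

    /* Illustration only: what the sed(1) mangling over the generated
     * scanner/parser sources effectively does, so the shared object can
     * be dlopen()ed into a backend that already has yyparse()/yylex(). */
    #define yyparse   plpgsql_yyparse
    #define yylex     plpgsql_yylex
    #define yyerror   plpgsql_yyerror
    #define yylval    plpgsql_yylval
    #define yychar    plpgsql_yychar
    #define yyin      plpgsql_yyin
    #define yyout     plpgsql_yyout
    #define yytext    plpgsql_yytext
    #define yyleng    plpgsql_yyleng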
[ { "msg_contents": "While doing some tests, I have encountered too many problems with\nincompatible BLKSZ (the backend comipled in different BLKSZ with the\none in database). I know this is my fault, but it would be nice if\nthere is better way to avoid this kind of disaster. For example:\n\n(1) there is a file called PG_BLKSZ under $PGDATA.\n\n(2) postmaster checks the contents of the file to see if it was\n compiled in the same BLKSZ.\n\n(3) If not, give some error messages and exit.\n\nComments?\n---\nTatsuo Ishii\n", "msg_date": "Tue, 12 Oct 1999 10:00:09 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Different BLKSZ" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> While doing some tests, I have encountered too many problems with\n> incompatible BLKSZ (the backend comipled in different BLKSZ with the\n> one in database). I know this is my fault, but it would be nice if\n> there is better way to avoid this kind of disaster. For example:\n> \n> (1) there is a file called PG_BLKSZ under $PGDATA.\n> \n> (2) postmaster checks the contents of the file to see if it was\n> compiled in the same BLKSZ.\n> \n> (3) If not, give some error messages and exit.\n\nThere is special file pg_control for the WAL purposes - good\nplace for the BLCKSZ...\n\nVadim\n", "msg_date": "Tue, 12 Oct 1999 17:00:11 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Different BLKSZ" }, { "msg_contents": ">> While doing some tests, I have encountered too many problems with\n>> incompatible BLKSZ (the backend comipled in different BLKSZ with the\n>> one in database). I know this is my fault, but it would be nice if\n>> there is better way to avoid this kind of disaster. For example:\n>> \n>> (1) there is a file called PG_BLKSZ under $PGDATA.\n>> \n>> (2) postmaster checks the contents of the file to see if it was\n>> compiled in the same BLKSZ.\n>> \n>> (3) If not, give some error messages and exit.\n>\n>There is special file pg_control for the WAL purposes - good\n>place for the BLCKSZ...\n\nNice. Do you have some functions to access the file? Seems it is a\nbinary file.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 12 Oct 1999 18:11:52 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Different BLKSZ " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> >\n> >There is special file pg_control for the WAL purposes - good\n> >place for the BLCKSZ...\n> \n> Nice. Do you have some functions to access the file? Seems it is a\n> binary file.\n\naccess/transam/xlog.c:StartupXLOG() is called on every database\nstartup and read control file - just add BLCKSZ to\nstruct ControlFileData and check it on startup. Don't forget\nto initialize this value in BootStrapXLOG() (while creating\ncontrol file).\n\nVadim\n", "msg_date": "Tue, 12 Oct 1999 17:19:20 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Different BLKSZ" }, { "msg_contents": ">> >There is special file pg_control for the WAL purposes - good\n>> >place for the BLCKSZ...\n>> \n>> Nice. Do you have some functions to access the file? Seems it is a\n>> binary file.\n>\n>access/transam/xlog.c:StartupXLOG() is called on every database\n>startup and read control file - just add BLCKSZ to\n>struct ControlFileData and check it on startup. Don't forget\n>to initialize this value in BootStrapXLOG() (while creating\n>control file).\n\nThanks. 
I will work on this issue.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 12 Oct 1999 18:32:15 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Different BLKSZ " }, { "msg_contents": ">>access/transam/xlog.c:StartupXLOG() is called on every database\n>>startup and read control file - just add BLCKSZ to\n>>struct ControlFileData and check it on startup. Don't forget\n>>to initialize this value in BootStrapXLOG() (while creating\n>>control file).\n>\n>Thanks. I will work on this issue.\n\nI have committed changes to xlog.c. If the blcksz of database does not\nmatch the one of the backend, you will get following error message and\npostmaster won't start.\n\nDEBUG: Data Base System is starting up at Tue Oct 12 19:11:03 1999\nFATAL 2: database was initialized in BLCKSZ(0), but the backend was\ncompiled in BLCKSZ(8192)\n\nThis change requires initdb.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 12 Oct 1999 19:29:21 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Different BLKSZ " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> While doing some tests, I have encountered too many problems with\n> incompatible BLKSZ (the backend comipled in different BLKSZ with the\n> one in database). I know this is my fault, but it would be nice if\n> there is better way to avoid this kind of disaster.\n\nI think this is a fine idea, but BLKSZ is not the only critical\nparameter that should be verified at startup. RELSEG_SIZE is\nalso critical and should be checked the same way. Are there any\nother configuration variables that affect database layout? I can't\nthink of any offhand, but maybe someone else can.\n\nAnother thing I would like to see handled in the same way is some sort\nof \"database layout serial number\" that is not the same as the official\nversion number. During development we frequently make initdb-forcing\nchanges to the contents of system tables, and sometimes not everyone\ngets the word. (I recall Thomas wasted a few hours that way after a\nrecent change of mine, for example.) We ought to have an internal\nversion number in some central source file that can be incremented at\nany time by anyone who's committing a change that requires initdb.\nThat would make sure that no one tries to run updated code against an\nincompatible database, even if they haven't been paying close attention\nto their hackers email. It could save a lot of wasted effort, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 10:27:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Different BLKSZ " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > While doing some tests, I have encountered too many problems with\n> > incompatible BLKSZ (the backend comipled in different BLKSZ with the\n> > one in database). I know this is my fault, but it would be nice if\n> > there is better way to avoid this kind of disaster.\n> \n> I think this is a fine idea, but BLKSZ is not the only critical\n> parameter that should be verified at startup. RELSEG_SIZE is\n> also critical and should be checked the same way. Are there any\n> other configuration variables that affect database layout? I can't\n> think of any offhand, but maybe someone else can.\n\nI have committed changes for RELSEG_SIZE too. 
initdb required.\n---\nTatsuo Ishii\n", "msg_date": "Sat, 16 Oct 1999 18:27:16 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Different BLKSZ " } ]
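Putting Vadim's suggestion and Tatsuo's follow-up commits together, the shape of the change is easy to sketch. ControlFileData, StartupXLOG(), BootStrapXLOG() and elog(FATAL, ...) are the symbols actually named in the thread; the field names, the surrounding code and the exact wording below are a reconstruction, not the committed diff.

    /* Sketch only: pg_control carries the compile-time layout constants,
     * and startup refuses to run a backend built with different values.
     * uint32 is the backend's type from c.h. */
    typedef struct ControlFileData
    {
        /* ... existing WAL bookkeeping fields ... */
        uint32      blcksz;         /* BLCKSZ the cluster was initdb'd with    */
        uint32      relseg_size;    /* RELSEG_SIZE, added in the later commit  */
    } ControlFileData;

    static ControlFileData *ControlFile;    /* copy read from pg_control */

    void
    StartupXLOG(void)
    {
        /* ... read pg_control into ControlFile ... */
        if (ControlFile->blcksz != BLCKSZ)
            elog(FATAL, "database was initialized in BLCKSZ(%d), "
                 "but the backend was compiled in BLCKSZ(%d)",
                 ControlFile->blcksz, BLCKSZ);
        if (ControlFile->relseg_size != RELSEG_SIZE)
            elog(FATAL, "database was initialized with RELSEG_SIZE %d, "
                 "but the backend was compiled with RELSEG_SIZE %d",
                 ControlFile->relseg_size, RELSEG_SIZE);
        /* ... normal WAL startup continues ... */
    }

    /* BootStrapXLOG() must store the same values when initdb creates
     * pg_control, otherwise every later startup would fail the check. */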