[ { "msg_contents": "<<show_index.patch>> \n\nScrappy, can you apply it to current CVS please\n\nAndreas", "msg_date": "Wed, 22 Apr 1998 18:41:35 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "patch for explain.c that shows index (il secondo)" }, { "msg_contents": "\nApplied...\n\n\nOn Wed, 22 Apr 1998, Zeugswetter Andreas SARZ wrote:\n\n> <<show_index.patch>> \n> \n> Scrappy, can you apply it to current CVS please\n> \n> Andreas\n> \n\n", "msg_date": "Mon, 27 Apr 1998 12:57:13 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] patch for explain.c that shows index (il secondo)" } ]
[ { "msg_contents": "> Even if it takes an argument of datetime? I'm preparing this for\n> contrib, what should I do? Basically, there will be one function:\n> date_format(text, datetime) returns text, which is an implementation\n> of strftime. I use mktime, which is used elsewhere in the code, but\n> only inside of #ifdef USE_POSIX_TIME blocks. I don't beleive this\n> function to be portable, but it usually has an equiavalent of\n> timelocal() on other platforms. Any suggestions? I'm autoconf\n> illiterate.\n\nIt's not an autoconfig problem, it's a problem with trying to use Unix\nsystem times to do this. mktime assumes the limited range of 32-bit Unix\nsystem time as input, and datetime has much higher precision and much\nwider range. So, you can do two approaches:\n\n1) check the year field of the datetime input after it is broken up into\nthe tm structure by datetime2tm() and throw an elog(ERROR...) if it is\nout of range for mktime() or the non-posix equivalent. If it is within\nrange, just lop 1900 off of the year field and call mktime().\n\nor\n\n2) implement your own formatter which can handle a broad range of years.\n\nAs you might guess, (2) is preferable since it works for all valid\ndatetime values. You will also need to figure out how to handle the\nspecial cases \"infinity\", etc.; I would think you might want to pass\nthose through as-is.\n\nUsing datetime2tm() you already have access to the individual fields, so\nwriting something which steps through the formatting string looking for\nrelevant \"%x\" fields is pretty straight forward. Don't think that\nmktime() does much for you that you can't do yourself with 50 lines of\ncode (just guessing; ymmv :).\n\nI would also think about implementing the C code as \"datetime_format()\"\ninstead which would use the text,datetime argument pair, and then\noverload \"date_format()\" using an SQL procedure. That way you can use\neither additional C code _or_ just SQL procedures with conversions to\nimplement the same thing for the other date/time data types timestamp\nand abstime.\n\nHave fun with it...\n\n - Tom\n", "msg_date": "Thu, 23 Apr 1998 03:00:35 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Anything like strftime() for PostgreSQL?" }, { "msg_contents": "On Thu, 23 April 1998, at 03:00:35, Thomas G. Lockhart wrote:\n\n> > Even if it takes an argument of datetime? I'm preparing this for\n> > contrib, what should I do? Basically, there will be one function:\n> > date_format(text, datetime) returns text, which is an implementation\n> > of strftime. I use mktime, which is used elsewhere in the code, but\n> > only inside of #ifdef USE_POSIX_TIME blocks. I don't beleive this\n> > function to be portable, but it usually has an equiavalent of\n> > timelocal() on other platforms. Any suggestions? I'm autoconf\n> > illiterate.\n> \n> It's not an autoconfig problem, it's a problem with trying to use Unix\n> system times to do this. mktime assumes the limited range of 32-bit Unix\n> system time as input, and datetime has much higher precision and much\n> wider range. So, you can do two approaches:\n> \n> 1) check the year field of the datetime input after it is broken up into\n> the tm structure by datetime2tm() and throw an elog(ERROR...) if it is\n> out of range for mktime() or the non-posix equivalent. If it is within\n> range, just lop 1900 off of the year field and call mktime().\n\nHow do I handle the non-posix equivalent? 
is timelocal guaranteed to\nbe there if USE_POXIX_TIME isn't defined? I'd like this to be\nportable (which is why I mentioned autoconf)\n\n> \n> or\n> \n> 2) implement your own formatter which can handle a broad range of years.\n> \n> As you might guess, (2) is preferable since it works for all valid\n> datetime values. You will also need to figure out how to handle the\n> special cases \"infinity\", etc.; I would think you might want to pass\n> those through as-is.\n\nI agree.\n\n> \n> Using datetime2tm() you already have access to the individual fields, so\n> writing something which steps through the formatting string looking for\n> relevant \"%x\" fields is pretty straight forward. Don't think that\n> mktime() does much for you that you can't do yourself with 50 lines of\n> code (just guessing; ymmv :).\n\nYeah, unfortunately strftime (mktime is for getting the wday and yday\nvalues set correctly) has locale support, and quite a bit of options.\n\n> \n> I would also think about implementing the C code as \"datetime_format()\"\n> instead which would use the text,datetime argument pair, and then\n> overload \"date_format()\" using an SQL procedure. That way you can use\n> either additional C code _or_ just SQL procedures with conversions to\n> implement the same thing for the other date/time data types timestamp\n> and abstime.\n\nI'll do that..\n\n> \n> Have fun with it...\n> \n\nNah, I just want to get it out there. I have fun stuff to move on to\n:)\n", "msg_date": "Thu, 23 Apr 1998 18:29:34 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Anything like strftime() for\n PostgreSQL?" }, { "msg_contents": "> \n> On Thu, 23 April 1998, at 03:00:35, Thomas G. Lockhart wrote:\n> \n> > > Even if it takes an argument of datetime? I'm preparing this for\n> > > contrib, what should I do? Basically, there will be one function:\n> > > date_format(text, datetime) returns text, which is an implementation\n> > > of strftime. I use mktime, which is used elsewhere in the code, but\n> > > only inside of #ifdef USE_POSIX_TIME blocks. I don't beleive this\n> > > function to be portable, but it usually has an equiavalent of\n> > > timelocal() on other platforms. Any suggestions? I'm autoconf\n> > > illiterate.\n> > \n> > It's not an autoconfig problem, it's a problem with trying to use Unix\n> > system times to do this. mktime assumes the limited range of 32-bit Unix\n> > system time as input, and datetime has much higher precision and much\n> > wider range. So, you can do two approaches:\n> > \n> > 1) check the year field of the datetime input after it is broken up into\n> > the tm structure by datetime2tm() and throw an elog(ERROR...) if it is\n> > out of range for mktime() or the non-posix equivalent. If it is within\n> > range, just lop 1900 off of the year field and call mktime().\n> \n> How do I handle the non-posix equivalent? is timelocal guaranteed to\n> be there if USE_POXIX_TIME isn't defined? I'd like this to be\n> portable (which is why I mentioned autoconf)\n> \n> > \n> > or\n> > \n> > 2) implement your own formatter which can handle a broad range of years.\n> > \n> > As you might guess, (2) is preferable since it works for all valid\n> > datetime values. 
You will also need to figure out how to handle the\n> > special cases \"infinity\", etc.; I would think you might want to pass\n> > those through as-is.\n> \n> I agree.\n> \n> > \n> > Using datetime2tm() you already have access to the individual fields, so\n> > writing something which steps through the formatting string looking for\n> > relevant \"%x\" fields is pretty straight forward. Don't think that\n> > mktime() does much for you that you can't do yourself with 50 lines of\n> > code (just guessing; ymmv :).\n> \n> Yeah, unfortunately strftime (mktime is for getting the wday and yday\n> values set correctly) has locale support, and quite a bit of options.\n> \n> > \n> > I would also think about implementing the C code as \"datetime_format()\"\n> > instead which would use the text,datetime argument pair, and then\n> > overload \"date_format()\" using an SQL procedure. That way you can use\n> > either additional C code _or_ just SQL procedures with conversions to\n> > implement the same thing for the other date/time data types timestamp\n> > and abstime.\n> \n> I'll do that..\n> \n> > \n> > Have fun with it...\n> > \n> \n> Nah, I just want to get it out there. I have fun stuff to move on to\n> :)\n> \n\nConsider stealing one of the date manipulation packages from the Perl CPAN\narchive. One of them (can't remember which right now) has a full set of\ndate formatting, parseing, and arithemetic routines in C.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 23 Apr 1998 19:13:18 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Anything like strftime() for\n PostgreSQL?" } ]
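A minimal sketch of option (2) above, for concreteness: walk the format string and expand each "%x" specifier straight from the struct tm that datetime2tm() has already filled in, so mktime() and its 32-bit range limit never enter the picture. The function name, the specifier subset, and the truncation policy are assumptions for illustration, not code from this thread; it also assumes the backend convention of a full year in tm_year (hence the "lop 1900 off" remark above) and a 1-based tm_mon -- adjust if datetime2tm() follows libc's 0-based months.

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Sketch only: expand a strftime-like format from an already-decoded tm. */
static void
format_tm(char *dst, size_t dstlen, const char *fmt, const struct tm *tm)
{
    size_t  used = 0;
    char    field[64];

    for (; *fmt != '\0'; fmt++)
    {
        field[0] = *fmt;
        field[1] = '\0';
        if (*fmt == '%' && fmt[1] != '\0')
        {
            switch (*++fmt)
            {
                case 'Y': sprintf(field, "%04d", tm->tm_year); break; /* full year */
                case 'm': sprintf(field, "%02d", tm->tm_mon); break;
                case 'd': sprintf(field, "%02d", tm->tm_mday); break;
                case 'H': sprintf(field, "%02d", tm->tm_hour); break;
                case 'M': sprintf(field, "%02d", tm->tm_min); break;
                case 'S': sprintf(field, "%02d", tm->tm_sec); break;
                case '%': strcpy(field, "%"); break;
                default: sprintf(field, "%%%c", *fmt); break; /* pass unknown through */
            }
        }
        if (used + strlen(field) >= dstlen)
            break;                      /* truncate rather than overflow */
        strcpy(dst + used, field);
        used += strlen(field);
    }
    dst[used] = '\0';
}

Locale-dependent specifiers (%a, %b, ...) and the "infinity" special values would be layered on top, as discussed above; the point is only that nothing here cares how far the year is from 1970.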
[ { "msg_contents": "I just noticed that there is an operator '=:'. What is it used for? \n\nAt the moment I have disabled ':' a_expr in the parser since it's not\ndistinguishable from a variable. The same problem appears with this\noperator. But it can be worked around by using '= :<var>' instead of\n'=:<var>'. On the other hand I wonder whether this operator is still needed\nwhen ':' is not allowed in a_expr, b_expr, ...\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 23 Apr 1998 09:33:45 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Just another question" }, { "msg_contents": "> I just noticed that there is an operator '=:'. What is it used for?\n\ntgl=> select * from pg_operator where oprname = '=:';\n...\n(0 rows)\n\n?? I don't see it here.\n\n - Tom\n", "msg_date": "Thu, 23 Apr 1998 12:06:51 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Just another question" } ]
[ { "msg_contents": "unsubscribe\n", "msg_date": "Thu, 23 Apr 1998 15:01:47 +0300", "msg_from": "Voinea Gheorghe <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "Hi, all\n\nI have this message when I try to compile pginterface...\nPlease help...\n\n$ make all\ncc -c -g -Wall -I/usr/local/pgsql/include pginterface.c halt.c\ncc -g -Wall -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq -o .c\n/usr/lib/crt1.o: In function `_start':\n/usr/lib/crt1.o(.text+0x5a): undefined reference to `main'\nmake: *** [.c] Error 1\nverde:~/pgsql/contrib/pginterface$\n Jose'\n\n", "msg_date": "Thu, 23 Apr 1998 12:06:00 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "pginterface compiling" }, { "msg_contents": "\"Jose' Soares Da Silva\" wrote:\n >Hi, all\n >\n >I have this message when I try to compile pginterface...\n >Please help...\n >\n >$ make all\n >cc -c -g -Wall -I/usr/local/pgsql/include pginterface.c halt.c\n >cc -g -Wall -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq -o \n >.c\n >/usr/lib/crt1.o: In function `_start':\n >/usr/lib/crt1.o(.text+0x5a): undefined reference to `main'\n >make: *** [.c] Error 1\n\nWell, you don't seem to have specified any files to link. The second\ncc command says to create an executable called `.c' but doesn't specify\nand files to compile or link, except the libpq library. So naturally, cc\ncannot find a main() routine; that is what it is complaining about.\n\nAh, I see it's not your fault but the Makefile's...\n\nI guess you are using GNU make, as I am. This works for me:\n\n=======================================================\n#\n# Makefile for GNU make\n#\n#\nPGINTERFACE = pginterface.o halt.o \nTARGET = pginsert pgwordcount pgnulltest\nCFLAGS = -g -Wall -I/usr/local/pgsql/include -I.\nLDFLAGS = -L/usr/local/pgsql/lib -lpq\n\nall : $(TARGET)\n\n$(TARGET): %: %.o ${PGINTERFACE}\n $(CC) -o $@ $(CFLAGS) [email protected] ${PGINTERFACE} $(LDFLAGS)\n\nclean:\n rm -f *.o $(TARGET) log core\n\ninstall:\n make clean\n make CFLAGS=-O\n install -s -o bin -g bin $(TARGET) /usr/local/bin\n=======================================================\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Thu, 23 Apr 1998 15:20:26 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] pginterface compiling " }, { "msg_contents": "> \n> Hi, all\n> \n> I have this message when I try to compile pginterface...\n> Please help...\n> \n> $ make all\n> cc -c -g -Wall -I/usr/local/pgsql/include pginterface.c halt.c\n> cc -g -Wall -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq -o .c\n> /usr/lib/crt1.o: In function `_start':\n> /usr/lib/crt1.o(.text+0x5a): undefined reference to `main'\n> make: *** [.c] Error 1\n> verde:~/pgsql/contrib/pginterface$\n> Jose'\n\nSomeone has been playing with the Makefile. Basically it is complaining\nbecause you have only specified libraries in the compile line. You need\na function to call the pginterface, with a main. Try one of the sample\n*.c files.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 11:19:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pginterface compiling" } ]
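For the record, the failure is in the link step, not the compile: the second cc line names an output file (".c", apparently from a broken Makefile substitution) but no object files, so crt1.o's _start has no main() to resolve. Oliver's Makefile above is the real fix; as a sanity check, even a stub translation unit like the following links and runs against libpq (the file name and connection parameters are made up for illustration):

/* stub.c -- supplies the main() the linker was complaining about */
#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn     *conn = PQsetdb(NULL, NULL, NULL, NULL, "template1");

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }
    printf("connected\n");
    PQfinish(conn);
    return 0;
}

Built with something like: cc -g -Wall -I/usr/local/pgsql/include stub.c -L/usr/local/pgsql/lib -lpq -o stub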
[ { "msg_contents": "But scan.l returns Op.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThomas G. Lockhart [SMTP:[email protected]]\n> Sent:\tThursday, April 23, 1998 2:07 PM\n> To:\tMichael Meskes\n> Cc:\tPostgreSQL Hacker\n> Subject:\tRe: [HACKERS] Just another question\n> \n> > I just noticed that there is an operator '=:'. What is it used for?\n> \n> tgl=> select * from pg_operator where oprname = '=:';\n> ...\n> (0 rows)\n> \n> ?? I don't see it here.\n> \n> - Tom\n", "msg_date": "Thu, 23 Apr 1998 14:14:12 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Just another question" }, { "msg_contents": "> > > I just noticed that there is an operator '=:'. What is it used \n> > > for?\n> > ?? I don't see it here.\n> But scan.l returns Op.\n\nOh, it is an _allowed_ operator symbol combination, if someone were to\ndefine an operator using it. But it isn't pre-defined anywhere, is it? \n\nAnd, it should be OK to require spaces to help delimit your embedded\nstuff; that is, \"=:\" is interpreted as a possible operator, while \"= :\"\n(with space) is \"equals embedded variable\"...\n\nI'd hate to keep removing single characters from the allowed operator\ncharacter set when we get syntax conflicts like this. We'll end up with\nonly the SQL92-allowed operator symbols before long :)\n\n - Tom\n", "msg_date": "Thu, 23 Apr 1998 12:56:22 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Just another question" } ]
[ { "msg_contents": "The next one:\n\ndiff -rcN ecpg/ChangeLog ecpg.mm/ChangeLog\n*** ecpg/ChangeLog\tTue Apr 21 15:22:19 1998\n--- ecpg.mm/ChangeLog\tThu Apr 23 13:55:53 1998\n***************\n*** 126,128 ****\n--- 126,137 ----\n \n \t- Set indicator to amount of data really written (truncation).\n \n+ Thu Apr 23 09:27:16 CEST 1998\n+ \n+ \t- Also allow call in whenever statement with the same functionality\n+ \t as do.\n+ \n+ Thu Apr 23 12:29:28 CEST 1998\n+ \n+ \t- Also rewrote variable declaration part. It is now possible to\n+ \t declare more than one variable per line.\ndiff -rcN ecpg/preproc/Makefile ecpg.mm/preproc/Makefile\n*** ecpg/preproc/Makefile\tTue Apr 21 15:22:21 1998\n--- ecpg.mm/preproc/Makefile\tThu Apr 23 14:25:39 1998\n***************\n*** 2,8 ****\n include $(SRCDIR)/Makefile.global\n \n MAJOR_VERSION=2\n! MINOR_VERSION=0\n PATCHLEVEL=0\n \n CFLAGS+=-I../include -DMAJOR_VERSION=$(MAJOR_VERSION) \\\n--- 2,8 ----\n include $(SRCDIR)/Makefile.global\n \n MAJOR_VERSION=2\n! MINOR_VERSION=1\n PATCHLEVEL=0\n \n CFLAGS+=-I../include -DMAJOR_VERSION=$(MAJOR_VERSION) \\\ndiff -rcN ecpg/preproc/ecpg_keywords.c ecpg.mm/preproc/ecpg_keywords.c\n*** ecpg/preproc/ecpg_keywords.c\tTue Apr 21 15:23:03 1998\n--- ecpg.mm/preproc/ecpg_keywords.c\tThu Apr 23 09:26:52 1998\n***************\n*** 21,26 ****\n--- 21,27 ----\n */\n static ScanKeyword ScanKeywords[] = {\n \t/* name\t\t\t\t\tvalue\t\t\t*/\n+ \t{\"call\", SQL_CALL},\n \t{\"connect\", SQL_CONNECT},\n \t{\"continue\", SQL_CONTINUE},\n \t{\"found\", SQL_FOUND},\ndiff -rcN ecpg/preproc/extern.h ecpg.mm/preproc/extern.h\n*** ecpg/preproc/extern.h\tTue Apr 21 15:22:21 1998\n--- ecpg.mm/preproc/extern.h\tThu Apr 23 14:09:38 1998\n***************\n*** 23,28 ****\n--- 23,40 ----\n \n extern struct cursor *cur;\n \n+ /* This is a linked list of the variable names and types. */\n+ struct variable\n+ {\n+ char * name;\n+ struct ECPGtype * type;\n+ int brace_level;\n+ struct variable * next;\n+ };\n+ \n+ extern struct ECPGtype ecpg_no_indicator;\n+ extern struct variable no_indicator;\n+ \n /* functions */\n \n extern void lex_init(void);\ndiff -rcN ecpg/preproc/pgc.l ecpg.mm/preproc/pgc.l\n*** ecpg/preproc/pgc.l\tTue Apr 21 15:22:23 1998\n--- ecpg.mm/preproc/pgc.l\tThu Apr 23 13:09:59 1998\n***************\n*** 70,75 ****\n--- 70,76 ----\n %x xb\n %x xc\n %x xd\n+ %x xdc\n %x xh\n %x xm\n %x xq\n***************\n*** 261,267 ****\n <xd>{xdstop}\t{\n \t\t\t\t\tBEGIN(SQL);\n \t\t\t\t\tyylval.str = strdup(literal);\n! \t\t\t\t\treturn (IDENT);\n \t\t\t\t}\n <xd>{xdinside}\t{\n \t\t\t\t\tif ((llen+yyleng) > (MAX_PARSE_BUFFER - 1))\n--- 262,268 ----\n <xd>{xdstop}\t{\n \t\t\t\t\tBEGIN(SQL);\n \t\t\t\t\tyylval.str = strdup(literal);\n! \t\t\t\t\treturn (CSTRING);\n \t\t\t\t}\n <xd>{xdinside}\t{\n \t\t\t\t\tif ((llen+yyleng) > (MAX_PARSE_BUFFER - 1))\n***************\n*** 269,275 ****\n \t\t\t\t\tmemcpy(literal+llen, yytext, yyleng+1);\n \t\t\t\t\tllen += yyleng;\n \t\t\t\t}\n! \n \n <xm>{space}*\t{ /* ignore */ }\n <xm>{xmstop}\t{\n--- 270,291 ----\n \t\t\t\t\tmemcpy(literal+llen, yytext, yyleng+1);\n \t\t\t\t\tllen += yyleng;\n \t\t\t\t}\n! <C>{xdstart}\t\t{\n! \t\t\t\t\tBEGIN(xdc);\n! \t\t\t\t\tllen = 0;\n! \t\t\t\t\t*literal = '\\0';\n! \t\t\t\t}\n! <xdc>{xdstop}\t{\n! \t\t\t\t\tBEGIN(C);\n! \t\t\t\t\tyylval.str = strdup(literal);\n! \t\t\t\t\treturn (CSTRING);\n! \t\t\t\t}\n! <xdc>{xdinside}\t{\n! \t\t\t\t\tif ((llen+yyleng) > (MAX_PARSE_BUFFER - 1))\n! \t\t\t\t\t\tyyerror(\"ERROR: quoted string parse buffer exceeded\");\n! 
\t\t\t\t\tmemcpy(literal+llen, yytext, yyleng+1);\n! \t\t\t\t\tllen += yyleng;\n! \t\t\t\t}\n \n <xm>{space}*\t{ /* ignore */ }\n <xm>{xmstop}\t{\n***************\n*** 283,289 ****\n <SQL>{self}/-[\\.0-9]\t\t{\n \t\t\t\t\treturn (yytext[0]);\n \t\t\t\t}\n! <SQL>{self}\t\t\t{ \treturn (yytext[0]); }\n <SQL>{operator}/-[\\.0-9]\t{\n \t\t\t\t\tyylval.str = strdup((char*)yytext);\n \t\t\t\t\treturn (Op);\n--- 299,305 ----\n <SQL>{self}/-[\\.0-9]\t\t{\n \t\t\t\t\treturn (yytext[0]);\n \t\t\t\t}\n! <SQL>{self}\t\t\t\t{ \treturn (yytext[0]); }\n <SQL>{operator}/-[\\.0-9]\t{\n \t\t\t\t\tyylval.str = strdup((char*)yytext);\n \t\t\t\t\treturn (Op);\n***************\n*** 423,434 ****\n \t\t\t\t\t}\n \t\t\t\t}\n <C>\";\"\t \t { return(';'); }\n <C>{space}\t\t{ ECHO; }\n! \\{\t\t\t{ return('{'); }\n! \\}\t\t\t{ return('}'); }\n! \\[\t\t\t{ return('['); }\n! \\]\t\t\t{ return(']'); }\n! \\=\t\t\t{ return('='); }\n <C>{other}\t\t\t{ return (S_ANYTHING); }\n <C>{exec}{space}{sql}{space}{include}\t{ BEGIN(incl); }\n <incl>{space}\t\t/* eat the whitespace */\n--- 439,452 ----\n \t\t\t\t\t}\n \t\t\t\t}\n <C>\";\"\t \t { return(';'); }\n+ <C>\",\"\t \t { return(','); }\n+ <C>\"*\"\t \t { return('*'); }\n <C>{space}\t\t{ ECHO; }\n! <C>\\{\t\t\t{ return('{'); }\n! <C>\\}\t\t\t{ return('}'); }\n! <C>\\[\t\t\t{ return('['); }\n! <C>\\]\t\t\t{ return(']'); }\n! <C>\\=\t\t\t{ return('='); }\n <C>{other}\t\t\t{ return (S_ANYTHING); }\n <C>{exec}{space}{sql}{space}{include}\t{ BEGIN(incl); }\n <incl>{space}\t\t/* eat the whitespace */\ndiff -rcN ecpg/preproc/preproc.y ecpg.mm/preproc/preproc.y\n*** ecpg/preproc/preproc.y\tTue Apr 21 15:23:26 1998\n--- ecpg.mm/preproc/preproc.y\tThu Apr 23 14:06:24 1998\n***************\n*** 12,20 ****\n * Variables containing simple states.\n */\n static int\tstruct_level = 0;\n! static char\t*do_str = NULL, errortext[128];\n! static int\tdo_length = 0;\n static int QueryIsRule = 0;\n \n /* temporarily store record members while creating the data structure */\n struct ECPGrecord_member *record_member_list[128] = { NULL };\n--- 12,21 ----\n * Variables containing simple states.\n */\n static int\tstruct_level = 0;\n! static char\terrortext[128];\n static int QueryIsRule = 0;\n+ static enum ECPGttype actual_type[128];\n+ static char *actual_storage[128];\n \n /* temporarily store record members while creating the data structure */\n struct ECPGrecord_member *record_member_list[128] = { NULL };\n***************\n*** 22,27 ****\n--- 23,31 ----\n /* keep a list of cursors */\n struct cursor *cur = NULL;\n \n+ struct ECPGtype ecpg_no_indicator = {ECPGt_NO_INDICATOR, 0L, {NULL}};\n+ struct variable no_indicator = {\"no_indicator\", &ecpg_no_indicator, 0, NULL};\n+ \n /*\n * Handle the filename and line numbering.\n */\n***************\n*** 82,96 ****\n */\n int braces_open;\n \n- /* This is a linked list of the variable names and types. 
*/\n- struct variable\n- {\n- char * name;\n- struct ECPGtype * type;\n- int brace_level;\n- struct variable * next;\n- };\n- \n static struct variable * allvariables = NULL;\n \n static struct variable *\n--- 86,91 ----\n***************\n*** 167,175 ****\n static struct arguments * argsinsert = NULL;\n static struct arguments * argsresult = NULL;\n \n- static struct ECPGtype ecpg_no_indicator = {ECPGt_NO_INDICATOR, 0L, {NULL}};\n- static struct variable no_indicator = {\"no_indicator\", &ecpg_no_indicator, 0, NULL};\n- \n static void\n reset_variables(void)\n {\n--- 162,167 ----\n***************\n*** 209,215 ****\n dump_variables(list->next);\n \n /* Then the current element and its indicator */\n! ECPGdump_a_type(yyout, list->variable->name, list->variable->type, list->indicator->name, list->indicator->type, NULL, NULL);\n \n /* Then release the list element. */\n free(list);\n--- 201,209 ----\n dump_variables(list->next);\n \n /* Then the current element and its indicator */\n! ECPGdump_a_type(yyout, list->variable->name, list->variable->type,\n! \t(list->indicator->type->typ != ECPGt_NO_INDICATOR) ? list->indicator->name : NULL,\n! \t(list->indicator->type->typ != ECPGt_NO_INDICATOR) ? list->indicator->type : NULL, NULL, NULL);\n \n /* Then release the list element. */\n free(list);\n***************\n*** 375,388 ****\n \tdouble dval;\n int ival;\n \tchar * str;\n- \tstruct ECPGtemp_type type;\n \tstruct when action;\n \tint\t\t\ttagname;\n \tenum ECPGttype\t\ttype_enum;\n }\n \n /* special embedded SQL token */\n! %token\t\tSQL_CONNECT SQL_CONTINUE SQL_FOUND SQL_GO SQL_GOTO\n %token\t\tSQL_IMMEDIATE SQL_INDICATOR SQL_OPEN\n %token\t\tSQL_SECTION SQL_SEMI SQL_SQLERROR SQL_SQLPRINT SQL_START\n %token\t\tSQL_STOP SQL_WHENEVER\n--- 369,382 ----\n \tdouble dval;\n int ival;\n \tchar * str;\n \tstruct when action;\n+ \tstruct index\t\tindex;\n \tint\t\t\ttagname;\n \tenum ECPGttype\t\ttype_enum;\n }\n \n /* special embedded SQL token */\n! %token\t\tSQL_CALL SQL_CONNECT SQL_CONTINUE SQL_FOUND SQL_GO SQL_GOTO\n %token\t\tSQL_IMMEDIATE SQL_INDICATOR SQL_OPEN\n %token\t\tSQL_SECTION SQL_SEMI SQL_SQLERROR SQL_SQLPRINT SQL_START\n %token\t\tSQL_STOP SQL_WHENEVER\n***************\n*** 449,455 ****\n %token USER, PASSWORD, CREATEDB, NOCREATEDB, CREATEUSER, NOCREATEUSER, VALID, UNTIL\n \n /* Special keywords, not in the query language - see the \"lex\" file */\n! %token <str> IDENT SCONST Op\n %token <ival> ICONST PARAM\n %token <dval> FCONST\n \n--- 443,449 ----\n %token USER, PASSWORD, CREATEDB, NOCREATEDB, CREATEUSER, NOCREATEUSER, VALID, UNTIL\n \n /* Special keywords, not in the query language - see the \"lex\" file */\n! %token <str> IDENT SCONST Op CSTRING\n %token <ival> ICONST PARAM\n %token <dval> FCONST\n \n***************\n*** 538,548 ****\n %type <str>\tGrantStmt privileges operation_commalist operation\n \n %type <str>\tECPGWhenever ECPGConnect db_name ECPGOpen open_opts\n! %type <str>\tindicator ECPGExecute c_expr\n %type <str>\tstmt symbol\n \n %type <action> action\n \n %%\n prog: statements;\n \n--- 532,549 ----\n %type <str>\tGrantStmt privileges operation_commalist operation\n \n %type <str>\tECPGWhenever ECPGConnect db_name ECPGOpen open_opts\n! %type <str>\tindicator ECPGExecute c_expr variable_list dotext\n! %type <str> storage_clause opt_initializer vartext c_anything blockstart\n! %type <str> blockend variable_list variable var_anything sql_anything\n! %type <str>\topt_pointer ecpg_ident\n! 
\n %type <str>\tstmt symbol\n \n+ %type <type_enum> simple_type type struct_type\n+ \n %type <action> action\n \n+ %type <index>\topt_index\n %%\n prog: statements;\n \n***************\n*** 551,559 ****\n \n statement: ecpgstart stmt SQL_SEMI\n \t| ECPGDeclaration\n! \t| c_anything\n! \t| blockstart\n! \t| blockend\n \n stmt: AddAttrStmt\t\t\t{ output_statement($1); }\n \t\t| AlterUserStmt\t\t{ output_statement($1); }\n--- 552,560 ----\n \n statement: ecpgstart stmt SQL_SEMI\n \t| ECPGDeclaration\n! \t| c_anything\t\t\t{ fputs($1, yyout); }\n! \t| blockstart\t\t\t{ fputs($1, yyout); }\n! \t| blockend\t\t\t{ fputs($1, yyout); }\n \n stmt: AddAttrStmt\t\t\t{ output_statement($1); }\n \t\t| AlterUserStmt\t\t{ output_statement($1); }\n***************\n*** 1332,1338 ****\n \t\t\t\t\t$$ = make_name();\n \t\t\t\t}\n \t\t\t| Sconst\t{ $$ = $1; }\n! \t\t\t| IDENT\t\t{ $$ = $1; }\n \t\t;\n \n DropTrigStmt: DROP TRIGGER name ON relation_name\n--- 1333,1339 ----\n \t\t\t\t\t$$ = make_name();\n \t\t\t\t}\n \t\t\t| Sconst\t{ $$ = $1; }\n! \t\t\t| ecpg_ident\t\t{ $$ = $1; }\n \t\t;\n \n DropTrigStmt: DROP TRIGGER name ON relation_name\n***************\n*** 1829,1835 ****\n \n event_object: relation_name '.' attr_name\n \t\t\t\t{\n! \t\t\t\t\t$$ = make3_str($1, \",\", $3);\n \t\t\t\t}\n \t\t| relation_name\n \t\t\t\t{\n--- 1830,1836 ----\n \n event_object: relation_name '.' attr_name\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t| relation_name\n \t\t\t\t{\n***************\n*** 2243,2249 ****\n \t\t\t\t}\n \t\t| ColId '.' ColId OptUseOp\n \t\t\t\t{\n! \t\t\t\t\t$$ = make4_str($1, \".\", $3, $4);\n \t\t\t\t}\n \t\t| Iconst OptUseOp\n \t\t\t\t{\n--- 2244,2250 ----\n \t\t\t\t}\n \t\t| ColId '.' ColId OptUseOp\n \t\t\t\t{\n! \t\t\t\t\t$$ = make2_str(cat3_str($1, \".\", $3), $4);\n \t\t\t\t}\n \t\t| Iconst OptUseOp\n \t\t\t\t{\n***************\n*** 2292,2298 ****\n \t\t\t\t}\n \t\t| ColId '.' ColId\n \t\t\t\t{\n! \t\t\t\t\t$$ = make3_str($1, \",\", $3);\n \t\t\t\t}\n \t\t| Iconst\n \t\t\t\t{\n--- 2293,2299 ----\n \t\t\t\t}\n \t\t| ColId '.' ColId\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat3_str($1, \",\", $3);\n \t\t\t\t}\n \t\t| Iconst\n \t\t\t\t{\n***************\n*** 2383,2389 ****\n \t\t\t\t}\n \t\t| ColId '.' ColId\n \t\t\t\t{\n! \t\t\t\t\t$$ = make3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t| Iconst\n \t\t\t\t{\n--- 2384,2390 ----\n \t\t\t\t}\n \t\t| ColId '.' ColId\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t| Iconst\n \t\t\t\t{\n***************\n*** 2455,2461 ****\n \t\t\t\t}\n \t\t;\n \n! generic: IDENT\t\t\t\t\t{ $$ = $1; }\n \t\t| TYPE_P\t\t\t{ $$ = \"type\"; }\n \t\t;\n \n--- 2456,2462 ----\n \t\t\t\t}\n \t\t;\n \n! generic: ecpg_ident\t\t\t\t\t{ $$ = $1; }\n \t\t| TYPE_P\t\t\t{ $$ = \"type\"; }\n \t\t;\n \n***************\n*** 3409,3428 ****\n \n attr: relation_name '.' attrs\n \t\t\t\t{\n! \t\t\t\t\t$$ = make3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t| ParamNo '.' attrs\n \t\t\t\t{\n! \t\t\t\t\t$$ = make3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t;\n \n attrs:\t attr_name\n \t\t\t\t{ $$ = $1; }\n \t\t| attrs '.' attr_name\n! \t\t\t\t{ $$ = make3_str($1, \".\", $3); }\n \t\t| attrs '.' '*'\n! \t\t\t\t{ $$ = make2_str($1, \".*\"); }\n \t\t;\n \n \n--- 3410,3429 ----\n \n attr: relation_name '.' attrs\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t| ParamNo '.' attrs\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat3_str($1, \".\", $3);\n \t\t\t\t}\n \t\t;\n \n attrs:\t attr_name\n \t\t\t\t{ $$ = $1; }\n \t\t| attrs '.' attr_name\n! 
\t\t\t\t{ $$ = cat3_str($1, \".\", $3); }\n \t\t| attrs '.' '*'\n! \t\t\t\t{ $$ = cat2_str($1, \".*\"); }\n \t\t;\n \n \n***************\n*** 3449,3455 ****\n \t\t\t\t}\n \t\t| relation_name '.' '*'\n \t\t\t\t{\n! \t\t\t\t\t$$ = make2_str($1, \".*\");\n \t\t\t\t}\n \t\t;\n \n--- 3450,3456 ----\n \t\t\t\t}\n \t\t| relation_name '.' '*'\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat2_str($1, \".*\");\n \t\t\t\t}\n \t\t;\n \n***************\n*** 3475,3481 ****\n \t\t\t\t}\n \t\t| relation_name '.' '*'\n \t\t\t\t{\n! \t\t\t\t\t$$ = make2_str($1, \".*\");\n \t\t\t\t}\n \t\t| '*'\n \t\t\t\t{\n--- 3476,3482 ----\n \t\t\t\t}\n \t\t| relation_name '.' '*'\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat2_str($1, \".*\");\n \t\t\t\t}\n \t\t| '*'\n \t\t\t\t{\n***************\n*** 3505,3513 ****\n \t\t;\n \n database_name:\t\t\tColId\t\t\t{ $$ = $1; };\n! access_method:\t\t\tIDENT\t\t\t{ $$ = $1; };\n attr_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n! class:\t\t\t\t\tIDENT\t\t\t{ $$ = $1; };\n index_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n \n /* Functions\n--- 3506,3514 ----\n \t\t;\n \n database_name:\t\t\tColId\t\t\t{ $$ = $1; };\n! access_method:\t\t\tecpg_ident\t\t\t{ $$ = $1; };\n attr_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n! class:\t\t\t\t\tecpg_ident\t\t\t{ $$ = $1; };\n index_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n \n /* Functions\n***************\n*** 3518,3524 ****\n func_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n \n file_name:\t\t\t\tSconst\t\t\t{ $$ = $1; };\n! recipe_name:\t\t\tIDENT\t\t\t{ $$ = $1; };\n \n /* Constants\n * Include TRUE/FALSE for SQL3 support. - thomas 1997-10-24\n--- 3519,3525 ----\n func_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n \n file_name:\t\t\t\tSconst\t\t\t{ $$ = $1; };\n! recipe_name:\t\t\tecpg_ident\t\t\t{ $$ = $1; };\n \n /* Constants\n * Include TRUE/FALSE for SQL3 support. - thomas 1997-10-24\n***************\n*** 3569,3575 ****\n \t\t\t\t\t\t\t$$[strlen($1)+2]='\\0';\n \t\t\t\t\t\t\t$$[strlen($1)+1]='\\'';\n \t\t\t\t\t\t}\n! UserId: IDENT { $$ = $1;};\n \n /* Column and type identifier\n * Does not include explicit datetime types\n--- 3570,3576 ----\n \t\t\t\t\t\t\t$$[strlen($1)+2]='\\0';\n \t\t\t\t\t\t\t$$[strlen($1)+1]='\\'';\n \t\t\t\t\t\t}\n! UserId: ecpg_ident { $$ = $1;};\n \n /* Column and type identifier\n * Does not include explicit datetime types\n***************\n*** 3591,3597 ****\n * list due to shift/reduce conflicts in yacc. If so, move\n * down to the ColLabel entity. - thomas 1997-11-06\n */\n! ColId: IDENT\t\t\t\t\t\t\t{ $$ = $1; }\n \t\t| datetime\t\t\t\t\t\t{ $$ = $1; }\n \t\t| ACTION\t\t\t\t\t\t{ $$ = \"action\"; }\n \t\t| CACHE\t\t\t\t\t\t\t{ $$ = \"cache\"; }\n--- 3592,3598 ----\n * list due to shift/reduce conflicts in yacc. If so, move\n * down to the ColLabel entity. - thomas 1997-11-06\n */\n! ColId: ecpg_ident\t\t\t\t\t\t\t{ $$ = $1; }\n \t\t| datetime\t\t\t\t\t\t{ $$ = $1; }\n \t\t| ACTION\t\t\t\t\t\t{ $$ = \"action\"; }\n \t\t| CACHE\t\t\t\t\t\t\t{ $$ = \"cache\"; }\n***************\n*** 3688,3854 ****\n output_line_number();\n }\n \n! variable_declarations : /* empty */\n! | variable_declarations variable_declaration;\n \n! /* Here is where we can enter support for typedef. */\n! variable_declaration: type initializer ';' { \n! /* don't worry about our list when we're working on a struct */\n! if (struct_level == 0)\n! {\n! new_variable($<type>1.name, $<type>1.typ);\n! free((void *)$<type>1.name);\n! }\n! fputs(\";\", yyout); \n! }\n \n! initializer : /*empty */\n! | '=' {fwrite(yytext, yyleng, 1, yyout);} vartext;\n \n! 
type : maybe_storage_clause type_detailed { $<type>$ = $<type>2; };\n! type_detailed : varchar_type { $<type>$ = $<type>1; }\n! \t | simple_type { $<type>$ = $<type>1; }\n! \t | string_type { $<type>$ = $<type>1; }\n! /*\t | array_type {$<type>$ = $<type>1; }\n! \t | pointer_type {$<type>$ = $<type>1; }*/\n! \t | struct_type {$<type>$ = $<type>1; };\n! \n! varchar_type : varchar_tag symbol index {\n! if ($<ival>3 > 0L)\n! \tfprintf(yyout, \"struct varchar_%s { int len; char arr[%d]; } %s\", $2, $<ival>3, $2);\n! else\n! \tfprintf(yyout, \"struct varchar_%s { int len; char arr[]; } %s\", $2, $2);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $2;\n! \t$<type>$.typ = ECPGmake_varchar_type(ECPGt_varchar, $<ival>3);\n! }\n! else\n! \tECPGmake_record_member($2, ECPGmake_varchar_type(ECPGt_varchar, $<ival>3), &(record_member_list[struct_level-1]));\n! }\n \n! varchar_tag: S_VARCHAR /*| S_VARCHAR2 */;\n \n! simple_type : simple_tag symbol {\n! fprintf(yyout, \"%s %s\", ECPGtype_name($<type_enum>1), $2);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $2;\n! \t$<type>$.typ = ECPGmake_simple_type($<type_enum>1, 1);\n! }\n! else\n! ECPGmake_record_member($2, ECPGmake_simple_type($<type_enum>1, 1), &(record_member_list[struct_level-1]));\n! }\n \n! string_type : char_tag symbol index {\n! if ($<ival>3 > 0L)\n! \t fprintf(yyout, \"%s %s [%d]\", ECPGtype_name($<type_enum>1), $2, $<ival>3);\n! else\n! \t fprintf(yyout, \"%s %s []\", ECPGtype_name($<type_enum>1), $2);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $2;\n! \t$<type>$.typ = ECPGmake_simple_type($<type_enum>1, $<ival>3);\n! }\n! else\n! \tECPGmake_record_member($2, ECPGmake_simple_type($<type_enum>1, $<ival>3), &(record_member_list[struct_level-1]));\n! }\n! \t|\tchar_tag '*' symbol {\n! fprintf(yyout, \"%s *%s\", ECPGtype_name($<type_enum>1), $3);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $3;\n! \t$<type>$.typ = ECPGmake_simple_type($<type_enum>1, 0);\n! }\n! else\n! \tECPGmake_record_member($3, ECPGmake_simple_type($<type_enum>1, 0), &(record_member_list[struct_level-1]));\n! }\n! \t|\tchar_tag symbol {\n! fprintf(yyout, \"%s %s\", ECPGtype_name($<type_enum>1), $2);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $2;\n! \t$<type>$.typ = ECPGmake_simple_type($<type_enum>1, 1);\n! }\n! else\n! ECPGmake_record_member($2, ECPGmake_simple_type($<type_enum>1, 1), &(record_member_list[struct_level-1]));\n! }\n \n! char_tag : S_CHAR { $<type_enum>$ = ECPGt_char; }\n! | S_UNSIGNED S_CHAR { $<type_enum>$ = ECPGt_unsigned_char; }\n \n! /*\n! array_type : simple_tag symbol index {\n! if ($<ival>3 > 0)\n! \t fprintf(yyout, \"%s %s [%ld]\", ECPGtype_name($<type_enum>1), $2, $<ival>3);\n! else\n! \t fprintf(yyout, \"%s %s []\", ECPGtype_name($<type_enum>1), $2);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $2;\n! \t$<type>$.typ = ECPGmake_array_type(ECPGmake_simple_type($<type_enum>1), $<ival>3);\n! }\n! else\n! \tECPGmake_record_member($2, ECPGmake_array_type(ECPGmake_simple_type($<type_enum>1), $<ival>3), &(record_member_list[struct_level-1]));\n! }\n \n! pointer_type : simple_tag '*' symbol {\n! fprintf(yyout, \"%s * %s\", ECPGtype_name($<type_enum>1), $3);\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $3;\n! \t$<type>$.typ = ECPGmake_array_type(ECPGmake_simple_type($<type_enum>1), 0);\n! }\n! else\n! \tECPGmake_record_member($3, ECPGmake_array_type(ECPGmake_simple_type($<type_enum>1), 0), &(record_member_list[struct_level-1]));\n! }\n! */\n \n! s_struct : S_STRUCT symbol {\n! struct_level++;\n! 
fprintf(yyout, \"struct %s {\", $2);\n! }\n \n! struct_type : s_struct '{' variable_declarations '}' symbol {\n! struct_level--;\n! if (struct_level == 0)\n! {\n! \t$<type>$.name = $5;\n! \t$<type>$.typ = ECPGmake_record_type(record_member_list[struct_level]);\n! }\n! else\n! \tECPGmake_record_member($5, ECPGmake_record_type(record_member_list[struct_level]), &(record_member_list[struct_level-1])); \n! fprintf(yyout, \"} %s\", $5);\n! record_member_list[struct_level] = NULL;\n! }\n \n! simple_tag : S_SHORT { $<type_enum>$ = ECPGt_short; }\n! | S_UNSIGNED S_SHORT { $<type_enum>$ = ECPGt_unsigned_short; }\n! \t | S_INT { $<type_enum>$ = ECPGt_int; }\n! | S_UNSIGNED S_INT { $<type_enum>$ = ECPGt_unsigned_int; }\n! \t | S_LONG { $<type_enum>$ = ECPGt_long; }\n! | S_UNSIGNED S_LONG { $<type_enum>$ = ECPGt_unsigned_long; }\n! | S_FLOAT { $<type_enum>$ = ECPGt_float; }\n! | S_DOUBLE { $<type_enum>$ = ECPGt_double; }\n! \t | S_BOOL { $<type_enum>$ = ECPGt_bool; };\n! \n! maybe_storage_clause : S_EXTERN { fwrite(yytext, yyleng, 1, yyout); }\n! \t\t | S_STATIC { fwrite(yytext, yyleng, 1, yyout); }\n! \t\t | S_SIGNED { fwrite(yytext, yyleng, 1, yyout); }\n! \t\t | S_CONST { fwrite(yytext, yyleng, 1, yyout); }\n! \t\t | S_REGISTER { fwrite(yytext, yyleng, 1, yyout); }\n! \t\t | S_AUTO { fwrite(yytext, yyleng, 1, yyout); }\n! | /* empty */ { };\n! \t \n! index : '[' Iconst ']' { $<ival>$ = atol($2); }\n! \t| '[' ']' { $<ival>$ = 0L; }\n \n /*\n * the exec sql connect statement: connect to the given database \n */\n! ECPGConnect : SQL_CONNECT db_name { $$ = $2; }\n \n! db_name : database_name { $$ = $1; }\n \t| ':' name { /* check if we have a char variable */\n \t\t\tstruct variable *p = find_variable($2);\n \t\t\tenum ECPGttype typ = p->type->typ;\n--- 3689,3825 ----\n output_line_number();\n }\n \n! variable_declarations: /* empty */\n! \t| declaration variable_declarations;\n \n! declaration: storage_clause type\n! \t{\n! \t\tactual_storage[struct_level] = $1;\n! \t\tactual_type[struct_level] = $2;\n! \t\tif ($2 != ECPGt_varchar && $2 != ECPGt_record)\n! \t\t\tfprintf(yyout, \"%s %s\", $1, ECPGtype_name($2));\n! \t}\n! \tvariable_list ';' { fputc(';', yyout); }\n \n! storage_clause : S_EXTERN\t{ $$ = \"extern\"; }\n! | S_STATIC\t\t{ $$ = \"static\"; }\n! | S_SIGNED\t\t{ $$ = \"signed\"; }\n! | S_CONST\t\t{ $$ = \"const\"; }\n! | S_REGISTER\t\t{ $$ = \"register\"; }\n! | S_AUTO\t\t\t{ $$ = \"auto\"; }\n! | /* empty */\t\t{ $$ = \"\" ; }\n \n! type: simple_type\n! \t| struct_type\n \n! struct_type: s_struct '{' variable_declarations '}'\n! \t{\n! \t struct_level--;\n! \t $$ = actual_type[struct_level] = ECPGt_record;\n! \t}\n \n! s_struct : S_STRUCT symbol\n! \t{\n! \t\tstruct_level++;\n! \t\tfprintf(yyout, \"struct %s {\", $2);\n! \t}\n \n! simple_type: S_SHORT\t\t{ $$ = ECPGt_short; }\n! | S_UNSIGNED S_SHORT { $$ = ECPGt_unsigned_short; }\n! \t | S_INT \t\t{ $$ = ECPGt_int; }\n! | S_UNSIGNED S_INT\t{ $$ = ECPGt_unsigned_int; }\n! \t | S_LONG\t\t{ $$ = ECPGt_long; }\n! | S_UNSIGNED S_LONG\t{ $$ = ECPGt_unsigned_long; }\n! | S_FLOAT\t\t{ $$ = ECPGt_float; }\n! | S_DOUBLE\t\t{ $$ = ECPGt_double; }\n! \t | S_BOOL\t\t{ $$ = ECPGt_bool; };\n! \t | S_CHAR\t\t{ $$ = ECPGt_char; }\n! | S_UNSIGNED S_CHAR\t{ $$ = ECPGt_unsigned_char; }\n! \t | S_VARCHAR\t\t{ $$ = ECPGt_varchar; }\n \n! variable_list: variable \n! \t| variable_list ','\n! \t{\n! \t\tif (actual_type[struct_level] != ECPGt_varchar)\n! \t\t\tfputs(\", \", yyout);\n! \t\telse\n! \t\t\tfputs(\";\\n \", yyout);\n! \t} variable\n \n! 
variable: opt_pointer symbol opt_index opt_initializer\n! \t\t{\n! \t\t\tint length = $3.ival;\n \n! \t\t\t/* pointer has to get length 0 */\n! \t\t\tif (strlen($1) > 0)\n! \t\t\t\tlength = 0;\n \n! \t\t\tswitch (actual_type[struct_level])\n! \t\t\t{\n! \t\t\t case ECPGt_record:\n! \t\t\t\tif (struct_level == 0)\n! \t\t\t\t\tnew_variable($2, ECPGmake_record_type(record_member_list[struct_level]));\n! \t\t\t\telse\n! \t\t\t\t ECPGmake_record_member($2, ECPGmake_record_type(record_member_list[struct_level]), &(record_member_list[struct_level-1]));\n \n! \t\t\t\trecord_member_list[struct_level] = NULL;\n! \t\t \t\tfprintf(yyout, \"} %s%s%s%s\", $1, $2, $3.str, $4);\n \n! \t\t\t\tbreak;\n! \t\t\t case ECPGt_varchar:\n! \t\t\t\tif (strlen($4) != 0)\n! \t\t\t\t\tyyerror(\"varchar initilization impossible\");\n! \n! \t\t\t\tif (struct_level == 0) \n! \t\t\t\t\tnew_variable($2, ECPGmake_varchar_type(actual_type[struct_level], length));\n! \t\t\t\telse\n! \t\t\t\t ECPGmake_record_member($2, ECPGmake_varchar_type(actual_type[struct_level], length), &(record_member_list[struct_level-1]));\n! \t\t\t\t\n! \t\t\t\tif (length > 0)\n! \t\t\t\t\tfprintf(yyout, \"%s struct varchar_%s { int len; char arr[%d]; } %s\", actual_storage[struct_level], $2, length, $2);\n! \t\t\t\telse\n! \t\t\t\t\tfprintf(yyout, \"%s struct varchar_%s { int len; char arr[]; } %s\", actual_storage[struct_level], $2, $2);\n! \n! \t\t\t\tbreak;\n! \n! \t\t\t default:\n! \t\t\t\tif (struct_level == 0)\n! \t\t\t\t\tnew_variable($2, ECPGmake_simple_type(actual_type[struct_level], length));\n! \t\t\t\telse\n! \t\t\t\t ECPGmake_record_member($2, ECPGmake_simple_type(actual_type[struct_level], length), &(record_member_list[struct_level-1]));\n! \n! \t\t\t\tfprintf(yyout, \"%s%s%s%s\", $1, $2, $3.str, $4);\n! \n! \t\t\t\tbreak;\n! \t\t\t}\n! \t\t}\n! \n! opt_initializer: /* empty */\t\t{ $$ = \"\"; }\n! \t| '=' vartext\t\t\t{ $$ = cat2_str(\"=\", $2); }\n! \n! opt_pointer: /* empty */\t{ $$ = \"\"; }\n! \t| '*'\t\t\t{ $$ = \"*\"; }\n! \n! opt_index: '[' Iconst ']'\t{\n! \t\t\t\t\t$$.ival = atol($2);\n! \t\t\t\t\t$$.str = cat3_str(\"[\", $2, \"]\");\n! \t\t\t\t}\n! | '[' ']'\n! \t\t\t\t{\n! \t\t\t\t\t$$.ival = 0;\n! \t\t\t\t\t$$.str = \"[]\";\n! \t\t\t\t}\n! \t| /* empty */\t\t{\n! \t\t\t\t\t$$.ival = 1;\n! \t\t\t\t\t$$.str = \"\";\n! \t\t\t\t}\n \n /*\n * the exec sql connect statement: connect to the given database \n */\n! ECPGConnect: SQL_CONNECT db_name { $$ = $2; }\n \n! db_name: database_name { $$ = $1; }\n \t| ':' name { /* check if we have a char variable */\n \t\t\tstruct variable *p = find_variable($2);\n \t\t\tenum ECPGttype typ = p->type->typ;\n***************\n*** 3895,3901 ****\n \t\t\t\t}\n \n /*\n! * whenever statement: decide what to do in case of error/no dat\n */\n ECPGWhenever: SQL_WHENEVER SQL_SQLERROR action {\n \twhen_error.code = $<action>3.code;\n--- 3866,3875 ----\n \t\t\t\t}\n \n /*\n! * whenever statement: decide what to do in case of error/no data found\n! * according to SQL standards we miss: SQLSTATE, CONSTRAINT, SQLEXCEPTION\n! * and SQLWARNING\n! \n */\n ECPGWhenever: SQL_WHENEVER SQL_SQLERROR action {\n \twhen_error.code = $<action>3.code;\n***************\n*** 3933,3949 ****\n $<action>$.command = $3;\n \t$<action>$.str = make2_str(\"goto \", $3);\n }\n! | DO name '(' {\n! \tdo_str = (char *) mm_alloc(do_length = strlen($2) + 4);\n! \tsprintf(do_str, \"%s (\", $2);\n! } dotext ')' {\n! \tdo_str[strlen(do_str)+1]='\\0';\n! \tdo_str[strlen(do_str)]=')';\n \t$<action>$.code = W_DO;\n! 
\t$<action>$.command = do_str;\n! \t$<action>$.str = make2_str(\"do \", do_str);\n! \tdo_str = NULL;\n! \tdo_length = 0;\n }\n \n /* some other stuff for ecpg */\n--- 3907,3921 ----\n $<action>$.command = $3;\n \t$<action>$.str = make2_str(\"goto \", $3);\n }\n! | DO name '(' dotext ')' {\n! \t$<action>$.code = W_DO;\n! \t$<action>$.command = cat4_str($2, \"(\", $4, \")\");\n! \t$<action>$.str = make2_str(\"do\", $<action>$.command);\n! }\n! | SQL_CALL name '(' dotext ')' {\n \t$<action>$.code = W_DO;\n! \t$<action>$.command = cat4_str($2, \"(\", $4, \")\");\n! \t$<action>$.str = make2_str(\"call\", $<action>$.command);\n }\n \n /* some other stuff for ecpg */\n***************\n*** 4231,4246 ****\n \n ecpgstart: SQL_START { reset_variables();}\n \n! dotext: /* empty */\n! \t| dotext sql_anything {\n! if (strlen(do_str) + yyleng + 1 >= do_length)\n! do_str = mm_realloc(do_str, do_length += yyleng);\n \n! strcat(do_str, yytext);\n! }\n! \n! vartext: both_anything { fwrite(yytext, yyleng, 1, yyout); }\n! | vartext both_anything { fwrite(yytext, yyleng, 1, yyout); }\n \n coutputvariable : ':' name indicator {\n \t\tadd_variable(&argsresult, find_variable($2), ($3 == NULL) ? &no_indicator : find_variable($3)); \n--- 4203,4213 ----\n \n ecpgstart: SQL_START { reset_variables();}\n \n! dotext: /* empty */\t\t{ $$ = \"\"; }\n! \t| dotext sql_anything\t{ $$ = cat2_str($1, $2); }\n \n! vartext: var_anything\t\t{ $$ = $1; }\n! | vartext var_anything { $$ = cat2_str($1, $2); }\n \n coutputvariable : ':' name indicator {\n \t\tadd_variable(&argsresult, find_variable($2), ($3 == NULL) ? &no_indicator : find_variable($3)); \n***************\n*** 4259,4289 ****\n \t| SQL_INDICATOR ':' name \t{ check_indicator((find_variable($3))->type); $$ = $3; }\n \t| SQL_INDICATOR name\t\t{ check_indicator((find_variable($2))->type); $$ = $2; }\n \n /*\n * C stuff\n */\n \n! symbol: IDENT\t{ $$ = $1; }\n! \n! c_anything: both_anything\t{ fwrite(yytext, yyleng, 1, yyout); }\n! \t| ';'\t\t\t{ fputc(';', yyout); }\n! \n! sql_anything: IDENT {} | ICONST {} | FCONST {}\n \n! both_anything: IDENT {} | ICONST {} | FCONST {}\n! \t| S_AUTO | S_BOOL | S_CHAR | S_CONST | S_DOUBLE | S_EXTERN | S_FLOAT\n! \t| S_INT\t| S_LONG | S_REGISTER | S_SHORT\t| S_SIGNED | S_STATIC\n! \t| S_STRUCT | S_UNSIGNED\t| S_VARCHAR | S_ANYTHING\n! \t| '[' | ']' | '(' | ')' | '='\n \n blockstart : '{' {\n braces_open++;\n! fputc('{', yyout);\n }\n \n blockend : '}' {\n remove_variables(braces_open--);\n! fputc('}', yyout);\n }\n \n %%\n--- 4226,4288 ----\n \t| SQL_INDICATOR ':' name \t{ check_indicator((find_variable($3))->type); $$ = $3; }\n \t| SQL_INDICATOR name\t\t{ check_indicator((find_variable($2))->type); $$ = $2; }\n \n+ ecpg_ident: IDENT\t{ $$ = $1; }\n+ \t| CSTRING\t{ $$ = cat3_str(\"\\\"\", $1, \"\\\"\"); }\n /*\n * C stuff\n */\n \n! symbol: ecpg_ident\t{ $$ = $1; }\n \n! c_anything: ecpg_ident \t{ $$ = $1; }\n! \t| Iconst\t{ $$ = $1; }\n! \t| FCONST\t{ $$ = make_name(); }\n! \t| '*'\t\t\t{ $$ = \"*\"; }\n! \t| ';'\t\t\t{ $$ = \";\"; }\n! \t| S_AUTO\t{ $$ = \"auto\"; }\n! \t| S_BOOL\t{ $$ = \"bool\"; }\n! \t| S_CHAR\t{ $$ = \"char\"; }\n! \t| S_CONST\t{ $$ = \"const\"; }\n! \t| S_DOUBLE\t{ $$ = \"double\"; }\n! \t| S_EXTERN\t{ $$ = \"extern\"; }\n! \t| S_FLOAT\t{ $$ = \"float\"; }\n! | S_INT\t\t{ $$ = \"int\"; }\n! \t| S_LONG\t{ $$ = \"long\"; }\n! \t| S_REGISTER\t{ $$ = \"register\"; }\n! \t| S_SHORT\t{ $$ = \"short\"; }\n! \t| S_SIGNED\t{ $$ = \"signed\"; }\n! \t| S_STATIC\t{ $$ = \"static\"; }\n! | S_STRUCT\t{ $$ = \"struct\"; }\n! 
\t| S_UNSIGNED\t{ $$ = \"unsigned\"; }\n! \t| S_VARCHAR\t{ $$ = \"varchar\"; }\n! \t| S_ANYTHING\t{ $$ = make_name(); }\n! | '['\t\t{ $$ = \"[\"; }\n! \t| ']'\t\t{ $$ = \"]\"; }\n! \t| '('\t\t{ $$ = \"(\"; }\n! \t| ')'\t\t{ $$ = \")\"; }\n! \t| '='\t\t{ $$ = \"=\"; }\n! \t| ','\t\t{ $$ = \",\"; }\n! \n! sql_anything: ecpg_ident\t{ $$ = $1; }\n! \t| Iconst\t{ $$ = $1; }\n! \t| FCONST\t{ $$ = make_name(); }\n! \t| ','\t\t{ $$ = \",\"; }\n! \n! var_anything: ecpg_ident \t{ $$ = $1; }\n! \t| Iconst\t{ $$ = $1; }\n! \t| FCONST\t{ $$ = make_name(); }\n! /*FIXME:\t| ','\t\t{ $$ = \",\"; }*/\n! \t| '{'\t\t{ $$ = \"{\"; }\n! \t| '}'\t\t{ $$ = \"}\"; }\n \n blockstart : '{' {\n braces_open++;\n! $$ = \"{\";\n }\n \n blockend : '}' {\n remove_variables(braces_open--);\n! $$ = \"}\";\n }\n \n %%\ndiff -rcN ecpg/preproc/type.c ecpg.mm/preproc/type.c\n*** ecpg/preproc/type.c\tTue Apr 21 15:23:26 1998\n--- ecpg.mm/preproc/type.c\tThu Apr 23 14:11:56 1998\n***************\n*** 130,135 ****\n--- 130,141 ----\n void\n ECPGdump_a_type(FILE *o, const char *name, struct ECPGtype * typ, const char *ind_name, struct ECPGtype * ind_typ, const char *prefix, const char *ind_prefix)\n {\n+ \tif (ind_typ == NULL)\n+ \t{\n+ \t\tind_typ = &ecpg_no_indicator;\n+ \t\tind_name = \"no_indicator\";\n+ \t}\n+ \t\n \tif (IS_SIMPLE_TYPE(typ->typ))\n \t{\n \t\tECPGdump_a_simple(o, name, typ->typ, typ->size, 0, 0, prefix);\n***************\n*** 267,273 ****\n \t * then we are in a record in a record and the offset is used as\n \t * offset.\n \t */\n! \tstruct ECPGrecord_member *p, *ind_p;\n \tchar\t\tobuf[BUFSIZ];\n \tchar\t\tpbuf[BUFSIZ], ind_pbuf[BUFSIZ];\n \tconst char *offset;\n--- 273,279 ----\n \t * then we are in a record in a record and the offset is used as\n \t * offset.\n \t */\n! \tstruct ECPGrecord_member *p, *ind_p = NULL;\n \tchar\t\tobuf[BUFSIZ];\n \tchar\t\tpbuf[BUFSIZ], ind_pbuf[BUFSIZ];\n \tconst char *offset;\n***************\n*** 288,296 ****\n \tsprintf(ind_pbuf, \"%s%s.\", ind_prefix ? ind_prefix : \"\", ind_name);\n \tind_prefix = ind_pbuf;\n \n! \tfor (p = typ->u.members, ind_p = ind_typ->u.members; p; p = p->next, ind_p = ind_p->next)\n \t{\n! \t\tECPGdump_a_type(o, p->name, p->typ, ind_p->name, ind_p->typ, prefix, ind_prefix);\n \t}\n }\n \n--- 294,304 ----\n \tsprintf(ind_pbuf, \"%s%s.\", ind_prefix ? ind_prefix : \"\", ind_name);\n \tind_prefix = ind_pbuf;\n \n! \tif (ind_typ != NULL) ind_p = ind_typ->u.members;\n! \tfor (p = typ->u.members; p; p = p->next)\n \t{\n! \t\tECPGdump_a_type(o, p->name, p->typ, (ind_p != NULL) ? ind_p->name : NULL, (ind_p != NULL) ? ind_p->typ : NULL, prefix, ind_prefix);\n! \t\tif (ind_p != NULL) ind_p = ind_p->next;\n \t}\n }\n \ndiff -rcN ecpg/preproc/type.h ecpg.mm/preproc/type.h\n*** ecpg/preproc/type.h\tTue Apr 21 15:23:26 1998\n--- ecpg.mm/preproc/type.h\tThu Apr 23 10:50:36 1998\n***************\n*** 74,76 ****\n--- 74,82 ----\n \tchar\t\t*command;\n \tchar\t \t*str;\n };\n+ \n+ struct index\n+ {\n+ \tint ival;\n+ \tchar *str;\n+ };\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 23 Apr 1998 14:30:03 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "ecpg patches" } ]
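In terms of accepted input, the grammar rework above boils down to this: declare sections may now put several variables -- with an optional pointer, array bound, and initializer each -- in one declaration, and CALL works as a synonym for DO in whenever actions. A hedged sketch of source that a preprocessor built from this patch should accept (all identifiers invented):

EXEC SQL BEGIN DECLARE SECTION;
    int         id, count = 0;          /* several variables per line */
    char        *msgptr, msgbuf[128];   /* opt_pointer / opt_index per variable */
    varchar     name[20];               /* (initializing a varchar is rejected) */
EXEC SQL END DECLARE SECTION;

EXEC SQL WHENEVER SQLERROR CALL err_handler();  /* new: CALL, same code path as DO */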
[ { "msg_contents": "I send my patches to this list instead of the patches list. I hope it\ndoesn't matter.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 23 Apr 1998 14:38:35 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Oops" } ]
[ { "msg_contents": "Yes, I agree. So I let it like it is. But I think this has to be added\nto the docs.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThomas G. Lockhart [SMTP:[email protected]]\n> Sent:\tThursday, April 23, 1998 2:56 PM\n> To:\tMeskes, Michael\n> Cc:\tPostgreSQL Hacker\n> Subject:\tRe: [HACKERS] Just another question\n> \n> > > > I just noticed that there is an operator '=:'. What is it used \n> > > > for?\n> > > ?? I don't see it here.\n> > But scan.l returns Op.\n> \n> Oh, it is an _allowed_ operator symbol combination, if someone were to\n> define an operator using it. But it isn't pre-defined anywhere, is it?\n> \n> \n> And, it should be OK to require spaces to help delimit your embedded\n> stuff; that is, \"=:\" is interpreted as a possible operator, while \"=\n> :\"\n> (with space) is \"equals embedded variable\"...\n> \n> I'd hate to keep removing single characters from the allowed operator\n> character set when we get syntax conflicts like this. We'll end up\n> with\n> only the SQL92-allowed operator symbols before long :)\n> \n> - Tom\n", "msg_date": "Thu, 23 Apr 1998 14:48:05 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Just another question" } ]
[ { "msg_contents": "May be, may be not. It would be handy for them all to be on the patches list\nto keep them all to gether (& I'm thinking on the web archives).\n\nI don't know about anyone else, but I have the patches list go to a seperate\nuser to the hackers one, so if they come here, I have to then forward it to\nthe patches user for archiving (I'm in the habit of archiving the mail\nhere).\n\nAnyhow, enough of my moaning, I'd better do some work ;-)\n\n--\nPeter T Mount, [email protected], [email protected]\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On\nBehalf Of Michael Meskes\nSent: Thursday, April 23, 1998 2:03 PM\nTo: PostgreSQL Hacker\nSubject: [HACKERS] Oops\n\n\nI send my patches to this list instead of the patches list. I hope it\ndoesn't matter.\n\nMichael\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n", "msg_date": "Thu, 23 Apr 1998 14:20:05 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Oops" } ]
[ { "msg_contents": "Hello,\n\nI was wondering if it would be possible, in the Postgres backend, to\nsend back the defined column size for the varchar data type (and\npossibly the char() type, i.e., bpchar) on a query? Currently, it just\nsends back -1 for the size, which makes it difficult in the frontend\n(i.e., odbc driver) to determine what the size of the column is.\n\nThank you,\n\nByron\n\n", "msg_date": "Thu, 23 Apr 1998 10:45:55 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "retrieving varchar size" }, { "msg_contents": "Byron Nikolaidis wrote:\n> \n> Hello,\n> \n> I was wondering if it would be possible, in the Postgres backend, to\n> send back the defined column size for the varchar data type (and\n> possibly the char() type, i.e., bpchar) on a query? Currently, it just\n> sends back -1 for the size, which makes it difficult in the frontend\n> (i.e., odbc driver) to determine what the size of the column is.\n\nWhile the right solution to this is of course getting the size from \nbackend, there exists a workaround now (assuming that the query is not \ntoo expensive). While ASCII cursors always hide the varchar sizes, \nbinary ones return the size in actual data (by zero-padding the \nreturned data to max size), so one can determine the actual max \nsizes by opening the query in binary cursor and then examining \nenough records to get one non-null field for each varchar field.\n\nHannu\n", "msg_date": "Thu, 23 Apr 1998 19:26:36 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> Hello,\n> \n> I was wondering if it would be possible, in the Postgres backend, to\n> send back the defined column size for the varchar data type (and\n> possibly the char() type, i.e., bpchar) on a query? Currently, it just\n> sends back -1 for the size, which makes it difficult in the frontend\n> (i.e., odbc driver) to determine what the size of the column is.\n> \n\nThis is kind of tough to do.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 12:36:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] retrieving varchar size" }, { "msg_contents": "> \n> Byron Nikolaidis wrote:\n> > \n> > Hello,\n> > \n> > I was wondering if it would be possible, in the Postgres backend, to\n> > send back the defined column size for the varchar data type (and\n> > possibly the char() type, i.e., bpchar) on a query? Currently, it just\n> > sends back -1 for the size, which makes it difficult in the frontend\n> > (i.e., odbc driver) to determine what the size of the column is.\n> \n> While the right solution to this is of course getting the size from \n> backend, there exists a workaround now (assuming that the query is not \n> too expensive). While ASCII cursors always hide the varchar sizes, \n> binary ones return the size in actual data (by zero-padding the \n> returned data to max size), so one can determine the actual max \n> sizes by opening the query in binary cursor and then examining \n> enough records to get one non-null field for each varchar field.\n\nAs of 6.3, this is only true of char() fields. 
Varchar() is now\nvariable length.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 12:45:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Byron Nikolaidis wrote:\n> > >\n> > > Hello,\n> > >\n> > > I was wondering if it would be possible, in the Postgres backend, to\n> > > send back the defined column size for the varchar data type (and\n> > > possibly the char() type, i.e., bpchar) on a query? Currently, it just\n> > > sends back -1 for the size, which makes it difficult in the frontend\n> > > (i.e., odbc driver) to determine what the size of the column is.\n> \n> This is kind of tough to do.\n\nWhat makes it tough? \n\nIs this info not available where needed, or is changing the protocol\ntough.\n\nIn the latter case, I would suggest an additional SQL command for open\ncursors,\nor a pseudo table for open cursor where you could do a simple select\nstatement:\n\nDECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;\n\nSELECT _FIELD_NAME,_FIELD_TYPE,_FIELD_SIZE FROM\nFOO_CURSOR_INFO_PSEUTOTABLE;\n\n> > While the right solution to this is of course getting the size from\n> > backend, there exists a workaround now (assuming that the query is not\n> > too expensive). While ASCII cursors always hide the varchar sizes,\n> > binary ones return the size in actual data (by zero-padding the\n> > returned data to max size), so one can determine the actual max\n> > sizes by opening the query in binary cursor and then examining\n> > enough records to get one non-null field for each varchar field.\n> \n> As of 6.3, this is only true of char() fields. Varchar() is now\n> variable length.\n\nAs knowing field size is quite essential for Borland applications some \nsolution should be found for this.\n\nHannu\n", "msg_date": "Thu, 23 Apr 1998 20:28:39 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> In the latter case, I would suggest an additional SQL command for open\n> cursors,\n> or a pseudo table for open cursor where you could do a simple select\n> statement:\n> \n> DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;\n> \n> SELECT _FIELD_NAME,_FIELD_TYPE,_FIELD_SIZE FROM\n> FOO_CURSOR_INFO_PSEUTOTABLE;\n\nThe information you want is in pg_attribute.atttypmod. It is normally\n-1, but is set for char() and varchar() fields, and includes the 4-byte\nlength. See bin/psql/psql.c for a sample of its use.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 13:35:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Bruce Momjian wrote:\n\n> >\n> > In the latter case, I would suggest an additional SQL command for open\n> > cursors,\n> > or a pseudo table for open cursor where you could do a simple select\n> > statement:\n> >\n> > DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;\n> >\n> > SELECT _FIELD_NAME,_FIELD_TYPE,_FIELD_SIZE FROM\n> > FOO_CURSOR_INFO_PSEUTOTABLE;\n>\n> The information you want is in pg_attribute.atttypmod. It is normally\n> -1, but is set for char() and varchar() fields, and includes the 4-byte\n> length. See bin/psql/psql.c for a sample of its use.\n\nI see everyone writing in terms of length. You do mean precision, don't\nyou? For our purposes, this precision should arrive in the result\nheader. (redundancy in each tuple could be over looked) The goal is to be\nable to put realistic bounds on memory allocation before the entire result is\nread in. For this to work, functions must also be able to propagate the\ntheir precision.\n\nDid I spell doom to this idea?", "msg_date": "Thu, 23 Apr 1998 15:29:55 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "David Hartwig wrote:\n> \n> Bruce Momjian wrote:\n> \n> > >\n> > > In the latter case, I would suggest an additional SQL command for open\n> > > cursors,\n> > > or a pseudo table for open cursor where you could do a simple select\n> > > statement:\n> > >\n> > > DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;\n> > >\n> > > SELECT _FIELD_NAME,_FIELD_TYPE,_FIELD_SIZE FROM\n> > > FOO_CURSOR_INFO_PSEUTOTABLE;\n> >\n> > The information you want is in pg_attribute.atttypmod. It is normally\n> > -1, but is set for char() and varchar() fields, and includes the 4-byte\n> > length. See bin/psql/psql.c for a sample of its use.\n\nis this on client side or server side?\n\nLast time I checked (it was in 6.2 protocol) it was not sent to client.\n\nWhat I need is the defined max length of varchar (or char), not just\nactual length of each field of that type. This is used by Borlands BDE,\nand if this changes, depending on the where clause, it breaks BDE.\n\n> I see everyone writing in terms of length. You do mean precision, don't\n> you? \n\nin case varchars have precision, yes ;)\n\n> For our purposes, this precision should arrive in the result\n> header. (redundancy in each tuple could be over looked) The goal is to be\n> able to put realistic bounds on memory allocation before the entire result is\n> read in. For this to work, functions must also be able to propagate the\n> their precision.\n\nYes, the functions should behave as objects, so that you can get\nmetadata on them.\n\nSo functions should know, depending on max lengths of their arguments,\nhow long strings they return.\n\nBut even without this functionality, having this info is essential to\ngetting Borland stuff to work.\n\n> Did I spell doom to this idea?\n\nI hope not.\n\nHannu\n", "msg_date": "Thu, 23 Apr 1998 23:26:25 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> > The information you want is in pg_attribute.atttypmod. 
It is normally\n> > -1, but is set for char() and varchar() fields, and includes the 4-byte\n> > length. See bin/psql/psql.c for a sample of its use.\n> \n> I see everyone writing in terms of length. You do mean precision, don't\n> you? For our purposes, this precision should arrive in the result\n> header. (redundancy in each tuple could be over looked) The goal is to be\n> able to put realistic bounds on memory allocation before the entire result is\n> read in. For this to work, functions must also be able to propagate the\n> their precision.\n> \n> Did I spell doom to this idea?\n\nHmm. The problem is that many of us use the old 'text' type, which\ndoesn't have a defined length. Not sure how to handle this in a\nportable libpq way?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 17:09:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> is this on client side or server side?\n> \n> Last time I checked (it was in 6.2 protocol) it was not sent to client.\n> \n> What I need is the defined max length of varchar (or char), not just\n> actual length of each field of that type. This is used by Borlands BDE,\n> and if this changes, depending on the where clause, it breaks BDE.\n\nCan't you do:\n\n\tselect atttypmod from pg_attribute \n\twhere attrelid = 10003 and attname = 'col1';\n\nThat will give the length + 4 bytes.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 17:16:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n> Can't you do:\n>\n> select atttypmod from pg_attribute\n> where attrelid = 10003 and attname = 'col1';\n>\n> That will give the length + 4 bytes.\n>\n\nThe problem with that theory is this. If the frontend application just\nexecutes some random query, such as \"select * from table\", you really do not\nknow anything about what is coming back. You must rely on the little bit of\ninformation the protocol gives you. In the case of Postgres, it gives you\nthe fieldname, datatype, and size for each column in the result.\nUnfortunately, for varchar and char(n), the size reports -1. This is not\nvery helpful for describing the result set.\n\nYour above example works fine (in fact we use that already) when you know the\ntable and column name, as in metadata functions such as SQLColumns() in the\nODBC driver.\n\nByron\n\n", "msg_date": "Thu, 23 Apr 1998 20:35:43 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> The problem with that theory is this. If the frontend application just\n> executes some random query, such as \"select * from table\", you really do not\n> know anything about what is coming back. You must rely on the little bit of\n> information the protocol gives you. 
In the case of Postgres, it gives you\n> the fieldname, datatype, and size for each column in the result.\n> Unfortunately, for varchar and char(n), the size reports -1. This is not\n> very helpful for describing the result set.\n> \n> Your above example works fine (in fact we use that already) when you know the\n> table and column name, as in metadata functions such as SQLColumns() in the\n> ODBC driver.\n\nYep. We could pass back atttypmod as part of the PGresult. I can add\nthat to the TODO list. Would that help?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 21:37:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n> > The problem with that theory is this. If the frontend application just\n> > executes some random query, such as \"select * from table\", you really do not\n> > know anything about what is coming back. You must rely on the little bit of\n> > information the protocol gives you. In the case of Postgres, it gives you\n> > the fieldname, datatype, and size for each column in the result.\n> > Unfortunately, for varchar and char(n), the size reports -1. This is not\n> > very helpful for describing the result set.\n> >\n> > Your above example works fine (in fact we use that already) when you know the\n> > table and column name, as in metadata functions such as SQLColumns() in the\n> > ODBC driver.\n>\n> Yep. We could pass back atttypmod as part of the PGresult. I can add\n> that to the TODO list. Would that help?\n\nYes, that would do it!\n\nThank you for listening to our ravings on this issue.\n\nByron\n\n", "msg_date": "Thu, 23 Apr 1998 22:44:07 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> \n> \n> Bruce Momjian wrote:\n> \n> > > The problem with that theory is this. If the frontend application just\n> > > executes some random query, such as \"select * from table\", you really do not\n> > > know anything about what is coming back. You must rely on the little bit of\n> > > information the protocol gives you. In the case of Postgres, it gives you\n> > > the fieldname, datatype, and size for each column in the result.\n> > > Unfortunately, for varchar and char(n), the size reports -1. This is not\n> > > very helpful for describing the result set.\n> > >\n> > > Your above example works fine (in fact we use that already) when you know the\n> > > table and column name, as in metadata functions such as SQLColumns() in the\n> > > ODBC driver.\n> >\n> > Yep. We could pass back atttypmod as part of the PGresult. I can add\n> > that to the TODO list. Would that help?\n> \n> Yes, that would do it!\n> \n> Thank you for listening to our ravings on this issue.\n\nAdded to TODO:\n\n\t* Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure\n\nThis is a good suggestion.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 24 Apr 1998 00:05:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO:\n> \t* Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure\n> This is a good suggestion.\n\nThis will require a frontend/backend protocol change, no?\n\nIf so, right now would be a great time to address it; I'm about halfway\nthrough rewriting libpq for the asynchronous-query support we discussed\nlast week, and would be happy to make the client-side mods while I still\nhave the code in my head.\n\nAs long as we are opening up the protocol, there is an incredibly grotty\nhack in libpq that I'd like to get rid of. It's hard for me to be\nsure whether it's even necessary, but: when libpq gets a 'C' response\n(which the documentation says is a \"completed response\") it assumes that\nthis is *not* the end of the transaction, and that the only way to be\nsure that everything's been read is to send an empty query and wait for\nthe empty query's 'I' response to be returned.\n\n\tcase 'C':\t\t/* portal query command, no rows returned */\n\t\t/*\n\t\t * since backend may produce more than one result\n\t\t * for some commands need to poll until clear.\n\t\t * Send an empty query down, and keep reading out of\n\t\t * the pipe until an 'I' is received.\n\t\t */\n\nDoes this ring a bell with anyone? I'm prepared to believe that it's\nuseless code, but have no easy way to be sure.\n\nNeedless to say, if there really is an ambiguity then the *right* answer\nis to fix the protocol so that the end of a query/response cycle is\nunambiguously determinable. It looks to me like this hack is costing us\nan extra round trip to the server for every ordinary query. That sucks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Apr 1998 10:50:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "Yes, it rings a bell alright,\n\nWhen you execute a multiple query (denoted by semicolans) like \"set geqo to\n'off'; show datestyle; select * from table\", you get that multiple returns and\nMUST read until you get the 'I'. If you don't, your screwed the next time you\ntry and read anything cause all that stuff is still in the pipe.\n\nQuestion though, I didnt think my request would have caused a major protocol\nchange. I though that the '-1' would simply be replaced by the correct size?\n\nByron\n\n\nTom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Added to TODO:\n> > * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure\n> > This is a good suggestion.\n>\n> This will require a frontend/backend protocol change, no?\n>\n> If so, right now would be a great time to address it; I'm about halfway\n> through rewriting libpq for the asynchronous-query support we discussed\n> last week, and would be happy to make the client-side mods while I still\n> have the code in my head.\n>\n> As long as we are opening up the protocol, there is an incredibly grotty\n> hack in libpq that I'd like to get rid of. 
It's hard for me to be\n> sure whether it's even necessary, but: when libpq gets a 'C' response\n> (which the documentation says is a \"completed response\") it assumes that\n> this is *not* the end of the transaction, and that the only way to be\n> sure that everything's been read is to send an empty query and wait for\n> the empty query's 'I' response to be returned.\n>\n> case 'C': /* portal query command, no rows returned */\n> /*\n> * since backend may produce more than one result\n> * for some commands need to poll until clear.\n> * Send an empty query down, and keep reading out of\n> * the pipe until an 'I' is received.\n> */\n>\n> Does this ring a bell with anyone? I'm prepared to believe that it's\n> useless code, but have no easy way to be sure.\n>\n> Needless to say, if there really is an ambiguity then the *right* answer\n> is to fix the protocol so that the end of a query/response cycle is\n> unambiguously determinable. It looks to me like this hack is costing us\n> an extra round trip to the server for every ordinary query. That sucks.\n>\n> regards, tom lane\n\n\n\n", "msg_date": "Fri, 24 Apr 1998 11:12:26 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Byron Nikolaidis <[email protected]> writes:\n> Yes, it rings a bell alright, When you execute a multiple query\n> (denoted by semicolans) like \"set geqo to 'off'; show datestyle;\n> select * from table\", you get that multiple returns and MUST read\n> until you get the 'I'. If you don't, your screwed the next time you\n> try and read anything cause all that stuff is still in the pipe.\n\nThat seems pretty bogus. What happens if you do\n\tselect * from table1; select * from table2\n? The way the code in libpq looks, I think the response from the\nfirst select would get lost entirely (probably even cause a memory\nleak). It's not set up to handle receipt of more than one command\nresponse in any clean fashion. We'd need to revise the application\nAPI to make that work right.\n\nPlaying around with psql, it seems that you can't actually get psql\nto submit a multi-command line as a single query; it seems to break\nit up into separate queries. Which is what libpq can cope with.\n\nI think we should either forbid multiple commands per PQexec call,\nor fix libpq to handle them properly (and hence be able to return\na series of PGresults, not just one).\n\n> Question though, I didnt think my request would have caused a major\n> protocol change. I though that the '-1' would simply be replaced by\n> the correct size?\n\nI assumed we'd want to add the restypmod as a new field in PGresult\nand in the protocol. But I'm just a newbie.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Apr 1998 12:52:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "> \n> Yes, it rings a bell alright,\n> \n> When you execute a multiple query (denoted by semicolans) like \"set geqo to\n> 'off'; show datestyle; select * from table\", you get that multiple returns and\n> MUST read until you get the 'I'. If you don't, your screwed the next time you\n> try and read anything cause all that stuff is still in the pipe.\n\nGood point. If we don't send the empty query, the queued up results get\nout of sync with the requests.\n\nOne solution is to handle it the way psql does. 
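\n\nIn outline -- just the idea, not the actual psql code:\n\n\tconst char *p;\n\tint\t\tin_quote = 0;\n\n\tfor (p = query; *p; p++)\n\t{\n\t\tif (in_quote)\n\t\t{\n\t\t\t/* skip backslash-escaped characters; drop out of\n\t\t\t * quote mode at the matching close quote */\n\t\t}\n\t\telse if (*p == ';')\n\t\t{\n\t\t\t/* end of one command: submit the buffer as its own\n\t\t\t * query, then start collecting the next one */\n\t\t}\n\t\telse\n\t\t{\n\t\t\t/* watch for an opening quote or a backslash */\n\t\t}\n\t}\n\n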
It keeps track of the\nquotes, backslashes, and semicolons in the input string, and sends just\none query each time to the backend, and prints the results.\n\nNow, with libpq, I think the proper solution would be to scan the input\nstring, and count the number of queries being send, send the whole\nstrings (with the multiple queries) and retrieve that many answers from\nthe backend, discarding all but the last result. If you do that, I can\nremove the stuff from psql.c.\n\n> \n> Question though, I didnt think my request would have caused a major protocol\n> change. I though that the '-1' would simply be replaced by the correct size?\n\nWell, the -1 is in attlen, which is the type length. text, char,\nvarchar are all varlena(variable length)/-1. atttypmod is the length\nspecified at attribute creation time. It is similar, but not the same\nas the length, and trying to put the typmod in the length field really\nmesses up the clarity of what is going on. We added atttypmod to\nclarify the code in the backend, and it should be sent to the front end.\nSoon, maybe will have atttypmod specifiying the precision of DECIMAL, or\ncurrency of MONEY.\n\nAs far as adding atttypmod to libpq, I say do it. If you look in the\nbackend's BeginCommand(), under the Remote case label, you will see it\nsending the atttypid to the front end, using the TupleDesc that was\npassed to it. Just after sending the atttyplen, I can send the\natttypmod value, which is an int16. I can do all the backend changes. \nThere are a few places where this would have to be changed in the\nbackend.\n\nOther front-end libraries reading this protocol will have to change to\nto accept this field.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 25 Apr 1998 22:30:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> Byron Nikolaidis <[email protected]> writes:\n> > Yes, it rings a bell alright, When you execute a multiple query\n> > (denoted by semicolans) like \"set geqo to 'off'; show datestyle;\n> > select * from table\", you get that multiple returns and MUST read\n> > until you get the 'I'. If you don't, your screwed the next time you\n> > try and read anything cause all that stuff is still in the pipe.\n> \n> That seems pretty bogus. What happens if you do\n> \tselect * from table1; select * from table2\n> ? The way the code in libpq looks, I think the response from the\n> first select would get lost entirely (probably even cause a memory\n> leak). It's not set up to handle receipt of more than one command\n> response in any clean fashion. We'd need to revise the application\n> API to make that work right.\n\n> \n> Playing around with psql, it seems that you can't actually get psql\n> to submit a multi-command line as a single query; it seems to break\n> it up into separate queries. Which is what libpq can cope with.\n\nYep, you figured it out. (See earlier posting.)\n\nI have now thought about the problem some more, and I think an even\nbetter solution would be that if the backend receives multiple commands\nin a single query, it just returns the first or last result. 
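\n\nSchematically -- this is just the shape of it, not actual backend code, and both function names here are made up:\n\n\tint\t\ti;\n\n\t/* in the tcop loop, once the string is parsed into a list */\n\tfor (i = 0; i < nqueries; i++)\n\t{\n\t\tif (i < nqueries - 1)\n\t\t\texec_query_discarding_output(parsetrees[i]);\n\t\telse\n\t\t\texec_query(parsetrees[i]);\t/* result goes to frontend */\n\t}\n\n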
There is\nno mechanism in libpq to send a query and get multiple results back, so\nwhy not just return one result.\n\nNo need to cound the number of queries sent, and no reason to send empty\nqueries to the backend looking for the last result.\n\nIf you want me to do this for the backend, let me know and I will do it.\n\nFirst or last result? What do we return now?\n\n> \n> I think we should either forbid multiple commands per PQexec call,\n> or fix libpq to handle them properly (and hence be able to return\n> a series of PGresults, not just one).\n> \n> > Question though, I didnt think my request would have caused a major\n> > protocol change. I though that the '-1' would simply be replaced by\n> > the correct size?\n> \n> I assumed we'd want to add the restypmod as a new field in PGresult\n> and in the protocol. But I'm just a newbie.\n\nrestypmod may not be available at the time of returning the result, but\nthe TupleDesc is, and it has the proper atttypmod.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 25 Apr 1998 23:26:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "On Sat, 25 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > Yes, it rings a bell alright,\n> > \n> > When you execute a multiple query (denoted by semicolans) like \"set geqo to\n> > 'off'; show datestyle; select * from table\", you get that multiple returns and\n> > MUST read until you get the 'I'. If you don't, your screwed the next time you\n> > try and read anything cause all that stuff is still in the pipe.\n> \n> Good point. If we don't send the empty query, the queued up results get\n> out of sync with the requests.\n> \n> One solution is to handle it the way psql does. It keeps track of the\n> quotes, backslashes, and semicolons in the input string, and sends just\n> one query each time to the backend, and prints the results.\n> \n> Now, with libpq, I think the proper solution would be to scan the input\n> string, and count the number of queries being send, send the whole\n> strings (with the multiple queries) and retrieve that many answers from\n> the backend, discarding all but the last result. If you do that, I can\n> remove the stuff from psql.c.\n\nI think for libpq, that would be a good idea, but it would mean that there\nis a difference in behaviour between the interfaces.\n\nThe JDBC spec allows for multiple ResultSet's to be returned from a query,\nand our driver handles this already.\n\nNow is this the client libpq, or the backend libpq you are thinking of\nchanging? If it's the backend one, then this will break JDBC with multiple\nresult sets.\n\n> > Question though, I didnt think my request would have caused a major protocol\n> > change. I though that the '-1' would simply be replaced by the correct size?\n> \n> Well, the -1 is in attlen, which is the type length. text, char,\n> varchar are all varlena(variable length)/-1. atttypmod is the length\n> specified at attribute creation time. It is similar, but not the same\n> as the length, and trying to put the typmod in the length field really\n> messes up the clarity of what is going on. 
We added atttypmod to\n> clarify the code in the backend, and it should be sent to the front end.\n> Soon, maybe will have atttypmod specifiying the precision of DECIMAL, or\n> currency of MONEY.\n\nThat would be useful.\n\n> As far as adding atttypmod to libpq, I say do it. If you look in the\n> backend's BeginCommand(), under the Remote case label, you will see it\n> sending the atttypid to the front end, using the TupleDesc that was\n> passed to it. Just after sending the atttyplen, I can send the\n> atttypmod value, which is an int16. I can do all the backend changes. \n> There are a few places where this would have to be changed in the\n> backend.\n> \n> Other front-end libraries reading this protocol will have to change to\n> to accept this field.\n\nAs soon as you do it, I'll convert JDBC.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 26 Apr 1998 11:25:23 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> > One solution is to handle it the way psql does. It keeps track of the\n> > quotes, backslashes, and semicolons in the input string, and sends just\n> > one query each time to the backend, and prints the results.\n> > \n> > Now, with libpq, I think the proper solution would be to scan the input\n> > string, and count the number of queries being send, send the whole\n> > strings (with the multiple queries) and retrieve that many answers from\n> > the backend, discarding all but the last result. If you do that, I can\n> > remove the stuff from psql.c.\n> \n> I think for libpq, that would be a good idea, but it would mean that there\n> is a difference in behaviour between the interfaces.\n> \n> The JDBC spec allows for multiple ResultSet's to be returned from a query,\n> and our driver handles this already.\n\nOh. That prevents us from changing the backend to ignore returning more\nthan one result for multiple queries in a PQexec. Perhaps we need a new\nreturn query protocol character like 'J' to denote query returns that\nare not the LAST return, so libpq can throw them away, and jdbc and\nprocess them as normal, but also figure out when it gets the last one.\n\n\n> \n> Now is this the client libpq, or the backend libpq you are thinking of\n> changing? If it's the backend one, then this will break JDBC with multiple\n> result sets.\n> \n> > > Question though, I didnt think my request would have caused a major protocol\n> > > change. I though that the '-1' would simply be replaced by the correct size?\n> > \n> > Well, the -1 is in attlen, which is the type length. text, char,\n> > varchar are all varlena(variable length)/-1. atttypmod is the length\n> > specified at attribute creation time. It is similar, but not the same\n> > as the length, and trying to put the typmod in the length field really\n> > messes up the clarity of what is going on. We added atttypmod to\n> > clarify the code in the backend, and it should be sent to the front end.\n> > Soon, maybe will have atttypmod specifiying the precision of DECIMAL, or\n> > currency of MONEY.\n> \n> That would be useful.\n> \n> > As far as adding atttypmod to libpq, I say do it. 
If you look in the\n> > backend's BeginCommand(), under the Remote case label, you will see it\n> > sending the atttypid to the front end, using the TupleDesc that was\n> > passed to it. Just after sending the atttyplen, I can send the\n> > atttypmod value, which is an int16. I can do all the backend changes. \n> > There are a few places where this would have to be changed in the\n> > backend.\n> > \n> > Other front-end libraries reading this protocol will have to change to\n> > to accept this field.\n> \n> As soon as you do it, I'll convert JDBC.\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n> Main Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)\n> ************ Someday I may rebuild this signature completely ;-) ************\n> Work Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 10:24:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "On Sun, 26 Apr 1998, Bruce Momjian wrote:\n\n[snip]\n\n> > I think for libpq, that would be a good idea, but it would mean that there\n> > is a difference in behaviour between the interfaces.\n> > \n> > The JDBC spec allows for multiple ResultSet's to be returned from a query,\n> > and our driver handles this already.\n> \n> Oh. That prevents us from changing the backend to ignore returning more\n> than one result for multiple queries in a PQexec. Perhaps we need a new\n> return query protocol character like 'J' to denote query returns that\n> are not the LAST return, so libpq can throw them away, and jdbc and\n> process them as normal, but also figure out when it gets the last one.\n\nThat should be easy enough to implement.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 26 Apr 1998 16:59:33 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Oh. That prevents us from changing the backend to ignore returning more\n> than one result for multiple queries in a PQexec. 
Perhaps we need a new\n> return query protocol character like 'J' to denote query returns that\n> are not the LAST return, so libpq can throw them away, and jdbc and\n> process them as normal, but also figure out when it gets the last one.\n\nThat would require the code processing an individual command in the\nbackend to know whether it was the last one or not, which seems like\na very undesirable interaction.\n\nInstead, I'd suggest we simply add a new BE->FE message that says\n\"I'm done processing your query and am returning to idle state\".\nThis would be sent at the end of *every* query, correct or failing.\nTrivial to implement: send it at the bottom of the main loop in\npostgres.c.\n\nThe more general question is whether we ought to redesign libpq's API\nto permit multiple command responses to be returned from one query.\nI think that would be a good idea, if we can do it in a way that doesn't\nbreak existing applications for the single-command-per-query case.\n(BTW, I'm defining \"query\" as \"string handed to PQexec\"; perhaps this\nis backwards from the usual terminology?)\n\nMaybe have libpq queue up the results and return the first one, then\nprovide a function to pull the rest from the queue:\n\n\tresult = PQexec(conn, query);\n\t// process result, eventually free it with PQclear\n\twhile ((result = PQnextResult(conn)) != NULL)\n\t{\n\t\t// process result, eventually free it with PQclear\n\t}\n\t// ready to send new query\n\nAn app that didn't use PQnextResult would still work as long as it\nnever sent multiple commands per query. (Question: if the app sends\na multi-command query and doesn't call PQnextResult, the next PQexec\nwill know it because the result queue is nonempty. Should PQexec\ncomplain, or just silently clear the queue?)\n\nOne thing that likely would *not* work very nicely is copy in/out\nas part of a multi-command query, since there is currently no provision\nfor PQendcopy to return result(s). This is pretty braindead IMHO,\nbut I'm not sure we can change PQendcopy's API. Any thoughts? What\nI'd really like to see is PQendcopy returning a PGresult that indicates\nsuccess or failure of the copy, and then additional results could be\nqueued up behind that for retrieval with PQnextResult.\n\n>>>> Other front-end libraries reading this protocol will have to change\n>>>> to accept this field.\n\nAnd the end-of-query indicator. I think now is the time to do it if\nwe're gonna do it. Right now, it seems most code is using libpq rather\nthan seeing the protocol directly, so fixing these problems should be\npretty painless. But wasn't there some discussion recently of running\nthe protocol directly from Tcl code? If that gets popular it will\nbecome much harder to change the protocol.\n\nAs long as we are opening up the issue, there are some other bits of\nbad design in the FE/BE protocol:\n\n1. 'B' and 'D' are used as message types for *both* result tuples and\nStartCopyIn/StartCopyOut messages. You can only distinguish them by\ncontext, ie, have you seen a 'T' lately. This is very bad. It's not\nlike we have to do this because we're out of possible message types.\n\n2. Copy In and Copy Out data ought to be part of the protocol, that\nis every line of copy in/out data ought to be prefixed with a message\ntype code. 
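\n\nFor contrast, here is roughly how a frontend pulls copy data today (process_line is made up, and long-line handling is glossed over):\n\n\tchar\tbuf[8192];\n\n\tfor (;;)\n\t{\n\t\tif (PQgetline(conn, buf, sizeof(buf)) == EOF)\n\t\t\tbreak;\n\t\tif (strcmp(buf, \"\\\\.\") == 0)\t/* bare terminator line */\n\t\t\tbreak;\n\t\tprocess_line(buf);\t/* nothing marks these as data vs. protocol */\n\t}\n\tPQendcopy(conn);\n\n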
Fixing this might be more trouble than its worth however,\nif there are any applications that don't go through PQgetline/PQputline.\n\n\nBTW, I have made good progress with rewriting libpq in an asynchronous\nstyle; the new code ran the regression tests on Friday. But I haven't\ntested any actual async behavior yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Apr 1998 12:57:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > Oh. That prevents us from changing the backend to ignore returning more\n> > than one result for multiple queries in a PQexec. Perhaps we need a new\n> > return query protocol character like 'J' to denote query returns that\n> > are not the LAST return, so libpq can throw them away, and jdbc and\n> > process them as normal, but also figure out when it gets the last one.\n> \n> That would require the code processing an individual command in the\n> backend to know whether it was the last one or not, which seems like\n> a very undesirable interaction.\n\nI think it is pretty easy to do.\n\n> Instead, I'd suggest we simply add a new BE->FE message that says\n> \"I'm done processing your query and am returning to idle state\".\n> This would be sent at the end of *every* query, correct or failing.\n> Trivial to implement: send it at the bottom of the main loop in\n> postgres.c.\n> \n\nIf you are happy with this, it is certainly better than my idea.\n\n> The more general question is whether we ought to redesign libpq's API\n> to permit multiple command responses to be returned from one query.\n> I think that would be a good idea, if we can do it in a way that doesn't\n> break existing applications for the single-command-per-query case.\n> (BTW, I'm defining \"query\" as \"string handed to PQexec\"; perhaps this\n> is backwards from the usual terminology?)\n> \n\nMy idea is to make a PQexecv() just like PQexec, except it returns an\narray of results, with the end of the array terminated with a NULL, sort\nof like readv(), except you return an array, rather than supplying one,\ni.e.:\n\n\tPGresult *resarray;\n\tresarray = PQexecv('select * from test; select * from test2');\n\nand it handles by:\n\t\n\tPGresult *res;\n\tfor (res = resarray; res; res++)\n\t\tprocess_result_and_clear(res);\n\tfree(resarray);\n\nYou also have to free the array that holds the result pointers, as well\nas the result pointers themselves.\n\n> Maybe have libpq queue up the results and return the first one, then\n> provide a function to pull the rest from the queue:\n> \n> \tresult = PQexec(conn, query);\n> \t// process result, eventually free it with PQclear\n> \twhile ((result = PQnextResult(conn)) != NULL)\n> \t{\n> \t\t// process result, eventually free it with PQclear\n> \t}\n> \t// ready to send new query\n> \n> An app that didn't use PQnextResult would still work as long as it\n> never sent multiple commands per query. (Question: if the app sends\n> a multi-command query and doesn't call PQnextResult, the next PQexec\n> will know it because the result queue is nonempty. 
Should PQexec\n> complain, or just silently clear the queue?)\n\nWith my idea, we can properly handle or discard multiple results\ndepending on whether they use PQexec() or PQexecv().\n\n> One thing that likely would *not* work very nicely is copy in/out\n> as part of a multi-command query, since there is currently no provision\n> for PQendcopy to return result(s). This is pretty braindead IMHO,\n> but I'm not sure we can change PQendcopy's API. Any thoughts? What\n> I'd really like to see is PQendcopy returning a PGresult that indicates\n> success or failure of the copy, and then additional results could be\n> queued up behind that for retrieval with PQnextResult.\n\nNot sure on this one. If we change the API, we have to have a good\nreason to do it. API additions are OK.\n\n> \n> >>>> Other front-end libraries reading this protocol will have to change\n> >>>> to accept this field.\n> \n> And the end-of-query indicator. I think now is the time to do it if\n> we're gonna do it. Right now, it seems most code is using libpq rather\n> than seeing the protocol directly, so fixing these problems should be\n> pretty painless. But wasn't there some discussion recently of running\n> the protocol directly from Tcl code? If that gets popular it will\n> become much harder to change the protocol.\n\nYep, let's change it now.\n\n> \n> As long as we are opening up the issue, there are some other bits of\n> bad design in the FE/BE protocol:\n> \n> 1. 'B' and 'D' are used as message types for *both* result tuples and\n> StartCopyIn/StartCopyOut messages. You can only distinguish them by\n> context, ie, have you seen a 'T' lately. This is very bad. It's not\n> like we have to do this because we're out of possible message types.\n\nYep, let's use distinct ones.\n\n> \n> 2. Copy In and Copy Out data ought to be part of the protocol, that\n> is every line of copy in/out data ought to be prefixed with a message\n> type code. Fixing this might be more trouble than its worth however,\n> if there are any applications that don't go through PQgetline/PQputline.\n\nAgain, if we clearly document the change, we are far enough from 6.4\nthat perl and other people will handle the change by the time 6.4 is\nreleased. Changes the affect user apps is more difficult.\n\n> BTW, I have made good progress with rewriting libpq in an asynchronous\n> style; the new code ran the regression tests on Friday. But I haven't\n> tested any actual async behavior yet.\n\nGood. You may need a patch from me for the backend before you can test\nsome of your changes. Let me know what you decide, and I will send you\na patch for testing.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 17:38:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> > One thing that likely would *not* work very nicely is copy in/out\n> > as part of a multi-command query, since there is currently no provision\n> > for PQendcopy to return result(s). This is pretty braindead IMHO,\n> > but I'm not sure we can change PQendcopy's API. Any thoughts? What\n\n\nAdding a return result to PQendcopy would not be a big deal. Just\ndocument it so the few people who do this in application know to free\nthe result. 
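\n\nThe call sequence would then become something like this (a sketch of the proposed API, not what libpq does today):\n\n\tPGresult *res;\n\n\t/* ... PQputline() all the data, then: */\n\tres = PQendcopy(conn);\n\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n\t\tfprintf(stderr, \"copy failed\");\n\tPQclear(res);\n\n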
The interface libraries can handle the change.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 18:55:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> My idea is to make a PQexecv() just like PQexec, except it returns an\n> array of results, with the end of the array terminated with a NULL,\n> [ as opposed to my idea of returning PGresults one at a time ]\n\nHmm. I think the one-at-a-time approach is probably better, mainly\nbecause it doesn't require libpq to have generated all the PGresult\nobjects before it can return the first one.\n\nHere is an example in which the array approach doesn't work very well:\n\n\tQUERY: copy stdin to relation ; select * from relation\n\nWhat we want is for the application to receive a PGRES_COPY_IN result,\nperform the data transfer, call PQendcopy, and then receive a PGresult\nfor the select.\n\nI don't see any way to make this work if the library has to give back\nan array of results right off the bat. With the other method, PQendcopy\nwill read the select command's output and stuff it into the (hidden)\nresult queue. Then when the application calls PQnextResult, presto,\nthere it is. Correct logic for an application that submits multi-\ncommand query strings would be something like\n\n\tresult = PQexec(conn, query);\n\n\twhile (result) {\n\t\tswitch (PQresultStatus(result)) {\n\t\t...\n\t\tcase PGRES_COPY_IN:\n\t\t\t// ... copy data here ...\n\t\t\tif (PQendcopy(conn))\n\t\t\t\treportError();\n\t\t\tbreak;\n\t\t...\n\t\t}\n\n\t\tPQclear(result);\n\t\tresult = PQnextResult(conn);\n\t}\n\n\nAnother thought: we might consider making PQexec return as soon as it's\nreceived the first query result, thereby allowing the frontend to\noverlap its processing of this result with the backend's processing of\nthe rest of the query string. Then, PQnextResult would actually read a\nnew result (or the \"I'm done\" message), rather than just return a result\nthat had already been stored. I wasn't originally thinking of\nimplementing it that way, but it seems like a mighty attractive idea.\nNo way to do it if we return results as an array.\n\n>> I'd really like to see is PQendcopy returning a PGresult that indicates\n>> success or failure of the copy, and then additional results could be\n>> queued up behind that for retrieval with PQnextResult.\n\n> Not sure on this one. If we change the API, we have to have a good\n> reason to do it. API additions are OK.\n\nWell, we can settle for having PQendcopy return 0 or 1 as it does now.\nIt's not quite as clean as having it return a real PGresult, but it's\nprobably not worth breaking existing apps just to improve the\nconsistency of the API. It'd still be possible to queue up subsequent\ncommands' results (if any) in the result queue.\n\n>> 2. Copy In and Copy Out data ought to be part of the protocol, that\n>> is every line of copy in/out data ought to be prefixed with a message\n>> type code. 
Fixing this might be more trouble than its worth however,\n>> if there are any applications that don't go through PQgetline/PQputline.\n\n> Again, if we clearly document the change, we are far enough from 6.4\n> that perl and other people will handle the change by the time 6.4 is\n> released. Changes the affect user apps is more difficult.\n\nI have mixed feelings about this particular item. It would make the\nprotocol more robust, but it's not clear that the gain is worth the\nrisk of breaking any existing apps. I'm willing to drop it if no one\nelse is excited about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Apr 1998 19:13:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > My idea is to make a PQexecv() just like PQexec, except it returns an\n> > array of results, with the end of the array terminated with a NULL,\n> > [ as opposed to my idea of returning PGresults one at a time ]\n> \n> Hmm. I think the one-at-a-time approach is probably better, mainly\n> because it doesn't require libpq to have generated all the PGresult\n> objects before it can return the first one.\n> \n> Here is an example in which the array approach doesn't work very well:\n> \n> \tQUERY: copy stdin to relation ; select * from relation\n> \n> What we want is for the application to receive a PGRES_COPY_IN result,\n> perform the data transfer, call PQendcopy, and then receive a PGresult\n> for the select.\n> \n> I don't see any way to make this work if the library has to give back\n> an array of results right off the bat. With the other method, PQendcopy\n> will read the select command's output and stuff it into the (hidden)\n> result queue. Then when the application calls PQnextResult, presto,\n> there it is. Correct logic for an application that submits multi-\n> command query strings would be something like\n\nOK, you just need to remember to throw away any un-called-for results if\nthey do another PQexec without retrieving all the results returned by\nthe backend.\n\n> Another thought: we might consider making PQexec return as soon as it's\n> received the first query result, thereby allowing the frontend to\n> overlap its processing of this result with the backend's processing of\n> the rest of the query string. Then, PQnextResult would actually read a\n> new result (or the \"I'm done\" message), rather than just return a result\n> that had already been stored. I wasn't originally thinking of\n> implementing it that way, but it seems like a mighty attractive idea.\n> No way to do it if we return results as an array.\n\nYep.\n\n\n> Well, we can settle for having PQendcopy return 0 or 1 as it does now.\n> It's not quite as clean as having it return a real PGresult, but it's\n> probably not worth breaking existing apps just to improve the\n> consistency of the API. It'd still be possible to queue up subsequent\n> commands' results (if any) in the result queue.\n\nOK.\n\n> > Again, if we clearly document the change, we are far enough from 6.4\n> > that perl and other people will handle the change by the time 6.4 is\n> > released. Changes the affect user apps is more difficult.\n> \n> I have mixed feelings about this particular item. It would make the\n> protocol more robust, but it's not clear that the gain is worth the\n> risk of breaking any existing apps. 
I'm willing to drop it if no one\n> else is excited about it.\n\nIt's up to you.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 19:41:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "The historical reason why the POSTGRES backend is required to send multiple\nresult sets is to support cursors on queries involving type inheritance and\nanonymous target lists.\n\n\tbegin\n\tdeclare c cursor for\n\t\tselect e.oid, e.* from EMP* e\n\tfetch 10 in c\n\t...\n\nTo handle the command sequence above, frontend applications would need to\nbe provided with a new result descriptor when the \"fetch 10 in c\" crosses a\nresult set boundary.\n\n\n", "msg_date": "Sun, 26 Apr 1998 17:55:54 -0700", "msg_from": "Michael Hirohama <[email protected]>", "msg_from_op": false, "msg_subject": "Re: retrieving varchar size" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > My idea is to make a PQexecv() just like PQexec, except it returns an\n> > array of results, with the end of the array terminated with a NULL,\n> > [ as opposed to my idea of returning PGresults one at a time ]\n> \n> Hmm. I think the one-at-a-time approach is probably better, mainly\n> because it doesn't require libpq to have generated all the PGresult\n> objects before it can return the first one.\n> \n> Here is an example in which the array approach doesn't work very well:\n> \n> \tQUERY: copy stdin to relation ; select * from relation\n> \n> What we want is for the application to receive a PGRES_COPY_IN result,\n> perform the data transfer, call PQendcopy, and then receive a PGresult\n> for the select.\n> \n> I don't see any way to make this work if the library has to give back\n> an array of results right off the bat. With the other method, PQendcopy\n> will read the select command's output and stuff it into the (hidden)\n> result queue. Then when the application calls PQnextResult, presto,\n> there it is. Correct logic for an application that submits multi-\n> command query strings would be something like\n> \n> \tresult = PQexec(conn, query);\n> \n> \twhile (result) {\n> \t\tswitch (PQresultStatus(result)) {\n> \t\t...\n> \t\tcase PGRES_COPY_IN:\n> \t\t\t// ... copy data here ...\n> \t\t\tif (PQendcopy(conn))\n> \t\t\t\treportError();\n> \t\t\tbreak;\n> \t\t...\n> \t\t}\n> \n> \t\tPQclear(result);\n> \t\tresult = PQnextResult(conn);\n> \t}\n> \n> \n> Another thought: we might consider making PQexec return as soon as it's\n> received the first query result, thereby allowing the frontend to\n> overlap its processing of this result with the backend's processing of\n> the rest of the query string. Then, PQnextResult would actually read a\n> new result (or the \"I'm done\" message), rather than just return a result\n> that had already been stored. 
I wasn't originally thinking of\n> implementing it that way, but it seems like a mighty attractive idea.\n> No way to do it if we return results as an array.\n\n\nOr we might even make PQexec return as soon as the query is sent and parsed.\nIt could ruturn a handle to the query that could be used to get results later.\nThis is pretty much exactly in line with the way the Perl DBI stuff works and\nI think also odbc.\n\n queryhandle = PQexec(conn, querystring);\n\n while (result = PQgetresult(queryhandle)) {\n do stuff with result;\n PQclear(result);\n }\n\nThis protocol allows for multiple results per query, and asynchronous operation\nbefore getting the result.\n\nPerhaps a polling form might be added too:\n\n queryhandle = PQexec(conn, querystring);\n\n while (1) {\n handle_user_interface_events();\n\n if (PQready(queryhandle)) {\n result = PQgetresult(queryhandle);\n if (result == NULL)\n break;\n do stuff with result;\n PQclear(result);\n }\n }\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n\n", "msg_date": "Sun, 26 Apr 1998 18:46:42 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "\n\nTom Lane wrote:\n\n> >>>> Other front-end libraries reading this protocol will have to change\n> >>>> to accept this field.\n>\n> And the end-of-query indicator. I think now is the time to do it if\n> we're gonna do it. Right now, it seems most code is using libpq rather\n> than seeing the protocol directly, so fixing these problems should be\n> pretty painless. But wasn't there some discussion recently of running\n> the protocol directly from Tcl code? If that gets popular it will\n> become much harder to change the protocol.\n>\n\nHello,\n\nPlease remember that the ODBC driver handles the protocol directly, (it does\nnot use libpq). I would assume that if you guys make protocol changes you\nwill post a summary of them on the interfaces list?\n\nByron\n\n\n\n", "msg_date": "Mon, 27 Apr 1998 10:28:42 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Michael Hirohama <[email protected]> writes:\n> The historical reason why the POSTGRES backend is required to send multiple\n> result sets is to support cursors on queries involving type inheritance and\n> anonymous target lists.\n> \tbegin\n> \tdeclare c cursor for\n> \t\tselect e.oid, e.* from EMP* e\n> \tfetch 10 in c\n> \t...\n> To handle the command sequence above, frontend applications would need to\n> be provided with a new result descriptor when the \"fetch 10 in c\" crosses a\n> result set boundary.\n\nHmm. I noted the place in libpq where it fails if multiple 'T' (tuple\ndescriptor) messages arrive during a query retrieval. But the comments\nmade it sound like the condition shouldn't occur.\n\nDoes what you describe actually work in the current backend?\n\nThe problem on the libpq side is basically that the PGresult structure\nis not able to represent more than one tuple descriptor. AFAICS, we can't\ntamper with that without breaking all existing applications. 
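\n\nFrom memory, the relevant part of the struct -- one descriptor array per result, which is exactly the problem:\n\n\tstruct pg_result\n\t{\n\t\tint\t\t\tntups;\n\t\tint\t\t\tnumAttributes;\n\t\tPGresAttDesc *attDescs;\t\t/* a single set of field descriptors */\n\t\tPGresAttValue **tuples;\t\t/* every tuple assumed to match it */\n\t\t/* ... status fields etc. omitted */\n\t};\n\n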
However,\nif we make the changes being discussed in this thread then it would be\na simple matter to return a *series* of PGresult structures for this\nsort of query.\n\nWhether an application is capable of handling that is another story,\nbut at least the data could be passed through.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Apr 1998 10:39:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: retrieving varchar size " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, you just need to remember to throw away any un-called-for results if\n> they do another PQexec without retrieving all the results returned by\n> the backend.\n\nOK, so you feel the right behavior is \"throw away unconsumed results\"\nand not \"raise an error\"?\n\nI don't have a strong feeling either way; I'm just asking what the\nconsensus is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Apr 1998 10:41:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "> \n> \n> \n> Tom Lane wrote:\n> \n> > >>>> Other front-end libraries reading this protocol will have to change\n> > >>>> to accept this field.\n> >\n> > And the end-of-query indicator. I think now is the time to do it if\n> > we're gonna do it. Right now, it seems most code is using libpq rather\n> > than seeing the protocol directly, so fixing these problems should be\n> > pretty painless. But wasn't there some discussion recently of running\n> > the protocol directly from Tcl code? If that gets popular it will\n> > become much harder to change the protocol.\n> >\n> \n> Hello,\n> \n> Please remember that the ODBC driver handles the protocol directly, (it does\n> not use libpq). I would assume that if you guys make protocol changes you\n> will post a summary of them on the interfaces list?\n\nAbsolutely. Guaranteed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 10:51:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "[email protected] (David Gould) writes:\n> Or we might even make PQexec return as soon as the query is sent and parsed.\n> It could ruturn a handle to the query that could be used to get results later.\n> Perhaps a polling form might be added too:\n\nWe're way ahead of you ;-). See last week's discussion on \"Proposal\nfor async support in libpq\" (it was only on the hackers list, not\ninterfaces). I have already implemented the original proposal, though\nnot tested it fully.\n\nThe proposal will have to be modified some to deal with this notion\nof returning multiple results from a single query. 
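\n\nRoughly along these lines, using the names from the async proposal (all of which is still subject to change):\n\n\tPGresult *res;\n\n\tif (!PQsendQuery(conn, query))\n\t\treportError();\n\twhile ((res = PQgetResult(conn)) != NULL)\n\t{\n\t\t/* process each command's result as it arrives */\n\t\tPQclear(res);\n\t}\n\t/* NULL means the backend has gone idle again */\n\n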
I haven't worked\nout exactly what I'd like to see, but it won't be too far different\nfrom what David is envisioning.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Apr 1998 10:52:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size " }, { "msg_contents": "> \n> Does what you describe actually work in the current backend?\n> \n> The problem on the libpq side is basically that the PGresult structure\n> is not able to represent more than one tuple descriptor. AFAICS, we can't\n> tamper with that without breaking all existing applications. However,\n> if we make the changes being discussed in this thread then it would be\n> a simple matter to return a *series* of PGresult structures for this\n> sort of query.\n> \n> Whether an application is capable of handling that is another story,\n> but at least the data could be passed through.\n> \nI would move forward, and see if anything breaks. If the regression\ntests pass, that is a good sign it works.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 10:52:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: retrieving varchar size" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > OK, you just need to remember to throw away any un-called-for results if\n> > they do another PQexec without retrieving all the results returned by\n> > the backend.\n> \n> OK, so you feel the right behavior is \"throw away unconsumed results\"\n> and not \"raise an error\"?\n> \n> I don't have a strong feeling either way; I'm just asking what the\n> consensus is.\n\nThrow them away. That is what we have always done, and if they wanted\nthem, they wouldn't have put them all in one pgexec().\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 10:54:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> > > We could pass back atttypmod as part of the PGresult. I can add\n> > > that to the TODO list. Would that help?\n> > Yes, that would do it!\n> > Thank you for listening to our ravings on this issue.\n> Added to TODO:\n> * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure\n> This is a good suggestion.\n\nHow do we determine atttypmod for queries like\n\n select '123' || '456';\n\n?? I might be able to address this with my upcoming type conversion work\nbut I don't know if we have enough hooks for this right now...\n\n - Tom\n", "msg_date": "Mon, 27 Apr 1998 15:20:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> > > > We could pass back atttypmod as part of the PGresult. I can add\n> > > > that to the TODO list. 
Would that help?\n> > > Yes, that would do it!\n> > > Thank you for listening to our ravings on this issue.\n> > Added to TODO:\n> > * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure\n> > This is a good suggestion.\n> \n> How do we determine atttypmod for queries like\n> \n> select '123' || '456';\n> \n> ?? I might be able to address this with my upcoming type conversion work\n> but I don't know if we have enough hooks for this right now...\n\nNo way, I think. This would have a atttypmod of -1, which is true\nbecause there is no atttypmod size for this. Once a char()/varchar()\ngoes into a function, anything can come out.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 22:17:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "On Mon, 27 Apr 1998, Byron Nikolaidis wrote:\n> Tom Lane wrote:\n> \n> > >>>> Other front-end libraries reading this protocol will have to change\n> > >>>> to accept this field.\n> >\n> > And the end-of-query indicator. I think now is the time to do it if\n> > we're gonna do it. Right now, it seems most code is using libpq rather\n> > than seeing the protocol directly, so fixing these problems should be\n> > pretty painless. But wasn't there some discussion recently of running\n> > the protocol directly from Tcl code? If that gets popular it will\n> > become much harder to change the protocol.\n> >\n> \n> Hello,\n> \n> Please remember that the ODBC driver handles the protocol directly, (it does\n> not use libpq). I would assume that if you guys make protocol changes you\n> will post a summary of them on the interfaces list?\n\nThe JDBC driver is the same, as it too handles the protocol directly. It's\nthe reason why I keep an eye on any discussion that may effect it.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 28 Apr 1998 06:36:51 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "Michael Hirohama <[email protected]> wrote:\n> The historical reason why the POSTGRES backend is required to send multiple\n> result sets is to support cursors on queries involving type inheritance and\n> anonymous target lists.\n> \tbegin\n> \tdeclare c cursor for\n> \t\tselect e.oid, e.* from EMP* e\n> \tfetch 10 in c\n> \t...\n> To handle the command sequence above, frontend applications would need to\n> be provided with a new result descriptor when the \"fetch 10 in c\" crosses a\n> result set boundary.\n\nI tried this and was unable to produce a failure. It looks like the\nselect only returns the set of fields applicable to the base class,\nregardless of what additional fields may be possessed by some\nsubclasses. 
Which, in fact, is more or less what I'd expect.\n\nIs Michael remembering some old behavior that is no longer implemented?\nAnd if so, is the old or new behavior the correct one?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Apr 1998 16:36:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: retrieving varchar size " }, { "msg_contents": "Tom Lane <[email protected]> wrote on Thu, 30 Apr 1998:\n>Michael Hirohama <[email protected]> wrote:\n>> The historical reason why the POSTGRES backend is required to send multiple\n>> result sets is to support cursors on queries involving type inheritance and\n>> anonymous target lists.\n>> \tbegin\n>> \tdeclare c cursor for\n>> \t\tselect e.oid, e.* from EMP* e\n>> \tfetch 10 in c\n>> \t...\n>> To handle the command sequence above, frontend applications would need to\n>> be provided with a new result descriptor when the \"fetch 10 in c\" crosses a\n>> result set boundary.\n>\n>I tried this and was unable to produce a failure. It looks like the\n>select only returns the set of fields applicable to the base class,\n>regardless of what additional fields may be possessed by some\n>subclasses. Which, in fact, is more or less what I'd expect.\n>\n>Is Michael remembering some old behavior that is no longer implemented?\n>And if so, is the old or new behavior the correct one?\n>\n>\t\t\tregards, tom lane\n\nForgive me for my slow memory: I remember now that there was a decision\nmade to not support exploding the expansion of anonymous target lists\nbecause of the extra complexity it would introduce into the parser and\nexecutor. Thus, Postgres would return at most result set per query\nprocessed. Smart users and smart applications would be able to navigate\nthe inheritance hierarchy by explicitly specifying tables and columns as\nneeded.\n\n ~~ <[email protected]>\n\n\n", "msg_date": "Thu, 14 May 1998 16:51:23 -0700", "msg_from": "Michael Hirohama <[email protected]>", "msg_from_op": false, "msg_subject": "Re: retrieving varchar size" } ]
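A minimal sketch of the consumer loop under the multiple-PGresult idea discussed in the thread above. The handle-returning PQexec, PQgetresult, and the QueryHandle type are the thread's proposed names, not an existing libpq API; PQnfields and PQclear are the existing calls. This is a fragment that assumes <stdio.h>, libpq-fe.h, and an open PGconn *conn:

    /* Hypothetical multi-result consumer; each PGresult would carry
     * its own tuple descriptor, so a cursor over an inheritance
     * hierarchy could report a different field list per result set. */
    QueryHandle *qh = PQexec(conn, "select e.oid, e.* from EMP* e");
    PGresult *result;

    while ((result = PQgetresult(qh)) != NULL)
    {
        printf("result set with %d fields\n", PQnfields(result));
        PQclear(result);            /* caller frees each result */
    }
    /* a NULL return is the end-of-query marker for this handle */

Under the "throw away unconsumed results" consensus, a new PQexec on the same connection would first drain whatever results were left on the old handle.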
[ { "msg_contents": "> \n> Hi!\n> I was trying to install postgresql-6.3.2 on BSDI-3.1 operating system.\n> First, it doesn't define BSDI-3.0 template (postgresql-6.3 does).\n> Second, there is an error on compilation:\n> \n> gmake[2]: Entering directory\n> `/usr/home/ser/download/postgres/postgresql-6.3.2/src/backend/parser'\n> /usr/bin/yacc -d gram.y\n> /usr/bin/yacc: f - maximum table size exceeded\n> gmake[2]: *** [parse.h] Error 2\n> gmake[2]: Leaving directory\n> `/usr/home/ser/download/postgres/postgresql-6.3.2/src/backend/parser'\n> gmake[1]: *** [parser.dir] Error 2\n> gmake[1]: Leaving directory\n> `/usr/home/ser/download/postgres/postgresql-6.3.2/src/backend'\n> gmake: *** [all] Error 2\n> \n> Any suggestions ? Thanx.\n> ---------------------------\n\nI am running the same OS. Just touch backend/parser/gram.c, and\nrecompile. Bison is required to compile gram.y, but we supply it as\npart of the install.\n\nHackers, is the gram.c file not new enough in the 6.3.2 tarball?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 10:51:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] BSDI-3.1" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Hackers, is the gram.c file not new enough in the 6.3.2 tarball?\n\nApparently not: I note that my recompile rebuilt it too. (Fortunately\nI have bison installed.) tar says\n\n$ tar tvfz ~postgres/archive/postgresql-6.3.2.tar.gz | grep /gram\n-rw-r--r-- pgsql/wheel 398333 1998-04-17 03:00 postgresql-6.3.2/src/backend/parser/gram.c\n-rw-r--r-- pgsql/wheel 126012 1998-04-17 03:00 postgresql-6.3.2/src/backend/parser/gram.y\n\nwhich isn't accurate enough to be helpful, but it looks suspicious.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Apr 1998 11:22:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] BSDI-3.1 " }, { "msg_contents": "> > Hackers, is the gram.c file not new enough in the 6.3.2 tarball?\n> \n> Apparently not: I note that my recompile rebuilt it too. (Fortunately\n> I have bison installed.) tar says\n> \n> $ tar tvfz ~postgres/archive/postgresql-6.3.2.tar.gz | grep /gram\n> -rw-r--r-- pgsql/wheel 398333 1998-04-17 03:00 postgresql-6.3.2/src/backend/parser/gram.c\n> -rw-r--r-- pgsql/wheel 126012 1998-04-17 03:00 postgresql-6.3.2/src/backend/parser/gram.y\n\nBruce, what I usually try to do (but sometimes forget) is to commit the\ntwo files separately, doing a \"touch\" on gram.c after committing gram.y\nand before committing gram.c. That way, there is a significant time\ndifference between the two files.\n\n - Tom\n", "msg_date": "Thu, 23 Apr 1998 15:41:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] BSDI-3.1" } ]
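For anyone else hitting the vendor-yacc table overflow above, the fix Bruce describes comes down to making the shipped parser output newer than gram.y so gmake never re-runs yacc; a sketch (touching parse.h as well is an assumption, since that is the target the failing rule was building):

    cd postgresql-6.3.2/src
    touch backend/parser/gram.c backend/parser/parse.h
    gmake all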
[ { "msg_contents": "\nIn order to have a shared libpq made automatically for AIX ports, it's\nnecessary to rework the shared lib rules in interfaces/libpq/Makefile.\n\nIt seems to me that the PORTNAME dependent rules should be in the\nrespective makefiles/Makefile.$(PORTNAME), no?\n\nThe libpq$(DLSUFFIX) make should be handled by %$(DLSUFFIX) rules for\nthat port. If it needs extra handling, then there should be special\nrules for libpq$(DLSUFFIX).\n\nThe various LDFLAGS_SL would be appended to the SHARED_LIB line in\nthe template and $(CFLAGS_SL) would then be used in the make rule for\nthe shared lib.\n\nThe most basic thing to do for this is to move the $(shlib) rule to \neach Makefile.$(PORTNAME) and replace it in the libpq Makefile with a\nsimple rule to make libpq.o and then let each port make the shlib in\nits own way.\n\nIt's not as complicated or as messy as it reads. I need someone for\nthe linux, bsd, i386-solaris, univel and hpux ports to work with on\nmoving the shlib rule. Better than just moving it over myself and\nbreaking it in the process. :)\n\nSo to summarize a little, in libpq/Makefile, make a libpq.o and then\nin Makefile.$(PORTNAME), make the shared libpq. Seem reasonable?\n\nThanks,\ndarrenk\n", "msg_date": "Thu, 23 Apr 1998 11:24:30 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Removing PORTNAME from libpq/Makefile" }, { "msg_contents": "[email protected] (Darren King) writes:\n> So to summarize a little, in libpq/Makefile, make a libpq.o and then\n> in Makefile.$(PORTNAME), make the shared libpq. Seem reasonable?\n\nClose, but no cigar. What happens when we have two, or three, or ten\nshared libs to make?\n\nThe right thing to do is to have makefiles/Makefile.PLATFORM contain\nsome sort of generic shared-library-making rule that can then be\napplied in libpq/Makefile and any other module makefile that wants\nto produce a shared library.\n\nI'm a little out of practice on generic rules in gmakefiles, but\nsince we already assume that gmake is being used, it shouldn't be\ntoo hard to do it this way.\n\nI'll be glad to help with the HPUX version of the rule.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Apr 1998 12:11:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing PORTNAME from libpq/Makefile " } ]
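A rough sketch of the generic rule Tom is suggesting. CFLAGS_SL, LDFLAGS_SL, and DLSUFFIX follow the names already used in the port makefiles; SHLIB_OBJS is a hypothetical hook each module makefile would set, and the libpq object list is illustrative:

    # makefiles/Makefile.PLATFORM (sketch)
    %$(DLSUFFIX): $(SHLIB_OBJS)
    	$(LD) $(LDFLAGS_SL) -o $@ $(SHLIB_OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) $(CFLAGS_SL) -c $< -o $@

    # interfaces/libpq/Makefile (sketch): no port-specific code needed
    SHLIB_OBJS = fe-connect.o fe-exec.o fe-misc.o
    all: libpq$(DLSUFFIX)

With the rule defined once per platform, a second or tenth shared library is just another SHLIB_OBJS list.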
[ { "msg_contents": "John Fieber <[email protected]> wrote:\n> Since 9.* and 10.* are apparently so different, maybe there would\n> be less breakage if they were treated as separate ports?\n\nI've thought about this and concluded that it'd probably just result\nin duplication of effort. There are differences, which have to be\ntaken care of by conditional tests in the Makefiles and/or #ifdefs.\nBut I think most of the discrepancies we are hearing about have to do\nwith other differences across installations. Specifically,\n (a) whether people are using gcc or the vendor cc;\n (b) what patch level people are at for libc, libm, vendor cc, ...\nHP issues separate patch streams for all these system components,\nwhich is great for getting fast turnaround on bug fixes, but it's\na nightmare when it comes to guessing what someone else's \"HP-UX 9.05\"\ninstallation really is.\n\nBruce suggested that I look at Stan Brown's back messages to the\nhackers list, which I did. As far as I can tell, Stan's major problem\nwas that he didn't know how to tell configure to use cc rather than\ngcc when both are installed. The environment-variable override trick\n(\"CC=cc configure ...\" or \"setenv CC cc; configure ...\") probably ought\nto be documented in the INSTALL instructions. The other problems he\nmentioned all seem to be solved in the 6.3.2 release. Most of the\nproblems I ran into were really a question of porting 6.3.2 to HPUX 9,\nnot 10 which is what Stan used.\n\nI would like to recommend that y'all go ahead and apply the HPUX patches\nI sent to the patches list on Tuesday. I have confirmed that they work on\nmy local installations of HPUX 9 *and* 10. I cannot guarantee that they\nwill work on every installation of HPUX, but I will be willing to take\nresponsibility for coordinating any tweaks needed to handle problems\nthat pop up elsewhere.\n\nBTW, is anyone planning to fix src/backend/port/getrusage.c so that\nit doesn't have to be hand-edited before use? I'm nervous about messing\nwith it without knowing what systems it is needed on. But ISTM that we\nought to be able to auto-configure it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Apr 1998 11:58:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "HP-UX porting strategy (moved from PATCHES)" } ]
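For the INSTALL instructions, the override Tom describes is just an environment variable that configure honors; both spellings he gives, spelled out:

    CC=cc ./configure          # Bourne-shell family

    setenv CC cc               # csh family
    ./configure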
[ { "msg_contents": "The following can be read in the Samba cvs.log,\nit makes me believe that shmem is faster than mmap.\nInformix also uses shmem and not mmap.\n\n****************************************************************\nDate: Wednesday October 29, 1997 @ 1:19\nAuthor: tridge\n\nUpdate of /data/cvs/samba/source\nIn directory samba:/tmp/cvs-serv8959\n\nModified Files:\n Makefile includes.h locking_shm.c proto.h shmem.c smb.h\nAdded Files:\n shmem_sysv.c\nLog Message:\nSYSV IPC implementation of fast share modes.\n\nIt will try sysv IPC first, then if that fails it will try mmap(),\nthen after that it will try share files.\n\nI have defined USE_SYSV_IPC for Linux, Solaris and HPUX at the\nmoment. Probably a lot more could have it defined. In fact, the vast\nmajority of systems support it. Need autoconf again :-)\n\nIt should actually be faster than the mmap() version, and doesn't need\nany lock files. This means the problem of the share mem file being on\na NFS drive will be gone.\n****************************************************************\n\nAndreas\n", "msg_date": "Thu, 23 Apr 1998 19:18:06 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "Using mmap instead of shmem" } ]
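To make the comparison concrete, here is a minimal sketch of the two attachment styles being weighed above; this is generic SysV IPC and mmap usage, not code from Samba or Postgres, with error handling trimmed:

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* SysV style: keyed segment, no backing file, so no lock file and
     * no trouble with the data directory living on an NFS mount. */
    void *attach_sysv(key_t key, size_t size)
    {
        int id = shmget(key, size, IPC_CREAT | 0600);
        void *p = (id == -1) ? (void *) -1 : shmat(id, NULL, 0);
        return (p == (void *) -1) ? NULL : p;
    }

    /* mmap style: needs a backing file, which is what breaks when the
     * shared-memory file sits on an NFS drive. */
    void *attach_mmap(const char *path, size_t size)
    {
        void *p;
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd == -1)
            return NULL;
        ftruncate(fd, (off_t) size);
        p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return (p == MAP_FAILED) ? NULL : p;
    }

The SysV path needs no backing file, which is exactly why the Samba log says the NFS problem disappears; the mmap path keeps working only while its file lives on a local filesystem.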
[ { "msg_contents": "\nFor the linux, bsd, i386-solaris and univel ports, ...\n\nIf I were to try to make foo$(DLSUFFIX) from bar.c and bah.c, I would\nthink the general sequence of events would be:\n\n1. $(CC) $(CFLAGS_SL) -o bar.o bar.c\n2. $(CC) $(CFLAGS_SL) -o bah.o bah.c\n3. $(LD) $(LDFLAGS_SL) -r -o foo.o bar.o bah.o\n4. $(LD) $(LDFLAGS_SL) -o foo$(DLSUFFIX) foo.o\n\nCould someone for each port tell me what $(CFLAGS_SL) and $(LDFLAGS_SL)\nare needed for each of these steps?\n\nI have reworked the libpq Makefile to make a shared libpq for aix and\nI'd like to move the libpq port-specific code to the port Makefiles\nwithout breaking it.\n\nAny help from others with shared library knowledge on these ports would\nbe greatly appreciated.\n\ndarrenk\n", "msg_date": "Thu, 23 Apr 1998 16:25:39 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "linux, bsd, i386-solaris and univel shared libraries." }, { "msg_contents": "Darren King writes:\n> \n> For the linux, bsd, i386-solaris and univel ports, ...\n> \n> If I were to try to make foo$(DLSUFFIX) from bar.c and bah.c, I would\n> think the general sequence of events would be:\n> \n> 1. $(CC) $(CFLAGS_SL) -o bar.o bar.c\n> 2. $(CC) $(CFLAGS_SL) -o bah.o bah.c\n> 3. $(LD) $(LDFLAGS_SL) -r -o foo.o bar.o bah.o\n> 4. $(LD) $(LDFLAGS_SL) -o foo$(DLSUFFIX) foo.o\n> \n> Could someone for each port tell me what $(CFLAGS_SL) and $(LDFLAGS_SL)\n> are needed for each of these steps?\n\nLinux:\n\nCFLAGS_SL: -fpic\nLDFLAGS_SL: -shared -soname foo$(DLSUFFIX_WITH_MAJOR_VERSION_NUMBER)\n\nI hope I didn't miss anything.\n\nMichael \n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 24 Apr 1998 11:18:27 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] linux, bsd, i386-solaris and univel shared libraries." }, { "msg_contents": "On Fri, Apr 24, 1998 at 11:18:27 +0200, Michael Meskes wrote:\n\n> Darren King writes:\n> > \n> > For the linux, bsd, i386-solaris and univel ports, ...\n> > \n> > If I were to try to make foo$(DLSUFFIX) from bar.c and bah.c, I would\n> > think the general sequence of events would be:\n> > \n> > 1. $(CC) $(CFLAGS_SL) -o bar.o bar.c\n> > 2. $(CC) $(CFLAGS_SL) -o bah.o bah.c\n> > 3. $(LD) $(LDFLAGS_SL) -r -o foo.o bar.o bah.o\n> > 4. $(LD) $(LDFLAGS_SL) -o foo$(DLSUFFIX) foo.o\n> > \n> > Could someone for each port tell me what $(CFLAGS_SL) and $(LDFLAGS_SL)\n> > are needed for each of these steps?\n> \n> Linux:\n> \n> CFLAGS_SL: -fpic\n ^^^^\nThe shared library must be compiled with `-fPIC', and the static version\nmust not be. 
In other words, each `*.c' file is compiled twice.\n\n> LDFLAGS_SL: -shared -soname foo$(DLSUFFIX_WITH_MAJOR_VERSION_NUMBER)\n> \n> I hope I didn't miss anything.\n\nTake a look at the debian policy manual :-)).\n\nHere is the patch against the `ecpg/lib/Makefile.in'.\n\n======================Makefile.in.patch===============================\n--- Makefile.in.old\tWed Apr 22 13:36:00 1998\n+++ Makefile.in\tFri Apr 24 12:59:34 1998\n@@ -22,3 +22,2 @@\n LDFLAGS_SL = -shared -soname libecpg.so.$(SO_MAJOR_VERSION)\n- CFLAGS += $(CFLAGS_SL)\n endif\n@@ -48,4 +47,4 @@\n \n-$(shlib): ecpglib.o typename.o\n-\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.o typename.o \n+$(shlib): ecpglib.sho typename.sho\n+\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho\n \tln -sf $@ libecpg.so\n@@ -53,3 +52,3 @@\n clean:\n-\trm -f *.o *.a core a.out *~ $(shlib) libecpg.so\n+\trm -f *.o *.sho *.a core a.out *~ $(shlib) libecpg.so\n \n@@ -71,4 +70,9 @@\n ecpglib.o : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n-\t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c ecpglib.c\n+\t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n typename.o : typename.c ../include/ecpgtype.h\n-\t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c typename.c\n+\t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n+\n+ecpglib.sho : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n+\t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n+typename.sho : typename.c ../include/ecpgtype.h\n+\t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n======================Makefile.in.patch===============================\n\nRegards,\n\n-- Alen\n\n----------------------------------------------------------------------\nAlen Zekulic <[email protected]>\nKey fingerprint = 47 82 56 37 1D 94 94 F8 16 C1 D8 33 1D 9D 61 73\n", "msg_date": "Fri, 24 Apr 1998 13:28:10 +0200", "msg_from": "Alen Zekulic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] linux, bsd, i386-solaris and univel shared libraries." }, { "msg_contents": "Alen Zekulic writes:\n> > CFLAGS_SL: -fpic\n> ^^^^\n> The shared library must be compiled with `-fPIC', and the static version\n> must not be. In other words, each `*.c' file is compiled twice.\n\nNormaly yes. But I didn't see this in any PostgreSQL makefile. I think using\n-fPIC still creates a working static library. It does generate less\nefficient code though.\n\n> Take a look at the debian policy manual :-)).\n\n:-)\n\n> Here is the patch against the `ecpg/lib/Makefile.in'.\n\nApplied to my source. I think we should add a similar patch to\nlibpq/Makefile and whatever else creates shared libs.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 24 Apr 1998 13:33:05 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] linux, bsd, i386-solaris and univel shared libraries." } ]
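Condensing the thread into one sketch: objects destined for the static library are built without -fPIC, shared objects (the .sho files from Alen's patch) with it, and only the latter are linked into the .so. The bar.c/bah.c and libfoo names are the placeholders from Darren's question:

    CFLAGS_SL  = -fPIC
    LDFLAGS_SL = -shared -soname libfoo.so.$(SO_MAJOR_VERSION)

    %.o: %.c                # static objects, no PIC
    	$(CC) $(CFLAGS) -c $< -o $@

    %.sho: %.c              # position-independent objects for the .so
    	$(CC) $(CFLAGS) $(CFLAGS_SL) -c $< -o $@

    libfoo.so.$(SO_MAJOR_VERSION): bar.sho bah.sho
    	$(LD) $(LDFLAGS_SL) -o $@ bar.sho bah.sho
    	ln -sf $@ libfoo.so

    libfoo.a: bar.o bah.o
    	ar rcs $@ bar.o bah.o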
[ { "msg_contents": "\nI'm leaving my current job at the end of the month and will no\nlonger be able to watch over the aix ports. Not that there aren't\nother very knowledgeable aix people on the list, I just seem to be\nthe most vocal. *grin*\n\nMy new position is at United Parcel Service maintaining and coding\nfor their web site. I don't believe I will be able to code there\nas I do now here at Insight. I'll probably only monitor the lists.\n\nI'm looking into getting a PC at home so I can keep coding, but I\ndon't see that happening until June. Then I'll jump back in\nthe fire again.\n\nI have a couple of patches in the works that I intend to finish\nand submit before next Thursday.\n\nI still have the big patch to allow the change in block size, but\ndon't think I'll have time to get the various char types to change\nwith it. If there is still interest in this, I will make a new\npatch against the latest snapshot or 6.3.2 and send to PATCHES.\n\nAny bugs and/or glitches I've caused, speak before the 30th or hold\nyour peace until June. :)\n\nDarren King\[email protected]\n\n\n", "msg_date": "Thu, 23 Apr 1998 18:31:36 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "AIX needs a new port maintainer." }, { "msg_contents": "> \n> \n> I'm leaving my current job at the end of the month and will no\n> longer be able to watch over the aix ports. Not that there aren't\n> other very knowledgeable aix people on the list, I just seem to be\n> the most vocal. *grin*\n> \n> My new position is at United Parcel Service maintaining and coding\n> for their web site. I don't believe I will be able to code there\n> as I do now here at Insight. I'll probably only monitor the lists.\n\nGood luck.\n\n> I have a couple of patches in the works that I intend to finish\n> and submit before next Thursday.\n\nThat's good.\n\n> I still have the big patch to allow the change in block size, but\n> don't think I'll have time to get the various char types to change\n> with it. If there is still interest in this, I will make a new\n> patch against the latest snapshot or 6.3.2 and send to PATCHES.\n\nOK, give us what you have, and we may be able to fix the rest.\n\n> \n> Any bugs and/or glitches I've caused, speak before the 30th or hold\n> your peace until June. :)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 23 Apr 1998 20:06:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] AIX needs a new port maintainer." }, { "msg_contents": "On Thu, 23 Apr 1998, Darren King wrote:\n\n> Any bugs and/or glitches I've caused, speak before the 30th or hold\n> your peace until June. :)\n\n\tGood luck in the new job, and look forward to hearing from you in\nJune again :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 23 Apr 1998 21:48:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] AIX needs a new port maintainer." }, { "msg_contents": "Darren King wrote:\n> \n> I'm leaving my current job at the end of the month and will no\n ^^^^^^^^^^^^^^^^^^^^^^^\nApril ? Ha-ha - and me too!
:)\nGood luck!\n\nVadim\n", "msg_date": "Fri, 24 Apr 1998 14:40:08 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] AIX needs a new port maintainer." } ]
[ { "msg_contents": "Zsolt Varga wrote:\n\n> On Thu, 23 Apr 1998, Byron Nikolaidis wrote:\n>\n> some little thing...\n> from delphi I see the following Postgres types as:\n> ---------------------------------------------\n> bool --> char(2)\n\nBool is being handled as character data, so the 2 includes the null\nterminator. This was under debate for a while. Handling it as a real\nSQL_BOOL didn't seem to work well under Access so that's why it is char.\nI could probably fix this now.\n\n> varchar(x) x > 255 --> varchar(255)\n\nVarchars are limited in the driver to 255 to allow MS Access to be able\nto index on these fields. Same with char(x).\n\n> text -- > longvarchar(484)\n\nI am still trying to figure this one out. The size should be 4096. The\nmemo text DOES seem to display properly, even though the size is screwed\nup.\n\n> char(x) --> varchar(x);\n\nchar(x), (i.e., bpchar), as well as varchar, is handled as SQL_VARCHAR.\nIn the future, we may have a feature that allows the user to decide how\nto map Postgres data types to ODBC data types. Perhaps char(x), for\nnow, should be mapped to SQL_CHAR?\n\nByron\n\nP.S. I have done more testing with Borland Database Explorer, and fixed\na few more things (I only had time to update the DLL, not the\nself-extracting EXE). Download\n\"http://www.insightdist.com/psqlodbc/postdll.zip\" and unzip into your\n\\windows\\system directory. You must have already installed the new\ndriver at least once for this to work!\n\nNote that until the backend is patched as was discussed with Bruce in\nprevious conversations on the hackers/interfaces lists, Postgres varchar\ntype will return \"I don't know\" as the precision. Borland handles this\nok, but reserves a large amount of display area for the data (because it\ndoesn't know how big the field is).\n\nPostgres char(x) type will show the correct length.\n\nByron\n\n", "msg_date": "Thu, 23 Apr 1998 23:35:40 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC + Delphi (GREAT!) + some little thing//" } ]
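A simplified illustration of the mapping policy Byron describes above; this is not the driver's actual source, the helper name is invented, and the numeric constants are the usual Postgres catalog OIDs:

    #include <sql.h>                 /* ODBC SQL_* type codes */
    #include <sqlext.h>              /* SQL_LONGVARCHAR */

    #define PG_TYPE_BOOL       16    /* catalog OIDs, for illustration */
    #define PG_TYPE_TEXT       25
    #define PG_TYPE_BPCHAR   1042
    #define PG_TYPE_VARCHAR  1043

    /* hypothetical helper, not a psqlodbc function */
    static int pg_to_sql_type(int pgtype)
    {
        switch (pgtype)
        {
            case PG_TYPE_BOOL:
                return SQL_CHAR;        /* shown as char(2), per above */
            case PG_TYPE_BPCHAR:        /* SQL_VARCHAR today; perhaps  */
            case PG_TYPE_VARCHAR:       /* SQL_CHAR for bpchar later   */
                return SQL_VARCHAR;
            case PG_TYPE_TEXT:
                return SQL_LONGVARCHAR;
            default:
                return SQL_VARCHAR;     /* unsupported types shown as text */
        }
    }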
[ { "msg_contents": "\tPlease see the Insight Dist site for a newer source and binary\ndistribution of the ODBC driver\n\t\n\thttp://www.insightdist.com/psqlodbc\n\nJulie\n\nQuoting Jose' Soares Da Silva ([email protected]):\n> Hello,\n> \n> I have a problem using Access97 and PostODBC (po021-32.tgz).\n> I can link PostgreSQL 6.3.1 tables to Access'97 but I can open them\n> only if they are empty.\n> If I insert data into tables and then I try to access it, I have the\n> following message:\n> \n> Receiving an unsupported type from Postgres (#14) SELECT (#513)\n> \n> \t\t\t Thanks for any help\n> \t\t\t\t\t Jose'\n> \n\n-- \n[ Julia Anne Case ] [ Ships are safe inside the harbor, ]\n[Programmer at large] [ but is that what ships are really for. ] \n[ Admining Linux ] [ To thine own self be true. ]\n[ Windows/WindowsNT ] [ Fair is where you take your cows to be judged. ]\n", "msg_date": "Fri, 24 Apr 1998 08:40:33 +0000", "msg_from": "\"Julia A.Case\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Hello,\n\nI have a problem using Access97 and PostODBC (po021-32.tgz).\nI can link PostgreSQL 6.3.1 tables to Access'97 but I can open them\nonly if they are empty.\nIf I insert data into tables and then I try to access it, I have the\nfollowing message:\n\n Receiving an unsupported type from Postgres (#14) SELECT (#513)\n \n\t\t\t Thanks for any help\n\t\t\t\t\t Jose'\n\n", "msg_date": "Fri, 24 Apr 1998 15:04:55 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Access'97 and ODBC" }, { "msg_contents": "On Fri, 24 Apr 1998, Julia A.Case wrote:\n\n> \tPlease see the Insight Dist site for a newer source and binary\n> distribution of the ODBC driver\n> \t\n> \thttp://www.insightdist.com/psqlodbc\n> \n> Julie\n\nThanks Julie. Now it works, but now I have a little problem about\ndate formats.\nI have a table with field1 DATE and field2 TIMESTAMP. If I insert data\ninto these fields, field2 looks OK, but Access97 show me a strange\ndate on field1.\n\nThis is Access97 output:\n field1: 27/7/99\n field2: 1998-04-27 12:20:21+02\n \nThis is psql output:\n Field | Value\n -- RECORD 0 --\n field1| 1998-04-27\n field2| 1998-04-27 12:20:21+02\n----\nPS: My DateStyle is setting to 'ISO'\n Jose'\n\n", "msg_date": "Mon, 27 Apr 1998 14:25:11 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Hello,\n\nThe ODBC driver can not yet handle multiple datestyle formats. Currently,\nit expects dates to be in US format. There will be a future option that\nallows you to configure that for the driver or per datasource.\n\nByron\n\n\nJose' Soares Da Silva wrote:\n\n> Thanks Julie. Now it works, but now I have a little problem about\n> date formats.\n> I have a table with field1 DATE and field2 TIMESTAMP. 
If I insert data\n> into these fields, field2 looks OK, but Access97 show me a strange\n> date on field1.\n>\n> This is Access97 output:\n> field1: 27/7/99\n> field2: 1998-04-27 12:20:21+02\n>\n> This is psql output:\n> Field | Value\n> -- RECORD 0 --\n> field1| 1998-04-27\n> field2| 1998-04-27 12:20:21+02\n> ----\n> PS: My DateStyle is setting to 'ISO'\n> Jose'\n\n\n\n", "msg_date": "Mon, 27 Apr 1998 10:40:17 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Jose' Soares Da Silva wrote:\n> \n> I have a table with field1 DATE and field2 TIMESTAMP. If I insert data\n> into these fields, field2 looks OK, but Access97 show me a strange\n> date on field1.\n> \n> This is Access97 output:\n> field1: 27/7/99\n> field2: 1998-04-27 12:20:21+02\n> \n> This is psql output:\n> Field | Value\n> -- RECORD 0 --\n> field1| 1998-04-27\n> field2| 1998-04-27 12:20:21+02\n> ----\n> PS: My DateStyle is setting to 'ISO'\n\nYou should set it to 'US' when using Insight ODBC drivers. \n\nIt should affect the output in no way, but the driver expects it from \nthe backend in US format. As this is a per-connection setting it can \nsafely be set from the driver at startup without affecting other \nconnections.\n\nThere has been some discussion about 'fixing' it and making the \ndriver recognize other date formats. That would be IMHO unnecessary. \nIt should be enough just to do \"SET DateStyle TO 'US';\" at startup.\n\nThis can be currently done by setting some registry entries, but \nthis should really be just a part of driver startup.\n\nHannu\n", "msg_date": "Tue, 28 Apr 1998 17:51:02 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "\n\nHannu Krosing wrote:\n\n> Jose' Soares Da Silva wrote:\n> >\n> > I have a table with field1 DATE and field2 TIMESTAMP. If I insert data\n> > into these fields, field2 looks OK, but Access97 show me a strange\n> > date on field1.\n> >\n> > This is Access97 output:\n> > field1: 27/7/99\n> > field2: 1998-04-27 12:20:21+02\n> >\n> > This is psql output:\n> > Field | Value\n> > -- RECORD 0 --\n> > field1| 1998-04-27\n> > field2| 1998-04-27 12:20:21+02\n> > ----\n> > PS: My DateStyle is setting to 'ISO'\n>\n> You should set it to 'US' when using Insight ODBC drivers.\n>\n> It should affect the output in no way, but the driver expects it from\n> the backend in US format. As this is a per-connection setting it can\n> safely be set from the driver at startup without affecting other\n> connections.\n>\n> There has been some discussion about 'fixing' it and making the\n> driver recognize other date formats. That would be IMHO unnecessary.\n> It should be enough just to do \"SET DateStyle TO 'US';\" at startup.\n>\n> This can be currently done by setting some registry entries, but\n> this should really be just a part of driver startup.\n>\n> Hannu\n\n\nHannu,\n\nI understand what you are saying here, and am very tempted to just go with\nsetting the datestyle to US at connection time by default. 
It is true that\nthis would have no negative effect on applications such as Access.\n\nBut, before I do, is there cases out there where people are executing DIRECT\nqueries through the driver where they are expecting the date to be in a\nparticular format such as:\n\ninsert into tablex (date1) values('28-04-1998') # DD-MM-YYYY\nformat\n\nIf the driver always sets the datestyle to \"US\", the above insert might not\nwork. Of course, I would imagine the query should be written more portably\nusing the ODBC shorthand escape syntax, as:\n\ninsert into tablex (date1) values( {d '1998-04-28'} ),\n\nwhich would work correctly. The reverse is true also, if the user does\n\"select date1 from tablex\", and uses SQL_C_CHAR as the return type,\nexpecting the format to be EURO, when in fact it would be US.\n\nIf no one has any objections, I will change the driver to always set the\ndatestyle to US, and forget about adding a selection to the dialogs to\nselect it.\n\nByron\n\n\n\n", "msg_date": "Tue, 28 Apr 1998 17:32:43 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Hello,\n\nAt 17.32 28/04/98 -0400, Byron Nikolaidis wrote:\n\n>I understand what you are saying here, and am very tempted to just go with\n>setting the datestyle to US at connection time by default. It is true that\n>this would have no negative effect on applications such as Access.\n>\n>But, before I do, is there cases out there where people are executing DIRECT\n>queries through the driver where they are expecting the date to be in a\n>particular format such as:\n>\n>insert into tablex (date1) values('28-04-1998') # DD-MM-YYYY\n>format\n>\n>If the driver always sets the datestyle to \"US\", the above insert might not\n>work. Of course, I would imagine the query should be written more portably\n>using the ODBC shorthand escape syntax, as:\n>\n>insert into tablex (date1) values( {d '1998-04-28'} ),\n>\n>which would work correctly. The reverse is true also, if the user does\n>\"select date1 from tablex\", and uses SQL_C_CHAR as the return type,\n>expecting the format to be EURO, when in fact it would be US.\n>\n>If no one has any objections, I will change the driver to always set the\n>datestyle to US, and forget about adding a selection to the dialogs to\n>select it.\n\nMicrosoft says that the US date format is *always* recognized by the Jet\ndatabase engine, no matter of the windows interntional settings, and it\nsuggest to use US date format as a kind of international date format. This\nmeans that whenever you don't know in which country your program will be\nexecuted, it is safe to use the US date format. Setting US datestyle by\ndefault in the ODBC driver will provide a behaviour which is much similar\nto the Jet database engine, i.e. the behaviour Access/VB programmers\nusually have to deal with. So go on with this solution !\n\nBye !\n\nP.S. I tested the new ODBC driver with index support. VisData still isn't\nable to show the index list, anyway it sees them because it allow updates.\nUsed with VB the ODBC is rather slow compared with other ODBC (About 10\ntime slower than MS SQL and Velocis, about 30 times slower than MySql) but\nit works pretty well. Anyway it is about 3/4 times faster than the OpenLink\ndriver, which is also pretty buggy ;) Really good job Byron !\n\n\tDr. 
Sbragion Denis\n\tInfoTecna\n\tTel, Fax: +39 39 2324054\n\tURL: http://space.tin.it/internet/dsbragio\n", "msg_date": "Wed, 29 Apr 1998 09:05:07 +0200", "msg_from": "Sbragion Denis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "\n\nSbragion Denis wrote:\n\n> P.S. I tested the new ODBC driver with index support. VisData still isn't\n> able to show the index list, anyway it sees them because it allow updates.\n> Used with VB the ODBC is rather slow compared with other ODBC (About 10\n> time slower than MS SQL and Velocis, about 30 times slower than MySql) but\n> it works pretty well. Anyway it is about 3/4 times faster than the OpenLink\n> driver, which is also pretty buggy ;) Really good job Byron !\n>\n\nI'm not sure why VisData still isn't able to show the index list. First of all,\nI dont know what \"VisData\" is anyway! Perhaps you could use the odbc tracing\nfeature (through the 32 bit odbc administrator) and send the \"sql.log\" to me.\nMake sure it is empty before you begin your session. This will really slow\nthings down by the way.\n\nAs for performance, the backend affects that equation greatly. You should see\nwhat happens in Access when you are using unique indexes. Even with one keypart,\nAccess generates that infamous query we have been talking about (with all the\nANDs and ORs), which really slows things down.\n\n\nByron\n\n", "msg_date": "Wed, 29 Apr 1998 09:31:25 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "> \"For Postgres v6.3 (and earlier) the default date/time style is\n> \"traditional Postgres\". In future releases, the default may become\n> ISO-8601, which alleviates date specification ambiguities and Y2K\n> collation problems.\"\n> \n> I vote for changing default date format to ISO-8601 to reflect \n> PostgreSQL documentation and for adherence to Standard SQL92.\n\nI was thinking that if the default format changes it should change at a\nmajor rev (i.e. v7.0) since one might expect interfaces to need updates\nat a major rev anyway.\n\nBut let me turn around the question, in case no one is bothered by this:\n\nDoes anyone think that the default date format _shouldn't_ change to\nISO-8601 for the next release?\n\n(I expect to hear that it shouldn't change, but figured I should confirm\nit...).\n\n - Tom\n", "msg_date": "Wed, 29 Apr 1998 13:59:29 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "On Tue, 28 Apr 1998, Byron Nikolaidis wrote:\n\n> \n> \n> Hannu Krosing wrote:\n> \n> > Jose' Soares Da Silva wrote:\n> > >\n> > > I have a table with field1 DATE and field2 TIMESTAMP. If I insert data\n> > > into these fields, field2 looks OK, but Access97 show me a strange\n> > > date on field1.\n> > >\n> > > This is Access97 output:\n> > > field1: 27/7/99\n> > > field2: 1998-04-27 12:20:21+02\n> > >\n> > > This is psql output:\n> > > Field | Value\n> > > -- RECORD 0 --\n> > > field1| 1998-04-27\n> > > field2| 1998-04-27 12:20:21+02\n> > > ----\n> > > PS: My DateStyle is setting to 'ISO'\n> >\n> > You should set it to 'US' when using Insight ODBC drivers.\n> >\n> > It should affect the output in no way, but the driver expects it from\n> > the backend in US format. 
As this is a per-connection setting it can\n> > safely be set from the driver at startup without affecting other\n> > connections.\n> >\n> > There has been some discussion about 'fixing' it and making the\n> > driver recognize other date formats. That would be IMHO unnecessary.\n> > It should be enough just to do \"SET DateStyle TO 'US';\" at startup.\n> >\n> > This can be currently done by setting some registry entries, but\n> > this should really be just a part of driver startup.\n> >\n> > Hannu\n> \n> \n> Hannu,\n> \n> I understand what you are saying here, and am very tempted to just go with\n> setting the datestyle to US at connection time by default. It is true that\n> this would have no negative effect on applications such as Access.\n> \n> But, before I do, is there cases out there where people are executing DIRECT\n> queries through the driver where they are expecting the date to be in a\n> particular format such as:\n> \n> insert into tablex (date1) values('28-04-1998') # DD-MM-YYYY\n> format\n> \n> If the driver always sets the datestyle to \"US\", the above insert might not\n> work. Of course, I would imagine the query should be written more portably\n> using the ODBC shorthand escape syntax, as:\n> \n> insert into tablex (date1) values( {d '1998-04-28'} ),\n> \n> which would work correctly. The reverse is true also, if the user does\n> \"select date1 from tablex\", and uses SQL_C_CHAR as the return type,\n> expecting the format to be EURO, when in fact it would be US.\n> \n> If no one has any objections, I will change the driver to always set the\n> datestyle to US, and forget about adding a selection to the dialogs to\n> select it.\n\nWhy not ISO-8601 this is the Standard SQL92 date format (i.e. YYYY-MM-DD)\nand for coherence with PostgreSQL User's Guide, quoting Thomas Lockhart\nat page 14, chapter 4, under \"Date/Time Styles\":\n \n \"For Postgres v6.3 (and earlier) the default date/time style is\n \"traditional Postgres\". In future releases, the default may become\n ISO-8601, which alleviates date specification ambiguities and Y2K\n collation problems.\"\n \nI vote for changing default date format to ISO-8601 to reflect PostgreSQL\ndocumentation and for adherence to Standard SQL92.\n Jose'\n\n", "msg_date": "Wed, 29 Apr 1998 15:04:26 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Hello,\n\nAt 09.31 29/04/98 -0400, Byron Nikolaidis wrote:\n>I'm not sure why VisData still isn't able to show the index list. First\nof all,\n>I dont know what \"VisData\" is anyway! Perhaps you could use the odbc tracing\n\nVisData is a small tool provided with visual basic 5.0. It provides a\ngraphical representation of all the feature of any database that could be\nopened through visual basic, including ODBC databases. It is quite an hard\ntest for any ODBC driver because it tries to show *almost anything* that\ncould be retrieved through an ODBC driver, not only data. Most ODBC\ndrivers, even some \"famous\" one, fail with VisData and still can perfectly\nbe used in normal applications.\n\n>feature (through the 32 bit odbc administrator) and send the \"sql.log\" to me.\n>Make sure it is empty before you begin your session. This will really slow\n>things down by the way.\n\nI'll do it ASAP, and I'll provide also the exact sequence of operation\nperformed to show the problems. 
Anyway the problem showed with VisData has\nno importance at all, at least using Visual Basic and Access. ASAP I'll\nalso perform some test using Power Builder, wich uses the ODBC in a\ndifferent way than VB.\n\n>As for performance, the backend affects that equation greatly. You should\nsee\n>what happens in Access when you are using unique indexes. Even with one\nkeypart,\n>Access generates that infamous query we have been talking about (with all the\n>ANDs and ORs), which really slows things down.\n\nI know. Anyway I was not using Access but a small test program I wrote\nmyself. This program perform random operations (insert, update, select and\ndelete) through recordset opened on simple tables, so it doesn't suffer\nthe Access \"feature\" of creating too complex queries. I know this is not a\ndeep test, anyway it is the sort of operations 90% of VB code perform on\ndatabases. I think first we should obtain a functioning ODBC driver, i.e.\nyou should continue on the way you are going now. After this we could take\ncare of performances. Doing things in reverse order usually produce \"very\nfast non functioning code\", which is not usefull at all ;)\n\nBye !\n\n\tDr. Sbragion Denis\n\tInfoTecna\n\tTel, Fax: +39 39 2324054\n\tURL: http://space.tin.it/internet/dsbragio\n", "msg_date": "Thu, 30 Apr 1998 08:38:36 +0200", "msg_from": "Sbragion Denis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "\"Jose' Soares Da Silva\" <[email protected]> writes:\n\n> I vote for changing default date format to ISO-8601 to reflect\n> PostgreSQL documentation and for adherence to Standard SQL92.\n\nHear! Hear! Good standards beat silly conventions any day!\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "30 Apr 1998 09:44:30 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Thanks to every body that replied my question. Now dates are Ok.\n\nNow I have another problem using M$-Access;\n I have a table like this one:\n\nTable = comuni\n+------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+------------------------------+----------------------------------+-------+\n| istat | char() not null | 6 |\n| nome | varchar() | 50 |\n| provincia | char() | 2 |\n| codice_fiscale | char() | 4 |\n| cap | char() | 5 |\n| regione | char() | 3 |\n| distretto | char() | 4 |\n+------------------------------+----------------------------------+-------+\n... in this table I have stored 8k rows, if I load it from M$-Access and \nthen I modify a row and I try to save it to database, it goes in a loop\nI don't know what's happening.\n Please help me. Thanks, Jose'\n\n\n\nOn Tue, 28 Apr 1998, Hannu Krosing wrote:\n\n> Jose' Soares Da Silva wrote:\n> > \n> > I have a table with field1 DATE and field2 TIMESTAMP. If I insert data\n> > into these fields, field2 looks OK, but Access97 show me a strange\n> > date on field1.\n> > \n> > This is Access97 output:\n> > field1: 27/7/99\n> > field2: 1998-04-27 12:20:21+02\n> > \n> > This is psql output:\n> > Field | Value\n> > -- RECORD 0 --\n> > field1| 1998-04-27\n> > field2| 1998-04-27 12:20:21+02\n> > ----\n> > PS: My DateStyle is setting to 'ISO'\n> \n> You should set it to 'US' when using Insight ODBC drivers. \n> \n> It should affect the output in no way, but the driver expects it from \n> the backend in US format. 
As this is a per-connection setting it can \n> safely be set from the driver at startup without affecting other \n> connections.\n> \n> There has been some discussion about 'fixing' it and making the \n> driver recognize other date formats. That would be IMHO unnecessary. \n> It should be enough just to do \"SET DateStyle TO 'US';\" at startup.\n> \n> This can be currently done by setting some registry entries, but \n> this should really be just a part of driver startup.\n> \n> Hannu\n\n", "msg_date": "Thu, 30 Apr 1998 15:36:00 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "Jose' Soares Da Silva wrote:\n\n> Now I have another problem using M$-Access;\n> I have a table like this one:\n>\n> Table = comuni\n> +------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +------------------------------+----------------------------------+-------+\n> | istat | char() not null | 6 |\n> | nome | varchar() | 50 |\n> | provincia | char() | 2 |\n> | codice_fiscale | char() | 4 |\n> | cap | char() | 5 |\n> | regione | char() | 3 |\n> | distretto | char() | 4 |\n> +------------------------------+----------------------------------+-------+\n> ... in this table I have stored 8k rows, if I load it from M$-Access and\n> then I modify a row and I try to save it to database, it goes in a loop\n> I don't know what's happening.\n> Please help me. Thanks, Jose'\n>\n\nThis problem has to do with the Postgres' locking mechanism. You cant update a\ntable while you have the table open for reading. You may be asking yourself,\nbut I do not have the table open for reading. Ahhh, but Access does because of\nthe way the odbc driver uses cursors to manage backend data.\n\nHere is the illustration:\n---------------------\nAccess uses two backend connections. On one connection, it does a query to get\nkey values from the table:\n\"declare c1 cursor for select key from table\"\n\nIt then fetches 101 keys from this query. This fetch results in the following\n2 queries to the backend:\n\"fetch 100 in c1\"\n\"fetch 100 in c1\"\n\n(Note that there are 8000+ rows in the table so this leaves the table locked)\n\nOn the other connection, it actually does the update query:\n\"update table set a1=2 where key=1\"\n\nThis update will wait forever because the other query has the table completely\nlocked.\n\nWorkarounds\n--------------\nIn Access, you can go to the end of the table first, before you begin your\nupdate. Then, any update or insert you do should work.\n\nYou can also do your update on a smaller subset of records by using a filter in\nAccess. 200 or less rows would allow the driver to handle it since all the\nkeys would have been read in as illustrated above.\n\nNow for the ultimate question\n-----------------------------\nWhat is the current status/priority of the locking enhancements for Postgres?\nClearly, this is an important problem and needs to be addressed. Even though\nthe above example only involves Microsoft Access, we have applications which\nneed to write data to tables that may already be open for reading for a long\ntime,\nsuch as while doing a massive report with lots of joins. 
With the current\nlocking strategy, these applications are impossible.\n\nRegards,\n\nByron\n\n", "msg_date": "Thu, 30 Apr 1998 12:10:45 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres Locking, Access'97 and ODBC" }, { "msg_contents": "On Thu, 30 Apr 1998, Byron Nikolaidis wrote:\n\nThank you very much Byron for your explanation.\n\n> Jose' Soares Da Silva wrote:\n> \n> > Now I have another problem using M$-Access;\n> > I have a table like this one:\n> >\n> > Table = comuni\n> > +------------------------------+----------------------------------+-------+\n> > | Field | Type | Length|\n> > +------------------------------+----------------------------------+-------+\n> > | istat | char() not null | 6 |\n> > | nome | varchar() | 50 |\n> > | provincia | char() | 2 |\n> > | codice_fiscale | char() | 4 |\n> > | cap | char() | 5 |\n> > | regione | char() | 3 |\n> > | distretto | char() | 4 |\n> > +------------------------------+----------------------------------+-------+\n> > ... in this table I have stored 8k rows, if I load it from M$-Access and\n> > then I modify a row and I try to save it to database, it goes in a loop\n> > I don't know what's happening.\n> > Please help me. Thanks, Jose'\n> >\n> \n> This problem has to do with the Postgres' locking mechanism. You cant update a\n> table while you have the table open for reading. You may be asking yourself,\n> but I do not have the table open for reading. Ahhh, but Access does because of\n> the way the odbc driver uses cursors to manage backend data.\n> \n> Here is the illustration:\n> ---------------------\n> Access uses two backend connections. On one connection, it does a query to get\n> key values from the table:\n> \"declare c1 cursor for select key from table\"\n> \n> It then fetches 101 keys from this query. This fetch results in the following\n> 2 queries to the backend:\n> \"fetch 100 in c1\"\n> \"fetch 100 in c1\"\n> \n> (Note that there are 8000+ rows in the table so this leaves the table locked)\n> \n> On the other connection, it actually does the update query:\n> \"update table set a1=2 where key=1\"\n> \n> This update will wait forever because the other query has the table completely\n> locked.\n> \n> Workarounds\n> --------------\n> In Access, you can go to the end of the table first, before you begin your\n> update. Then, any update or insert you do should work.\n> \n> You can also do your update on a smaller subset of records by using a filter in\n> Access. 200 or less rows would allow the driver to handle it since all the\n> keys would have been read in as illustrated above.\n\nSeems this problem exists also when I read only one row.\nI tried this: \nI got the first row using a form, then I modified a field on this form and\nthen I tried to load the next row (by using right arrow), and Access\nis already there locked by PostgreSQL.\nps command give me the followinng result: (two backend connections as you said)\n\n3033 ? S 0:00 postmaster -i -o -F -B 512 -S\n5034 ? S 0:01 /usr/local/pgsql/bin/postgres -p -Q -P5 -F -B 512 -v 6553\n5035 ? S 0:07 /usr/local/pgsql/bin/postgres -p -Q -P5 -F -B 512 -v 6553\n\n> \n> Now for the ultimate question\n> -----------------------------\n> What is the current status/priority of the locking enhancements for Postgres?\n> Clearly, this is an important problem and needs to be addressed. 
Even though\n> the above example only involves Microsoft Access, we have applications which\n> need to write data to tables that may already be open for reading for a long\n> time,\n> such as while doing a massive report with lots of joins. With the current\n> locking strategy, these applications are impossible.\n\nIs there in project to work on this problem ?\n Jose'\n\n", "msg_date": "Tue, 5 May 1998 10:00:27 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Postgres Locking, Access'97 and ODBC" }, { "msg_contents": "Hi, all!\n\nI created a table with a TIMESTAMP data type to use with M$-Access, because\nAccess uses such field to control concurrent access on records.\nBut I have a problem M$-Access doesn't recognize a TIMESTAMP type, it see\nsuch fields as \"text\" instead of \"date/time\".\nIs there a way to make Access recognize TIMESTAMPs ?\n Thanks, Jose'\n\n\n", "msg_date": "Tue, 9 Jun 1998 10:18:08 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "\n\nJose' Soares Da Silva wrote:\n\n> Hi, all!\n>\n> I created a table with a TIMESTAMP data type to use with M$-Access, because\n> Access uses such field to control concurrent access on records.\n> But I have a problem M$-Access doesn't recognize a TIMESTAMP type, it see\n> such fields as \"text\" instead of \"date/time\".\n> Is there a way to make Access recognize TIMESTAMPs ?\n> Thanks, Jose'\n\n I could add TimeStamp as a supported data type of the odbc driver. Currently,\n'abstime' is supported but not 'timestamp'.\n\nByron\n\n", "msg_date": "Tue, 09 Jun 1998 09:16:13 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Tue, 9 Jun 1998, Jose' Soares Da Silva wrote:\n\n> Hi, all!\n> \n> I created a table with a TIMESTAMP data type to use with M$-Access, because\n> Access uses such field to control concurrent access on records.\n> But I have a problem M$-Access doesn't recognize a TIMESTAMP type, it see\n> such fields as \"text\" instead of \"date/time\".\n> Is there a way to make Access recognize TIMESTAMPs ?\n> Thanks, Jose'\nAlso the following types are recognized as text:\n int28\n oid8\n oidint2\n oidint4\n\nI forgot to say that I'm using :\n PostgreSQL-6.3.2\n Linyx ELF 2.0.33\n psqlodbc-06.30.0243\n M$-Access97\n Ciao, Jose'\n\n", "msg_date": "Tue, 9 Jun 1998 14:42:23 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "\n\nByron Nikolaidis wrote:\n\n> I could add TimeStamp as a supported data type of the odbc driver. 
Currently,\n'abstime' is supported but not 'timestamp'.\n\nByron\n\n", "msg_date": "Tue, 09 Jun 1998 09:16:13 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Tue, 9 Jun 1998, Jose' Soares Da Silva wrote:\n\n> Hi, all!\n> \n> I created a table with a TIMESTAMP data type to use with M$-Access, because\n> Access uses such a field to control concurrent access to records.\n> But I have a problem: M$-Access doesn't recognize a TIMESTAMP type; it sees\n> such fields as \"text\" instead of \"date/time\".\n> Is there a way to make Access recognize TIMESTAMPs ?\n> Thanks, Jose'\nAlso the following types are recognized as text:\n int28\n oid8\n oidint2\n oidint4\n\nI forgot to say that I'm using :\n PostgreSQL-6.3.2\n Linux ELF 2.0.33\n psqlodbc-06.30.0243\n M$-Access97\n Ciao, Jose'\n\n", "msg_date": "Tue, 9 Jun 1998 14:42:23 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "\n\nByron Nikolaidis wrote:\n\n> I could add TimeStamp as a supported data type of the odbc driver. 
Currently,\n> 'abstime' is supported but not 'timestamp'.\n>\n\nAlso, the postgres \"datetime\" type is already supported as well.\nMaybe that would work for you temporarily.\nAs a matter of fact, all the date/time types \"look\" the same since we now use\n'ISO'.\n\nByron\n\n", "msg_date": "Tue, 09 Jun 1998 11:12:13 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "\n\nJose' Soares Da Silva wrote:\n\n> Also the following types are recognized as text:\n> int28\n> oid8\n> oidint2\n> oidint4\n>\n\nJust a little history here...any data type that is not directly supported by the\nodbc driver will get mapped to SQL_VARCHAR or SQL_LONGVARCHAR, depending on\ndriver 'data type options'. That allows you to view it and possibly update it,\nif there is an appropriate operator. This is great compared to what the driver\nused to do in the old days with unsupported types (i.e., crash with no\ndescriptive error message)!\n\nFor int28 and oid8, there is no SQL data type that maps. Text is the only way to\ndisplay it that I know of.\n\noidint2 and oidint4 are just integers I guess, and probably could be mapped to\nSQL_SMALLINT and SQL_INTEGER, respectively.\n\n\nByron\n\n", "msg_date": "Tue, 09 Jun 1998 11:55:41 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Tue, 9 Jun 1998, Byron Nikolaidis wrote:\n\n> \n> \n> Jose' Soares Da Silva wrote:\n> \n> > Hi, all!\n> >\n> > I created a table with a TIMESTAMP data type to use with M$-Access, because\n> > Access uses such a field to control concurrent access to records.\n> > But I have a problem: M$-Access doesn't recognize a TIMESTAMP type; it sees\n> > such fields as \"text\" instead of \"date/time\".\n> > Is there a way to make Access recognize TIMESTAMPs ?\n> > Thanks, Jose'\n> \n> I could add TimeStamp as a supported data type of the odbc driver. Currently,\n> 'abstime' is supported but not 'timestamp'.\n> \nThank you Byron.\nI think this is great. M$-Access should work well with a timestamp field;\nI have problems with concurrent access and I think it is because of this data type.\n Jose'\n\n", "msg_date": "Tue, 9 Jun 1998 17:25:11 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Tue, 9 Jun 1998, Byron Nikolaidis wrote:\n\n> \n> \n> Byron Nikolaidis wrote:\n> \n> > I could add TimeStamp as a supported data type of the odbc driver. 
I don't believe we have such a column in postgres?\n\nByron\n\n\n\n", "msg_date": "Wed, 10 Jun 1998 11:15:49 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "\n\n\nByron Nikolaidis wrote:\n\n> Jose' Soares Da Silva wrote:\n>\n> > My problem is that I need a TIMESTAMP data type defined in M$-Access because\n> > M$-Access wants it to have best performance when it updates a table via ODBC.\n> > M$-Access doesn't lock a record being modified, to allow control concurrent\n> > access to data M$-Access reads again the record to verify if it was modified by\n> > another user, before update it to database.\n> > If there's a TIMESTAMP M$-Access verifies only, if this field was modified,\n> > otherwise it verifies every field of the table, and obviously it is slower.\n> > I beleave it would very useful if you could add this feature to psqlodbc.\n> > Thanks, Jose'\n> >\n>\n\nI did some testing with SQLSpecialColumns 'SQL_ROWVER'. As I noted in my previous mail,\nwe dont return anything for this function in the driver. I tried hard-coding a column\nthat was a SQL_TIMESTAMP type (in my table it was a postgres 'datetime'). Access did use\nthat column. Here are the results:\n\ntest1 table\n----------\na,c,d,e,f,g = int2\nb,h = varchar\ndatetim = datetime\n\nAccess results without ROWVER (this is the way things currently are)\n---------------------------------------------------------------------\nBEGIN\nupdate test1 set b='hello' where a=7 AND b='other' AND c=3 AND d=4 AND e is NULL AND f is\nNULL AND g=5 AND h='stuff'\nCOMMIT\n\nAccess results with ROWVER\n-------------------------------\nBEGIN\nupdate test1 set b='hello' where a=7 AND datetim = '1998-05-30 10:59:00';\nselect a,b,c,d,e,f,g,h,datetim where a=7;\nCOMMIT\n\nConclusion:\n-----------\nThe update statement was definately smaller and only involved the key and the timestamp\ncolumn. The extra select that it does to verify no one has changed anything (using the\nvalue of the timestamp) slowed the update down, though. I don't think the speed gain on\nthe smaller update statement makes up for the extra query. In either case, the backend\nlocking problem would still prevent the update if the table was opened by someone else (or\neven the same application, as in our declare/fetch problem).\n\nAlso, something would have to be done to actually put a timestamp value in every time a\nrow was added or updated. Access actually prevented me from entering a value in my\n'datetim' field because it assumed the dbms would fill it in. I guess you could use a\ntrigger to update the timestamp field. OR if we had a pseudo column that qualified, we\ncould use that, however when I tried using a pseudo column, Access barfed on me\ncomplaining \"Table TMP%#$$^ already exists\". If I added the pseudo column to the output,\nthe message went away. 
I have no idea what the heck that means?\n\nAny ideas or thoughts?\n\nByron\n\n\n\n", "msg_date": "Wed, 10 Jun 1998 13:45:51 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Wed, 10 Jun 1998, Byron Nikolaidis wrote:\n\n> \n> \n> \n> Byron Nikolaidis wrote:\n> \n> > Jose' Soares Da Silva wrote:\n> >\n> > > My problem is that I need a TIMESTAMP data type defined in M$-Access because\n> > > M$-Access wants it to have best performance when it updates a table via ODBC.\n> > > M$-Access doesn't lock a record being modified, to allow control concurrent\n> > > access to data M$-Access reads again the record to verify if it was modified by\n> > > another user, before update it to database.\n> > > If there's a TIMESTAMP M$-Access verifies only, if this field was modified,\n> > > otherwise it verifies every field of the table, and obviously it is slower.\n> > > I beleave it would very useful if you could add this feature to psqlodbc.\n> > > Thanks, Jose'\n> > >\n> >\n> \n> I did some testing with SQLSpecialColumns 'SQL_ROWVER'. As I noted in my previous mail,\n> we dont return anything for this function in the driver. I tried hard-coding a column\n> that was a SQL_TIMESTAMP type (in my table it was a postgres 'datetime'). Access did use\n> that column. Here are the results:\n> \n> test1 table\n> ----------\n> a,c,d,e,f,g = int2\n> b,h = varchar\n> datetim = datetime\n> \n> Access results without ROWVER (this is the way things currently are)\n> ---------------------------------------------------------------------\n> BEGIN\n> update test1 set b='hello' where a=7 AND b='other' AND c=3 AND d=4 AND e is NULL AND f is\n> NULL AND g=5 AND h='stuff'\n> COMMIT\n> \n> Access results with ROWVER\n> -------------------------------\n> BEGIN\n> update test1 set b='hello' where a=7 AND datetim = '1998-05-30 10:59:00';\n> select a,b,c,d,e,f,g,h,datetim where a=7;\n> COMMIT\n> \n> Conclusion:\n> -----------\n> The update statement was definately smaller and only involved the key and the timestamp\n> column. The extra select that it does to verify no one has changed anything (using the\n> value of the timestamp) slowed the update down, though. I don't think the speed gain on\n> the smaller update statement makes up for the extra query. In either case, the backend\n\nI don't know for sure, if in this way Access is faster, I red on Access\nmanual that it is faster using ROWVER during updates.\nI think the extra select is to refresh the data on the Client side, otherwise \nAccess doesn't refresh the Client and it says that another user has\nmodified the record (but that other user is me).\n\n> locking problem would still prevent the update if the table was opened by someone else (or\n> even the same application, as in our declare/fetch problem).\n> \n> Also, something would have to be done to actually put a timestamp value in every time a\n> row was added or updated. Access actually prevented me from entering a value in my\n> 'datetim' field because it assumed the dbms would fill it in. I guess you could use a\n> trigger to update the timestamp field. OR if we had a pseudo column that qualified, we\n> could use that, however when I tried using a pseudo column, Access barfed on me\n> complaining \"Table TMP%#$$^ already exists\". If I added the pseudo column to the output,\n> the message went away. 
I have no idea what the heck that means?\n\nAny ideas or thoughts?\n\nByron\n\n\n\n", "msg_date": "Wed, 10 Jun 1998 13:45:51 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: M$-Access'97 and TIMESTAMPs" }, { "msg_contents": "On Wed, 10 Jun 1998, Byron Nikolaidis wrote:\n\n> \n> \n> \n> Byron Nikolaidis wrote:\n> \n> > Jose' Soares Da Silva wrote:\n> >\n> > > My problem is that I need a TIMESTAMP data type defined in M$-Access because\n> > > M$-Access wants it to have the best performance when it updates a table via ODBC.\n> > > M$-Access doesn't lock a record being modified; to control concurrent\n> > > access to data, M$-Access reads the record again to verify if it was modified by\n> > > another user, before updating it in the database.\n> > > If there's a TIMESTAMP, M$-Access verifies only if this field was modified;\n> > > otherwise it verifies every field of the table, and obviously that is slower.\n> > > I believe it would be very useful if you could add this feature to psqlodbc.\n> > > Thanks, Jose'\n> > >\n> >\n> \n> I did some testing with SQLSpecialColumns 'SQL_ROWVER'. As I noted in my previous mail,\n> we don't return anything for this function in the driver. I tried hard-coding a column\n> that was a SQL_TIMESTAMP type (in my table it was a postgres 'datetime'). Access did use\n> that column. Here are the results:\n> \n> test1 table\n> ----------\n> a,c,d,e,f,g = int2\n> b,h = varchar\n> datetim = datetime\n> \n> Access results without ROWVER (this is the way things currently are)\n> ---------------------------------------------------------------------\n> BEGIN\n> update test1 set b='hello' where a=7 AND b='other' AND c=3 AND d=4 AND e is NULL AND f is\n> NULL AND g=5 AND h='stuff'\n> COMMIT\n> \n> Access results with ROWVER\n> -------------------------------\n> BEGIN\n> update test1 set b='hello' where a=7 AND datetim = '1998-05-30 10:59:00';\n> select a,b,c,d,e,f,g,h,datetim where a=7;\n> COMMIT\n> \n> Conclusion:\n> -----------\n> The update statement was definitely smaller and only involved the key and the timestamp\n> column. The extra select that it does to verify no one has changed anything (using the\n> value of the timestamp) slowed the update down, though. I don't think the speed gain on\n> the smaller update statement makes up for the extra query. In either case, the backend\n\nI don't know for sure if Access is faster this way; I read in the Access\nmanual that it is faster using ROWVER during updates.\nI think the extra select is to refresh the data on the Client side, otherwise \nAccess doesn't refresh the Client and it says that another user has\nmodified the record (but that other user is me).\n\n> locking problem would still prevent the update if the table was opened by someone else (or\n> even the same application, as in our declare/fetch problem).\n> \n> Also, something would have to be done to actually put a timestamp value in every time a\n> row was added or updated. Access actually prevented me from entering a value in my\n> 'datetim' field because it assumed the dbms would fill it in. I guess you could use a\n> trigger to update the timestamp field. OR if we had a pseudo column that qualified, we\n> could use that, however when I tried using a pseudo column, Access barfed on me\n> complaining \"Table TMP%#$$^ already exists\". If I added the pseudo column to the output,\n> the message went away. 
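Byron's suggestion in the thread above (a trigger that stamps the row on every write, giving Access a ROWVER-style column to check) can be sketched in SQL. This is a minimal sketch only, in modern PostgreSQL syntax; PL/pgSQL did not yet exist in the 6.3 era, where a C trigger function would have been needed. The table and column names come from Byron's test1 example, and the function name is invented:

CREATE FUNCTION stamp_row() RETURNS trigger AS '
BEGIN
    -- overwrite the version column on every insert/update
    NEW.datetim := now();
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER test1_stamp
    BEFORE INSERT OR UPDATE ON test1
    FOR EACH ROW EXECUTE PROCEDURE stamp_row();

With this in place, every insert or update refreshes datetim automatically, which is what Access assumes a SQL_ROWVER column will do.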
[ { "msg_contents": "\n> > Could someone for each port tell me what $(CFLAGS_SL) and $(LDFLAGS_SL)\n> > are needed for each of these steps?\n> \n\tAIX 4.2+:\n\n> CFLAGS_SL:\n> LDFLAGS_SL: -G -bexpall -bnoentry -lc\n> \n\tIf functions in other system libraries are used, the corresponding\n\tlibrary should also be linked.\n\n\tAndreas\n", "msg_date": "Fri, 24 Apr 1998 12:28:54 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] linux, bsd, i386-solaris and univel shared libraries." } ]
[ { "msg_contents": "Hello!\n\nI use an external function in postgres and here I try to create an operator for\nlike with regional character support ::\n\n-----< Cut begin\ncreate function mylike(text,text) returns bool\nas '/home/vip/neko/tudakozo/comp.so' language 'c';\nCREATE\nselect mylike('Jeno','Jenő'),mylike('aJenO','Jenő'),mylike('fJeNof','Jenő'),\n mylike('JEnőke','Jenő');\nmylike|mylike|mylike|mylike\n------+------+------+------\nt |t |t |t\n(1 row)\n\nselect 'f:', mylike('asd','fds');\n?column?|mylike\n--------+------\nf: |f\n(1 row)\n\n-- It works\n\ndrop operator ~~ (text,text);\nDROP\ncreate operator ~~ (leftarg=text,rightarg=text,procedure=mylike);\nCREATE\nselect 'this will be true'::text ~~ 'true';\n?column?\n--------\nf\n(1 row)\n-- it seems not ;(\n-----< Cut end\n\nI'm not really a postgres guru, but with postgres 6.2 I could use this\nfunction as an operator too. If that was a bug, this letter is a bug report,\nbut if it isn't please help me! The used postgres's vn: 6.3-2, it runs on a\nlinux (RH4.2;i386 + updates)\n\nsprintf (\"`-''-/\").___..--''\"`-._ Error In\n(\"%|s\", `6_ 6 ) `-. ( ).`-.__.`) Loading Object\n\"Petike\" (_Y_.)' ._ ) `._ `. ``-..-' line:3\n/* Neko */ _..`--'_..-_/ /--'_.' ,' Before /*Neko*/\n ); (il),-'' (li),' ((!.-'\t see: http://lsc.kva.hu\n\n", "msg_date": "Fri, 24 Apr 1998 15:42:08 +0200 (DFT)", "msg_from": "\"Vazsonyi Peter[ke]\" <[email protected]>", "msg_from_op": true, "msg_subject": "create operator problem" }, { "msg_contents": "\"Vazsonyi Peter[ke]\" <[email protected]> writes:\n\n> drop operator ~~ (text,text);\n> DROP\n> create operator ~~ (leftarg=text,rightarg=text,procedure=mylike);\n> CREATE\n> select 'this will be true'::text ~~ 'true';\n> ?column?\n> --------\n> f\n> (1 row)\n> -- it seems not ;(\n\nI got bitten by this, too. There's special handling of ~~ hardcoded\ninto the parser, which expects that it implements the vanilla flavor\nof likeness testing. It has to do with enabling the use of indices to\nspeed up the matching. Bottom line: you can't redefine it.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "24 Apr 1998 17:43:47 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create operator problem" }, { "msg_contents": "> \n> Hello!\n> \n> I use an external function in postgres and here I try to create an operator for\n> like with regional character support ::\n> \n> -----< Cut begin\n> create function mylike(text,text) returns bool\n> as '/home/vip/neko/tudakozo/comp.so' language 'c';\n> CREATE\n> select mylike('Jeno','Jenő'),mylike('aJenO','Jenő'),mylike('fJeNof','Jenő'),\n> mylike('JEnőke','Jenő');\n> mylike|mylike|mylike|mylike\n> ------+------+------+------\n> t |t |t |t\n> (1 row)\n> \n> select 'f:', mylike('asd','fds');\n> ?column?|mylike\n> --------+------\n> f: |f\n> (1 row)\n> \n> -- It works\n> \n> drop operator ~~ (text,text);\n> DROP\n> create operator ~~ (leftarg=text,rightarg=text,procedure=mylike);\n> CREATE\n> select 'this will be true'::text ~~ 'true';\n> ?column?\n> --------\n> f\n> (1 row)\n> -- it seems not ;(\n> -----< Cut end\n> \n> I'm not really a postgres guru, but with postgres 6.2 I could use this\n> function as an operator too. If that was a bug, this letter is a bug report,\n> but if it isn't please help me! The used postgres's vn: 6.3-2, it runs on a\n> linux (RH4.2;i386 + updates)\n\nWe overload ~~ to allow indexing of LIKE operations. Sorry. 
I will add\nsomething to error on redefine of ~~.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 25 Apr 1998 18:07:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create operator problem" } ]
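Since ~~ is hardwired to the LIKE index optimization, the same idea still works under any other operator name. A minimal self-contained sketch: the thread's C function is replaced here by a hypothetical SQL stand-in (simple case-insensitive substring matching) so the example runs as-is, and the name ~~~ is assumed to be unclaimed in your installation:

CREATE FUNCTION mylike(text, text) RETURNS bool
    AS 'SELECT strpos(lower($1), lower($2)) > 0;' LANGUAGE 'sql';

CREATE OPERATOR ~~~ (LEFTARG = text, RIGHTARG = text, PROCEDURE = mylike);

SELECT 'this will be true'::text ~~~ 'true';    -- returns t, via mylike()

Because the parser attaches no special meaning to ~~~, the user-defined function really is invoked, unlike the redefined ~~ in the thread.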
[ { "msg_contents": "Zeev Suraski wrote:\n> \n> At 17:49 24/04/98 -0500, John Fieber wrote:\n> >On Fri, 24 Apr 1998, Alex Belits wrote:\n> >\n> >> > Also, total memory usage is not simply usage of one invocation\n> >> > times the number of invocations. With a decent virtual memory\n> >> > system, all invocations share memory for the text segment which\n> >> > is over a megabyte for postgres. So, subtract (N-1) x 1MB from\n> >> > your total.\n> >>\n> >> Database servers have large amounts of data in their processes, so they\n> >> still will have to allocate it separately, even though they handle the\n> >> same database.\n> >\n> >Certainly. I was just pointing out a potential memory use\n> >miscalculation on the order of 1MB per process (the text size of\n> >postgres), which is not exactly trivial.\n> \n> I might be missing something, but idle processes of an SQL server should\n> take virtually no memory. The code image is shared, the read-only data is\n> shared, and the only memory that's not shared is the memory taken for\n> process specific stuff, mainly memory needed during the processing of a\n> query. That memory will be freed as soon as the query is done, so it\n> doesn't really matter.\n> Again, I don't know if there might be some Postgres specific issues\n> involved, but I've had a MySQL server with 150 threads taking all around\n> 5MB (while processing some queries, too).\n\nIs there a method of getting, under the Linux o.s., the *real* amount of\nmemory that all postgres processes use overall?\nI would then inform you how much memory those processes eat, and if\nit is too big, I'll also inform the PostgreSQL developers about it.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Sat, 25 Apr 1998 08:58:44 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PHP3] BIG, BIG problems with pg_pConnect in PHP3,\n\tPostgreSQL and Apache httpd" }, { "msg_contents": "> Zeev Suraski wrote:\n>> I might be missing something, but idle processes of an SQL server should\n>> take virtually no memory. The code image is shared, the read-only data is\n>> shared, and the only memory that's not shared is the memory taken for\n>> process specific stuff, mainly memory needed during the processing of a\n>> query. That memory will be freed as soon as the query is done, so it\n>> doesn't really matter.\n\nWell, not really. On most versions of Unix, free() will never give\nacquired memory back to the OS, so a process's data space never shrinks.\nTherefore, each backend process will own an amount of memory\ncorresponding to the largest/most complex query it has processed to date.\nAn idle backend won't necessarily have a minimal amount of data space.\n\nOf course, if the process is idle then its data space is likely to get\nswapped out. So you're right that the amount of real memory it is\nusing might be little or none.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Apr 1998 12:46:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PHP3] BIG, BIG problems with pg_pConnect in PHP3,\n\tPostgreSQL and Apache httpd" } ]
[ { "msg_contents": "\nMoved to [email protected] (where the developers hang out)\n\nOn 24 Apr 1998, Bruce Stephens wrote:\n\n> -----\n> The NULL constraint: PostgreSQL only allows NOT NULL (NULL being the\n> default). I altered the backend grammar for this one.\n\n\tPatch?\n\n> Floating point literals: PostgreSQL requires that positive floating\n> point constants start with a digit, but the script has \".10\" and\n> things. Same here, I altered the lexical spec for floats, but it's\n> possible there was a reason for it being the way it was.\n\n\tPatch?\n\n> View syntax: The script has \"CREATE VIEW foo (a, b, c) AS SELECT ...\"\n> which doesn't seem to be acceptable to PostgreSQL. I rephrased these\n> as \"CREATE VIEW foo AS SELECT blah AS a, ...\" and so on.\n> \n> Commands separated by \"go\", not \";\". Don't know whether this would be\n> easy or hard to do, or whether it's important. Global substitution\n> for this.\n> \n> Some types, like \"tinyint\" aren't available, so I just substituted\n> \"int\".\n> \n> Some of the views are only creatable as the PostgreSQL superuser.\n> (This is on the TODO list, I think.)\n> -----\n> \n> I think that was it. Presumably the developers will be making some\n> effort to get this to work (at least most of it: \"go\" vs \";\" is a bit\n> irrelevant, but NULL is important, IMHO); it's surely slightly\n> embarrassing to recommend a book which has an example that won't run!\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 25 Apr 1998 14:49:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Practical SQL Handbook - demo script for postgreSQL" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> Moved to [email protected] (where the developers hang out)\n\n> > The NULL constraint: PostgreSQL only allows NOT NULL (NULL being the\n> > default). I altered the backend grammar for this one.\n> \n> \tPatch?\n\nOK. The patch to gram.y is almost certainly wrong: it's just a hack\nto get NULL acceptable---it should surely go in the same place as the\ncheck for NOT NULL.\n\nThe floating point literal change is probably right, but it may break\nthings (it may well cause more things to be regarded as floats than\nshould be). Again, somebody who knows about this stuff definitely\nneeds to check.\n\nI hope this helps all the same.\n\n\n*** /mnt/1gig2/postgres/make/pgsql/src/backend/parser/gram.y\tFri Apr 17 05:12:56 1998\n--- gram.y\tMon Apr 20 22:59:01 1998\n***************\n*** 735,740 ****\n--- 735,741 ----\n \t\t;\n \n ColQualifier: ColQualList\t\t\t\t\t\t{ $$ = $1; }\n+ \t| NULL_P { $$ = NULL; }\n \t\t\t| /*EMPTY*/\t\t\t\t\t\t\t{ $$ = NULL; }\n \t\t;\n \n*** /mnt/1gig2/postgres/make/pgsql/src/backend/parser/scan.l\tWed Apr 8 07:35:00 1998\n--- scan.l\tMon Apr 20 23:22:16 1998\n***************\n*** 153,159 ****\n xmstop\t\t\t-\n \n integer\t\t\t-?{digit}+\n! 
real\t\t\t-?{digit}+\\.{digit}+([Ee][-+]?{digit}+)?\n \n param\t\t\t\\${integer}\n \n--- 153,159 ----\n xmstop\t\t\t-\n \n integer\t\t\t-?{digit}+\n! real\t\t\t-?{digit}*\\.{digit}+([Ee][-+]?{digit}+)?\n \n param\t\t\t\\${integer}\n \n", "msg_date": "25 Apr 1998 19:43:01 +0100", "msg_from": "Bruce Stephens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Practical SQL Handbook - demo script for postgreSQL" }, { "msg_contents": "> > > The NULL constraint: PostgreSQL only allows NOT NULL (NULL being \n> > > the default). I altered the backend grammar for this one.\n> >\n> > Patch?\n> \n> OK. The patch to gram.y is almost certainly wrong: it's just a hack\n> to get NULL acceptable---it should surely go in the same place as the\n> check for NOT NULL.\n\nYes, and no. Putting the grammar where you did disallows any other\nclauses, such as DEFAULT or CONSTRAINT, in the declaration. Trying to\nput it in the proper place results in shift/reduce conflicts, since it\nis ambiguous with other allowed syntax.\n\nbtw, afaik this is not SQL92 anyway...\n\n> The floating point literal change is probably right, but it may break\n> things (it may well cause more things to be regarded as floats than\n> should be). Again, somebody who knows about this stuff definitely\n> needs to check.\n> \n> I hope this helps all the same.\n\nYes it does! I've got a more general floating patch to apply, but would\nnot have done it without your prompting. Discussion and proposals are\nhow we progress. Good work.\n\nDon't know how or if we want to proceed with a bare \"NULL\" clause.\nShould we bother with a special case of _only_ NULL in a declaration, as\nin Bruce's patch?\n\n - Tom\n", "msg_date": "Mon, 27 Apr 1998 16:39:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Practical SQL Handbook - demo script\n\tfor postgreSQL" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n\n> Don't know how or if we want to proceed with a bare \"NULL\" clause.\n> Should we bother with a special case of _only_ NULL in a\n> declaration, as in Bruce's patch?\n\nMy patch is clearly wrong. The NULL should be parallel to NOT NULL,\nand ought just to be ignored (since NULL is the default). I think\nit's worth doing (as the book says, NULL may not be the default on\nyour system, and anyway, it's always better to specify just for\nclarity).\n\nI think explicitly specifying NULL is probably good practice, so it\nshould be supported.\n", "msg_date": "27 Apr 1998 20:19:11 +0100", "msg_from": "Bruce Stephens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Practical SQL Handbook - demo script\n\tfor postgreSQL" }, { "msg_contents": "> > Don't know how or if we want to proceed with a bare \"NULL\" clause.\n> > Should we bother with a special case of _only_ NULL in a\n> > declaration, as in Bruce's patch?\n> My patch is clearly wrong. The NULL should be parallel to NOT NULL,\n> and ought just to be ignored (since NULL is the default). I think\n> it's worth doing (as the book says, NULL may not be the default on\n> your system, and anyway, it's always better to specify just for\n> clarity).\n> I think explicitly specifying NULL is probably good practice, so it\n> should be supported.\n\nMaybe (SQL92 is full of inconsistent/non-symmetric features), but you\nwill need to figure out how to do it without shift/reduce conflicts in\nthe grammar. 
The fact that they are there means that either it is\nimpossible to unambiguously parse the allowed syntax, or that the\ngrammar definition in the yacc language needs to be restructured a bit.\nIt isn't obvious to me how to restructure for this case; I've fixed this\nkind of problem in other parts of the grammar and the tricks I used\nthere don't look usable here.\n\nI know it isn't helpful to always fall back on \"big philosophy\" when you\nare proposing a small fix/improvement, but we should think about how\nmuch clutter we want to put in to the grammar. The \"bare NULL\" is\napparently _not_ SQL92 (it does not appear in the BNF definitions in my\nSQL book by Date).\n\nI'd like us to think about limiting the extensions to SQL92 in favor of\nextending the grammar toward Postgres' OR features. Just a thought...\n\n - Tom\n", "msg_date": "Tue, 28 Apr 1998 02:22:42 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Practical SQL Handbook - demo script\n\tfor postgreSQL" }, { "msg_contents": "> > > Don't know how or if we want to proceed with a bare \"NULL\" clause.\n> > > Should we bother with a special case of _only_ NULL in a declaration,\n> > > as in Bruce's patch?\n> > My patch is clearly wrong. The NULL should be parallel to NOT NULL,\n> > and ought just to be ignored (since NULL is the default). I think\n> > it's worth doing (as the book says, NULL may not be the default on\n> > your system, and anyway, it's always better to specify just for\n> > clarity).\n> > I think explicitly specifying NULL is probably good practice, so it\n> > should be supported.\n> \n> Maybe (SQL92 is full of inconsistent/non-symmetric features), but you\n> will need to figure out how to do it without shift/reduce conflicts in\n> the grammar. The fact that they are there means that either it is\n> impossible to unambiguously parse the allowed syntax, or that the\n> grammar definition in the yacc language needs to be restructured a bit.\n> It isn't obvious to me how to restructure for this case; I've fixed this\n> kind of problem in other parts of the grammar and the tricks I used\n> there don't look usable here.\n> \n> I know it isn't helpful to always fall back on \"big philosophy\" when you\n> are proposing a small fix/improvement, but we should think about how\n> much clutter we want to put in to the grammar. The \"bare NULL\" is\n> apparently _not_ SQL92 (it does not appear in the BNF definitions in my\n> SQL book by Date).\n> \n> I'd like us to think about limiting the extensions to SQL92 in favor of\n> extending the grammar toward Postgres' OR features. Just a thought...\n> \n> - Tom\n\nI strongly agree. Particularly about not whacking at the grammar. Even\n\"standard\" SQL is quite confusing when writing queries. What is being asked\nfor is not part of the standard, and more importantly does not add any\ncapability to the system. Any extensions need to be very carefully thought\nout, and even then avoided unless there is a _compelling_ reason for them.\n\nThe test I try to use is \"could I explain this feature over the phone\nand provide a consistent 'story' about why it works the way it does\"? 
So\nthat the listener can cope with all the exceptions, limitations, side\neffects, errors etc just by relying on the theory from the explanation?\n\nGenerally if a proposed extension fails this test it turns out to be either\nunimportant, or conceptually flawed.\n\nRemember, the standard already has enough ad-hack semantics and syntactic\nsugar, we certainly don't need to add more.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Mon, 27 Apr 1998 22:49:33 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Practical SQL Handbook - demo script\n\tfor postgreSQL" } ]
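For reference, the declarations under discussion look like this (table and column names are illustrative). Everything below except the bare NULL on the last column is already accepted; the bare NULL is the non-SQL92 extension being debated, and it only restates the default, since columns are nullable unless declared otherwise:

CREATE TABLE demo (
    id     int    NOT NULL,      -- accepted: explicit NOT NULL constraint
    price  float8 DEFAULT 0.0,   -- accepted: DEFAULT clause
    notes  text   NULL           -- the contested bare NULL, a no-op
);

The difficulty is purely grammatical: mixing NULL into the same clause list as DEFAULT and CONSTRAINT produces the shift/reduce conflicts Thomas describes.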
[ { "msg_contents": "Ok, I have finally gotten all of the defines for Dec/Alpha and\nLinux/Alpha sorted out as Marc asked. There is no longer any need for\n'-Dalpha' or '-Dlinuxalpha' in either the Dec/Alpha or the Linux/Alpha\ntemplate files (./src/template/{alpha,linuxalpha}). I have replaced every\ninstance of 'alpha' or '__alpha__' with '__alpha', as that appears to be\nthe common symbol between C compilers on both operating systems (RH4.2 &\nDecUnix 4.0b) for alpha.\n\tAttached is the patch against the April 25 snapshot. I have\ncompiled and tested it on my UDB, and it does as well as straight 6.3.2. I\nalso compiled the patched version on my Pentium 100, and verified it does\nnot break anything there (and therefore should not on any other platform,\nI hope...). I don't have access to a Dec/Alpha box, so would someone\nplease test it on such a machine? I don't see any problems occurring, but\nit is best to check and make sure. If there are any problems, send a\ndetailed description to me and I will get it sorted out. Of course,\npatches are fine too. :) \n\tWhile this patch doesn't improve the stability of pgsql on\nLinux/Alpha, at least it paves the road, making future improvements easier\nand more sane. In the future, I recommend anything that needs to be\n'#ifdef'ed as alpha-specific use the symbol '__alpha' and then everything\nshould work automatically, without messing with template files, CFLAGS\nlines, or defines!\n\tThat's about it for now, talk to you all later!\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------", "msg_date": "Sat, 25 Apr 1998 15:45:46 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Patch to remove -Dalpha for Alphas..." } ]
[ { "msg_contents": "\nI have re-applied Darren King's char2-16 removal code, and have updated\nversion.h to 6.4. It is official.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 00:01:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "PG-version" } ]
[ { "msg_contents": "\ni've a few questions for my ssl patch:\n\nthe way i've implemented ssl is by having a structure called PGcomm\nwhich replaces the pair of Pfdebug/Pfin/Pfout. this structure\ncontains those values as well as the SSL state stuff (context * and\nconnection *). All functions which use(d) Pfin/Pfout/Pfdebug, either\nas an argument or an extern variable, were modified to use this\nstructure. Does this seem appropriate?\n\nIs there any value to having an OO like approach to the fe/be\ncommunication API. So that other transport mechanisms/protocols can\nbe loaded in at will. Something other than the kludgish way I've got\n#ifdef POSTGRESQL_SSL.\n\nWould it be good to make positive (IMHO of course) changes to postgres\nthat make it easier for things like this to be done? It would also\nallow my patch to be a lot cleaner, which is important. It would also\neliminate the need for me to patch every fwrite/fread that gets added\nto the code.\n\nAlso, why does it exec() instead of just setting some variables and\ncalling the function that ends up getting run anyway? That would\neliminate the SSL data from getting destroyed and keeping it from\nhaving to renegotiate the SSL connection.\n\n--brett\nhttp://www.chicken.org/pgsql/ssl/\n", "msg_date": "Sun, 26 Apr 1998 00:16:42 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "ssl implementation questions" } ]
[ { "msg_contents": "\nAre these functions used at all? A M-x tags-search didn't find them.\nI'm not sure how they work over SSL (if at all).\n", "msg_date": "Sun, 26 Apr 1998 01:08:09 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "pq_sendoob/pq_recvoob" }, { "msg_contents": "> \n> \n> Are these functions used at all? A M-x tags-search didn't find them.\n> I'm not sure how they work over SSL (if at all).\n> \n> \n\nNo, not used. I think we thought they were passed unencrypted by SSL?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 10:13:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pq_sendoob/pq_recvoob" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Are these functions used at all? A M-x tags-search didn't find them.\n>> I'm not sure how they work over SSL (if at all).\n\n> No, not used. I think we thought they were passed unencrypted by SSL?\n\nWe are thinking of adding a \"please cancel query in progress\" function\nto the FE/BE protocol, whereby the frontend could attempt to cancel a\npreviously issued query. The cancel request would be sent from FE to BE\nby an OOB message, so that the BE could detect it with a signal handler.\nThis would mean that cancellation would not work over an SSL link.\nI'm willing to live with that, myself.\n\nThere are no cases where an OOB message is sent from BE to FE, and I\nthink we concluded that it would be too dangerous to try to do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Apr 1998 13:03:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pq_sendoob/pq_recvoob " } ]
[ { "msg_contents": ">> The JDBC spec allows for multiple ResultSet's to be returned from a query,\n>> and our driver handles this already.\n>\n>Oh. That prevents us from changing the backend to ignore returning more\n>than one result for multiple queries in a PQexec.\n\nI think this is also a leftover from postgres 4.2, where one query could return multiple\nresult sets (with different columns).\n\n>Perhaps we need a new\n>return query protocol character like 'J' to denote query returns that\n>are not the LAST return, so libpq can throw them away, and jdbc can\n>process them as normal, but also figure out when it gets the last one.\n\nThis might be hard to do with a select rule that triggers other result sets.\n(Is this still possible ?)\n\nAndreas \n\n\n\n\n", "msg_date": "Mon, 27 Apr 1998 10:41:32 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" }, { "msg_contents": "> \n> >> The JDBC spec allows for multiple ResultSet's to be returned from a query,\n> >> and our driver handles this already.\n> >\n> >Oh. That prevents us from changing the backend to ignore returning more\n> >than one result for multiple queries in a PQexec.\n> \n> I think this is also a leftover from postgres 4.2, where one query could return multiple\n> result sets (with different columns).\n> \n> >Perhaps we need a new\n> >return query protocol character like 'J' to denote query returns that\n> >are not the LAST return, so libpq can throw them away, and jdbc can\n> >process them as normal, but also figure out when it gets the last one.\n> \n> This might be hard to do with a select rule that triggers other result sets.\n> (Is this still possible ?)\n\nWe are going to use a separate end-of-query-results packet instead.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 09:27:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] retrieving varchar size" } ]
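The multiple-result situation being discussed arises whenever several statements travel in one query string, for example a single PQexec call carrying:

SELECT 1 AS first_result;
SELECT 2 AS second_result;

Historically libpq reported only the final result of such a string, while the JDBC driver surfaces each one; a separate end-of-query-results packet lets any interface know when the sequence is complete.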
[ { "msg_contents": "Herouth Maoz wrote:\n> \n> At 7:59 +0300 on 26/4/98, Jan Vicherek wrote:\n> \n> > Hello, has anybody heard anything on the subject of using PG as backend\n> > for LDAP ? *Any* pointers are appreciated.\n> \n> While we're at it, anything on the *reverse* question? That is, using LDAP\n> as the user/password and authentication agent for Postgres? Could be\n> *extremely* useful in our university, where all our students have LDAP\n> entries, which we use in web access control, and email access. It would be\n> great if we could grant permissions to LDAP groups.\n\nI saw something about a LDAP/PAM module.\nSupporting PAM-auth in pgsql would give us LDAP and much more.\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Mon, 27 Apr 1998 11:11:37 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] PostgreSQL and LDAP ?" }, { "msg_contents": "On Mon, 27 Apr 1998, Göran Thyni wrote:\n> I saw something about a LDAP/PAM module.\n> Supporting PAM-auth in pgsql would give us LDAP and much more.\n\nPAM is rather evil. The fact that one has to add 'support' for it should\nbe enough of an indicator.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Mon, 27 Apr 1998 09:41:27 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] PostgreSQL and LDAP ?" } ]
[ { "msg_contents": "I cannot run initdb anymore.\n\ninitdb: using /usr/local/pgsql/lib/local1_template1.bki.source as input to create the\ntemplate database.\ninitdb: using /usr/local/pgsql/lib/global1.bki.source as input to create the\nglobal classes.\ninitdb: using /usr/local/pgsql/lib/pg_hba.conf.sample as the host-based\nauthentication control file.\n\nWe are initializing the database system with username postgres (uid=31).\nThis user will own all the files and must also own the server process.\n\ninitdb: creating template database in /usr/local/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/pgsql/data/base/template1\n\nUsing -d didn't show me much. I'm using the source I downloaded a few hours\nago. And this time I did a make clean; make all.\n\nAlso I'd like to know if the operator \"->\" is in use for something. I'd like\nto use it for C variables to be able to do something like this:\n\nselect name into :structpointer->name\n\nIf it is used though I have to disable this feature.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 27 Apr 1998 13:56:55 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "initdb problem and operator question" }, { "msg_contents": "> I'd like to know if the operator \"->\" is in use for something. I'd \n> like to use it for C variables to be able to do something like this:\n> \n> select name into :structpointer->name\n> \n> If it is used though I have to disable this feature.\n\nNot currently used. afaik this syntax wasn't allowed in the Ingres\nembedded SQL. Do other ones allow it? Perhaps you could implement it in\nyour scanner as a special case? That way, extra spaces could be used to\nallow \"->\" to continue to be a potential Postgres operator...\n\n - Tom\n", "msg_date": "Tue, 28 Apr 1998 02:58:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] initdb problem and operator question" }, { "msg_contents": "Thomas G. Lockhart writes:\n> Not currently used. afaik this syntax wasn't allowed in the Ingres\n> embedded SQL. Do other ones allow it? Perhaps you could implement it in\n\nNot that I know of. But I like the possibility to allow it. And why\nshouldn't we be better than the commercial ones? :-)\n\n> your scanner as a special case? That way, extra spaces could be used to\n> allow \"->\" to continue to be a potential Postgres operator...\n\nYou mean: :a->b means the variable and :a -> b means the operator? Sounds\ngood to me.\n\nI'll check that.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 28 Apr 1998 14:44:33 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] initdb problem and operator question" } ]
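To see why the spacing rule works, note that -> really can be defined as an ordinary Postgres operator, so only the unspaced form is safe for the embedded-SQL scanner to claim as C member access. A hypothetical sketch (function name and operand types invented for illustration):

CREATE FUNCTION int4_plus(int4, int4) RETURNS int4
    AS 'SELECT $1 + $2;' LANGUAGE 'sql';

CREATE OPERATOR -> (LEFTARG = int4, RIGHTARG = int4, PROCEDURE = int4_plus);

SELECT 2 -> 3;    -- spaced: parsed as the SQL operator, yielding 5

In embedded source, :structpointer->name (no spaces) can then still be scanned as a C expression, exactly the distinction Michael and Thomas settle on above.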
[ { "msg_contents": "> Yep, it's a bug. Not sure about the cause, but will look into it in\n> the next few weeks.\n> \nThanks Bruce, I was beginning to feel as neglected as a cross-eyed\nfoster child who had contracted leprosy. But I realized you guys were\nbusy with the 6.3.2 release/freeze. I wish I could help with the\ndevelopment.\n\n\t-DEJ\n\n> > \n> > Just thought I'd try the cluster command. What am I doing wrong?\n> > RedHat 5.0\n> > 6.3.1 rpm's\n> > \n> > [djackson@www]$ psql template1\n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > \n> > type \\? for help on slash commands\n> > type \\q to quit\n> > type \\g or terminate with semicolon to execute query\n> > You are currently connected to the database: template1\n> > \n> > template1=> \\d\n> > Couldn't find any tables, sequences or indices!\n> > template1=> \\l\n> > datname |datdba|datpath \n> > ---------+------+---------\n> > template1| 100|template1\n> > postgres | 100|postgres \n> > (2 rows)\n> > \n> > template1=> create database test;\n> > CREATEDB\n> > template1=> \\connect test \n> > connecting to new database: test\n> > test=> create table list (k int2);\n> > CREATE\n> > test=> insert into list values (1);\n> > INSERT 33769 1\n> > test=> insert into list select max(k)+1;\n> > .\n> > .\n> > .\n> > test=> select * from list;\n> > k\n> > -\n> > 1\n> > 2\n> > 3\n> > 4\n> > 5\n> > 6\n> > (6 rows)\n> > \n> > test=> create table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\n> > CREATE\n> > test=> create UNIQUE INDEX l1 ON list2(k1, k2);\n> > CREATE\n> > test=> create UNIQUE INDEX l2 ON list2(k2, k1); \n> > CREATE\n> > test=> insert into list2 select l1.k, l2.k from list as l1, list as l2;\n> > INSERT 0 36\n> > test=> select * from list2;\n> > k1|k2\n> > --+--\n> > 1| 1\n> > 2| 1\n> > 3| 1\n> > .\n> > .\n> > .\n> > 4| 6\n> > 5| 6\n> > 6| 6\n> > (36 rows)\n> > \n> > test=> vacuum verbose analyze list2;\n> > NOTICE: Rel list2: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup\n> > 36: Vac 0, Crash 0, UnUsed 0, MinLen 44, MaxLen 44; Re-using:\n> > Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> > NOTICE: Ind l2: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> > NOTICE: Ind l1: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> > VACUUM\n> > test=> cluster l1 on list2;\n> > ERROR: Cannot create unique index. Table contains non-unique values\n> > test=> cluster l2 on list2; \n> > PQexec() -- Request was sent to backend, but backend closed the channel\n> > before responding.\n> > This probably means the backend terminated abnormally before or\n> > while processing the request.\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Mon, 27 Apr 1998 11:59:09 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Re: [HACKERS] Bug or Short between my brain and the\n\tkeyboard?" } ]
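Until CLUSTER itself is fixed, the usual workaround is to rewrite the table in index order by hand. A sketch against the list2 table from the report above (the temporary table name is arbitrary, and the indexes must be recreated afterward):

SELECT k1, k2 INTO TABLE list2_sorted FROM list2 ORDER BY k2, k1;
DROP TABLE list2;
ALTER TABLE list2_sorted RENAME TO list2;
CREATE UNIQUE INDEX l2 ON list2 (k2, k1);
CREATE UNIQUE INDEX l1 ON list2 (k1, k2);

This reproduces what cluster l2 on list2 attempts: the heap ends up physically ordered by the index keys.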
[ { "msg_contents": "> > > > The NULL constraint: PostgreSQL only allows NOT NULL (NULL being \n> > > > the default). I altered the backend grammar for this one.\n> > >\n> > > Patch?\n> > \n> > OK. The patch to gram.y is almost certainly wrong: it's just a hack\n> > to get NULL acceptable---it should surely go in the same place as the\n> > check for NOT NULL.\n> \n> Yes, and no. Putting the grammar where you did disallows any other\n> clauses, such as DEFAULT or CONSTRAINT, in the declaration. Trying to\n> put it in the proper place results in shift/reduce conflicts, since it\n> is ambiguous with other allowed syntax.\n> \n> btw, afaik this is not SQL92 anyway...\n> \n> > The floating point literal change is probably right, but it may break\n> > things (it may well cause more things to be regarded as floats than\n> > should be). Again, somebody who knows about this stuff definitely\n> > needs to check.\n> > \n> > I hope this helps all the same.\n> \n> Yes it does! I've got a more general floating patch to apply, but would\n> not have done it without your prompting. Discussion and proposals are\n> how we progress. Good work.\n> \n> Don't know how or if we want to proceed with a bare \"NULL\" clause.\n> Should we bother with a special case of _only_ NULL in a declaration, as\n> in Bruce's patch?\nContinuing with the discussion/proposal theme: I vote yes for the bare\nNULL if it can be done with a minimum of hassle. It would at the very\nleast improve compatibility with SYBASE and MS SQL Server. I know that\nthese aren't goals, but it doesn't hurt to have it happen. Could\nsomeone check the Create table syntax and see if it's SQL92 (I have a\nsuspicion that it is).\n\nI'm not sure about the 'shift/reduce', but couldn't you interpret the\nNULL not preceded by NOT in a CREATE TABLE/ALTER TABLE as an empty\nstring? I'm assuming here that the NOT NULL is treated as one token in\nthe grammar/parser.\n\n\t-DEJ\n", "msg_date": "Mon, 27 Apr 1998 12:34:53 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [QUESTIONS] Practical SQL Handbook - demo script\n\tfor postgreSQL" } ]
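DeJuan's point that a bare NULL adds no capability can be seen directly: nullability is already the default, so the keyword would merely restate it. A small sketch with an invented table:

CREATE TABLE parts (
    a int2,            -- nullable by default: exactly what 'a int2 NULL' would declare
    b int2 NOT NULL
);

INSERT INTO parts (b) VALUES (1);    -- succeeds; a is stored as NULL
INSERT INTO parts (a) VALUES (1);    -- fails: b violates its NOT NULL constraint

Accepting the bare NULL would therefore be purely a compatibility convenience for scripts written against Sybase or MS SQL Server.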
[ { "msg_contents": "Hello,\n\nI have posted a new version of the ODBC driver at our web site.\n(http://www.insightdist.com/psqlodbc). We are also now including a\nversion number (this one is 06.30.0010). You can click on this link and\nsee what changes this version includes. Also, you can look with the\nODBC Administrator under \"ODBC Drivers\" and get the current version for\ncorrespondence and so forth.\n\n1. This new version fixes problems with execution time parameters\n(SQLParamData, SQLPutData) for text fields where parameters were being\ndropped and a '?' was appearing in the query.\n\n2. Also, functionality has been added to return information about UNIQUE\nINDEXES. This was never implemented in the old driver (it assumed\nPostgres couldn't have any). This should allow Access 2.0 users to be\nallowed to update records. Also, it should allow Visual Basic to do\nupdates.\n\n--------- HACKERS INVITED TO PLEASE READ THIS SECTION ---------\n\nOne downside about UNIQUE INDEXES however, is how Microsoft Access\nhandles them when you open the table in datasheet view. Whether you\nspecify the unique index at link time, or the driver provides the info,\nAccess will try to use queries which show up a problem with the backend:\n\nHere is an example of an Access query with a unique index on a single\nfield:\n\nSELECT balance_id,company_id, balance_date, is_audited,comment,\nbalance_type, balance_filename FROM balance WHERE balance_id = 1 OR\nbalance_id = 2 OR balance_id = 3 OR balance_id = 4 OR balance_id = 5 OR\nbalance_id = 6 OR balance_id = 7 OR balance_id = 8 OR balance_id = 9 OR\nbalance_id = 10\n\nThe more keyparts you have, the worse the problem is (2 keyparts):\n\nSELECT balance_id,company_id, balance_date, is_audited,comment,\nbalance_type, balance_filename FROM balance WHERE balance_id = 1 AND\ncompany_id=1 OR balance_id = 1 AND company_id=2 OR balance_id = 1 AND\ncompany_id=3 OR balance_id = 2 AND company_id=1 OR balance_id = 2 AND\ncompany_id=2 OR balance_id = 2 AND company_id=3 OR balance_id = 3 AND\ncompany_id=1 OR balance_id = 3 AND company_id=2 OR balance_id = 3 AND\ncompany_id=3 OR balance_id = 4 AND company_id=1\n\nAny more than 2 keyparts results in crashing the backend with the\nmessage\n\"palloc failure: memory exhausted\". Even at 2 keyparts, performance\nsuffers greatly.\n\nIn both of the above examples, Access is trying to retrieve 10 records\nusing a \"Prepared\" statement (prepared statements are \"faked\" in the\ndriver, since they are not implemented in the backend) with the unique\nindex of the table.\n\nWe have known about this problem and have discussed it with the hackers\nlist in the past. It is on the todo list under \"Performance\" and it\nappears as\n\"Allow indexes to be used with OR clauses(Vadim) \". I am not sure of\nthe priority of this fix, however, or how difficult it would be to\nimplement it.\n\nThe reason we are mentioning this with renewed vigor, is that in the\npast, with the old driver, Access 7.0 and Access 97 would ask the user\nwhat they wanted the unique index to be. You could tell it whatever you\nwanted, and even not specify any unique index. 
Now, with this new\nunique index fix, you will not have a choice as to whether you want to\nuse unique indexes or not, which, depending on how many fields are being\nindexed on, may crash the backend.\n\nOf course, if you are not using \"unique\" indexes on your table, Access\n7.0 and 97 will ask you at link time, as before.\n\nDoes anyone have any knowledge of the above problem and/or the priority\nof the fix that Vadim is mentioned on?\n\nSorry for the long length of this letter.\n\nRegards,\n\nByron\n\n", "msg_date": "Mon, 27 Apr 1998 15:05:11 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "New Driver and Unique Indexes" }, { "msg_contents": "Byron Nikolaidis wrote:\n> \n> SELECT balance_id,company_id, balance_date, is_audited,comment,\n> balance_type, balance_filename FROM balance WHERE balance_id = 1 AND\n> company_id=1 OR balance_id = 1 AND company_id=2 OR balance_id = 1 AND\n> company_id=3 OR balance_id = 2 AND company_id=1 OR balance_id = 2 AND\n> company_id=2 OR balance_id = 2 AND company_id=3 OR balance_id = 3 AND\n> company_id=1 OR balance_id = 3 AND company_id=2 OR balance_id = 3 AND\n> company_id=3 OR balance_id = 4 AND company_id=1\n> \n> Any more than 2 keyparts results in crashing the backend with the\n> message\n> \"palloc failure: memory exhausted\". Even at 2 keyparts, performance\n> suffers greatly.\n\nThis is a known problem of the canonificator in the optimizer. This query will\ncrash the backend without any indices too.\nWe talked about this in the 6.3-beta period.\nNo fix currently; I don't know when there will be one.\n\nVadim\n", "msg_date": "Tue, 28 Apr 1998 09:57:31 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Driver and Unique Indexes" }, { "msg_contents": "Sorry for koi-8 charset :)\n\nVadim B. Mikheev wrote:\n> \n> Byron Nikolaidis wrote:\n> >\n> > SELECT balance_id,company_id, balance_date, is_audited,comment,\n> > balance_type, balance_filename FROM balance WHERE balance_id = 1 AND\n> > company_id=1 OR balance_id = 1 AND company_id=2 OR balance_id = 1 AND\n> > company_id=3 OR balance_id = 2 AND company_id=1 OR balance_id = 2 AND\n> > company_id=2 OR balance_id = 2 AND company_id=3 OR balance_id = 3 AND\n> > company_id=1 OR balance_id = 3 AND company_id=2 OR balance_id = 3 AND\n> > company_id=3 OR balance_id = 4 AND company_id=1\n> >\n> > Any more than 2 keyparts results in crashing the backend with the\n> > message\n> > \"palloc failure: memory exhausted\". Even at 2 keyparts, performance\n> > suffers greatly.\n> \n> This is a known problem of the canonificator in the optimizer. This query will\n> crash the backend without any indices too.\n> We talked about this in the 6.3-beta period.\n> No fix currently; I don't know when there will be one.\n> \n> Vadim\n", "msg_date": "Tue, 28 Apr 1998 10:04:48 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] New Driver and Unique Indexes" }, { "msg_contents": "> One downside about UNIQUE INDEXES however, is how Microsoft Access\n> handles them when you open the table in datasheet view. 
Whether you\n> specify the unique index at link time, or the driver provides the info,\n> Access will try to use queries which show up a problem with the backend:\n> \n> Here is an example of an Access query with a unique index on a single\n> field:\n> \n> SELECT balance_id,company_id, balance_date, is_audited,comment,\n> balance_type, balance_filename FROM balance WHERE balance_id = 1 OR\n> balance_id = 2 OR balance_id = 3 OR balance_id = 4 OR balance_id = 5 OR\n> balance_id = 6 OR balance_id = 7 OR balance_id = 8 OR balance_id = 9 OR\n> balance_id = 10\n> \n> The more keyparts you have, the worse the problem is (2 keyparts):\n> \n> SELECT balance_id,company_id, balance_date, is_audited,comment,\n> balance_type, balance_filename FROM balance WHERE balance_id = 1 AND\n> company_id=1 OR balance_id = 1 AND company_id=2 OR balance_id = 1 AND\n> company_id=3 OR balance_id = 2 AND company_id=1 OR balance_id = 2 AND\n> company_id=2 OR balance_id = 2 AND company_id=3 OR balance_id = 3 AND\n> company_id=1 OR balance_id = 3 AND company_id=2 OR balance_id = 3 AND\n> company_id=3 OR balance_id = 4 AND company_id=1\n\nOK, I have the dope on this one. The palloc failure is not the OR\nindexing, but rather the item:\n\n\t* Fix memory exhaustion when using many OR's\n\nThe bug report that prompted this is attached. As you can see, it was\nalso prompted by MS-Access. The problem is that the backend uses the\ntext-book method of processing OR's by converting the WHERE clause to\nConjunctive-Normal-Form (CNF), and this exponentially explodes the number\nof tests where there are many OR clauses.\n\nWe are not sure how to fix it yet. Vadim has improved the handling of\nthis in 6.3.*, but it still is not perfect and needs a solution. \nObviously other databases are not CNF'ifing the queries so there must be\na solution. David?\n\n---------------------------------------------------------------------------\n\nDate: Mon, 12 Jan 1998 15:53:18 -0500\nFrom: David Hartwig <[email protected]>\nTo: Bruce Momjian <[email protected]>\nSubject: Re: [BUGS] General Bug Report: palloc fails with lots of ANDs and ORs\n\nBruce,\n\nI did some homework. Here is what I have. The default max data segment size on our (AIX 4.1.4) box is around 130000 kbytes.\n\nI put together a query which put me just past the threshold of the palloc \"out of memory error\". It is as follows:\n\ncreate table outlet (\n number int,\n name varchar(30),\n ...\n);\n\ncreate unique index outlet_key on outlet using btree (number);\n\nselect count(*) from outlet\nwhere\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1) or\n (number = 1 and number = 1 and number = 1);\n\nNot pretty but it makes the point. Take out two OR clauses and the query works fine (but a bit slow).\n\nThe above query is all it takes to use up all 130000 Kbytes of memory. And, since the query takes a long time to finally fail, I was able to\nobserve the memory consumption.\n\nI extended the max data segment to 300000. And tried again. 
I could observe the memory consumption up to about 280000 when the system\nsuddenly got sick. I was getting all kinds of messages like \"can't fork\"; bad stuff. The system did finally recover on its own. I am not\nsure what happened there. I know that ulimit puts us right around the physical memory limits of our system.\n\nUsing 300 meg for the above query seems like a bit of a problem. It is difficult to imagine where all that memory is being used. I will\nresearch the problem further if you need more information.\n\nBruce Momjian wrote:\n\n> Try changing your OS default memory size. Unsure how to do this under\n> AIX.\n>\n> >\n> >\n> > ============================================================================\n> > POSTGRESQL BUG REPORT TEMPLATE\n> > ============================================================================\n> >\n> >\n> > Your name : David Hartwig\n> > Your email address : [email protected]\n> >\n> > Category : runtime: back-end: SQL\n> > Severity : serious\n> >\n> > Summary: palloc fails with lots of ANDs and ORs\n> >\n> > System Configuration\n> > --------------------\n> > Operating System : AIX 4.1\n> >\n> > PostgreSQL version : 6.2\n> >\n> > Compiler used : native CC\n> >\n> > Hardware:\n> > ---------\n> > RS 6000\n> >\n> > Versions of other tools:\n> > ------------------------\n> > NA\n> >\n> > --------------------------------------------------------------------------\n> >\n> > Problem Description:\n> > --------------------\n> > The following is a mail message describing the problem on the PostODBC mailing list:\n> >\n> >\n> > I have run across this also. We traced it down to a failure in the PostgreSQL server. This occurs under the following conditions.\n> >\n> > 1. MS Access\n> > 2. Specify a multi-part key in the link time setup with postgresql\n> > 3. Click on table view.\n> >\n> > What happens is MS Access takes the following steps. First it selects all possible key values for the table being viewed. I\n> > suspect it maps the key values to the relative row position in the display. Then it uses the mapping to generate future queries based\n> > on the mapping and the rows showing on the screen. The queries take the following form:\n> >\n> > SELECT keypart1, keypart2, keypart3, col4, col5, col6 ... FROM example_table\n> > WHERE\n> > (keypart1 = row1keypartval1 AND keypart2 = row1keypartval2 AND keypart3 = row1keypartval3) OR\n> > (keypart1 = row2keypartval1 AND keypart2 = row2keypartval2 AND keypart3 = row2keypartval3) OR\n> > .\n> > . -- 28 lines of this stuff. Why 28... Why not 28\n> > .\n> > (keypart1 = row27keypartval1 AND keypart2 = row27keypartval2 AND keypart3 = row27keypartval3) OR\n> > (keypart1 = row28keypartval1 AND keypart2 = row28keypartval2 AND keypart3 = row28keypartval3);\n> >\n> >\n> > The PostgreSQL server chokes on this statement claiming it is out of memory. (palloc) In this example I used a three part key. I\n> > do not recall if a three part key is enough to trash the backend. It has been a while. I have tried sending these kinds of statements\n> > directly through the psql monitor and get the same result.\n> >\n> >\n> > --------------------------------------------------------------------------\n> >\n> > Test Case:\n> > ----------\n> > select c1, c2, c3, c4, c5 ...
from example_table\n> > where\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > (c1 = something and c2 = something and c3 = something and c4 = something);\n> >\n> >\n> > --------------------------------------------------------------------------\n> >\n> > Solution:\n> > ---------\n> >\n> >\n> > --------------------------------------------------------------------------\n> >\n> >\n> >\n>\n> --\n> Bruce Momjian\n> [email protected]\n\n\n\n--------------20C7AC27E8BCA117B23354BE\nContent-Type: text/x-vcard; charset=us-ascii; name=\"vcard.vcf\"\nContent-Transfer-Encoding: 7bit\nContent-Description: Card for David Hartwig\nContent-Disposition: attachment; filename=\"vcard.vcf\"\n\nbegin: vcard\nfn: David Hartwig\nn: Hartwig;David\norg: Insight Distribution Systems\nadr: 222 Shilling Circle;;;Hunt Valley ;MD;21030;USA\nemail;internet: [email protected]\ntitle: Manager Research & Development\ntel;work: (410)403-2308\nx-mozilla-cpt: ;0\nx-mozilla-html: TRUE\nversion: 2.1\nend: vcard\n\n\n--------------20C7AC27E8BCA117B23354BE--\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 22:42:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Driver and Unique Indexes" }, { "msg_contents": "Byron Nikolaidis wrote:\n> \n> Hello,\n> \n> I have posted a new version of the ODBC driver at our web site.\n> (http://www.insightdist.com/psqlodbc). We are also now including a\n> version number (this one is 06.30.0010). You can click on this link and\n> see what changes this version includes. Also, you can look with the\n> ODBC Administrator under \"ODBC Drivers\" and get the current version for\n> correspondence and so forth.\n> \n> 1. This new version fixes problems with execution time parameters\n> (SQLParamData, SQLPutData) for text fields where parameters were being\n> dropped and a '?' was appearing in the query.\n\nGood! And thanks ;)!\n \n> \n> --------- HACKERS INVITED TO PLEASE READ THIS SECTION ---------\n> \n> One downside about UNIQUE INDEXES however, is how Microsoft Access\n> handles them when you open the table in datasheet view. Whether you\n> specify the unique index at link time, or the driver provides the info,\n> Access will try to use queries which show up a problem with the backend:\n> \n> Here is an example of an Access query with a unique index on a single\n> field:\n> \n> SELECT balance_id,company_id, balance_date, is_audited,comment,\n> balance_type, balance_filename FROM balance WHERE balance_id = 1 OR\n> balance_id = 2 OR balance_id = 3 OR balance_id = 4 OR balance_id = 5 OR\n> balance_id = 6 OR balance_id = 7 OR balance_id = 8 OR balance_id = 9 OR\n> balance_id = 10\n> \n> The more keyparts you have, the worse the problem is (2 keyparts):\n> \n> SELECT balance_id,company_id, balance_date, is_audited,comment,\n> balance_type, balance_filename FROM balance WHERE balance_id = 1 AND\n> company_id=1 OR balance_id = 1 AND company_id=2 OR balance_id = 1 AND\n> company_id=3 OR balance_id = 2 AND company_id=1 OR balance_id = 2 AND\n> company_id=2 OR balance_id = 2 AND company_id=3 OR balance_id = 3 AND\n> company_id=1 OR balance_id = 3 AND company_id=2 OR balance_id = 3 AND\n> company_id=3 OR balance_id = 4 AND company_id=1\n\nas a quick (?) fix, can't this kind of query be identified in the driver\n(now)\nor in the backend(later) and rewritten to a union query like this\n\nSELECT balance_id,company_id, balance_date, is_audited,comment,\nbalance_type, balance_filename FROM balance \nWHERE balance_id = 1 AND company_id=1 \nunion\nSELECT balance_id,company_id, balance_date, is_audited,comment,\nbalance_type, balance_filename FROM balance\nWHERE balance_id = 1 AND company_id=2\nunion\n.\n.\n.\nunion\nSELECT balance_id,company_id, balance_date, is_audited,comment,\nbalance_type, balance_filename FROM balance \nWHERE balance_id = 4 AND company_id=1\n;\n\nOr is the optimiser too smart and rewrites it back to the original form\n?\n\nonce the identification phase is done in the backend, it should be \neasy to check that all the fields ORed together are from an unique \nindex and do an index scan instead of a rewrite to union.\n\n> Any more than 2 keyparts, results in crashing the backend with the\n> message \"palloc failure: memory exhausted\". Even at 2 keyparts, performance\n> suffers greatly.\n\nActually it did not crash on me even on 3 keyparts, the backend just \ngrew to 97MB and stayed so until I closed access ;(. 
\n\nOnce I had to kill both access and backend, but then I had been \ncareless and viewed two tables with a primary key of more \nthan 1 field ;)\n\n> In both of the above examples, Access is trying to retrieve 10 records\n> using a \"Prepared\" statement (prepared statements are \"faked\" in the\n> driver, since they are not implemented in the backend) with the unique\n> index of the table.\n\nPerhaps the rewriting of ORs to UNION could be done while \"Preparing\".\n\nThe heuristic would be just to check if the where clause has \nalternating ANDs and ORs and then split and rewrite it to a union at each\nOR.\n\nThis of course can hit the infamous 8k limitation of query size \n(is it still there ?)\n\n> \n> The reason we are mentioning this with renewed vigor, is that in the\n> past, with the old driver, Access 7.0 and Access 97, would ask the user\n> what they wanted the unique index to be. You could tell it whatever you\n> wanted, and even, not specify any unique index. Now, with this new\n> unique index fix, you will not have a choice as to whether you want to\n> use unique indexes or not, which, depending on how many fields are being\n> indexed on, may crash the backend.\n\nAs a temporary fix, you could just return the unique indexes of one \nfield only. You could easily remove the check later when the backend gets\nfixed.\n\nOr only the ones with specific naming, for example ending in *_mspkey ?\n \n> \n> Sorry for the long length of this letter.\n> \n\nUntil this is fixed it should go in some readme in BIG BOLD LETTERS ;)\n\nHannu\n", "msg_date": "Tue, 28 Apr 1998 10:43:03 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Driver and Unique Indexes" } ]
[ { "msg_contents": "I have tried almost everything I can think of, put postgres keepy dying.\nI am running on a FreeBSD 2.2.5 system. It was complaining about not\nenough shared memory. I bumped the shared mem to 16mb (the system has 128)\nnow it doesn't complain, it just dumps core.\nThe numbers are different this time, but today, the command I am trying to\nexecute is:\nINSERT INTO word_detail VALUES (131730,18596,1)\nnow word_detail is:\n| word_id | int4 | 4 |\n| url_id | int4 | 4 |\n| word_count | int2 | 2 |\n\nand it has non-unique indexes on word_id and url_id\nsc=> select count(*) from word_detail;\nField| Value\n-- RECORD 0 --\ncount| 637466\n\nThere is quite a bit of data here as well...\nsc=> INSERT INTO word_detail VALUES (131730,18596,1);\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\nNo debugging comes out in the log either:\nFindBackend: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\n ---debug info---\n Quiet = f\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 256\n sortmem = 4096\n query echo = t\n DatabaseName = [sc]\n ----------------\n\n InitPostgres()..\n StartTransactionCommand() at Mon Apr 27 19:46:41 1998\n\n ProcessQuery() at Mon Apr 27 19:46:41 1998\n\n\nI even tried selecting this into another table and re-building the\nindexes.\n\nIt allowed me to insert about 50 more values in before it started crapping\nout again. Any ideas? The binaries compiled normally and passed all the\ntests. \n\n-Mike\n\n", "msg_date": "Mon, 27 Apr 1998 19:31:25 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres still dying on insert" }, { "msg_contents": "Michael Richards wrote:\n> \n> sc=> INSERT INTO word_detail VALUES (131730,18596,1);\n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n\nDid you look in postmaster log ?\n\nVadim\n", "msg_date": "Tue, 28 Apr 1998 10:01:25 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Postgres still dying on insert" }, { "msg_contents": "On Tue, 28 Apr 1998, Vadim B. Mikheev wrote:\n\n> > sc=> INSERT INTO word_detail VALUES (131730,18596,1);\n> > PQexec() -- Request was sent to backend, but backend closed the channel\n> > before responding.\n> > This probably means the backend terminated abnormally before or\n> > while processing the request.\n> \n> Did you look in postmaster log ?\n\nThe log shows nothing... That is why I am at a loss of where to look next.\nI even ran it in another terminal without disassociating from the\nterminal. Absolutely nothing appeared on the terminal after the query\nstarted running...\n\n-Mike\n\n", "msg_date": "Tue, 28 Apr 1998 00:07:54 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Postgres still dying on insert" }, { "msg_contents": "On Mon, 27 Apr 1998, Michael Richards wrote:\n\n> I have tried almost everything I can think of, put postgres keepy dying.\n> I am running on a FreeBSD 2.2.5 system. It was complaining about not\n> enough shared memory. 
> I bumped the shared mem to 16mb (the system has 128);\n> now it doesn't complain, it just dumps core.\n> The numbers are different this time, but today, the command I am trying to\n> execute is:\n> INSERT INTO word_detail VALUES (131730,18596,1)\n> now word_detail is:\n> | word_id | int4 | 4 |\n> | url_id | int4 | 4 |\n> | word_count | int2 | 2 |\n> \n> and it has non-unique indexes on word_id and url_id\n> sc=> select count(*) from word_detail;\n> Field| Value\n> -- RECORD 0 --\n> count| 637466\n> \n> There is quite a bit of data here as well...\n> sc=> INSERT INTO word_detail VALUES (131730,18596,1);\n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n\n\tWhen this crash happens, do you restart postmaster itself? If so,\nwill that INSERT work right afterwards?\n\n\tIf not, what happens if you create a temporary table with the same\n'schema' and insert that record? Does that work?\n\n\tIf not, what happens if you go to 'int8'?\n\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 28 Apr 1998 00:37:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres still dying on insert" }, { "msg_contents": "Hello,\n\nI have a very similar problem with postgres 6.3.1.\nThe postmaster dies with no comment on insertion on my Linux 2.0.33\n96MB box, when I want to create a new user. (I've posted the problem\nwith no response.)\nI've tracked it down to an insert into pg_user.\n\nBecause of my lack of time I downgraded to 6.2.1, and all went well.\n\nI assume the cause is my limited disk space (about 30MB on the \ndisk where the database resides).\n\nI have not tested 6.3.2.\n\n\tRalf\n\n-- \nFraunhofer IPK\nDipl.-Inform. Ralf Berger\nPascalstr. 8-9\n10587 Berlin\n\nTel.: ++49-(0)30 390 06 129\nFax.: ++49-(0)30 391 10 37\n\n---\n\nIn anything at all, perfection is finally attained not when\nthere is no longer anything to add, but when there is \nno longer anything to take away.\n\nAntoine de Saint Exupery\n", "msg_date": "Tue, 28 Apr 1998 11:30:50 +0200", "msg_from": "Ralf Berger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Postgres still dying on insert" }, { "msg_contents": "\nHowdy,\n\nI just moved our existing customer database from 6.2.1 to 6.3.2. I couldn't\nget the pg_dump command to work right, so I made the following script. \n\nThe end result was I made over 8000 insertions in under an hour without a\nfailure. A random sampling of different areas shows the records made it\nover intact. \n\nI had problems with 6.3.1 but 6.3.2 seems to be ok.. My big wish is\nfor Access 97 to work seamlessly with it....\n\n-Rob\n\n\nNOTE: This mess was a quick hack job with little commenting... took me about 5\nminutes to make. If only I could code C++ that fast... Anyway just type\ndatawarez.pl <filename>\nwhere <filename> is the name of the table. \n\nThe file is generated from an earlier version of Postgres using \\o\n<filename> and then a select * from <filename>; at the command prompt. \n\nThis perl script does single-line queries only. No transactions.\n\n-r\n\n#!/usr/bin/perl\n#\n# PROGRAM: DATAWAREZ.PL DATE: 28-APR-98\n# CREATOR: ROBERT HILTIBIDAL\n#\n# PURPOSE: DATAWAREZ.PL takes the data file from the command line and\n# then outputs each line as a query to the database\n\n# Get the filename\n$file = $ARGV[0];\n\n# Set the count variable\n$count = 0;\n\n# Start the mess\nopen(sql,\"$file\") or die \"can't open $file: $!\";\nwhile (<sql>) {\n # The first line of the dump holds the column names;\n # every following line is a data row.\n if ($count == 0) {\n @fields = split(/\\|/,$_);\n $fields[$#fields] =~ s/\\n//g;\n }\n else {\n @data = split(/\\|/,$_);\n $data[$#data] =~ s/\\n//g;\n $data[$#data] =~ s/\\r//g;\n $data[$#data] =~ s/\\f//g;\n }\n $count++;\n if ($count > 1) {\n # Build the INSERT statement: column list, then quoted values.\n $query = \"Insert into $file (\";\n $fieldcount =0;\n foreach $element (@fields) {\n if ($fieldcount == $#fields) {\n $query .= \"$element)\";\n }\n else { \n $query .= \"$element,\";\n }\n $fieldcount++;\n }\n $query .= \" VALUES (\";\n $datcount = 0;\n foreach $element (@data) {\n if ($datcount == $#data) {\n $query .= \"\\'$element\\');\";\n }\n else { \n $query .= \"\\'$element\\',\";\n }\n $datcount++;\n } \n print \"$query \\n\\n\";\n # Hand each statement to psql; one backend round trip per row.\n @results = `/usr/local/pgsql/bin/psql -t -A -q -d YOURDB -c \"$query\"`;\n print @results,\"\\n\\n\";\n\n }\n}\nclose sql;\n\n\n\n\n\n\nAt 11:30 AM 4/28/98 +0200, Ralf Berger wrote:\n>Hello,\n>\n>I have a very similar problem with postgres 6.3.1.\n> [snip]\n\n##########################################################\nRobert Hiltibidal Office 217-544-2775\nSystems Programmer Fax 217-527-3550\nFGInet, Inc\[email protected]\[email protected]\n \n", "msg_date": "Tue, 28 Apr 1998 07:15:53 -0500", "msg_from": "Robert Hiltibidal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Postgres still dying on insert" } ]
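For loads like this, forking one psql per row is the expensive part; the same job can be done over a single connection with libpq. This is only a generic sketch (the database name and the pipe-delimited input format are assumptions carried over from the script above):

#include <stdio.h>
#include "libpq-fe.h"

/* Read "word_id|url_id|word_count" rows from stdin and insert each
 * one over a single backend connection. */
int
main(void)
{
    PGconn   *conn = PQsetdb(NULL, NULL, NULL, NULL, "YOURDB");
    PGresult *res;
    char      query[256];
    int       word_id, url_id, word_count;

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection to YOURDB failed\n");
        return 1;
    }
    while (scanf("%d|%d|%d", &word_id, &url_id, &word_count) == 3)
    {
        sprintf(query, "INSERT INTO word_detail VALUES (%d,%d,%d)",
                word_id, url_id, word_count);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "failed: %s\n", query);
        PQclear(res);
    }
    PQfinish(conn);
    return 0;
}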
[ { "msg_contents": ">> > The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n>> > postmaster at boot time and stop it at halt/reboot. Excelent.\n>> > But it is made for postgres account running tcsh. I know nothing about\ntchs\n>> > and my postgres account defaults to bash. So (thanks to Steve\n\"Stevers!\"\n>> > Coile) I changed it to bash:\n>\n>OK, but _I_ don't run bash. So someone else is now maintaining this\n>file? Why didn't we keep both forms in the file, with one commented out?\n>What are we trying to accomplish here??\n\nLet me explain myself.\n\nThe point is I changed script because IT DIDN'T STOP THE POSTMATER.\n\nThe original 'touch' line was commented out so there was no way (by means of\nsysV) the script would stop gracefully the postmaster. (I don't know what\ndamage would occur from improper shut down, but I dislike taking chances.)\n\nAnd even more, just uncommenting the 'touch' line wouldn't make it right\nsince the sysV expects the 'touched file' to be named after the halt/reboot\nscript symlink (this also implies keeping the same name on both symlinks).\nIn the original script it was ${POSTMASTER} which expanded to 'postmaster'\nwhile the sysV (X) editor symlinked it to [KS]??postgres.\n\nAbout bash, it's the usual shell for regular users. I guess the start/stop\nscript was meant for users who don't know/cannot write their own.\n\n\nClaudiu Balciza\n\n\n\n", "msg_date": "Tue, 28 Apr 1998 08:46:34 +0300", "msg_from": "\"Claudiu Balciza\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "> >> > postmaster at boot time and stop it at halt/reboot. Excelent.\n> >> > But it is made for postgres account running tcsh. I know nothing about\n> tchs\n> >> > and my postgres account defaults to bash. So (thanks to Steve\n> \"Stevers!\"\n> >> > Coile) I changed it to bash:\n...\n> >OK, but _I_ don't run bash. So someone else is now maintaining this\n> >file? Why didn't we keep both forms in the file, with one commented out?\n> >What are we trying to accomplish here??\n... \n> About bash, it's the usual shell for regular users. I guess the start/stop\n> script was meant for users who don't know/cannot write their own.\n\nPerhaps we should write scripts for just plain old 'sh'. This is available\neverywhere (sort of even on SCO).\n\nIf not that, bash might be a better choice than tcsh as it is perhaps more\ncommon (we even run it on all our Solaris machines where I work).\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Mon, 27 Apr 1998 23:39:06 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" } ]
[ { "msg_contents": "\n\nAfter a long wait (as I was busy with other things), I have posted the\nthe Spinlock back off patch I promised to the patches list. This does\nsemi-random backoff using select() to lessen throughput degradation due\nto spinlock contention with large numbers of runnable backends.\n\nThis patch is meant to work on all current platforms, but I have only tested\nit on Linux 2.0.32 i386 glibc (Redhat 5.0).\n\nI restructured the files s_lock.c and s_lock.h to better separate the portable\nparts from the machine dependant parts. Probably the best way to see what\nhappened is to apply the patch and then look at the changed files rather than\nto try to read the patch directly.\n\nI have also added a timeout feature to the attempt to grab a spinlock. If after\na suitably long time (currently a few minutes) a lock still cannot be locked,\nwe printf() a message and abort() the backend.\n\nI hope that I have preserved the correctness of the tas() assembly code, but\nthis needs to be tested on each platform to make sure I have the sense of\nthe tests right. Remember, tas() is test_and_set and returns the PRIOR STATE\nof the lock. If the prior state was FREE, the caller of TAS is now the lock\nowner. Otherwise, the lock was already locked by someone else.\n\nTo make it easier to test on each platform, I have added a test routine and\nmakefile target to verify the S_LOCK() functionality. To run this:\n\nIf not done already\n apply patch\n run configure\nand then\n cd src/backend/buffer\n make tas_test\n\nIf the test appears to hang (or you end up after a few minutes with the\n\"Stuck Spinlock\" message), then S_LOCK() is working. Otherwise, please have\na look at what TAS() is returning and either fix it for the platform, or let\nme know and I will give it a whack.\n\nFiles affected:\n src/backend/storage/buffer/s_lock.c\n src/backend/storage/buffer/Makefile\n src/include/port/linux.h\n src/include/storage/s_lock.h\n\n\nLet me know if there are any problems or questions.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n\n", "msg_date": "Tue, 28 Apr 1998 00:31:45 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "S_LOCK contention reduction via backoff,\n patch posted to patches list." } ]
[ { "msg_contents": "> > >OK, but _I_ don't run bash. So someone else is now maintaining this\n> > >file? Why didn't we keep both forms in the file, with one commented out?\n> > >What are we trying to accomplish here??\n> ...\n> > About bash, it's the usual shell for regular users. I guess the start/stop\n> > script was meant for users who don't know/cannot write their own.\n> \n> Perhaps we should write scripts for just plain old 'sh'. This is available\n> everywhere (sort of even on SCO).\n> \n> If not that, bash might be a better choice than tcsh as it is perhaps more\n> common (we even run it on all our Solaris machines where I work).\n\nIf you want portability, then plain old Bourne shell should probably be\nthe shell of choice. Most implementations's on /bin/sh behave the same,\nand such scripts will run just fine under bash or ksh.\n\nThe same is not true in the csh world...different implementations of\n/bin/csh tend to have different bugs (eg some of them have the sense of\n|| and && swapped). Tcsh is (I believe) fairly bug free...but much less\nwidely available than /bin/sh...\n\nPete.\n", "msg_date": "Tue, 28 Apr 1998 10:25:13 +0100", "msg_from": "Peter Bentley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postgres init script things solved" } ]
[ { "msg_contents": "The script, createuser, will accept user names that CREATE USER will\nreject, and that cannot be named in a GRANT command.\n\nI would still prefer it if CREATE USER and GRANT would accept any valid\nUnix login name, including some punctuation characters and mixed-case,\nbut here is a patch to make things consistent with their current\nbehaviour.\n\nThis patch to createuser rejects user names that contain any characters\napart from alphanumerics and '_' and it converts upper- to lower-case,\nwarning the user that it is doing so. It uses tr to do this; I am using\nthe GNU version; if other versions don't support the character classes\nthat I have used, the classes should be changed to more long-winded lists\nof characters.\n\nThe patch also changes `done', used as a variable name, to `isdone',\nbecause my colour-coded vim editor was complaining of invalid syntax.\n\n\nThis is the patch:\n\ndiff -cr postgresql-6.3.2.orig/src/bin/createuser/createuser.sh \npostgresql-6.3.2/src/bin/createuser/createuser.sh\n*** postgresql-6.3.2.orig/src/bin/createuser/createuser.sh\tWed Feb 25 13:08:37 \n1998\n--- postgresql-6.3.2/src/bin/createuser/createuser.sh\tTue Apr 28 11:28:40 1998\n***************\n*** 94,103 ****\n # get the user name of the new user. Make sure it doesn't already exist.\n #\n \n! if [ -z \"$NEWUSER\" ]\n! then\n echo _fUnKy_DASH_N_sTuFf_ \"Enter name of user to add ---> \n_fUnKy_BACKSLASH_C_sTuFf_\"\n read NEWUSER\n fi\n \n QUERY=\"select usesysid from pg_user where usename = '$NEWUSER' \"\n--- 94,119 ----\n # get the user name of the new user. Make sure it doesn't already exist.\n #\n \n! while [ -z \"$NEWUSER\" ]\n! do\n echo _fUnKy_DASH_N_sTuFf_ \"Enter name of user to add ---> \n_fUnKy_BACKSLASH_C_sTuFf_\"\n read NEWUSER\n+ done\n+ \n+ # Check username conforms to allowed patterns\n+ x=`echo _fUnKy_DASH_N_sTuFf_ $NEWUSER _fUnKy_BACKSLASH_C_sTuFf_ | tr -d \n'[:alnum:]_'`\n+ if [ ! -z \"$x\" ]\n+ then\n+ echo $CMDNAME: invalid characters in username \\'$NEWUSER\\' >&2\n+ exit 1\n+ fi\n+ x=`echo _fUnKy_DASH_N_sTuFf_ $NEWUSER _fUnKy_BACKSLASH_C_sTuFf_ |\n+ tr '[:upper:]' '[:lower:]'`\n+ \n+ if [ $x != $NEWUSER ]\n+ then\n+ echo $CMDNAME: upper-case characters in username \\'$NEWUSER\\' folded \nto lower-case >&2\n+ NEWUSER=$x\n fi\n \n QUERY=\"select usesysid from pg_user where usename = '$NEWUSER' \"\n***************\n*** 116,128 ****\n exit 1\n fi\n \n! done=0\n \n #\n # get the system id of the new user. Make sure it is unique.\n #\n \n! while [ $done -ne 1 ]\n do\n SYSID=\n DEFSYSID=`pg_id $NEWUSER 2>/dev/null`\n--- 132,144 ----\n exit 1\n fi\n \n! isdone=0\n \n #\n # get the system id of the new user. Make sure it is unique.\n #\n \n! while [ $isdone -ne 1 ]\n do\n SYSID=\n DEFSYSID=`pg_id $NEWUSER 2>/dev/null`\n***************\n*** 156,162 ****\n \t\techo \"$CMDNAME: $SYSID already belongs to $RES, pick another\"\n \t\tDEFMSG= DEFSYSID= SYSID=\n \telse\n! \t\tdone=1\n \tfi\n done\n done\n--- 172,178 ----\n \t\techo \"$CMDNAME: $SYSID already belongs to $RES, pick another\"\n \t\tDEFMSG= DEFSYSID= SYSID=\n \telse\n! \t\tisdone=1\n \tfi\n done\n done\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. 
Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Tue, 28 Apr 1998 12:04:42 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for createuser to make it consistent with CREATE USER" } ]
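The character check the patch performs with tr could equally be done in C; here is a stand-alone sketch of the same rule (the function name is invented, and this is not part of the patch):

#include <ctype.h>

/* Accept only alphanumerics and '_', folding upper case to lower in
 * place, mirroring the two tr pipelines in the patch.  Returns 0 if an
 * illegal character is found. */
static int
valid_username(char *name)
{
    char *p;

    for (p = name; *p; p++)
    {
        if (isupper((unsigned char) *p))
            *p = (char) tolower((unsigned char) *p);
        else if (!isalnum((unsigned char) *p) && *p != '_')
            return 0;
    }
    return 1;
}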
[ { "msg_contents": "If PQfn() receives NOTICEs from the backend, it fails because there is no\nprovision to deal with them.\n\nThis patch (supplied by Anders Hammarquist <[email protected]> to me as Debian\nmaintainer of postgresql) cures the problem:\n\ndiff -cr postgresql-6.3.2.orig/src/interfaces/libpq/fe-exec.c \npostgresql-6.3.2/src/interfaces/libpq/fe-exec.c\n*** postgresql-6.3.2.orig/src/interfaces/libpq/fe-exec.c\tMon Mar 16 08:0\n0:26 \n1998\n- --- postgresql-6.3.2/src/interfaces/libpq/fe-exec.c\tTue Apr 28 06:47:22 199\n8\n***************\n*** 1545,1556 ****\n \t}\n \tpqFlush(pfout, pfdebug);\n \n! \tid = pqGetc(pfin, pfdebug);\n! \tif (id != 'V')\n \t{\n \t\tif (id == 'E')\n \t\t{\n \t\t\tpqGets(conn->errorMessage, ERROR_MSG_LENGTH, pfin, pfde\nbug);\n \t\t}\n \t\telse\n \t\t\tsprintf(conn->errorMessage,\n- --- 1545,1570 ----\n \t}\n \tpqFlush(pfout, pfdebug);\n \n! \twhile ((id = pqGetc(pfin, pfdebug)) != 'V')\n \t{\n \t\tif (id == 'E')\n \t\t{\n \t\t\tpqGets(conn->errorMessage, ERROR_MSG_LENGTH, pfin, pfde\nbug);\n+ \t\t}\n+ \t\telse if (id == 'N')\n+ \t {\n+ \t /* print notice and go back to processing return \n+ \t\t\t values */\n+ \t if (pqGets(conn->errorMessage, ERROR_MSG_LENGTH, \n+ \t\t\t\tpfin, pfdebug) == 1)\n+ \t\t\t{\n+ \t\t\t\tsprintf(conn->errorMessage,\n+ \t\t\t\t\"Notice return detected from backend, but \"\n+ \t\t\t\t\"message cannot be read\");\n+ \t\t\t}\n+ \t\t\telse\n+ \t\t\t\tfprintf(stderr, \"%s\\n\", conn->errorMessage);\n+ \t\t\tcontinue;\n \t\t}\n \t\telse\n \t\t\tsprintf(conn->errorMessage,\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Tue, 28 Apr 1998 12:10:00 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for NOTICE messages to PQfn() from backend" } ]
[ { "msg_contents": "> Yes it is; I hadn't tried double-quotes, because single-quotes are used\n> for strings - it didn't occur to me! (Incidentally, WHY double-quotes here\n> instead of single-quotes? Surely that's against SQL practice?)\n\nNo SQL Standard needs eighter no quotes or double quotes, since \nthe user name is handeled as an identifier (like a table name).\nThis is to cleanly distinguish between a string and an identifier.\nIf you use special characters for user names you have to double quote them.\n\n> bray=> grant all on address to \"www-data\";\n> ERROR: aclparse: mode flags must use \"arwR\"\n\nThis seems to be a different problem.\n\nAndreas \n\n\n", "msg_date": "Tue, 28 Apr 1998 18:10:34 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to\n\twww-data" } ]
[ { "msg_contents": "Hi, all\n\nPostgreSQL has two COPY commands to import/export data;\n\n copy [binary] <class_name> [with oids]\n {to|from} {<filename>|stdin|stdout} [using delimiters <delim>];\nand...\n\n \\copy table {from | to} <fname>\n \nboth of them work in a different way;\n In the first one you have to specify 'filename' surrounded by ''\n and if you don't specify an absolute pathname PostgreSQL uses\n $PGDATA/base/<databasename>/<filename>\n\n In the last one you have to specify 'filename' without by ''\n and if you don't specify an absolute pathname PostgreSQL uses\n current working directory.\n and last... if you don't specify any parameter it show me this:\n\njava=> \\copy\nconnecting to new database: opy\nPQexec() -- There is no connection to the backend.\n\nCould not connect to new database. exiting\n\nMy question is:\n \n Why do we have two commands to doing the same operation ?\n Why are they different ?\n\n Thanks, Jose'\n\n", "msg_date": "Tue, 28 Apr 1998 16:18:26 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "copy command" } ]
[ { "msg_contents": "Here is a revised proposal that takes into account the discussions\nof the last few days. Any comments?\n\n\nI propose to revise libpq and modify the frontend/backend protocol\nto provide the following benefits:\n * Provide a clean way of reading multiple results from a single query\n string. Among other things, this solves the problem of allowing a\n single query to return several result sets with different descriptors.\n * Allow a frontend to perform other work while awaiting the result of\n a query.\n * Add the ability to cancel queries in progress.\n * Eliminate the need for frontends to issue dummy queries in order\n to detect NOTIFY responses.\n * Eliminate the need for libpq to issue dummy queries internally\n to determine when a query is complete.\n\nWe can't break existing code for this, so the behavior of PQexec()\ncan't change. Instead, I propose new functions to add to the API.\nInternally, PQexec will be reimplemented in terms of these new\nfunctions, but old applications shouldn't notice any difference.\n\n\nThe new functions are:\n\n\tbool PQsendQuery (PGconn *conn, const char *query);\n\nSubmits a query without waiting for the result. Returns TRUE if the\nquery has been successfully dispatched, otherwise FALSE (in the FALSE\ncase, an error message is left in conn->errorMessage).\n\n\tPGresult* PQgetResult (PGconn *conn);\n\nWaits for input from the backend, and consumes input until (a) a result is\navailable, (b) the current query is over, or (c) a copy in/out operation\nis detected. NULL is returned if the query is over; in all other cases a\nsuitable PGresult is returned (which the caller must eventually free).\nNote that no actual \"wait\" will occur if the necessary input has already\nbeen consumed; see below.\n\n\tbool PQisBusy (PGconn *conn);\n\nReturns TRUE if a query operation is busy (that is, a call to PQgetResult\nwould block waiting for more input). Returns FALSE if PQgetResult would\nreturn immediately.\n\n\tvoid PQconsumeInput (PGconn *conn);\n\nThis can be called at any time to check for and process new input from\nthe backend. It returns no status indication, but after calling it\nthe application can use PQisBusy() and/or PQnotifies() to see if a query\nwas completed or a NOTIFY message arrived. This function will never wait\nfor more input to arrive.\n\n\tint PQsocket (PGconn *conn);\n\nReturns the Unix file descriptor for the socket connection to the backend,\nor -1 if there is no open connection. This is a violation of modularity,\nof course, but there is no alternative: an application that needs\nasynchronous execution needs to be able to use select() to wait for input\nfrom either the backend or any other input streams it may have. To use\nselect() the underlying socket must be made visible.\n\n\tPGnotify *PQnotifies (PGconn *conn);\n\nThis function doesn't change; we just observe that notifications may\nbecome available as a side effect of executing either PQgetResult() or\nPQconsumeInput(), not just PQexec().\n\n\tvoid PQrequestCancel (PGconn *conn);\n\nIssues a cancel request if possible. There is no direct way to tell whether\nthis has any effect ... see discussion below.\n\n\nDiscussion:\n\nAn application can continue to use PQexec() as before, and notice\nvery little difference in behavior.\n\nApplications that want to be able to handle multiple results from a\nsingle query should replace PQexec calls with logic like this:\n\n\t// Submit the query\n\tif (! 
PQsendQuery(conn, query))\n\t\treportTheError();\n\t// Wait for and process result(s)\n\twhile ((result = PQgetResult(conn)) != NULL) {\n\t\tswitch (PQresultStatus(result)) {\n\t\t... process result, for example:\n\t\tcase PGRES_COPY_IN:\n\t\t\t// ... copy data here ...\n\t\t\tif (PQendcopy(conn))\n\t\t\t\treportTheError();\n\t\t\tbreak;\n\t\t...\n\t\t}\n\t\tPQclear(result);\n\t}\n\t// When fall out of loop, we're done and ready for a new query\n\nNote that PQgetResult will always report errors by returning a PGresult\nwith status PGRES_NONFATAL_ERROR or PGRES_FATAL_ERROR, not by returning\nNULL (since NULL implies non-error termination of the processing loop).\n\nPQexec() will be implemented as follows:\n\n\tif (! PQsendQuery(conn, query))\n\t\treturn makeEmptyPGresult(conn, PGRES_FATAL_ERROR);\n\tlastResult = NULL;\n\twhile ((result = PQgetResult(conn)) != NULL) {\n\t\tPQclear(lastResult);\n\t\tlastResult = result;\n\t}\n\treturn lastResult;\n\nThis maintains the current behavior that the last result of a series\nof commands is returned by PQexec. (The old implementation is only\ncapable of doing that correctly in a limited set of cases, but in the\ncases where it behaves usefully at all, that's how it behaves.)\n\nThere is a small difference in behavior, which is that PQexec will now\nreturn a PGresult with status PGRES_FATAL_ERROR in cases where the old\nimplementation would just have returned NULL (and set conn->errorMessage).\nHowever, any correctly coded application should handle this the same way.\n\nIn the above examples, the frontend application is still synchronous: it\nblocks while waiting for the backend to reply to a query. This is often\nundesirable, since the application may have other work to do, such as\nresponding to user input. Applications can now handle that by using\nPQisBusy and PQconsumeInput along with PQsendQuery and PQgetResult.\n\nThe general idea is that the application's main loop will use select()\nto wait for input (from either the backend or its other input sources).\nWhen select() indicates that input is pending from the backend, the app\nwill call PQconsumeInput, followed by checking PQisBusy and/or PQnotifies\nto see what has happened. If PQisBusy returns FALSE then PQgetResult\ncan safely be called to obtain and process a result without blocking.\n\nNote also that NOTIFY messages can arrive asynchronously from the backend.\nThey can be detected *without issuing a query* by calling PQconsumeInput\nfollowed by PQnotifies. I expect a lot of people will build \"partially\nasync\" applications that detect notifies this way but still do all their\nqueries through PQexec (or better, PQsendQuery followed by a synchronous\nPQgetResult loop). This compromise allows notifies to be detected without\nwasting time by issuing null queries, yet the basic logic of issuing a\nseries of queries remains simple.\n\nFinally, since the application can retain control while waiting for a\nquery response, it becomes meaningful to try to cancel a query in progress.\nThis is done by calling PQrequestCancel(). Note that PQrequestCancel()\nmay not have any effect --- if there is no query in progress, or if the\nbackend has already finished the query, then it *will* have no effect.\nThe application must continue to follow the result-reading protocol after\nissuing a cancel request. 
If the cancel is successful, its effect will be\nto cause the current query to fail and return an error message.\n\n\nPROTOCOL CHANGES:\n\nWe should change the protocol version number to 2.0.\nIt would be possible for the backend to continue to support 1.0 clients,\nif you think it's worth the trouble to do so.\n\n1. New message type:\n\nCommand Done\n\tByte1('Z')\n\nThe backend will emit this message at completion of processing of every\ncommand string, just before it resumes waiting for frontend input.\nThis change eliminates libpq's current hack of issuing empty queries to\nsee whether the backend is done. Note that 'Z' must be emitted after\n*every* query or function invocation, no matter how it terminated.\n\n2. The RowDescription ('T') message is extended by adding a new value\nfor each field. Just after the type-size value, there will now be\nan int16 \"atttypmod\" value. (Would someone provide text specifying\nexactly what this value means?) libpq will store this value in\na new \"adtmod\" field of PGresAttDesc structs.\n\n3. The \"Start Copy In\" response message is changed from 'D' to 'G',\nand the \"Start Copy Out\" response message is changed from 'B' to 'H'.\nThese changes eliminate potential confusion with the data row messages,\nwhich also have message codes 'D' and 'B'.\n\n4. The frontend may request cancellation of the current query by sending\na single byte of OOB (out-of-band) data. The contents of the data byte\nare irrelevant, since the cancellation will be triggered by the associated\nsignal and not by the data itself. (But we should probably specify that\nthe byte be zero, in case we later think of a reason to have different\nkinds of OOB messages.) There is no specific reply to this message.\nIf the backend does cancel a query, the query terminates with an ordinary\nerror message indicating that the query was cancelled.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Apr 1998 12:21:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "> 2. The RowDescription ('T') message is extended by adding a new value\n> for each field. Just after the type-size value, there will now be\n> an int16 \"atttypmod\" value. (Would someone provide text specifying\n> exactly what this value means?) libpq will store this value in\n> a new \"adtmod\" field of PGresAttDesc structs.\n\n>From src/include/catalog/pg_attribute.h:\n\n /*\n * atttypmod records type-specific modifications supplied at table\n * creation time, and passes it to input and output functions as the\n * third argument.\n */\n\nCurrently only used for char() and varchar(), and includes a 4-byte\nheader.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 28 Apr 1998 12:42:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Tom Lane wrote:\n> \n> Here is a revised proposal that takes into account the discussions\n> of the last few days. Any comments?\n\nJust one at the end \n\n[snip]\n\n> 4. The frontend may request cancellation of the current query by sending\n> a single byte of OOB (out-of-band) data. 
The contents of the data byte\n> are irrelevant, since the cancellation will be triggered by the associated\n> signal and not by the data itself. (But we should probably specify that\n> the byte be zero, in case we later think of a reason to have different\n> kinds of OOB messages.) There is no specific reply to this message.\n> If the backend does cancel a query, the query terminates with an ordinary\n> error message indicating that the query was cancelled.\n\nYou didn't come right out and say it, but are you intending to support\nmultiple queries within a connection? I gather not. Not that I'm\nsuggesting that this be done, as it seems this would complicate the\nuser's application and the backend. With only one possible OOB\nmessage, you can't tell it which query to cancel.\n\nOcie Mitchell\n", "msg_date": "Tue, 28 Apr 1998 11:03:09 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "I suggest the application already has fork or fork/exec to\nimplement an asynchronous design. Does that also keep the\nsocket out of the application's domain?\n\nBob\[email protected]\n", "msg_date": "Tue, 28 Apr 1998 15:53:24 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Revised proposal for libpq and FE/BE protocol\n\tchanges" }, { "msg_contents": "> \n> Tom Lane wrote:\n> > \n> > Here is a revised proposal that takes into account the discussions\n> > of the last few days. Any comments?\n> \n> Just one at the end \n> \n> [snip]\n> \n> > 4. The frontend may request cancellation of the current query by sending\n> > a single byte of OOB (out-of-band) data. The contents of the data byte\n> > are irrelevant, since the cancellation will be triggered by the associated\n> > signal and not by the data itself. (But we should probably specify that\n> > the byte be zero, in case we later think of a reason to have different\n> > kinds of OOB messages.) There is no specific reply to this message.\n> > If the backend does cancel a query, the query terminates with an ordinary\n> > error message indicating that the query was cancelled.\n> \n> You didn't come right out and say it, but are you intending to support\n> multiple queries within a connection? I gather not. Not that I'm\n> suggesting that this be done, as it seems this would complicate the\n> user's application and the backend. With only one possible OOB\n> message, you can't tell it which query to cancel.\n> \n> Ocie Mitchell\n\nWaves hand wildly... I know, I know!\n\n All of them!\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Tue, 28 Apr 1998 22:32:33 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Tom Lane writes:\n> I propose to revise libpq and modify the frontend/backend protocol\n> to provide the following benefits:\n> * Provide a clean way of reading multiple results from a single query\n> string. Among other things, this solves the problem of allowing a\n> single query to return several result sets with different descriptors.\n\nDoes this mean I can read in a complete C array with one call? I mean\nsomething like this:\n\nchar emp_name[10][10];\n\nexec sql select name into :emp_name from emp;\n\nBut then I didn't see anything like this in your examples. Do I have to\niterate using PQgetResult then?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 29 Apr 1998 10:35:31 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "[email protected] writes:\n> You didn't come right out and say it, but are you intending to support\n> multiple queries within a connection? I gather not.\n\nThat was something I asked about a few days ago, and didn't get any\nresponses suggesting that anyone thought it was likely to happen.\n\nWe would need wholesale changes everywhere in the protocol to support\nconcurrent queries: answers and errors coming back would have to be\ntagged to indicate which query they apply to. The lack of a tag in\nthe cancel message isn't the controlling factor.\n\nIn the current system architecture, much the easiest way to execute\nconcurrent queries is to open up more than one connection. There's\nnothing that says a frontend process can't fire up multiple backend\nprocesses. I think this is probably sufficient, because I don't\nforesee such a thing becoming really popular anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Apr 1998 10:28:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Does this mean I can read in a complete C array with one call? I mean\n> something like this:\n> char emp_name[10][10];\n> exec sql select name into :emp_name from emp;\n\nAs far as I know that works now; or at least, if it doesn't work it's\na limitation of the embedded-SQL interface, and not anything that has\nto do with libpq or the fe/be protocol.\n\nA \"result\" in libpq's terms is the result of a single SQL command.\nThe result of a successful query, for example, is typically multiple\nrows of data. You only need a PQgetResult loop if (a) you send a\nquery string that contains several commands, or (b) you issue a\nquery whose answer contains more than one kind of tuple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Apr 1998 10:35:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "[email protected] writes:\n> I suggest the application already has fork or fork/exec to\n> implement an asynchronous design.\n\nTrue, if you don't mind assuming you have threads then you could\ndedicate one thread to blocking in libpq while your other threads manage\nyour user interface and so forth. But most of these revisions would\nstill be useful in that situation. The current libpq does not cope well\nwith query strings containing multiple commands; it doesn't cope at all\nwith queries that return more than one type of tuple; it requires dummy\nqueries (wasting both processing time and network bandwidth) to check\nfor NOTIFY messages; and so forth.
None of those problems can be solved\njust by moving calls to libpq into a separate thread.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Apr 1998 10:50:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Revised proposal for libpq and FE/BE\n\tprotocol changes" }, { "msg_contents": "> That was something I asked about a few days ago, and didn't get any\n> responses suggesting that anyone thought it was likely to happen.\n> \n> We would need wholesale changes everywhere in the protocol to support\n> concurrent queries: answers and errors coming back would have to be\n> tagged to indicate which query they apply to. The lack of a tag in\n> the cancel message isn't the controlling factor.\n> \n> In the current system architecture, much the easiest way to execute\n> concurrent queries is to open up more than one connection. There's\n> nothing that says a frontend process can't fire up multiple backend\n> processes. I think this is probably sufficient, because I don't\n> foresee such a thing becoming really popular anyway.\n\nIf we can remove the exec() in 6.4, that will make backend startup even\nquicker.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 12:04:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Tom Lane wrote:\n> \n> PROTOCOL CHANGES:\n> \n> We should change the protocol version number to 2.0.\n> It would be possible for the backend to continue to support 1.0 clients,\n> if you think it's worth the trouble to do so.\n\nOr 1.1? The changes don't seem too traumatic. Either way, maintaining\nsupport for 1.0 is important as not all of us use libpq and we need time\nto catch up. Also we don't want to put barriers in the way of companies\nlike Openlink who seem willing to provide support for PostgreSQL in\ncommercial products.\n\n> 1. New message type:\n> \n> Command Done\n> Byte1('Z')\n> \n> The backend will emit this message at completion of processing of every\n> command string, just before it resumes waiting for frontend input.\n> This change eliminates libpq's current hack of issuing empty queries to\n> see whether the backend is done. Note that 'Z' must be emitted after\n> *every* query or function invocation, no matter how it terminated.\n\nThe completion response already does this for successful queries, and\nthe error response for unsuccessful ones. I came to the conclusion (but\nnot with absolute certainty) a while back that the empty query hack was\nneeded for an old feature of the backend that is no longer there. From\nlooking at a dump of the data between psql and the backend for 6.3.2 I\ndon't think that those empty queries are issued any more. I have\nimplemented a pure Tcl frontend that doesn't issue them and I haven't\nseen any problems.\n\nThe exception to the above is the single empty query sent immediately\nafter the frontend has been successfully authenticated. This is useful\nbecause it has the side effect of checking that the user has privileges\nagainst the particular database - it is better to do this as part of the\nsession set up rather than the first real query which may be some time\nlater.\n\n> 3. 
The \"Start Copy In\" response message is changed from 'D' to 'G',\n> and the \"Start Copy Out\" response message is changed from 'B' to 'H'.\n> These changes eliminate potential confusion with the data row messages,\n> which also have message codes 'D' and 'B'.\n\nThe context means there should be no confusion - but if the protocol is\nbeing changed anyway then it makes sense to do this.\n\nPhil\n", "msg_date": "Wed, 29 Apr 1998 21:07:53 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Phil Thompson <[email protected]> writes:\n> Tom Lane wrote:\n>> We should change the protocol version number to 2.0.\n>> It would be possible for the backend to continue to support 1.0 clients,\n>> if you think it's worth the trouble to do so.\n\n> Or 1.1? The changes don't seem too traumatic.\n\nWell, pqcomm.h says that an incompatible change should have a new major\nversion number, and minor though these changes be, they *are*\nincompatible.\n\n> Either way, maintaining support for 1.0 is important as not all of us\n> use libpq and we need time to catch up.\n\nNo argument from me. It shouldn't be hard to emit the new stuff\nconditionally.\n\n>> Command Done\n>> Byte1('Z')\n\n> The completion response already does this for successful queries, and\n> the error response for unsuccessful ones.\n\nYou missed the point: it is possible to send more than one SQL command\nin a single query string. The reason that libpq sends empty queries is\nto determine whether the backend is actually done processing the string.\nI suppose we could instead try to make libpq smart enough to parse the\nstring it's sending and determine how many responses to expect ... but\nit seems much easier and more robust to have the backend tell us when\nit's done.\n\n> From looking at a dump of the data between psql and the backend for\n> 6.3.2 I don't think that those empty queries are issued any more.\n> I have implemented a pure Tcl frontend that doesn't issue them and I\n> haven't seen any problems.\n\nYou didn't exercise the cases where they are sent. A command that\ngenerates a \"C\" response without tuple data is needed to make libpq\ninsert an empty query at the moment. The code is a horrible kluge,\nbecause it only works for cases like\n\tset geqo to 'off'; show datestyle; select * from table;\nand not for, say,\n\tselect * from table1; select * from table2;\n\npsql masks the problem because it splits up your input into separate\ncommands and hands them to libpq one at a time. A dumber UI is needed\nto exhibit the trouble. (We should be able to rip out the\ncommand-splitting code from psql after making this change, BTW.\nI think it'll be better to have neither psql nor libpq know much of\nanything about SQL syntax.)\n\n> The exception to the above is the single empty query sent immediately\n> after the frontend has been successfully authenticated. This is useful\n\nRight. 
I didn't plan to remove that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Apr 1998 18:01:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "> \n> Tom Lane wrote:\n> > \n> > PROTOCOL CHANGES:\n> > \n> > We should change the protocol version number to 2.0.\n> > It would be possible for the backend to continue to support 1.0 clients,\n> > if you think it's worth the trouble to do so.\n> \n> Or 1.1? The changes don't seem too traumatic. Either way, maintaining\n> support for 1.0 is important as not all of us use libpq and we need time\n> to catch up. Also we don't want to put barriers in the way of companies\n> like Openlink who seem willing to provide support for PostgreSQL in\n> commercial products.\n\nYes, but there will be a month for people to get their third-party stuff\nchanged, and the changes are pretty straightforward. Having support\nfor both in the backend/frontend is going to make that code more\ndifficult.\n\nIf it was only a small change, we could keep it compatible, but it seems\nit would be best to just announce it early on. People can start testing\ntheir new drivers long before the beta period begins.\n\nAlso, we are making this change well in advance of the beta, so I hope\nthey would have enough time to make the transition.\n\n> > The backend will emit this message at completion of processing of every\n> > command string, just before it resumes waiting for frontend input.\n> > This change eliminates libpq's current hack of issuing empty queries to\n> > see whether the backend is done. Note that 'Z' must be emitted after\n> > *every* query or function invocation, no matter how it terminated.\n> \n> The completion response already does this for successful queries, and\n> the error response for unsuccessful ones. I came to the conclusion (but\n> not with absolute certainty) a while back that the empty query hack was\n> needed for an old feature of the backend that is no longer there. From\n> looking at a dump of the data between psql and the backend for 6.3.2 I\n> don't think that those empty queries are issued any more. I have\n> implemented a pure Tcl frontend that doesn't issue them and I haven't\n> seen any problems.\n> \n> The exception to the above is the single empty query sent immediately\n> after the frontend has been successfully authenticated. This is useful\n> because it has the side effect of checking that the user has privileges\n> against the particular database - it is better to do this as part of the\n> session set up rather than the first real query which may be some time\n> later.\n\nGood insight on the libpq interface. I think we need the new return\ncode because of the possibility of multiple results from the backend. \nIn the old code, without the empty query, doesn't a query with multiple\nstatements cause the send/return results to get out of sync?\n\n> > 3. 
The \"Start Copy In\" response message is changed from 'D' to 'G',\n> > and the \"Start Copy Out\" response message is changed from 'B' to 'H'.\n> > These changes eliminate potential confusion with the data row messages,\n> > which also have message codes 'D' and 'B'.\n> \n> The context means there should be no confusion - but if the protocol is\n> being changed anyway then it makes sense to do this.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 19:03:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Tom Lane wrote:\n> \n> Phil Thompson <[email protected]> writes:\n> > Tom Lane wrote:\n> >> We should change the protocol version number to 2.0.\n> >> It would be possible for the backend to continue to support 1.0 clients,\n> >> if you think it's worth the trouble to do so.\n> \n> > Or 1.1? The changes don't seem too traumatic.\n> \n> Well, pqcomm.h says that an incompatible change should have a new major\n> version number, and minor though these changes be, they *are*\n> incompatible.\n\nErr...good point :)\n\n> >> Command Done\n> >> Byte1('Z')\n> \n> > The completion response already does this for successful queries, and\n> > the error response for unsuccessful ones.\n> \n> You missed the point:\n\nI've misunderstood the protocol - and the protocol specification is\ntherefore wrong (or at least incomplete) in this respect. Do you want\nto fix the spec and include your enhancements or shall I?\n\nPhil\n", "msg_date": "Thu, 30 Apr 1998 18:21:17 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > Either way, maintaining\n> > support for 1.0 is important as not all of us use libpq and we need time\n> > to catch up. Also we don't want to put barriers in the way of companies\n> > like Openlink who seem willing to provide support for PostgreSQL in\n> > commercial products.\n> \n> Yes, but there will be a month for people to get their third-part stuff\n> changed, and the changes are pretty straight-forward. Having support\n> for both in the backend/frontend is going to make that code more\n> difficult.\n\nI agree it will be easy enough for most of us, but may be less so for\ncompanies that traditionally don't release often. Although I don't use\nOpenlink's software and can't comment on whether it's any good (or if\nanybody actually uses it), I take it as a compliment to PostgreSQL that\na commercial organisation is willing to provide some support for it. \nNot maintaining backwards compatibility for at least some time isn't\ngoing to encourage them to continue that support.\n\nPhil\n", "msg_date": "Thu, 30 Apr 1998 18:33:43 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Phil Thompson <[email protected]> writes:\n> I've misunderstood the protocol - and the protocol specification is\n> therefore wrong (or at least incomplete) in this respect. 
Do you want\n> to fix the spec and include your enhancements or shall I?\n\nYes, there are some things I thought were wrong in the programmer's guide\nchapter about the FE/BE protocol. I'd be happy to submit revised text.\n\nI haven't paid any attention yet to how the documentation is handled.\nIs the stuff in the distribution under doc/src/sgml considered the\neditable master text, or is it generated from some other format?\nDo I need to subscribe to pgsql-docs to find out what to do? :-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Apr 1998 14:42:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "Tom Lane wrote:\n> \n> Phil Thompson <[email protected]> writes:\n> > I've misunderstood the protocol - and the protocol specification is\n> > therefore wrong (or at least incomplete) in this respect. Do you want\n> > to fix the spec and include your enhancements or shall I?\n> \n> Yes, there are some things I thought were wrong in the programmer's guide\n> chapter about the FE/BE protocol. I'd be happy to submit revised text.\n> \n> I haven't paid any attention yet to how the documentation is handled.\n> Is the stuff in the distribution under doc/src/sgml considered the\n> editable master text, or is it generated from some other format?\n\nYes, doc/src/sgml/*.sgml is the editable format. Phil's writeup is in\nprotocol.sgml and that is all that would need to be touched. Then submit\nthe patches (or a replacement file since you are the only one editing\nthat at the moment) and we'll snarf them up and merge.\n\nThe SGML markup is not the prettiest, but don't be intimidated/annoyed\nby it. Especially if you are making incremental changes, I would guess\nthat you will see how to cut and paste using the existing markup tags in\nthe file. Let us know if you have any trouble with it. We can fix markup\nproblems after you post changes too...\n\n> Do I need to subscribe to pgsql-docs to find out what to do? :-)\n\nOnly if you want to keep writing :)\n\n - Tom\n", "msg_date": "Fri, 01 May 1998 01:46:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Revised proposal for libpq and FE/BE\n\tprotocol changes" }, { "msg_contents": "Bruce Momjian wrote: \n> > \n> > Tom Lane wrote:\n> > > \n> > > PROTOCOL CHANGES:\n> > > \n> > > We should change the protocol version number to 2.0.\n> > > It would be possible for the backend to continue to support 1.0 clients,\n> > > if you think it's worth the trouble to do so.\n> > \n> > Or 1.1? The changes don't seem too traumatic. Either way, maintaining\n> > support for 1.0 is important as not all of us use libpq and we need time\n> > to catch up. Also we don't want to put barriers in the way of companies\n> > like Openlink who seem willing to provide support for PostgreSQL in\n> > commercial products.\n> \n> Yes, but there will be a month for people to get their third-party stuff\n> changed, and the changes are pretty straightforward. Having support\n> for both in the backend/frontend is going to make that code more\n> difficult.\n> \n> If it was only a small change, we could keep it compatible, but it seems\n> it would be best to just announce it early on. 
People can start testing\n> their new drivers long before the beta period begins.\n> \n> Also, we are making this change well in advance of the beta, so I hope\n> they would have enough time to make the transition.\n\nI know this is old old discussion, so \"shut up, we're done with it\" is a\nfine answer...\n\nBut, I think maintaining compatibility with 1.0 is important. If we expect\npeople to really use this software to build real applications, then we\ncannot expect them to be interested in revising or even recompiling their\napplications.\n\nFor example a web development consulting house. They build shopping cart\nor other database using web sites for their clients. They do not want to\nhave to go to each of their customers (say hundreds of sites) and recompile\neverything just to take advantage of a new server that happens to fix some\nbugs they needed fixed. They also don't want to reload the databases or\notherwise get involved in upgrade issues. And, their clients don't want\nto pay for this either.\n\nDatabase customers at least in the commercial world can be incredibly\nconservative. It is not at all uncommon to have large sites running DBMS\nengines that are three major releases (ie, well over three years) old.\nOnce they get an app working, they really don't want anything to change.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Fri, 22 May 1998 00:00:02 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "> Database customers at least in the commercial world can be incredibly\n> conservative. It is not at all uncommon to have large sites running DBMS\n> engines that are three major releases (ie, well over three years) old.\n> Once they get an app working, they really don't want anything to change.\n\nYes, this is true. Their data is locked in our database. And you can't\njust restart it like a PC OS or word processor. Database demands are\nmuch different.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:16:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "> Database customers at least in the commercial world can be incredibly\n> conservative. It is not at all uncommon to have large sites running DBMS\n> engines that are three major releases (ie, well over three years) old.\n> Once they get an app working, they really don't want anything to change.\n\nAnd, oh yea, we kept database compatibility, thanks to Tom Lane.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:16:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "[email protected] (David Gould) writes:\n> I know this is old old discussion, so \"shut up, we're done with it\" is a\n> fine answer...\n> But, I think maintaining compatibility with 1.0 is important. \n\nWe did maintain compatibility, in that the server will still talk to\na client that identifies itself as 1.0 in the initial handshake.\n(Kudos to whoever had the foresight to put a protocol version number\nin the startup message, BTW.)\n\nAs things are currently set up, however, a new compilation of libpq\nwill only know how to talk protocol 2.0, so it cannot be used to\ntalk to an old server. Are you concerned about that? I don't see\nany easy way around it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 May 1998 10:49:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "Tom, just wondering where we are with this. Can you update libpq.3? I\nthink until the sgml of the manual is converted, they are the most\ncurrent. I just made some cleanups there myself. Are the sgml sources\nupdated with the protocol changes?\n\nAlso, are these items completed? How about our cancel query key? I\nthink it is random/secure enough for our purposes. Can you make the\nchanges, or do you need changes from me?\n\n---------------------------------------------------------------------------\n\n\n> Here is a revised proposal that takes into account the discussions\n> of the last few days. Any comments?\n> \n> \n> I propose to revise libpq and modify the frontend/backend protocol\n> to provide the following benefits:\n> * Provide a clean way of reading multiple results from a single query\n> string. Among other things, this solves the problem of allowing a\n> single query to return several result sets with different descriptors.\n> * Allow a frontend to perform other work while awaiting the result of\n> a query.\n> * Add the ability to cancel queries in progress.\n> * Eliminate the need for frontends to issue dummy queries in order\n> to detect NOTIFY responses.\n> * Eliminate the need for libpq to issue dummy queries internally\n> to determine when a query is complete.\n> \n> We can't break existing code for this, so the behavior of PQexec()\n> can't change. Instead, I propose new functions to add to the API.\n> Internally, PQexec will be reimplemented in terms of these new\n> functions, but old applications shouldn't notice any difference.\n> \n> \n> The new functions are:\n> \n> \tbool PQsendQuery (PGconn *conn, const char *query);\n> \n> Submits a query without waiting for the result. Returns TRUE if the\n> query has been successfully dispatched, otherwise FALSE (in the FALSE\n> case, an error message is left in conn->errorMessage).\n> \n> \tPGresult* PQgetResult (PGconn *conn);\n> \n> Waits for input from the backend, and consumes input until (a) a result is\n> available, (b) the current query is over, or (c) a copy in/out operation\n> is detected. 
NULL is returned if the query is over; in all other cases a\n> suitable PGresult is returned (which the caller must eventually free).\n> Note that no actual \"wait\" will occur if the necessary input has already\n> been consumed; see below.\n> \n> \tbool PQisBusy (PGconn *conn);\n> \n> Returns TRUE if a query operation is busy (that is, a call to PQgetResult\n> would block waiting for more input). Returns FALSE if PQgetResult would\n> return immediately.\n> \n> \tvoid PQconsumeInput (PGconn *conn);\n> \n> This can be called at any time to check for and process new input from\n> the backend. It returns no status indication, but after calling it\n> the application can use PQisBusy() and/or PQnotifies() to see if a query\n> was completed or a NOTIFY message arrived. This function will never wait\n> for more input to arrive.\n> \n> \tint PQsocket (PGconn *conn);\n> \n> Returns the Unix file descriptor for the socket connection to the backend,\n> or -1 if there is no open connection. This is a violation of modularity,\n> of course, but there is no alternative: an application that needs\n> asynchronous execution needs to be able to use select() to wait for input\n> from either the backend or any other input streams it may have. To use\n> select() the underlying socket must be made visible.\n> \n> \tPGnotify *PQnotifies (PGconn *conn);\n> \n> This function doesn't change; we just observe that notifications may\n> become available as a side effect of executing either PQgetResult() or\n> PQconsumeInput(), not just PQexec().\n> \n> \tvoid PQrequestCancel (PGconn *conn);\n> \n> Issues a cancel request if possible. There is no direct way to tell whether\n> this has any effect ... see discussion below.\n> \n> \n> Discussion:\n> \n> An application can continue to use PQexec() as before, and notice\n> very little difference in behavior.\n> \n> Applications that want to be able to handle multiple results from a\n> single query should replace PQexec calls with logic like this:\n> \n> \t// Submit the query\n> \tif (! PQsendQuery(conn, query))\n> \t\treportTheError();\n> \t// Wait for and process result(s)\n> \twhile ((result = PQgetResult(conn)) != NULL) {\n> \t\tswitch (PQresultStatus(result)) {\n> \t\t... process result, for example:\n> \t\tcase PGRES_COPY_IN:\n> \t\t\t// ... copy data here ...\n> \t\t\tif (PQendcopy(conn))\n> \t\t\t\treportTheError();\n> \t\t\tbreak;\n> \t\t...\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \t// When fall out of loop, we're done and ready for a new query\n> \n> Note that PQgetResult will always report errors by returning a PGresult\n> with status PGRES_NONFATAL_ERROR or PGRES_FATAL_ERROR, not by returning\n> NULL (since NULL implies non-error termination of the processing loop).\n> \n> PQexec() will be implemented as follows:\n> \n> \tif (! PQsendQuery(conn, query))\n> \t\treturn makeEmptyPGresult(conn, PGRES_FATAL_ERROR);\n> \tlastResult = NULL;\n> \twhile ((result = PQgetResult(conn)) != NULL) {\n> \t\tPQclear(lastResult);\n> \t\tlastResult = result;\n> \t}\n> \treturn lastResult;\n> \n> This maintains the current behavior that the last result of a series\n> of commands is returned by PQexec. 
(The old implementation is only\n> capable of doing that correctly in a limited set of cases, but in the\n> cases where it behaves usefully at all, that's how it behaves.)\n> \n> There is a small difference in behavior, which is that PQexec will now\n> return a PGresult with status PGRES_FATAL_ERROR in cases where the old\n> implementation would just have returned NULL (and set conn->errorMessage).\n> However, any correctly coded application should handle this the same way.\n> \n> In the above examples, the frontend application is still synchronous: it\n> blocks while waiting for the backend to reply to a query. This is often\n> undesirable, since the application may have other work to do, such as\n> responding to user input. Applications can now handle that by using\n> PQisBusy and PQconsumeInput along with PQsendQuery and PQgetResult.\n> \n> The general idea is that the application's main loop will use select()\n> to wait for input (from either the backend or its other input sources).\n> When select() indicates that input is pending from the backend, the app\n> will call PQconsumeInput, followed by checking PQisBusy and/or PQnotifies\n> to see what has happened. If PQisBusy returns FALSE then PQgetResult\n> can safely be called to obtain and process a result without blocking.\n> \n> Note also that NOTIFY messages can arrive asynchronously from the backend.\n> They can be detected *without issuing a query* by calling PQconsumeInput\n> followed by PQnotifies. I expect a lot of people will build \"partially\n> async\" applications that detect notifies this way but still do all their\n> queries through PQexec (or better, PQsendQuery followed by a synchronous\n> PQgetResult loop). This compromise allows notifies to be detected without\n> wasting time by issuing null queries, yet the basic logic of issuing a\n> series of queries remains simple.\n> \n> Finally, since the application can retain control while waiting for a\n> query response, it becomes meaningful to try to cancel a query in progress.\n> This is done by calling PQrequestCancel(). Note that PQrequestCancel()\n> may not have any effect --- if there is no query in progress, or if the\n> backend has already finished the query, then it *will* have no effect.\n> The application must continue to follow the result-reading protocol after\n> issuing a cancel request. If the cancel is successful, its effect will be\n> to cause the current query to fail and return an error message.\n> \n> \n> PROTOCOL CHANGES:\n> \n> We should change the protocol version number to 2.0.\n> It would be possible for the backend to continue to support 1.0 clients,\n> if you think it's worth the trouble to do so.\n> \n> 1. New message type:\n> \n> Command Done\n> \tByte1('Z')\n> \n> The backend will emit this message at completion of processing of every\n> command string, just before it resumes waiting for frontend input.\n> This change eliminates libpq's current hack of issuing empty queries to\n> see whether the backend is done. Note that 'Z' must be emitted after\n> *every* query or function invocation, no matter how it terminated.\n> \n> 2. The RowDescription ('T') message is extended by adding a new value\n> for each field. Just after the type-size value, there will now be\n> an int16 \"atttypmod\" value. (Would someone provide text specifying\n> exactly what this value means?) libpq will store this value in\n> a new \"adtmod\" field of PGresAttDesc structs.\n> \n> 3. 
The \"Start Copy In\" response message is changed from 'D' to 'G',\n> and the \"Start Copy Out\" response message is changed from 'B' to 'H'.\n> These changes eliminate potential confusion with the data row messages,\n> which also have message codes 'D' and 'B'.\n> \n> 4. The frontend may request cancellation of the current query by sending\n> a single byte of OOB (out-of-band) data. The contents of the data byte\n> are irrelevant, since the cancellation will be triggered by the associated\n> signal and not by the data itself. (But we should probably specify that\n> the byte be zero, in case we later think of a reason to have different\n> kinds of OOB messages.) There is no specific reply to this message.\n> If the backend does cancel a query, the query terminates with an ordinary\n> error message indicating that the query was cancelled.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 4 Jul 1998 20:05:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, just wondering were we are with this. Can you update libpq.3?\n\nWill do. I've been busy with other stuff and have not had time to spend\non pgsql, but will try to push it up the priority list.\n\n> Are the sgml sources updated with the protocol changes?\n\nYes, those are done. I had been hoping not to have to update libpq.3\nmanually, that's all...\n\n> Also, are these items completed? How about our cancel query key? I\n> think it is random/secure enough for our purposes. Can you make the\n> changes, or do you need changes from me?\n\nI haven't looked to see what's been done there --- have you finished an\ninitial implementation, or is the cancel-via-postmaster-instead-of-OOB\nchange still just at the talk stage? I can handle the libpq end of it,\nbut feel less secure about modifying the postmaster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Jul 1998 12:05:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, just wondering were we are with this. Can you update libpq.3?\n> \n> Will do. I've been busy with other stuff and have not had time to spend\n> on pgsql, but will try to push it up the priority list.\n> \n> > Are the sgml sources updated with the protocol changes?\n> \n> Yes, those are done. I had been hoping not to have to update libpq.3\n> manually, that's all...\n> \n> > Also, are these items completed? How about our cancel query key? I\n> > think it is random/secure enough for our purposes. Can you make the\n> > changes, or do you need changes from me?\n> \n> I haven't looked to see what's been done there --- have you finished an\n> initial implementation, or is the cancel-via-postmaster-instead-of-OOB\n> change still just at the talk stage? I can handle the libpq end of it,\n> but feel less secure about modifying the postmaster.\n\nPerhaps you can work up a patch, and I can review it, or give ideas on\nhow to proceed. 
The actual protocol layers in the postmaster,\nespecially for authentication, are very hard for me to understand.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Jul 1998 21:35:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" } ]
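A minimal sketch of the asynchronous query loop described in the thread above, using only the calls named in the proposal (PQsendQuery, PQconsumeInput, PQisBusy, PQgetResult, PQsocket, PQnotifies); the helper name run_query and the error handling are illustrative assumptions, not part of the proposal itself:

#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Hypothetical helper: submit one query string and drain every result
 * it produces, without ever blocking inside libpq. */
static void run_query(PGconn *conn, const char *query)
{
    PGresult *result;
    PGnotify *notify;

    if (!PQsendQuery(conn, query))
    {
        /* per the proposal, the error text is left in the connection */
        fprintf(stderr, "PQsendQuery failed\n");
        return;
    }

    for (;;)
    {
        /* Wait for backend input only while a result is incomplete; a
         * real application would watch its other descriptors here too. */
        while (PQisBusy(conn))
        {
            int sock = PQsocket(conn);
            fd_set rfds;

            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
                return;             /* select() failed; give up */
            PQconsumeInput(conn);   /* absorb whatever has arrived */
        }

        /* PQisBusy returned FALSE, so this call cannot block. */
        result = PQgetResult(conn);
        if (result == NULL)
            break;                  /* the query string is complete */
        /* ... inspect PQresultStatus(result) and the tuples here ... */
        PQclear(result);
    }

    /* NOTIFY messages may have been collected as a side effect. */
    while ((notify = PQnotifies(conn)) != NULL)
        free(notify);
}

When the application has nothing else to wait on, this loop degenerates to the synchronous PQexec behavior, which is how the proposal reimplements PQexec internally.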
[ { "msg_contents": "At 4:18 PM 4/28/98, Jose' Soares Da Silva wrote:\n>stuff about copy vs. \\copy\n\nIn at least one instance, the reason for 2 versions is that copy is faster\n(presumably) than \\copy, but for some sort of security reason, \\copy is\n\"safer\" because it goes through the PostgreSQL backend.\n\nAt least, that's my understanding based on a message I got from psql about\nme not having permission to use copy, so use \\copy instead. [See mail from\na few days ago.]\n\nOf course, \\copy is the one that won't allow a delimiter other than \\t\n(tab), so that kinda screwed me over for awhile.\n\nThanks to some kind list members I got a Perl and a sed script to change my\ndelimiters, but it actually turned out that I could just change my\ndelimiter to \\t (tab) in my export package or a text editor, once I was\ntold that \\t (tab) was the default, which I couldn't find documented\nanywhere (maybe I missed it...) I could even almost read the Perl script,\nexcept for the regexp part.\n\nSo, suggestions for postgres hackers/documenters:\n\n#1. Modify the docs to explicitly state that \\t (tab) is the default delimiter.\n#2. Modify the docs to explicitly state what form the argument to USING\nDELIMITERS can take.\n [Presumably just one character, but I didn't try it with multiple. I\n*CAN'T* try it.]\n#3. Modify \\copy to match copy in syntax (IE include the delimiter stuff).\n#4. Beef up the FAQ about importing tables and the copy command,\nparticularly for folks who can't use copy, and mention options such as\nexporting as tab separated text or altering the separator charachter to \\t\n(tab).\n\n--\n--\n-- \"TANSTAAFL\" Rich [email protected]\n\n\n", "msg_date": "Tue, 28 Apr 1998 14:46:34 -0500", "msg_from": "[email protected] (Richard Lynch)", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] copy command" }, { "msg_contents": "> \n> At 4:18 PM 4/28/98, Jose' Soares Da Silva wrote:\n> >stuff about copy vs. \\copy\n> \n> In at least one instance, the reason for 2 versions is that copy is faster\n> (presumably) than \\copy, but for some sort of security reason, \\copy is\n> \"safer\" because it goes through the PostgreSQL backend.\n> \n> At least, that's my understanding based on a message I got from psql about\n> me not having permission to use copy, so use \\copy instead. [See mail from\n> a few days ago.]\n> \n> Of course, \\copy is the one that won't allow a delimiter other than \\t\n> (tab), so that kinda screwed me over for awhile.\n> \n> Thanks to some kind list members I got a Perl and a sed script to change my\n> delimiters, but it actually turned out that I could just change my\n> delimiter to \\t (tab) in my export package or a text editor, once I was\n> told that \\t (tab) was the default, which I couldn't find documented\n> anywhere (maybe I missed it...) I could even almost read the Perl script,\n> except for the regexp part.\n> \n> So, suggestions for postgres hackers/documenters:\n> \n> #1. Modify the docs to explicitly state that \\t (tab) is the default delimiter.\n\nAdded.\n\n> #2. Modify the docs to explicitly state what form the argument to USING\n> DELIMITERS can take.\n\nDone.\n\n> [Presumably just one character, but I didn't try it with multiple. I\n> *CAN'T* try it.]\n> #3. Modify \\copy to match copy in syntax (IE include the delimiter stuff).\n\nAdded to TODO lst.\n\n> #4. 
Beef up the FAQ about importing tables and the copy command,\n> particularly for folks who can't use copy, and mention options such as\n> exporting as tab separated text or altering the separator character to \\t\n> (tab).\n\nNot sure about this one. Someone is working on a newbies FAQ.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 22:47:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] copy command" } ]
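Staying with C, a rough sketch of the server-side COPY with a non-default delimiter, as discussed in the thread above; the table name mytab and the sample rows are made-up, while PQputline and PQendcopy are the existing libpq copy calls:

#include <stdio.h>
#include "libpq-fe.h"

/* Hypothetical example: load two comma-delimited rows into a table
 * named mytab through COPY ... FROM stdin. */
static int copy_in_example(PGconn *conn)
{
    PGresult *res = PQexec(conn, "COPY mytab FROM stdin USING DELIMITERS ','");

    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        PQclear(res);
        return -1;              /* e.g. not permitted to use COPY */
    }
    PQclear(res);

    /* One line per row; fields separated by the chosen delimiter. */
    PQputline(conn, "1,first row\n");
    PQputline(conn, "2,second row\n");
    PQputline(conn, "\\.\n"); /* terminator line: backslash period */

    return PQendcopy(conn);     /* 0 on success */
}

Users without permission for the SQL-level COPY would still fall back to psql's \copy, which at this point in the thread only handles the tab delimiter.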
[ { "msg_contents": "On Mon, 27 Apr 1998, Peter Stockwell wrote:\n\n> Hi Ryan\n> \n> So far so good. For some reason, the DEC/alpha patch didn't recognize\n> your diffs and rather than fathom that out, I installed them manually.\n\n\tThey were unified diffs, and I have known some versions of patch\nto not like those. \n\n> Compilation (gcc 2.7.2.3) completed just fine. A silly configuration\n> problem has caused a delay in trying the regression tests for the\n> patched version, but I would expect it to complete OK. I'll be in\n> touch to confirm this.\n....\n> I have now run the regression tests on the retrieved snapshot and\n> achieved the same result as before (time failures, others trivial), so\n> it looks successful. I have done the build using gcc (2.7.2.3) - I\n> have not tried to use the native cc.\n\n\tGood! At least I have not broken anything major! I think that\nshould prove that my patches don't break Dec/Alpha. Thanks for your help. \n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n\n", "msg_date": "Tue, 28 Apr 1998 18:49:01 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Patch to remove -Dalpha for Alphas..." } ]
[ { "msg_contents": "\nI am trying to use cvsup to get the latest but keep failing as follows:\n\nleslie:~$ cvsup sup.pgsql\nConnected to postgresql.org\nUpdater failed: Premature EOF from server\nWill retry at 04:40:54\n\n\nHere is my sup file\n\n*default release=cvs\n*default prefix=/cvs\n*default backup compress use-rel-suffix\n\npgsql host=postgresql.org base=/cvs/pgsql delete \n\n\nAny thoughts?\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Tue, 28 Apr 1998 22:40:23 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "CVSup help??" }, { "msg_contents": "> *default release=cvs\n> *default prefix=/cvs\n> *default backup compress use-rel-suffix\n> \n> pgsql host=postgresql.org base=/cvs/pgsql delete\n\nI haven't seen any problems recently. My CVSup file follows...\n\n - Tom\n\n# This file represents the standard CVSup distribution file\n# for the PostgreSQL ORDBMS project\n# Modified by [email protected] 1997-08-28\n# - Point to my local snapshot source tree\n#\n# Defaults that apply to all the collections\n*default host=postgresql.org\n*default compress\n*default release=cvs\n*default delete use-rel-suffix\n#*default tag=.\n#*default tag=cvs\n#*default date=97.08.29.00.00.00\n\n# base directory points to where CVSup will store its 'bookmarks'\nfile(s)\n# will create subdirectory sup/\n*default base=/opt/postgres # /usr/local/pgsql\n\n# prefix directory points to where CVSup will store the actual\ndistribution(s)\n*default prefix=/opt/postgres/cvs # /usr/local/pgsql\n\n# complete distribution, including all below\npgsql\n\n# individual distributions vs 'the whole thing'\n# pgsql-doc\n# pgsql-perl5\n# pgsql-src\n", "msg_date": "Wed, 29 Apr 1998 12:02:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "On Tue, 28 Apr 1998, David Gould wrote:\n\n> \n> I am trying to use cvsup to get the latest but keep failing as follows:\n> \n> leslie:~$ cvsup sup.pgsql\n> Connected to postgresql.org\n> Updater failed: Premature EOF from server\n> Will retry at 04:40:54\n> \n> \n> Here is my sup file\n> \n> *default release=cvs\n> *default prefix=/cvs\n> *default backup compress use-rel-suffix\n> \n> pgsql host=postgresql.org base=/cvs/pgsql delete \n> \n> \n> Any thoughts?\n\nHrmmm...you have no 'tag=.' line, for starters...that's the only thing\nthat jumps out at me though...\n\nJust tested the server from here, and all appears to be well with the\nserver...just removed the 'tag=.' line that I have in mine, and that's\nright too...just pulls down the CVS/RCS files directly ...\n\nAnyone else havign similar problems?\n\n\n\n", "msg_date": "Wed, 29 Apr 1998 08:13:57 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" 
}, { "msg_contents": "> On Tue, 28 Apr 1998, David Gould wrote:\n> > \n> > I am trying to use cvsup to get the latest but keep failing as follows:\n> > \n> > leslie:~$ cvsup sup.pgsql\n> > Connected to postgresql.org\n> > Updater failed: Premature EOF from server\n> > Will retry at 04:40:54\n> > \n> > \n> > Here is my sup file\n> > \n> > *default release=cvs\n> > *default prefix=/cvs\n> > *default backup compress use-rel-suffix\n> > \n> > pgsql host=postgresql.org base=/cvs/pgsql delete \n> > \n> > \n> > Any thoughts?\n> \n> Hrmmm...you have no 'tag=.' line, for starters...that's the only thing\n> that jumps out at me though...\n\nThat is what I meant to do. I want the CVS files.\n \n> Just tested the server from here, and all appears to be well with the\n> server...just removed the 'tag=.' line that I have in mine, and that's\n> right too...just pulls down the CVS/RCS files directly ...\n\nSo the missing tag=. line is not the problem?\n \n> Anyone else havign similar problems?\n\nI have tried this several times on two different evenings. Is there a time\nwindow? Or is there a maximum connection count?\n\nAny thoughts on how to debug this thing?\n\nthanks\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Wed, 29 Apr 1998 11:47:07 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "On Wed, 29 Apr 1998, David Gould wrote:\n\n> > Just tested the server from here, and all appears to be well with the\n> > server...just removed the 'tag=.' line that I have in mine, and that's\n> > right too...just pulls down the CVS/RCS files directly ...\n> \n> So the missing tag=. line is not the problem?\n\n\tNope, that one is correct...\n\n> > Anyone else havign similar problems?\n> \n> I have tried this several times on two different evenings. Is there a time\n> window? Or is there a maximum connection count?\n\n\t5, but I don't think its ever hit that max, and I know the error\nmessage is different then that if it had...I get it all the time at\nFreeBSD :(\n\n> Any thoughts on how to debug this thing?\n\n\tThis is under Linux, correct? Do you have anything like 'truss'\nor 'ktrace' that you can figure out where in the code its dying?\n\n\n", "msg_date": "Wed, 29 Apr 1998 14:50:23 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "Sometimes I just delete the whole thing and re-cvsup. Just an idea.\n\n> \n> > On Tue, 28 Apr 1998, David Gould wrote:\n> > > \n> > > I am trying to use cvsup to get the latest but keep failing as follows:\n> > > \n> > > leslie:~$ cvsup sup.pgsql\n> > > Connected to postgresql.org\n> > > Updater failed: Premature EOF from server\n> > > Will retry at 04:40:54 \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 15:09:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" 
}, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> This is under Linux, correct? Do you have anything like 'truss' or\n> 'ktrace' that you can figure out where in the code its dying?\n\nI've been using cvsup without incident for a few weeks on Linux (RH5),\nto get the CVS files. The relevant Linux command is strace.\n", "msg_date": "29 Apr 1998 20:18:13 +0100", "msg_from": "Bruce Stephens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "> On Wed, 29 Apr 1998, David Gould wrote:\n> \n> > > Just tested the server from here, and all appears to be well with the\n> > > server...just removed the 'tag=.' line that I have in mine, and that's\n> > > right too...just pulls down the CVS/RCS files directly ...\n> > \n> > So the missing tag=. line is not the problem?\n> \n> \tNope, that one is correct...\n> \n> > > Anyone else havign similar problems?\n> > \n> > I have tried this several times on two different evenings. Is there a time\n> > window? Or is there a maximum connection count?\n> \n> \t5, but I don't think its ever hit that max, and I know the error\n> message is different then that if it had...I get it all the time at\n> FreeBSD :(\n> \n> > Any thoughts on how to debug this thing?\n> \n> \tThis is under Linux, correct? Do you have anything like 'truss'\n> or 'ktrace' that you can figure out where in the code its dying?\n> \n\nOf course, Linux has everything ;-)\n\nIn this case it is strace. I will try it tonight. Although, the message\nkinda implies that the server end is refusing or dropping the connection.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Wed, 29 Apr 1998 12:38:05 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "> On Tue, 28 Apr 1998, David Gould wrote:\n> \n> > \n> > I am trying to use cvsup to get the latest but keep failing as follows:\n> > \n> > leslie:~$ cvsup sup.pgsql\n> > Connected to postgresql.org\n> > Updater failed: Premature EOF from server\n> > Will retry at 04:40:54\n> > \n> > \n> > Here is my sup file\n> > \n> > *default release=cvs\n> > *default prefix=/cvs\n> > *default backup compress use-rel-suffix\n> > \n> > pgsql host=postgresql.org base=/cvs/pgsql delete \n> > \n> > \n> > Any thoughts?\n> \n\nI ran strace on it. Here is the relevant dialog. It looks like the server\njust hangs up on me. Is there a log file on the server that might indicate\nwhy?\n\n--------------\nconnect(4, {sin_family=AF_INET, sin_port=htons(5999), sin_addr=inet_addr(\"209.47.148.214\")}, 16) = 0\n\nwrite(1, \"Connected to postgresql.org\\n\", 28Connected to postgresql.org\n) = 28\n\n\nread(4, \"OK 15 4 REL_15_2 CVSup server re\"..., 8192) = 36\nwrite(4, \"PROTO 15 4 REL_15_2\\n\", 20) = 20\n\nwrite(4, \"USER ? 
leslie.illustra.com\\n\", 27) = 27\nread(4, \"OK\\n\", 8192) = 3\n\nwrite(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 25) = 25\nread(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 8192) = 25\n\nwrite(4, \"COLL pgsql cvs 2 66499\\n.\\n.\\n\", 27) = 27\nread(4, \"COLL pgsql cvs 66499\\nPRFX /usr/\"..., 8192) = 49\n\nbind(5, {sin_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"158.58.56.127\")}, 16) = 0\nlisten(5, 8) = 0\nwrite(4, \"PORT 158 58 56 127 5814\\n\", 24) = 24\n\naccept(5, {sin_family=AF_INET, sin_port=htons(47873), sin_addr=inet_addr(\"24.0.0.0\")}, [16]) = 6\nclose(5) = 0\n\nwrite(4, \"COLL pgsql cvs\\n\", 15) = 15\nwrite(4, \"x\\1\\322\\343\\2\\10\", 6) = 6\nwrite(4, \"0\\0\\0h\\0009\", 6) = 6\nwrite(4, \".\\n\", 2) = 2\n\nread(6, 0x819400c, 8192) = -1 ECONNRESET (Connection reset by peer)\n\n\nwrite(1, \"Updater failed: Premature EOF fr\"..., 42Updater failed: Premature EOF from server\n) = 42\n----------------\n\nThanks\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n\n", "msg_date": "Thu, 30 Apr 1998 20:53:45 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "\nHere is what the log file shows for illustra:\n\n# grep illustra !$\ngrep illustra cvsupd\nApr 27 03:36:14 hub cvsupd[15904]: +757 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 27 04:09:33 hub cvsupd[20474]: +759 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 29 01:36:58 hub cvsupd[1662]: +832 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 30 04:18:19 hub cvsupd[2873]: +866 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 30 04:18:34 hub cvsupd[2990]: +867 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 30 04:20:34 hub cvsupd[4246]: +868 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\nApr 30 23:09:27 hub cvsupd[9460]: +891 [email protected]\n(leslie.illustra.com) [REL_15_2/15.4]\n\nNot sure why the ?@ifmxoak though...everyone else appears to have a proper\nuserid in there...\n\nOn Thu, 30 Apr 1998, David Gould wrote:\n\n> \n> I ran strace on it. Here is the relevant dialog. It looks like the server\n> just hangs up on me. Is there a log file on the server that might indicate\n> why?\n> \n> --------------\n> connect(4, {sin_family=AF_INET, sin_port=htons(5999), sin_addr=inet_addr(\"209.47.148.214\")}, 16) = 0\n> \n> write(1, \"Connected to postgresql.org\\n\", 28Connected to postgresql.org\n> ) = 28\n> \n> \n> read(4, \"OK 15 4 REL_15_2 CVSup server re\"..., 8192) = 36\n> write(4, \"PROTO 15 4 REL_15_2\\n\", 20) = 20\n> \n> write(4, \"USER ? 
leslie.illustra.com\\n\", 27) = 27\n> read(4, \"OK\\n\", 8192) = 3\n> \n> write(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 25) = 25\n> read(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 8192) = 25\n> \n> write(4, \"COLL pgsql cvs 2 66499\\n.\\n.\\n\", 27) = 27\n> read(4, \"COLL pgsql cvs 66499\\nPRFX /usr/\"..., 8192) = 49\n> \n> bind(5, {sin_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"158.58.56.127\")}, 16) = 0\n> listen(5, 8) = 0\n> write(4, \"PORT 158 58 56 127 5814\\n\", 24) = 24\n> \n> accept(5, {sin_family=AF_INET, sin_port=htons(47873), sin_addr=inet_addr(\"24.0.0.0\")}, [16]) = 6\n> close(5) = 0\n> \n> write(4, \"COLL pgsql cvs\\n\", 15) = 15\n> write(4, \"x\\1\\322\\343\\2\\10\", 6) = 6\n> write(4, \"0\\0\\0h\\0009\", 6) = 6\n> write(4, \".\\n\", 2) = 2\n> \n> read(6, 0x819400c, 8192) = -1 ECONNRESET (Connection reset by peer)\n> \n> \n> write(1, \"Updater failed: Premature EOF fr\"..., 42Updater failed: Premature EOF from server\n> ) = 42\n> ----------------\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 1 May 1998 01:00:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "> \n> Here is what the log file shows for illustra:\n> \n> # grep illustra !$\n> grep illustra cvsupd\n> Apr 27 03:36:14 hub cvsupd[15904]: +757 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 27 04:09:33 hub cvsupd[20474]: +759 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 29 01:36:58 hub cvsupd[1662]: +832 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 30 04:18:19 hub cvsupd[2873]: +866 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 30 04:18:34 hub cvsupd[2990]: +867 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 30 04:20:34 hub cvsupd[4246]: +868 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 30 23:09:27 hub cvsupd[9460]: +891 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> \n> Not sure why the ?@ifmxoak though...everyone else appears to have a proper\n> userid in there...\n> \n> On Thu, 30 Apr 1998, David Gould wrote:\n> \n> > \n> > I ran strace on it. Here is the relevant dialog. It looks like the server\n> > just hangs up on me. Is there a log file on the server that might indicate\n> > why?\n> > \n> > --------------\n> > connect(4, {sin_family=AF_INET, sin_port=htons(5999), sin_addr=inet_addr(\"209.47.148.214\")}, 16) = 0\n> > \n> > write(1, \"Connected to postgresql.org\\n\", 28Connected to postgresql.org\n> > ) = 28\n> > \n> > \n> > read(4, \"OK 15 4 REL_15_2 CVSup server re\"..., 8192) = 36\n> > write(4, \"PROTO 15 4 REL_15_2\\n\", 20) = 20\n> > \n> > write(4, \"USER ? 
leslie.illustra.com\\n\", 27) = 27\n> > read(4, \"OK\\n\", 8192) = 3\n> > \n> > write(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 25) = 25\n> > read(4, \"ATTR 6\\n0\\ne7\\ne1\\nf1\\nf1\\n9\\n.\\n\"..., 8192) = 25\n> > \n> > write(4, \"COLL pgsql cvs 2 66499\\n.\\n.\\n\", 27) = 27\n> > read(4, \"COLL pgsql cvs 66499\\nPRFX /usr/\"..., 8192) = 49\n> > \n> > bind(5, {sin_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"158.58.56.127\")}, 16) = 0\n> > listen(5, 8) = 0\n> > write(4, \"PORT 158 58 56 127 5814\\n\", 24) = 24\n> > \n> > accept(5, {sin_family=AF_INET, sin_port=htons(47873), sin_addr=inet_addr(\"24.0.0.0\")}, [16]) = 6\n> > close(5) = 0\n> > \n> > write(4, \"COLL pgsql cvs\\n\", 15) = 15\n> > write(4, \"x\\1\\322\\343\\2\\10\", 6) = 6\n> > write(4, \"0\\0\\0h\\0009\", 6) = 6\n> > write(4, \".\\n\", 2) = 2\n> > \n> > read(6, 0x819400c, 8192) = -1 ECONNRESET (Connection reset by peer)\n> > \n> > \n> > write(1, \"Updater failed: Premature EOF fr\"..., 42Updater failed: Premature EOF from server\n> > ) = 42\n> > ----------------\n\nOK, I think I'm starting to understand.\n\nopen(\"/var/run/utmp\", O_RDONLY) = 5\nread(5, \"\\10\\0\\0\\0\\4\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., 56) = 56\nread(5, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., 56) = 56\n... a bunch more reads from /var/run/utmp\nread(5, \"\\\\\\361\\377\\277\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., 56) = 40\nclose(5) = 0\nuname({sys=\"Linux\", node=\"leslie.illustra.com\", ...}) = 0\nwrite(4, \"USER ? leslie.illustra.com\\n\", 27) = 27\n\nSo you get the user '?' because that is what cvsup sent after reading through\nmy /var/run/utmp.\n\nI am using the statically linked cvsup client on a glibc (RH 5) linux system.\nI tried the dynamically linked one and it had real problems loading shared libs\neven though I have an old libc5 available.\n\nSo, my guess is that the statically linked libc in cvsup is not understanding\nthe format of utmp on a glibc system. Hence, it cannot figure out my user\nname to send to your server, hence your server pulls the plug on me.\n\nAny idea on how to get a glibc version of cvsup? Or should I just go to\nthe DEC Modula-3 site and install all that (I am told this is a bit of\na production) and then try to get the source for cvsup and build it?\n\nThanks\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Thu, 30 Apr 1998 21:12:41 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "On Thu, 30 Apr 1998, David Gould wrote:\n\n> Any idea on how to get a glibc version of cvsup? Or should I just go to\n> the DEC Modula-3 site and install all that (I am told this is a bit of\n> a production) and then try to get the source for cvsup and build it?\n\n\tThat's about it...:(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 1 May 1998 01:28:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" 
}, { "msg_contents": "> I am using the staticly linked cvsup client on a glibc (RH 5) linux system.\n> I tried the dynamic linked one and it had real problems loading shared libs\n> even though I have an old libc5 available.\n\nGood debug job, and glad the problem is now understood.\n\n> \n> So, my guess is that the staticly linked libc in cvsup is not understanding\n> the format of utmp on a glibc system. Hence, it cannot figure out my user\n> name to send to your server, hence your server pulls the plug on me.\n> \n> Any idea on how to get a glibc version of cvsup? Or should I just go to\n> the DEC Moduala-3 site and install all that (I am told this is a bit of\n> a production) and then try to get the source for cvsup and build it?\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 1 May 1998 00:36:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "Hi,\n\nThis is my last mail before leaving on a 5-day trip, so I'm not sure\nI can be of much help right away.\n\n> > write(4, \"COLL pgsql cvs\\n\", 15) = 15\n> > write(4, \"x\\1\\322\\343\\2\\10\", 6) = 6\n> > write(4, \"0\\0\\0h\\0009\", 6) = 6\n> > write(4, \".\\n\", 2) = 2\n\nIt'll be easier to debug if you turn off compression. Then\neverything will be readable by mortals.\n\n> # grep illustra !$\n> grep illustra cvsupd\n> Apr 27 03:36:14 hub cvsupd[15904]: +757 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 27 04:09:33 hub cvsupd[20474]: +759 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 29 01:36:58 hub cvsupd[1662]: +832 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n> Apr 30 04:18:19 hub cvsupd[2873]: +866 [email protected]\n> (leslie.illustra.com) [REL_15_2/15.4]\n\nWhat did the logs say for the process terminations? Look for lines\nwith \"-757\", \"-759\", \"-832\", \"-866\", and so forth.\n--\n John Polstra [email protected]\n John D. Polstra & Co., Inc. Seattle, Washington USA\n \"Self-knowledge is always bad news.\" -- John Barth\n", "msg_date": "Thu, 30 Apr 1998 22:00:15 -0700", "msg_from": "John Polstra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help?? " }, { "msg_contents": "> \n> Hi,\n> \n> This is my last mail before leaving on a 5-day trip, so I'm not sure\n> I can be of much help right away.\n> \n> > > write(4, \"COLL pgsql cvs\\n\", 15) = 15\n> > > write(4, \"x\\1\\322\\343\\2\\10\", 6) = 6\n> > > write(4, \"0\\0\\0h\\0009\", 6) = 6\n> > > write(4, \".\\n\", 2) = 2\n> \n> It'll be easier to debug if you turn off compression. Then\n> everything will be readable by mortals.\n\nThe relevant part of the dialog w/o compression (cvsup -Z) is:\n\nwrite(4, \"COLL pgsql cvs\\n\", 15) = 15\nwrite(4, \".\\n\", 2) = 2\nwrite(4, \".\\n\", 2) = 2\n\nSo I think the compression was just sending some sort of header for its\nown use.\n\nBtw, are you the cvsup maintainer (author (guru (god)))?\n\n-dg\n\n", "msg_date": "Thu, 30 Apr 1998 22:25:57 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup help??" }, { "msg_contents": "> Btw, are you the cvsup maintainer (author (guru (god)))?\n\nYes (yes (maybe (no))). ;-)\n\nI'm not clear on whether this problem has been solved or not. 
If it\nhasn't, I'd be happy to help you with it when I get back from my trip.\n\nJohn\n--\n John Polstra [email protected]\n John D. Polstra & Co., Inc. Seattle, Washington USA\n \"Self-knowledge is always bad news.\" -- John Barth\n", "msg_date": "Thu, 30 Apr 1998 22:30:34 -0700", "msg_from": "John Polstra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup help?? " } ]
[ { "msg_contents": "That does exist but brings in the same problem as I'm having with JDBC - you\ncan't guarantee that one thread/process will try to send a query while\nanother thread/process is waiting for results.\n\nI'm keeping a keen eye on this proposal, because it would make my life a lot\neasier in making the JDBC driver thread safe, which is one of the main\nthings I'm aiming for 6.4\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of\[email protected]\nSent: Tuesday, April 28, 1998 9:53 PM\nTo: Tom Lane\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [INTERFACES] Revised proposal for libpq and FE/BE protocol\nchanges\n\n\nI suggest the application already has fork or fork/exec to\nimplement an asynchronous design. Does that also keep the\nsocket out of the application's domain?\n\nBob\[email protected]\n\nReceived: from hub.org (hub.org [209.47.148.200])\n\tby humbug.antnet.com (8.8.5/8.8.5) with ESMTP id LAA21503\n\tfor <[email protected]>; Tue, 28 Apr 1998 11:28:48 -0500 (CDT)\nReceived: from localhost (majordom@localhost) by hub.org (8.8.8/8.7.5) with\nSMTP id MAA01511; Tue, 28 Apr 1998 12:23:18 -0400 (EDT)\nReceived: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Tue, 28\nApr 1998 12:23:16 -0400 (EDT)\nReceived: (from majordom@localhost) by hub.org (8.8.8/8.7.5) id MAA01498 for\npgsql-interfaces-outgoing; Tue, 28 Apr 1998 12:23:09 -0400 (EDT)\nReceived: from sss.sss.pgh.pa.us (sss.pgh.pa.us [206.210.65.6]) by hub.org\n(8.8.8/8.7.5) with ESMTP id MAA01401; Tue, 28 Apr 1998 12:22:04 -0400 (EDT)\nReceived: from sss.sss.pgh.pa.us (localhost [127.0.0.1])\n\tby sss.sss.pgh.pa.us (8.8.5/8.8.5) with ESMTP id MAA07043;\n\tTue, 28 Apr 1998 12:21:56 -0400 (EDT)\nTo: [email protected], [email protected]\nSubject: [INTERFACES] Revised proposal for libpq and FE/BE protocol changes\nDate: Tue, 28 Apr 1998 12:21:55 -0400\nMessage-ID: <[email protected]>\nFrom: Tom Lane <[email protected]>\nSender: [email protected]\nPrecedence: bulk\n\nHere is a revised proposal that takes into account the discussions\nof the last few days. Any comments?\n\n\nI propose to revise libpq and modify the frontend/backend protocol\nto provide the following benefits:\n * Provide a clean way of reading multiple results from a single query\n string. Among other things, this solves the problem of allowing a\n single query to return several result sets with different descriptors.\n * Allow a frontend to perform other work while awaiting the result of\n a query.\n * Add the ability to cancel queries in progress.\n * Eliminate the need for frontends to issue dummy queries in order\n to detect NOTIFY responses.\n * Eliminate the need for libpq to issue dummy queries internally\n to determine when a query is complete.\n\nWe can't break existing code for this, so the behavior of PQexec()\ncan't change. Instead, I propose new functions to add to the API.\nInternally, PQexec will be reimplemented in terms of these new\nfunctions, but old applications shouldn't notice any difference.\n\n\nThe new functions are:\n\n\tbool PQsendQuery (PGconn *conn, const char *query);\n\nSubmits a query without waiting for the result. 
Returns TRUE if the\nquery has been successfully dispatched, otherwise FALSE (in the FALSE\ncase, an error message is left in conn->errorMessage).\n\n\tPGresult* PQgetResult (PGconn *conn);\n\nWaits for input from the backend, and consumes input until (a) a result is\navailable, (b) the current query is over, or (c) a copy in/out operation\nis detected. NULL is returned if the query is over; in all other cases a\nsuitable PGresult is returned (which the caller must eventually free).\nNote that no actual \"wait\" will occur if the necessary input has already\nbeen consumed; see below.\n\n\tbool PQisBusy (PGconn *conn);\n\nReturns TRUE if a query operation is busy (that is, a call to PQgetResult\nwould block waiting for more input). Returns FALSE if PQgetResult would\nreturn immediately.\n\n\tvoid PQconsumeInput (PGconn *conn);\n\nThis can be called at any time to check for and process new input from\nthe backend. It returns no status indication, but after calling it\nthe application can use PQisBusy() and/or PQnotifies() to see if a query\nwas completed or a NOTIFY message arrived. This function will never wait\nfor more input to arrive.\n\n\tint PQsocket (PGconn *conn);\n\nReturns the Unix file descriptor for the socket connection to the backend,\nor -1 if there is no open connection. This is a violation of modularity,\nof course, but there is no alternative: an application that needs\nasynchronous execution needs to be able to use select() to wait for input\nfrom either the backend or any other input streams it may have. To use\nselect() the underlying socket must be made visible.\n\n\tPGnotify *PQnotifies (PGconn *conn);\n\nThis function doesn't change; we just observe that notifications may\nbecome available as a side effect of executing either PQgetResult() or\nPQconsumeInput(), not just PQexec().\n\n\tvoid PQrequestCancel (PGconn *conn);\n\nIssues a cancel request if possible. There is no direct way to tell whether\nthis has any effect ... see discussion below.\n\n\nDiscussion:\n\nAn application can continue to use PQexec() as before, and notice\nvery little difference in behavior.\n\nApplications that want to be able to handle multiple results from a\nsingle query should replace PQexec calls with logic like this:\n\n\t// Submit the query\n\tif (! PQsendQuery(conn, query))\n\t\treportTheError();\n\t// Wait for and process result(s)\n\twhile ((result = PQgetResult(conn)) != NULL) {\n\t\tswitch (PQresultStatus(result)) {\n\t\t... process result, for example:\n\t\tcase PGRES_COPY_IN:\n\t\t\t// ... copy data here ...\n\t\t\tif (PQendcopy(conn))\n\t\t\t\treportTheError();\n\t\t\tbreak;\n\t\t...\n\t\t}\n\t\tPQclear(result);\n\t}\n\t// When fall out of loop, we're done and ready for a new query\n\nNote that PQgetResult will always report errors by returning a PGresult\nwith status PGRES_NONFATAL_ERROR or PGRES_FATAL_ERROR, not by returning\nNULL (since NULL implies non-error termination of the processing loop).\n\nPQexec() will be implemented as follows:\n\n\tif (! PQsendQuery(conn, query))\n\t\treturn makeEmptyPGresult(conn, PGRES_FATAL_ERROR);\n\tlastResult = NULL;\n\twhile ((result = PQgetResult(conn)) != NULL) {\n\t\tPQclear(lastResult);\n\t\tlastResult = result;\n\t}\n\treturn lastResult;\n\nThis maintains the current behavior that the last result of a series\nof commands is returned by PQexec. 
(The old implementation is only\ncapable of doing that correctly in a limited set of cases, but in the\ncases where it behaves usefully at all, that's how it behaves.)\n\nThere is a small difference in behavior, which is that PQexec will now\nreturn a PGresult with status PGRES_FATAL_ERROR in cases where the old\nimplementation would just have returned NULL (and set conn->errorMessage).\nHowever, any correctly coded application should handle this the same way.\n\nIn the above examples, the frontend application is still synchronous: it\nblocks while waiting for the backend to reply to a query. This is often\nundesirable, since the application may have other work to do, such as\nresponding to user input. Applications can now handle that by using\nPQisBusy and PQconsumeInput along with PQsendQuery and PQgetResult.\n\nThe general idea is that the application's main loop will use select()\nto wait for input (from either the backend or its other input sources).\nWhen select() indicates that input is pending from the backend, the app\nwill call PQconsumeInput, followed by checking PQisBusy and/or PQnotifies\nto see what has happened. If PQisBusy returns FALSE then PQgetResult\ncan safely be called to obtain and process a result without blocking.\n\nNote also that NOTIFY messages can arrive asynchronously from the backend.\nThey can be detected *without issuing a query* by calling PQconsumeInput\nfollowed by PQnotifies. I expect a lot of people will build "partially\nasync" applications that detect notifies this way but still do all their\nqueries through PQexec (or better, PQsendQuery followed by a synchronous\nPQgetResult loop). This compromise allows notifies to be detected without\nwasting time by issuing null queries, yet the basic logic of issuing a\nseries of queries remains simple.\n\nFinally, since the application can retain control while waiting for a\nquery response, it becomes meaningful to try to cancel a query in progress.\nThis is done by calling PQrequestCancel(). Note that PQrequestCancel()\nmay not have any effect --- if there is no query in progress, or if the\nbackend has already finished the query, then it *will* have no effect.\nThe application must continue to follow the result-reading protocol after\nissuing a cancel request. If the cancel is successful, its effect will be\nto cause the current query to fail and return an error message.\n
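\nAs a further illustration, here is one way the pieces above might be\ntied together in an event-driven client. (A sketch only, not part of\nthe proposal: appFd, maxFd, handleAppInput, processResult and\nhandleNotify are stand-ins for application code, and declarations and\nerror checking are abbreviated.)\n\n\tif (! PQsendQuery(conn, query))\n\t\treportTheError();\n\tdone = 0;\n\twhile (!done) {\n\t\tFD_ZERO(&rmask);\n\t\tFD_SET(PQsocket(conn), &rmask);\n\t\tFD_SET(appFd, &rmask);\t\t// some other input source\n\t\tselect(maxFd + 1, &rmask, (fd_set *) NULL, (fd_set *) NULL,\n\t\t\t   (struct timeval *) NULL);\n\t\tif (FD_ISSET(appFd, &rmask))\n\t\t\thandleAppInput();\t// might call PQrequestCancel(conn)\n\t\tif (FD_ISSET(PQsocket(conn), &rmask)) {\n\t\t\tPQconsumeInput(conn);\n\t\t\twhile (!done && !PQisBusy(conn)) {\n\t\t\t\tresult = PQgetResult(conn);\n\t\t\t\tif (result == NULL)\n\t\t\t\t\tdone = 1;\t// query is over\n\t\t\t\telse {\n\t\t\t\t\tprocessResult(result);\n\t\t\t\t\tPQclear(result);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t// NOTIFY responses absorbed along the way are visible now:\n\twhile ((notify = PQnotifies(conn)) != NULL)\n\t\thandleNotify(notify);\n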
The \"Start Copy In\" response message is changed from 'D' to 'G',\nand the \"Start Copy Out\" response message is changed from 'B' to 'H'.\nThese changes eliminate potential confusion with the data row messages,\nwhich also have message codes 'D' and 'B'.\n\n4. The frontend may request cancellation of the current query by sending\na single byte of OOB (out-of-band) data. The contents of the data byte\nare irrelevant, since the cancellation will be triggered by the associated\nsignal and not by the data itself. (But we should probably specify that\nthe byte be zero, in case we later think of a reason to have different\nkinds of OOB messages.) There is no specific reply to this message.\nIf the backend does cancel a query, the query terminates with an ordinary\nerror message indicating that the query was cancelled.\n\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 29 Apr 1998 08:01:29 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Revised proposal for libpq and FE/BE protocol\n\tchanges" } ]
[ { "msg_contents": "Is there a list of ANSI return code and error messages somewhere? I haven't\nfound anything yet by grep'ping through the SQL files Tom send me. All I\nknow is (and that was inside these files too) that NOT FOUND is 100.\n\nAny ideas?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 29 Apr 1998 10:47:44 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "ANSI error messages" }, { "msg_contents": "Michael,\n\nThe SQLSTATE codes appear in section 22 of SQL 92. See\n http://csf.colorado.edu/local/sql/22_status.txt\n\nApparently, the SQLCODE values are deprecated.\n\nMichael\[email protected]\n\n :}Is there a list of ANSI return code and error messages somewhere? I haven't\n :}found anything yet by grep'ping through the SQL files Tom send me. All I\n :}know is (and that was inside these files too) that NOT FOUND is 100.\n :}\n :}Any ideas?\n :}\n :}Michael\n :}-- \n :}Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n :}[email protected] | Europark A2, Adenauerstr. 20\n :}[email protected] | 52146 Wuerselen\n :}Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n :}Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n :}\n", "msg_date": "Wed, 29 Apr 1998 11:00:38 -0700", "msg_from": "Michael Yount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ANSI error messages " }, { "msg_contents": "Michael Yount writes:\n> The SQLSTATE codes appear in section 22 of SQL 92. See\n> http://csf.colorado.edu/local/sql/22_status.txt\n> \n> Apparently, the SQLCODE values are deprecated.\n\nThanks. Found them. Since we do not have SQLSTATE yet, I worry about SQLCODE\nonly.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 30 Apr 1998 09:22:56 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ANSI error messages" } ]
[ { "msg_contents": "I have had a report from someone using Servlets, that they are opening\nsomething like 5 to 10 connections from a single Java Servlet, which then\nbrokers them to clients.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tom Lane\nSent: Wednesday, April 29, 1998 3:28 PM\nTo: [email protected]; [email protected]\nSubject: [INTERFACES] Re: [HACKERS] Revised proposal for libpq and FE/BE\nprotocol changes\n\nIn the current system architecture, much the easiest way to execute\nconcurrent queries is to open up more than one connection. There's\nnothing that says a frontend process can't fire up multiple backend\nprocesses. I think this is probably sufficient, because I don't\nforesee such a thing becoming really popular anyway.\n\n", "msg_date": "Wed, 29 Apr 1998 16:18:22 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Re: [HACKERS] Revised proposal for libpq and FE/BE\n\tprotocol changes" } ]
[ { "msg_contents": "\nSilly question since I work with aix and it doesn't appear to use\nthe version numbers for shared libs...\n\nIs there any purpose to the version numbers that some ports append\nto a shared lib name, besides keeping different versions around?\n\nI've managed to move the port specific code from all of the various\ninterfaces that make shared libs, but I'd like to understand the\nrhyme/reason before I post a patch that breaks all other ports.\n\nUsing libpq as an example, is there a difference to the system if...\n\n$(MAKE) libpq.so\n$(INSTALL) libpq.so libpq.so.1\n$(LN) libpq.so.1 libpq.so\n\n...rather than...\n\n$(MAKE) libpq.so.1\n$(INSTALL) libpq.so.1 libpq.so.1\n$(LN) libpq.so.1 libpq.so\n\n???\n\nIf no difference to the system, the former is _much_ easier to add\nshared lib support for aix and use the %.$(DLSUFFIX) rules in the\nport Makefiles.\n\nThis would be perhaps the final step to removing $(PORTNAME) from the\ncode, these Makefiles would not have to be generated by configure, and\nmakes the interfaces/* Makefiles much cleaner.\n\ndarrenk\n", "msg_date": "Wed, 29 Apr 1998 15:50:21 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Shared libs with version numbers." }, { "msg_contents": "Darren King writes:\n> Is there any purpose to the version numbers that some ports append\n> to a shared lib name, besides keeping different versions around?\n\nYes. Making sure your application doesn't load an incompatible version of\nthe lib.\n\n> Using libpq as an example, is there a difference to the system if...\n> \n> $(MAKE) libpq.so\n> $(INSTALL) libpq.so libpq.so.1\n> $(LN) libpq.so.1 libpq.so\n> \n> ...rather than...\n> \n> $(MAKE) libpq.so.1\n> $(INSTALL) libpq.so.1 libpq.so.1\n> $(LN) libpq.so.1 libpq.so\n\nNo. The file the system knows is libpq.so.1 either way. You might even call\nit foo.bar in your Makefile as long as it is installed as libpq.so.1.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 30 Apr 1998 09:25:02 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared libs with version numbers." }, { "msg_contents": "[email protected] (Darren King) writes:\n\n> Is there any purpose to the version numbers that some ports append\n> to a shared lib name, besides keeping different versions around?\n\nWell, no. That _is_ the point. Sort of. Version numbers make it\npossible to have different generations of shared libraries installed,\nand have different binaries use different ones. You can then install\na new version of the library -- and nothing will use it. Once you\nrecompile some binary, though, it will henceforth dynamically link\nagainst the new one, i.e. the one that it was built for. This is, of\ncourse, important because it allows the interface to change when the\nversion changes, without forcing you to recompile _everything_.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "30 Apr 1998 09:41:10 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared libs with version numbers." } ]
[ { "msg_contents": "Dear all,\n\nCan anyone tell me how to unlocak a vacuum ? Because last time i run the vacuum and stop while running. When i do a vacuum again this morning, it says\n\nmichaely=> vacuum;\nWARN: Can't create a lock file - another vacuum cleaner running ?\n\nThanks for your attention and any solutions will be highly appreciated !\n\nCheers,\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\\n| Michael Yeung \t\t\t\t|\n| Alpha Network Shop\t\t\t|\n| Tel:(02)9413-3886 \tFax:(02)9413-3617\t|\n| Mobile:0411-233-597\t\t\t|\n| mailto:[email protected]\t\t|\n\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\n\n", "msg_date": "Thu, 30 Apr 1998 10:30:28 +1000", "msg_from": "Michael Yeung <[email protected]>", "msg_from_op": true, "msg_subject": "Unlock the vacuum" }, { "msg_contents": "On Thu, 30 Apr 1998, Michael Yeung wrote:\n\n> Dear all,\n> \n> Can anyone tell me how to unlocak a vacuum ? Because last time i run the vacuum and stop while running. When i do a vacuum again this morning, it says\n> \n> michaely=> vacuum;\n> WARN: Can't create a lock file - another vacuum cleaner running ?\n\n\tAssuming that you are *sure* there are no other vacuum processes\nrunning, look for:\n\n\t<PGHOME>/data/base/<database>/pg_vlock\n\n\tAnd remove that...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 30 Apr 1998 00:40:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unlock the vacuum" } ]
[ { "msg_contents": "\nI'm planning on removing the exec from DoExec() and instead just\ndispatch to the appropriate function.\n\nI don't plan on any changes to the usage of \"arguments\" to this new\nprocess, basically I'll just store them somewhere and then the forked\nbackend can process them.\n\nIs there anything I should keep in mind? I'd like this to eventually\nbe integrated into the source tree -- any particular reason why we use\nexec() when we're just re-invoking the same binary?\n\np.s. this is so my ssl patch doesn't have to negotiate twice -- very expensive\n", "msg_date": "Wed, 29 Apr 1998 18:20:55 -0700", "msg_from": "Brett McCormickS <[email protected]>", "msg_from_op": true, "msg_subject": "removing the exec() from doexec()" }, { "msg_contents": "> \n> \n> I'm planning on removing the exec from DoExec() and instead just\n> dispatch to the appropriate function.\n> \n> I don't plan on any changes to the usage of \"arguments\" to this new\n> process, basically I'll just store them somewhere and then the forked\n> backend can process them.\n> \n> Is there anything I should keep in mind? I'd like this to eventually\n> be integrated into the source tree -- any particular reason why we use\n> exec() when we're just re-invoking the same binary?\n> \n> p.s. this is so my ssl patch doesn't have to negotiate twice -- very expensive\n\nNo reason for the exec(). I believe the only advantage is that it gives\nus a separate process name in the 'ps' listing. I have looked into\nsimulating this.\n\nThis exec() takes 15% of our startup time. I have wanted it removed for\nmany releases now. The only problem is to rip out the code that\nre-attached to shared memory and stuff like that, because you will no\nlonger loose the shared memory in the exec(). The IPC code is\ncomplicated, so good luck. I or others can help if you get stuck.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 21:44:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "On Wed, 29 Apr 1998, Bruce Momjian wrote:\n\n> No reason for the exec(). I believe the only advantage is that it gives\n> us a separate process name in the 'ps' listing. I have looked into\n> simulating this.\n\n\tUnder FreeBSD, there is:\n\nsetproctitle(3) - set the process title for ps 1\n\n\tThis isn't available under Solaris though, last I checked...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 29 Apr 1998 23:20:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "\nsure enough.. well, try this on your OS and you can find out if perl\nknows how to change it. it doesn't work under solaris. the args to\nps might be different for your system.\n\nperl -e '$0 = \"it_works!\";system \"ps -p $$\"'\n\nHowever, the args to the processes are so different that it seems easy\nto tell the difference.. if you're a human. computers might have\nmore trouble. I've been known to use \"killall postgres\" (yes, I know,\nI'm bad!!)\n\nI only do it so that I can restart the postmaster. 
Our webserver is\npretty much continually connected, and when it deadlocks, all the\nclients queue up. It would be nice to have a set of commands to show\nyou all connections, the machine/remote port they're from (for\nidentd), the username/dbname they're connected as, when they\nconnected, idle time, etc. like \"finger\" for postgres.\n\nI'm willing to work on it, if someone can point me in the right\ndirection. (First things first though)\n\nOn Wed, 29 April 1998, at 23:20:52, The Hermit Hacker wrote:\n\n> \n> \tUnder FreeBSD, there is:\n> \n> setproctitle(3) - set the process title for ps 1\n> \n> \tThis isn't available under Solaris though, last I checked...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n", "msg_date": "Wed, 29 Apr 1998 19:36:31 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> \n> On Wed, 29 Apr 1998, Bruce Momjian wrote:\n> \n> > No reason for the exec(). I believe the only advantage is that it gives\n> > us a separate process name in the 'ps' listing. I have looked into\n> > simulating this.\n> \n> \tUnder FreeBSD, there is:\n> \n> setproctitle(3) - set the process title for ps 1\n> \n> \tThis isn't available under Solaris though, last I checked...\n\nNot even BSDI, which is BSD 4.4 like FreeBSD.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 22:42:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> \n> \n> sure enough.. well, try this on your OS and you can find out if perl\n> knows how to change it. it doesn't work under solaris. the args to\n> ps might be different for your system.\n> \n> perl -e '$0 = \"it_works!\";system \"ps -p $$\"'\n> \n> However, the args to the processes are so different that it seems easy\n> to tell the difference.. if you're a human. computers might have\n> more trouble. I've been known to use \"killall postgres\" (yes, I know,\n> I'm bad!!)\n\nThe args don't change on a fork() either. The only way is to look at\nthe parent of all the postgres children.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 22:51:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> > On Wed, 29 Apr 1998, Bruce Momjian wrote:\n> > \n> > > No reason for the exec(). I believe the only advantage is that it gives\n> > > us a separate process name in the 'ps' listing. I have looked into\n> > > simulating this.\n> > \n> > \tUnder FreeBSD, there is:\n> > \n> > setproctitle(3) - set the process title for ps 1\n> > \n> > \tThis isn't available under Solaris though, last I checked...\n> \n> Not even BSDI, which is BSD 4.4 like FreeBSD.\n\nubik:~$ uname -a\nLinux ubik 2.0.32 #1 Wed Nov 19 00:46:45 EST 1997 i586 unknown\nubik:~$ perl -e '$0 = \"it_works!\";system \"ps p $$\"'\n PID TTY STAT TIME COMMAND\n 7629 p8 S 0:00 it_works! 
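\n\nIn C the same effect is usually had by overwriting argv[] in place. A\nminimal sketch (illustration only; real implementations also relocate\nthe environment and cope with per-platform quirks):\n\n#include <string.h>\n\nstatic char *ps_buffer;\t\t/* start of the argv area */\nstatic size_t ps_size;\t\t/* how much of it we may scribble on */\n\n/* call this first thing from main() */\nvoid save_ps_args(int argc, char **argv)\n{\n    ps_buffer = argv[0];\n    ps_size = (argv[argc - 1] + strlen(argv[argc - 1]) + 1) - argv[0];\n}\n\nvoid set_ps_title(const char *title)\n{\n    size_t len = strlen(title);\n\n    if (len > ps_size - 1)\n        len = ps_size - 1;\n    memset(ps_buffer, 0, ps_size);\n    memcpy(ps_buffer, title, len);\n}\n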
\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Wed, 29 Apr 1998 19:36:31 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> \n> > > On Wed, 29 Apr 1998, Bruce Momjian wrote:\n> > > \n> > > > No reason for the exec(). I believe the only advantage is that it gives\n> > > > us a separate process name in the 'ps' listing. I have looked into\n> > > > simulating this.\n> > > \n> > > \tUnder FreeBSD, there is:\n> > > \n> > > setproctitle(3) - set the process title for ps 1\n> > > \n> > > \tThis isn't available under Solaris though, last I checked...\n> > \n> > Not even BSDI, which is BSD 4.4 like FreeBSD.\n> \n> ubik:~$ uname -a\n> Linux ubik 2.0.32 #1 Wed Nov 19 00:46:45 EST 1997 i586 unknown\n> ubik:~$ perl -e '$0 = \"it_works!\";system \"ps p $$\"'\n> PID TTY STAT TIME COMMAND\n> 7629 p8 S 0:00 it_works! \n\nLet me clarify. BSDI does not have setproctitle, but the perl test does\nwork, sort of:\n\n$ perl -e '$0 = \"it_works!\";system \"ps -p $$\"'\n PID TT STAT TIME COMMAND\n13095 pc S+ 0:00.02 it_works! rks! ! (perl)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 30 Apr 1998 10:18:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Wed, 29 Apr 1998, Bruce Momjian wrote:\n>> No reason for the exec(). I believe the only advantage is that it gives\n>> us a separate process name in the 'ps' listing. I have looked into\n>> simulating this.\n> \tUnder FreeBSD, there is:\n> setproctitle(3) - set the process title for ps 1\n> \tThis isn't available under Solaris though, last I checked...\n\nSetting the process title from C is messy, but there is a readily\navailable reference. The Berkeley sendmail distribution includes code\nto emulate setproctitle on practically every platform. See conf.h and\nconf.c in any recent sendmail release. Warning: it's grotty enough to 
Warning: it's grotty enough to\n> make a strong man weep. Don't read near mealtime ;-)\n\nYep, I have seen it. Good advice. What does grotty mean?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 30 Apr 1998 11:16:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> What does grotty mean?\n\nHmm, I thought for sure that would be in the Hackers' Dictionary,\nbut not so. Anyway, it means ugly, messy, contorted, bletcherous,\nrandom.\n\nAfter you read sendmail's setproctitle code, you'll understand ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Apr 1998 11:40:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec() " } ]
[ { "msg_contents": "> On Tue, 28 Apr 1998, David Gould wrote:\n> > After a long wait (as I was busy with other things), here is the Spinlock\n> > back off patch I promised. This does semi-random backoff using select() to\n> > lessen throughput degradation due to spinlock contention with large numbers\n> > of runnable backends.\n> > \n> > This patch is meant to work on all current platforms, but I have only tested\n> > it on Linux 2.0.32 i386 glibc (Redhat 5.0).\n> \n> Hi David...\n> \n> \tJust tried to apply this to the current source tree, and pretty\n> much failed miserably :( can you check it and let me know?\n> \n\n\n\"Hey Rocky, watch me submit a patch to pgsql. This time for sure\".\n\nActually, I took the opportunity to think about it overnight and add a\nfew frills.\n\nI just sent the patch (in diff -c format this time) \nthe patches list. I am reproducing the blurb here.\n\nIt should apply cleanly to 6.3.2 and the current snapshot.\n\nHere is the Spinlock back off patch I promised. This does semi-random\nbackoff using select() to lessen throughput degradation due to spinlock\ncontention with large numbers of runnable backends.\n\nThis patch is meant to work on all current platforms, but I have only tested\nit on Linux 2.0.32 i386 glibc (Redhat 5.0).\n\nI restructured the files s_lock.c and s_lock.h to better separate the portable\nparts from the machine dependant parts. Probably the best way to see what\nhappened is to apply the patch and then look at the changed files rather than\nto try to read the patch directly.\n\nI have also added a timeout feature to the attempt to grab a spinlock. If after\na suitably long time (currently a few minutes) a lock still cannot be locked,\nwe printf() a message and abort() the backend.\n\nI hope that I have preserved the correctness of the tas() assembly code, but\nthis needs to be tested on each platform to make sure I have the sense of\nthe tests right. Remember, tas() is test_and_set and returns the PRIOR STATE\nof the lock. If the prior state was FREE, the caller of TAS is now the lock\nowner. Otherwise, the lock was already locked by someone else.\n\nTo make it easier to test on each platform, I have added a test routine and\nmakefile target to verify the S_LOCK() functionality. To run this:\n\nIf not done already\n cd pgsql\n apply patch\n run configure\nand then\n cd src/backend/buffer\n make s_lock_test\n\nIf the test appears to hang (or you end up after a few minutes with the\n\"Stuck Spinlock\" message), then S_LOCK() is working. Otherwise, please have\na look at what TAS() is returning and either fix it for the platform, or let\nme know and I will give it a whack.\n\nLet me know if there are any problems or questions.\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Wed, 29 Apr 1998 19:06:35 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] S_LOCK reduced contention through backoff patch" }, { "msg_contents": "David Gould wrote:\n\n[snip]\n> Here is the Spinlock back off patch I promised. 
This does semi-random\n> backoff using select() to lessen throughput degradation due to spinlock\n> contention with large numbers of runnable backends.\n\nDoes this actually use some sort of random number generator? I'm\nthinking that this may not be entirely necessary. With Ethernet, this\nis needed to avoid another collision, but with locks, one process is\nguaranteed to get a lock.\n\nOcie\n", "msg_date": "Wed, 29 Apr 1998 20:51:58 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] S_LOCK reduced contention through backoff\n\tpatch" }, { "msg_contents": "> \n> David Gould wrote:\n> \n> [snip]\n> > Here is the Spinlock back off patch I promised. This does semi-random\n> > backoff using select() to lessen throughput degradation due to spinlock\n> > contention with large numbers of runnable backends.\n> \n> Does this actually use some sort of random number generator? I'm\n> thinking that this may not be entirely necessary. With Ethernet, this\n> is needed to avoid another collision, but with locks, one process is\n> guaranteed to get a lock.\n\n>From the patch. Looks very good to me.\n\n! * Each time we busy spin we select the next element of this array as the\n! * number of microseconds to wait. This accomplishes pseudo random back-off.\n! * Values are not critical and are weighted to the low end of the range. They\n! * were chosen to work even with different select() timer resolutions on\n! * different platforms.\n! * note: total time to cycle through all 16 entries might be about .1 second.\n! */\n! int s_spincycle[S_NSPINCYCLE] =\n! {0, 0, 0, 1000, 5000, 0, 10000, 3000,\n! 0, 10000, 0, 15000, 9000, 21000, 6000, 30000\n! };\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 30 Apr 1998 00:21:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] S_LOCK reduced contention through backoff\n\tpatch" }, { "msg_contents": "Ocie: \n> David Gould wrote:\n> \n> [snip]\n> > Here is the Spinlock back off patch I promised. This does semi-random\n> > backoff using select() to lessen throughput degradation due to spinlock\n> > contention with large numbers of runnable backends.\n> \n> Does this actually use some sort of random number generator? I'm\n\nNo. Have a look at the patch.\n\n> thinking that this may not be entirely necessary. With Ethernet, this\n> is needed to avoid another collision, but with locks, one process is\n> guaranteed to get a lock.\n\nIn the case where this comes into play, one process already has the lock.\nWe have already collided. We are trying to limit the number of additional\ncollisions.\n
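\nIn outline the loop looks like this (a sketch; the code in s_lock.c also\nhas the stuck-lock timeout, and TAS() hides the per-platform\ntest-and-set):\n\n\tvoid\n\tS_LOCK(slock_t *lock)\n\t{\n\t\tint\t\ti = 0;\n\n\t\twhile (TAS(lock))\n\t\t{\n\t\t\tstruct timeval delay;\n\n\t\t\tdelay.tv_sec = 0;\n\t\t\tdelay.tv_usec = s_spincycle[i++ % S_NSPINCYCLE];\n\t\t\t(void) select(0, NULL, NULL, NULL, &delay);\n\t\t}\n\t}\n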
[ { "msg_contents": "\nfrom todo:\n\nAllow compression of large fields or a compressed field type\n\nI like this idea. Should be pretty easy too. Are we interested in\nputting this in the distribution, or as a contrib? I could easily\ncreate a compressed field type like the text type. However, how do\nyou actually get the data in there? Assuming you're trying to get\naround the 8k tuple limit, there's still the 8k query length. Does\ncopy do ok with >8k tuples (assuming the resulting tuple size is < 8k).\n\nCompression of large objects is also a good idea, but I'm not sure how\nit would be implemented, or how it would affect reads/writes (you\ncan't really seek with zlib, which is what I would use).\n\nI've also been thinking about data encryption. Assuming it would be\ntoo hard & long to revamp or add a new storage manager and actually\nencrypt the pages themselves, we can encrypt what gets stored in the\nfield, and either have a type for it, or a function. What about the\nidea of a 'data translator', a function which would act as a filter\nbetween the in/out functions and the actual storage of data on disk/in\nmemory. So that it could be applied to fields which would then be\nautomagically compressed.\n", "msg_date": "Wed, 29 Apr 1998 19:27:09 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "data compression/encryption" } ]
[ { "msg_contents": "\nWhen you run postgresql as root, the command it gives for putting in\nyour startup script is a little weird. The main issue is that 2>&1\nonly works in bash, not tcsh. >& works in both, so it seems\npreferable. Another minor issue is that it echoes the command and\npipes it through su. Shouldn't this be \"su - postgres -c 'cmd'\"? Do\nall versions of su have the '-c' argument? piping it through seems\nweird, but maybe it isn't.\n\nthis is a straight diff for src/backend/main/main.c\n\n--cut here--\n38c38\n< echo \\\"postmaster -B 256 >/var/log/pglog 2>&1 &\\\" | su - postgres\\n\\n\"\n---\n> su - postgres -c 'postmaster -B 256 >& /var/log/pglog' &\\n\\n\"\n--cut here--\n", "msg_date": "Wed, 29 Apr 1998 20:09:05 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "text patch -- sugg cmd when run as root" }, { "msg_contents": "> When you run postgresql as root, the command it gives for putting in\n> your startup script is a little weird. The main issue is that 2>&1\n> only works in bash, not tcsh. >& works in both, so it seems\n> preferable. Another minor issue is that it echoes the command and\n> pipes it through su. Shouldn't this be \"su - postgres -c 'cmd'\"? Do\n> all versions of su have the '-c' argument? piping it through seems\n> weird, but maybe it isn't.\n> \n> this is a straight diff for src/backend/main/main.c\n> \n> --cut here--\n> 38c38\n> < echo \\\"postmaster -B 256 >/var/log/pglog 2>&1 &\\\" | su - postgres\\n\\n\"\n> ---\n> > su - postgres -c 'postmaster -B 256 >& /var/log/pglog' &\\n\\n\"\n> --cut here--\n\nYou have tcsh as the root shell??? \n\nSeriously, most systems have 'sh' as the root shell, with bash a distant\nsecond possibility. And, the '2>&1' syntax works in 'sh', and 'ksh' and 'bash'.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Wed, 29 Apr 1998 23:28:42 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" }, { "msg_contents": "On Wed, 29 Apr 1998, David Gould wrote:\n\n> > When you run postgresql as root, the command it gives for putting in\n> > your startup script is a little weird. The main issue is that 2>&1\n> > only works in bash, not tcsh. >& works in both, so it seems\n> > preferable. Another minor issue is that it echoes the command and\n> > pipes it through su. Shouldn't this be \"su - postgres -c 'cmd'\"? Do\n> > all versions of su have the '-c' argument? piping it through seems\n> > weird, but maybe it isn't.\n> > \n> > this is a straight diff for src/backend/main/main.c\n> > \n> > --cut here--\n> > 38c38\n> > < echo \\\"postmaster -B 256 >/var/log/pglog 2>&1 &\\\" | su - postgres\\n\\n\"\n> > ---\n> > > su - postgres -c 'postmaster -B 256 >& /var/log/pglog' &\\n\\n\"\n> > --cut here--\n> \n> You have tcsh as the root shell??? \n\n\tAs do I...so? I just make sure I put a copy in /bin and you're\nfine...or, at least, I haven't been burnt yet. 
I can't stand the other\nshells :(\n\n\n", "msg_date": "Thu, 30 Apr 1998 09:53:43 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" }, { "msg_contents": "On Thu, 30 Apr 1998, The Hermit Hacker wrote:\n\n>On Wed, 29 Apr 1998, David Gould wrote:\n>\n>> > When you run postgresql as root, the command it gives for putting in\n>> > your startup script is a little weird. The main issue is that 2>&1\n>> > only works in bash, not tcsh. >& works in both, so it seems\n>> > preferable. Another minor issue is that it echoes the command and\n>> > pipes it through su. Shouldn't this be \"su - postgres -c 'cmd'\"? Do\n>> > all versions of su have the '-c' argument? piping it through seems\n>> > weird, but maybe it isn't.\n>> > \n>> > this is a straight diff for src/backend/main/main.c\n>> > \n>> > --cut here--\n>> > 38c38\n>> > < echo \\\"postmaster -B 256 >/var/log/pglog 2>&1 &\\\" | su - postgres\\n\\n\"\n>> > ---\n>> > > su - postgres -c 'postmaster -B 256 >& /var/log/pglog' &\\n\\n\"\n>> > --cut here--\n>> \n>> You have tcsh as the root shell??? \n>\n>\tAs do I...so? I just make sure I put a copy in /bin and you're\n>fine...or, at least, I haven't been burnt yet. I can't stand the other\n>shells :(\n\nIMHO, the startup script should be written for plain sh (best) or plain\ncsh, because those are the shells that are guaranteed to exist on any Un*x\nsystem. And, it doesn't matter which shell you are using (bash, tcsh, ksh,\nzsh or whatever), simply put \"#!/bin/sh\" or \"#!/bin/csh\" as the first line\non the script, and you're done.\n\nAs a side note: Marc, if you use tcsh as root's shell, you also must check\nthat tcsh is statically linked. Anyway, I keep /bin/sh as root's shell,\nand the first command I execute when I log on as root is \"bash ; exit\". I\ncould even modify root's .profile to execute it automatically, but I'm too\nlazy :-)\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 91 336 78 19\nCentro de Cálculo Fax: +34 91 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n", "msg_date": "Thu, 30 Apr 1998 16:28:45 +0200 (MET DST)", "msg_from": "\"Pedro J. Lobo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" }, { "msg_contents": "Pedro:\n> IMHO, the startup script should be written for plain sh (best) or plain\n> csh, because those are the shells that are guaranteed to exist on any Un*x\n> system. And, it doesn't matter which shell you are using (bash, tcsh, ksh,\n> zsh or whatever), simply put \"#!/bin/sh\" or \"#!/bin/csh\" as the first line\n> on the script, and you're done.\n\nI don't have csh on some of my systems. I think the only safe choice is \nplain sh.\n-dg\n", "msg_date": "Thu, 30 Apr 1998 11:23:55 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" }, { "msg_contents": "> \n> When you run postgresql as root, the command it gives for putting in\n> your startup script is a little weird. The main issue is that 2>&1\n> only works in bash, not tcsh. >& works in both, so it seems\n> preferable. Another minor issue is that it echoes the command and\n> pipes it through su. Shouldn't this be \"su - postgres -c 'cmd'\"? Do\n> all versions of su have the '-c' argument? 
piping it through seems\n> weird, but maybe it isn't.\n> \n> this is a straight diff for src/backend/main/main.c\n> \n> --cut here--\n> 38c38\n> < echo \\\"postmaster -B 256 >/var/log/pglog 2>&1 &\\\" | su - postgres\\n\\n\"\n> ---\n> > su - postgres -c 'postmaster -B 256 >& /var/log/pglog' &\\n\\n\"\n> --cut here--\n> \n> \n\nI have changed the text to:\n\n\\n\\\"root\\\" execution of the PostgreSQL backend is not permitted.\\n\\n\\\nThe backend must be started under its own userid to prevent\\n\\ \na possible system security compromise. See the INSTALL file for\\n\\\nmore information on how to properly start the postmaster.\\n\\n\" \n \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 16:17:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" }, { "msg_contents": "\nbeautiful -- it turns out that it was my mistake (I am very used to\nbash) and vanilla sh does not handle tcsh-style redirection, as in:\n\ncommand >& file\n\nOn Tue, 12 May 1998, at 16:17:28, Bruce Momjian wrote:\n\n> I have changed the text to:\n> \n> \\n\\\"root\\\" execution of the PostgreSQL backend is not permitted.\\n\\n\\\n> The backend must be started under its own userid to prevent\\n\\ \n> a possible system security compromise. See the INSTALL file for\\n\\\n> more information on how to properly start the postmaster.\\n\\n\" \n", "msg_date": "Tue, 12 May 1998 13:39:16 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] text patch -- sugg cmd when run as root" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Bruce Momjian <[email protected]>\nTo: maillist <[email protected]>\nCc: [email protected] <[email protected]>; [email protected]\n<[email protected]>\nDate: donderdag 30 april 1998 10:12\nSubject: Re: [HACKERS] removing the exec() from doexec()\n\n\n>> This exec() takes 15% of our startup time. I have wanted it removed for\n>> many releases now. The only problem is to rip out the code that\n>> re-attached to shared memory and stuff like that, because you will no\n>> longer loose the shared memory in the exec(). The IPC code is\n>> complicated, so good luck. I or others can help if you get stuck.\n>>\n>\n\nI tried to replace the execs with forks once (an hour or two job).\nIndeed I got some shared memory initialisations problems.\n\nI got distracted before I finished it.\n\nSuccess.\n\nWith regrads from Maurice.\n\n\n", "msg_date": "Thu, 30 Apr 1998 08:32:16 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" } ]
[ { "msg_contents": "\n-----Original Message-----\nFrom: Brett McCormick <[email protected]>\nTo: [email protected] <[email protected]>\nDate: donderdag 30 april 1998 10:42\nSubject: [HACKERS] data compression/encryption\n\n\n>\n>from todo:\n>\n>Allow compression of large fields or a compressed field type\n>\n>I like this idea. Should be pretty easy too. Are we interested in\n>putting this in the distribution, or as a contrib? I could easily\n>create a compressed field type like the text type. However, how do\n>you actually get the data in there? Assuming you're trying to get\n>around the 8k tuple limit, there's still the 8k query length. Does\n>copy do ok with >8k tuples (assuming the resulting tuple size is < 8k).\n>\n>Compression of large objects is also a good idea, but I'm not sure how\n>it would be implemented, or how it would affect reads/writes (you\n>can't really seek with zlib, which is what I would use).\n\n>\n>I've also been thinking about data encryption. Assuming it would be\n>too hard & long to revamp or add a new storage manager and actually\n>encrypt the pages themselves, we can encrypt what gets stored in the\n>field, and either have a type for it, or a function. What about the\n>idea of a 'data translator', a function which would act as a filter\n>between the in/out functions and the actual storage of data on disk/in\n>memory. So that it could be applied to fields which would then be\n>automagically compressed.\n\nI've been looking at how information is stored at the lowest level in\npostgresql,\nand if I'm not mistaken compressing and/or encrypting of items on a\npage is doable. Since items can be shuffeled around on a page without\nchanging their tid.\n\nI haven't given much thought to how such functionality could be presented\nto the user.\n\nRegards,\n Maurice.\n\n\n", "msg_date": "Thu, 30 Apr 1998 08:46:22 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] data compression/encryption" } ]
[ { "msg_contents": "It seems this patch is incomplete. The file has 20 #if's but only 19\n#endif's.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 30 Apr 1998 09:40:57 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "s_lock.h patch" }, { "msg_contents": "Michael Meskes: \n> It seems this patch is incomplete. The file has 20 #if's but only 19\n> #endif's.\n\nHmmm, I just checked my file, and the patch I got back from the list and\nI don't see this. Could you send me a copy of your patched file so I can see\nmaybe what is the problem?\n\nThanks\n\n-dg \n\n", "msg_date": "Thu, 30 Apr 1998 01:12:01 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] s_lock.h patch" }, { "msg_contents": "David Gould writes:\n> Michael Meskes: \n> > It seems this patch is incomplete. The file has 20 #if's but only 19\n> > #endif's.\n> \n> Hmmm, I just checked my file, and the patch I got back from the list and\n> I don't see this. Could you send me a copy of your patched file so I can see\n> maybe what is the problem?\n\nI wasn't precise enough. I didn't apply the patch but cvsup'ed the latest\nfile. Anyway, here is my s_lock.h:\n\n/*-------------------------------------------------------------------------\n *\n * s_lock.h--\n *\t This file contains the implementation (if any) for spinlocks.\n *\n * Copyright (c) 1994, Regents of the University of California\n *\n *\n * IDENTIFICATION\n *\t $Header: /usr/local/cvsroot/pgsql/src/include/storage/s_lock.h,v 1.30 1998/04/29 12:40:56 scrappy Exp $\n *\n *-------------------------------------------------------------------------\n */\n/*\n *\t DESCRIPTION\n * The public functions that must be provided are:\n *\n * void S_INIT_LOCK(slock_t *lock)\n *\n *\t\tvoid S_LOCK(slock_t *lock)\n *\n * void S_UNLOCK(slock_t *lock)\n *\n * int S_LOCK_FREE(slock_t *lock) \n * \tTests if the lock is free. Returns non-zero if free, 0 if locked.\n *\n * The S_LOCK() function (in s_lock.c) implements a primitive but\n *\t\tstill useful random backoff to avoid hordes of busywaiting lockers\n *\t\tchewing CPU.\n *\n *\t\tvoid\n *\t\tS_LOCK(slock_t *lock)\n *\t\t{\n *\t\t while (TAS(lock))\n *\t\t {\n *\t\t\t// back off the cpu for a semi-random short time\n *\t\t }\n *\t\t}\n *\n *\t\tThis implementation takes advantage of a tas function written \n * (in assembly language) on machines that have a native test-and-set\n * instruction. Alternative mutex implementations may also be used.\n *\t\tThis function is hidden under the TAS macro to allow substitutions.\n *\n *\t\t#define TAS(lock) tas(lock)\n *\t\tint tas(slock_t *lock)\t\t// True if lock already set\n *\n *\t\tIf none of this can be done, POSTGRES will default to using\n *\t\tSystem V semaphores (and take a large performance hit -- around 40%\n *\t\tof its time on a DS5000/240 is spent in semop(3)...).\n *\n *\tNOTES\n *\t\tAIX has a test-and-set but the recommended interface is the cs(3)\n *\t\tsystem call. This provides an 8-instruction (plus system call\n *\t\toverhead) uninterruptible compare-and-set operation. True\n *\t\tspinlocks might be faster but using cs(3) still speeds up the\n *\t\tregression test suite by about 25%. 
I don't have an assembler\n *\t\tmanual for POWER in any case.\n *\n *\t\tThere are default implementations for all these macros at the bottom\n *\t\tof this file. Check if your platform can use these or needs to\n *\t\toverride them.\n *\n */\n#ifndef S_LOCK_H\n#define S_LOCK_H\n\n#include \"storage/ipc.h\"\n\n#if defined(HAS_TEST_AND_SET)\n\n#if defined(linux)\n/***************************************************************************\n * All Linux\n */\n\n#if defined(__alpha__)\n\n#define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n\n#endif\t\t\t\t\t\t\t/* defined(__alpha__) && defined(linux) */\n\n\n\n\n#else /* defined(linux) */\n/***************************************************************************\n * All non Linux\n */\n\n#if defined (nextstep)\n/*\n * NEXTSTEP (mach)\n * slock_t is defined as a struct mutex.\n */\n\n#define S_LOCK(lock)\tmutex_lock(lock)\n\n#define S_UNLOCK(lock)\tmutex_unlock(lock)\n\n#define S_INIT_LOCK(lock)\tmutex_init(lock)\n\n/* For Mach, we have to delve inside the entrails of `struct mutex'. Ick! */\n#define S_LOCK_FREE(alock)\t((alock)->lock == 0)\n\n#endif\t\t\t\t\t\t\t/* nextstep */\n\n\n\n#if defined(__sgi)\n/*\n * SGI IRIX 5\n * slock_t is defined as a struct abilock_t, which has a single unsigned long\n * member.\n *\n * This stuff may be supplemented in the future with Masato Kataoka's MIPS-II\n * assembly from his NECEWS SVR4 port, but we probably ought to retain this\n * for the R3000 chips out there.\n */\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (!acquire_lock(lock)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\trelease_lock(lock)\n\n#define S_INIT_LOCK(lock)\tinit_lock(lock)\n\n/* S_LOCK_FREE should return 1 if lock is free; 0 if lock is locked */\n\n#define S_LOCK_FREE(lock)\t(stat_lock(lock) == UNLOCKED)\n\n#endif\t\t\t\t\t\t\t/* __sgi */\n\n\n/*\n * OSF/1 (Alpha AXP)\n *\n * Note that slock_t on the Alpha AXP is msemaphore instead of char\n * (see storage/ipc.h).\n */\n\n#if defined(__alpha) && !defined(linux)\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (msem_lock((lock), MSEM_IF_NOWAIT) < 0) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\tmsem_unlock((lock), 0)\n\n#define S_INIT_LOCK(lock)\tmsem_init((lock), MSEM_UNLOCKED)\n\n#define S_LOCK_FREE(lock)\t(!(lock)->msem_state)\n\n#endif\t\t\t\t\t\t\t/* alpha */\n\n/*\n * Solaris 2\n */\n\n#if (defined(__i386__) || defined(__sparc__)) && defined(__sun__)\n/* for xxxxx_solaris, this is defined in port/.../tas.s */\n\nstatic int\ttas(slock_t *lock);\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (tas(lock)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* i86pc_solaris || sparc_solaris */\n\n/*\n * AIX (POWER)\n *\n * Note that slock_t on POWER/POWER2/PowerPC is int instead of char\n * (see storage/ipc.h).\n */\n\n#if defined(_AIX)\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (cs((int *) (lock), 0, 1)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* _AIX */\n\n/*\n * HP-UX (PA-RISC)\n *\n * Note that slock_t on PA-RISC is a structure instead of char\n * (see storage/ipc.h).\n */\n\n#if defined(__hpux)\n\n/*\n* a \"set\" slock_t has a single word cleared. 
a \"clear\" slock_t has\n* all words set to non-zero.\n*/\nstatic slock_t clear_lock = {-1, -1, -1, -1};\n\nstatic int\ttas(slock_t *lock);\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (tas(lock)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = clear_lock)\t/* struct assignment */\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#define S_LOCK_FREE(lock)\t( *(int *) (((long) (lock) + 15) & ~15) != 0)\n\n#endif\t\t\t\t\t\t\t/* __hpux */\n\n/*\n * sun3\n */\n\n#if defined(sun3)\n\nstatic int\ttas(slock_t *lock);\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (tas(lock)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\nstatic int\ntas_dummy()\n{\n\tasm(\"LLA0:\");\n\tasm(\"\t.data\");\n\tasm(\"\t.text\");\n\tasm(\"|#PROC# 04\");\n\tasm(\"\t.globl\t_tas\");\n\tasm(\"_tas:\");\n\tasm(\"|#PROLOGUE# 1\");\n\tasm(\"\tmovel sp@(0x4),a0\");\n\tasm(\"\ttas\ta0@\");\n\tasm(\"\tbeq\tLLA1\");\n\tasm(\"\tmoveq #-128,d0\");\n\tasm(\"\trts\");\n\tasm(\"LLA1:\");\n\tasm(\"\tmoveq #0,d0\");\n\tasm(\"\trts\");\n\tasm(\"\t.data\");\n}\n\n#endif\t\t\t\t\t\t\t/* sun3 */\n\n/*\n * sparc machines\n */\n\n#if defined(NEED_SPARC_TAS_ASM)\n\n/* if we're using -ansi w/ gcc, use __asm__ instead of asm */\n#if defined(__STRICT_ANSI__)\n#define asm(x)\t__asm__(x)\n#endif\n\nstatic int\ttas(slock_t *lock);\n\nstatic void\ntas_dummy()\n{\n\tasm(\".seg \\\"data\\\"\");\n\tasm(\".seg \\\"text\\\"\");\n\tasm(\"_tas:\");\n\n\t/*\n\t * Sparc atomic test and set (sparc calls it \"atomic load-store\")\n\t */\n\n\tasm(\"ldstub [%r8], %r8\");\n\n\t/*\n\t * Did test and set actually do the set?\n\t */\n\n\tasm(\"tst %r8\");\n\n\tasm(\"be,a ReturnZero\");\n\n\t/*\n\t * otherwise, just return.\n\t */\n\n\tasm(\"clr %r8\");\n\tasm(\"mov 0x1, %r8\");\n\tasm(\"ReturnZero:\");\n\tasm(\"retl\");\n\tasm(\"nop\");\n}\n\n#define S_LOCK(addr)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (tas(addr)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n/*\n * addr should be as in the above S_LOCK routine\n */\n#define S_UNLOCK(addr)\t(*(addr) = 0)\n\n#define S_INIT_LOCK(addr)\t(*(addr) = 0)\n\n#endif\t\t\t\t\t\t\t/* NEED_SPARC_TAS_ASM */\n\n/*\n * VAXen -- even multiprocessor ones\n */\n\n#if defined(NEED_VAX_TAS_ASM)\n\n#define S_LOCK(addr)\t\t__asm__(\"1: bbssi $0,(%0),1b\": :\"r\"(addr))\n#define S_UNLOCK(addr)\t\t(*(addr) = 0)\n#define S_INIT_LOCK(addr)\t(*(addr) = 0)\n\n#endif\t\t\t\t\t\t\t/* NEED_VAX_TAS_ASM */\n\n/*\n * i386 based things\n */\n\n#if defined(NEED_I386_TAS_ASM)\n\n#if defined(USE_UNIVEL_CC)\nasm void\nS_LOCK(char *lval)\n{\n% lab again;\n/* Upon entry, %eax will contain the pointer to the lock byte */\n\tpushl %ebx\n\txchgl %eax, %ebx\n\tmovb $255, %al\nagain:\n\tlock\n\txchgb %al, (%ebx)\n\tcmpb $0, %al\n\tjne again\n\tpopl %ebx\n}\n\n#else\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\tslock_t\t\t_res; \\\n\t\t\t\t\t\t\tdo \\\n\t\t\t\t\t\t\t{ \\\n\t\t\t\t__asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n\t\t\t\t\t\t\t} while (_res != 0); \\\n\t\t\t\t\t\t} while (0)\n#endif\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* NEED_I386_TAS_ASM */\n\n\n#if defined(__alpha) && defined(linux)\n\nvoid\t\tS_LOCK(slock_t *lock);\n\n#define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n\n#define 
S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* defined(__alpha) && defined(linux) */\n\n#if defined(linux) && defined(sparc)\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\tslock_t\t\t_res; \\\n\t\t\t\t\t\t\tslock_t\t\t*tmplock = lock ; \\\n\t\t\t\t\t\t\tdo \\\n\t\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\t\t__asm__(\"ldstub [%1], %0\" \\\n\t\t\t\t\t\t:\t\t\"=&r\"(_res), \"=r\"(tmplock) \\\n\t\t\t\t\t\t:\t\t\"1\"(tmplock)); \\\n\t\t\t\t\t\t\t} while (_res != 0); \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* defined(linux) && defined(sparc) */\n\n#if defined(linux) && defined(PPC)\n\nstatic int\ntas_dummy()\n{\n\t__asm__(\"\t\\n\\\ntas:\t\t\t\\n\\\n\tlwarx\t5,0,3\t\\n\\\n\tcmpwi\t5,0\t\\n\\\n\tbne\tfail\t\\n\\\n\taddi\t5,5,1\t\\n\\\n stwcx. 5,0,3\t\\n\\\n beq\tsuccess\t\\n\\\nfail:\tli\t3,1\t\\n\\\n\tblr\t\t\\n\\\nsuccess:\t\t\\n\\\n\tli 3,0\t\t\\n\\\n blr\t\t\\n\\\n\t\");\n}\n\n#define S_LOCK(lock)\tdo \\\n\t\t\t\t\t\t{ \\\n\t\t\t\t\t\t\twhile (tas(lock)) \\\n\t\t\t\t\t\t\t\t; \\\n\t\t\t\t\t\t} while (0)\n\n#define S_UNLOCK(lock)\t(*(lock) = 0)\n\n#define S_INIT_LOCK(lock)\tS_UNLOCK(lock)\n\n#endif\t\t\t\t\t\t\t/* defined(linux) && defined(PPC) */\n\n#ifndef S_LOCK_FREE\t\t\t\t/* for those who have not already defined\n\t\t\t\t\t\t\t\t * it */\n#define S_LOCK_FREE(lock)\t\t((*lock) == 0)\n#endif\n\n#endif\t\t\t\t\t\t\t/* HAS_TEST_AND_SET */\n\n#endif\t\t\t\t\t\t\t/* S_LOCK_H */\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 30 Apr 1998 10:14:35 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] s_lock.h patch" } ]
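Counting directives in the listing above bears Michael out: there are twenty #if/#ifdef/#ifndef lines but only nineteen #endif lines. The unclosed one is the `#if defined(linux)` / `#else` pair opened near the top -- nothing ever terminates it. One way to restore balance (the placement is a judgment call, since the linux/sparc and linux/PPC blocks already sit oddly inside the "All non Linux" branch) would be to close it just before the shared default, along these lines:

#endif							/* defined(linux) ... #else branch */

#ifndef S_LOCK_FREE				/* for those who have not already defined
								 * it */
#define S_LOCK_FREE(lock)		((*lock) == 0)
#endif

#endif							/* HAS_TEST_AND_SET */

#endif							/* S_LOCK_H */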
[ { "msg_contents": "Hi there,\n\nYou seem to be doing very much thought in this area ;-)\nOne idea strikes me though:\n\tA standard ODBC Call Level Interface exists !\nWhy not try to get the ideas for libpq from there ? Maybe we could even\nimplement those \nsame functions.\nA small extract:\n\tSQLPrepare\n\tSQLExecute\n\tSQLFetch\n\tSQLFreeStmt\n\nOn the discussion of the protocol, I would suggest using and looking into\n\tpvm3 at ftp://netlib2.cs.utk.edu/pvm3\nthis would open a wide area of low and high speed client server protocols\nlike: ipcshm, unix domain sockets, and native access to high performace\ninterconnects.\n\nI use it to communicate from backend user defined functions with a neural\nnetwork \nsimulator. It simplifies the communication substantially.\n\nAndreas \n\n> Tom Lane wrote:\n> \n> Here is a revised proposal that takes into account the discussions\n> of the last few days. Any comments?\n\t<snip>\n", "msg_date": "Thu, 30 Apr 1998 09:43:38 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Revised proposal for libpq and FE/BE protocol chang es" }, { "msg_contents": "Zeugswetter Andreas SARZ <[email protected]> writes:\n> One idea strikes me though:\n> \tA standard ODBC Call Level Interface exists !\n> Why not try to get the ideas for libpq from there ? Maybe we could even\n> implement those same functions.\n\nHmm. If we wanted to completely redesign the libpq API, and thereby\nbreak every frontend application there is, this'd be a good idea.\nI wasn't looking to do that. I figured an incremental improvement\nto libpq was what was called for.\n\nHowever, there's nothing stopping someone from producing a brand new\nfrontend library, which applications could migrate to over time.\n(Or more likely, new apps would be written to the new API while old\nones stick with what works...)\n\n\t\t\tregards, tom lane\n\nPS: Thanks for the pointer to PVM; looks interesting.\n", "msg_date": "Thu, 30 Apr 1998 11:16:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" } ]
[ { "msg_contents": "While scanning through ecpg's todo list I found:\n\n> Missing library functions to_date et al.\n\nDo we have some functions comparable to these Oracle functions? If not I\nthink this functionality belongs into the backend. So I will delete it from\nmy list.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 30 Apr 1998 13:28:28 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "TODO list" } ]
[ { "msg_contents": "Compression of large objects is also a good idea, but I'm not sure how\nit would be implemented, or how it would affect reads/writes (you\ncan't really seek with zlib, which is what I would use).\n\nLook at lzo realtime compression utility at: \n\thttp://wildsau.idv.uni-linz.ac.at/mfx/lzop.html\nIt is extremly fast. I have to support first class Austrian technology ;-)\n\nAndreas\n\n\n\n", "msg_date": "Thu, 30 Apr 1998 14:32:32 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] data compression/encryption" } ]
[ { "msg_contents": "On 30 Apr 1998, Tom Ivar Helbekkmo wrote:\n\n> \"Jose' Soares Da Silva\" <[email protected]> writes:\n> \n> > I vote for changing default date format to ISO-8601 to reflect\n> > PostgreSQL documentation and for adherence to Standard SQL92.\n> \n> Hear! Hear! Good standards beat silly conventions any day!\n> \nSeems that you don't like conventions Tom, but you want\nthat all world use dates with American format.\nSeems that you want impose one convention.\nWe're working with a database which name is PostgreSQL.\nI suppose that you know what's mean the last 3 letters.\n Jose' \n\n", "msg_date": "Thu, 30 Apr 1998 13:03:44 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "> > > I vote for changing default date format to ISO-8601 to reflect\n> > Hear! Hear! Good standards beat silly conventions any day!\n> Seems that you don't like conventions Tom, but you want\n> that all world use dates with American format.\n> Seems that you want impose one convention.\n> We're working with a database which name is PostgreSQL.\n> I suppose that you know what's mean the last 3 letters.\n\nUh, Jose', he was agreeing with you :))\n\nAnyway, imo the only issue is _when_ this kind of change should take\nplace. My comment in the documentation did not promise that it would\nchange in the next release, only that it might change in a future\nrelease. btw, I don't think that the ISO date style is mandated by the\nSQL92 standard, but it does seem like a good idea, particularly as we\napproach y2k...\n\nOf course, since we now have the PGDATESTYLE environment variable,\nusable by both the backend (at startup) and libpq (at connect time),\nperhaps a change in default date format is not something to worry about\ntoo much.\n\nI haven't heard any negative comments (yet) about changing the default\ndate format to ISO-8601 (yyyy-mm-dd). Does anyone have a strong feeling\nthat this should _not_ happen for v6.4??\n\nSpeak up or it might happen ;)\n\n - Tom\n", "msg_date": "Thu, 30 Apr 1998 13:08:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "On Thu, 30 Apr 1998, Jose' Soares Da Silva wrote:\n\n> On 30 Apr 1998, Tom Ivar Helbekkmo wrote:\n> \n> > \"Jose' Soares Da Silva\" <[email protected]> writes:\n> > \n> > > I vote for changing default date format to ISO-8601 to reflect\n> > > PostgreSQL documentation and for adherence to Standard SQL92.\n> > \n> > Hear! Hear! Good standards beat silly conventions any day!\n> > \n> Seems that you don't like conventions Tom, but you want\n> that all world use dates with American format.\n> Seems that you want impose one convention.\n\n\tCan someone inform me of what ISO-8601 exactly is?\n\n\n", "msg_date": "Thu, 30 Apr 1998 10:54:41 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "On Thu, 30 Apr 1998, Thomas G. Lockhart wrote:\n\n\t\n> > > > I vote for changing default date format to ISO-8601 to reflect\n> > > Hear! Hear! 
Good standards beat silly conventions any day!\n> > Seems that you don't like conventions Tom, but you want\n> > that all world use dates with American format.\n> > Seems that you want impose one convention.\n> > We're working with a database which name is PostgreSQL.\n> > I suppose that you know what's mean the last 3 letters.\n> \n> Uh, Jose', he was agreeing with you :))\n\nI'm sorry Tom Ivar, my mistake (guilt of my poor english)\n\n> \n> Anyway, imo the only issue is _when_ this kind of change should take\n> place. My comment in the documentation did not promise that it would\n> change in the next release,\n\nYes I know...\n\n> only that it might change in a future\n> release. btw, I don't think that the ISO date style is mandated by the\n> SQL92 standard, but it does seem like a good idea, particularly as we\n> approach y2k...\n\nI think so, Tom. Here the syntax from...\n\n(Second Informal Review Draft) ISO/IEC 9075:1992, Database\n Language SQL- July 30, 1992\n\n5.3 <literal>\n <date literal> ::=\n DATE <date string>\n\n <date string> ::=\n <quote> <date value> <quote>\n\n <date value> ::=\n <years value> <minus sign> <months value> <minus sign> <days value>\n\nexample date syntax: DATE '0001-01-01'\n\t DATE '9999-12-31'\n\nOk, I know that keyword DATE before value is a silly and an useless\nthing but YYYY-MM-DD format it's an intelligent thing.\n\n> Of course, since we now have the PGDATESTYLE environment variable,\n> usable by both the backend (at startup) and libpq (at connect time),\n> perhaps a change in default date format is not something to worry about\n> too much.\n> \n> I haven't heard any negative comments (yet) about changing the default\n> date format to ISO-8601 (yyyy-mm-dd). Does anyone have a strong feeling\n> that this should _not_ happen for v6.4??\n> \n> Speak up or it might happen ;)\n\nGo for it Tom! Jose'\n\n", "msg_date": "Thu, 30 Apr 1998 16:13:00 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "On Thu, 30 Apr 1998, The Hermit Hacker wrote:\n\n> On Thu, 30 Apr 1998, Jose' Soares Da Silva wrote:\n> \n> > On 30 Apr 1998, Tom Ivar Helbekkmo wrote:\n> > \n> > > \"Jose' Soares Da Silva\" <[email protected]> writes:\n> > > \n> > > > I vote for changing default date format to ISO-8601 to reflect\n> > > > PostgreSQL documentation and for adherence to Standard SQL92.\n> > > \n> > > Hear! Hear! 
Good standards beat silly conventions any day!\n> > > \n> > Seems that you don't like conventions Tom, but you want\n> > that all world use dates with American format.\n> > Seems that you want impose one convention.\n> \n> \tCan someone inform me of what ISO-8601 exactly is?\n> \n\n - ISO 8601:1988, Data elements and interchange formats - Information\n interchange-Representation of dates and times.\n\t\t \n 3.1.2 Definitions taken from ISO 8601\n\n This International Standard makes use of the following terms\n defined in ISO 8601:\n\n a) Coordinated Universal Time (UTC)\n b) date (\"date, calendar\" in ISO 8601)\n\n See (Second Informal Review Draft) ISO/IEC 9075:1992,\n Database Language SQL- July 30, 1992)\n\t\t\t\t \n\nThe required ISO 8601 syntax for DATE is:\n\nDATE 'YYYY-MM-DD'\n\nComments:\n\n 1) DATE combines the datetime fields YEAR, MONTH and DAY.\n\n 2) DATE defines a set of correctly formed values that\n represent any valid Gregorian calendar date between January 1, 1\n AD and December 31, 9999 AD.\n\n 3) Any operation that attempts to make a DATE <data type>\n contain a YEAR value that is less than 1 or greater than 9999\n will fail; the DBMS will return the:\n\tSQLSTATE error 22007 \"data exception-invalid datetime format\".\n\n 4) DATE expects dates to have the following form: yyyy-mm-dd\n e.g.: 1994-07-15 represents July 15, 1994.\n 5) DATE has a length of 10.\n 6) Date literals must start with the <keyword> DATE and\n include 'yyyy-mm-dd';\n e.g.:\n\n CREATE mytable (mydate DATE);\n INSERT INTO mytable (mydate) VALUES (DATE '1996-01-01');\n\n Jose'\n\n", "msg_date": "Thu, 30 Apr 1998 17:43:37 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tCan someone inform me of what ISO-8601 exactly is?\n\nIt's the international standard for representation of date and time.\nISO is the International Organization for Standardization (yeah, I\nknow, the letters are in the wrong order -- although not in French).\n8601 is big and complicated, and some of the legal variations in there\nlook pretty silly, but the gist of it is simple and good, and I'm\ntyping this text at approximately 1998-04-30 20:59:07 UTC.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "30 Apr 1998 23:01:57 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Access'97 and ODBC" } ]
[ { "msg_contents": "> Anyway, imo the only issue is _when_ this kind of change should take\n> place. My comment in the documentation did not promise that it would\n> change in the next release, only that it might change in a future\n> release. btw, I don't think that the ISO date style is mandated by the\n> SQL92 standard, but it does seem like a good idea, particularly as we\n> approach y2k...\n> \n> Of course, since we now have the PGDATESTYLE environment variable,\n> usable by both the backend (at startup) and libpq (at connect time),\n> perhaps a change in default date format is not something to worry about\n> too much.\n> \n> I haven't heard any negative comments (yet) about changing the default\n> date format to ISO-8601 (yyyy-mm-dd). Does anyone have a strong feeling\n> that this should _not_ happen for v6.4??\n> \n> Speak up or it might happen ;)\n\nI'll cast my vote FOR it if it helps speed it along.\n\nThat format makes sorting/ordering a no-brainer. Might not help inside\npostgres, but for putting result sets out to a flat file for script\nprocessing, you could then use the unix sort command. Much easier...\n\nGo for it, whenever.\n\ndarrenk\n", "msg_date": "Thu, 30 Apr 1998 09:17:53 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Access'97 and ODBC" } ]
[ { "msg_contents": "Hi,\n\nsorry to bug you as an individual, but I had no replies\nat all to my postings below. Can you point me to a\nplace/person/source where I might try to seek an answer?\n\nThanks.\n\nGautam\n\n=======\n1st posting (2nd posting attached to this email.)\n========\nI am trying \n\ndrop table lines;\nDROP\n\ncreate table lines (\n l line\n);\nCREATE\n\ninsert into lines values ('((0,0),(1,2))'::line);\nWARN:fmgr_info: function 0: cache lookup failed\n\nEOF\n\n\nIf I change line to lseg everthing work ok.\n\nI believe this is Postgres 6.3 (but I don't know how to\nreadily find that out on a running/installed version.\nIs there a way to know by looking into some file someplace?)\n\nThanks\n\n\n-- \nGautam H. Thaker\nDistributed Processing Lab; Lockheed Martin Adv. Tech. Labs\nA&E 3W; 1 Federal Street; Camden, NJ 08102\n609-338-3907, fax 609-338-4144 email: [email protected]\ntemplate1=> select '(0,0)'::point ## '((2,0),(0,2))'::lseg as\nclosest_point;\nclosest_point\n-------------\n(1,1) \n(1 row)\n\nlooks to be correct but\n\n\ntemplate1=> select '(1,1)'::point ## '((0,0),(0,2))'::lseg as\nclosest_point;\nclosest_point\n-------------\n(0,0) \n(1 row)\n\n\nseems to be in error as the closest point should be \"(0,1)\" on the lseg,\nis it not?\n\n(please excuse me if am totally brain dead here....)\n\n-- \nGautam H. Thaker\nDistributed Processing Lab; Lockheed Martin Adv. Tech. Labs\nA&E 3W; 1 Federal Street; Camden, NJ 08102\n609-338-3907, fax 609-338-4144 email: [email protected]\n--\nOfficial WWW Site: http://www.postgresql.org\nOnline Docs & FAQ: http://www.postgresql.org/docs\nSearchable Lists: http://www.postgresql.org/mhonarc", "msg_date": "Thu, 30 Apr 1998 09:40:24 -0400", "msg_from": "Gautam H Thaker <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: [QUESTIONS] an apparent error in answer from \"##\" (closest\n\tproximity)operator]" }, { "msg_contents": "> OK, I will try to work on this and provide you tested code.\n> (Since this is my first attempt to code in Postgres\n> it might take me a while though I have hacked for many\n> years overall.)\n\nNo problem.\n\n> Lines are more useful to me than lsegs. Is it easy\n> enough to add these input/output routines so that I can\n> continue to move forward prior to V6.4?\n\nYes. I'm starting to do that now, and we can coordinate patches. We may\nas well copy the hackers list on at least our planning e-mails...\n\n - Tom\n", "msg_date": "Fri, 01 May 1998 01:35:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CODE ANALYSIS FOR (an apparent error in answer from \"##\" (closest\n\tproximity)operator)" } ]
[ { "msg_contents": "Zeugswetter Andreas SARZ <[email protected]> writes:\n> One idea strikes me though:\n> \tA standard ODBC Call Level Interface exists !\n> Why not try to get the ideas for libpq from there ? Maybe we could even\n> implement those same functions.\n\nHmm. If we wanted to completely redesign the libpq API, and thereby\nbreak every frontend application there is, this'd be a good idea.\nI wasn't looking to do that. I figured an incremental improvement\nto libpq was what was called for.\n\nYup, I guess that sounds more sane and doable. It was just a thought.\n\nAndreas \n\n\n", "msg_date": "Thu, 30 Apr 1998 18:33:26 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Revised proposal for libpq and FE/BE protocol changes" } ]
[ { "msg_contents": "PostgreSQL HOWTO version 6.0 is available at\n http://sunsite.unc.edu/LDP/HOWTO/PostgreSQL-HOWTO.html\n\nAnd mirrors sites are at :-\n http://www.caldera.com/LDP/HOWTO/PostgreSQL-HOWTO.html\n http://www.WGS.com/LDP/HOWTO/PostgreSQL-HOWTO.html\n http://www.cc.gatech.edu/linux/LDP/HOWTO/PostgreSQL-HOWTO.html\n http://www.redhat.com/linux-info/ldp/HOWTO/PostgreSQL-HOWTO.html\n\nOther mirror sites near you can be found at\n http://sunsite.unc.edu/LDP/hmirrors.html\nselect a site and go to directory /LDP/HOWTO/PostgreSQL-HOWTO.html\n\nPlease let me know your suggestions/ideas or any errors/mistakes in the\ndoc.\nIf you know any URLs, pointers useful for PostgreSQL let me know and I\nwill add to the doc.\n\nPlease e-mail at [email protected]\n\nAL DEV\n", "msg_date": "Thu, 30 Apr 1998 13:55:57 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "PostgreSQL HOWTO Version 6.0 released" } ]
[ { "msg_contents": "Hi,\n\nI don't understand the problem CVSup is intended to solve given\nthat CVS allows remote access to the repository using standard cvs\ncommands. Is there a specific reason why we can't/don't have readonly access\nto the postgresql repository?\n\nI think it's neat to be able to use commands like \"cvs diff\" etc. However\nI really hate it that my changes seem to get overwritten why I using\nCVSup since this doesn't happen when using the \"cvs update\".\n\nCan anyone explain why this is the way it is?\n\nThanks, with regards from Maurice.\n\n\n", "msg_date": "Fri, 1 May 1998 08:36:05 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "CVSup" }, { "msg_contents": "\nThis may be the totally wrong place for asking this question, but what\nexactly *is* CVSup?\n\nOn Fri, 1 May 1998, at 08:36:05, Maurice Gittens wrote:\n\n> I don't understand the problem CVSup is intended to solve given\n> that CVS allows remote access to the repository using standard cvs\n> commands. Is there a specific reason why we can't/don't have readonly access\n> to the postgresql repository?\n> \n> I think it's neat to be able to use commands like \"cvs diff\" etc. However\n> I really hate it that my changes seem to get overwritten why I using\n> CVSup since this doesn't happen when using the \"cvs update\".\n> \n> Can anyone explain why this is the way it is?\n> \n> Thanks, with regards from Maurice.\n> \n> \n", "msg_date": "Thu, 30 Apr 1998 23:45:14 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "On Fri, 1 May 1998, Maurice Gittens wrote:\n\n> Hi,\n> \n> I don't understand the problem CVSup is intended to solve given\n> that CVS allows remote access to the repository using standard cvs\n> commands. Is there a specific reason why we can't/don't have readonly access\n> to the postgresql repository?\n> \n> I think it's neat to be able to use commands like \"cvs diff\" etc. However\n> I really hate it that my changes seem to get overwritten why I using\n> CVSup since this doesn't happen when using the \"cvs update\".\n> \n> Can anyone explain why this is the way it is?\n\n\tWhen I set things up, I could find no instructions for setting up\nanon-cvs that I felt comfortable implementing from a security\nstandpoint...\n\n\tIf you remove the 'tag=.' part of the CVSup config file, you can\npull down the complete CVS repository to your machine to manipulate as you\nwant to...\n\n\n", "msg_date": "Fri, 1 May 1998 07:22:00 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "On Thu, 30 Apr 1998, Brett McCormick wrote:\n\n> \n> This may be the totally wrong place for asking this question, but what\n> exactly *is* CVSup?\n\n\tSee ftp.postgresql.org:/pub/CVSup ...\n\n\n> \n> On Fri, 1 May 1998, at 08:36:05, Maurice Gittens wrote:\n> \n> > I don't understand the problem CVSup is intended to solve given\n> > that CVS allows remote access to the repository using standard cvs\n> > commands. Is there a specific reason why we can't/don't have readonly access\n> > to the postgresql repository?\n> > \n> > I think it's neat to be able to use commands like \"cvs diff\" etc. 
However\n> > I really hate it that my changes seem to get overwritten why I using\n> > CVSup since this doesn't happen when using the \"cvs update\".\n> > \n> > Can anyone explain why this is the way it is?\n> > \n> > Thanks, with regards from Maurice.\n> > \n> > \n> \n\n", "msg_date": "Fri, 1 May 1998 07:22:20 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "> \tWhen I set things up, I could find no instructions for setting up\n> anon-cvs that I felt comfortable implementing from a security\n> standpoint...\n\nIf you use a semi-recent version of CVS, set the --allow-root restriction\nin inetd.conf and create a cvs \"passwd\" file along with a \"writers\" file,\nthen I really don't see where the security problem is. You are not\ncreating system-level accounts, and if you do not put the anonymous user\nin your \"writers\" file, then this user will not be able to alter the\nrepository in any way.\n\n-Rasmus\n\n", "msg_date": "Fri, 1 May 1998 07:36:48 -0400 (Eastern Daylight Time)", "msg_from": "Rasmus Lerdorf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" } ]
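To make Rasmus's recipe concrete, a pserver setup along those lines might look like the following. Paths, account names, and the committer list are illustrative only, not a tested configuration. In /etc/inetd.conf:

cvspserver stream tcp nowait root /usr/bin/cvs cvs --allow-root=/usr/local/cvsroot pserver

In $CVSROOT/CVSROOT/passwd, an anonymous login with an empty password field, mapped onto an unprivileged system account:

anoncvs::pgcvs

In $CVSROOT/CVSROOT/writers, committers only:

scrappy

Because anoncvs does not appear in the writers file, CVS grants that login read-only access to the repository, which is exactly the property Rasmus describes.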
[ { "msg_contents": "I'd like to keep track of any changes you make so I can keep the JDBC\nGeometric classes in sync.\n\n--\nPeter T Mount, [email protected], [email protected]\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On\nBehalf Of Thomas G. Lockhart\nSent: Friday, May 01, 1998 3:22 AM\nTo: Gautam H Thaker\nCc: Postgres Hackers List\nSubject: [HACKERS] Re: CODE ANALYSIS FOR (an apparent error in answer\nfrom \"##\" (closest proximity)operator)\n\n\n> OK, I will try to work on this and provide you tested code.\n> (Since this is my first attempt to code in Postgres\n> it might take me a while though I have hacked for many\n> years overall.)\n\nNo problem.\n\n> Lines are more useful to me than lsegs. Is it easy\n> enough to add these input/output routines so that I can\n> continue to move forward prior to V6.4?\n\nYes. I'm starting to do that now, and we can coordinate patches. We may\nas well copy the hackers list on at least our planning e-mails...\n\n - Tom\n\n", "msg_date": "Fri, 1 May 1998 08:12:00 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: CODE ANALYSIS FOR (an apparent error in answer from\n\t\"##\" (closest proximity)operator)" } ]
[ { "msg_contents": "\n-----Original Message-----\nFrom: The Hermit Hacker <[email protected]>\nTo: Maurice Gittens <[email protected]>\nCc: [email protected] <[email protected]>\nDate: vrijdag 1 mei 1998 19:34\nSubject: Re: [HACKERS] CVSup\n\n\n>\n> If you remove the 'tag=.' part of the CVSup config file, you can\n>pull down the complete CVS repository to your machine to manipulate as you\n>want to...\n>\n\nWhat way would suggest to keep in sync with the changes other folks are\nmaking? I mean, if I have the repository on my local system I still have to\nget changes changes merged in from the \"main\" postgresql repository.\nWhen I think about it all solutions seem more clumsy (and less flexible)\nthan simply using the standard remote access to the repository.\n\nCould you enlighten me?\n\nWith thanks from Maurice.\n\n\n\n", "msg_date": "Fri, 1 May 1998 13:46:01 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "On Fri, 1 May 1998, Maurice Gittens wrote:\n\n> \n> -----Original Message-----\n> From: The Hermit Hacker <[email protected]>\n> To: Maurice Gittens <[email protected]>\n> Cc: [email protected] <[email protected]>\n> Date: vrijdag 1 mei 1998 19:34\n> Subject: Re: [HACKERS] CVSup\n> \n> \n> >\n> > If you remove the 'tag=.' part of the CVSup config file, you can\n> >pull down the complete CVS repository to your machine to manipulate as you\n> >want to...\n> >\n> \n> What way would suggest to keep in sync with the changes other folks are\n> making? I mean, if I have the repository on my local system I still have to\n> get changes changes merged in from the \"main\" postgresql repository.\n> When I think about it all solutions seem more clumsy (and less flexible)\n> than simply using the standard remote access to the repository.\n> \n> Could you enlighten me?\n\n\tIf you pull down the repository using CVSup into\n/usr/local/cvsroot, for example, and set your CVSROOT environment variable\nto point to that, you access the same thing that everyone with commit\nprivileges has access to, except you don't have commit privileges...\n\n\tIn one sense, this is better...you don't have to deal with the lag\nof connecting to the remove CVS server every time you want to look at a\nlog or a diff...the only time you have to \"re-sync\" with the remote server\nis when you want to pull down any recent changes, which, if you follow the\ncommitters mailing list, you do when you notice a rash of changes...\n\n\tA dialup PPP user is better served by pulling own theh whole CVS\nrepositiry and then being able to disconnect/work then using CVS directly\nwhere you need to be connected to do anything...\n\n\n\n", "msg_date": "Fri, 1 May 1998 07:54:24 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": ">>>>> \"mgittens\" == Maurice Gittens <[email protected]> writes:\n\n > -----Original Message----- From: The Hermit Hacker\n > <[email protected]> To: Maurice Gittens <[email protected]> Cc:\n > [email protected] <[email protected]> Date: vrijdag 1\n > mei 1998 19:34 Subject: Re: [HACKERS] CVSup\n\n\n >> If you remove the 'tag=.' part of the CVSup config file, you\n >> can pull down the complete CVS repository to your machine to\n >> manipulate as you want to...\n >> \n\n > What way would suggest to keep in sync with the changes other\n > folks are making? 
I mean, if I have the repository on my local\n > system I still have to get changes changes merged in from the\n > \"main\" postgresql repository. When I think about it all\n > solutions seem more clumsy (and less flexible) than simply using\n > the standard remote access to the repository.\n\n1) Remote CVS is a resource pig, especially for large tree. It puts a \nlarge load on the server. I would guess the load is easily 10x larger \nfor remote CVS vs CVSUP. cvs log/diff being local instead of over the\ninternet is great for development especially with dialup lines to the\nInternet. \n\n\n2) The ability to have a local copy of the 'official tree' allows for \nsome possible ideas to work easier. If one is making local changes\nthe 'official tree' could be done as vendor imports into a local tree\nallowing local changes not to be overwritten.\n > Could you enlighten me?\n\n > With thanks from Maurice.\n\n-- \nKent S. Gordon\nArchitect\niNetSpace Co.\nvoice: (972)851-3494 fax:(972)702-0384 e-mail:[email protected]\n", "msg_date": "Fri, 1 May 1998 08:16:36 -0500 (CDT)", "msg_from": "\"Kent S. Gordon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "> >> If you remove the 'tag=.' part of the CVSup config file, you\n> >> can pull down the complete CVS repository to your machine to\n> >> manipulate as you want to...\n> >>\n> 1) Remote CVS is a resource pig, especially for large tree. It puts a\n> large load on the server. I would guess the load is easily 10x larger\n> for remote CVS vs CVSUP. cvs log/diff being local instead of over the\n> internet is great for development especially with dialup lines to the\n> Internet.\n> 2) The ability to have a local copy of the 'official tree' allows for\n> some possible ideas to work easier. If one is making local changes\n> the 'official tree' could be done as vendor imports into a local tree\n> allowing local changes not to be overwritten.\n\nWould someone be interested in collecting CVSup information (at least\npartly from the mhonarc archive)? We could/should have a chapter in the\nDeveloper's Guide on this...\n\n - Tom\n", "msg_date": "Fri, 01 May 1998 13:36:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": ">>>>> \"lockhart\" == Thomas G Lockhart <[email protected]> writes:\n\n\n > Would someone be interested in collecting CVSup information (at\n > least partly from the mhonarc archive)? We could/should have a\n > chapter in the Developer's Guide on this...\n\nI would suggest looking at the CVSup pages from FreeBSD\n( http://www.freebsd.org/handbook/cvsup.html ). This page along with\na similar pages for anoncvs\n( http://www.freebsd.org/handbook/anoncvs.html ) are good resources\nfor understanding the trade-offs. CVSup was developed/maintained\nmainly on FreeBSD ( a new release (15.4) was just announced).\n > - Tom\n\n-- \nKent S. Gordon\nArchitect\niNetSpace Co.\nvoice: (972)851-3494 fax:(972)702-0384 e-mail:[email protected]\n", "msg_date": "Fri, 1 May 1998 08:47:23 -0500 (CDT)", "msg_from": "\"Kent S. 
Gordon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "I'd like to second Maurice's plea for plain-vanilla CVS access.\nNot all of us *want* the entire Postgres CVS repository living\non our local disk; the current sources are quite sufficient.\nCVS access would be better than downloading snapshot tarballs.\n\nI looked at the CVSup pages, and while the program looks slicker\nthan greased lightning for its intended purpose, I'm also quite\nconcerned about the amount of effort needed to port it to any\nnon-FreeBSD system. For starters, I gotta install DEC Modula-3,\nwhich does not claim to have been ported to HPUX. Then I get to\nfind out whether CVSup itself has any portability bugs. This\nsounds like a lot of work for a very second-order goal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 May 1998 11:10:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup " }, { "msg_contents": "> \n> I'd like to second Maurice's plea for plain-vanilla CVS access.\n> Not all of us *want* the entire Postgres CVS repository living\n> on our local disk; the current sources are quite sufficient.\n> CVS access would be better than downloading snapshot tarballs.\n> \n> I looked at the CVSup pages, and while the program looks slicker\n> than greased lightning for its intended purpose, I'm also quite\n> concerned about the amount of effort needed to port it to any\n> non-FreeBSD system. For starters, I gotta install DEC Modula-3,\n> which does not claim to have been ported to HPUX. Then I get to\n> find out whether CVSup itself has any portability bugs. This\n> sounds like a lot of work for a very second-order goal.\n\nIt is hard to disagree with this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 1 May 1998 11:20:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "On Fri, 1 May 1998, Tom Lane wrote:\n\n> I'd like to second Maurice's plea for plain-vanilla CVS access.\n> Not all of us *want* the entire Postgres CVS repository living\n> on our local disk; the current sources are quite sufficient.\n> CVS access would be better than downloading snapshot tarballs.\n\n\tThe new serve is being installed on May 7th...remind me about this\nafterwards...right now, the old box is pretty much at her limit with\neverything she has to handle (not just PostgreSQL related)...\n\n\tThe new server is built and being configured right now...alot of\nthings should be improved, *especially* the mailing list searches shoudl\nbe faster :)\n\n\n", "msg_date": "Fri, 1 May 1998 11:48:00 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup " } ]
[ { "msg_contents": "Hello Gautam. Here are some patches to get you started on fixing up the\nsupport for line objects. The patches came from my sort-of-rev-locked\ndevelopment tree frozen on 980408, which should/might apply cleanly to a\nv6.3.2 source tree.\n\nTo apply:\n1) untar the enclosed file into the main tree (i.e. in \"src/..\")\n2) cd to the new directory patch.980501/\n3) run patch on each file: \"patch < geo_ops.c.patch\" etc.\n - check patch's comments to ensure the patches applied cleanly\n4) cd to ../src\n5) do a \"make clean install\"\n6) do a \"rm -rf ../data\"\n7) do an initdb\n\nSome of the stubs you need will now be defined in the catalogs. At the\nmoment I put blocks of \"#ifdef LINEDEBUG\" around the new code, calling\nelog(ERROR,). Just remove the #ifdefs and the elog messages and start\ncoding.\n\nBefore submitting as patches to the Postgres development tree we will\nwant to get a new snapshot and move around some of our new definitions\nin the catalogs (reassign some of the OIDs). The post-v6.3.2 catalogs\nlook slightly different and probably conflict a little bit.\n\nAnyway, let me know how it is going. Ask if you have any questions...\n\nHave fun.\n\n - Tom", "msg_date": "Fri, 01 May 1998 14:19:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "patches for line geometry" } ]
[ { "msg_contents": "Attached is my patch that fixes the routine close_ps().\nI can try to fix other as I run into them, esp. for\nline when line can be input. I tested my fix with:\n\n-- try vertical lseg.\nselect '(1,1)'::point ## '((0,0),(0,2))'::lseg;\n?column?\n--------\n(0,1) \n(1 row)\n\n\n-- try horizontal lseg.\nselect '(1,1)'::point ## '((0,2),(2,2))'::lseg;\n?column?\n--------\n(1,2) \n(1 row)\n\n(both of above were returning wrong answers before.)\n\n\n-- \nGautam H. Thaker\nDistributed Processing Lab; Lockheed Martin Adv. Tech. Labs\nA&E 3W; 1 Federal Street; Camden, NJ 08102\n609-338-3907, fax 609-338-4144 email: [email protected]\n", "msg_date": "Fri, 01 May 1998 14:54:58 -0400", "msg_from": "Gautam H Thaker <[email protected]>", "msg_from_op": true, "msg_subject": "my patch for geo_opc.c (close_ps routine.)" } ]
[ { "msg_contents": "Opps, did not include the actual patch in my last email:\n\nAttached is my patch that fixes the routine close_ps().\nI can try to fix other as I run into them, esp. for\nline when line can be input. I tested my fix with:\n\n-- try vertical lseg.\nselect '(1,1)'::point ## '((0,0),(0,2))'::lseg;\n?column?\n--------\n(0,1) \n(1 row)\n\n\n-- try horizontal lseg.\nselect '(1,1)'::point ## '((0,2),(2,2))'::lseg;\n?column?\n--------\n(1,2) \n(1 row)\n\n(both of above were returning wrong answers before.)\n\n\n-- \nGautam H. Thaker\nDistributed Processing Lab; Lockheed Martin Adv. Tech. Labs\nA&E 3W; 1 Federal Street; Camden, NJ 08102\n609-338-3907, fax 609-338-4144 email: [email protected]", "msg_date": "Fri, 01 May 1998 15:17:58 -0400", "msg_from": "Gautam H Thaker <[email protected]>", "msg_from_op": true, "msg_subject": "actual patch for close_ps() [file: geo_ops.c]" }, { "msg_contents": "> Attached is my patch that fixes the routine close_ps().\n\nHi Gautam. The patch seems \"backwards\". Can you next time run the diff\nas\n\n diff -c geo_ops.c.last geo_ops.c > geo_ops.c.patch\n\nto make it a bit easier to apply? Haven't tried testing yet, but should\nget to it sometime soon...\n\n - Tom\n", "msg_date": "Sat, 02 May 1998 05:08:45 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] actual patch for close_ps() [file: geo_ops.c]" } ]
[ { "msg_contents": "Hello all,\n\n I asked many times in other mailing lists, but nobody seems to\nreply(or is this kind of question is already mentioned elsewhere? it is\nvery hard to find something in mail archieves on PostgreSQL site)...\n\n Does anybody know WHY all examples(in C) end with \"exit(0)\" not\nreturn 0 ? when I tried large object examples(testlo.c or c++\nexamples), it always segfaulted! if I comment out exit(0) on redhat 5.0\nlinux. memory leak or some bugs?\n\nBest Regards, C.S.Park\n\n\n", "msg_date": "Sat, 02 May 1998 04:20:12 +0900", "msg_from": "\"Park, Chul-Su\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Q] exit(0) in C examples" } ]
[ { "msg_contents": "\ncreate rule radius1 as on update to user where (current.usrppp <> new.usrppp) do notify radius;\n\nupdate user set usrname = 'Brett McCormick' where usrid = 'brett';\nNOTIFY\n\nthe notification comes through on the radius relation (which, interestingly, doesn't exist)\n\nlinux 2.0.33, postgresql 6.3.2..\nshould I be filling out a bug report? ;)\nor just using gdb\n\ncreating the table & dropping and recreating the rule has no effect\n", "msg_date": "Fri, 1 May 1998 13:45:32 -0700", "msg_from": "Brett McCormickS <[email protected]>", "msg_from_op": true, "msg_subject": "rule/notify bug?" }, { "msg_contents": "Brett McCormickS <[email protected]> writes:\n> create rule radius1 as on update to user where (current.usrppp <> new.usrppp) do notify radius;\n\n> update user set usrname = 'Brett McCormick' where usrid = 'brett';\n> NOTIFY\n\n> the notification comes through on the radius relation\n> (which, interestingly, doesn't exist)\n\nYup, that's what you told it to do: \"notify radius\". Listen/notify\nnames are really arbitrary identifiers, not relation names. This\nis a good thing: you can signal conditions that aren't tightly\ntied to a single table.\n\n> should I be filling out a bug report? ;)\n\nIt's not a bug, it's a feature ;-)\n\nThe listen/notify documentation could be clearer about this, though.\nMaybe it's a documentation bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 May 1998 18:23:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] rule/notify bug? " } ]
[ { "msg_contents": "I am getting the following error from the current tree. This is with\nAssert checking turned on.\n\n---------------------------------------------------------------------------\n\ngcc2 -O2 -m486 -pipe -g -Wall -I../../../include -I../../../backend -I/u/readline -I../.. -c nbtsearch.c -o nbtsearch.o\nnbtsearch.c: In function `_bt_skeycmp':\nnbtsearch.c:320: `NullValueRegProcedure' undeclared (first use this function)\nnbtsearch.c:320: (Each undeclared identifier is reported only once\nnbtsearch.c:320: for each function it appears in.)\nnbtsearch.c: In function `_bt_compare':\nnbtsearch.c:668: `NullValueRegProcedure' undeclared (first use this function)\ngmake[3]: *** [nbtsearch.o] Error 1\ngmake[3]: Leaving directory `/usr/local/src/pgsql/pgsql/src/backend/access/nbtree'\ngmake[2]: *** [submake] Error 2\ngmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/backend/access'\ngmake[1]: *** [access.dir] Error 2\ngmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/backend'\ngmake: *** [all] Error 2\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 2 May 1998 22:42:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "compile error with Assert()" } ]
[ { "msg_contents": "There is a missing #endif in s_lock.h.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 2 May 1998 23:05:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "s_lock.h problems" } ]
[ { "msg_contents": "Being a new comer to postgres hacking I am afraid\nI did not test enough with regards to the previous patch I had sent\nthat attempted to fixthe \"##\" operator for point to a line segment.\nMy changes fixed the problems I had seen, but the\nregression for geometry now fails with the error:\n\nregression=> SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest\nregression-> FROM LSEG_TBL l, POINT_TBL p;\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nregression=>\n\nClearly I have to do more testing and fixing. I will do so, do the\nregression tests etc. before claiming to fix anything. Sorry about this\nfolks, I am new to this and apologize for my mistakes.\n\nGautam\n\n\n", "msg_date": "Sun, 03 May 1998 15:18:39 +0000", "msg_from": "Gautam Thaker <[email protected]>", "msg_from_op": true, "msg_subject": "on patch for close_ps() func. in geo_ops.c" }, { "msg_contents": "> Being a new comer to postgres hacking I am afraid\n> I did not test enough with regards to the previous patch I had sent\n> that attempted to fix the \"##\" operator for point to a line segment.\n> My changes fixed the problems I had seen, but the\n> regression for geometry now fails with the error:\n> \n> regression=> SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest\n> regression-> FROM LSEG_TBL l, POINT_TBL p;\n> PQexec() -- Request was sent to backend, but backend closed the \n> channel before responding.\n> \n> Clearly I have to do more testing and fixing. I will do so, do the\n> regression tests etc. before claiming to fix anything. Sorry about \n> this folks, I am new to this and apologize for my mistakes.\n\nEveryone is or was new to Postgres development at one time or another.\nWe have several months until the next release to work through the fixes\nand features you want to add, so there is no problem with this.\n\nAs you gain experience with developing patches and fixes for Postgres,\nyou will hopefully find ways to test and submit the patches in the most\nreliable manner, but _we all_ have had to go through that learning\nperiod (well, OK, we are all still learning about that :)\n\nAnyway, don't get discouraged. Let me know when you have some more\npatches/fixes to apply, and I can help test them and package them for\nthe source tree.\n\nThe only unsuccessful code developers are the ones who submit a patch\nand then walk away; stick with the problem and you'll be making a very\nvaluable contribution. I'll help where you need it.\n\nTalk to you soon, and we're looking forward to more patches...\n\n - Tom\n", "msg_date": "Tue, 05 May 1998 16:29:42 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: on patch for close_ps() func. in geo_ops.c" }, { "msg_contents": "> I have gone back and done more coding and more testing. The regression\n> test for geometry no longer causes the back end to core dump. The \n> abort was not happening the \"close_ps()\" function that I had hacked, \n> but is in interpt_sl() routine. This routine dumps core if asked to \n> find an intersection between a line segment and a line which in fact \n> do not intersect. What I did was to fix close_ps() to not call\n> interpt_sl() with parameters that do not intersect. I handle such \n> special cases separately (and hopefully cleanly) in close_ps().\n> Please let me know what you think. 
If you think of I will clean up\n> and send proper patches (in right order this time, hopefully!)\n\nThings look good. Would it be possible to fix interpt_sl() while you are\nlooking at this? Otherwise it will lurk in the code waiting to bite\nsomeone else later. At the moment it is not directly callable as an SQL\nfunction, but could/should be now that the \"line\" type is visible to\nusers.\n\nAnyway, no need really to \"send proper patches\"; from here on how about\nsending me patches based on the last file (or patch) you sent? Use \"diff\n-c\" to generate the patch...\n\nWe should settle on an external representation for the \"line\" type;\nalthough a point/slope representation is nice and intuitive, I'd suggest\ntrying the Ax+By+C=0 representation (used internally too) since we can\nthen avoid having representation problems with vertical lines (which\nhave infinite slope). Another possibility is to use a line segment\nrepresentation (two points) but we might have to be careful about\nprecision and rounding issues.\n\nOnce things settle down a bit I'll integrate the whole thing into the\nsource tree; there are several other files which have been touched in\nthe catalog and elsewhere and we'll need to move those patches to the\ncurrent source tree and test them before committing.\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 14:45:58 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: on patch for close_ps() func. in geo_ops.c" } ]
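The guard Gautam describes, keeping close_ps() from handing interpt_sl() a segment and line that never meet, can be pictured with a small self-contained sketch: project the point onto the segment's supporting line and clamp to the endpoints, so no intersection helper is ever asked about degenerate input. All names below are illustrative; this is not the actual geo_ops.c code.

```c
#include <math.h>

typedef struct { double x, y; } Point;

/* Closest point on segment [a,b] to p, by projection plus clamping.
 * Because the answer is always taken from the segment itself, nothing
 * like interpt_sl() is ever called with non-intersecting arguments.
 * Hypothetical sketch, not the real close_ps(). */
static Point
close_point_on_seg(Point p, Point a, Point b)
{
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t;
    Point r;

    if (len2 == 0.0)
        return a;               /* degenerate segment: a == b */

    t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    if (t < 0.0)
        t = 0.0;                /* clamp onto the segment */
    if (t > 1.0)
        t = 1.0;

    r.x = a.x + t * dx;
    r.y = a.y + t * dy;
    return r;
}
```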
[ { "msg_contents": "\nHi.\nI have a suggestion. I would be interested in implementing it if ppl think\nit is a good idea. \n\nWhy not have a default location where postgres writes its log files. The\ncurrent way of doing it seems to be a little klunky. \nI have an either/or suggested fix. \na) add a switch to specify where the info and the errors files go,\n/var/log/postgres/info and maybe /var/log/postgres/errors\n\nb) write all the stuff to syslog\n\nI like b better because it would allow remote logging. One could then use\nthe features of syslog to dump the stuff they want to know about to an\napproprate log file. I believe this would also be more effective at making\nsure messages get sent when the backend crashes. I have seen a few places\nwhere the messages don't seem to get there because of buffering...\n\n-Mike\n\n\n", "msg_date": "Sun, 3 May 1998 21:31:23 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestions" }, { "msg_contents": "On Sun, 3 May 1998, Michael Richards wrote:\n\n> \n> Hi.\n> I have a suggestion. I would be interested in implementing it if ppl think\n> it is a good idea. \n> \n> Why not have a default location where postgres writes its log files. The\n> current way of doing it seems to be a little klunky. \n> I have an either/or suggested fix. \n> a) add a switch to specify where the info and the errors files go,\n> /var/log/postgres/info and maybe /var/log/postgres/errors\n> \n> b) write all the stuff to syslog\n\n\tb) is the preferred way of doing it...it should just be a matter\nof adding it to backend/utils/elog.c ...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 3 May 1998 21:50:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Suggestions" }, { "msg_contents": "> On Sun, 3 May 1998, Michael Richards wrote:\n> > \n> > Hi.\n> > I have a suggestion. I would be interested in implementing it if ppl think\n> > it is a good idea. \n> > \n> > Why not have a default location where postgres writes its log files. The\n> > current way of doing it seems to be a little klunky. \n> > I have an either/or suggested fix. \n> > a) add a switch to specify where the info and the errors files go,\n> > /var/log/postgres/info and maybe /var/log/postgres/errors\n> > \n> > b) write all the stuff to syslog\n> \n> \tb) is the preferred way of doing it...it should just be a matter\n> of adding it to backend/utils/elog.c ...\n> \n\nOne problem might be that postgres can write _a_ _lot_ of messages to\nthe log and I would not want to fill my /var/log partition with them as\nthis would interfere with other logging.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n", "msg_date": "Mon, 4 May 1998 12:05:41 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Suggestions" }, { "msg_contents": "On Mon, 4 May 1998, David Gould wrote:\n\n> > On Sun, 3 May 1998, Michael Richards wrote:\n> > > \n> > > Hi.\n> > > I have a suggestion. I would be interested in implementing it if ppl think\n> > > it is a good idea. 
\n> > > \n> > > Why not have a default location where postgres writes its log files. The\n> > > current way of doing it seems to be a little klunky. \n> > > I have an either/or suggested fix. \n> > > a) add a switch to specify where the info and the errors files go,\n> > > /var/log/postgres/info and maybe /var/log/postgres/errors\n> > > \n> > > b) write all the stuff to syslog\n> > \n> > \tb) is the preferred way of doing it...it should just be a matter\n> > of adding it to backend/utils/elog.c ...\n> > \n> \n> One problem might be that postgres can write _a_ _lot_ of messages to\n> the log and I would not want to fill my /var/log partition with them as\n> this would interfere with other logging.\n\n\tUse syslog.conf to redirect to a different file on a different\npartition...benefit to using syslog is that you can rotate the log files\nwihtout having to restart the postmaster ...\n\n\n", "msg_date": "Mon, 4 May 1998 15:18:15 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Suggestions" } ]
[ { "msg_contents": "Sending this to the hackers list to see if anyone knows anything about\nit. One of our developers had the problem too, but I thought it was\nfixed.\n\n> \n> Hello,\n> \n> Sorry to send you this message directly to you, but I'm currently not\n> suscribed to any psql mailing list and this is the only way I thought to\n> know if my email has been received.\n> I've sent you some messages in the past regarding the Linux fflush() problem,\n> the one that made psql receive a EPIPE signal.\n> I've downloaded the latest available version 6.3.2 and has the same\n> problem. \n> If you don't remember, there was a fflush() called in fe-misc.c, exactly\n> in pqPuts() function that receives EPIPE and it's currently not ignored.\n> I don't know if this is Linux specific or not, but it'd be great to fix\n> the problem in the distribution. Do you agree?\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 3 May 1998 21:52:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 6.3.2" }, { "msg_contents": "> > I've sent you some messages in the past regarding the Linux fflush() \n> > problem, the one that made psql receive a EPIPE signal.\n> > I've downloaded the latest available version 6.3.2 and has the \n> > same problem.\n> > If you don't remember, there was a fflush() called in fe-misc.c, \n> > exactly in pqPuts() function that receives EPIPE and it's currently \n> > not ignored.\n> > I don't know if this is Linux specific or not, but it'd be great to \n> > fix the problem in the distribution. Do you agree?\n\nYes, I recall the \"broken pipe\" problem and thought that someone had\nfixed it (most platforms didn't seem to see the problem, but Linux did).\n\nI'm not currently running v6.3.2, having rev-locked on 980408 to get\nsome development done for v6.4. Did you supply a patch to fix the\nproblem earlier?\n\n - Tom\n", "msg_date": "Mon, 04 May 1998 03:56:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 6.3.2" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n>>>> If you don't remember, there was a fflush() called in fe-misc.c, \n>>>> exactly in pqPuts() function that receives EPIPE and it's currently \n>>>> not ignored.\n\n> Yes, I recall the \"broken pipe\" problem and thought that someone had\n> fixed it (most platforms didn't seem to see the problem, but Linux did).\n\nfe-connect.c is set up to ignore SIGPIPE while trying to shut down the\nconnection. (The 6.3.2 release is broken on non-POSIX-signal platforms,\nbecause it resets SIGPIPE to SIG_DFL afterwards, which may well not be\nwhat the application wanted. I've fixed that in the version that I plan\nto submit soon.)\n\nThere is no equivalent code to ignore SIGPIPE during ordinary writes to\nthe backend. I'm hesitant to add it on the following grounds:\n 1. Speed: a write would need three kernel calls instead of one.\n 2. I'm suspicious of code that alters signal settings during normal\n operation. Especially in a library that can't know what else is\n going on in the application. Disabling the application's signal\n handler, even for short intervals, is best avoided.\n 3. It's only an issue if the backend crashes, which shouldn't happen\n anyway ... shouldn't happen anyway ... shouldn't ... 
;-)\n\nThe real question is what scenario is causing SIGPIPE to be delivered\nin the first place. A search of the pgsql-hackers archives for\n\"SIGPIPE\" yields only a mention of seeing SIGPIPE some of the time\n(not always) when trying to connect to a nonexistent database.\nIf that's what's being complained of here, I'll try to look into it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 10:56:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGPIPE gripe" }, { "msg_contents": "> > Yes, I recall the \"broken pipe\" problem and thought that someone had\n> > fixed it (most platforms didn't seem to see the problem, but Linux \n> > did).\n> fe-connect.c is set up to ignore SIGPIPE while trying to shut down the\n> connection. (The 6.3.2 release is broken on non-POSIX-signal \n> platforms, because it resets SIGPIPE to SIG_DFL afterwards, which may \n> well not be what the application wanted. I've fixed that in the \n> version that I plan to submit soon.)\n> There is no equivalent code to ignore SIGPIPE during ordinary writes \n> to the backend. I'm hesitant to add it on the following grounds:\n> 1. Speed: a write would need three kernel calls instead of one.\n> 2. I'm suspicious of code that alters signal settings during normal\n> operation. Especially in a library that can't know what else is\n> going on in the application. Disabling the application's signal\n> handler, even for short intervals, is best avoided.\n> 3. It's only an issue if the backend crashes, which shouldn't happen\n> anyway ... shouldn't happen anyway ... shouldn't ... ;-)\n> The real question is what scenario is causing SIGPIPE to be delivered\n> in the first place. A search of the pgsql-hackers archives for\n> \"SIGPIPE\" yields only a mention of seeing SIGPIPE some of the time\n> (not always) when trying to connect to a nonexistent database.\n> If that's what's being complained of here, I'll try to look into it.\n\ngolem$ psql nada\nConnection to database 'nada' failed.\nFATAL 1: Database nada does not exist in pg_database\ngolem$ psql nada\nBroken pipe\ngolem$\n\nThis is on a Linux box with Postgres code frozen on 980408. I assume\nthat full v6.3.2 exhibits the same...\n\n - Tom\n", "msg_date": "Mon, 04 May 1998 15:13:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe" }, { "msg_contents": "I said:\n> The real question is what scenario is causing SIGPIPE to be delivered\n> in the first place. A search of the pgsql-hackers archives for\n> \"SIGPIPE\" yields only a mention of seeing SIGPIPE some of the time\n> (not always) when trying to connect to a nonexistent database.\n\nOK, I've been able to reproduce this; I understand the problem and\nI have a proposed fix.\n\nHere's the scenario. 
On the server side, this happens:\n\n\tPostmaster receives new connection request from client\n\n\t(possible authentication cycle here)\n\n\tPostmaster sends \"AUTH OK\" to client\n\n\tPostmaster forks backend\n\n\tBackend discovers that database name is invalid\n\n\tBackend sends error message\n\n\tBackend closes connection and exits\n\nMeanwhile, once the client receives the \"AUTH OK\" it initiates \nan empty query cycle (which is commented as intending to discover\nwhether the database exists!):\n\n\t...\n\n\tClient receives \"AUTH_OK\"\n\n\tClient sends \"Q \" query\n\n\tClient waits for response\n\nThe problem, of course, is that if the backend manages to exit\nbefore the client gets to send its empty query, then the client\nis writing on a closed connection. Boom, SIGPIPE.\n\nI thought about hacking around this by having the postmaster check\nthe validity of the database name before it does the authorization\ncycle. But that's a bad idea; first because it allows unauthorized\nusers to probe the validity of database names, and second because\nit only fixes this particular instance of the problem. The general\nproblem is that the FE/BE protocol does not make provision for errors\nreported by the backend during startup. ISTM there are many ways in\nwhich the BE might fail during startup, not all of which could\nreasonably be checked in advance by the postmaster.\n\nSo ... since we're altering the protocol anyway ... the right fix is\nto alter the protocol a little more. Remember that \"Z\" message that\nthe backend is now sending at the end of every query cycle? What\nwe ought to do is make the BE send \"Z\" at completion of startup,\nas well. (In other words, \"Z\" will really mean \"Ready for Query\"\nrather than \"Query Done\". This is actually easier to implement in\npostgres.c than the other way.) Now the client's startup procedure\nlooks like\n\n\t...\n\n\tClient receives \"AUTH_OK\"\n\n\tClient waits for \"Z\" ; if get \"E\" instead, BE startup failed.\n\nI suspect it's not really necessary to do an empty query after this,\nbut we may as well leave that in there for additional reliability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 12:07:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGPIPE gripe " }, { "msg_contents": "> \tClient receives \"AUTH_OK\"\n> \n> \tClient waits for \"Z\" ; if get \"E\" instead, BE startup failed.\n> \n> I suspect it's not really necessary to do an empty query after this,\n> but we may as well leave that in there for additional reliability.\n\nI say go without the extra query. We can always add it later if there\nis a problem. Backend startup time should be as fast as possible,\nespecially for short requests like www.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 4 May 1998 12:34:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I suspect it's not really necessary to do an empty query after this,\n>> but we may as well leave that in there for additional reliability.\n\n> I say go without the extra query. We can always add it later if there\n> is a problem. Backend startup time should be as fast as possible,\n\nGood point. 
OK, I'll leave the empty-query code in fe-connect.c,\nbut ifdef it out, and we'll see if anyone has any problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 12:44:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe " }, { "msg_contents": "> > > I've sent you some messages in the past regarding the Linux fflush() \n> > > problem, the one that made psql receive a EPIPE signal.\n> > > I've downloaded the latest available version 6.3.2 and has the \n> > > same problem.\n> > > If you don't remember, there was a fflush() called in fe-misc.c, \n> > > exactly in pqPuts() function that receives EPIPE and it's currently \n> > > not ignored.\n> > > I don't know if this is Linux specific or not, but it'd be great to \n> > > fix the problem in the distribution. Do you agree?\n> \n> Yes, I recall the \"broken pipe\" problem and thought that someone had\n> fixed it (most platforms didn't seem to see the problem, but Linux did).\n> \n> I'm not currently running v6.3.2, having rev-locked on 980408 to get\n> some development done for v6.4. Did you supply a patch to fix the\n> problem earlier?\n> \n\n Well, kind of. I've tracked the problem down to PQexec() and suggested\nsome way to fix it.\n I've checked the current psql and the problem is when it calls to\nPQconnectdb(). At this point EPIPE isn't ignored. Inside PQconnectdb()\nthere is a call to PQexec() to see if the database exists.\n Then PQexec() calls pqPuts() and you get the broken pipe.\n Before the call to PQconnectdb in psql there isn't any call to pqsignal.\n \n Federico Schwindt\n\n\n\n \n", "msg_date": "Mon, 4 May 1998 15:15:21 -0300 (EST)", "msg_from": "Federico Schwindt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 6.3.2" }, { "msg_contents": "> So ... since we're altering the protocol anyway ... the right fix is\n> to alter the protocol a little more. Remember that \"Z\" message that\n> the backend is now sending at the end of every query cycle? What\n> we ought to do is make the BE send \"Z\" at completion of startup,\n> as well. (In other words, \"Z\" will really mean \"Ready for Query\"\n> rather than \"Query Done\". This is actually easier to implement in\n> postgres.c than the other way.) Now the client's startup procedure\n> looks like\n> \n> \t...\n> \n> \tClient receives \"AUTH_OK\"\n> \n> \tClient waits for \"Z\" ; if get \"E\" instead, BE startup failed.\n\nBE fails, client gets SIGPIPE? or client waits forever?\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Mon, 4 May 1998 12:31:42 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe" }, { "msg_contents": "On Mon, 4 May 1998, Tom Lane wrote:\n\n[snip]\n\n> Meanwhile, once the client receives the \"AUTH OK\" it initiates \n> an empty query cycle (which is commented as intending to discover\n> whether the database exists!):\n> \n> \t...\n> \n> \tClient receives \"AUTH_OK\"\n> \n> \tClient sends \"Q \" query\n> \n> \tClient waits for response\n> \n> The problem, of course, is that if the backend manages to exit\n> before the client gets to send its empty query, then the client\n> is writing on a closed connection. Boom, SIGPIPE.\n\n[snip]\n\n> So ... since we're altering the protocol anyway ... 
the right fix is\n> to alter the protocol a little more. Remember that \"Z\" message that\n> the backend is now sending at the end of every query cycle? What\n> we ought to do is make the BE send \"Z\" at completion of startup,\n> as well. (In other words, \"Z\" will really mean \"Ready for Query\"\n> rather than \"Query Done\". This is actually easier to implement in\n> postgres.c than the other way.) Now the client's startup procedure\n> looks like\n> \n> \t...\n> \n> \tClient receives \"AUTH_OK\"\n> \n> \tClient waits for \"Z\" ; if get \"E\" instead, BE startup failed.\n\nThis sounds fair enough. Infact we could then throw a more meaningful\nerror message when this occurs.\n\n> I suspect it's not really necessary to do an empty query after this,\n> but we may as well leave that in there for additional reliability.\n\nIn JDBC, I replaced (back in 6.2) the empty query with one to get the\ncurrent DateStyle (which the driver then uses to handle dates correctly).\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Mon, 4 May 1998 20:51:06 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe " }, { "msg_contents": "[email protected] (David Gould) writes:\n>> So ... since we're altering the protocol anyway ... the right fix is\n>> to alter the protocol a little more.\n>> \n>> Client waits for \"Z\" ; if get \"E\" instead, BE startup failed.\n\n> BE fails, client gets SIGPIPE? or client waits forever?\n\nNeither: the client detects EOF on its input and realizes that the\nbackend failed. Already done and tested.\n\n(SIGPIPE is only for *write* on a closed channel, not *read*.\nRead just returns EOF.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 17:16:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGPIPE gripe " } ]
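The revised startup handshake Tom describes reduces, on the client side, to blocking until the backend announces readiness. Below is a simplified sketch of that wait; real FE/BE messages carry bodies that must be consumed and notices can arrive first, so this is illustrative, not the actual fe-connect.c code.

```c
#include <stdio.h>

/* Illustrative client-side wait after AUTH_OK under the revised
 * protocol: 'Z' means the backend is ready for queries, 'E' means
 * startup failed, and EOF means the backend exited without either,
 * which is exactly the case that used to end in SIGPIPE. */
int
wait_for_ready(FILE *be_stream)
{
    int c;

    while ((c = getc(be_stream)) != EOF)
    {
        if (c == 'Z')
            return 0;           /* ready for query */
        if (c == 'E')
            return -1;          /* backend startup failed */
        /* other startup traffic (e.g. 'N' notices) would be
         * parsed and skipped here in a real implementation */
    }
    return -1;                  /* EOF: backend died before ready */
}
```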
[ { "msg_contents": "If I change a file in my working copy of the cvs tree, is there a way to\ndiscard my changes and re-sync with the main tree, short of removing the\nfiles I have changed, or making a diff and patching with -R?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 4 May 1998 00:03:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cvs question" }, { "msg_contents": "On Mon, 4 May 1998, Bruce Momjian wrote:\n\n> If I change a file in my working copy of the cvs tree, is there a way to\n> discard my changes and re-sync with the main tree, short of removing the\n> files I have changed, or making a diff and patching with -R?\n\n\tNot that I am personally aware of...\n\n\n", "msg_date": "Mon, 4 May 1998 11:37:00 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs question" }, { "msg_contents": "> \n> On Mon, 4 May 1998, Bruce Momjian wrote:\n> \n> > If I change a file in my working copy of the cvs tree, is there a way to\n> > discard my changes and re-sync with the main tree, short of removing the\n> > files I have changed, or making a diff and patching with -R?\n> \n> \tNot that I am personally aware of...\n> \n> \n> \n\nI have written a script to do an 'cvs -q -n update pgsql', get the file\nnames beginning with 'M', remove those files, and then do a normal\nupdate.\n\nSeems very strange, but it works. Still looking for a better way.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 4 May 1998 11:42:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] cvs question" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nThen <[email protected]> spoke up and said:\n> I have written a script to do an 'cvs -q -n update pgsql', get the file\n> names beginning with 'M', remove those files, and then do a normal\n> update.\n\nThat's better than the \"cannonical\" method: cvs co pgsql\n- -- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\n\niQBVAwUBNU3jYIdzVnzma+gdAQGwggIAl6XEJ8F32P9ND09ZTLJrk3a4JAnAFmX6\nesjzA5COPucpDCcmMKQdU0p5bslUv1PLiWaaLJi0PzqCz41YojG+Qw==\n=KS1P\n-----END PGP SIGNATURE-----\n\n", "msg_date": "4 May 1998 11:48:49 -0400", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs question" } ]
[ { "msg_contents": "Can I assume that this as well as the text->varchar will be fixed in\n6.4?\nIf anyone needs any help with this I'm open (it'll require some serious\nhand holding though, and flowers), Thomas?!.\n\nadserver=> select NOW();\nnow \n----------------------\n1998-05-04 10:03:29-05\n(1 row)\n\nadserver=> select NOW()::DATETIME;\ndatetime \n----------------------------\nMon May 04 10:03:40 1998 CDT\n(1 row)\n\nadserver=> select NOW()::DATETIME::TIMESTAMP;\nERROR: function datetime_stamp(datetime) does not exist\n\n\t--DEJ\n", "msg_date": "Mon, 4 May 1998 11:08:18 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Auto Type conversion" }, { "msg_contents": "> Can I assume that this as well as the text->varchar will be fixed in\n> 6.4?\n\ntext->varchar is likely to be addressed (as well as other string issues\nsuch as ensuring correct maximum length in target columns).\n\n> adserver=> select NOW()::DATETIME::TIMESTAMP;\n> ERROR: function datetime_stamp(datetime) does not exist\n\nHmm. I wrote most of the routines you might want to go _to_ datetime,\nbut did not fully populate the functions to go _away_ from datetime. For\ntimestamp in particular, I didn't want to spend the time, since I was\nplanning on replacing timestamp with datetime sometime soon.\n\nHowever, I haven't taken that step yet because:\n\n1) I think that the current datetime implementation makes more sense\nthan the SQL92 specification for timestamp (of course, I wrote it so I'm\na bit biased :)\n\n2) imho implementing _full_ SQL92 timestamp behavior is a waste of time\n(damaged functionality wrt datetime and bizarre syntax, usage, and\nbehavior, among other reasons).\n\n3) others may have a strong opinion that a _full_ SQL92 timestamp is\nimportant (I would hope that they have a real need for it, rather than\nit being a \"well, it should\" argument because afaik no one actually uses\nthe most arcane SQL92 features of timestamp, since they make little\nsense).\n\n4) I'm not likely to be willing to support a damaged form of\ndatetime/timestamp at the expense of a full-featured datetime, but the\nproject might decide to head that direction.\n\nMy feeling is that the SQL92 form of timestamp is a mish-mash of\nrequirements and features to accomodate existing database products.\nStarting from scratch, no one would have come close to the SQL92\nstandard for this. The datetime type is more in keeping with how date\nand time actually behave, and is what timestamp should be.\n\nAnyway, a discussion of this may be in order. Anyone??\n\n - Tom\n", "msg_date": "Mon, 04 May 1998 16:52:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Auto Type conversion" } ]
[ { "msg_contents": "I don't see one, but I think there ought to be one.\n\nMy revised libpq managed to pass the regression tests\neven though PQfn() was broken; I didn't discover that\nuntil I tried to run the large-object example in\nsrc/test/example/testlo.c. (Perhaps that file could\nbe converted into a regression test with little effort?)\n\nBTW, src/test/example/testlo2.c looks like a waste of\ndisk space. It seems to be an old version of testlo.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 12:37:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "No large-object test in regression suite?" } ]
[ { "msg_contents": "I am running up the new version of posgresql on my 10.20 box. \n\nI am compiling with gcc.2.7.2\nI am using the -fPIC -DPIC -fno-gnu-linker flags but the weirdest thing\nhappens with the front end. \n\n/usr/lib/dld.sl: Can't open shared library: ../../interfaces/libpq/libpq.sl\n/usr/lib/dld.sl: No such file or directory\n\nOddly enough it works fine from the source tree.\n\ndo-de do do.\n\n\n", "msg_date": "Mon, 4 May 1998 11:05:02 -0700", "msg_from": "Donald Delmar Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql6.3.2 libdld and the twilight zone." }, { "msg_contents": "Donald Delmar Davis <[email protected]> writes:\n> I am running up the new version of posgresql on my 10.20 box. \n> /usr/lib/dld.sl: Can't open shared library: ../../interfaces/libpq/libpq.sl\n> /usr/lib/dld.sl: No such file or directory\n> Oddly enough it works fine from the source tree.\n\nThe problem is that you don't have a good path to the installed\nlibpq.sl. The HP executables created by the 6.3.2 release only have the\nrelative path that you see above; if you're not in a directory such\nthat that relative path finds libpq.sl, you lose.\n\nQuickest workaround is to set SHLIB_PATH environment variable to\npoint to the installed lib directory. (You might also have to\ntweak the executables to get them to pay attention to SHLIB_PATH;\nsee chatr(1) to find out how to check and change this setting.)\n\nI submitted HPUX patches a week or two ago that include a real fix\nfor this, namely embedding the name of the installed lib directory\ninto the executables so that they work without needing SHLIB_PATH.\nI dunno if these patches have gotten into the current snapshots,\nbut you can look in the pgsql-patches mail list archive for them.\n\nBTW you also need to make sure libpq.sl has permissions 555.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 17:10:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql6.3.2 libdld and the twilight zone. " } ]
[ { "msg_contents": "> > Can I assume that this as well as the text->varchar will be fixed in\n> > 6.4?\n> \n> text->varchar is likely to be addressed (as well as other string\n> issues\n> such as ensuring correct maximum length in target columns).\n> \n> > adserver=> select NOW()::DATETIME::TIMESTAMP;\n> > ERROR: function datetime_stamp(datetime) does not exist\n> \n> Hmm. I wrote most of the routines you might want to go _to_ datetime,\n> but did not fully populate the functions to go _away_ from datetime.\n> For\n> timestamp in particular, I didn't want to spend the time, since I was\n> planning on replacing timestamp with datetime sometime soon.\nIs there an easy way to format the output for the DATETIME datatype on a\nper query basis. I really like the DATETIME functionality, but there\nare times when the TIMESTAMP output format would be more convenient.\n\n> However, I haven't taken that step yet because:\n> \n> 1) I think that the current datetime implementation makes more sense\n> than the SQL92 specification for timestamp (of course, I wrote it so\n> I'm\n> a bit biased :)\n> \n> 2) imho implementing _full_ SQL92 timestamp behavior is a waste of\n> time\n> (damaged functionality wrt datetime and bizarre syntax, usage, and\n> behavior, among other reasons).\nWhat is the bizarre/archaic functionality? I would vote for\ncompatibility unless it introduces some huge programming concerns.\n\n> 3) others may have a strong opinion that a _full_ SQL92 timestamp is\n> important (I would hope that they have a real need for it, rather than\n> it being a \"well, it should\" argument because afaik no one actually\n> uses\n> the most arcane SQL92 features of timestamp, since they make little\n> sense).\nWell my argument is a 'well, it should'. But I'm willing to help, if at\nall possible, to reach the idea I express.\n\n> 4) I'm not likely to be willing to support a damaged form of\n> datetime/timestamp at the expense of a full-featured datetime, but the\n> project might decide to head that direction.\nWhy not implement timestamp as a different type interface to the same\nimplementation? Unless the SQL92 standard conflicts directly with the\ncurrent implementation of DATETIME, in which case I suggest leaving them\nseparate and implement good CAST between them.\n\n> My feeling is that the SQL92 form of timestamp is a mish-mash of\n> requirements and features to accomodate existing database products.\n> Starting from scratch, no one would have come close to the SQL92\n> standard for this. The datetime type is more in keeping with how date\n> and time actually behave, and is what timestamp should be.\nWell, if we implement the standard as it stands as TIMESTAMP and leave\nthe current DATETIME functionality and implement auto-typecasting\nbetween the two then we get the best of both worlds, as long as it\ndoesn't take 500 man-hours to implement.\n\n> Anyway, a discussion of this may be in order. Anyone??\nOoh, Ooh, me, me. \n\n> - Tom\n\t\t-DEJ\n", "msg_date": "Mon, 4 May 1998 13:26:28 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Auto Type conversion" } ]
[ { "msg_contents": "\n> Hi!\n> \n> Is there a way excluding temp tables to make a COUNT (DISTINCT xxx)\n> query?\n> \n> Thanx for the attention!\n> \n> \tMarin\n> \nI'd like to know as well. I think that someone was trying to implement\nsub-selects in aggregate functions, did that get done? \n select distinct count(col_name) from table_name;\ndoesn't work BTW, it gives that number of rows in the table even if\ncol_name has duplicates.\n\n\t\t-DEJ\n", "msg_date": "Mon, 4 May 1998 14:32:39 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] COUNT (DISTINCT xxx) ?" } ]
[ { "msg_contents": "> Marc Howard Zuckman wrote:\n> > On Wed, 29 Apr 1998, Marc Howard Zuckman wrote:\n> > >\n> > > What's the best way to halt the query? Since I have a backup done\n> > > just before initiating the delete, I can just reload the database\n> > > if necessary.\n> > \n> > I was too pessimistic. Ten minutes after I posted this message, the\n> > query terminated normally\n> \n> But it is a very good question. I was reluctant to ask about it, but\n> since it already happened... How do I cancel gracefully?\n> \n> --Gene\n> \nWhy don't we ask the experts?\n\n\t\t-DEJ\n", "msg_date": "Mon, 4 May 1998 15:41:56 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Best way to halt an unending query." }, { "msg_contents": ">> But it is a very good question. I was reluctant to ask about it, but\n>> since it already happened... How do I cancel gracefully?\n\nRight now, you don't. We're working on adding a cancel facility though.\nI've got the libpq (client) side done, now Bruce just has to make the\nbackend respond ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 May 1998 17:24:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [QUESTIONS] Best way to halt an unending query. " } ]
[ { "msg_contents": "OK to send an e-mail to [email protected]? \n", "msg_date": "Mon, 4 May 1998 17:21:41 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "OK to send e-mail?" }, { "msg_contents": "[email protected] writes:\n\n> OK to send an e-mail to [email protected]? \n\nIntruder alert!\n\nCould someone forge an error reply to that message, so that our list\ndoesn't get put on their spam list? (And, of course, if someone could\ntrack them down physically and break their legs, so much the better!)\n\nActually, now that this has started, the proper way to go may be to\nstart blocking postings to the lists from anyone not on them. On the\ndown side, this means that people must send their postings from the\naddress they're subscribed as.\n\n-tih (who hopes all UCE senders die slow and painful deaths)\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "05 May 1998 07:19:25 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "On 5 May 1998, Tom Ivar Helbekkmo wrote:\n\n> [email protected] writes:\n> \n> > OK to send an e-mail to [email protected]? \n> \n> Intruder alert!\n> \n> Could someone forge an error reply to that message, so that our list\n> doesn't get put on their spam list? (And, of course, if someone could\n> track them down physically and break their legs, so much the better!)\n> \n> Actually, now that this has started, the proper way to go may be to\n> start blocking postings to the lists from anyone not on them. On the\n> down side, this means that people must send their postings from the\n> address they're subscribed as.\n\nThe other downside is that anyone in need of help has to subscribe before\nthey can ask their question.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity!\n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n\n", "msg_date": "Tue, 5 May 1998 11:36:56 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "Tom Ivar Helbekkmo wrote:\n> \n> [email protected] writes:\n> \n> > OK to send an e-mail to [email protected]? \n> \n> Intruder alert!\n> \n> Could someone forge an error reply to that message, so that our list\n> doesn't get put on their spam list? (And, of course, if someone could\n> track them down physically and break their legs, so much the better!)\n> \n> Actually, now that this has started, the proper way to go may be to\n> start blocking postings to the lists from anyone not on them. On the\n> down side, this means that people must send their postings from the\n> address they're subscribed as.\n\nWhy don't we make it known (In the periodic developers FAQ posting?)\nthat we do not accept unsolicited email and that we will charge a fee\n($50 per line per subscriber :). I for one would be more than happy\nto do the detective work to track these down. 
As for what to do with\nthe money -- perhaps we should see how much we get first.\n\nIf we want to do this, we should pick some price and mention it in our\nFAQ.\n\nOcie\n\n", "msg_date": "Tue, 5 May 1998 10:06:07 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "On 5 May 1998, Tom Ivar Helbekkmo wrote:\n\n> [email protected] writes:\n> \n> > OK to send an e-mail to [email protected]? \n> \n> Intruder alert!\n> \n> Could someone forge an error reply to that message, so that our list\n> doesn't get put on their spam list? (And, of course, if someone could\n> track them down physically and break their legs, so much the better!)\n> \n> Actually, now that this has started, the proper way to go may be to\n> start blocking postings to the lists from anyone not on them. On the\n> down side, this means that people must send their postings from the\n> address they're subscribed as.\n\n\tEven better, I have to get my next set of filters in place...that\ndoesn't allow connects from SPAM sites :)\n\n> -tih (who hopes all UCE senders die slow and painful deaths)\n\n\tMakes two of us, but closing the lists isn't the way to go...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:23:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "On Tue, 5 May 1998 [email protected] wrote:\n\n> Why don't we make it known (In the periodic developers FAQ posting?)\n> that we do not accept unsolicited email and that we will charge a fee\n> ($50 per line per subscriber :). I for one would be more than happy\n> to do the detective work to track these down. As for what to do with\n> the money -- perhaps we should see how much we get first.\n> \n> If we want to do this, we should pick some price and mention it in our\n> FAQ.\n\n\t*rofl* I like it...I could never figure out whether or not this\nis something that *is* collectable...I see it in ppls sig's\nperiodically...\n\n\tOcie...write up a proposal and let us know :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:24:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "[email protected] writes:\n> Why don't we make it known (In the periodic developers FAQ posting?)\n> that we do not accept unsolicited email and that we will charge a fee\n> ($50 per line per subscriber :). I for one would be more than happy\n> to do the detective work to track these down. As for what to do with\n> the money -- perhaps we should see how much we get first.\n\nThis won't work. We have a similar policy with the Debian project. But we\nhave yet to see money. No lawyer will help you there. And the spammers won't\nsend money because they like the policy. :-(\n\nI think the best way is to install anti-spamming software.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! 
| Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 6 May 1998 09:08:38 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "Michael Meskes wrote:\n> \n> [email protected] writes:\n> > Why don't we make it known (In the periodic developers FAQ posting?)\n> > that we do not accept unsolicited email and that we will charge a fee\n> > ($50 per line per subscriber :). I for one would be more than happy\n> > to do the detective work to track these down. As for what to do with\n> > the money -- perhaps we should see how much we get first.\n> \n> This won't work. We have a similar policy with the Debian project. But we\n> have yet to see money. No lawyer will help you there. And the spammers won't\n> send money because they like the policy. :-(\n\nNo, but if we can figure out who did it and send them a bill for our\nservices rendered (reading their spam), which they solicited by\nposting it (as per the conditions in our FAQ, then we can turn them\nover to a collection agency if they don't come through.\n\nOf course tracking down the poster is a trick in the first place.\n\n> \n> I think the best way is to install anti-spamming software.\n\nI think any such method can be circumvented. The only long-term\nsolution is to make spamming unprofitable. One thing that would go a\nlong way is to reverse-verify the sender's address. If the sender has\nforged this, the mail is dropped and we get the sound of one spam\nclapping :) The problem is that most sites nowadays won't verify email\naddresses. This sounds like a good project for a free relational\ndatabase. Anybody know of any good ones out there? :)\n\nOcie\n", "msg_date": "Wed, 6 May 1998 10:49:51 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "On Wed, 6 May 1998 [email protected] wrote:\n\n> I think any such method can be circumvented. The only long-term\n> solution is to make spamming unprofitable. One thing that would go a\n> long way is to reverse-verify the sender's address. If the sender has\n> forged this, the mail is dropped and we get the sound of one spam\n> clapping :) The problem is that most sites nowadays won't verify email\n> addresses. This sounds like a good project for a free relational\n> database. Anybody know of any good ones out there? :)\n\n\tActually, I currently have two anti-spam filters in place...one of\nwhich verifies the domain of the poster...its not 100% perfect, but it\ndoes a pretty good job of it...\n\n\n", "msg_date": "Wed, 6 May 1998 14:01:37 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "Thus spake [email protected]\n> solution is to make spamming unprofitable. One thing that would go a\n> long way is to reverse-verify the sender's address. If the sender has\n> forged this, the mail is dropped and we get the sound of one spam\n> clapping :) The problem is that most sites nowadays won't verify email\n> addresses. This sounds like a good project for a free relational\n> database. Anybody know of any good ones out there? :)\n\nI am running software that allows me to check for reverse DNS on a\nconnection and refuse SMTP connections if they don't have any. 
In\naddition I can refuse email from known spam sites and even from sites\nthat use known spammers for their DNS so they can't get throwaway\ndomains and drop them before the Internic kills them for non-payment.\n\nAt home I implement this fully and find it very satisfying. A lot of\nspam gets dropped. I tried to do something similar at vex.net, my\nISP, but the testing I did suggested that customers just wouldn't\nstand for it. There are a lot of broken sites without proper reverse\nDNS and they just refuse to fix themselves. I suspect if we had to\nverify addresses we would be hearing echoes up and down our password\nfile.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 6 May 1998 16:46:56 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n\n> Thus spake [email protected]\n> > solution is to make spamming unprofitable. One thing that would go a\n> > long way is to reverse-verify the sender's address. If the sender has\n> > forged this, the mail is dropped and we get the sound of one spam\n> > clapping :) The problem is that most sites nowadays won't verify email\n> > addresses. This sounds like a good project for a free relational\n> > database. Anybody know of any good ones out there? :)\n\nSomebody out there wrote a program that puts new emailers mail into purgatory\nuntil they respond to an automated request to verify. After a while, purgatory\ngets purged. If they reply, tho, then the message is released.\n\n", "msg_date": "Wed, 06 May 1998 15:47:08 -0700", "msg_from": "Bruce Korb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "> \n> D'Arcy J.M. Cain wrote:\n> \n> > Thus spake [email protected]\n> > > solution is to make spamming unprofitable. One thing that would go a\n> > > long way is to reverse-verify the sender's address. If the sender has\n> > > forged this, the mail is dropped and we get the sound of one spam\n> > > clapping :) The problem is that most sites nowadays won't verify email\n> > > addresses. This sounds like a good project for a free relational\n> > > database. Anybody know of any good ones out there? :)\n> \n> Somebody out there wrote a program that puts new emailers mail into purgatory\n> until they respond to an automated request to verify. After a while, purgatory\n> gets purged. If they reply, tho, then the message is released.\n\nThis sounds interesting.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 6 May 1998 19:11:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "Bruce Korb wrote:\n> \n> D'Arcy J.M. Cain wrote:\n> \n> > Thus spake [email protected]\n> > > solution is to make spamming unprofitable. One thing that would go a\n> > > long way is to reverse-verify the sender's address. If the sender has\n> > > forged this, the mail is dropped and we get the sound of one spam\n> > > clapping :) The problem is that most sites nowadays won't verify email\n> > > addresses. 
This sounds like a good project for a free relational\n> > > database. Anybody know of any good ones out there? :)\n> \n> Somebody out there wrote a program that puts new emailers mail into purgatory\n> until they respond to an automated request to verify. After a while, purgatory\n> gets purged. If they reply, tho, then the message is released.\n\n\nThat doesn't sound too bad, especially for a mailing list like this.\nWe could even \"prime\" it by adding the current subscribers to the\nlist.\n\nOcie\n\n", "msg_date": "Wed, 6 May 1998 16:17:21 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" }, { "msg_contents": "On Wed, 6 May 1998, D'Arcy J.M. Cain wrote:\n\n> At home I implement this fully and find it very satisfying. A lot of\n> spam gets dropped. I tried to do something similar at vex.net, my\n> ISP, but the testing I did suggested that customers just wouldn't\n> stand for it. \n\n\tI have the anti-relay spam filter and the reverse DNS ones\ninstalled on Hub.Org and at work (work has ~5000 mail users, no complaints\nafter a year being in place)...the only one I haven't added to Hub.Org yet\nis the 'spam list', which I do have at work. Next one to move over, I\nguess...\n\n> There are a lot of broken sites without proper reverse\n> DNS and they just refuse to fix themselves. I suspect if we had to\n> verify addresses we would be hearing echoes up and down our password\n> file.\n\t\n\tI don't find it too bad...complaints from our users are pretty\nmuch zero (even from the professors) as far as email and filtering is\nconcerned...most ppl are happy because spamming is reduced...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 May 1998 22:07:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OK to send e-mail?" } ]
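The reverse-DNS refusal described in this thread comes down to a single lookup at connection time; below is a generic sketch of that test, not Marc's or D'Arcy's actual filter.

```c
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* If the connecting peer's address has no PTR record, gethostbyaddr()
 * fails and the SMTP connection can be refused before any mail or
 * sender address is even seen. */
int
has_reverse_dns(struct in_addr peer)
{
    struct hostent *he;

    he = gethostbyaddr((const char *) &peer, sizeof(peer), AF_INET);
    return he != NULL;          /* NULL: no reverse DNS, refuse */
}
```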
[ { "msg_contents": "We were under the impression that the 6.3 resolved the index corruption\nissue with regard to large tables(BTP_CHAIN). The problem still exist\nand our table indices are still corrupting every night. We are very\ndesperate and need your help to resolve this issue. Our database size is\nabout 80M and growing. We have one particular table (User_Account) about\n12M, that is heavily accessed and updated. Almost every time the number\nof simultaneous access increases, the index on this table corrupts. What\ntype of information can I present to you to help resolve this issue.\n\nSincerely \n\nAli Ebrahimi\nAtlas Field Support Manager\n", "msg_date": "Mon, 4 May 1998 15:39:29 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Postgres 6.3 still has problem with the index" }, { "msg_contents": "On Mon, 4 May 1998 [email protected] wrote:\n\n> We were under the impression that the 6.3 resolved the index corruption\n> issue with regard to large tables(BTP_CHAIN). The problem still exist\n> and our table indices are still corrupting every night. We are very\n> desperate and need your help to resolve this issue. Our database size is\n> about 80M and growing. We have one particular table (User_Account) about\n> 12M, that is heavily accessed and updated. Almost every time the number\n> of simultaneous access increases, the index on this table corrupts. What\n> type of information can I present to you to help resolve this issue.\n\n\tv6.3 or v6.3.2?\n\n\tI had a problem with this on our server for the longest time, to\nthe extent that I added code that tells you which index is corrupted (if\nit doesn't tell you, you are running an older version)...\n\n\tv6.3.2, I believe, cleared it up...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 4 May 1998 20:48:06 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres 6.3 still has problem with the index" } ]
[ { "msg_contents": "I fully support Tom's opinion on the SQL92 timestamp specification.\nI know that working with datetime with commercial DBMS's is a major pain,\nbecause the syntax varies with context.\ne.g. select versus create table default value syntax.\nI therefore support a consistent datetime implementation that might\nnot conform to SQL92 100%, even though I traditionally tend to say \n\"stick to the standard\".\n\nAndreas\n\n> Can I assume that this as well as the text->varchar will be fixed in\n> 6.4?\n\ntext->varchar is likely to be addressed (as well as other string issues\nsuch as ensuring correct maximum length in target columns).\n\n> adserver=> select NOW()::DATETIME::TIMESTAMP;\n> ERROR: function datetime_stamp(datetime) does not exist\n\nHmm. I wrote most of the routines you might want to go _to_ datetime,\nbut did not fully populate the functions to go _away_ from datetime. For\ntimestamp in particular, I didn't want to spend the time, since I was\nplanning on replacing timestamp with datetime sometime soon.\n\nHowever, I haven't taken that step yet because:\n\n1) I think that the current datetime implementation makes more sense\nthan the SQL92 specification for timestamp (of course, I wrote it so I'm\na bit biased :)\n\n2) imho implementing _full_ SQL92 timestamp behavior is a waste of time\n(damaged functionality wrt datetime and bizarre syntax, usage, and\nbehavior, among other reasons).\n\n3) others may have a strong opinion that a _full_ SQL92 timestamp is\nimportant (I would hope that they have a real need for it, rather than\nit being a \"well, it should\" argument because afaik no one actually uses\nthe most arcane SQL92 features of timestamp, since they make little\nsense).\n\n4) I'm not likely to be willing to support a damaged form of\ndatetime/timestamp at the expense of a full-featured datetime, but the\nproject might decide to head that direction.\n\nMy feeling is that the SQL92 form of timestamp is a mish-mash of\nrequirements and features to accomodate existing database products.\nStarting from scratch, no one would have come close to the SQL92\nstandard for this. The datetime type is more in keeping with how date\nand time actually behave, and is what timestamp should be.\n\nAnyway, a discussion of this may be in order. Anyone??\n\n - Tom\n\n\n", "msg_date": "Tue, 5 May 1998 10:24:20 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Auto Type conversion" } ]
[ { "msg_contents": "> Hi.\n> I have a suggestion. I would be interested in implementing it if ppl think\n> it is a good idea. \n> \n> Why not have a default location where postgres writes its log files. The\n> current way of doing it seems to be a little klunky. \n> I have an either/or suggested fix. \n> a) add a switch to specify where the info and the errors files go,\n> /var/log/postgres/info and maybe /var/log/postgres/errors\n> \n> b) write all the stuff to syslog\n> \n\nOnly if this is all optional (through runtime switches or whatever) so we\ncan make sure it doesn't need root intervention to get postgresql installed.\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Tue, 5 May 1998 10:53:26 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Suggestions" }, { "msg_contents": "On Tue, 5 May 1998, Andrew Martin wrote:\n\n> > Hi.\n> > I have a suggestion. I would be interested in implementing it if ppl think\n> > it is a good idea. \n> > \n> > Why not have a default location where postgres writes its log files. The\n> > current way of doing it seems to be a little klunky. \n> > I have an either/or suggested fix. \n> > a) add a switch to specify where the info and the errors files go,\n> > /var/log/postgres/info and maybe /var/log/postgres/errors\n> > \n> > b) write all the stuff to syslog\n> > \n> \n> Only if this is all optional (through runtime switches or whatever) so we\n> can make sure it doesn't need root intervention to get postgresql installed.\n\n\tusing syslogd doesn't require root intervention...just have it\ndefault to daemon.notice, which will go to the 'default' syslog files, and\nlet it be changeable via command line switched for 'level' and\n'facility'...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:27:43 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Suggestions" } ]
[ { "msg_contents": "> How would one go about estimating the performance penalty for variable\n> length fields?...\n> \n> Is it dependant on the number of records?\n> In practice, is it swallowed up by the performance of the internet?\n> Exactly when is the performance hit?\n> IE Does it happen when returning those fields, or just on comparisons\n> or what?\n> What, if any, other penalties come with text fields?\n> EG In msql, they can't be indexed...\n> \n> I'm trying to decide if it's worth doing all the work of data\n> integrity and\n> such to have fixed length character fields, or if I should just make\n> 'em\n> all text and be done with it.\n> \n> Probably no more than 1000 records, but a lot of text fields, for\n> address,\n> phone, etc.\n> \n> Thanks for any discussion on this, and I'll try and collate it into\n> something suitable for adding to the docs/FAQ.\n> \nWhen the backend access a specific field it need to calculate where that\nfield is in the DISK, PAGE, and/or ROW. This means that all the fields\non a ROW after the first variable length field has the access overhead\nof resolving the length of the variable length field. This could be\ndone once or for each field accessed (I'd have to look at the code to be\nsure).\n\nYou can most likely get firm numbers from the HACKERS list.\n\n\t\t-DEJ\n", "msg_date": "Tue, 5 May 1998 12:04:17 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Pre-Estimating Penalty for variable length fields" } ]
[ { "msg_contents": "Thanks for your response.\nWe currently are using 6.3. I just looked at the www.postgresql.org and\nnoticed that under News Flash they have announced the 6.3.2. When I\nchecked the changes in 6.3.2, I did not notice any fixes with respect to\nthe Index problem. Is this not Documented? Has it fixed your problem?\nAli Ebrahimi [email protected]\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tMonday, May 04, 1998 4:48 PM\n> To:\[email protected]\n> Cc:\[email protected]\n> Subject:\tRe: [HACKERS] Postgres 6.3 still has problem with the\n> index\n> \n> On Mon, 4 May 1998 [email protected] wrote:\n> \n> > We were under the impression that the 6.3 resolved the index\n> corruption\n> > issue with regard to large tables(BTP_CHAIN). The problem still\n> exist\n> > and our table indices are still corrupting every night. We are very\n> > desperate and need your help to resolve this issue. Our database\n> size is\n> > about 80M and growing. We have one particular table (User_Account)\n> about\n> > 12M, that is heavily accessed and updated. Almost every time the\n> number\n> > of simultaneous access increases, the index on this table corrupts.\n> What\n> > type of information can I present to you to help resolve this issue.\n> \n> \tv6.3 or v6.3.2?\n> \n> \tI had a problem with this on our server for the longest time, to\n> the extent that I added code that tells you which index is corrupted\n> (if\n> it doesn't tell you, you are running an older version)...\n> \n> \tv6.3.2, I believe, cleared it up...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n", "msg_date": "Tue, 5 May 1998 10:18:11 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres 6.3 still has problem with the index" }, { "msg_contents": "On Tue, 5 May 1998 [email protected] wrote:\n\n> Thanks for your response.\n> We currently are using 6.3. I just looked at the www.postgresql.org and\n> noticed that under News Flash they have announced the 6.3.2. When I\n> checked the changes in 6.3.2, I did not notice any fixes with respect to\n> the Index problem. Is this not Documented? Has it fixed your problem?\n\nI'm not sure if it has just gone away or not, but I haven't seen in since\nupgrading to v6.3.2 ...\n\nAs for not being documented, possibly not, as it may have been a\n'secondary' bug of another bug that was fixed (ie. memory corruption)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:29:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Postgres 6.3 still has problem with the index" } ]
[ { "msg_contents": "Forwarded to HACKERS.\n\n> -----Original Message-----\n> From:\tGreg Skidmore [SMTP:[email protected]]\n> Sent:\tThursday, April 30, 1998 7:24 PM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Is there any work on Support for Foreign Key\n> Clause?\n> \n> To Anyone who knows the answer:\n> \n> In PostgreSQL 6.3, SQL that contains the \"FOREIGN KEY\" clause results\n> in\n> an error message stating that the clause has not yet been implemented.\n> Is there any current effort to implement it?\n> \n> Thank you for your help.\n> \n> Greg Skidmore\n> \n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Tue, 5 May 1998 12:19:07 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Is there any work on Support for Foreign Key Clau se?" }, { "msg_contents": "> > In PostgreSQL 6.3, SQL that contains the \"FOREIGN KEY\" clause \n> > results in an error message stating that the clause has not yet been \n> > implemented.\n\nActually a \"NOTICE\", not an error...\n\n> > Is there any current effort to implement it?\n\nNo, but it has been discussed for v6.4...\n\n - Tom\n", "msg_date": "Wed, 06 May 1998 01:44:56 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Is there any work on Support for Foreign Key Clau se?" } ]
[ { "msg_contents": "Forwarded to HACKERS.\n\n> -----Original Message-----\n> From:\tRichard W. Kruetzer [SMTP:[email protected]]\n> Sent:\tFriday, May 01, 1998 7:42 PM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Transaction Logs\n> \n> I have not been able to find out the situation regarding \"transaction\n> logs\" for Postgresql. That is:\n> \n> Does Postgresql support Transaction Logging, and if so, does this mean\n> that, for example, in a disk crash, one can recover the database UP TO\n> THE LAST SUCCESSFUL TRANSACTION? (Assuming the transaction logs are\n> on another disk...)\n> \n> Is there any documentation describing how one would do this?\n> \n> Please reply to:\n> \n> \[email protected]\n> \n> Thanks!\n> \n> Dick Kreutzer\n> AmeriCom Inc.\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Tue, 5 May 1998 12:19:33 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Transaction Logs" }, { "msg_contents": "On Tue, 5 May 1998, Jackson, DeJuan wrote:\n\n> Forwarded to HACKERS.\n\n*rofl* I just clued into these...someone else cares *sniffle*\n\n\n> \n> > -----Original Message-----\n> > From:\tRichard W. Kruetzer [SMTP:[email protected]]\n> > Sent:\tFriday, May 01, 1998 7:42 PM\n> > To:\[email protected]\n> > Subject:\t[QUESTIONS] Transaction Logs\n> > \n> > I have not been able to find out the situation regarding \"transaction\n> > logs\" for Postgresql. That is:\n> > \n> > Does Postgresql support Transaction Logging, and if so, does this mean\n> > that, for example, in a disk crash, one can recover the database UP TO\n> > THE LAST SUCCESSFUL TRANSACTION? (Assuming the transaction logs are\n> > on another disk...)\n> > \n> > Is there any documentation describing how one would do this?\n> > \n> > Please reply to:\n> > \n> > \[email protected]\n> > \n> > Thanks!\n> > \n> > Dick Kreutzer\n> > AmeriCom Inc.\n> > --\n> > Official WWW Site: http://www.postgresql.org\n> > Online Docs & FAQ: http://www.postgresql.org/docs\n> > Searchable Lists: http://www.postgresql.org/mhonarc\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:30:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [QUESTIONS] Transaction Logs" } ]
[ { "msg_contents": "Forwarded to HACKERS.\n\n> -----Original Message-----\n> From:\tLen Morgan [SMTP:[email protected]]\n> Sent:\tSaturday, May 02, 1998 8:21 AM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Identity Crisis\n> \n> I am running 6.3.2 at one of my client's sites that normally has 5\n> users\n> connected. The problem I run into is that after a couple of days, I\n> end\n> up with as many as 12-15 postgres backends running on the server. The\n> users are encountering a bug in the code and then closing the window\n> and\n> restarting the program which never closes down the connection to the\n> back end. My question is this: Is there a way to identify on the line\n> which starts up the backends, which host is starting it? In other\n> words, if I do ps -ax on the server, can I have an ip address or host\n> name show up to let me know which host is connected to which backend?\n> This will not entirely solve my problem, but it will let me know which\n> user I need to educate about the proper way to work around the program\n> bug. Currently, I just kill the processes off one by one. If\n> somebody\n> calls me in the next five minutes or so, it was a \"live\" one.\n> Otherwise, nobody missed it. Also, if I send -SIGTERM to the\n> individual\n> backend's pid, will this properly clean up any memory that is in use?\n> \n> Thanks,\n> \n> len morgan\n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Tue, 5 May 1998 12:22:37 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Identity Crisis" }, { "msg_contents": "Len Morgan writes:\n>> My question is this: Is there a way to identify on the line\n>> which starts up the backends, which host is starting it? In other\n>> words, if I do ps -ax on the server, can I have an ip address or host\n>> name show up to let me know which host is connected to which backend?\n\nHackers, if anyone does something with setting the backend process title,\nthis seems like a good idea to me.\n\nHowever, a feature that may or may not show up in 6.4 is not going to\nhelp Len with his immediate problem. Len, I'd suggest a couple of\nthings you can do today:\n\n1. netstat on your server will show open TCP connections. Look for\n connections to port 5432 at your end. If you have a lot of users\n it might be hard to spot the culprit --- but I suspect that looking\n for the machine that shows a number of open connections, not just\n one, will do it.\n\n2. If that doesn't work, but you can identify a backend process that's\n been laying around for awhile, you can use \"lsof\" to find out\n which network connection leads to that process. For example,\n I use ps to find that process 21309 is a backend, then:\n\n$ lsof -p 21309\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF INODE NAME\npostgres 21309 postgres 3u inet 0x0d2d1b00 0t0 TCP *:5432 (LISTEN)\npostgres 21309 postgres 5u inet 0x0d749000 0t302 TCP localhost:5432->localhost:2325 (ESTABLISHED)\npostgres 21309 postgres 6u inet 0x0d749000 0t302 TCP localhost:5432->localhost:2325 (ESTABLISHED)\n(lots of non-inet open files for this process snipped)\n\nSo I see the client connected to this server is at port 2325 on\nlocalhost.\n\nlsof (list open files) might already be installed on your machine,\nif not see http://www-rcd.cc.purdue.edu/abe/. 
It's an invaluable\ntool for debugging all sorts of Unix problems, well worth having\nin your sysadmin kit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 May 1998 14:23:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [QUESTIONS] Identity Crisis " } ]
[ { "msg_contents": "Forwarded to HACKERS.\n\n> -----Original Message-----\n> From:\tDaryl Sayers [SMTP:[email protected]]\n> Sent:\tMonday, May 04, 1998 8:48 PM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Insert Errors\n> \n> \n> I have Postgresql 6.3 running on Linux 2.0.33. It was working for some\n> time. I have not added anything for a while and now it dont work.\n> Here is a simple test that shows the problem.\n> \n> daryl=> create table mytest ( xx int );\n> CREATE\n> daryl=> insert into mytest values ( 100 );\n> ERROR: mytest: cannot extend\n> \n> The same error shows up in the log file.\n> Any ideas.\n> \n> \n> -- \n> Daryl Sayers Ph: (02) 9417 3788\n> Stone Group Asia Pacific Fax: (02) 9417 3741\n> Unit 20, 380 Eastern Valley Way Email: [email protected]\n> Roseville, 2069 NSW Australia WWW:\n> http://www.stonemicro.com.au\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Tue, 5 May 1998 12:39:23 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Insert Errors" } ]
[ { "msg_contents": "\nResponded to pgsql-hackers...\n\nOn Tue, 5 May 1998, Jonathan Sand wrote:\n\n> The 'create user' command allows a group to be specified. How do I create \n> a postgres group to which postgres users can belong? I've scanned the \n> lists and man pages exhaustively. Must be obvious, but I haven't figured \n> it out.\n> \n> Jonathan Sand\n> \n> [email protected]\n> \n> Hardware: n. that aspect of a computer system which can be hit.\n>\n\nJust tried this out, and we have a bug here:\n\ntemplate1=> create user tester in group pg_user;\nCREATE USER\ntemplate1=> select * from pg_group;\ngroname|grosysid|grolist\n-------+--------+-------\n(0 rows)\n\ntemplate1=> select * from pg_user; \nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 1005|t |t |t |t |********|Sat Jan 31 02:00:00 2037 AST\nscrappy| 10|t |t |t |t |********| \ntester | 1006|f |t |f |t |********| \n(3 rows)\n\ntemplate1=> insert into pg_group values ('test',0,'{scrappy}');\nERROR: pg_atoi: error in \"scrappy\": can't parse \"scrappy\"\ntemplate1=> insert into pg_group values ('test',0,'{10}');\nINSERT 18497 1\ntemplate1=> create user test in group pg_user;\nERROR: defineUser: user \"test\" has already been created\ntemplate1=> create user test in group test; \nERROR: defineUser: user \"test\" has already been created\ntemplate1=> select * from pg_user;\nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 1005|t |t |t |t |********|Sat Jan 31 02:00:00 2037 AST\nscrappy| 10|t |t |t |t |********| \ntester | 1006|f |t |f |t |********| \n(3 rows)\n\nIf I do a different usename ('beater'), it creates fine, but doesn't go \nanywhere as far as pg_group...\n\nIf nobody is working on this area of the code, I'll use it as my personal\nstarting point into it...just let me know...\n \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 17:36:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] groups of users" } ]
[ { "msg_contents": "Here is the latest I received from our Field Engineering regard to the\nIndex Corruption and the growth of our tables. I would appreciate any\nhelp to figure out why we get the index corruption and why our tables\ngrow so fast?\n\n[Ali Ebrahimi] [email protected] \n\n> \n> PG_VERSION v6.3, FreeBSD 3.0-971031-SNAP\n> \n> The 'before and after' file lists below show a fifty percent \n> increase in the size of user account during the course of one\n> day. \n> Since the database is around 300,000 records we might assume that\n> the \n> increase reflects about 150,000 added records. We are averaging\n> about \n> 25,000 completed transactions to acct_history each day. This\n> data \n> suggests about six updates to user_acct for each update to \n> acct_history which is about what we expect.\n> \n> What we don't expect is for our user_acct to grow (so much!) in\n> size \n> with simple updates.\n> \n> We also don't expect 'btree: BTP_CHAIN flag was expected'\n> errors to \n> be popping up. We have seen btree errors in both acct_history\n> and \n> user_acct. acct_history_acct_no_idx is non-unique, \n> user_acct_card_no_idx is unique, the other user_acct indexes are \n> non-unique. All indexes are btree.\n> \n> We did not see these errors until the tables grew over 80 Meg.\n> \n> In acct_history we are doing inserts only. In user_acct we are\n> \n> doing updates only.\n> \n> I estimate, at peak load, we are processing an average of five \n> transactions per second to user_acct. Spikes would probably go\n> as \n> high as 15-20 trans per second. On acct_history it would be more\n> like \n> one transaction per second. user_acct occurences outnumber \n> acct_history 5:1.\n> \n> Also notice the indices grew *after* reindexing and vacuuming.\n> We \n> did add 5000 cards today but aren't the indices suppposed to\n> update on \n> insert?\n> \n> We'd like to solve two problems here:\n> \n> 1. BTP_CHAIN errors cause system crash during peak traffic.\n> 2. Table sizes grow too much in short period of time.\n> \n> Best Regards,\n> \n> -Dave\n> \n> David Schanen : Atlas Telecom : [email protected]\n> \n> ---------- Before Reindex and Vacuum ---------\n> \n> pgsql 176611328 May 6 02:13 acct_history\n> pgsql 55787520 May 6 02:13 acct_history_acct_no_idx\n> pgsql 120594432 May 6 02:17 user_acct\n> pgsql 22855680 May 6 02:17 user_acct_acct_no_idx\n> pgsql 31547392 May 6 02:17 user_acct_card_acct_sim_idx\n> pgsql 15908864 May 6 02:17 user_acct_card_no_idx\n> pgsql 9986048 May 6 02:17 user_acct_serial_no_idx\n> pgsql 12328960 May 6 02:17 user_acct_sim_idx\n> pgsql 8192 May 6 02:13 user_acct_state\n> pgsql 16384 May 2 03:00 user_acct_state_state_idx\n> \n> ---------- After Reindex and Vacuum ---------\n> \n> pgsql 176611328 May 6 02:13 acct_history\n> pgsql 55787520 May 6 02:13 acct_history_acct_no_idx\n> pgsql 81649664 May 6 02:41 user_acct\n> pgsql 29327360 May 6 02:41 user_acct_acct_no_idx\n> pgsql 45195264 May 6 02:41 user_acct_card_acct_sim_idx\n> pgsql 18595840 May 6 02:41 user_acct_card_no_idx\n> pgsql 15212544 May 6 02:41 user_acct_serial_no_idx\n> pgsql 12566528 May 6 02:41 user_acct_sim_idx\n> pgsql 8192 May 6 02:13 user_acct_state\n> pgsql 16384 May 2 03:00 user_acct_state_state_idx\n", "msg_date": "Tue, 5 May 1998 14:16:19 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "FW: BTP_CHAIN flag was expected" }, { "msg_contents": "On Tue, 5 May 1998 [email protected] wrote:\n\n> > 1. 
BTP_CHAIN errors cause system crash during peak traffic.\n\n\tTry upgrading to v6.3.2 ... I won't guarantee it, but the\nBTP_CHAIN problem has disappeared on our system(s) since we've upgraded...\n\n> > 2. Table sizes grow too much in short period of time.\n\n\tWhat sort of transactions are being performed? If updates, then\nthe only way of shrinking the files back down again is to perform a vacuum\nperiodically... \n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 5 May 1998 18:29:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FW: BTP_CHAIN flag was expected" } ]
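In concrete terms, the periodic maintenance Marc describes is a VACUUM of the heavily-updated table; the table name is from the report above, and the options assume the usual 6.3 syntax:

    -- reclaim space left behind by UPDATEs; VERBOSE reports page counts,
    -- ANALYZE refreshes the planner's statistics at the same time
    VACUUM VERBOSE ANALYZE user_acct;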
[ { "msg_contents": "\nupdate mempayment set paywho = 'icvproxy' from do_addpayment where dappays\neq = payseqid;\nNOTICE: Non-functional update, only first update is performed\nUPDATE 31\n\nmore than one update was indeed performed..\nI beleive this has happened to me before..\n", "msg_date": "Tue, 5 May 1998 23:08:32 -0700", "msg_from": "Brett McCormickS <[email protected]>", "msg_from_op": true, "msg_subject": "non-functional update notice unneccesarily" }, { "msg_contents": "> update mempayment set paywho = 'icvproxy' from do_addpayment where dappays\n> eq = payseqid;\n> NOTICE: Non-functional update, only first update is performed\n> UPDATE 31\n> \n> more than one update was indeed performed..\n\nThis is a confusing message, but I think it means that the\nparser/planner/optimizer decided to simplify your over-specified or\nredundant query. Don't know enough details about your tables and query\nto know for sure in this case, but you can see examples of this in the\nregression test suite.\n\n - Tom\n", "msg_date": "Wed, 06 May 1998 14:06:01 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non-functional update notice unneccesarily" }, { "msg_contents": "> \n> \n> update mempayment set paywho = 'icvproxy' from do_addpayment where dappays\n> eq = payseqid;\n> NOTICE: Non-functional update, only first update is performed\n> UPDATE 31\n> \n> more than one update was indeed performed..\n> I beleive this has happened to me before..\n> \n> \n\nVadim has said to remove the message, and I have done so. The\nsurrounding code in heapam.c is unchanged, just the elog(NOTICE) is\ncommented out.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 22:51:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non-functional update notice unneccesarily" }, { "msg_contents": "\nThere are times when the message is appropriate, I believe. But this\nis not one of them. Are all instances of this message gone, or just\nthis one?\n\nOn Mon, 15 June 1998, at 22:51:13, Bruce Momjian wrote:\n\n> > update mempayment set paywho = 'icvproxy' from do_addpayment where dappays\n> > eq = payseqid;\n> > NOTICE: Non-functional update, only first update is performed\n> > UPDATE 31\n> > \n> > more than one update was indeed performed..\n> > I beleive this has happened to me before..\n> > \n> > \n> \n> Vadim has said to remove the message, and I have done so. The\n> surrounding code in heapam.c is unchanged, just the elog(NOTICE) is\n> commented out.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n", "msg_date": "Tue, 16 Jun 1998 13:58:40 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non-functional update notice unneccesarily" }, { "msg_contents": "> \n> \n> There are times when the message is appropriate, I believe. But this\n> is not one of them. 
Are all instances of this message gone, or just\n> this one?\n> \n> On Mon, 15 June 1998, at 22:51:13, Bruce Momjian wrote:\n> \n> > > update mempayment set paywho = 'icvproxy' from do_addpayment where dappays\n> > > eq = payseqid;\n> > > NOTICE: Non-functional update, only first update is performed\n> > > UPDATE 31\n> > > \n> > > more than one update was indeed performed..\n> > > I beleive this has happened to me before..\n> > > \n> > > \n> > \n> > Vadim has said to remove the message, and I have done so. The\n> > surrounding code in heapam.c is unchanged, just the elog(NOTICE) is\n> > commented out.\n\nAll instances are gone.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 17:32:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non-functional update notice unneccesarily" } ]
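For clarity, the statement that triggered the NOTICE is a joined UPDATE; written with qualified column names (table and column names from Brett's example) the intended join is explicit:

    -- update mempayment rows from matching do_addpayment rows
    UPDATE mempayment
       SET paywho = 'icvproxy'
      FROM do_addpayment
     WHERE do_addpayment.dappayseq = mempayment.payseqid;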
[ { "msg_contents": "On a Digital dual PII/32MB RAM/8GB SCSI\n running the same query 45 times I get:\n\nMay 6 09:16:27 digital logger: NOTICE: SIAssignBackendId: discarding tag\n2147483646\nMay 6 09:16:27 digital logger: FATAL 1: Backend cache invalidation\ninitialization failed\n..................\nMay 6 09:16:40 digital logger: FATAL 1: Backend cache invalidation\ninitialization failed\nMay 6 09:17:09 digital PAM_pwdb[397]: (login) session opened for user root\nby (uid=0)\nMay 6 09:17:09 digital PAM_pwdb[397]: ROOT LOGIN ON tty4\nMay 6 09:21:41 digital logger: NOTICE: LockRelease: find xid, table\ncorrupted\nMay 6 09:23:09 digital logger: NOTICE: Message from PostgreSQL backend:\nMay 6 09:23:09 digital logger: ^IThe Postmaster has informed me that some\nother backend died abnormally and possibly corrupted shared memory.\nMay 6 09:23:09 digital logger: ^II have rolled back the current transaction\nand am going to terminate your database system connection and exit.\nMay 6 09:23:09 digital logger: ^IPlease reconnect to the database system\nand repeat your query.\n\nThis kills my sistem.\nIncreasing the memory to 96MB didn't work it out.\nIn real life I expect more than 45 queries simultaneously.\n\nWhat can I do ?\n\nTIA\nClaudiu", "msg_date": "Wed, 6 May 1998 09:33:43 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "FATAL: Backend cache invalidation initialisation failed " } ]
[ { "msg_contents": "\nJust tried this out, and we have a bug here:\n\nsimply not implemented, not a bug.\n\ntemplate1=> create user tester in group pg_user;\nCREATE USER\n\nso \"pg_user\" is supposed to be a new group name (not a good name)\nThe group \"pg_user\" must already exist. But since the \"in group\" clause\nis currently ignored, no error shows up.\n\ntemplate1=> select * from pg_group;\ngroname|grosysid|grolist\n-------+--------+-------\n(0 rows)\n\ntemplate1=> select * from pg_user; \nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 1005|t |t |t |t |********|Sat Jan 31 02:00:00 2037 AST\nscrappy| 10|t |t |t |t |********| \ntester | 1006|f |t |f |t |********| \n(3 rows)\n\ntemplate1=> insert into pg_group values ('test',0,'{scrappy}');\nERROR: pg_atoi: error in \"scrappy\": can't parse \"scrappy\"\ntemplate1=> insert into pg_group values ('test',0,'{10}');\nINSERT 18497 1\n\nyou created a group \"test\" with one user (\"scrappy\") as it's only member. \nThis is currently the only way to do it.\n\ntemplate1=> create user test in group pg_user;\nERROR: defineUser: user \"test\" has already been created\n\nI think this is because of the SQL92 spec. that user and group names have to be\ndistinct. (no user and group with same name)\n\ntemplate1=> create user test in group test; \nERROR: defineUser: user \"test\" has already been created\ntemplate1=> select * from pg_user;\nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 1005|t |t |t |t |********|Sat Jan 31 02:00:00 2037 AST\nscrappy| 10|t |t |t |t |********| \ntester | 1006|f |t |f |t |********| \n(3 rows)\n\nIf I do a different usename ('beater'), it creates fine, but doesn't go \nanywhere as far as pg_group...\n\nIf nobody is working on this area of the code, I'll use it as my personal\nstarting point into it...just let me know...\n\nI think a \"create group\" would be a very valuable contribution. \n(Make role an alias for group, to be SQL92 conformant)\n\nAndreas\n\n\n", "msg_date": "Wed, 6 May 1998 09:48:23 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: [QUESTIONS] groups of users" }, { "msg_contents": "On Wed, 6 May 1998, Andreas Zeugswetter wrote:\n\n> \n> Just tried this out, and we have a bug here:\n> \n> simply not implemented, not a bug.\n\n\tThen should generate a NOTICE to that effect...right now, its\nmisleading unless you go and do a select on pg_group to find that it\nwasn't actually performed...\n\n\tAs it stands now, it is a bug...\n\n> template1=> create user tester in group pg_user;\n> CREATE USER\n> \n> so \"pg_user\" is supposed to be a new group name (not a good name)\n\n\tSorry, just picked the first thing that came into my head :)\n\n> The group \"pg_user\" must already exist. But since the \"in group\" clause\n> is currently ignored, no error shows up.\n\n\tWhy? 
if group doesn't exist do:\n\ninsert into pg_group values ('groname',max(grosysid)+1,'{values}');\n\n\t\n> template1=> select * from pg_user; \n> usename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n> -------+--------+-----------+--------+--------+---------+--------+----------------------------\n> pgsql | 1005|t |t |t |t |********|Sat Jan 31 02:00:00 2037 AST\n> scrappy| 10|t |t |t |t |********| \n> tester | 1006|f |t |f |t |********| \n> (3 rows)\n> \n> template1=> insert into pg_group values ('test',0,'{scrappy}');\n> ERROR: pg_atoi: error in \"scrappy\": can't parse \"scrappy\"\n> template1=> insert into pg_group values ('test',0,'{10}');\n> INSERT 18497 1\n> \n> you created a group \"test\" with one user (\"scrappy\") as it's only member. \n> This is currently the only way to do it.\n\n\tUnfortunately, the above test was done at home, but here it is\nagain:\n\ntemplate1=> select * from pg_group;\ngroname|grosysid|grolist \n-------+--------+----------------\npgsql | 0|{10,1044,65534} \nbanner | 1|{10,65534} \nacctng | 2|{0,99,10} \nsurvey | 3|{10,65534,0,206}\n(4 rows)\n\ntemplate1=> create user someone in group agroup;\nCREATE USER\ntemplate1=> select * from pg_group;\ngroname|grosysid|grolist \n-------+--------+----------------\npgsql | 0|{10,1044,65534} \nbanner | 1|{10,65534} \nacctng | 2|{0,99,10} \nsurvey | 3|{10,65534,0,206}\n(4 rows)\n\ntemplate1=> create user some in group agroup;\nERROR: defineUser: user \"some\" has already been created\ntemplate1=> \n\n\tThere is no group 'some'...it almost looks like its doing a '~*'\nmatch:\n\ntemplate1=> select usename from pg_user;\nusename \n--------\nscrappy \nneil \nnobody \ndarchell\nadrenlin\njulie \nbigtech \nnews \nacctng \nroot \nsalesorg\nsomeone \n(12 rows)\n\n\n\n", "msg_date": "Wed, 6 May 1998 11:37:59 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [QUESTIONS] groups of users" } ]
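One caveat on the sketch above: an aggregate like max() is not legal inside a VALUES list, but the same idea can be written as an INSERT ... SELECT. A hypothetical, runnable version (assumes pg_group already contains at least one row):

    -- allocate the next grosysid and create the group in one statement
    INSERT INTO pg_group (groname, grosysid, grolist)
    SELECT 'agroup', max(grosysid) + 1, '{10}'
      FROM pg_group;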