[
{
"msg_contents": "So Sun has pulled another one on us. Not sure which versions of Solaris\nthis affects, but some have libncurses and libtermcap with overlapping\nsymbols of different sizes. This leads to a bunch of complaints every\ntime 'ld' is run; perhaps it could also lead to busted executables.\n\n(Via a complex chain of events this also causes the AC_HEADER_STDC in\n7.0*'s configure to fail, which is the underlying cause of the regular bug\nreports about something from stdarg.h (va_xxx) undefined.)\n\nTo start with, what do we need libtermcap and libcurses for? Readline\nrequires one or the other, but not both. Anything else?\n\nI'm not going to do anything about this now, but if we get more of these\nit'd be good to be prepared.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 15 Nov 2000 20:30:25 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "termcap and curses"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> To start with, what do we need libtermcap and libcurses for? Readline\n> requires one or the other, but not both. Anything else?\n\nI think that psql once required these. It probably does not anymore\n(except indirectly via readline). There's certainly no reason to be\nlinking them into the backend. Try yanking 'em and see what happens.\n\nOn HPUX, at least, it would be real nice not to include libcurses;\nsome genius thought it would be OK to define a function named\nselect() in libcurses ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Nov 2000 23:28:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: termcap and curses "
}
]
|
[
{
"msg_contents": "Which one of these should we use?\n\nint4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\nDatumGetInt64; it also has DatumGetInt32 but no DatumGetInt4. fmgr has\nPG_GETARG_INT32 et al. Inconsistency everywhere.\n\nThe C standard has things like int32_t, but technically there's no\nguarantee that int32 is really 32 bits, you only know sizeof(int32) == 4.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 15 Nov 2000 20:41:08 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "int4 or int32"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Which one of these should we use?\n> int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\n> DatumGetInt64; it also has DatumGetInt32 but no DatumGetInt4. fmgr has\n> PG_GETARG_INT32 et al. Inconsistency everywhere.\n\nThe original convention was to use int4 etc at the SQL level, int32 etc\nat the C level. However the typedefs int4 etc have to be visible in\nthe include/catalog/pg_*.h headers, and so there's been a certain amount\nof leakage of those typedefs into the C sources.\n\nI think that int32 etc are better choices at the C level because of\nthe well-established precedent for naming integer types after numbers\nof bits in C code. I don't feel any strong urge to go around and\nchange the existing misusages, but if you want to, I won't object.\n\nI also have to plead guilty to having changed all the float-datatype\ncode to use float4 and float8 recently. This was mainly because the\nexisting typedefs for float32 and float64 had a built-in assumption\nthat these types would always be pass-by-reference, and I wanted to\nabstract the code away from that assumption. We can't touch those\ntypedefs for a release or three (else we'll break existing user\nfunctions written in C), so switching to the SQL-level names seemed\nlike the best bet. But it's not real consistent with the integer-type\nnaming conventions :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Nov 2000 23:38:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32 "
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Which one of these should we use?\n> > int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\n> > DatumGetInt64; it also has DatumGetInt32 but no DatumGetInt4. fmgr has\n> > PG_GETARG_INT32 et al. Inconsistency everywhere.\n> \n> The original convention was to use int4 etc at the SQL level, int32 etc\n> at the C level. However the typedefs int4 etc have to be visible in\n> the include/catalog/pg_*.h headers, and so there's been a certain amount\n> of leakage of those typedefs into the C sources.\n> \n> I think that int32 etc are better choices at the C level because of\n> the well-established precedent for naming integer types after numbers\n> of bits in C code. I don't feel any strong urge to go around and\n> change the existing misusages, but if you want to, I won't object.\n> \n> I also have to plead guilty to having changed all the float-datatype\n> code to use float4 and float8 recently. This was mainly because the\n> existing typedefs for float32 and float64 had a built-in assumption\n> that these types would always be pass-by-reference, and I wanted to\n> abstract the code away from that assumption. We can't touch those\n> typedefs for a release or three (else we'll break existing user\n> functions written in C), so switching to the SQL-level names seemed\n> like the best bet. But it's not real consistent with the integer-type\n> naming conventions :-(\n\nTom, I am wondering. If we don't change to int4/int8 internally now,\nwill we ever do it? The function manager change seems like the only\ngood time to do it, if we ever will. Basically, I am asking if we\nshould drop backward C compatibility for 7.1 and bite the bullet on the\nchange?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 01:06:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think that int32 etc are better choices at the C level because of\n>> the well-established precedent for naming integer types after numbers\n>> of bits in C code. I don't feel any strong urge to go around and\n>> change the existing misusages, but if you want to, I won't object.\n\n> Tom, I am wondering. If we don't change to int4/int8 internally now,\n> will we ever do it?\n\nAs I thought I'd just made clear, I'm against standardizing on int4/int8\nat the C level. The average C programmer would think that \"int8\" is\na one-byte type, not an eight-byte type. There's way too much history\nbehind that for us to swim against the tide. Having different naming\nconventions at the C and SQL levels seems a better approach, especially\nsince it will exist to some extent anyway (int != integer, for\ninstance).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 01:14:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32 "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I think that int32 etc are better choices at the C level because of\n> >> the well-established precedent for naming integer types after numbers\n> >> of bits in C code. I don't feel any strong urge to go around and\n> >> change the existing misusages, but if you want to, I won't object.\n> \n> > Tom, I am wondering. If we don't change to int4/int8 internally now,\n> > will we ever do it?\n> \n> As I thought I'd just made clear, I'm against standardizing on int4/int8\n> at the C level. The average C programmer would think that \"int8\" is\n> a one-byte type, not an eight-byte type. There's way too much history\n> behind that for us to swim against the tide. Having different naming\n> conventions at the C and SQL levels seems a better approach, especially\n> since it will exist to some extent anyway (int != integer, for\n> instance).\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 01:16:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32"
},
{
"msg_contents": "There were only a few to fix, so I fixed them.\n\n> Peter Eisentraut <[email protected]> writes:\n> > Which one of these should we use?\n> > int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no\n> > DatumGetInt64; it also has DatumGetInt32 but no DatumGetInt4. fmgr has\n> > PG_GETARG_INT32 et al. Inconsistency everywhere.\n> \n> The original convention was to use int4 etc at the SQL level, int32 etc\n> at the C level. However the typedefs int4 etc have to be visible in\n> the include/catalog/pg_*.h headers, and so there's been a certain amount\n> of leakage of those typedefs into the C sources.\n> \n> I think that int32 etc are better choices at the C level because of\n> the well-established precedent for naming integer types after numbers\n> of bits in C code. I don't feel any strong urge to go around and\n> change the existing misusages, but if you want to, I won't object.\n> \n> I also have to plead guilty to having changed all the float-datatype\n> code to use float4 and float8 recently. This was mainly because the\n> existing typedefs for float32 and float64 had a built-in assumption\n> that these types would always be pass-by-reference, and I wanted to\n> abstract the code away from that assumption. We can't touch those\n> typedefs for a release or three (else we'll break existing user\n> functions written in C), so switching to the SQL-level names seemed\n> like the best bet. But it's not real consistent with the integer-type\n> naming conventions :-(\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? 
src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plperl/blib\n? src/pl/plperl/Makefile\n? src/pl/plperl/pm_to_blib\n? src/pl/plperl/SPI.c\n? src/pl/plperl/plperl.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.116\ndiff -c -r1.116 command.c\n*** src/backend/commands/command.c\t2001/01/08 03:14:58\t1.116\n--- src/backend/commands/command.c\t2001/01/23 01:45:36\n***************\n*** 1446,1452 ****\n {\n \tRelation \tclass_rel;\n \tHeapTuple \ttuple;\n! \tint4 \t\tnewOwnerSysid;\n \tRelation\tidescs[Num_pg_class_indices];\n \n \t/*\n--- 1446,1452 ----\n {\n \tRelation \tclass_rel;\n \tHeapTuple \ttuple;\n! 
\tint32\t\tnewOwnerSysid;\n \tRelation\tidescs[Num_pg_class_indices];\n \n \t/*\nIndex: src/backend/commands/comment.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/comment.c,v\nretrieving revision 1.24\ndiff -c -r1.24 comment.c\n*** src/backend/commands/comment.c\t2000/11/16 22:30:18\t1.24\n--- src/backend/commands/comment.c\t2001/01/23 01:45:36\n***************\n*** 394,400 ****\n \tHeapScanDesc scan;\n \tOid\t\t\toid;\n \tbool\t\tsuperuser;\n! \tint4\t\tdba;\n \tOid\t\tuserid;\n \n \t/*** First find the tuple in pg_database for the database ***/\n--- 394,400 ----\n \tHeapScanDesc scan;\n \tOid\t\t\toid;\n \tbool\t\tsuperuser;\n! \tint32\t\tdba;\n \tOid\t\tuserid;\n \n \t/*** First find the tuple in pg_database for the database ***/\nIndex: src/include/commands/sequence.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/commands/sequence.h,v\nretrieving revision 1.13\ndiff -c -r1.13 sequence.h\n*** src/include/commands/sequence.h\t2000/12/28 13:00:28\t1.13\n--- src/include/commands/sequence.h\t2001/01/23 01:45:47\n***************\n*** 15,26 ****\n typedef struct FormData_pg_sequence\n {\n \tNameData\tsequence_name;\n! \tint4\t\tlast_value;\n! \tint4\t\tincrement_by;\n! \tint4\t\tmax_value;\n! \tint4\t\tmin_value;\n! \tint4\t\tcache_value;\n! \tint4\t\tlog_cnt;\n \tchar\t\tis_cycled;\n \tchar\t\tis_called;\n } FormData_pg_sequence;\n--- 15,26 ----\n typedef struct FormData_pg_sequence\n {\n \tNameData\tsequence_name;\n! \tint32\t\tlast_value;\n! \tint32\t\tincrement_by;\n! \tint32\t\tmax_value;\n! \tint32\t\tmin_value;\n! \tint32\t\tcache_value;\n! 
\tint32\t\tlog_cnt;\n \tchar\t\tis_cycled;\n \tchar\t\tis_called;\n } FormData_pg_sequence;\nIndex: src/include/utils/date.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/date.h,v\nretrieving revision 1.7\ndiff -c -r1.7 date.h\n*** src/include/utils/date.h\t2000/12/03 14:51:11\t1.7\n--- src/include/utils/date.h\t2001/01/23 01:45:48\n***************\n*** 25,31 ****\n {\n \tdouble\t\ttime;\t\t\t/* all time units other than months and\n \t\t\t\t\t\t\t\t * years */\n! \tint4\t\tzone;\t\t\t/* numeric time zone, in seconds */\n } TimeTzADT;\n \n /*\n--- 25,31 ----\n {\n \tdouble\t\ttime;\t\t\t/* all time units other than months and\n \t\t\t\t\t\t\t\t * years */\n! \tint\t\t\tzone;\t\t\t/* numeric time zone, in seconds */\n } TimeTzADT;\n \n /*\nIndex: src/include/utils/timestamp.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/timestamp.h,v\nretrieving revision 1.11\ndiff -c -r1.11 timestamp.h\n*** src/include/utils/timestamp.h\t2000/11/06 16:05:25\t1.11\n--- src/include/utils/timestamp.h\t2001/01/23 01:45:48\n***************\n*** 36,42 ****\n typedef struct\n {\n \tdouble\t\ttime;\t/* all time units other than months and years */\n! \tint4\t\tmonth;\t/* months and years, after time for alignment */\n } Interval;\n \n \n--- 36,42 ----\n typedef struct\n {\n \tdouble\t\ttime;\t/* all time units other than months and years */\n! \tint\t\tmonth;\t/* months and years, after time for alignment */\n } Interval;",
"msg_date": "Mon, 22 Jan 2001 20:48:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> There were only a few to fix, so I fixed them.\n\nI don't think it's a good idea to write unspecified-width \"int\" in\nthe struct decls for Interval and friends. If the compiler decides\nsomeday that that's int8, things break because the physical size of\nInterval etc. is hardwired over in pg_type.h. Use \"int32\", or\nperhaps revert these to int4.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Jan 2001 21:00:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32 "
},
{
"msg_contents": "Done.\n\n> Bruce Momjian <[email protected]> writes:\n> > There were only a few to fix, so I fixed them.\n> \n> I don't think it's a good idea to write unspecified-width \"int\" in\n> the struct decls for Interval and friends. If the compiler decides\n> someday that that's int8, things break because the physical size of\n> Interval etc. is hardwired over in pg_type.h. Use \"int32\", or\n> perhaps revert these to int4.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.description\n? src/backend/catalog/global.bki\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plperl/blib\n? src/pl/plperl/Makefile\n? 
src/pl/plperl/pm_to_blib\n? src/pl/plperl/SPI.c\n? src/pl/plperl/plperl.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/include/utils/date.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/date.h,v\nretrieving revision 1.8\ndiff -c -r1.8 date.h\n*** src/include/utils/date.h\t2001/01/23 01:48:17\t1.8\n--- src/include/utils/date.h\t2001/01/23 02:23:55\n***************\n*** 25,31 ****\n {\n \tdouble\t\ttime;\t\t\t/* all time units other than months and\n \t\t\t\t\t\t\t\t * years */\n! \tint\t\t\tzone;\t\t\t/* numeric time zone, in seconds */\n } TimeTzADT;\n \n /*\n--- 25,31 ----\n {\n \tdouble\t\ttime;\t\t\t/* all time units other than months and\n \t\t\t\t\t\t\t\t * years */\n! \tint32\t\tzone;\t\t\t/* numeric time zone, in seconds */\n } TimeTzADT;\n \n /*\nIndex: src/include/utils/timestamp.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/timestamp.h,v\nretrieving revision 1.12\ndiff -c -r1.12 timestamp.h\n*** src/include/utils/timestamp.h\t2001/01/23 01:48:17\t1.12\n--- src/include/utils/timestamp.h\t2001/01/23 02:23:55\n***************\n*** 36,42 ****\n typedef struct\n {\n \tdouble\t\ttime;\t/* all time units other than months and years */\n! \tint\t\tmonth;\t/* months and years, after time for alignment */\n } Interval;\n \n \n--- 36,42 ----\n typedef struct\n {\n \tdouble\t\ttime;\t/* all time units other than months and years */\n! \tint32\t\tmonth;\t/* months and years, after time for alignment */\n } Interval;",
"msg_date": "Mon, 22 Jan 2001 21:24:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 or int32"
}
]
|
[
{
"msg_contents": "\nAWARD WINNING - SPECIAL NOTICE!!!!\n\nPostgreSQL has earned the Linux Journal Third Annual Editor's Choice\nAwards recognition as the Best Database, to be presented at\nCOMDEX/LinuxBusinessExpo in Las Vegas this afternoon !\n\nI have just learned of this award and want to make sure that *every*\nmember of the Global Development Project has an opportunity to participate\nin this event. If any of you are at the Show and can meet me at booth P761\nat the Sands Convention Centre, there will be a photo-op at some point\nthis afternoon.\n\nAs you know, the last Open Source Development Network event in SF was not\nsent out to all of our members, many of whom learned about it long after\nthe event itself. We want to make sure that this isn't something that\ncontinues, the community is too important to leave anyone out of these\nexciting opportunities.\n\nFor those who may wonder why we are at Comdex without having invited you\nto meet us, we are here in our 'business' hat, helping one of our key\nOracle/PostgreSQL partners with their booth, and we were concerned that\nsome in the community would feel our invitation had some conflicts of\ninterest.\n\nSO .... Get over here ASAP so that the whole (well some) of the community\ncan be included in the Photo Op that will be going out to celebrate the\nexceptional contributions that have been made to the *BEST DATABASE* in\nthe Linux world... AGAIN!!!\n\nCongratulations all around from those of us that made to this show...!!!\n\nThomas G. Lockhart\nMarc G. Fournier\nVadim Mikheev\n\nFor a copy of the announcement, check:\nhttp://www.linuxjournal.com\n\n\n",
"msg_date": "Wed, 15 Nov 2000 14:46:01 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AWARD WINNING - SPECIAL NOTICE !! "
}
]
|
[
{
"msg_contents": "One question:\nwill Postgres 7.1 be able to do offline backups?\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: [email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 15 Nov 2000 18:44:52 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL and offline backups"
}
]
|
[
{
"msg_contents": "\n> > Earlier, Vadim was talking about arranging to share fsyncs of the WAL\n> > log file across transactions (after writing your commit record to the\n> > log, sleep a few milliseconds to see if anyone else fsyncs before you\n> > do; if not, issue the fsync yourself). That would offer less-than-\n> > one-fsync-per-transaction performance without giving up any \n> > guarantees.\n> > If people feel a compulsion to have a tunable parameter, let 'em tune\n> > the length of the pre-fsync sleep ...\n> \n> Already implemented (without ability to tune this parameter - \n> xact.c:CommitDelay, - yet). Currently CommitDelay is 5, so\n> backend sleeps 1/200 sec before checking/forcing log fsync.\n\nShould definitely make that tuneable (per installation is imho sufficient), \nno use in waiting if the dba knows there is only very little concurrency. \nIIRC DB/2 defaults to not using this \"commit pooling\".\n\nAndreas\n",
"msg_date": "Thu, 16 Nov 2000 10:08:58 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: RE: [COMMITTERS] pgsql/src/backend/access/transam (\n\txact.c xlog.c)"
}
]
|
[
{
"msg_contents": "\n> > To answer another misconception that I saw in this thread:\n> > \n> > : The old language names \"internal\" and \"C\" will continue to refer to\n> > : functions with the old calling convention. We should deprecate\n> > : old-style functions because of their portability problems, but the\n> > : support for them will only be one small function handler routine,\n> > : so we can leave them in place for as long as necessary.\n> \n> My question is can we drop newC and use just plain C in 7.2 or 7.3?\n\nHas anybody had time to look at how this is done in DB/2, Oracle ? Philip ?\n\nIn Informix there is an additional keyword \"parameter style\".\n\nThus you have:\ncreate function foo (a int, b int) return{s|ing} int\nexternal name '/path/libmod.so(symbol)' language C\n[parameter style informix] [not variant];\n\nWe could have \"parameter style postgresql\" and map that to \nsome arbitrary string that would not be something the user sees.\n\nAs you see this is really very close to what we have or want\nand I am really unhappy that there has been no effort at all \nto look at what others do. Not that we want to copy some stupidity,\nbut if it is sane .... These are also the companies that \nhave the most influence on future ANSI specs, and thus if we keep \nclose we will have a better position to stay conformant.\n\nActually my proposal would be to not advertise \"newC\" in 7.1 and do\nsome more research in that area until we have a solid and maybe compatible\ninterface that also makes the missing features possible \n(multiple columns and rows for return, enter the function more than once\nto retrieve only part of the result if it consists of many rows).\n\nAndreas\n",
"msg_date": "Thu, 16 Nov 2000 10:39:08 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Coping with 'C' vs 'newC' function language names"
},
{
"msg_contents": "> Actually my proposal would be to not advertise \"newC\" in 7.1 and do\n> some more research in that area until we have a solid and maybe compatible\n> interface that also makes the missing features possible \n> (multiple columns and rows for return, enter the function more than once\n> to retrieve only part of the result if it consists of many rows).\n\nMy problem with newC is that I think it is going to cause confusion among\npeople who create new-style functions and call the language \"C\". I\nrecommend making our current code \"C\" style, and calling pre-7.1\nfunctions \"C70\"; that way, we can still enable old functions to work,\nthey just have to use \"C70\" to make them work, and all our new code is\nthe clean \"C\" type.\n\nComments?\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  [email protected]               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 12:03:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Coping with 'C' vs 'newC' function language names"
}
]
|
[
{
"msg_contents": "\n> > My solution would be to use INT_MIN for all ports, which has the advantage \n> > that the above problematic comparison can be converted to !=,\n> > since no integer will be smaller than INT_MIN.\n> \n> I agree. When I was looking at this code this morning, I was wondering\n> what INT_MIN was supposed to represent anyway, if NOSTART_ABSTIME is\n> INT_MIN + 1. I think someone messed this up between 4.2 and Postgres95.\n\nHas there been any consensus yet ? If yes, could you apply my patch please ?\nOr should I ask Bruce, for his \"faster than his shadow\" patch services ?\n\nThanks\nAndreas\n",
"msg_date": "Thu, 16 Nov 2000 11:13:24 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Could turn on -O2 in AIX "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> I agree. When I was looking at this code this morning, I was wondering\n>> what INT_MIN was supposed to represent anyway, if NOSTART_ABSTIME is\n>> INT_MIN + 1. I think someone messed this up between 4.2 and Postgres95.\n\n> Has there been any consensus yet ? If yes, could you apply my patch please ?\n\nI have it on my to-do list, but I was waiting to see if Thomas had an\nobjection (since he knows more about the datetime types than the rest\nof us). He's been at Comdex the last few days, which probably explains\nthe delay.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 10:28:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Could turn on -O2 in AIX "
}
]
|
[
{
"msg_contents": "\nSituation:\n 7.1 has a new backend interface for loadable modules. (but it\n supports old 7.0 interface too)\n\nNow the problem:\n How do you tell the backend with what interface to use for\n particular function? At the moment 7.1-cvs uses 'newC' instead\n of 'C' in the LANGUAGE part.\n\nBut this not good:\n\n1) the 'newC' will be quite silly after couple of years, when it\n will be the standard.\n\n2) there is another change in the horizon, which would be the\n automatic detection of function parameters, or rather the\n shared object should provide info about it.\n\n3) It has nothing to do with 'C'. The loadable modules can be\n programmed in any language, as long it supports C calling\n conventions.\n\n4) And IMHO \"LANGUAGE 'C'\" is a hack, LANGUAGE construct should be\n used only for actual definitions. Now should we extend one hack\n with another hack?\n\n\nRequirement:\n 7.1 should understand the 7.0 syntax, 7.2 should understand 7.1\n and 7.0 syntax. That means the dump/restore should work\n between versions. Whether 7.2 has the 'oldC' handler is another\n matter, but it should not load it with wrong defaults.\n\nI propose new command:\n\n CREATE FUNCTION name\n\t( [ftype [, ...] ] ) RETURNS rtype\n\tFROM [ LIBRARY ] obj_file AS link_sym\n\t[ WITH [ TYPE = ( 0 | 1 | ... ) ]\n\t [[,] ATTRIBUTE = ( attr [, ...] ) ] ]\n\n This mostly like the current \"CREATE FUNCTION .. LANGUAGE 'C'\".\n Main difference is that the TYPE=0 means the old 'C' interface\n and TYPE=1 means 'newC' interface. Default is 1. (As said,\n 7.1 supports the old LANGUAGE 'C' variant, so I think it is\n not needed the default to be 0.)\n\n\n CREATE FUNCTION ... AS defn ... LANGUAGE 'C' ..\n\n means 7.0 oldC/Informix interface. No new languages will\n come in this way. (I mean those where the defn is actually\n objname, symbol pair.)\n \n This only is for compatibility. The \".. 
LANGUAGE ..\" should\n be only used for the actual definitions.\n\nAlternative:\n\n newC will be created as:\n\n CREATE FUNCTION .. LANGUAGE 'C' WITH (pg_params)\n\n default is old_params, 7.1 pg_dump dumps newC with \"(pg_params)\".\n But as I said this is a hack.\n\n\nNow some future ideas. I really think that something like that\nshould come into PostgreSQL eventually.\n\n\n LOAD MODULE name FROM [LIBRARY] foomodule.so\n\n The lib has a struct (e.g.) pg_module_<name>_info which defines\n init/remove functions, functions, operators and types. PostgreSQL\n registers module somehow, and when the module gets DROPped then\n PostgreSQL calls its remove funtions and removes all stuff it has\n itself registered.\n\n LOAD FUNCTION name FROM [LIBRARY] foo.so\n\n This means that in the object file there is defined struct\n (e.g.) pg_function_<name>_info. (Probably by help of macros).\n\n { I am not sure if the following is needed, better they go through\n the LOAD MODULE? }\n\n LOAD TYPE name FROM [LIBRARY] foo.so\n\n Module has struct (e.g.) pg_type_<name>_info.\n\n LOAD OPERATOR name FROM [LIBRARY] foo.so AS obj_name\n\n Module has struct (e.g.) pg_operator_<obj_name>_info\n\nRandom notes:\n\n* why struct not some init funtion? ->\n\t* it will be easier to function/module programmer.\n\t* avoids duplicate code\n\t* makes possible different interfaces.\n\t* main backend can detect incompatible interface\n\n* I am not knowledgeable in dump/restore problems. Someone\n who is should comment on this what features are else needed.\n\n* The *.so finding should accept some search paths (LD_LIBRARY_PATH?)\n (Does it now?)\n\n* In future maybe some currently 'core' parts can be separated into\n 'core modules' e.g. all geometric stuff. 
So they can be\n loaded only as needed.\n\n* There was a previous discussion on modules:\n\n Mark Hollomon's idea:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-06/msg00959.html\n\n Jan Wieck objections:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-06/msg00983.html\n\n IMHO the objections are not very strong but sure the modules\n interface needs lot of work.\n\n\n\n-- \nmarko\n\n",
"msg_date": "Thu, 16 Nov 2000 17:54:36 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "[rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "Marko Kreen <[email protected]> writes:\n> This mostly like the current \"CREATE FUNCTION .. LANGUAGE 'C'\".\n> Main difference is that the TYPE=0 means the old 'C' interface\n> and TYPE=1 means 'newC' interface. Default is 1.\n\nThis improves matters how, exactly? As far as I can see, this just\nreplaces a readable construct with magic numbers, for a net loss in\nreadability and no change in functionality.\n\nI don't have any great love for the names 'C' and 'newC' either, but\nunless we are willing to break backward-compatibility of function\ndeclarations in 7.1, I think we are stuck with those names or ones\nisomorphic to them.\n\nIn the long run, it seems that it'd be a good idea to embed function\ndeclaration info straight into a loadable module, per Philip's idea\nof a special function or your idea of a table. However that does not\nchange the issue of names for function-call conventions in the least,\nit merely avoids the problem of keeping a script of SQL declarations\nin sync with the library file. (One brain-dead-simple definition of\nthe info function or table is that it returns/contains a text string\nthat's exactly the SQL commands needed to create the function\ndefinitions, except we could allow them to omit the pathname\nof the library file. We can probably do better than that, but as\nfar as raw functionality goes, that will accomplish everything that\na fancier-looking API would do.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 11:20:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "> Marko Kreen <[email protected]> writes:\n> > This mostly like the current \"CREATE FUNCTION .. LANGUAGE 'C'\".\n> > Main difference is that the TYPE=0 means the old 'C' interface\n> > and TYPE=1 means 'newC' interface. Default is 1.\n> \n> This improves matters how, exactly? As far as I can see, this just\n> replaces a readable construct with magic numbers, for a net loss in\n> readability and no change in functionality.\n> \n> I don't have any great love for the names 'C' and 'newC' either, but\n> unless we are willing to break backward-compatibility of function\n> declarations in 7.1, I think we are stuck with those names or ones\n> isomorphic to them.\n\nI am recommending C70 for old functions, and C for current-style\nfunctions. That way, we can implement C71 if we want for backward\ncompatibility. I think making everyone use newC for the current style\nis going to be confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 12:05:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "On Thu, Nov 16, 2000 at 11:20:58AM -0500, Tom Lane wrote:\n> Marko Kreen <[email protected]> writes:\n> > This mostly like the current \"CREATE FUNCTION .. LANGUAGE 'C'\".\n> > Main difference is that the TYPE=0 means the old 'C' interface\n> > and TYPE=1 means 'newC' interface. Default is 1.\n> \n> This improves matters how, exactly? As far as I can see, this just\n> replaces a readable construct with magic numbers, for a net loss in\n> readability and no change in functionality.\n\nHmm. I think I have to agree. The thing is I did all-powerful\nCREATE FUNCTION, then I noticed that the module-provided-info\nstuff is separate functionality and split them off into LOAD *\nfunctions. So I did not noticed that the remaining CREATE\nFUNCTION has not much point anymore... :)\n\n> I don't have any great love for the names 'C' and 'newC' either, but\n> unless we are willing to break backward-compatibility of function\n> declarations in 7.1, I think we are stuck with those names or ones\n> isomorphic to them.\n\nOk. I only want to note that the \"newC\" interface is \"good\" in\nthe sense that it probably stays around a long time. It would\nbe nice if the name seems reasonable after a couple of years\ntoo. But I better shut up on this issue now.\n\n> In the long run, it seems that it'd be a good idea to embed function\n> declaration info straight into a loadable module, per Philip's idea\n> of a special function or your idea of a table. However that does not\n> change the issue of names for function-call conventions in the least,\n\nYes. \n\n> it merely avoids the problem of keeping a script of SQL declarations\n> in sync with the library file. (One brain-dead-simple definition of\n> the info function or table is that it returns/contains a text string\n> that's exactly the SQL commands needed to create the function\n> definitions, except we could allow them to omit the pathname\n> of the library file. 
We can probably do better than that, but as\n> far as raw functionality goes, that will accomplish everything that\n> a fancier-looking API would do.)\n\nEmbedded stuff makes the handling less error-prone and\ncomfortable. Makefiles too dont bring any new functionality\nto the program being compiled... :)\n\nBut I think that \"LOAD MODULE\" starts bringing new functionality\nbut what exactly I do not know yet...\n\n-- \nmarko\n\n",
"msg_date": "Thu, 16 Nov 2000 19:24:38 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "On Thu, Nov 16, 2000 at 11:20:58AM -0500, Tom Lane wrote:\n> I don't have any great love for the names 'C' and 'newC' either, but\n> unless we are willing to break backward-compatibility of function\n> declarations in 7.1, I think we are stuck with those names or ones\n> isomorphic to them.\n> \n> In the long run, it seems that it'd be a good idea to embed function\n> declaration info straight into a loadable module, per Philip's idea\n> of a special function or your idea of a table. \n\nUntil somebody implements Philip's idea, a much simpler approach could \nobviate the whole issue:\n\n - Keep the name 'C' for both old-style and new-style module declarations.\n - Require that new-style modules define a distinguished symbol, such as \n \"int __postgresql_call_7_1;\".\n\nThe module loader can look for symbols that start with \"__postgresql_call\"\nand adjust automatically, or report an error. This \n\n - Breaks no backward compatibility, \n - Defines a clear method for handling future changes, to prevent this \n problem from arising again, \n - Creates no particular inconvenience for writers of modules, and \n - Might be very easy to implement.\n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 16 Nov 2000 12:59:39 -0800",
"msg_from": "Nathan Myers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "Nathan Myers <[email protected]> writes:\n> - Keep the name 'C' for both old-style and new-style module declarations.\n> - Require that new-style modules define a distinguished symbol, such as \n> \"int __postgresql_call_7_1;\".\n\nI was thinking along the same lines myself. I'd want to do it on a\nper-function basis, though, rather than assuming that all functions in\na module must use the same interface.\n\nI'd be inclined to define a macro that creates the signal object,\nso that you'd write something like\n\nPG_FUNCTION_API_V2(foo);\n\nDatum\nfoo(PG_FUNCTION_ARGS)\n{\n\t...\n}\n\nto create a dynamically loadable new-style function.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 20:06:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "At 12:59 16/11/00 -0800, Nathan Myers wrote:\n>\n> - Keep the name 'C' for both old-style and new-style module declarations.\n> - Require that new-style modules define a distinguished symbol, such as \n> \"int __postgresql_call_7_1;\".\n>\n>The module loader can look for symbols that start with \"__postgresql_call\"\n>and adjust automatically, or report an error. This \n\nCute idea. *If* people like the idea of an info function of some kind then\nall we have to do is agree on it's name (not even the parameters, I think),\nthen we can get the 7.1 function manager to look for it. This is definitely\nthe simplest implementation of a brain-dead info function!\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 12:52:45 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "At 20:06 16/11/00 -0500, Tom Lane wrote:\n>Nathan Myers <[email protected]> writes:\n>> - Keep the name 'C' for both old-style and new-style module declarations.\n>> - Require that new-style modules define a distinguished symbol, such as \n>> \"int __postgresql_call_7_1;\".\n>\n>I was thinking along the same lines myself. I'd want to do it on a\n>per-function basis, though, rather than assuming that all functions in\n>a module must use the same interface.\n>\n>I'd be inclined to define a macro that creates the signal object,\n>so that you'd write something like\n>\n>PG_FUNCTION_API_V2(foo);\n\nThis sounds perfect. Would you generate an 'info' function to return a list\nof entry points, or just use dummy object names? The info function has the\nadvantage that it can return version information as well, and a clutter of\ndummy entry points might look a little messy.\n\nI had been thinking along the lines of a generic extensible interface using\na known function, but by using macros you can even hide that layer, making\nwhatever we do now completely compatible. \n\nWhat's also cute, is if the underlying method is designed carefully you may\nbe able to get one library working with multiple interfaces, especially if\nthe stuff that comes with PG_FUNCTION_ARGS can also provide version\ninformation.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 13:00:20 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> I'd be inclined to define a macro that creates the signal object,\n>> so that you'd write something like\n>> \n>> PG_FUNCTION_API_V2(foo);\n\n> This sounds perfect. Would you generate an 'info' function to return a list\n> of entry points, or just use dummy object names? The info function has the\n> advantage that it can return version information as well, and a clutter of\n> dummy entry points might look a little messy.\n\nWhat I was thinking was that the macro would expand either to\n\n\tint pg_api_foo = 2;\n\nor\n\n\tint pg_api_foo(void) { return 2; }\n\nThe former would be more compact, presumably, but the latter would\nprobably be more portable --- we already have to have the ability to\nfind and call functions in a dynamic-link library, whereas I'm not so\nsure about the ability to find and read values of global variables.\n\nIn either case, the system would be able to extract an integer version\nvalue associated with each function defined by the library. (If we\ndon't find the version-defining symbol, we assume old-style C API.)\nMeaning of values other than \"2\" reserved for future definition.\n\nI like this way better than a central info function for the whole\nlibrary, because you'd write the version declaration right with the\nfunction itself. A central info function would be more of a pain to\nmaintain, I think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 21:08:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "On Thu, Nov 16, 2000 at 08:06:30PM -0500, Tom Lane wrote:\n> Nathan Myers <[email protected]> writes:\n> > - Keep the name 'C' for both old-style and new-style module declarations.\n> > - Require that new-style modules define a distinguished symbol, such as \n> > \"int __postgresql_call_7_1;\".\n> \n> I was thinking along the same lines myself. I'd want to do it on a\n> per-function basis, though, rather than assuming that all functions in\n> a module must use the same interface.\n> \n> I'd be inclined to define a macro that creates the signal object,\n> so that you'd write something like\n> \n> PG_FUNCTION_API_V2(foo);\n> \n> Datum\n> foo(PG_FUNCTION_ARGS)\n> {\n> \t...\n> }\n> \n> to create a dynamically loadable new-style function.\n> \n> Comments?\n\nI like it :)\n\ne.g.\n\n\tstruct pg_function_info_header {\n\t\tint api_ver;\n\t};\n\nand \n\n\tPG_FUNCTION_TAG(foo);\n\nexpands to\n\n\tstruct pg_function_info_header __pg_function_foo_info = { 0 };\n\nso when we sometimes get around to add more fields to it\nwe increase the api_ver. For more info also the macros will\nbe different. This _TAG means \"no info is given it is only\ntagged as newC\".\n\nComments?\n\n-- \nmarko\n\n",
"msg_date": "Fri, 17 Nov 2000 04:11:10 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more)"
},
{
"msg_contents": "At 21:08 16/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> I'd be inclined to define a macro that creates the signal object,\n>>> so that you'd write something like\n>>> \n>>> PG_FUNCTION_API_V2(foo);\n...\n>\n>What I was thinking was that the macro would expand either to\n>\n>\tint pg_api_foo = 2;\n>\n>or\n>\n>\tint pg_api_foo(void) { return 2; }\n>\n\nFor possible future compatibility, can you also do something like:\n\n PG_FUNCTION_API_V2;\n PG_FUNCTION_V2(foo);\n PG_FUNCTION_V2(bar);\n ...\n\nWhere\n\nPG_FUNCTION_API_V2 expands to:\n\n int pg_fmgr_api_version(void) { return 2; }\n\nAnd PG_FUNCTION_V2(foo) either does nothing or expands to:\n\n int pg_fmgr_api2_version_foo(void) { return 2; }\n\nThe first call will tell PG that (because it is version 2), it should\nexpect the next set of entry points. Since we will not be allowing mixed\nversions in this version of the API (I think), we could really have the\nsubsequent macros do nothing.\n\nThis way we make it more independant of future API versions by not\nrequiring a specific special entry point for each function. Then can do\nthings like use the same entry point for multiple functions, possibly act\nas stubs pointing to other libraries (by loading & returning another\nlibrary entry point) etc etc. \n\n\n>I like this way better than a central info function for the whole\n>library, because you'd write the version declaration right with the\n>function itself. A central info function would be more of a pain to\n>maintain, I think.\n\nIn the plans for PG_FUNCTION_API_V3(?), I actually have the info function\nreturning a struct with values for 'iscacheable', 'isstrict', and the\nactual entry point to use. This reduces duplication since the programmer is\nthe best person to know these attributes. 
But using macros is fine:\n\nPG_FUNCTION_API_V3 expand to:\n\n typedef struct {bool iscacheable, bool isstrict, ptr entrypoint }\npg_fmgr_info;\n int pg_fmgr_api_version(void) { return 3; }\n\nand\n\n PG_FUNCTION_V3(foo, false, true, foo_entry_point)\n\nexpand to:\n\n void pg_fmgr_api_version_foo(fmgr_info *i) \n { i->iscacheable=false; \n i->isstrict=true;\n i->entrypoint=foo_entry_point; }\n\nwill work as well.\n\nPerhaps in PG_FUNCTION_API_V4 we can implement some kind of interface for\nlisting supported entry points for module loading...\n\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 13:36:02 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> For possible future compatibility, can you also do something like:\n> PG_FUNCTION_API_V2;\n> PG_FUNCTION_V2(foo);\n> PG_FUNCTION_V2(bar);\n> ...\n\n> Where\n> PG_FUNCTION_API_V2 expands to:\n> int pg_fmgr_api_version(void) { return 2; }\n> And PG_FUNCTION_V2(foo) either does nothing or expands to:\n> int pg_fmgr_api2_version_foo(void) { return 2; }\n\nI'm not following the point here. Why two different macros? It doesn't\nlook to me like the first one does anything. The per-routine macro\ncalls should be capable of doing everything that needs to be done.\n\nPer your comments and Marko's about future extension, it seems that\na single-word result might start to get a little cramped before long.\nI like Marko's design:\n\n\tstruct pg_function_info_header {\n\t\tint api_ver;\n\t};\n\nThe api_ver field is sufficient for now, but for values > 2 there\nmight be additional fields defined.\n\nWe can either have this struct be an initialized global variable,\nor have a called function that returns a pointer to it, depending on\nthe question of which way seems easier to implement/more portable.\nThe macro can hide the details of how it's done.\n\n> The first call will tell PG that (because it is version 2), it should\n> expect the next set of entry points. Since we will not be allowing mixed\n> versions in this version of the API (I think),\n\nYes, we will, because there is a case in the regression tests that\nwill break anything that doesn't cope with mixed versions ;-).\nI deliberately left some of the routines in regress.c old-style ...\n\n> This way we make it more independant of future API versions by not\n> requiring a specific special entry point for each function. Then can do\n> things like use the same entry point for multiple functions, possibly act\n> as stubs pointing to other libraries (by loading & returning another\n> library entry point) etc etc. \n\nHmm. 
This stub idea might be a sufficient reason to say that we want to\ndo a function call rather than look for a global variable. However,\nI am unpersuaded by the idea that a one-liner function per useful entry\npoint is an intolerable amount of overhead. Let's keep it simple here.\n\n> PG_FUNCTION_V3(foo, false, true, foo_entry_point)\n> expand to:\n> void pg_fmgr_api_version_foo(fmgr_info *i) \n> { i->iscacheable=false; \n> i->isstrict=true;\n> i->entrypoint=foo_entry_point; }\n\nI prefer something like\n\n\tconst inforec * pg_api_foo(void)\n\t{\n\t\tstatic inforec foo_info = { ... };\n\t\treturn &foo_info;\n\t}\n\nsince this avoids prejudging anything. (In your example, how does\nthe version function *know* how big the record it's been handed is?\nLoading a version-N library into a Postgres version < N might bomb\nhard because the info function scribbles on fields that aren't there.\nHanding back a pointer to something that the main code then treats\nas read-only seems much safer.) The above implementation with a\npreset static inforec is of course only one way it could be done\nwithout breaking the ABI for the info function...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 22:10:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "At 22:10 16/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> For possible future compatibility, can you also do something like:\n>> PG_FUNCTION_API_V2;\n>> PG_FUNCTION_V2(foo);\n>> PG_FUNCTION_V2(bar);\n>> ...\n>\n>> Where\n>> PG_FUNCTION_API_V2 expands to:\n>> int pg_fmgr_api_version(void) { return 2; }\n>> And PG_FUNCTION_V2(foo) either does nothing or expands to:\n>> int pg_fmgr_api2_version_foo(void) { return 2; }\n>\n>I'm not following the point here. Why two different macros? It doesn't\n>look to me like the first one does anything. The per-routine macro\n>calls should be capable of doing everything that needs to be done.\n\nI think the PG_FUNCTION_API_V2 macros is very important because it will\ntell the function manager which 'protocol' to use. \n\nIn my view, the individual stub entry points for each function are part of\nthe protocol and should not be assumed. Returning a struct would be ideal\nsince it allows more flexibility in the furture.\n\nSo long as the version is always in the first bytes of the struct, we are\ncovered for compatibility.\n\n\n>> The first call will tell PG that (because it is version 2), it should\n>> expect the next set of entry points. Since we will not be allowing mixed\n>> versions in this version of the API (I think),\n>\n>Yes, we will, because there is a case in the regression tests that\n>will break anything that doesn't cope with mixed versions ;-).\n>I deliberately left some of the routines in regress.c old-style ...\n\nI'd still argue for a PG_FUNCTION_API_V2 macro for the reasons above. What\nthe fmgrs needs to do is:\n\n- call pg_fmgr_api_version() to get the protocol version\n- when it wants to call a function 'foo' see if there is a 'pg_api_foo'\nentry point, and if so, use the new interface, o/wise use the old one. 
No\nneed to even call it.\n\nFuture versions will call pg_fmgr_api_version() and possibly pass\nappropriate structs to info-functions or whatever.\n\n\n>\n>Hmm. This stub idea might be a sufficient reason to say that we want to\n>do a function call rather than look for a global variable.\n\nI agree.\n\n\n>I am unpersuaded by the idea that a one-liner function per useful entry\n>point is an intolerable amount of overhead. Let's keep it simple here.\n\nWasn't worried about the overhead at all, just the offense to my aesthetics\n8-).\n\n\n>> PG_FUNCTION_V3(foo, false, true, foo_entry_point)\n>> expand to:\n>> void pg_fmgr_api_version_foo(fmgr_info *i) \n>> { i->iscacheable=false; \n>i-> isstrict=true;\n>i-> entrypoint=foo_entry_point; }\n>\n>I prefer something like\n>\n>\tconst inforec * pg_api_foo(void)\n>\t{\n>\t\tstatic inforec foo_info = { ... };\n>\t\treturn &foo_info;\n>\t}\n>\n>since this avoids prejudging anything. (In your example, how does\n>the version function *know* how big the record it's been handed is?\n\nBecuase the function manager called pg_fmgr_api_version and made sure it\npassed the right structure.\n\n\n>Loading a version-N library into a Postgres version < N might bomb\n>hard because the info function scribbles on fields that aren't there.\n\nIf it calls pg_fmgr_api_version and can't get an acceptable version, it\nwould bomb nicely. Maybe in future versions we could even support some kind\nof protocol negotiation, but that seems less than useful at this stage.\n\n\n>Handing back a pointer to something that the main code then treats\n>as read-only seems much safer.)\n\nThis is fine too; but in all cases they have to be sure that they agree on\nwhat is being handed back. 
So long at the blocks always have the version\nnumber on the header it is fine.\n\n>> Perhaps in PG_FUNCTION_API_V4 we can implement some kind of interface for\n>> listing supported entry points for module loading...\n>\n>I think that should be seen as a separate feature, rather than\n>mixing it up with support information about any individual function.\n\nFine.\n\n\nMy only real issue with all of this is that we need to separate the\nprotocol selection from the the data exchange.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 14:49:56 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> So long as the version is always in the first bytes of the struct, we are\n> covered for compatibility.\n\nRight ...\n\n> I'd still argue for a PG_FUNCTION_API_V2 macro for the reasons above. What\n> the fmgrs needs to do is:\n\n> - call pg_fmgr_api_version() to get the protocol version\n> - when it wants to call a function 'foo' see if there is a 'pg_api_foo'\n> entry point, and if so, use the new interface, o/wise use the old one. No\n> need to even call it.\n\nThis strikes me as completely backwards, because it prejudges an\nassumption that protocol decisions can be made on a library-wide basis.\nI see no need for a library-wide protocol definition. What I want to\ndo is call 'pg_api_foo' (if it exists) to find out all about the\nfunction 'foo', without any restriction on whether 'foo' is like 'bar'\nthat happens to have been linked into the same shlib.\n\nThe test to see if 'pg_api_foo' exists is going to be the expensive\npart of this anyway. Once you've done that, you may as well call it...\n\n> My only real issue with all of this is that we need to separate the\n> protocol selection from the the data exchange.\n\nNegotiating a protocol to negotiate protocol strikes me as considerable\noverkill. It should be plenty sufficient to say that a parameterless\nfunction with a determinable name will hand back a struct whose first\nword identifies the contents of the struct. Why do we need another\nlayer on top of that? Especially if it's a layer that makes the\nunsupported assumption that all functions in a given shlib are similar?\nThat way reduces flexibility, rather than increasing it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 23:07:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "At 23:07 16/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>\n>> - call pg_fmgr_api_version() to get the protocol version\n>> - when it wants to call a function 'foo' see if there is a 'pg_api_foo'\n>> entry point, and if so, use the new interface, o/wise use the old one. No\n>> need to even call it.\n>\n>This strikes me as completely backwards, because it prejudges an\n>assumption that protocol decisions can be made on a library-wide basis.\n...\n>unsupported assumption that all functions in a given shlib are similar?\n>That way reduces flexibility, rather than increasing it.\n\nNot at all. The call is, as you point out, defining the protocl for\nenquiry. Nothing more. With 7.1, the process above is sufficient. There is\nno need to call *in this version* because pg_fmgr_api_version returns\nenough information when combined with the existence of a well-defined entry\npoint. Future versions would need to call the entry point, I would expect.\n\nIf we really wanted to we could let it also return a struct, which could\nindicate if all, some or none of the functions in the module have info\nfunctions.\n\nI guess it's not a big issue, and and as you say 'negtiating to negotiate'\nis probably overkill. Also, if we really need it in a future version we can\nadd it easily enough - it's would be just one more 'look for entry point'\ncall in the function manager.\n\nI'll be very happy to see newC replaced with this...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 15:33:42 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Not at all. The call is, as you point out, defining the protocl for\n> enquiry. Nothing more. With 7.1, the process above is sufficient. There is\n> no need to call *in this version* because pg_fmgr_api_version returns\n> enough information when combined with the existence of a well-defined entry\n> point. Future versions would need to call the entry point, I would expect.\n\nWell, I was planning to go ahead and call the entry point anyway, just\nso that it could tell me \"it's an old-style function\" if it wanted to.\nNot sure that'll ever happen in practice, but that case ought to work\nIMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 23:40:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
},
{
"msg_contents": "Nathan Myers <[email protected]> writes:\n> Why declare a function instead of a static struct?\n\nWell, two reasons. One is that a function provides wiggle room if we\nlater decide that the library needs to do some computation before\nhanding back the function info struct. For example, Philip suggested\nthat the info function might be just a stub that causes an additional\nlibrary to be loaded. I'm not sure we'll ever really *need* such\nflexibility in the future, but when it costs hardly anything to leave\nthe option open, why not?\n\nThe second reason is that if it's a function call, we only have one\nprimitive lookup operation that we expect the dynamic loader to be able\nto support: find a function in a shlib by name. We have that already in\norder to be able to call the real function. If it's a global variable,\nthen we need a second primitive lookup operation: find a global variable\nin a shlib by name. Given the way that dynamic-link shared libraries\nwork, this is *not* necessarily the same as the first operation (code\nand data are handled much differently in a shared library!) and might\nnot even be available on all platforms. At the very least it'd imply a\nround of per-platform development and portability testing that I doubt\nwe can afford if we want to shoehorn this feature into the 7.1 schedule.\n\nIn short, using a variable looks like more work for less functionality,\nand so the choice seems pretty clear-cut to me: use a function.\n\n> Users are allowed to have functions that start \n> with \"pg\" already, and that's quite a reasonable prefix for \n> functions meant to be called by Postgres. Therefore, I suggest \n> a prefix \"_pg\" instead of \"pg\". Thus,\n\n> const struct _pg_user_function _pg_user_function_foo = { 2, };\n\nThe exact details of the name prefix need to be settled regardless\nof whether the name is attached to a variable or a function. I was\nthinking of pg_finfo_foo for a function named foo. We want to keep\nthe prefix reasonably short, so as to reduce the risk of duplicate-\nsymbol conflicts if the platform's linker truncates names to some\nfixed length (I'm sure there are still some that do :-(). Using\na leading underscore (_pg_info_foo) might help but I worry about\ncreating conflicts with system names if we do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 17:30:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [rfc] new CREATE FUNCTION (and more) "
}
]
|
[
{
"msg_contents": "Is there any guidelines on the formatting of the C code in\nPG? As I was working on guc-file.l yesterday, I noticed\nsome things with LONG lines (I broke some of them up).\n\nI was wondering if there were formal standards? \n\nAlso, do we care about extraneous #include's? \n(src/backend/parser/scansup.c has #include <ctype.h> which it\ndoesn't need on closer inspection, for example). \n\nWhen I copied scansup.c into guc-file.l I added the #include\n<ctype.h>, but it may not need it. \n\nLarry\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 16 Nov 2000 11:17:58 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "coding style guidelines?"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> Is there any guidelines on the formatting of the C code in\n> PG? As I was working on guc-file.l yesterday, I noticed\n> some things with LONG lines (I broke some of them up).\n> I was wondering if there were formal standards? \n\nBrace layout, comment layout and indentation are all brought into line\nby pg_indent, which Bruce runs at least once per release cycle.\nHowever, I don't think pg_indent will consider breaking non-comment lines\ninto multiple lines, so it's up to the code author to be reasonable in\nthat area.\n\nMy own practice is to try to make the code look nice in an 80-column\nwindow.\n\nBTW, if you are writing a comment that you don't want to have\nreformatted by pg_indent's rather braindead reformatter, protect it\nwith some dashes:\n\n\t/*----------\n\t * This text will not get reformatted.\n\t *----------\n\t */\n\n\n> Also, do we care about extraneous #include's? \n\nNot very much. You have to be particularly cautious about removing\nsystem-header #includes, since what looks redundant on your platform\nmay not be redundant for other platforms. I think Bruce has a tool\nto look for unnecessary includes of our own header files, but it\ndoesn't risk trying to remove system headers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 19:51:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: coding style guidelines? "
},
{
"msg_contents": "> Larry Rosenman <[email protected]> writes:\n> > Is there any guidelines on the formatting of the C code in\n> > PG? As I was working on guc-file.l yesterday, I noticed\n> > some things with LONG lines (I broke some of them up).\n> > I was wondering if there were formal standards? \n> \n> Brace layout, comment layout and indentation are all brought into line\n> by pg_indent, which Bruce runs at least once per release cycle.\n> However, I don't think pg_indent will consider breaking non-comment lines\n> into multiple lines, so it's up to the code author to be reasonable in\n> that area.\n\nIt does wrap >80 lines.\n\n> \n> My own practice is to try to make the code look nice in an 80-column\n> window.\n> \n> BTW, if you are writing a comment that you don't want to have\n> reformatted by pg_indent's rather braindead reformatter, protect it\n> with some dashes:\n> \n> \t/*----------\n> \t * This text will not get reformatted.\n> \t *----------\n> \t */\n> \n> \n> > Also, do we care about extraneous #include's? \n> \n> Not very much. You have to be particularly cautious about removing\n> system-header #includes, since what looks redundant on your platform\n> may not be redundant for other platforms. I think Bruce has a tool\n> to look for unnecessary includes of our own header files, but it\n> doesn't risk trying to remove system headers.\n\nYes, it does not touch system includes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 21:06:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: coding style guidelines?"
}
]
|
[
{
"msg_contents": "\n> > Actually my proposal would be to not advertise \"newC\" in 7.1 and do\n> > some more research in that area until we have a solid and \n> maybe compatible\n> > interface that also makes the missing features possible \n> > (multiple columns and rows for return, enter the function \n> more than once\n> > to retrieve only part of the result if it consists of many rows).\n> \n> My problem with newC is that I think it is going to cause confusing by\n> people who create new-style functions and call the language \"C\". I\n> recommend making our current code \"C\" style, and calling pre-7.1\n> functions \"C70\", that way, we can still enable old functions to work,\n> they just have to use \"C70\" to make them work, and all our new code is\n> the clean \"C\" type.\n\nThis would be ok if the \"newC\" would be like any one other implementation,\nbut it is not. It is a PostgreSQL specific fmgr interface.\n\nOur old \"C\" fmgr interface is more or less exactly the same as in Informix\n(no wonder, they copied Illustra). In Informix this fmgr interface is called \"C\",\nthat is why I would like to keep the \"old\" style \"C\" also. \nIt is something with a sort of pseudo standard character.\n\nFor the new interface, something that makes clear that it is PostgreSQL specific\nwould imho be good, like \"pgC\". \nOr see my previous mail about \"parameter style postgresql\".\n\nAndreas\n",
"msg_date": "Thu, 16 Nov 2000 18:26:32 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Coping with 'C' vs 'newC' function language nam\n\tesh"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > > Actually my proposal would be to not advertise \"newC\" in 7.1 and do\n> > > some more research in that area until we have a solid and \n> > maybe compatible\n> > > interface that also makes the missing features possible \n> > > (multiple columns and rows for return, enter the function \n> > more than once\n> > > to retrieve only part of the result if it consists of many rows).\n> > \n> > My problem with newC is that I think it is going to cause confusing by\n> > people who create new-style functions and call the language \"C\". I\n> > recommend making our current code \"C\" style, and calling pre-7.1\n> > functions \"C70\", that way, we can still enable old functions to work,\n> > they just have to use \"C70\" to make them work, and all our new code is\n> > the clean \"C\" type.\n> \n> This would be ok if the \"newC\" would be like any one other implementation,\n> but it is not. It is a PostgreSQL specific fmgr interface.\n> \n> Our old \"C\" fmgr interface is more or less exactly the same as in Informix\n> (no wonder, they copied Illustra). In Informix this fmgr interface is called \"C\",\n> that is why I would like to keep the \"old\" style \"C\" also. \n> It is something with a sort of pseudo standard character.\n\nBut we have very few Informix functions moving to PostgreSQL.\n\n> \n> For the new interface, something that makes clear that it is PostgreSQL specific\n> would imho be good, like \"pgC\". \n> Or see my previous mail about \"parameter style postgresql\".\n\nMy concern is that this is confusing. All our documentation says the\nstyle is called C. Functions are confusing enough. Adding a new name\nfor our default function type could add to the confusion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 12:30:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Coping with 'C' vs 'newC' function language nam\n esh"
}
]
|
[
{
"msg_contents": "\n> But we have very few Informix functions moving to PostgreSQL.\n\nI do not understand this comment.\nWhat you imho forget here is that a definition for an interface will eventually be\nincluded in the SQL standard. \nAnd it will be what Oracle or DB/2 (maybe even Informix) does.\n\nI conclude from previous mails, that none of us have the slightest idea\nhow this works in DB/2 or Oracle. This is imho bad.\n\n> My concern is that this is confusing. All our documentation says the\n> style is called C. Functions are confusing enough. Adding a new name\n> for our default function type could add to the confusion.\n\nYes, that is why imho some more research and adjustments are necessary \nbefore we make this the new default interface, and postpone public advertisement \nto 7.2.\n\nAndreas\n",
"msg_date": "Thu, 16 Nov 2000 18:51:15 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Coping with 'C' vs 'newC' function language\n\t nam esh"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > But we have very few Informix functions moving to PostgreSQL.\n> \n> I do not understand this comment.\n> What you imho forget here is that a definition for an interface will eventually be\n> included in the SQL standard. \n> And it will be what Oracle or DB/2 (maybe even Informix) does.\n\nOK, lets call the old style \"stdC\" and the new one \"C\".\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Nov 2000 12:59:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Coping with 'C' vs 'newC' function language nam esh"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, lets call the old style \"stdC\" and the new one \"C\".\n\nThe old style has the be 'C' because otherwise you break every old script,\nincluding dumps for upgrades, and Lamar will *really* be on your case this\ntime. ;-)\n\nAlso, the grammar clause \"LANGUAGE C\" is actually part of the standard, so\nnaming it \"LANGUAGE stdC\" will make it *less* standard. (Not that I buy\nInformix as being a \"standard\".)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 16 Nov 2000 19:38:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Coping with 'C' vs 'newC' function language nam esh"
}
]
|
[
{
"msg_contents": "Currently, CHAR is correctly interpreted as CHAR(1), but VARCHAR is\nincorrectly interpreted as VARCHAR(<infinity>). Any reason for that,\nbesides the fact that it of course makes much more sense than VARCHAR(1)?\n\nAdditionally, neither CHAR nor VARCHAR seem to bark on too long input,\nthey just truncate silently.\n\nI'm wondering because should the bit types be made to imitate this\nincorrect behaviour, or should they start out correctly?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 16 Nov 2000 19:16:59 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Varchar standard compliance"
},
{
"msg_contents": "I've been wondering the difference in varchar and TEXT in the aspect of\nlength and indexing - what would happen if you tried to index a\nvarchar(BLCKSZ) ? I know you can index smaller portions of text (at least it\nappears you can) so why not larger alphanumeric data? (I'm not complaining,\njust trying to understand.)\n\nI just made a varchar(30000) field, inserted some data into it and created\nan index on it, it seemed to work OK -- is it really only indexing X\ncharacters or something?\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <[email protected]>\nTo: \"PostgreSQL Development\" <[email protected]>\nSent: Thursday, November 16, 2000 10:16 AM\nSubject: [HACKERS] Varchar standard compliance\n\n\n> Currently, CHAR is correctly interpreted as CHAR(1), but VARCHAR is\n> incorrectly interpreted as VARCHAR(<infinity>). Any reason for that,\n> besides the fact that it of course makes much more sense than VARCHAR(1)?\n>\n> Additionally, neither CHAR nor VARCHAR seem to bark on too long input,\n> they just truncate silently.\n>\n> I'm wondering because should the bit types be made to imitate this\n> incorrect behaviour, or should they start out correctly?\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n\n",
"msg_date": "Thu, 16 Nov 2000 11:40:39 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Varchar standard compliance"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Currently, CHAR is correctly interpreted as CHAR(1), but VARCHAR is\n> incorrectly interpreted as VARCHAR(<infinity>). Any reason for that,\n> besides the fact that it of course makes much more sense than VARCHAR(1)?\n\nOn what grounds do you claim that behavior is incorrect?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 19:56:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Varchar standard compliance "
},
{
"msg_contents": "Tom Lane writes:\n\n> > Currently, CHAR is correctly interpreted as CHAR(1), but VARCHAR is\n> > incorrectly interpreted as VARCHAR(<infinity>). Any reason for that,\n> > besides the fact that it of course makes much more sense than VARCHAR(1)?\n> \n> On what grounds do you claim that behavior is incorrect?\n\nBecause SQL says so:\n\n <character string type> ::=\n CHARACTER [ <left paren> <length> <right paren> ]\n | CHAR [ <left paren> <length> <right paren> ]\n | CHARACTER VARYING <left paren> <length> <right paren>\n | CHAR VARYING <left paren> <length> <right paren>\n | VARCHAR <left paren> <length> <right paren>\n\n 4) If <length> is omitted, then a <length> of 1 is implicit.\n\nIt doesn't make much sense to me either, but it's consistent with the\noverall SQL attitude of \"no anythings of possibly unlimited length\".\n\nIf we want to keep this, then there would really be no difference between\nVARCHAR and TEXT, right?\n\nI'm not partial to either side, but I wanted to know what the bit types\nshould do.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 17 Nov 2000 16:57:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Varchar standard compliance "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> On what grounds do you claim that behavior is incorrect?\n\n> Because SQL says so:\n\n> <character string type> ::=\n> CHARACTER [ <left paren> <length> <right paren> ]\n> | CHAR [ <left paren> <length> <right paren> ]\n> | CHARACTER VARYING <left paren> <length> <right paren>\n> | CHAR VARYING <left paren> <length> <right paren>\n> | VARCHAR <left paren> <length> <right paren>\n\n> 4) If <length> is omitted, then a <length> of 1 is implicit.\n\nWell, what that actually says is that CHAR means CHAR(1). The syntax\ndoes not allow VARCHAR without (n), so the thing we are noncompliant\non is not what we consider the default n to be, but whether there is\na default length for varchar at all. The spec is not offering one.\n\nI don't particularly want to enforce the spec's position that leaving\noff (n) is illegal, and given the choice between defaulting to\nVARCHAR(1) or VARCHAR(large), I'll take the second. The second one\nat least has some usefulness...\n\n> If we want to keep this, then there would really be no difference between\n> VARCHAR and TEXT, right?\n\nThere's no real difference between VARCHAR without a length limit and\nTEXT, no.\n\n> I'm not partial to either side, but I wanted to know what the bit types\n> should do.\n\nI'd be inclined to stick with our existing VARCHAR behavior just on\ngrounds of backwards compatibility. If you want to make the bit types\nbehave differently, I wouldn't say that's indefensible.\n\nHowever, one advantage of treating BIT VARYING without (n) as unlimited\nis that you'd have the equivalent functionality to TEXT without having\nto make a third bit type...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 11:27:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Varchar standard compliance "
},
{
"msg_contents": "Is there a reason why the conversion from CHAR to CHAR(1) is done in\nanalyze.c:transformColumnType rather than right in the\ngrammar? Currently, you get this incorrect behaviour:\n\npeter=# select cast('voodoo' as char(1));\n ?column?\n----------\n v\n(1 row)\n \npeter=# select cast('voodoo' as char);\n ?column?\n----------\n voodoo\n(1 row)\n\nBoth should return 'v'.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 17 Nov 2000 19:51:27 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Varchar standard compliance "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Is there a reason why the conversion from CHAR to CHAR(1) is done in\n> analyze.c:transformColumnType rather than right in the\n> grammar?\n\nWell, transformColumnType does database access, which is verboten during\nthe grammar phase. (The grammar has to execute correctly even if we're\nin transaction-abort state, else we'll not be able to recognize the\nCOMMIT or ROLLBACK command...)\n\nYou could possibly do the equivalent work in the grammar based strictly\non recognizing the keywords CHAR, NUMERIC, etc, but I think that\napproach will probably run into a dead end at some point. Really,\nthe grammar is NOT the place to be making semantic deductions. It\nshould give back an undecorated parse tree and let parse_analyze fill\nin semantic deductions. (We've been pretty lax about that in the past,\nbut I've been trying to move semantics code out of gram.y recently.)\n\n> peter=# select cast('voodoo' as char(1));\n> ?column?\n> ----------\n> v\n> (1 row)\n \n> peter=# select cast('voodoo' as char);\n> ?column?\n> ----------\n> voodoo\n> (1 row)\n\nPossibly transformColumnType() should be applied to datatype names\nappearing in casts (and other places?) as well as those appearing in\ntable column declarations. However, I find your example unconvincing:\nI'd expect the result of that cast to be of type char(6), not char(1).\nIn short, I don't believe the above-quoted behavior is wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 20:09:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Varchar standard compliance "
}
]
|
[
{
"msg_contents": "\n\tThere's a message on -general about a possible\nproblem in the deferred RI constraints. He was doing a\nsequence like:\nbegin\n delete \n insert\nend\nand having it fail even though the deleted key was back in\nplace at the end.\n\n\tMy understanding of the spec is that that sequence should\nhave succeeded, but I could very well be wrong. Changing the \nnoaction check to fix this is probably fairly minimal (making\nsure that there isn't now a key with the old value before checking\nfor violated rows would probably be sufficient for match full and\nunspecified). And I guess technically this could happen for\nimmediate constraints as well if a single update changed a key to\na new value and another to the old one so the constraint was still\nsatisifed.\n\n\tBut, this brings up a question for the referential actions.\nIt doesn't look like the actions are limited to whether or not the\nrow would be violating, but instead based on what row it was associated\nwith before. (Makes sense, you'd want a cascade update to keep\nthe same associations). But that made me wonder about exactly \n*when* the actions were supposed to take place for deferred constraints.\nYou could say at check time, but that doesn't make sense for RESTRICT\nreally, and restrict doesn't have any special wording I see in its\ndefinition. So if you had a deferred on delete cascade constraint, and you\ndo begin; delete from pk; select * from fk; end; do you see the fk rows\nthat were associated with the deleted pk rows?\n\n\n",
"msg_date": "Thu, 16 Nov 2000 11:10:02 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions on RI spec (poss. bugs)"
},
{
"msg_contents": "Stephan Szabo wrote:\n>\n> There's a message on -general about a possible\n> problem in the deferred RI constraints. He was doing a\n> sequence like:\n> begin\n> delete\n> insert\n> end\n> and having it fail even though the deleted key was back in\n> place at the end.\n\n Isn't that (delete and reinsert the same PK) what the\n standard means with \"triggered data change violation\"?\n\n It is a second touching of a unique matching PK. And in this\n case the standard doesn't define a behaviour, instead it says\n you cannot do so.\n\n In the case of reinserting a deleted PK, does the new PK row\n inherit the references to the old PK row? If so, an ON DELETE\n CASCADE must be suppressed - no?\n\n If I'm right that it should be a \"triggered data change\n violation\", the problem is just changing into one we have\n with delete/reinsert in the ON DELETE CASCADE case. Haven't\n tested, but the current implementation shouldn't detect it.\n\n\nJan\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 21 Nov 2000 12:32:23 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions on RI spec (poss. bugs)"
},
{
"msg_contents": "Jan Wieck writes:\n\n> Stephan Szabo wrote:\n> >\n> > There's a message on -general about a possible\n> > problem in the deferred RI constraints. He was doing a\n> > sequence like:\n> > begin\n> > delete\n> > insert\n> > end\n> > and having it fail even though the deleted key was back in\n> > place at the end.\n> \n> Isn't that (delete and reinsert the same PK) what the\n> standard means with \"triggered data change violation\"?\n\nTriggered data change violations can only occur if the same attribute is\nchanged twice during the same *statement*, not transaction.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 21 Nov 2000 19:35:21 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions on RI spec (poss. bugs)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Jan Wieck writes:\n> \n> > Stephan Szabo wrote:\n> > >\n> > > There's a message on -general about a possible\n> > > problem in the deferred RI constraints. He was doing a\n> > > sequence like:\n> > > begin\n> > > delete\n> > > insert\n> > > end\n> > > and having it fail even though the deleted key was back in\n> > > place at the end.\n> >\n> > Isn't that (delete and reinsert the same PK) what the\n> > standard means with \"triggered data change violation\"?\n> \n> Triggered data change violations can only occur if the same attribute is\n> changed twice during the same *statement*, not transaction.\n>\nDo we also get \"Triggered data change violations\" when we delete and\nthen \ninsert on the FK side in a single transaction ?\n\nI just had to remove a FK constraint because I could not figure ot where \nthe violation was coming from ;(\n\n-----------------\nHannu\n",
"msg_date": "Wed, 22 Nov 2000 18:16:06 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions on RI spec (poss. bugs)"
},
{
"msg_contents": "\nOn Tue, 21 Nov 2000, Jan Wieck wrote:\n\n> Stephan Szabo wrote:\n> >\n> > There's a message on -general about a possible\n> > problem in the deferred RI constraints. He was doing a\n> > sequence like:\n> > begin\n> > delete\n> > insert\n> > end\n> > and having it fail even though the deleted key was back in\n> > place at the end.\n> \n> Isn't that (delete and reinsert the same PK) what the\n> standard means with \"triggered data change violation\"?\n> \n> It is a second touching of a unique matching PK. And in this\n> case the standard doesn't define a behaviour, instead it says\n> you cannot do so.\n\nAs Peter said, it really looks like the 99 draft anyway means twice in a\nsingle statement not transaction which is probably there to prevent\ninfinite loops. \n\n> In the case of reinserting a deleted PK, does the new PK row\n> inherit the references to the old PK row? If so, an ON DELETE\n> CASCADE must be suppressed - no?\nI'm not sure because it's unclear to me whether ri actions are actually\ndeferred. Restrict for example sounds like it occurs immediately on the\nstatement and it's not worded differently from others in the draft I have.\nSo, it's possible that the actions are supposed to occur immediately on\nthe statement, even if the constraint check is deferred. I really don't\nknow, but it would explain a behavioral difference between restrict and\nnoaction that makes having both make sense (restrict prevents you from \nmoving away - no action lets you move away as long as the constraint is\nokay at check time).\n\n",
"msg_date": "Wed, 22 Nov 2000 11:27:50 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions on RI spec (poss. bugs)"
}
]
|
[
{
"msg_contents": "Guys, hello.\n\nHere is a problem.\n\n--\n-- Creating 2 new functions and new type\n--\nBEGIN;\n\nCREATE FUNCTION enum_week_in (opaque)\n\tRETURNS int2\n\tAS '\n\tDECLARE\n\t invalue ALIAS for $1;\n\tBEGIN\n\t\tIF invalue='''' OR invalue=''0'' THEN RETURN 0; END IF;\n\t\tIF invalue=''Monday'' OR invalue=''1'' THEN RETURN 1; END IF;\n\t\tIF invalue=''Tuesday'' OR invalue=''2'' THEN RETURN 2; END IF;\n\t\tIF invalue=''Wednesday'' OR invalue=''3'' THEN RETURN 3; END IF;\n\t\tRAISE EXCEPTION ''incorrect input value: %'',invalue;\n\tEND;'\n\tLANGUAGE 'plpgsql'\n\tWITH (ISCACHABLE);\n\nCREATE FUNCTION enum_week_out (opaque)\n\tRETURNS text\n\tAS '\n\tDECLARE\n\t outvalue ALIAS for $1;\n\tBEGIN\n\t\tIF outvalue=0 THEN RETURN ''''; END IF;\n\t\tIF outvalue=1 THEN RETURN ''Monday''; END IF;\n\t\tIF outvalue=2 THEN RETURN ''Tuesday''; END IF;\n\t\tIF outvalue=3 THEN RETURN ''Wednesday''; END IF;\n\t\tRAISE EXCEPTION ''incorrect output value: %'',outvalue;\n\tEND;'\n\tLANGUAGE 'plpgsql'\n\tWITH (ISCACHABLE);\n\nCREATE TYPE enum_week (\n\tinternallength = 2,\n\tinput = enum_week_in,\n\toutput = enum_week_out,\n\tPASSEDBYVALUE\n);\n\nCOMMIT;\n\nWell, all is ok after it, e.g. functions and type were registered in system catalog.\n\nNow, when I try to do \"SELECT enum_week_in('Monday')\", I get the following:\n\nNOTICE: plpgsql: ERROR during compile of enum_week_in near line 0\n\nThe same will occure if I\n\nCREATE TABLE test (wday enum_week);\ninsert into test (wday) values ('Monday')\n\nIf I redefine the same functions with input argtype 'text'/'int2' they work fine.\nI guess the problem is that PL/pgSQL doesn't handle opaque type correctly.\n\nAny ideas ?\n\nI don't care how but I need to emulate ENUM type, just to convert MySQL dumps to PostgreSQL. E.g. ENUM values \nstored in MySQL dump should be restorable in Postgres without any conversion.\n\nI running PostgreSQL 7.0.3 on Linux RedHat 6.2, kernel 2.2.15, Intel Celeron CPU; Postgres was \nupgraded from 7.0.2 without changing anything in system catalog.\n\nThanks,\nMax Rudensky.\n",
"msg_date": "Thu, 16 Nov 2000 21:24:20 +0200",
"msg_from": "Max Fonin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Enum type emulation: problem with opaque type in PL/pgSQL functions"
},
{
"msg_contents": "Max Fonin <[email protected]> writes:\n> I guess the problem is that PL/pgSQL doesn't handle opaque type correctly.\n\nNo it doesn't, which is not surprising considering that opaque isn't\nreally a type at all. The error message could be improved though :-(\n\nCurrently I believe that the only way to write datatype I/O routines\nis to do it in C, because what they really need to deal in is C-style\nstrings, and those are not an SQL-level type.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2000 11:13:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Enum type emulation: problem with opaque type in\n\tPL/pgSQL functions"
},
{
"msg_contents": "> I don't care how but I need to emulate ENUM type, just to convert \n> MySQL dumps to PostgreSQL. E.g. ENUM values \n> stored in MySQL dump should be restorable in Postgres without any \n> conversion.\n\nIn MySQL, ENUM is like this:\n\ncreate table blah (\n\tsex ENUM ('M', 'F')\n);\n\nThis can be emulated in Postgres like this:\n\ncreate table blah (\n\tsex CHAR(1) CHECK (sex IN ('M', 'F'))\n);\n\nThe _real_ trick is implementing MySQL sets in Postgres...\n\nChris\n\n",
"msg_date": "Fri, 24 Nov 2000 09:13:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Enum type emulation: problem with opaque type in\n\tPL/pgSQL functions"
},
{
"msg_contents": "On Thu, 23 Nov 2000 11:13:28 -0500\nTom Lane <[email protected]> wrote:\n\n> Max Fonin <[email protected]> writes:\n> > I guess the problem is that PL/pgSQL doesn't handle opaque type correctly.\n> \n> No it doesn't, which is not surprising considering that opaque isn't\n> really a type at all. The error message could be improved though :-(\n\nWell, I understood that the C is the only way very quick.\nReally, OPAQUE is just reference type like char* or void*, isn't it ?\n\nOK, I implemented emulation and now have some working version at http://ziet.zhitomir.ua/~fonin/code/my2pg.pl.\nThis is MySQL->Postgres dump converter and I've succeed with loading my production MySQL database converted \nwith it to Postgres.\nHowever it still needs manuall correction (see BUGS section in POD).\n\nBTW, can't somebody tell me when PG 7.1 will be released :) ?\n\n> Currently I believe that the only way to write datatype I/O routines\n> is to do it in C, because what they really need to deal in is C-style\n> strings, and those are not an SQL-level type.\n> \n> \t\t\tregards, tom lane\n\nThanks,\nMax Rudensky.\n",
"msg_date": "Fri, 24 Nov 2000 13:02:14 +0200",
"msg_from": "Max Fonin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Enum type emulation: problem with opaque type in\n\tPL/pgSQL functions"
},
{
"msg_contents": "Max Fonin <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>>>> I guess the problem is that PL/pgSQL doesn't handle opaque type correctly.\n>> \n>> No it doesn't, which is not surprising considering that opaque isn't\n>> really a type at all. The error message could be improved though :-(\n\n> Well, I understood that the C is the only way very quick.\n> Really, OPAQUE is just reference type like char* or void*, isn't it ?\n\nNo, it isn't a type at all. Opaque really means, in essence, that\nyou're not saying what the function's arguments or result are.\n\nThere are several reasons for handling datatype I/O routines that way:\n\n1. The actual argument types include C strings, which aren't an SQL\ndatatype.\n\n2. The I/O routines for a new type have to be defined before you can\nsay CREATE TYPE, and thus they can't name their true input or result\ntype anyway.\n\n3. We have some \"generic\" I/O routines like array_in and array_out,\nwhich work for multiple datatypes and so can't be declared as taking\nany specific datatype.\n\nBTW, the existing declarations of I/O routines for built-in types are\npretty messy and inconsistent (in particular, a lot of them are declared\nto take or return int4 when they do no such thing). This could be\ncleaned up somewhat if we invented an SQL type name for \"C string\",\nbut I don't see any way around the other two points.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 11:37:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Enum type emulation: problem with opaque type in\n\tPL/pgSQL functions"
}
]
|
[
{
"msg_contents": "> > BUT, do we know for sure that sleep(0) is not optimized in \n> > the library to just return? \n> \n> We can only do our best here. I think guessing whether other backends\n> are _about_ to commit is pretty shaky, and sleeping every time is a\n> waste. This seems the cleanest.\n\nA long ago you, Bruce, made me gift - book about transaction processing\n(thanks again -:)). This sleeping before fsync in commit is described\nthere as standard technique. And the reason is cleanest.\nMen, cost of fsync is very high! { write (64 bytes) + fsync() }\ntakes ~ 1/50 sec. Yes, additional 1/200 sec or so results in worse\nperformance when there is only one backend running but greatly\nincrease overall performance for 100 simultaneous backends. Ie this\ndelay is trade off to gain better scalability.\n\nI agreed that it must be configurable, smaller or probably 0 by\ndefault, use approximate # of simultaneously running backends for\nguessing (postmaster could maintain this number in shmem and\nbackends could just read it without any locking - exact number is\nnot required), good described as tuning patameter in documentation.\nAnyway I object sleep(0).\n\nVadim\n",
"msg_date": "Thu, 16 Nov 2000 13:11:54 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n xlog.c)"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> A long ago you, Bruce, made me gift - book about transaction processing\n> (thanks again -:)). This sleeping before fsync in commit is described\n> there as standard technique. And the reason is cleanest.\n> Men, cost of fsync is very high! { write (64 bytes) + fsync() }\n> takes ~ 1/50 sec. Yes, additional 1/200 sec or so results in worse\n> performance when there is only one backend running but greatly\n> increase overall performance for 100 simultaneous backends. Ie this\n> delay is trade off to gain better scalability.\n\n> I agreed that it must be configurable, smaller or probably 0 by\n> default, use approximate # of simultaneously running backends for\n> guessing (postmaster could maintain this number in shmem and\n> backends could just read it without any locking - exact number is\n> not required), good described as tuning patameter in documentation.\n> Anyway I object sleep(0).\n\nGood points. Another idea that Bruce and I kicked around on the phone\nwas to make the pre-fsync delay be self-adjusting; that is, it'd\nautomatically move up and down based on system load. For example,\nyou could keep track of the time since the last xact commit, and guess\nthat the time to the next one will be similar. If that's greater than\nyour intended sleep delay, forget the sleep and just fsync. But the\nshorter the time since the last commit, the longer you should be willing\nto delay. This'd need some experimentation to get right, but it seems a\nlot better than asking the dbadmin to pick a value.\n\nAnother thing that should happen is that once someone fsyncs, all the\nother backends waiting should be awoken immediately, instead of waiting\nfor their delays to time out. Not sure how doable this is --- there's\nno wait-for-semaphore-with-timeout in SysV IPC, is there? 
Perhaps we\ncan distinguish the first waiter (the guy who will ultimately do the\nfsync, he's just hoping for some passengers) from the rest, who see\nthat someone's already waiting for fsync and just wait for him to do it.\nThose other guys don't do a time wait, they sleep on a semaphore that\nthe first waiter will release once he's done the fsync.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 17:05:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n\txlog.c)"
},
{
"msg_contents": "> > sleep(3) should conform to POSIX specification, if anyone has the\n> > reference they can check it to see what the effect of sleep(0)\n> > should be.\n> \n> Yes, but Posix also specifies sched_yield() which rather explicitly\n> allows a process to yield its timeslice. No idea how well that is\n> supported.\n\nOK, I have a new idea.\n\nThere are two parts to transaction commit. The first is writing all\ndirty buffers or log changes to the kernel, and second is fsync of the\nlog file.\n\nI suggest having a per-backend shared memory byte that has the following\nvalues:\n\n\tSTART_LOG_WRITE\n\tWAIT_ON_FSYNC\n\tNOT_IN_COMMIT\n\tbackend_number_doing_fsync\n\nI suggest that when each backend starts a commit, it sets its byte to\nSTART_LOG_WRITE. When it gets ready to fsync, it checks all backends. \nIf all are NOT_IN_COMMIT, it does fsync and continues.\n\nIf one or more are in START_LOG_WRITE, it waits until no one is in\nSTART_LOG_WRITE. It then checks all WAIT_ON_FSYNC, and if it is the\nlowest backend in WAIT_ON_FSYNC, marks all others with its backend\nnumber, and does fsync. It then clears all backends with its number to\nNOT_IN_COMMIT. Other backend will see they are not the lowest\nWAIT_ON_FSYNC and will wait for their byte to be set to NOT_IN_COMMIT\nso they can then continue, knowing their data was synced.\n\nThis allows a single backend not to sleep, and allows multiple backends\nto bunch up only when they are all about to commit.\n\nThe reason backend numbers are written is so other backends entering the\ncommit code will not interfere with the backends performing fsync.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Nov 2000 00:00:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n xlog.c)"
},
{
"msg_contents": "Added to TODO:\n\n\t* Delay fsync() when other backends are about to commit too\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > > BUT, do we know for sure that sleep(0) is not optimized in \n> > > the library to just return? \n> > \n> > We can only do our best here. I think guessing whether other backends\n> > are _about_ to commit is pretty shaky, and sleeping every time is a\n> > waste. This seems the cleanest.\n> \n> A long ago you, Bruce, made me gift - book about transaction processing\n> (thanks again -:)). This sleeping before fsync in commit is described\n> there as standard technique. And the reason is cleanest.\n> Men, cost of fsync is very high! { write (64 bytes) + fsync() }\n> takes ~ 1/50 sec. Yes, additional 1/200 sec or so results in worse\n> performance when there is only one backend running but greatly\n> increase overall performance for 100 simultaneous backends. Ie this\n> delay is trade off to gain better scalability.\n> \n> I agreed that it must be configurable, smaller or probably 0 by\n> default, use approximate # of simultaneously running backends for\n> guessing (postmaster could maintain this number in shmem and\n> backends could just read it without any locking - exact number is\n> not required), good described as tuning patameter in documentation.\n> Anyway I object sleep(0).\n> \n> Vadim\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 21:26:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam ( \n\txact.c xlog.c)"
},
{
"msg_contents": "Hi there,\n\n I would like to inquire of any support for WinME to run\nPostgreSQL. Should anyone knows how, I would be grateful to ask for\nadvice. I need to run PostgreSQL on my WinME box.\n\n-- \n Manny C. Cabido\n ====================================\n e-mail:[email protected]\n [email protected]\n =====================================\n\n",
"msg_date": "Tue, 23 Jan 2001 11:05:32 +0800 (PHT)",
"msg_from": "Manuel Cabido <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on WinME?"
}
]
|
[
{
"msg_contents": "> > You are going to kernel call/yield anyway to fsync, so why \n> > not try and if someone does the fsync, we don't need to do it.\n> > I am suggesting re-checking the need for fsync after the return\n> > from sleep(0).\n> \n> It might make more sense to keep a private copy of the last time\n> the file was modified per-backend by that particular backend and\n> a timestamp of the last fsync shared globally so one can forgo the\n> fsync if \"it hasn't been dirtied by me since the last fsync\"\n> \n> This would provide a rendevous point for the fsync call although\n> cost more as one would need to periodically call gettimeofday to\n> set the modified by me timestamp as well as the post-fsync shared\n> timestamp.\n\nAlready made, but without timestamps. WAL maintains last byte of log\nwritten/fsynced in shmem, so XLogFlush(_last_byte_to_be_flushed_)\nwill do nothing if data are already on disk.\n\nVadim\n",
"msg_date": "Thu, 16 Nov 2000 13:26:15 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: [COMMITTERS] pgsql/src/backend/access/transam\n\t( xact.c xlog.c)"
}
]
|
[
{
"msg_contents": "> > > > No. Checkpoints are to speedup after crash recovery and\n> > > > to remove/archive log files. With WAL server doesn't write\n> > > > any datafiles on commit, only commit record goes to log\n> > > > (and log fsync-ed). Dirty buffers remains in memory long\n> \n> Ok, so with CHECKPOINTS, we could move the offline log files to\n> somewhere else so that we could archive them, in my\n> undertstanding. Now question is, how we could recover from disaster\n> like losing every table files except log files. Can we do this with\n> WAL? If so, how can we do it?\n\nNot currently. WAL based BAR is required. I think there will be no BAR\nin 7.1, but it may be added in 7.1.X (no initdb will be required).\nAnyway BAR implementation is not in my plans. All in your hands, guys -:)\n\nVadim\n",
"msg_date": "Thu, 16 Nov 2000 13:31:54 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n xlog.c)"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > > > No. Checkpoints are to speedup after crash recovery and\n> > > > > to remove/archive log files. With WAL server doesn't write\n> > > > > any datafiles on commit, only commit record goes to log\n> > > > > (and log fsync-ed). Dirty buffers remains in memory long\n> >\n> > Ok, so with CHECKPOINTS, we could move the offline log files to\n> > somewhere else so that we could archive them, in my\n> > undertstanding. Now question is, how we could recover from disaster\n> > like losing every table files except log files. Can we do this with\n> > WAL? If so, how can we do it?\n> \n> Not currently. WAL based BAR is required. I think there will be no BAR\n> in 7.1, but it may be added in 7.1.X (no initdb will be required).\n> Anyway BAR implementation is not in my plans. All in your hands, guys -:)\n> \n> Vadim\n\nCam I ask what BAR is ?\n\n",
"msg_date": "Sun, 19 Nov 2000 19:05:18 +0100",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam (xact.c xlog.c)"
},
{
"msg_contents": "At 07:05 PM 11/19/00 +0100, [email protected] wrote:\n\n>Cam I ask what BAR is ?\n\nBackup and recovery, presumably...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 19 Nov 2000 10:37:43 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam (xact.c\n xlog.c)"
},
{
"msg_contents": "> > > Ok, so with CHECKPOINTS, we could move the offline log files to\n> > > somewhere else so that we could archive them, in my\n> > > undertstanding. Now question is, how we could recover from disaster\n> > > like losing every table files except log files. Can we do this with\n> > > WAL? If so, how can we do it?\n> > \n> > Not currently. WAL based BAR is required. I think there will be no BAR\n> > in 7.1, but it may be added in 7.1.X (no initdb will be required).\n> > Anyway BAR implementation is not in my plans. All in your hands, guys -:)\n> > \n> > Vadim\n> \n> Cam I ask what BAR is ?\n\nBackup And Restore.\n\nVadim\n\n\n",
"msg_date": "Sun, 19 Nov 2000 11:26:56 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam (xact.c xlog.c)"
}
]
|
[
{
"msg_contents": "Hi:\nI have a MS Access database with tables containing TEXT fields.\nI need import that info in a postgres 7 table. \nHow to do it? \nIf I use copy from, dont work. \n\ntia\n\nCarlos Jacobs\n",
"msg_date": "Thu, 16 Nov 2000 18:56:08 -0300",
"msg_from": "\"Carlos Jacobs\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Import text field"
},
{
"msg_contents": "Carlos Jacobs wrote:\n> \n> Hi:\n> I have a MS Access database with tables containing TEXT fields.\n> I need import that info in a postgres 7 table.\n> How to do it?\n> If I use copy from, dont work.\n\nI have a perl program which will import this sort of multi-line CSV data\nthat is not handled by the COPY ... DELIMITER ... sort of mechanism in\nPostgreSQL.\n\nE-mail me privately if you want a copy.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Fri, 24 Nov 2000 21:50:32 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Import text field"
},
{
"msg_contents": "Hello everybody\n\nFirst of all I wold like to thank you in advance because I am sure\nthat I would get my answer immediately.\n\nWell I want to use nested select statements ,is it possible.\nIs there any counterpart for 'Sysdate from dual' as in Oracle\n\nwith Regards\n\nSanjay Arora\n",
"msg_date": "Wed, 29 Nov 2000 20:31:07 +0530",
"msg_from": "Sanjay Arora <[email protected]>",
"msg_from_op": false,
"msg_subject": "How to use nested select statements"
},
{
"msg_contents": "At 08:31 PM 11/29/00 +0530, Sanjay Arora wrote:\n\n>Well I want to use nested select statements ,is it possible.\n\nIn PG 7.0 you can use subselects in the \"where\" clause.\n\n>Is there any counterpart for 'Sysdate from dual' as in Oracle\n\nYou don't need \"dual\", just \"select now()\" will work.\n\nIf you're porting Oracle code, you can make life a lot easier by\ndefining some stuff:\n\ncreate function sysdate() returns datetime as '\nbegin\n return ''now'';\nend;' language 'plpgsql';\n\ncreate view dual as select sysdate();\n\nThen \"select sysdate from dual\", \"select (any expression) from dual\", etc all do what\nyou'd expect.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 29 Nov 2000 07:43:26 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to use nested select statements"
}
]
|
[
{
"msg_contents": "I have committed changes to keep reference counts for system cache entries.\nThis should eliminate the issues we've had with cache entries sometimes\ngetting dropped while still in use. Some notes:\n\n1. The routine formerly called SearchSysCacheTuple is now SearchSysCache().\nIt increments the reference count on the returned cache entry. You must\ndrop the reference count when done using the cache entry, so the typical\ncall scenario is now something like\n\n\ttuple = SearchSysCache(...);\n\tif (HeapTupleIsValid(tuple))\n\t{\n\t\t... use tuple ...\n\t\tReleaseSysCache(tuple);\n\t}\n\n2. If a cache inval message arrives for a cache entry with refcount > 0,\nthe cache entry will not be dropped until the refcount goes to zero.\nHowever, it will immediately be marked \"dead\" and so will not be found\nby subsequent cache searches.\n\n3. It turned out not to be hard to make the parser drop reference counts\nwhen done with cache entries, so I went over to a hard-and-fast rule\nthat everyone must drop acquired refcounts. If you don't, you'll get\nan annoying NOTICE at commit time, just like for buffer refcount leaks.\n\n4. There are several convenience routines for common usage patterns:\n\n* SearchSysCacheCopy() --- formerly SearchSysCacheTupleCopy() --- still\nexists, although the need for it is less than before. You do NOT need\nthis routine just to hang onto a reference to a cache entry for awhile.\nYou use it if you want to update the tuple and need a modifiable copy\nto scribble on. 
When you use this routine, you get back a palloc'd\ntuple (free it with heap_freetuple), and the original cache entry does\nnot have its refcount bumped.\n\n* SearchSysCacheExists() just probes for the existence of a tuple via\nthe cache; it returns true or false without bumping the refcount.\n\n* GetSysCacheOid() returns the cache entry's OID, or InvalidOid if no\nentry found, leaving the refcount un-bumped.\n\n* There are some other new convenience routines too in parse_oper.c,\nparse_type.c, and lsyscache.c, to reduce the number of places that\nhave to bother with the full SearchSysCache/ReleaseSysCache protocol.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2000 17:31:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SearchSysCache changes committed"
}
]
|
[
{
"msg_contents": "Hi,\nI'm new to PostgreSQL and have been asked to determine the cause of what\nappear to be hung processes on FreeBSD after one or more frontend apps\ncrash. I did alot of searching through the msg lists and found a few\ndiscussions that seem related, but I was unable to find a resolution in the\nmsg archives. I noticed the last item in changes for PostgreSQL v7.0.3: \nFix for crash of backend, on abort (Tom) \nIs this related? Our scenario is, a frontend java program creates multiple\nconnections to PostgreSQL v7.0.2 attempting to exceed MAXBACKENDS. If the\nprogram crashes(unhandled exception) we're left with hung (or waiting\nprocesses) on FreeBSD equal to the number of successful connections (ps log\nbelow). Subsequent connection attempts are eventually rejected (when\nMAXBACKENDS is reached) with \"Sorry, too many clients already\". I've waited\nfor over an hour to see if these processes get cleaned up, but they don't.\nThe only msgs I could dig up that seem like they _could_ be related are a\ndiscussion between Dirk Niggemann and Tom Lane in Oct/1999 (\"timeouts in\nlibpq- can libpq requests block forever/a very long time?\" - PGTIMEOUT and\nPGCONNTIMEOUT) - I could be way off the mark on this one though...\nThanks for any and all advice.\nPeter Schmidt\n\npostgres@dev-postgres:~ > ps -cl -U postgres\n UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND\n 500 1395 1 0 2 0 4040 2380 select Ss ?? 0:01.17 postgres\n 500 2255 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2256 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2257 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2258 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2259 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2260 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2261 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2262 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2263 1395 0 2 0 4384 2984 sbwait I ?? 
0:00.01 postgres\n 500 2264 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2265 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2266 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2267 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2268 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2269 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2270 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2271 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2272 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2273 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2274 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2275 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2317 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2318 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2319 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2320 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2321 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2322 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2323 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2324 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2325 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2326 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 2327 1395 0 2 0 4384 2984 sbwait I ?? 0:00.01 postgres\n 500 892 890 0 10 0 1636 1412 wait S p2 0:00.37 bash\n 500 979 892 0 28 0 1672 1368 - T p2 0:00.02 psql\n 500 2385 892 0 28 0 440 264 - R+ p2 0:00.00 ps",
"msg_date": "Thu, 16 Nov 2000 20:21:57 -0800",
"msg_from": "\"Schmidt, Peter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hung backends"
}
]
|
[
{
"msg_contents": "I did a CVS checkout today, and the following database creation fails.\n\nIn psql:-\n\nYou are now connected to database template1 as user postgres.\ntemplate1=# select version();\n version\n------------------------------------------------------------------------\n\n PostgreSQL 7.1devel on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\ntemplate1=# create database test;\nCREATE DATABASE\ntemplate1=# \\c test\nFATAL 1: Database \"test\" does not exist in the system catalog.\nPrevious connection kept\n\n>> Now restart the postmaster\n\ntemplate1=# \\c test\nYou are now connected to database test.\n\nIs it just me?\n\nRegards,\nGrant\n\n--\n> Poorly planned software requires a genius to write it\n> and a hero to use it.\n\nGrant Finnemore BSc(Eng) (mailto:[email protected])\nSoftware Engineer Universal Computer Services\nTel (+27)(11)712-1366 PO Box 31266 Braamfontein 2017, South Africa\nCell (+27)(82)604-5536 20th Floor, 209 Smit St., Braamfontein\nFax (+27)(11)339-3421 Johannesburg, South Africa\n\n\n",
"msg_date": "Fri, 17 Nov 2000 08:37:33 +0200",
"msg_from": "Grant Finnemore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Failure to recognise new database"
},
{
"msg_contents": "> Is it just me?\n\nI'm pretty sure I saw something similar on a newly initialized database.\n\nThe sequence was:\n\ninitdb\npostmaster -i -o -F\ncreatedb\npsql\n(database \"thomas\" not found)\npsql template1\n\\d\n(see \"thomas\")\npsql\n(database \"thomas\" found just fine)\n\n - Thomas\n",
"msg_date": "Fri, 17 Nov 2000 07:17:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure to recognise new database"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Is it just me?\n\n> I'm pretty sure I saw something similar on a newly initialized database.\n\nAre you guys running with WAL enabled? If so, this is probably the\nBufferSync issue that Hiroshi thought I broke a couple days ago.\nLet me know...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 02:35:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure to recognise new database "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> >> Is it just me?\n>\n> > I'm pretty sure I saw something similar on a newly initialized database.\n>\n> Are you guys running with WAL enabled? If so, this is probably the\n> BufferSync issue that Hiroshi thought I broke a couple days ago.\n> Let me know...\n\nYes, I am running WAL enabled.\n\n>\n>\n> regards, tom lane\n\nRegards,\nGrant\n\n--\n> Poorly planned software requires a genius to write it\n> and a hero to use it.\n\nGrant Finnemore BSc(Eng) (mailto:[email protected])\nSoftware Engineer Universal Computer Services\nTel (+27)(11)712-1366 PO Box 31266 Braamfontein 2017, South Africa\nCell (+27)(82)604-5536 20th Floor, 209 Smit St., Braamfontein\nFax (+27)(11)339-3421 Johannesburg, South Africa\n\n\n",
"msg_date": "Fri, 17 Nov 2000 11:05:34 +0200",
"msg_from": "Grant Finnemore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failure to recognise new database"
},
{
"msg_contents": "> Are you guys running with WAL enabled? If so, this is probably the\n> BufferSync issue that Hiroshi thought I broke a couple days ago.\n> Let me know...\n\nYes, I too am running with WAL enabled.\n\n - Thomas\n",
"msg_date": "Fri, 17 Nov 2000 13:47:49 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure to recognise new database"
},
{
"msg_contents": "Grant Finnemore <[email protected]> writes:\n> Tom Lane wrote:\n>> Are you guys running with WAL enabled? If so, this is probably the\n>> BufferSync issue that Hiroshi thought I broke a couple days ago.\n>> Let me know...\n\n> Yes, I am running WAL enabled.\n\nOK, I put back the BufferSync call. Sorry 'bout that, chief...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 22:45:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure to recognise new database "
}
]
|
[
{
"msg_contents": "\n> > >> Ewe, so we have this 1/200 second delay for every transaction. Seems\n> > >> bad to me.\n> > >\n> > >I think as long as it becomes a tunable this isn't a bad idea at\n> > >all. Fixing it at 1/200 isn't so great because people not wrapping\n> > >large amounts of inserts/updates with transaction blocks will\n> > >suffer.\n> > \n> > I think the default should probably be no delay, and the documentation\n> > on enabling this needs to be clear and obvious (i.e. hard to miss).\n> \n> I just talked to Tom Lane about this. I think a sleep(0) just before\n> the flush would be the best. It would reliquish the cpu slice if\n> another process is ready to run. If no other backend is running, it\n> probably just returns. If there is another one, it gives it \n> a chance to\n> complete. On return from sleep(0), it can check if it still needs to\n> flush. This would tend to bunch up flushers so they flush only once,\n> while not delaying cases where only one backend is running.\n\nI don't think anything that simply yields the processor works on \nmultiprocessor machines. \n\nThe point is, that fsync is so expensive, that a wait time in the \nmilliseconds is needed, and not micro seconds, to really improve\ntx throughput for many clients.\n\nI support the default to not delay point, since only a very heavily loaded\ndatabase will see a lot of fsyncs in the same millisecond timeslice.\nA dba coping with a very heavily loaded database will need to tune \nanyway, so for him one additional config is no problem.\n\nAndreas\n",
"msg_date": "Fri, 17 Nov 2000 10:09:52 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: RE: [COMMITTERS] pgsql/src/backend/access/transam\n\t( xact.c xlog.c)"
}
]
|
[
{
"msg_contents": "\n> Also, the grammar clause \"LANGUAGE C\" is actually part of the standard, so\n> naming it \"LANGUAGE stdC\" will make it *less* standard. (Not that I buy\n> Informix as being a \"standard\".)\n\nI only quoted Informix, because it is the only one where I know how it works.\nIt might even be, that the Oracle and DB/2 interface is also similar to our \"oldC\",\nI simply don't know.\n\nThe fact, that part of this is already in the standard (like \"language c\") makes me \neven more firm in my opinion, that more research is needed before advertising \"newC\".\n\nAndreas\n",
"msg_date": "Fri, 17 Nov 2000 10:29:37 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: AW: Coping with 'C' vs 'newC' function lang uage nam esh"
}
]
|
[
{
"msg_contents": "At 10:39 16/11/00 +0100, Zeugswetter Andreas SB wrote:\n>\n>Has anybody had time to look at how this is done in DB/2, Oracle ? Philip ?\n>\n\nDon't know about Oracle or DB2, but Dec/RDB has:\n\n Create Function <name1> [Stored Name Is <name2>] (...) Returns <type>;\n [ External Name <name3> ] [ Location <libfile> ] \n [Language {ADA|C|Fortran|Pascal|General] \n General Parameter Style [Not] Variant\n\nwhere <name1> is the function name, the 'Stored Name' relates to\ncross-schema functions, the 'External Name' is the name of the entry point,\n'location' is the file location and the 'Variant' part corresponds to our\n'iscacheable' attribute.\n\nFunctions themselves require no special coding. This is pretty much what I\nthink 'Language C' does for us now - just calling a shared library. There\nis definitely a case to be made for coninuing to support standard calls to\ngeneral libraries unmodified. In Tom's current proposal, I think this will\nbe the case if there is no 'function-info' function available. \n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 17 Nov 2000 21:43:28 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Coping with 'C' vs 'newC' function language\n namesh"
},
{
"msg_contents": "On Fri, Nov 17, 2000 at 09:43:28PM +1100, Philip Warner wrote:\n> At 10:39 16/11/00 +0100, Zeugswetter Andreas SB wrote:\n> >\n> >Has anybody had time to look at how this is done in DB/2, Oracle ? Philip ?\n> >\n> \n> Don't know about Oracle or DB2, but Dec/RDB has:\n> \n\nWell, I don't know 'nuthing about Oracle, but I _did_ sign up for the\nOTN web site some time ago, specifically to get at Oracle docs. ;-)\n\nAfter clicking around there for a bit I came up with this, which is my\ninterpretation of a 'flowchart' style language diagram:\n\nCREATE [OR REPLACE] FUNCTION [<schema>.]<name> \n [( <argument1> [IN|OUT|IN OUT] [NOCOPY] <datatype1> [, <...>] )] \n RETURN <datatype> \n [{AUTHID {CURRENT_USER|DEFINER} | DETERMINISTIC | PARALLEL_ENABLE} <...>]\n{IS|AS} \n { <pl/sql_function_body>| \n LANGUAGE {JAVA NAME '<java_method_name>'| \n C [NAME <c_func_name>] LIBRARY <lib_name> \n\t\t\t [WITH CONTEXT] [PARAMETERS (...)] \n\t\t\t }\n }\n\nThe actual filesystem path to the DLL or .so is defined with a CREATE\nLIBRARY command.\n\nThe WITH CONTEXT bit is a pointer to an opaque structure that the\nunderlying function is supposed to pass on to service routines it might\ncall, for their use.\n\nIt seems the parameters can be defined either in place with the name,\nor with the PARAMETERS keyword.\n\nPhilip's comments about this directly calling into a C shared library\nseem to apply, as well.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Fri, 17 Nov 2000 12:57:37 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Coping with 'C' vs 'newC' function language namesh"
}
]
|
[
{
"msg_contents": "\nSince I see, that Tom has implemented the \"keep a AccessShareLock lock until \ntransaction end\" philosophy I would like to state a protest.\n\nThis is a fundamental change in behavior and I would like to see \na vote on this.\n\nThe one example we already know is:\n\nsession1\t\t\t\tsession2\nbegin work;\t\t\t\tbegin work;\nselect * from tenk1 limit 1;\n\t\t\t\t\tselect * from tenk1 limit 1;\nlock table tenk1; --now waits (why should it ?)\n\t\t\t\t\tlock table tenk1; -- NOTICE: Deadlock detected --> ABORT\n\nI think this is not acceptable in committed read isolation. The AccessShareLock\nneeds to be released after each statement finishes.\n\nThank you\nAndreas\n",
"msg_date": "Fri, 17 Nov 2000 13:15:32 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fundamental change of locking behavior in 7.1"
},
{
"msg_contents": "> -----Original Message-----\n> From: Zeugswetter Andreas SB\n> \n> Since I see, that Tom has implemented the \"keep a AccessShareLock \n> lock until \n> transaction end\" philisophy I would like to state a protest.\n> \n> This is a fundamental change in behavior and I would like to see \n> a vote on this.\n> \n> The one example we already know is:\n> \n> session1\t\t\t\tsession2\n> begin work;\t\t\t\tbegin work;\n> select * from tenk1 limit 1;\n> \t\t\t\t\tselect * from tenk1 limit 1;\n> lock table tenk1; --now waits (why should it ?)\n> \t\t\t\t\tlock table tenk1; -- \n> NOTICE: Deadlock detected --> ABORT\n>\n\nIn PostgreSQL,'lock table' acquires a AccessExclusiveLock by default.\nIMHO ExclusiveLock is sufficient for ordinary purpose. It doesn't conflict\nwith AccessShareLock. Oracle doesn't have AccessExclusive(Share)Lock\nand I've been suspicious why users could acquire the lock explicitly.\n\nComments ?\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Sat, 18 Nov 2000 07:36:49 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Fundamental change of locking behavior in 7.1"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> In PostgreSQL,'lock table' acquires a AccessExclusiveLock by default.\n> IMHO ExclusiveLock is sufficient for ordinary purpose.\n\nPeople who want that kind of lock can get it with LOCK TABLE IN\nEXCLUSIVE MODE. I do not think it's a good idea to change the\ndefault kind of lock acquired by a plain LOCK TABLE, if that's\nwhat you're suggesting ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 20:13:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fundamental change of locking behavior in 7.1 "
}
]
|
[
{
"msg_contents": "I missed the proposal, discussion, implementation, and announcement of\nthe recent changes to make dump/reload more robust (it seems that I was\nunsubscribed from -hackers for a few days, then out of town for a few\nmore :/ Amazing what a single week can bring!\n\nAnyway, there were a couple of fields added to the pg_database table:\ndatistemplate and datallowconn. Tom, you mentioned that these were given\nthose names to reflect current functionality, but in the long run we\nwould likely have something closer to a \"readonly\" attribute.\n\nUpcoming functionality like replication and distributed databases will\nneed the concept of \"readonly\" and/or \"offline\" to help with error\ndetection and recovery. Other databases (I'm familiar with Ingres) have\nthis concept already, with the database being allowed to change its\nstatus to protect itself from further damage, and to protect users from\ntrying to use a damaged database.\n\nThese attributes would also help to manage some kinds of dump/restore\noperations and will probably be helpful for WAL-related\nrollback/rollforward operations (Vadim?).\n\nWould it be reasonable to label these fields for their likely 7.2\nfunctionality, rather than labeling them as they are now? Since this is\nthe first time they are appearing, it would be nice to not have to\nchange the names later...\n\nComments?\n\n - Thomas\n",
"msg_date": "Fri, 17 Nov 2000 15:45:37 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database startup info"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Would it be reasonable to label these fields for their likely 7.2\n> functionality, rather than labeling them as they are now? Since this is\n> the first time they are appearing, it would be nice to not have to\n> change the names later...\n\nI don't have a problem with renaming \"datallowconn\" to \"datoffline\"\n(and reversing its sense) if you feel like doing that --- but please\nnote that these are only field names, they do not constrain whatever\ncommand-level API we might put on top of the thing later. In any\ncase, I'm not sure it's a good idea to call the thing \"datoffline\"\nwhen changing it doesn't actively throw off current connections.\nNames that are intended to be suggestive should be accurately\nsuggestive, IMHO. (Maybe I should've called it datallownewconn.)\n\nAs for datistemplate, that is NOT the same as datreadonly, and when\nwe get around to supporting read-only databases there should be a\nseparate column for that, IMHO. datistemplate is actually a permissions\nbit (are people who are neither superuser nor the database owner\nallowed to clone a particular database?) and has nothing to do with\nwhether the DB is read-only. When we have read-only functionality,\nI'd want to change CREATE DATABASE to require the source to be\nboth datistemplate and datreadonly --- but there are also substantial\nuses for databases that are readonly but not templates. So we need\ntwo bits. (Perhaps readonly status should apply to schemas, not\ndatabases, anyway --- haven't studied that part of the spec yet...)\n\nIn short: I think datistemplate is fine as is. If you want to tweak\nthe name or behavior of datallowconn, go for it (though implementing a\ncommand to set it might be a better plan than just tweaking the field\nname...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 11:16:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database startup info "
}
]
|
[
{
"msg_contents": "\n> > More to the point, I think we have to assume old-style interface if we\n> > see ... LANGUAGE 'C' with no other decoration, because any other\n> > assumption is guaranteed to break all existing user-defined functions.\n> \n> Just successfully loading an old-style C function doesn't\n> guarantee that it works anyway. I pointed out before that the\n> changes due to TOAST require each function that takes\n> arguments of varlen types to expect toasted values. Worst\n> case a dump might reload and anything works fine, but a month\n> later the first toasted value appears and the old-style C\n> function corrupts the data without a single warning.\n> \n> We need to WARN, WARN and explicitly WARN users of selfmade C\n> functions about this in any possible place!\n\nImho the better solution would be, to always detoast values before we pass\nthem to old-style C-functions. And toast the return value when the function returns.\n\nAndreas\n",
"msg_date": "Fri, 17 Nov 2000 17:05:46 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Coping with 'C' vs 'newC' function language names"
}
]
|
[
{
"msg_contents": "\n> > Just successfully loading an old-style C function doesn't\n> > guarantee that it works anyway. I pointed out before that the\n> > changes due to TOAST require each function that takes\n> > arguments of varlen types to expect toasted values.\n> \n> Couldn't the function handler detoast the values before handing them to\n> the function? That would be slower, but it would allow people to continue\n> to use the \"simpler\" interface.\n\nAre there really that many things you can do with a toasted value ?\nOff hand I can only think of equality functions. All others should need \na detoasted value anyways.\n\nAndreas\n",
"msg_date": "Fri, 17 Nov 2000 18:02:19 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Coping with 'C' vs 'newC' function language names"
}
]
|
[
{
"msg_contents": "I backed up my database from Postgres 6.5.3 and migrated to 7.0.2\na few months ago. For some reason, data was lost in the\ntransition. I've finally pinned it down to the attached file (abridged\nto point out the problem).\n\nIt looks like two things happened in the backup. First, when I move from\n'G' to 'F' in the names column, I seem to lose the column called\n'dsp_chan'. Second, the double quotes around the float_4 array called\n'spike_hist' aren't included.\n\nI'm not sure if the double quotes are necessary, but the missing column\nis probably a problem. I added this column after the database was\ncreated by using 'alter table ellipse_cell_proc add column dsp_chan' and\nthen put it in the correct position by using:\n\nSELECT name, arm, rep, cycle, hemisphere, area, cell, dsp_chan,\nspike_hist INTO xxx FROM ellipse_cell_proc;\nDROP TABLE ellipse_cell_proc;\nALTER TABLE xxx RENAME TO ellipse_cell_proc;\n\nCan anyone explain what went wrong with the backup or where I erred\nadding the column?\n\nThanks.\n-Tony",
"msg_date": "Fri, 17 Nov 2000 11:27:32 -0800",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird backup file"
}
]
|
[
{
"msg_contents": "Hey Guys,\n\ndo you know of any intermediate to advanced Postgres Courses in the UK prefereably in London and if not would anyone with advanced knowledge be interested in setting some up.\n\nThanks,\nAbe",
"msg_date": "Fri, 17 Nov 2000 23:26:10 -0000",
"msg_from": "\"Abe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Courses"
}
]
|
[
{
"msg_contents": "At present the Unix socket's location is hard-coded as /tmp.\n\nAs a result of a bug report, I have moved it in the Debian package to \n/var/run/postgresql/. (The bug was that tmpreaper was deleting it and\nthus blocking new connections.)\n\nI suppose that we cannot assume that /var/run exists across all target\nsystems, so could the socket location be made a configurable parameter\nin 7.1?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For by grace are ye saved through faith; and that not\n of yourselves. It is the gift of God; not of works, \n lest any man should boast.\" Ephesians 2:8,9 \n\n\n",
"msg_date": "Sat, 18 Nov 2000 00:31:30 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "location of Unix socket"
},
{
"msg_contents": "* Oliver Elphick <[email protected]> [001117 16:41] wrote:\n> At present the Unix socket's location is hard-coded as /tmp.\n> \n> As a result of a bug report, I have moved it in the Debian package to \n> /var/run/postgresql/. (The bug was that tmpreaper was deleting it and\n> thus blocking new connections.)\n> \n> I suppose that we cannot assume that /var/run exists across all target\n> systems, so could the socket location be made a configurable parameter\n> in 7.1?\n\nWhat about X sockets and ssh-agent sockets, and so on?\n\nWhere's the source to this thing? :)\n\nIt would make more sense to fix tempreaper to ignore non regular\nfiles.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Fri, 17 Nov 2000 16:49:43 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "On Fri, Nov 17, 2000 at 04:49:43PM -0800, Alfred Perlstein wrote:\n> * Oliver Elphick <[email protected]> [001117 16:41] wrote:\n> > At present the Unix socket's location is hard-coded as /tmp.\n> > \n> > As a result of a bug report, I have moved it in the Debian package to \n> > /var/run/postgresql/. (The bug was that tmpreaper was deleting it and\n> > thus blocking new connections.)\n> > \n> > I suppose that we cannot assume that /var/run exists across all target\n> > systems, so could the socket location be made a configurable parameter\n> > in 7.1?\n> \n> What about X sockets and ssh-agent sockets, and so on?\n> Where's the source to this thing? :)\n> \n> It would make more sense to fix tempreaper to ignore non regular\n> files.\n\nX sockets are in subdirectories, e.g. /tmp/.X11-unix/X0.\n/tmp is a bad place for this stuff anyway.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 17 Nov 2000 17:00:16 -0800",
"msg_from": "Nathan Myers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "The 7.1 code will make the socket location configurable.\n\n> At present the Unix socket's location is hard-coded as /tmp.\n> \n> As a result of a bug report, I have moved it in the Debian package to \n> /var/run/postgresql/. (The bug was that tmpreaper was deleting it and\n> thus blocking new connections.)\n> \n> I suppose that we cannot assume that /var/run exists across all target\n> systems, so could the socket location be made a configurable parameter\n> in 7.1?\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"For by grace are ye saved through faith; and that not\n> of yourselves. It is the gift of God; not of works, \n> lest any man should boast.\" Ephesians 2:8,9 \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Nov 2000 20:28:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Nathan Myers <[email protected]> writes:\n>> * Oliver Elphick <[email protected]> [001117 16:41] wrote:\n>>>> could the socket location be made a configurable parameter\n>>>> in 7.1?\n\n> /tmp is a bad place for this stuff anyway.\n\nThere have been *very long* discussions of this issue in the past,\nsee for example the threads \"flock patch breaks things here\" and\n\"postmaster locking issues\" in pghackers around 8/31/98 and 10/10/98.\nCould we have some review of the archives before people go off on\na new thread?\n\nThe bottom line is that the location of the socket file is a fundamental\npart of the client/server protocol. You can't just move it around on\na whim, or your clients will be unable to find your server.\n\nWe have just accepted a patch that allows explicit runtime specification\nof the socket-file path. (I've got severe doubts about it, because of\nthis issue --- but at least it doesn't affect people who don't use it.)\n\nBut if the socket-file path becomes a site-configuration item then we\nwill see a lot of complaints. Look at the frequency with which we see\npeople asking about \"Undefined variable client_encoding\" notices ---\nthat proves that those folk are using clients and servers that weren't\nconfigured identically. That notice is at least pretty harmless ...\nbut if the configuration determines whether or not you can even contact\nthe server, it's not harmless.\n\nI agree that /tmp was a stupid place to put the files, but we've got to\ntread very lightly about moving them, or we'll create worse problems\nthan we solve.\n\nBTW: a prediction, Oliver: you *will* live to regret making a\ndistribution-specific change in the socket file location. Dunno\nhow long it will take, but I foresee bug reports from this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2000 21:04:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> The 7.1 code will the socket location configurable.\n\nBtw., are you still about to change it to the directory rather than the\nfile? I'd suggest that you change the GUC parameter to\n\"unix_socket_directory\", to be consistent in naming with related\nparameters.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 18 Nov 2000 12:03:13 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Yes, I will make the change.\n\n> Bruce Momjian writes:\n> \n> > The 7.1 code will the socket location configurable.\n> \n> Btw., are you still about to change it to the directory rather than the\n> file? I'd suggest that you change the GUC parameter to\n> \"unix_socket_directory\", to be consistent in naming with related\n> parameters.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Nov 2000 09:32:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > The 7.1 code will the socket location configurable.\n> \n> Btw., are you still about to change it to the directory rather than the\n> file? I'd suggest that you change the GUC parameter to\n> \"unix_socket_directory\", to be consistent in naming with related\n> parameters.\n\nDone. I did not change PQunixsocket or the unixsocket PQconnectdb\nconnection option. Should they be changed too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Nov 2000 23:15:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Done. I did not change PQunixsocket or the unixsocket PQconnectdb\n> connection option. Should they be changed too?\n\nThey should be removed because PQhost does this now.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 14:43:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Done. I did not change PQunixsocket or the unixsocket PQconnectdb\n> > connection option. Should they be changed too?\n> \n> They should be removed because PQhost does this now.\n\nI assume you mean PQunixsocket. As part of the database connection, if\npghost begins with a slash, the value is assigned to pgunixsocket and\npghost is cleared. Here is the code:\n\n /* ----------\n * Allow unix socket specification in the host name\n * ----------\n */\n if (conn->pghost && conn->pghost[0] == '/') \n {\n if (conn->pgunixsocket)\n free(conn->pgunixsocket);\n conn->pgunixsocket = conn->pghost; \n conn->pghost = NULL;\n }\n\nAm I handling this properly? I hate to be dragging around the unix\nsocket directory name in pghost for too long and hate to be propogating\nthe slash test throughout the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Nov 2000 15:09:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Well, actually, unixsocket can be specified by PQconnectdb. Sounds like\nit is a big mess. Care to tame it? I am heading to Japan tomorrow and\ndon't want to leave it 1/2 done.\n\n\n> Bruce Momjian writes:\n> \n> > Am I handling this properly? I hate to be dragging around the unix\n> > socket directory name in pghost for too long and hate to be propogating\n> > the slash test throughout the code.\n> \n> ISTM that you could just do this in connectDBStart() where it actually\n> decides on AF_UNIX. It's just a different place to do it and you don't\n> have to maintain it in two different places (PQconnectdb-style and\n> PQsetdbLogin-style).\n> \n> For symmetry PQhost() should return what was put in as host. Since you\n> cannot put in a unix socket as a separate connection parameter there\n> cannot be a function PQunixsocket to get one out. In fact, ISTM there\n> should not be anything that's explicitly called 'unixsocket'.\n> \n> I don't like the code in fe-connect.c one bit, it's way messed up. \n> Evidently there's even some code in there that allows you to do this:\n> \n> $ psql tcp:postgresql://localhost:5432/peter\n> \n> which is certainly a cool idea, only that it ends up with\n> \n> psql: connectDBStart() -- unknown hostname: J\"J\"@st\n> \n> Eventually I think the URL-style is the way to go, especially with SSL\n> becoming mainline, so I'd hate to publish too many new functions of\n> questionable value for a feature which is not very well thought out yet.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Nov 2000 16:00:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Am I handling this properly? I hate to be dragging around the unix\n> socket directory name in pghost for too long and hate to be propogating\n> the slash test throughout the code.\n\nISTM that you could just do this in connectDBStart() where it actually\ndecides on AF_UNIX. It's just a different place to do it and you don't\nhave to maintain it in two different places (PQconnectdb-style and\nPQsetdbLogin-style).\n\nFor symmetry PQhost() should return what was put in as host. Since you\ncannot put in a unix socket as a separate connection parameter there\ncannot be a function PQunixsocket to get one out. In fact, ISTM there\nshould not be anything that's explicitly called 'unixsocket'.\n\nI don't like the code in fe-connect.c one bit, it's way messed up. \nEvidently there's even some code in there that allows you to do this:\n\n$ psql tcp:postgresql://localhost:5432/peter\n\nwhich is certainly a cool idea, only that it ends up with\n\npsql: connectDBStart() -- unknown hostname: J\"J\"@st\n\nEventually I think the URL-style is the way to go, especially with SSL\nbecoming mainline, so I'd hate to publish too many new functions of\nquestionable value for a feature which is not very well thought out yet.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 27 Nov 2000 22:02:33 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Am I handling this properly? I hate to be dragging around the unix\n> socket directory name in pghost for too long and hate to be propogating\n> the slash test throughout the code.\n\nIt's probably cleanest to do that the way you are doing it. However,\none could argue we should make PQhost() return\n\tpghost ? pghost : pgunixsocket\nwhich'd make the external behavior compatible with the way one specifies\nthe connection.\n\nBasically, the idea was to *not* have a distinct unixsocket spec\nanywhere in libpq's external API, so that existing apps wouldn't need\na rewrite to support this feature. Keeping unixsocket separate inside\nthe library is a good idea, but it's independent of the API.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Nov 2000 16:04:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Am I handling this properly? I hate to be dragging around the unix\n> > socket directory name in pghost for too long and hate to be propogating\n> > the slash test throughout the code.\n> \n> It's probably cleanest to do that the way you are doing it. However,\n> one could argue we should make PQhost() return\n> \tpghost ? pghost : pgunixsocket\n> which'd make the external behavior compatible with the way one specifies\n> the connection.\n> \n> Basically, the idea was to *not* have a distinct unixsocket spec\n> anywhere in libpq's external API, so that existing apps wouldn't need\n> a rewrite to support this feature. Keeping unixsocket separate inside\n> the library is a good idea, but it's independent of the API.\n\nDone. New code:\n\n return conn->pghost ? conn->pghost : conn->pgunixsocket;\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Nov 2000 16:06:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I don't like the code in fe-connect.c one bit, it's way messed up. \n\nYes. We've accepted several extremely questionable (not to mention\npoorly documented or completely undocumented) \"features\" in there\nrecently. If I'd been paying more attention I would've voted against\nboth the URL patch and the SERVICE patch, as I think they're both\nless than fully baked --- and I don't see word one about either in\nthe libpq SGML documentation.\n\nSomeone should probably review the history and either fix or remove\nthe more dubious patches, before we get stuck having to be\nbackwards-compatible with bad ideas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Nov 2000 16:20:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > I don't like the code in fe-connect.c one bit, it's way messed up.\n>\n> Yes. We've accepted several extremely questionable (not to mention\n> poorly documented or completely undocumented) \"features\" in there\n> recently. If I'd been paying more attention I would've voted against\n> both the URL patch and the SERVICE patch, as I think they're both\n> less than fully baked --- and I don't see word one about either in\n> the libpq SGML documentation.\n>\n> Someone should probably review the history and either fix or remove\n> the more dubious patches, before we get stuck having to be\n> backwards-compatible with bad ideas.\n\nI'm going to disable the URL patch, since it doesn't seem to work and\nbreaks legitimate uses of database names with funny characters. The\nservice patch seemed kind of useful, but since it's not documented and I\ndon't feel like finding out, I think we can let it go the SSL way, i.e.,\nsort out for next release.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 30 Nov 2000 19:05:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'm going to disable the URL patch, since it doesn't seem to work and\n> breaks legitimate uses of database names with funny characters. The\n> service patch seemed kind of useful, but since it's not documented and I\n> don't feel like finding out, I think we can let it go the SSL way, i.e.,\n> sort out for next release.\n\nSounds like a plan. The service patch at least doesn't look like it\nwill cause surprises for anyone who doesn't know about it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Nov 2000 13:07:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: location of Unix socket "
}
]
|
[
{
"msg_contents": " Date: Friday, November 17, 2000 @ 22:55:51\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n from hub.org:/home/projects/pgsql/tmp/cvs-serv20150\n\nModified Files:\n\tcash.c \n\n----------------------------- Log Message -----------------------------\n\nModify locale code to defend against possibility that it was compiled\nwith an -fsigned-char/-funsigned-char setting opposite to that of libc,\nthus breaking the convention that 'undefined' values returned by\nlocaleconv() are represented by CHAR_MAX. It is sheer stupidity that\ngcc even has such a switch --- it's just as bad as the structure-packing\ncontrol switches offered by the more brain-dead PC compilers --- and\nas for the behavior of Linux distribution vendors who set RPM_OPT_FLAGS\ndifferently from the way they built libc, well, words fail me...\n",
"msg_date": "Fri, 17 Nov 2000 22:55:51 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/utils/adt (cash.c)"
},
{
"msg_contents": "> Modify locale code to defend against possibility that it was compiled\n> with an -fsigned-char/-funsigned-char setting opposite to that of libc,\n> thus breaking the convention that 'undefined' values returned by\n> localeconv() are represented by CHAR_MAX. It is sheer stupidity that\n> gcc even has such a switch --- it's just as bad as the structure-packing\n> control switches offered by the more brain-dead PC compilers --- and\n> as for the behavior of Linux distribution vendors who set RPM_OPT_FLAGS\n> differently from the way they built libc, well, words fail me...\n\nWhich distros would these be? I know that Mandrake chooses some mutually\nexclusive flags (-On and -fast-math) but am not sure which other ones\nare inconsistant...\n\n - Thomas\n",
"msg_date": "Mon, 20 Nov 2000 07:04:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/adt (cash.c)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> as for the behavior of Linux distribution vendors who set RPM_OPT_FLAGS\n>> differently from the way they built libc, well, words fail me...\n\n> Which distros would these be? I know that Mandrake chooses some mutually\n> exclusive flags (-On and -fast-math) but am not sure which other ones\n> are inconsistant...\n\nThe particular problem I was having was with LinuxPPC 2000. gcc's\ndefault behavior on PPC is -funsigned-char, and that seems to be the\nway that libc was built in that distro. But /usr/lib/rpm/rpmrc sets\nRPM_OPT_FLAGS to \"-fsigned-char -O2\". (The -O2 wreaks havoc with\nPostgres too, pre-fmgr-rewrite, but at least we knew about that effect.)\n\nOn closer examination, I think the blame lies with the RPM people and\nnot with LinuxPPC per se, because /usr/lib/rpm/rpmrc comes straight\nfrom the RPM distro. Seems to me that libc *should* be built with\nthe default char-signedness for the platform, because otherwise programs\nbuilt outside the RPM environment will be broken. When RPM attempts to\nforce a non-default signedness for programs built in the RPM\nenvironment, the only possible consequence is that someone or other gets\nbroken --- either RPM-based programs or non-RPM-based-programs, take\nyour pick. Ergo, it's RPM that's broken.\n\nThat same file has a bunch of apparently non-default compiler options\nfor other platforms besides PPC (for your amusement, I attach the\nrelevant lines from rpm-3.0.5 below). 
I wonder how many of those are\nequally misguided...\n\n\t\t\tregards, tom lane\n\n# Values for RPM_OPT_FLAGS for various platforms\n\noptflags: i386 -O2 -m486 -fno-strength-reduce\noptflags: i486 -O2 -march=i486\noptflags: i586 -O2 -march=i586\noptflags: i686 -O2 -march=i686\noptflags: athlon -O2 -march=athlon\noptflags: ia64 -O2\n\n# XXX Please note that -mieee has been added in rpm-3.0.5.\noptflags: alpha -O2 -mieee\n\noptflags: sparc -O2 -m32 -mtune=ultrasparc\noptflags: sparcv9 -O2 -m32 -mcpu=ultrasparc\noptflags: sparc64 -O2 -m64 -mcpu=ultrasparc\noptflags: m68k -O2 -fomit-frame-pointer\noptflags: ppc -O2 -fsigned-char\noptflags: parisc -O2 -mpa-risc-1-0\noptflags: hppa1.0 -O2 -mpa-risc-1-0\noptflags: hppa1.1 -O2 -mpa-risc-1-0\noptflags: hppa1.2 -O2 -mpa-risc-1-0\noptflags: hppa2.0 -O2 -mpa-risc-1-0\noptflags: mipseb -O2\noptflags: mipsel -O2\noptflags: armv4b -O2 -fsigned-char -fomit-frame-pointer\noptflags: armv4l -O2 -fsigned-char -fomit-frame-pointer\noptflags: atarist -O2 -fomit-frame-pointer\noptflags: atariste -O2 -fomit-frame-pointer\noptflags: ataritt -O2 -fomit-frame-pointer\noptflags: falcon -O2 -fomit-frame-pointer\noptflags: atariclone -O2 -fomit-frame-pointer\noptflags: milan -O2 -fomit-frame-pointer\noptflags: hades -O2 -fomit-frame-pointer\n\n",
"msg_date": "Mon, 20 Nov 2000 10:43:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "RPM's -fsigned-char (Re: [COMMITTERS] pgsql/src/backend/utils/adt\n\t(cash.c))"
},
{
"msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <[email protected]> writes:\n> >> as for the behavior of Linux distribution vendors who set RPM_OPT_FLAGS\n> >> differently from the way they built libc, well, words fail me...\n \n> > Which distros would these be? I know that Mandrake chooses some mutually\n> > exclusive flags (-On and -fast-math) but am not sure which other ones\n> > are inconsistant...\n \n> The particular problem I was having was with LinuxPPC 2000. gcc's\n> default behavior on PPC is -funsigned-char, and that seems to be the\n> way that libc was built in that distro. But /usr/lib/rpm/rpmrc sets\n> RPM_OPT_FLAGS to \"-fsigned-char -O2\". (The -O2 wreaks havoc with\n> Postgres too, pre-fmgr-rewrite, but at least we knew about that effect.)\n \n> On closer examination, I think the blame lies with the RPM people and\n> not with LinuxPPC per se, because /usr/lib/rpm/rpmrc comes straight\n> from the RPM distro. Seems to me that libc *should* be built with\n\nIt's more of a combination -- if the LinuxPPC people are overriding the\ndefault RPM_OPT_FLAGS with their own stuff for libc, that's not an RPM\nproblem.\n\nOTOH, RPM_OPT_FLAGS for that compiler on PPC should not have -O2, if -O2\ncauses other packages on that platform to barf. Of course, IIRC, we\nhave historically had problems with -O2 on some architectures pre-fmgr\nrewrite. So the problem lies with all three: it's our problem -O2\ncauses problems; it's LinuxPPC's problem that libc is compiled with the\nnon-RPM_OPT_FLAGS char signage; and it's RPM's problem that\nRPM_OPT_FLAGS has a non-default char signage for PPC. So, the short\nterm fix is to patch our spec file (which we've done for PPC).\n\nThe person to inform of generic RPM issues is Jeff Johnson\n([email protected]), aka Mr. Rpm.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 20 Nov 2000 15:05:28 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM's -fsigned-char (Re: [COMMITTERS]\n\tpgsql/src/backend/utils/adt (cash.c))"
}
]
|
[
{
"msg_contents": "\n$ psql -U\npsql: option requires an argument -- U\nTry -? for help.\n$ psql -?\npsql: No match.\n$\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 19 Nov 2000 14:31:15 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql: anyone ever notice?"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> $ psql -?\n> psql: No match.\n\nOdd --- I get the right thing:\n\n$ psql -?\nThis is psql, the PostgreSQL interactive terminal.\n\nUsage:\n psql [options] [dbname [username]]\n\nOptions:\n -a Echo all input from script\n -A Unaligned table output mode (-P format=unaligned)\n[etc etc]\n\nSomething different about long-option handling on your platform, maybe?\nWhat is your platform, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2000 14:44:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice? "
},
{
"msg_contents": "On Sun, 19 Nov 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > $ psql -?\n> > psql: No match.\n> \n> Odd --- I get the right thing:\n> \n> $ psql -?\n> This is psql, the PostgreSQL interactive terminal.\n\nIt has something to do with certain shell's expansion of ? - for the\nlongest time I'd have to do psql -\\? to get it to work.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Sun, 19 Nov 2000 14:14:34 -0600 (CST)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice? "
},
{
"msg_contents": "Vince Vielhaber writes:\n\n> $ psql -U\n> psql: option requires an argument -- U\n> Try -? for help.\n> $ psql -?\n> psql: No match.\n\nFriggin' csh. Try 'psql -\\?'.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 19 Nov 2000 21:16:38 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice?"
},
{
"msg_contents": "It's a shell thing: Vince is running csh (or a derivative thereof)\nwhile Tom (and I) are running some sort of Bourne derived shell.\n\nVince, try:\n\npsql -\\?\n\nWhich works more universally.\n\nRoss\n\nOn Sun, Nov 19, 2000 at 02:44:01PM -0500, Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n> > $ psql -?\n> > psql: No match.\n> \n> Odd --- I get the right thing:\n> \n> $ psql -?\n> This is psql, the PostgreSQL interactive terminal.\n> \n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Sun, 19 Nov 2000 14:21:08 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice?"
},
{
"msg_contents": "all you guy unix?\nunder some shells, both * and ? are expanded to matched file names in current directory by shell,\nfor example FreeBSD's csh.\n\nyou should use psql -\\? to get help screen, this sucks, \n\"?\" shouldn't be used as a help screen argument.\n\nRegards,\nXuYifeng\n\n----- Original Message ----- \nFrom: Vince Vielhaber <[email protected]>\nTo: <[email protected]>\nSent: Monday, November 20, 2000 3:31 AM\nSubject: [HACKERS] psql: anyone ever notice?\n\n\n> \n> $ psql -U\n> psql: option requires an argument -- U\n> Try -? for help.\n> $ psql -?\n> psql: No match.\n> $\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n",
"msg_date": "Mon, 20 Nov 2000 10:47:32 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice?"
},
{
"msg_contents": "On Mon, Nov 20, 2000 at 10:47:32AM +0800, xuyifeng wrote:\n> \"?\" shouldn't be used as a help screen argument.\n\ngenerally, code doesn't explicitly look for a '?'.\n\nrather, the code notes that the character is not mapped to any argument,\nand prints out a usage statement.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n",
"msg_date": "Sun, 19 Nov 2000 21:56:06 -0500",
"msg_from": "Jim Mercer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice?"
},
{
"msg_contents": "\"xuyifeng\" <[email protected]> writes:\n> you should use psql -\\? to get help screen, this sucks, \n> \"?\" shouldn't be used as a help screen argument.\n\nI tend to agree, given csh's unhelpful (ahem) behavior.\n\nAt the very least, it seems that \"psql -h\" ought to produce\nthe full help message, not just a complaint that the syntax\nis wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2000 22:00:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice? "
},
{
"msg_contents": "Vince Vielhaber writes:\n\n> $ psql -U\n> psql: option requires an argument -- U\n> Try -? for help.\n> $ psql -?\n> psql: No match.\n> $\n\nIt advertises '--help' now. (And yes, '--help' works everywhere.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 20:34:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: anyone ever notice?"
}
]
|
[
{
"msg_contents": "If you care about the nitty-gritty details, see\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\nparticularly the final section \"Telling the difference between old- and\nnew-style functions\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2000 17:16:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Final proposal for resolving C-vs-newC issue"
},
{
"msg_contents": "At 17:16 19/11/00 -0500, Tom Lane wrote:\n>If you care about the nitty-gritty details, see\n>http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\n>particularly the final section \"Telling the difference between old- and\n>new-style functions\".\n\nThere is no mention of the handling of toasted values for old C functions.\nDoes this mean that it is possible for crashes to occur after a dump/load +\nupdates?\n\nSince I'd guess we will keep the old C style interface in perpetuity, since\nit allows calling of arbitrary object libraries, I think it would be very\nsensible to detoast all parameters prior to calling a 'raw'(?) function.\nPerhaps, if this is too expensive, we can add a new attribute to prevent\ndetoasting if necessary, perhaps 'eatstoast' ;-).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 20 Nov 2000 12:00:09 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final proposal for resolving C-vs-newC issue"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> At 17:16 19/11/00 -0500, Tom Lane wrote:\n>> http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\n\n> There is no mention of the handling of toasted values for old C functions.\n\nDid you not read to the end?\n\n: To allow old-style dynamic functions to work safely on toastable datatypes,\n: the handler for old-style functions will automatically detoast toastable\n: arguments before passing them to the old-style function. A new-style\n: function is expected to take care of toasted arguments by using the\n: standard argument access macros defined above.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2000 21:31:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final proposal for resolving C-vs-newC issue "
},
{
"msg_contents": "At 21:31 19/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> At 17:16 19/11/00 -0500, Tom Lane wrote:\n>>>\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\n>\n>> There is no mention of the handling of toasted values for old C functions.\n>\n>Did you not read to the end?\n\n'fraid not...8-(. Thanks.\n\n>: To allow old-style dynamic functions to work safely on toastable datatypes,\n>: the handler for old-style functions will automatically detoast toastable\n>: arguments before passing them to the old-style function. A new-style\n>: function is expected to take care of toasted arguments by using the\n>: standard argument access macros defined above.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 20 Nov 2000 18:16:34 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final proposal for resolving C-vs-newC issue "
}
]
|
[
{
"msg_contents": "I decided that perhaps it was time to toss the current OpenACS datamodel\nat PG 7.1 to see what would happen (it's a bit shy of 10K lines, including\ncomments and white space).\n\nAll went well except for a handful of occurances of the following error:\n\nERROR: SS_finalize_plan: plan shouldn't reference subplan's variable\n\nThe code in question does something like:\n\ninsert into foo (key, name)\nselect (nextval('key_sequence', 'some_value')\nwhere not exists (select 1 from foo where name='some_value');\n\nThe key field is the primary key. The name field is constrained unique.\nThe check is to avoid getting a duplicate insertion error if the name\nisn't unique. Since this is a script which loads initial data into\nthe system, in essence this check allows the script to avoid flooding the\nuser with errors if they run it twice.\n\n From the error message it would appear that perhaps the plan for the insert\nis referencing table \"foo\" from the subselect, and someone doesn't think\nthat's\nkosher.\n\nHere's the actual sequence of events with a self-contained example at the end.\n\nOh, BTW - outer joins ROCK!\n\n[pgtest@gyrfalcon pgtest]$ \n[pgtest@gyrfalcon pgtest]$ createdb test\nCREATE DATABASE\n[pgtest@gyrfalcon pgtest]$ createlang plpgsql test\n[pgtest@gyrfalcon pgtest]$ psql test -f t.sql\npsql:t.sql:1: NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'users_pkey' for table 'u\nsers'\nCREATE\npsql:t.sql:19: NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'user_group_types_pkey' \nfor table 'user_group_types'\nCREATE\nCREATE\npsql:t.sql:46: NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'user_groups_pkey' for t\nable 'user_groups'\npsql:t.sql:46: NOTICE: CREATE TABLE/UNIQUE will create implicit index\n'user_groups_short_name_key' \nfor table 'user_groups'\npsql:t.sql:46: NOTICE: CREATE TABLE will create implicit trigger(s) for\nFOREIGN KEY check(s)\nCREATE\nCREATE\nCREATE\nINSERT 40467 1\nINSERT 40468 
1\npsql:t.sql:83: ERROR: SS_finalize_plan: plan shouldn't reference subplan's\nvariable\n[pgtest@gyrfalcon pgtest]$ more t.sql\ncreate table users (user_id integer primary key);\n\n\ncreate table user_group_types (\n group_type varchar(20) primary key,\n pretty_name varchar(50) not null,\n pretty_plural varchar(50) not null,\n approval_policy varchar(30) not null,\n default_new_member_policy varchar(30) default 'open' not null,\n group_module_administration varchar(20) default 'none',\n has_virtual_directory_p char(1) default 'f'\ncheck(has_virtual_directory_p in ('t','f\n')),\n group_type_public_directory varchar(200),\n group_type_admin_directory varchar(200),\n group_public_directory varchar(200),\n group_admin_directory varchar(200)\n constraint group_type_module_admin_check check (\n (group_module_administration is not null)\n and (group_module_administration in ('full', 'enabling', 'none')))\n);\n \ncreate sequence user_group_sequence;\ncreate table user_groups (\n group_id integer primary key,\n group_type varchar(20) not null references user_group_types,\n group_name varchar(100),\n short_name varchar(100) unique not null,\n admin_email varchar(100),\n registration_date datetime not null,\n creation_user integer not null references users(user_id),\n creation_ip_address varchar(50) not null,\n approved_p char(1) check (approved_p in ('t','f')),\n active_p char(1) default 't' check(active_p in ('t','f')),\n existence_public_p char(1) default 't' check\n(existence_public_p in ('t','f')),\n new_member_policy varchar(30) default 'open' not null,\n spam_policy varchar(30) default 'open' not null,\n constraint user_groups_spam_policy_check check(spam_policy in\n('open','closed','wait')),\n email_alert_p char(1) default 'f' check (email_alert_p in\n('t','f')),\n multi_role_p char(1) default 'f' check (multi_role_p in ('t','f')),\n group_admin_permissions_p char(1) default 'f' check\n(group_admin_permissions_p in ('t','f'\n)),\n index_page_enabled_p char(1) default 
'f' check\n(index_page_enabled_p in ('t','f')),\n body lztext,\n html_p char(1) default 'f' check (html_p in\n('t','f')),\n modification_date datetime,\n modifying_user integer references users,\n parent_group_id integer references user_groups(group_id)\n);\n-- index parent_group_id to make parent lookups quick!\ncreate index user_groups_parent_grp_id_idx on user_groups(parent_group_id);\n\ncreate function user_group_add (varchar, varchar, varchar, varchar)\nRETURNS integer AS '\nDECLARE\n v_group_type alias for $1;\n v_pretty_name alias for $2;\n v_short_name alias for $3;\n v_multi_role_p alias for $4;\n v_system_user_id integer; \nBEGIN\n v_system_user_id := 1;\n -- create the actual group\n insert into user_groups \n (group_id, group_type, short_name, group_name, creation_user,\ncreation_ip_address, approved_p,\n existence_public_p, new_member_policy, multi_role_p)\n select nextval(''user_group_sequence''), v_group_type, v_short_name,\n v_pretty_name, v_system_user_id, ''0.0.0.0'', ''t'', ''f'', ''closed'',\n v_multi_role_p\n where not exists (select * from user_groups\n where upper(short_name) = upper(v_short_name));\n\n RETURN 1;\nend;' language 'plpgsql';\n\ninsert into users (user_id) values(1);\n\ninsert into user_group_types\n (group_type, pretty_name, pretty_plural, approval_policy)\nvalues\n ('group', 'Group', 'Groups', 'open');\n\nselect user_group_add('group', 'shortname', 'prettyname', 'f');\n\n[pgtest@gyrfalcon pgtest]$ \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 19 Nov 2000 18:44:27 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG 7.1 pre-beta bug ..."
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> All went well except for a handful of occurances of the following error:\n> ERROR: SS_finalize_plan: plan shouldn't reference subplan's variable\n\nThis is probably my fault --- will look at it.\n\nAppreciate the self-contained example...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2000 22:05:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.1 pre-beta bug ... "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> All went well except for a handful of occurances of the following error:\n> ERROR: SS_finalize_plan: plan shouldn't reference subplan's variable\n\nFixed, I believe. Your test case now gives\n\nregression=# select user_group_add('group', 'shortname', 'prettyname', 'f');\nERROR: ExecAppend: Fail to add null value in not null attribute registration_date\n\nbut that's correct AFAICT, and 7.0.2 agrees...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 19:20:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.1 pre-beta bug ... "
},
{
"msg_contents": "At 07:20 PM 11/20/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> All went well except for a handful of occurances of the following error:\n>> ERROR: SS_finalize_plan: plan shouldn't reference subplan's variable\n>\n>Fixed, I believe. Your test case now gives\n>\n>regression=# select user_group_add('group', 'shortname', 'prettyname', 'f');\n>ERROR: ExecAppend: Fail to add null value in not null attribute\nregistration_date\n>\n>but that's correct AFAICT, and 7.0.2 agrees...\n\nYeah, I boiled down my example a bit too far for the case where the RDBMS\nworks, apparently :) There's probably a trigger to fill the registration\ndate that I stripped out, something like that.\n\nThanks ... I'll not be able to get back to testing until later this week (I'm\nbusy with a client site, Oracle-based, boo-hoo).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 20 Nov 2000 16:45:21 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.1 pre-beta bug ... "
}
]
|
[
{
"msg_contents": "Is there a good reason why the attribute name limit is 31 chars? It would\nbe nice to extend it to 255 characters or so...\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Mon, 20 Nov 2000 11:36:39 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Attribute name limit"
}
]
|
[
{
"msg_contents": "Hi,\n\nI was looking at the ALTER TABLE DROP CONSTRAINT bit of PostgreSQL, and I\nstarted thinking about trying to implement it (as a bit of mental exercise).\n(And because it's highly annoying not being able to remove the damn things!\n\nPlease comment on all of this, and tell me if it's going to be over my head!\n\nI'm just trying to understand some stuff:\n\n* I assume that the command is supposed to allow the dropping of unique,\nprimary, foreign key and check constraints? Should 'not null' constraints\nalso be included here?\n\n* Unique constraints are implemented as indicies, so dropping a unique\nconstraint maps to dropping the relevant index.\n\n* Primary keys are implemented...how?? I can't for the life of me find\nwhere 'create table' occurs in the source code!\n\n* Foreign keys are implemented as two triggers? It seems that all that is\nrequired is the removal of these two triggers. I haven't checked carefully\nto see _exactly_ what the triggers are doing. I see there is one associated\nwith the 'one' table and one with the 'many' table. It seems that dropping\na foreign key constraint should be a case of removing the two triggers?\n\n* Check constraints. I seem to recall seeing code that implements check\nconstraints as triggers, but I wrote a query that retrieves all triggers\nassociated with a particular class and no check triggers were returned. How\nare check constraints implemented? How would you drop a check constraint?\n\n* Not null constraints. This seems to be a 'for completeness' constraint -\nI presume it's implemented as part of the attribute definition? I guess it\nwould be relatively straightforward to drop a 'not null' constraint,\nassuming they are actually named in there somewhere.\n\nWould anyone be able to correct my understanding of these issues?\n\nAlso - is there some good reason why this hasn't been implemented yet? Is\nthere some subtle reason, or is it just that no-one's bothered?\n\nThanks,\n\nChris\n\n",
"msg_date": "Mon, 20 Nov 2000 15:07:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table/Column Constraints"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> * I assume that the command is supposed to allow the dropping of unique,\n> primary, foreign key and check constraints? Should 'not null' constraints\n> also be included here?\n\nSure.\n\n> * Unique constraints are implemented as indicies, so dropping a unique\n> constraint maps to dropping the relevant index.\n\nOr just marking the index non-unique. Dropping it altogether might be\nbad for query performance.\n\n> * Primary keys are implemented...how?? I can't for the life of me find\n> where 'create table' occurs in the source code!\n\nPrimary key == UNIQUE NOT NULL, as far as I know, and there's also a\nflag somewhere in the index associated with the UNIQUE constraint.\n\n> * Check constraints. I seem to recall seeing code that implements check\n> constraints as triggers, but I wrote a query that retrieves all triggers\n> associated with a particular class and no check triggers were returned. How\n> are check constraints implemented? How would you drop a check constraint?\n\nNo, check constraints are stored in pg_relcheck. Don't forget to update\nthe count in pg_class.relchecks.\n\n> * Not null constraints. This seems to be a 'for completeness' constraint -\n> I presume it's implemented as part of the attribute definition?\n\nAFAIR it's just a bool in the pg_attribute row for the column.\n\n> Also - is there some good reason why this hasn't been implemented yet? Is\n> there some subtle reason, or is it just that no-one's bothered?\n\nI think no one's got round to it; attention has focused on DROP COLUMN,\nwhich is a great deal harder. If you feel like working on DROP\nCONSTRAINT, go for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 10:51:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> > * I assume that the command is supposed to allow the dropping of unique,\n> > primary, foreign key and check constraints? Should 'not null' constraints\n> > also be included here?\n> \n> Sure.\n> \n> > * Unique constraints are implemented as indicies, so dropping a unique\n> > constraint maps to dropping the relevant index.\n> \n> Or just marking the index non-unique. Dropping it altogether might be\n> bad for query performance.\n\nIt also may break the db (make it impossible to update) if some FK\nconstraints are using it\n\n> > Also - is there some good reason why this hasn't been implemented yet? Is\n> > there some subtle reason, or is it just that no-one's bothered?\n> \n> I think no one's got round to it; attention has focused on DROP COLUMN,\n> which is a great deal harder. If you feel like working on DROP\n> CONSTRAINT, go for it...\n\nDumping constraints in human-readable form (instead of CREATE CONSTRAIN\nTRIGGER) would also be great.\n\n---------\nHannu\n",
"msg_date": "Mon, 20 Nov 2000 18:52:20 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "On Mon, Nov 20, 2000 at 06:52:20PM +0200, Hannu Krosing wrote:\n> \n> Dumping constraints in human-readable form (instead of CREATE CONSTRAIN\n> TRIGGER) would also be great.\n\nIn fact, IMHO, this would be a great place to start: we'd all love the\nfuctionality, it'd have you examining almost all the same code, and it'd\nbe a feature we could all test, in diverse situations. DROP CONSTRAINT\nis unlikely to be as widely tested. If you can build the introspection\ncorrectly, so that it dumps/reloads correctly for _everyone_, then I'd\ntrust your DROP CONSTRAINT work a lot more.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n\n",
"msg_date": "Mon, 20 Nov 2000 11:15:53 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> On Mon, Nov 20, 2000 at 06:52:20PM +0200, Hannu Krosing wrote:\n>> \n>> Dumping constraints in human-readable form (instead of CREATE CONSTRAIN\n>> TRIGGER) would also be great.\n\n> In fact, IMHO, this would be a great place to start: we'd all love the\n> fuctionality, it'd have you examining almost all the same code, and it'd\n> be a feature we could all test, in diverse situations. DROP CONSTRAINT\n> is unlikely to be as widely tested. If you can build the introspection\n> correctly, so that it dumps/reloads correctly for _everyone_, then I'd\n> trust your DROP CONSTRAINT work a lot more.\n\nYes. My take on this is that a lot of the constraint-related stuff,\nespecially foreign keys, is misdesigned: the reason it's so hard to\nextract the info is that we are only storing an execution-oriented\nrepresentation. There should be a purely declarative representation\nof each constraint someplace, too, for ease of introspection.\n\nSo, my idea is that this ought to be a three-part process:\n\n1. Redesign the representation of constraints into something more\nreasonable --- at least add a declarative representation, maybe alter\nor drop existing representation if it seems appropriate.\n\n2. Adjust pg_dump to use the declarative representation rather than\ntrying to reconstruct things from the execution-oriented representation.\n(Note this will imply that, for example, triggers generated to implement\nforeign keys should NOT be dumped. Thus, it needs to be reasonably easy\nto identify such triggers --- maybe an additional flag column is needed\nin pg_trigger to mark system-generated triggers.)\n\n3. Work on ALTER ... DROP CONSTRAINT.\n\nChristopher may now be wondering what he's got himself in for ;-).\nHowever, steps 2 and 3 should be pretty easy if step 1 accounts for\ntheir needs. 
Don't do this in a waterfall process --- when you hit a\nroadblock in 2 or 3, figure out what information you don't have, and\nreturn to step 1 to fix it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 12:35:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> > * I assume that the command is supposed to allow the dropping of unique,\n> > primary, foreign key and check constraints? Should 'not null'\n> constraints\n> > also be included here?\n\nOK, I have just checked the SQL standard thingy for DROP CONSTRAINT, and it\nseems that this is the syntax:\n\nALTER TABLE <table name> DROP CONSTRAINT <constraint name> <CASCADE |\nRESTRICT>\n\nI can't find out what CASCADE and RESTRICE mean?\n\nI presume that CASCADE means that if you're trying to remove a primary key\nthat is referenced by some other foreign keys, all those foreign keys should\nalso be dropped. However, if neither is specified, should it fail? Or\nshould it produce an error? And what on Earth does RESTRICT mean?\n\nAlso - given that the correct definition of a foreign key is that is is a\nnon-key attribute that refers to a primary key in another relation - would\nit be really bad behaviour to _not_ drop any referring foreign keys?\n\n> > * Unique constraints are implemented as indicies, so dropping a unique\n> > constraint maps to dropping the relevant index.\n> > * Not null constraints. This seems to be a 'for completeness'\n> constraint -\n> > I presume it's implemented as part of the attribute definition?\n>\n> AFAIR it's just a bool in the pg_attribute row for the column.\n\nMy question then is - if someone adds it as a named attribute, where is its\nname stored?\n\n> > Also - is there some good reason why this hasn't been\n> implemented yet? Is\n> > there some subtle reason, or is it just that no-one's bothered?\n>\n> I think no one's got round to it; attention has focused on DROP COLUMN,\n> which is a great deal harder. If you feel like working on DROP\n> CONSTRAINT, go for it...\n\nI have a couple of reasons for wanting to work on it and that's that I've\ncome from a MySQL (*gasp*) background and I've fallen in love with\nPostgres's coolness. 
However, I also love the admin tool 'phpMyAdmin'.\n'phpPgAdmin' is the Postgres equivalent - however it lacks convenience and\nmany features because various sql commands aren't implemented by Postgress.\nI believe that wider use of postgres would be greatly enhanced if phpPgAdmin\nhad all the features of phpMyAdmin - it would make it a lot easier for me to\nconvert people! See, if people can't easily drop constraints (and add\nconstraints) then it discourages people from playing around with them, and\nreally learning the advanced features of postgres.\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 09:26:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints "
},
{
"msg_contents": "> > In fact, IMHO, this would be a great place to start: we'd all love the\n> > fuctionality, it'd have you examining almost all the same code, and it'd\n> > be a feature we could all test, in diverse situations. DROP CONSTRAINT\n> > is unlikely to be as widely tested. If you can build the introspection\n> > correctly, so that it dumps/reloads correctly for _everyone_, then I'd\n> > trust your DROP CONSTRAINT work a lot more.\n\nJust to catch up here - does this mean that pg_dump has issues with\ncorrectly recreating the contraints? If you tell me exactly what the\nproblem is - I'll give it a burl. However, a reimplementation of\nconstraints would probably be beyond my knowledge atm.\n\n> Yes. My take on this is that a lot of the constraint-related stuff,\n> especially foreign keys, is misdesigned: the reason it's so hard to\n> extract the info is that we are only storing an execution-oriented\n> representation. There should be a purely declarative representation\n> of each constraint someplace, too, for ease of introspection.\n\nBy this, do you mean that the existence of a foreign key is implied rather\nthan explicit by the existence of various triggers, etc.?\n\n> So, my idea is that this ought to be a three-part process:\n>\n> 1. Redesign the representation of constraints into something more\n> reasonable --- at least add a declarative representation, maybe alter\n> or drop existing representation if it seems appropriate.\n\nProblem is that there are 5 difference types of constraints, implemented in\n5 different ways. Do you want a unifed, central catalog of constraints, or\njust for some of them, or what?\n\nMaybe it could be done like this (given my limited knowledge...)\n\na. Create a system catalog that names all contraints associated with tables.\nI assume that column contraints implicitly become table constraints. This\nwill also make it easy to have global unique contraint names. 
Actually -\nare the constraint names currently unique for an entire database?\n\nb. In all the places where the constraints are implemented. (ie.\npg_relcheck, indicies and pg_trigger add a column that flags the entry as\nbeing a 'system constraint'.\n\nThat way finding and dropping constraints should be ok, so long as\neverything is kept consistent!\n\n> 2. Adjust pg_dump to use the declarative representation rather than\n> trying to reconstruct things from the execution-oriented representation.\n> (Note this will imply that, for example, triggers generated to implement\n> foreign keys should NOT be dumped. Thus, it needs to be reasonably easy\n> to identify such triggers --- maybe an additional flag column is needed\n> in pg_trigger to mark system-generated triggers.)\n\nThis would be straightforward, given the implementation of (1).\n\nIt would be nice, however, if pg_dump produced the exact same sql as used to\ncreate a table. For instance, if you specify a column constraint, it comes\nback as a column constraint, rather than a trigger, or a table constraint.\nThis would especially aid portability of the dumped SQL.\n\n> 3. Work on ALTER ... DROP CONSTRAINT.\n\nAgain, this should be straightforward given (1).\n\n> Christopher may now be wondering what he's got himself in for ;-).\n\nThere's no better way to learn databases than to code for one I think!\n\nAny comments?\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 10:35:55 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Just to catch up here - does this mean that pg_dump has issues with\n> correctly recreating the contraints?\n\nWell, if you examine the pg_dump output, it doesn't really try ---\nyou'll see no sign of any foreign-key constraint declarations in\na pg_dump script, for example, only trigger declarations. This is\ncorrect as far as reproducing the working database goes, but it's\nbad news for making a readable/modifiable dump script. What's worse,\nthis representation ties us down over version updates: we cannot easily\nchange the internal representation of constraints, because the internal\nrepresentation is what's getting dumped. Loading an old dump file into\na new version with a different constraint implementation would not\nwork as desired. (This may mean that we can't change it, which would\n*really* be a problem...)\n\n>> There should be a purely declarative representation\n>> of each constraint someplace, too, for ease of introspection.\n\n> By this, do you mean that the existence of a foreign key is implied rather\n> than explicit by the existence of various triggers, etc.?\n\nExactly.\n\n>> 1. Redesign the representation of constraints into something more\n>> reasonable --- at least add a declarative representation, maybe alter\n>> or drop existing representation if it seems appropriate.\n\n> Problem is that there are 5 difference types of constraints, implemented in\n> 5 different ways. Do you want a unifed, central catalog of constraints, or\n> just for some of them, or what?\n\nDunno. Maybe a unified representation would make more sense, or maybe\nit's OK to treat them separately. The existing implementations of the\ndifferent types of constraints were done at different times, and perhaps\nare different \"just because\" rather than for any good reason. 
We need\ninvestigation before we can come up with a reasonable proposal.\n\n> I assume that column contraints implicitly become table constraints. This\n> will also make it easy to have global unique contraint names. Actually -\n> are the constraint names currently unique for an entire database?\n\nNo, and they shouldn't be --- only per-table, I think.\n\n> It would be nice, however, if pg_dump produced the exact same sql as used to\n> create a table. For instance, if you specify a column constraint, it comes\n> back as a column constraint, rather than a trigger, or a table constraint.\n> This would especially aid portability of the dumped SQL.\n\nRight, exactly my point above. We discard too much information that\nneeds to be retained somewhere...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 22:49:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "Tom Lane wrote:\n\n> > It would be nice, however, if pg_dump produced the exact same sql as used to\n> > create a table. For instance, if you specify a column constraint, it comes\n> > back as a column constraint, rather than a trigger, or a table constraint.\n> > This would especially aid portability of the dumped SQL.\n> \n> Right, exactly my point above. We discard too much information that\n> needs to be retained somewhere...\n\nI like this conversation as not a day goes by where I don't wish I could\nedit the dump of a database rather than keeping structure entirely\nseperate -- and actually do so in a useful manner. That said, whats the\npossibility of maintaining comments if the SQL dumps actually became\nhumanly editable?\n",
"msg_date": "Mon, 20 Nov 2000 23:32:44 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "> > Problem is that there are 5 difference types of constraints,\n> implemented in\n> > 5 different ways. Do you want a unifed, central catalog of\n> constraints, or\n> > just for some of them, or what?\n>\n> Dunno. Maybe a unified representation would make more sense, or maybe\n> it's OK to treat them separately. The existing implementations of the\n> different types of constraints were done at different times, and perhaps\n> are different \"just because\" rather than for any good reason. We need\n> investigation before we can come up with a reasonable proposal.\n\nIt strikes me that having a catalog (so to speak) of all contraints, with\nflags in the tables where the contraints are implemented would allow a\nseparation of presentation and implementation.\n\nFor example, say, if a catalog existed that clients could query to discover\nall constraint information, then it would be possible to change how foreign\nkeys are implemented, and not affect how this info is presented.\n\nHowever, if users still had to perform joins between some centralised table,\nand the tables where the constraints are actually kept (relcheck, trigger,\netc) then that defeats the purpose. Say - isn't that what 'views' are for?\n\n> > I assume that column contraints implicitly become table\n> constraints. This\n> > will also make it easy to have global unique contraint names.\n> Actually -\n> > are the constraint names currently unique for an entire database?\n>\n> No, and they shouldn't be --- only per-table, I think.\n\nOops - correct. Wasn't paying attention. I forgot that the table name is\nspecified as part of the ALTER statement.\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 12:43:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints "
},
{
"msg_contents": "> I like this conversation as not a day goes by where I don't wish I could\n> edit the dump of a database rather than keeping structure entirely\n> seperate -- and actually do so in a useful manner. That said, whats the\n> possibility of maintaining comments if the SQL dumps actually became\n> humanly editable?\n\n From reading the pg_dump source code, pg_dump creates a set of 'COMMENT ON\n...' statements that should recreate all the comments associated with an\noid. So - there shouldn't be a problem, should there?\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 12:46:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints"
},
{
"msg_contents": "> A join as such doesn't bother me. For example, it'd be proper for this\n> hypothetical constraint catalog to have a column of table OIDs, which\n> you'd have to join against pg_class to get the table name from. The\n> real issue is to make sure that we store enough info so that the\n> original table/constraint declarations can be reconstructed in a\n> straightforward fashion.\n\nThat would then require that an optional oid be stored that relates the\nconstraint to a particular attribute in a table, not just the table itself.\nThat way, column restraints can be reconstructed.\n\n> Peter has remarked that the SQL spec offers a set of system views\n> intended to provide exactly this info. That should be looked at;\n> if there's a workable standard for this stuff, we oughta follow it.\n\nSpeaking of - I simply cannot find a standard SQL specification anywhere on\nthe net, without buying one from ANSI. I'm forced to rely on\nvendor-specific docs - which are not standard in any way. Is anyone able to\nmail me such a thing?\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 13:02:34 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> For example, say, if a catalog existed that clients could query to discover\n> all constraint information, then it would be possible to change how foreign\n> keys are implemented, and not affect how this info is presented.\n\n> However, if users still had to perform joins between some centralised table,\n> and the tables where the constraints are actually kept (relcheck, trigger,\n> etc) then that defeats the purpose. Say - isn't that what 'views' are for?\n\nA join as such doesn't bother me. For example, it'd be proper for this\nhypothetical constraint catalog to have a column of table OIDs, which\nyou'd have to join against pg_class to get the table name from. The\nreal issue is to make sure that we store enough info so that the\noriginal table/constraint declarations can be reconstructed in a\nstraightforward fashion.\n\nPeter has remarked that the SQL spec offers a set of system views\nintended to provide exactly this info. That should be looked at;\nif there's a workable standard for this stuff, we oughta follow it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Nov 2000 00:03:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "At 10:49 PM 11/20/00 -0500, Tom Lane wrote:\n>\"Christopher Kings-Lynne\" <[email protected]> writes:\n>> Just to catch up here - does this mean that pg_dump has issues with\n>> correctly recreating the contraints?\n>\n>Well, if you examine the pg_dump output, it doesn't really try ---\n>you'll see no sign of any foreign-key constraint declarations in\n>a pg_dump script, for example, only trigger declarations. This is\n>correct as far as reproducing the working database goes, but it's\n>bad news for making a readable/modifiable dump script.\n\nShort story, you are both right.\n\nChris - the dumps reload and recreate the constraints (in other words,\nthe answer to your question is \"no\")\n\nTom's correct in that decyphering the dump output is an ... interesting\nproblem.\n\n(Tom, I just want to make sure that Chris undertands that dump/restore\nDOES restore the constraints. The \"it doesn't really try\" statement\nyou made, if hastily read without the qualifier, would lead one to believe\nthat a dump/restore would lose constraints).\n\nWhat Tom's saying is the internal implementation of the SQL constraints\nare exposed during the dump, where it would be much better if the SQL\nthat constructed the constraint were output instead. The implementation\nisn't hidden from the dump, rather the declaration is hidden.\n\n>What's worse,\n>this representation ties us down over version updates: we cannot easily\n>change the internal representation of constraints, because the internal\n>representation is what's getting dumped.\n\nWhich follows up my statement above perfectly. If the implementation\nwere hidden, and the SQL equivalent dumped, we could change the implementation\nwithout breaking dump/restore ACROSS VERSIONS. (I capped because WITHIN\nA VERSION dump/restore works fine).\n\n \n>> Problem is that there are 5 difference types of constraints, implemented in\n>> 5 different ways. 
Do you want a unifed, central catalog of constraints, or\n>> just for some of them, or what?\n>\n>Dunno. Maybe a unified representation would make more sense, or maybe\n>it's OK to treat them separately. The existing implementations of the\n>different types of constraints were done at different times, and perhaps\n>are different \"just because\" rather than for any good reason. We need\n>investigation before we can come up with a reasonable proposal.\n\nI think you hit the nail on the head when earlier you said that representation\nwas driven by the implementation.\n\nOf course, one could say this is something of a PG tradition - check out\nviews,\nwhich in PG 7.0 still are dumped as rules to the rule system, which no other\nDB will understand.\n\nSo I can't say it's fair to pick on newer contraints like RI - they build\non a tradition of exposing the internal implementation to pg_dump and its\noutput, they didn't invent it.\n\nIf this problem is attacked, should one stop at constraints or make certain\nthat other elements like views are dumped properly, too? (or were views\nfixed for 7.1, I admit to a certain amount of \"ignoring pgsql-hackers over\nthe last few months\")\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 20 Nov 2000 21:06:02 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > On Mon, Nov 20, 2000 at 06:52:20PM +0200, Hannu Krosing wrote:\n> >> \n> >> Dumping constraints in human-readable form (instead of CREATE CONSTRAIN\n> >> TRIGGER) would also be great.\n> \n> > In fact, IMHO, this would be a great place to start: we'd all love the\n> > fuctionality, it'd have you examining almost all the same code, and it'd\n> > be a feature we could all test, in diverse situations. DROP CONSTRAINT\n> > is unlikely to be as widely tested. If you can build the introspection\n> > correctly, so that it dumps/reloads correctly for _everyone_, then I'd\n> > trust your DROP CONSTRAINT work a lot more.\n> \n> Yes. My take on this is that a lot of the constraint-related stuff,\n> especially foreign keys, is misdesigned: the reason it's so hard to\n> extract the info is that we are only storing an execution-oriented\n> representation. There should be a purely declarative representation\n> of each constraint someplace, too, for ease of introspection.\n\nYes, and psql should be able to show constraint info too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Nov 2000 00:06:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "At 12:03 AM 11/21/00 -0500, Tom Lane wrote:\n\n>Peter has remarked that the SQL spec offers a set of system views\n>intended to provide exactly this info. That should be looked at;\n>if there's a workable standard for this stuff, we oughta follow it.\n\nThis and a BUNCH else.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 20 Nov 2000 21:11:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "* Christopher Kings-Lynne <[email protected]> [001120 23:10]:\n> Speaking of - I simply cannot find a standard SQL specification anywhere on\n> the net, without buying one from ANSI. I'm forced to rely on\n> vendor-specific docs - which are not standard in any way. Is anyone able to\n> mail me such a thing?\nI found a SQL99, Complete, Really book recently... Seems very\ncomplete. I'll get an ISBN if ya want...\n\nLER\n\n> \n> Chris\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 23:11:49 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> If this problem is attacked, should one stop at constraints or make certain\n> that other elements like views are dumped properly, too? (or were views\n> fixed for 7.1, I admit to a certain amount of \"ignoring pgsql-hackers over\n> the last few months\")\n\nOver the long run, there's a number of areas that need to be attacked\nbefore pg_dump output will fully correspond to what was entered.\n\"SERIAL\" columns are another favorite complaint, for example.\nBut I suggest that we try to deal with manageable pieces of the\nproblem ;-)\n\nViews do seem to be dumped as views by current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Nov 2000 00:18:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > I like this conversation as not a day goes by where I don't wish I could\n> > edit the dump of a database rather than keeping structure entirely\n> > seperate -- and actually do so in a useful manner. That said, whats the\n> > possibility of maintaining comments if the SQL dumps actually became\n> > humanly editable?\n> \n> >From reading the pg_dump source code, pg_dump creates a set of 'COMMENT ON\n> ...' statements that should recreate all the comments associated with an\n> oid. So - there shouldn't be a problem, should there?\n\nI was thinking of SQL that looks something like:\n\n/*******************************\n * TABLE: example\n *\n * Used to accomplish stuff\n */\nCREATE TABLE example \n ( example_id serial\n\n /* Must be a ZIP or Postal Code */\n , region varchar(6) UNIQUE\n NOT NULL\n\n /* Descriptive text */\n , description varchar(60) NOT NULL\n );\n\n\nI've always made the assumption that anything in the /* */ was dropped.\n",
"msg_date": "Tue, 21 Nov 2000 00:22:05 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "> Speaking of - I simply cannot find a standard SQL specification anywhere on\n> the net, without buying one from ANSI. I'm forced to rely on\n> vendor-specific docs - which are not standard in any way. Is anyone able to\n> mail me such a thing?\n\nCheck the mailing list archives for the reference to a web site which\nhas what appears to be something close to the SQL99 standards document.\nLet me know if you don't find it and I'll tarball something up for you.\n\n - Thomas\n",
"msg_date": "Tue, 21 Nov 2000 05:42:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "At 00:22 21/11/00 -0500, Rod Taylor wrote:\n>Christopher Kings-Lynne wrote:\n>\n>/*******************************\n> * TABLE: example\n> *\n> * Used to accomplish stuff\n> */\n>CREATE TABLE example \n> ( example_id serial\n>\n> /* Must be a ZIP or Postal Code */\n> , region varchar(6) UNIQUE\n> NOT NULL\n>\n> /* Descriptive text */\n> , description varchar(60) NOT NULL\n> );\n\n From the point of view of efficient dump & load, I think you actually need\nto dump:\n\nCREATE TABLE example \n ( example_id serial\n\n -- Must be a ZIP or Postal Code\n , region varchar(6) \n\n -- Descriptive text\n , description varchar(60)\n );\n\nFollowed by:\n\nALTER TABLE example Alter Column region UNIQUE NOT NULL;\n...etc. (Whatever the correct syntax is).\n\nThe reason for this is that UNIQUE constraints in particular are probably\nvery nasty things to check when loading a table. I would expect it to be\nmore efficient to create tables, load them, and define constraints. Also,\nfor FK constraints this is essential. Unless of course someone implements a\n'SET ALL CONSTRAINTS OFF/ON'.\n\nIt is also nice to be able to dump constraints only.\n\nSo it's definitely a good idea to separate them, IMO.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 16:51:24 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "> CREATE TABLE example\n> ( example_id serial\n>\n> -- Must be a ZIP or Postal Code\n> , region varchar(6)\n>\n> -- Descriptive text\n> , description varchar(60)\n> );\n\nActually - this is something I _could_ do.\n\nAs the pg_dump is running, it shouldn't be too hard to select the comment\nassociated with each entity as it is being dumped. ie. In the example\nabove, the comments for each attribute would be retrieved from\npg_description (or whatever) and output as '-- ...' comments.\n\nThen, if the COMMENT ON statements are also still dumped at the bottom, you\nget the ability to see comments conveniently in your dump, but with the\nability to still hand-edit them before restoring the dump...\n\nChris\n\n",
"msg_date": "Tue, 21 Nov 2000 15:10:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Table/Column Constraints"
},
{
"msg_contents": "At 15:10 21/11/00 +0800, Christopher Kings-Lynne wrote:\n>> CREATE TABLE example\n>> ( example_id serial\n>>\n>> -- Must be a ZIP or Postal Code\n>> , region varchar(6)\n>>\n>> -- Descriptive text\n>> , description varchar(60)\n>> );\n>\n>Actually - this is something I _could_ do.\n>\n>As the pg_dump is running, it shouldn't be too hard to select the comment\n>associated with each entity as it is being dumped. ie. In the example\n>above, the comments for each attribute would be retrieved from\n>pg_description (or whatever) and output as '-- ...' comments.\n\nI was actually more worried about making sure the constraints were dumped\nseparately from the table, but maybe I missed the point of the original post. \n\n>Then, if the COMMENT ON statements are also still dumped at the bottom, you\n>get the ability to see comments conveniently in your dump, but with the\n>ability to still hand-edit them before restoring the dump...\n\nIf I recall correctly, the comments are actually grabbed when each table is\nretrieved, so it is easy to do. But is it really a good idea?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 19:10:56 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Table/Column Constraints"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n> Speaking of - I simply cannot find a standard SQL specification anywhere on\n> the net, without buying one from ANSI. I'm forced to rely on\n> vendor-specific docs - which are not standard in any way. Is anyone able to\n> mail me such a thing?\n\nYou may want to take a look through http://www.techstreet.com -- I\nsearched standards for the keyword 'database', and found that many \nof the SQL documents were available as PDFs for $18.00 each.\n\n-- \nKarl DeBisschop [email protected]\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer [email protected]\n",
"msg_date": "Tue, 21 Nov 2000 08:57:34 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints"
},
{
"msg_contents": "At 12:18 AM 11/21/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> If this problem is attacked, should one stop at constraints or make certain\n>> that other elements like views are dumped properly, too? (or were views\n>> fixed for 7.1, I admit to a certain amount of \"ignoring pgsql-hackers over\n>> the last few months\")\n\n...\n\n>Views do seem to be dumped as views by current sources.\n\nGood...definitely a step in the right direction!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 21 Nov 2000 06:18:08 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table/Column Constraints "
},
{
"msg_contents": "\nOn Tue, 21 Nov 2000, Christopher Kings-Lynne wrote:\n\n> > > Problem is that there are 5 difference types of constraints,\n> > implemented in\n> > > 5 different ways. Do you want a unifed, central catalog of\n> > constraints, or\n> > > just for some of them, or what?\n> >\n> > Dunno. Maybe a unified representation would make more sense, or maybe\n> > it's OK to treat them separately. The existing implementations of the\n> > different types of constraints were done at different times, and perhaps\n> > are different \"just because\" rather than for any good reason. We need\n> > investigation before we can come up with a reasonable proposal.\n> \n> It strikes me that having a catalog (so to speak) of all contraints, with\n> flags in the tables where the contraints are implemented would allow a\n> separation of presentation and implementation.\n\nYeah, the hard part is storing enough information to recover the\nconstraint in an easy way without going to the implementation details,\nstrings aren't sufficient by themselves because that gets really difficult\nto maintain as table/columns change or are dropped. Maybe a central\ncatalog like the above and a backend function that takes care of\nformatting to text would work. Or keeping track of the dependent objects\nand re-figuring the text form (or drop constraint, or whatever) when those\nobjects are changed/dropped.\n\nI think that combining different constraints is good to some extent\nbecause there are alot of problems with many constraints (the RI ones have\nproblems, check constraints are currently not deferrable AFAIK,\nthe unique constraint doesn't actually have the correct semantics) and\nmaybe thinking about the whole set of them at the same time would be a\ngood idea.\n\n> > > I assume that column contraints implicitly become table\n> > constraints. 
This\n> > > will also make it easy to have global unique contraint names.\n> > Actually -\n> > > are the constraint names currently unique for an entire database?\n> >\n> > No, and they shouldn't be --- only per-table, I think.\n> \n> Oops - correct. Wasn't paying attention. I forgot that the table name is\n> specified as part of the ALTER statement.\n\nI'm not sure actually, it seems to say in the syntax rules for the\nconstraint name definition that the qualified identifier of a constraint\nneeds to be different from any other qualified identifier for any other\nconstraint in the same schema, so Christopher may have been right the\nfirst time (given we don't have schema).\n\n",
"msg_date": "Wed, 22 Nov 2000 11:53:50 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Table/Column Constraints "
}
]
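Christopher's proposed dump format — each column's pg_description text emitted as a `--` comment directly above the column, with the COMMENT ON statements still dumped separately — can be sketched in a few lines of C. This is purely illustrative: the struct, the function, and the buffer handling are invented for the sketch and are not pg_dump's actual implementation.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the proposed pg_dump output shape: any comment attached to
 * a column is written as a "-- ..." line immediately above it.  The
 * column layout mimics the example table quoted in the thread. */
struct col
{
	const char *name;
	const char *type;
	const char *comment;	/* from pg_description, or NULL */
};

static void
dump_table(char *buf, size_t len, const char *table,
		   const struct col *cols, int ncols)
{
	size_t		off = 0;
	int			i;

	off += snprintf(buf + off, len - off, "CREATE TABLE %s\n", table);
	for (i = 0; i < ncols; i++)
	{
		if (cols[i].comment)
			off += snprintf(buf + off, len - off, "  -- %s\n",
							cols[i].comment);
		off += snprintf(buf + off, len - off, "  %s %s %s\n",
						i == 0 ? "(" : ",", cols[i].name, cols[i].type);
	}
	snprintf(buf + off, len - off, ");\n");
}
```

Fed the example table from the first message, this produces the annotated form quoted at the top of the thread, while the COMMENT ON statements dumped at the bottom would still be what actually restores the metadata.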
|
[
{
"msg_contents": "\nI get the following when doing a fresh build:\n\nmake[4]: Entering directory\n`/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'\nbison -y -d preproc.y\n(\"preproc.y\", line 2256) error: $5 of `CreatedbStmt' has no declared type\n(\"preproc.y\", line 2256) error: invalid $ value\n(\"preproc.y\", line 2256) error: $6 of `CreatedbStmt' has no declared type\n(\"preproc.y\", line 2265) error: $$ of `createdb_opt_list' has no declared type\n(\"preproc.y\", line 2265) error: $1 of `createdb_opt_list' has no declared type\n(\"preproc.y\", line 2267) error: $$ of `createdb_opt_list' has no declared type\n(\"preproc.y\", line 2267) error: $1 of `createdb_opt_list' has no declared type\n(\"preproc.y\", line 2267) error: $2 of `createdb_opt_list' has no declared type\n(\"preproc.y\", line 2270) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 2271) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 2272) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 2273) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 2276) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 2280) error: $$ of `createdb_opt_item' has no declared type\n(\"preproc.y\", line 5365) error: symbol createdb_opt_encoding is used, but\nis not defined as a token and has no rules\n(\"preproc.y\", line 5365) error: symbol createdb_opt_location is used, but\nis not defined as a token and has no rules\nmake[4]: *** [preproc.c] Error 1\nmake[4]: Leaving directory\n`/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'\nmake[3]: *** [all] Error 2\nmake[3]: Leaving directory\n`/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory\n`/home/pjw/work/postgresql-cvs/pgsql/src/interfaces'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/home/pjw/work/postgresql-cvs/pgsql/src'\nmake: *** 
[all] Error 2\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 20 Nov 2000 23:52:17 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current CVS broken?"
},
{
"msg_contents": "On 20 November 2000 18:52, Philip Warner wrote:\n> I get the following when doing a fresh build:\n\nDid you made distclean?\n\n> make[4]: Entering directory\n> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'\n> bison -y -d preproc.y\n> (\"preproc.y\", line 2256) error: $5 of `CreatedbStmt' has no declared type\n> (\"preproc.y\", line 2256) error: invalid $ value\n> (\"preproc.y\", line 2256) error: $6 of `CreatedbStmt' has no declared type\n> (\"preproc.y\", line 2265) error: $$ of `createdb_opt_list' has no declared\n> type (\"preproc.y\", line 2265) error: $1 of `createdb_opt_list' has no\n> declared type (\"preproc.y\", line 2267) error: $$ of `createdb_opt_list' has\n> no declared type (\"preproc.y\", line 2267) error: $1 of `createdb_opt_list'\n> has no declared type (\"preproc.y\", line 2267) error: $2 of\n> `createdb_opt_list' has no declared type (\"preproc.y\", line 2270) error: $$\n> of `createdb_opt_item' has no declared type (\"preproc.y\", line 2271) error:\n> $$ of `createdb_opt_item' has no declared type (\"preproc.y\", line 2272)\n> error: $$ of `createdb_opt_item' has no declared type (\"preproc.y\", line\n> 2273) error: $$ of `createdb_opt_item' has no declared type (\"preproc.y\",\n> line 2276) error: $$ of `createdb_opt_item' has no declared type\n> (\"preproc.y\", line 2280) error: $$ of `createdb_opt_item' has no declared\n> type (\"preproc.y\", line 5365) error: symbol createdb_opt_encoding is used,\n> but is not defined as a token and has no rules\n> (\"preproc.y\", line 5365) error: symbol createdb_opt_location is used, but\n> is not defined as a token and has no rules\n> make[4]: *** [preproc.c] Error 1\n> make[4]: Leaving directory\n> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'\n> make[3]: *** [all] Error 2\n> make[3]: Leaving directory\n> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg'\n> make[2]: *** [all] Error 2\n> make[2]: Leaving directory\n> 
`/home/pjw/work/postgresql-cvs/pgsql/src/interfaces'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/home/pjw/work/postgresql-cvs/pgsql/src'\n> make: *** [all] Error 2\n>\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n>\n> | --________--\n>\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Mon, 20 Nov 2000 18:57:48 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Current CVS broken?"
},
{
"msg_contents": "At 18:57 20/11/00 +0600, Denis Perchine wrote:\n>On 20 November 2000 18:52, Philip Warner wrote:\n>> I get the following when doing a fresh build:\n>\n>Did you made distclean?\n\nYep.\n\nConfig parames were:\n\n./configure \\\n --prefix=/var/lib/pgsql7.1.0b \\\n --with-odbc \\\n --with-x \\\n --enable-plpgsql \\\n --with-plpgsql \\\n --enable-cassert \\\n --enable-syslog \\\n --enable-debug \\\n --with-pgport=5434\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 00:15:09 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Current CVS broken?"
}
]
|
[
{
"msg_contents": "\nDear Hackers,\n\nWhile working on a postgres-based fulltext searching system \nwe encountered the following problem:\n\n There is a table \n create table t (\n x int []\n ) \n and a given integer constant y.\n The task is to find those records of this table, which contain the\n value y in the arrays x.\n\nThis could be implemented by writing a function \n array_contains(array,value)\nand selecting :\n select * from table where array_contains(table.x, y);\n\nSuch SQL statement would result in a long sequential scan, which is not\ncool and not fast. It could be much more useful if we could use an index\nfor such a query.\n\nIf there were a kind of B-tree index which allows to have several\nkey values for a record, the problem could be solved easily!\n\nWe would like to know if such a feature is already implemented \nin postgres indexes, otherwise are there any serious difficulties in\nimplementing it.\n\nMay be, GiSt could be useful for this task. Does anybody know any alive\nimplementation of GiST ?\n\nRegards, Ivan Panchenko \n\n\n\n\n\n\n",
"msg_date": "Mon, 20 Nov 2000 16:08:01 +0300 (MSK)",
"msg_from": "\"Ivan E. Panchenko\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexing on arrays "
},
{
"msg_contents": "\nI am also working on a full text search system, and I have a similar\nproblem, although I can get around the full table scan if I can simply\nreturn a full set of tuples.\n\nselect * from table where table.key in ( function('bla bla bla') );\n\nOr\n\ncreate table result as function('bla bla bla');\n\nselect * from table where table.key = result.key;\n\n\n\nI have been trying to figure out how to return a variable number and\nformat of tuples, but am getting lost in the code. Any help anyone has\nwould be greatly appreciated.\n\n\n\"Ivan E. Panchenko\" wrote:\n> \n> Dear Hackers,\n> \n> While working on a postgres-based fulltext searching system\n> we encountered the following problem:\n> \n> There is a table\n> create table t (\n> x int []\n> )\n> and a given integer constant y.\n> The task is to find those records of this table, which contain the\n> value y in the arrays x.\n> \n> This could be implemented by writing a function\n> array_contains(array,value)\n> and selecting :\n> select * from table where array_contains(table.x, y);\n> \n> Such SQL statement would result in a long sequential scan, which is not\n> cool and not fast. It could be much more useful if we could use an index\n> for such a query.\n> \n> If there were a kind of B-tree index which allows to have several\n> key values for a record, the problem could be solved easily!\n> \n> We would like to know if such a feature is already implemented\n> in postgres indexes, otherwise are there any serious difficulties in\n> implementing it.\n> \n> May be, GiSt could be useful for this task. Does anybody know any alive\n> implementation of GiST ?\n> \n> Regards, Ivan Panchenko\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 20 Nov 2000 23:16:06 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexing on arrays"
}
]
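The index Ivan is asking for amounts to an inverted index: each element value maps to the set of rows whose array contains it, so a containment query probes one bucket instead of scanning every row. A toy in-memory sketch in C — illustrative only, with made-up data, and in no way how GiST or any PostgreSQL access method is actually implemented:

```c
#include <assert.h>

#define NROWS  3
#define MAXVAL 64

/* rows[i] holds the int[] column of row i, terminated by -1 */
static const int rows[NROWS][4] = {
	{10, 20, 30, -1},
	{20, 40, -1, -1},
	{ 5, 30, -1, -1},
};

/* inverted[v] is a bitmap of the rows whose array contains value v */
static unsigned inverted[MAXVAL];

static void
build_index(void)
{
	int			r,
				j;

	for (r = 0; r < NROWS; r++)
		for (j = 0; rows[r][j] != -1; j++)
			inverted[rows[r][j]] |= 1u << r;
}

/* the indexed equivalent of:
 *   SELECT * FROM t WHERE array_contains(t.x, y)  */
static unsigned
rows_containing(int y)
{
	return (y >= 0 && y < MAXVAL) ? inverted[y] : 0;
}
```

A disk-based version of the same value-to-rows mapping is what a multi-key B-tree entry per array element (or a GiST operator class over arrays) would provide; the point is only that containment becomes an index probe rather than a sequential scan.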
|
[
{
"msg_contents": "Is it normal that a query that takes <1 sec when executed from psql\nprompt \ntakes >15 sek when executed from a function (and takes >95% of cpu for\nall that time ?\n\nexample (on 7.0.2)\n\n>UPDATE item SET id_path = '';\n\nreturns immediately (on 2000 item table)\n\nthen I create a function\nCREATE FUNCTION \"regenerate_id_paths\" ( ) RETURNS int4 AS '\n BEGIN\n UPDATE item SET id_path = '''';\n RETURN -1;\n END;\n' LANGUAGE 'plpgsql';\n\nand then\n\n>select regenerate_id_paths( );\n\ntakes more than 15 sec and uses as much cpu as it can get while running;\n\n\n\n\nBTW, where can I learn more about pl/pgsql s�ntax ?\n\nThe postgre docs suggest that \"For more complex examples the programmer\nmight look at the regression\ntest for PL/pgSQL.\" but there is only one example using a for loop and\nnone using while. \n\nI suspect that it may be missing more.\n\n\n\n-----------\nHannu\n",
"msg_date": "Mon, 20 Nov 2000 15:28:57 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "pl/pgsql slowness"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Is it normal that a query that takes <1 sec when executed from psql\n> prompt takes > 15 sek when executed from a function\n\nNo. I can't reproduce the quoted misbehavior under either 7.0.2 or\ncurrent sources; your example takes ~1 sec either way for me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 11:14:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql slowness "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Is it normal that a query that takes <1 sec when executed from psql\n> > prompt takes > 15 sek when executed from a function\n> \n> No. I can't reproduce the quoted misbehavior under either 7.0.2 or\n> current sources; your example takes ~1 sec either way for me.\n\nSorry, my fault. \n\nI ran the queries over two similar tables, but the slow one had several\nhuge indexes on it.\nThey had grown huge over time and vacuum did nothing to reduce them. \n\nSo it seems that it had nithing to do with plpgsql.\n\n-----------\nHannu\n",
"msg_date": "Mon, 20 Nov 2000 19:06:01 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql slowness"
}
]
|
[
{
"msg_contents": "Configured as: \n\nCC=cc CXX=CC ./configure --prefix=/home/ler/pg-test --enable-syslog --with-CXX --with-perl --enable-multibyte --with-includes=/usr/local/include --with-libs=/usr/local/lib\n\nI get:\n\ngmake -C doc all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc'\ngmake[1]: Nothing to be done for `all'.\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc'\ngmake -C src all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/src'\ngmake -C backend all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake -C parser parse.h\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/parser'\nbison -y -d gram.y\nmv y.tab.c ./gram.c\nmv y.tab.h ./parse.h\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/parser'\nprereqdir=`cd parser/ && pwd` && \\\n cd ../../src/include/parser/ && rm -f parse.h && \\\n ln -s $prereqdir/parse.h .\ngmake -C utils fmgroids.h\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils'\nCPP='cc -E' AWK='gawk' /bin/sh Gen_fmgrtab.sh ../../../src/include/catalog/pg_proc.h\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ncd ../../src/include/utils/ && rm -f fmgroids.h && \\\n ln -s ../../../src/backend/utils/fmgroids.h .\ngmake -C access all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access'\ngmake -C common SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/common'\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o heaptuple.o heaptuple.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o indextuple.o indextuple.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o indexvalid.o indexvalid.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o printtup.o printtup.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o scankey.o scankey.c\ncc -O -K inline 
-DXLOG -I/usr/local/include -I../../../../src/include -c -o tupdesc.o tupdesc.c\n/bin/ld -r -o SUBSYS.o heaptuple.o indextuple.o indexvalid.o printtup.o scankey.o tupdesc.o \ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/common'\ngmake -C gist SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/gist'\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o gist.o gist.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o gistget.o gistget.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o gistscan.o gistscan.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o giststrat.o giststrat.c\n/bin/ld -r -o SUBSYS.o gist.o gistget.o gistscan.o giststrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/gist'\ngmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/hash'\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hash.o hash.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashfunc.o hashfunc.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashinsert.o hashinsert.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashovfl.o hashovfl.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashpage.o hashpage.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashscan.o hashscan.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashsearch.o hashsearch.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashstrat.o hashstrat.c\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o hashutil.o hashutil.c\n/bin/ld -r -o SUBSYS.o hash.o hashfunc.o hashinsert.o hashovfl.o hashpage.o hashscan.o hashsearch.o hashstrat.o 
hashutil.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/hash'\ngmake -C heap SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/heap'\ncc -O -K inline -DXLOG -I/usr/local/include -I../../../../src/include -c -o heapam.o heapam.c\nUX:acomp: ERROR: \"heapam.c\", line 1396: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 1504: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 1700: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 1703: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2105: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2129: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2143: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2190: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2213: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2233: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2310: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2338: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2363: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2409: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2432: invalid cast expression\nUX:acomp: ERROR: \"heapam.c\", line 2472: invalid cast expression\ngmake[4]: *** [heapam.o] Error 1\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/heap'\ngmake[3]: *** [heap-recursive] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access'\ngmake[2]: *** [access-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nNot Good....\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 09:04:02 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "* Larry Rosenman <[email protected]> [001120 09:05]:\n[snip]\nmore info. It seems to not like the following from\nsrc/include/buffer/bufpage.h (line 305):\n#define PageSetLSN(page, lsn) \\\n (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n\nI'm not sure what it's trying to do... \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 09:17:12 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: err, XLOG/UW711/cc/Doesn't compile."
}
]
|
[
{
"msg_contents": "After reading Vadim's note stating the WAL is enabled by default,\nI downloaded sources from CVS to rebuild the latest version.\n\nThere are errors in ecpg's preproc.y grammar that weren't there \nin the CVS sources I built yesterday.\n\nThe two rules \"createdb_opt_item\" and \"createdb_opt_list\" aren't\ndeclared as type string at the front of the file, so bison gets\nupset. \n\nTwo other rules, \"createdb_opt_encoding\" and \"createdb_opt_location\"\nwere declared as type string but never defined as rules. Bison\ndoesn't like that, either.\n\nThe first clause for rule CreatedbStmt references a sixth item that\ndoesn't exist in the definition.\n\nI've cleaned these up locally (though I just removed the $6 mentioned\nlast because I have no idea what was intended) so I can compile. Someone\nshould clean up the CVS sources...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 20 Nov 2000 08:07:52 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "building current sources"
}
]
|
[
{
"msg_contents": "I'm organising an open source developers meeting (that will be supported by VA Linux) in february and would like some pgsql developers to be present. Would you be so kind as to spread the word?\nPeople interested in attending or make a slide presentation can contact me at [email protected], register at the (very basic) website (http://www.raphinou.com).\nAlthough this is a .com adress, it is organised on a voluntary basis by LUGS ;-);\n\n",
"msg_date": "Mon, 20 Nov 2000 11:41:03 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Inquiry about PostgreSQL (from form)"
}
]
|
[
{
"msg_contents": "Hi: \n \nWonder if any of you know how to setup a postgreSQL server as a windows 2000 service or have a URL or document on how to do it. \n \nThank you \n\n--\nLuis Maga�a\nGnovus Networks & Software\nwww.gnovus.com\nTel. +52 (7) 4422425\[email protected]\n\n\n",
"msg_date": "Mon, 20 Nov 2000 11:24:06 -0600",
"msg_from": "Luis =?UNKNOWN?Q?Maga=F1a?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL as windows 2000 service"
}
]
|
[
{
"msg_contents": "> more info. It seems to not like the following from\n> src/include/buffer/bufpage.h (line 305):\n> #define PageSetLSN(page, lsn) \\\n> (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> \n> I'm not sure what it's trying to do... \n\nJust assign values to 8 bytes structure in pageheader.\nDid you make distclean?\n\nVadim\n",
"msg_date": "Mon, 20 Nov 2000 09:46:39 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "* Mikheev, Vadim <[email protected]> [001120 12:00]:\n> > more info. It seems to not like the following from\n> > src/include/buffer/bufpage.h (line 305):\n> > #define PageSetLSN(page, lsn) \\\n> > (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> > \n> > I'm not sure what it's trying to do... \n> \n> Just assign values to 8 bytes structure in pageheader.\n> Did you make distclean?\nyes, gmake maintainer-clean.\n> \n> Vadim\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 12:01:10 -0600",
"msg_from": "\"'Larry Rosenman'\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "Mikheev, Vadim writes:\n\n> > more info. It seems to not like the following from\n> > src/include/buffer/bufpage.h (line 305):\n> > #define PageSetLSN(page, lsn) \\\n> > (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> > \n> > I'm not sure what it's trying to do... \n> \n> Just assign values to 8 bytes structure in pageheader.\n\nIt's because XLogRecPtr is a struct. You can't assign structs with\n'='. Gotta use memcpy, etc.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 20 Nov 2000 19:46:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It's because XLogRecPtr is a struct. You can't assign structs with\n> '='. Gotta use memcpy, etc.\n\nStruct assignment is a required feature since ANSI C, and I'm pretty\nsure we use it in other places already. I doubt that's the explanation\nfor Larry's problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 13:58:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: err, XLOG/UW711/cc/Doesn't compile. "
},
{
"msg_contents": "I wrote:\n\n> Mikheev, Vadim writes:\n> \n> > > more info. It seems to not like the following from\n> > > src/include/buffer/bufpage.h (line 305):\n> > > #define PageSetLSN(page, lsn) \\\n> > > (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> > > \n> > > I'm not sure what it's trying to do... \n> > \n> > Just assign values to 8 bytes structure in pageheader.\n> \n> It's because XLogRecPtr is a struct. You can't assign structs with\n> '='. Gotta use memcpy, etc.\n\nCorrection: It's because the compiler won't let you cast to a\nstruct. Assigning seems to compile okay.\n\nThis code fails to compile:\n\n| typedef struct foo { int a; int b; } foo;\n| \n| main() {\n| foo x;\n| (foo) x;\n| }\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 20 Nov 2000 19:58:27 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: err, XLOG/UW711/cc/Doesn't compile."
}
]
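Peter's diagnosis is easy to check with a stand-alone C fragment: assignment between identically typed structs has been required since ANSI C and copies memberwise, whereas a cast *to* a struct type violates the constraints on the cast operator — evidently the compilers the tree was usually built with accepted the no-op cast, while UnixWare's cc did not. A minimal sketch (the struct here is invented; XLogRecPtr is likewise a small plain struct):

```c
#include <assert.h>

typedef struct foo
{
	int			a;
	int			b;
} foo;

static foo
copy_of(foo x)
{
	foo			y;

	y = x;				/* legal: plain struct assignment, copies members */
#ifdef BROKEN
	y = (foo) x;		/* illegal in ISO C: cannot cast to a struct type;
						 * the same shape as the (XLogRecPtr) cast the
						 * UnixWare compiler rejected in PageSetLSN */
#endif
	return y;
}
```

If `lsn` already arrives with type XLogRecPtr at every call site, dropping the cast from the macro and relying on plain assignment would be the portable form.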
|
[
{
"msg_contents": "I sent this e-mail last week but hadn't received any response. Given\nThomas' last message about seeing responses to threads he never recalled\nseeing in the first place, I'm wondering whether the original message\nmade it to the server.\n\n\n-Tony\n\np.s. I still can't seem to get the \"DIGEST\" to work on HACKERS. Seems to\nbe some problems with the majordomo.\n\n\nHere's my original message:\n\n-------- Original Message --------\nSubject: Weird backup file\nDate: Fri, 17 Nov 2000 11:27:32 -0800\nFrom: \"G. Anthony Reina\" <[email protected]>\nOrganization: The Neurosciences Institute\nTo: \"[email protected]\"\n<[email protected]>,[email protected]\n\nI backed up my database from Postgres 6.5.3 and migrated to 7.0.2\nseveral a few months ago. For some reason, data was lost in the\ntransition. I've finally pinned it down to the attached file (abridged\nto point out the problem).\n\nIt looks like two things happened in the backup. First, when I move from\n'G' to 'F' in the names column, I seem to lose the column called\n'dsp_chan'. Second, the double quotes around the float_4 array called\n'spike_hist' aren't included.\n\nI'm not sure if the double quotes are necessary, but the missing column\nis probably a problem. I added this column after the database was\ncreated by using 'alter table ellipse_cell_proc add column dsp_chan' and\nthen put it in the correct position by using:\n\nSELECT name, arm, rep, cycle, hemisphere, area, cell, dsp_chan,\nspike_hist INTO xxx FROM ellipse_cell_proc;\nDROP TABLE ellipse_cell_proc;\nALTER TABLE xxx RENAME TO ellipse_cell_proc;\n\nCan anyone explain what went wrong with the backup or where I erred\nadding the column?\n\nThanks.\n-Tony",
"msg_date": "Mon, 20 Nov 2000 10:18:41 -0800",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: Weird backup file]"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I backed up my database from Postgres 6.5.3 and migrated to 7.0.2\n> several a few months ago. For some reason, data was lost in the\n> transition. I've finally pinned it down to the attached file (abridged\n> to point out the problem).\n\n> It looks like two things happened in the backup. First, when I move from\n> 'G' to 'F' in the names column, I seem to lose the column called\n> 'dsp_chan'. Second, the double quotes around the float_4 array called\n> 'spike_hist' aren't included.\n\nIt looks like some float4[] array values got processed as text and\ninserted into the text column dsp_chan --- note that the broken rows\ninclude a \\N (null) indicator for the last column where spike_hist ought\nto be. Not quite clear how you got to that state. Possibly these are\nrows from the un-rearranged table?\n\n> I'm not sure if the double quotes are necessary, but the missing column\n> is probably a problem. I added this column after the database was\n> created by using 'alter table ellipse_cell_proc add column dsp_chan' and\n> then put it in the correct position by using:\n\n> SELECT name, arm, rep, cycle, hemisphere, area, cell, dsp_chan,\n> spike_hist INTO xxx FROM ellipse_cell_proc;\n> DROP TABLE ellipse_cell_proc;\n> ALTER TABLE xxx RENAME TO ellipse_cell_proc;\n\n> Can anyone explain what went wrong with the backup or where I erred\n> adding the column?\n\nYour procedure was fine, but ALTER TABLE RENAME was mighty flaky in\npre-7.0 releases. Even in 7.0, doing it inside a transaction block is\nasking for trouble (that's finally fixed for 7.1, thank goodness).\nI suspect you got bit by an ALTER bug. I'm not sure about the exact\nmechanism, but I have a suspicion: it looks a lot like some blocks\nof the original ellipse_cell_proc table got written into the new table.\nI know 6.5 failed to clear old shared disk buffers during a table\nrename. 
I can't recall if it was sloppy about that during a table drop\nas well, but it would've taken both bugs to cause this result if I'm\nguessing right that that was the failure path.\n\nThere are good reasons why we've been urging people to update to 7.0.*\nASAP ... I'm afraid you got bit by one :-(. Sorry about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 19:41:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Weird backup file] "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Your procedure was fine, but ALTER TABLE RENAME was mighty flaky in\n> pre-7.0 releases. Even in 7.0, doing it inside a transaction block is\n> asking for trouble (that's finally fixed for 7.1, thank goodness).\n> I suspect you got bit by an ALTER bug. I'm not sure about the exact\n> mechanism, but I have a suspicion: it looks a lot like some blocks\n> of the original ellipse_cell_proc table got written into the new table.\n> I know 6.5 failed to clear old shared disk buffers during a table\n> rename. I can't recall if it was sloppy about that during a table drop\n> as well, but it would've taken both bugs to cause this result if I'm\n> guessing right that that was the failure path.\n>\n> There are good reasons why we've been urging people to update to 7.0.*\n> ASAP ... I'm afraid you got bit by one :-(. Sorry about that.\n>\n>\n\nOkay. At least the problem has been solved. It seems though that the last 2\ntimes I've done a backup (in order to upgrade to the latest Postgres version)\nI've had data lost because of some error. I'm getting a little concerned\nabout the quality of the Postgres backups.\n\n-Tony\n\n\n>\n\n",
"msg_date": "Mon, 20 Nov 2000 17:04:29 -0800",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Weird backup file]"
}
]
|
[
{
"msg_contents": "ISTM that\n\n SET SESSION CHARACTERISTICS AS parameter value\n\nis really a more SQL'ish form of the current\n\n SET parameter =/TO value\n\nPerhaps they should be made equivalent, in order to avoid too many subtly\ndifferent subversions of the 'SET' command.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 20 Nov 2000 19:19:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "SET SESSION CHARACTERISTICS"
},
{
"msg_contents": "> SET SESSION CHARACTERISTICS AS parameter value\n> is really a more SQL'ish form of the current\n> SET parameter =/TO value\n> Perhaps they should be made equivalent, in order to avoid too many subtly\n> different subversions of the 'SET' command.\n\nHmm. What do you mean by \"equivalent\"? I assumed that the incredibly\nverbose SQL99 form is not particularly gratifying to type, and that we\nwould be interested in a shorter version of the same thing. So I kept\nthe original syntax and just added the statements that SQL99 calls out\nexplictly. Also, our \"SET\" syntax has lots more keywords than specified\nin SQL99...\n\n - Thomas\n",
"msg_date": "Tue, 21 Nov 2000 05:40:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET SESSION CHARACTERISTICS"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> > SET SESSION CHARACTERISTICS AS parameter value\n> > is really a more SQL'ish form of the current\n> > SET parameter =/TO value\n> > Perhaps they should be made equivalent, in order to avoid too many subtly\n> > different subversions of the 'SET' command.\n> \n> Hmm. What do you mean by \"equivalent\"?\n\nThat they have the same effect when invoked.\n\n> I assumed that the incredibly\n> verbose SQL99 form is not particularly gratifying to type, and that we\n> would be interested in a shorter version of the same thing.\n\nDefinitely. But it would also be nice if we didn't have too many SET\ncommands that have intersecting functionality but where it's not quite\nclear which controls what. Given that our custom short SET variant does\neffectively control \"session characteristics\" it only seemed logical to me\nthat we could map it to the more SQL'ish variant.\n\n> So I kept the original syntax and just added the statements that SQL99\n> calls out explictly.\n\nThen I don't know where you got the TRANSACTION COMMIT and TIME ZONE\nclauses from. SQL 99 doesn't have the former anywhere, and the latter\nonly as 'SET TIME ZONE' which we have already.\n\n> Also, our \"SET\" syntax has lots more keywords than specified in\n> SQL99...\n\nHmm, is it your argument that we should keep our custom parameters in our\ncustom command in order to avoid conflicts with future standards? Maybe\nso, but then we already lose.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 21 Nov 2000 17:20:39 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SET SESSION CHARACTERISTICS"
},
{
"msg_contents": "> > > SET SESSION CHARACTERISTICS AS parameter value\n> > > is really a more SQL'ish form of the current\n> > > SET parameter =/TO value\n> > > Perhaps they should be made equivalent, in order to avoid too many subtly\n> > > different subversions of the 'SET' command.\n> > Hmm. What do you mean by \"equivalent\"?\n> That they have the same effect when invoked.\n\nOK.\n\n> > I assumed that the incredibly\n> > verbose SQL99 form is not particularly gratifying to type, and that we\n> > would be interested in a shorter version of the same thing.\n> Definitely. But it would also be nice if we didn't have too many SET\n> commands that have intersecting functionality but where it's not quite\n> clear which controls what. Given that our custom short SET variant does\n> effectively control \"session characteristics\" it only seemed logical to me\n> that we could map it to the more SQL'ish variant.\n\nSure.\n\n> > So I kept the original syntax and just added the statements that SQL99\n> > calls out explictly.\n> Then I don't know where you got the TRANSACTION COMMIT and TIME ZONE\n> clauses from. SQL 99 doesn't have the former anywhere, and the latter\n> only as 'SET TIME ZONE' which we have already.\n\nOK, so maybe my recollection is not very good...\n\n> > Also, our \"SET\" syntax has lots more keywords than specified in\n> > SQL99...\n> Hmm, is it your argument that we should keep our custom parameters in our\n> custom command in order to avoid conflicts with future standards? Maybe\n> so, but then we already lose.\n\nWell, no argument really ;)\n\nI put the SET SESSION CHARACTERISTICS in as a start at the SQL99-defined\nfunctionality. Now would be a good time to make it right.\n\n - Thomas\n",
"msg_date": "Tue, 21 Nov 2000 16:44:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET SESSION CHARACTERISTICS"
}
]
|
[
{
"msg_contents": "> > > more info. It seems to not like the following from\n> > > src/include/buffer/bufpage.h (line 305):\n> > > #define PageSetLSN(page, lsn) \\\n> > > (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> > > \n> > > I'm not sure what it's trying to do... \n> > \n> > Just assign values to 8 bytes structure in pageheader.\n> \n> It's because XLogRecPtr is a struct. You can't assign structs with\n> '='. Gotta use memcpy, etc.\n\nI had no problems with this on Solaris & Linux. Also I think that\nthere are another places in code where it worked so far.\nAnyway, there are just two members in this struct - Larry, could\nyou try make this assignment by members and let us know ?\n\nVadim\n",
"msg_date": "Mon, 20 Nov 2000 11:12:18 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "* Mikheev, Vadim <[email protected]> [001120 13:26]:\n> > > > more info. It seems to not like the following from\n> > > > src/include/buffer/bufpage.h (line 305):\n> > > > #define PageSetLSN(page, lsn) \\\n> > > > (((PageHeader) (page))->pd_lsn = (XLogRecPtr) (lsn))\n> > > > \n> > > > I'm not sure what it's trying to do... \n> > > \n> > > Just assign values to 8 bytes structure in pageheader.\n> > \n> > It's because XLogRecPtr is a struct. You can't assign structs with\n> > '='. Gotta use memcpy, etc.\n> \n> I had no problems with this on Solaris & Linux. Also I think that\n> there are another places in code where it worked so far.\n> Anyway, there are just two members in this struct - Larry, could\n> you try make this assignment by members and let us know ?\nI bet you used GCC on all those platforms... cc on UnixWare is\na C99 compiler.... \n\nI'll see if I can find the right pieces. This is evidently\n*NON*-Portable in it's current form. \n> \n> Vadim\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 13:28:11 -0600",
"msg_from": "\"'Larry Rosenman'\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: err, XLOG/UW711/cc/Doesn't compile."
}
]
|
[
{
"msg_contents": "> > It's because XLogRecPtr is a struct. You can't assign structs with\n> > '='. Gotta use memcpy, etc.\n> \n> Correction: It's because the compiler won't let you cast to a\n> struct. Assigning seems to compile okay.\n\nOh, ok - seems we can just get rid of casting there.\n\nVadim\n",
"msg_date": "Mon, 20 Nov 2000 11:13:49 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: err, XLOG/UW711/cc/Doesn't compile."
},
{
"msg_contents": "* Mikheev, Vadim <[email protected]> [001120 13:29]:\n> > > It's because XLogRecPtr is a struct. You can't assign structs with\n> > > '='. Gotta use memcpy, etc.\n> > \n> > Correction: It's because the compiler won't let you cast to a\n> > struct. Assigning seems to compile okay.\n> \n> Oh, ok - seems we can just get rid of casting there.\nYup, removing the cast fixes that problem, and then we run into \nDon Baccus' issue with ecpg's yacc file. \n\n\n> \n> Vadim\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 13:35:56 -0600",
"msg_from": "\"'Larry Rosenman'\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: err, XLOG/UW711/cc/Doesn't compile."
}
]
|
[
{
"msg_contents": "Is it okay now to disable the old regression test drivers and make use of\nthe new one throughout? It would be advantageous if we all ran the same\nthing in beta.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 20 Nov 2000 23:03:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression test drivers"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Is it okay now to disable the old regression test drivers and make use of\n> the new one throughout?\n\nSure. Just point 'make runtest' at the new script ...\n\nI do have one gripe about the new script: it suppresses error messages\nfrom the DROP DATABASE step. This is fine when the suppressed message\nis \"no such database\", not fine when it is something else --- like, say,\n\"can't drop DB because it has active users\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 17:39:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression test drivers "
},
{
"msg_contents": "Peter Eisentraut writes:\n\n> Is it okay now to disable the old regression test drivers and make use of\n> the new one throughout?\n\nDone.\n\nI'm also going to remove src/test/suite (an outdated predecessor of the\ncurrent test suite). Take a quick look at this piece of PostgreSQL\nhistory before it's gone. :)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 22 Nov 2000 15:14:08 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression test drivers"
}
]
|
[
{
"msg_contents": "After TGL's batch of commits, and Vadim's fix: (we still fail geometry\nnbd).\n\n*** ./expected/opr_sanity.out\tTue Nov 14 13:32:58 2000\n--- ./results/opr_sanity.out\tMon Nov 20 16:34:26 2000\n***************\n*** 481,489 ****\n NOT ((p2.pronargs = 2 AND p1.aggbasetype = p2.proargtypes[1]) OR\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n! -------+---------+-----+-------------\n! 16998 | max | 768 | int4larger\n! 17012 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n--- 481,489 ----\n NOT ((p2.pronargs = 2 AND p1.aggbasetype = p2.proargtypes[1]) OR\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n! ------+---------+-----+-------------\n! 2523 | max | 768 | int4larger\n! 2537 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 16:38:44 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "regression fail?"
}
]
|
[
{
"msg_contents": "Current sources pass regress test except for\n\n*** ./expected/opr_sanity.out\tMon Nov 13 22:59:14 2000\n--- ./results/opr_sanity.out\tMon Nov 20 17:12:50 2000\n***************\n*** 481,489 ****\n NOT ((p2.pronargs = 2 AND p1.aggbasetype = p2.proargtypes[1]) OR\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n! -------+---------+-----+-------------\n! 16998 | max | 768 | int4larger\n! 17012 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n--- 481,489 ----\n NOT ((p2.pronargs = 2 AND p1.aggbasetype = p2.proargtypes[1]) OR\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n! ------+---------+-----+-------------\n! 2523 | max | 768 | int4larger\n! 2537 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n\nFurther investigation shows\n\ntemplate1=# select min(oid),max(oid) from pg_aggregate;\n min | max\n------+------\n 2503 | 2558\n(1 row)\n\nThis is bogus. The pg_aggregate entries should have OIDs above\n16384, not down in the reserved-OID range. It looks to me like\ninitial startup of the OID counter is wrong with WAL enabled.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 17:43:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Something screwy about OID assignment with WAL code"
}
]
|
[
{
"msg_contents": "> Further investigation shows\n> \n> template1=# select min(oid),max(oid) from pg_aggregate;\n> min | max\n> ------+------\n> 2503 | 2558\n> (1 row)\n> \n> This is bogus. The pg_aggregate entries should have OIDs above\n> 16384, not down in the reserved-OID range. It looks to me like\n> initial startup of the OID counter is wrong with WAL enabled.\n\nThanks - I'll take a look.\n\nVadim\n",
"msg_date": "Mon, 20 Nov 2000 15:14:49 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Something screwy about OID assignment with WAL code"
}
]
|
[
{
"msg_contents": " Date: Monday, November 20, 2000 @ 21:11:06\nAuthor: vadim\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n from hub.org:/home/projects/pgsql/tmp/cvs-serv62721/src/backend/access/transam\n\nModified Files:\n\txlog.c \n\n----------------------------- Log Message -----------------------------\n\nInit ShmemVariableCache in BootStrapXLOG()\n(should fix OID bootstraping).\n\n",
"msg_date": "Mon, 20 Nov 2000 21:11:07 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": "\nNope. Still fails...\n\n*** ./expected/opr_sanity.out\tTue Nov 14 13:32:58 2000\n--- ./results/opr_sanity.out\tMon Nov 20 20:27:46 2000\n***************\n*** 482,489 ****\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n -------+---------+-----+-------------\n! 16998 | max | 768 | int4larger\n! 17012 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n--- 482,489 ----\n (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n oid | aggname | oid | proname \n -------+---------+-----+-------------\n! 16997 | max | 768 | int4larger\n! 17011 | min | 769 | int4smaller\n (2 rows)\n \n -- Cross-check finalfn (if present) against its entry in pg_proc.\n\n======================================================================\n\n* [email protected] <[email protected]> [001120 20:11]:\n> Date: Monday, November 20, 2000 @ 21:11:06\n> Author: vadim\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv62721/src/backend/access/transam\n> \n> Modified Files:\n> \txlog.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> Init ShmemVariableCache in BootStrapXLOG()\n> (should fix OID bootstraping).\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 20:32:02 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> Nope. Still fails...\n\nDid you initdb?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 22:27:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/access/transam (xlog.c) "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001120 21:27]:\n> Larry Rosenman <[email protected]> writes:\n> > Nope. Still fails...\n> \n> Did you initdb?\nYes.\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 21:53:36 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": ">> Larry Rosenman <[email protected]> writes:\n>>>> Nope. Still fails...\n\nYou should've said that the OIDs are now just off-by-one from where they\nwere before, instead of off by several thousand. That I'm willing to\naccept as an implementation change ;-) I've updated the expected file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 23:33:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c) "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001120 22:34]:\n> >> Larry Rosenman <[email protected]> writes:\n> >>>> Nope. Still fails...\n> \n> You should've said that the OIDs are now just off-by-one from where they\n> were before, instead of off by several thousand. That I'm willing to\n> accept as an implementation change ;-) I've updated the expected file.\nSorry. Wasn't sure... We now pass....\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 22:48:09 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": "Yeah, I shoulda said off by one... Tom fixed the \nexpected file :-) ....\n\nLER\n\n* Vadim Mikheev <[email protected]> [001121 00:01]:\n> > Nope. Still fails...\n> \n> I know, but looks better, eh? -:)\n> \n> > \n> > *** ./expected/opr_sanity.out Tue Nov 14 13:32:58 2000\n> > --- ./results/opr_sanity.out Mon Nov 20 20:27:46 2000\n> > ***************\n> > *** 482,489 ****\n> > (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n> > oid | aggname | oid | proname \n> > -------+---------+-----+-------------\n> > ! 16998 | max | 768 | int4larger\n> > ! 17012 | min | 769 | int4smaller\n> > (2 rows)\n> > \n> > -- Cross-check finalfn (if present) against its entry in pg_proc.\n> > --- 482,489 ----\n> > (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n> > oid | aggname | oid | proname \n> > -------+---------+-----+-------------\n> > ! 16997 | max | 768 | int4larger\n> > ! 17011 | min | 769 | int4smaller\n> > (2 rows)\n> > \n> > -- Cross-check finalfn (if present) against its entry in pg_proc.\n> > \n> > ======================================================================\n> > \n> > * [email protected] <[email protected]> [001120 20:11]:\n> > > Date: Monday, November 20, 2000 @ 21:11:06\n> > > Author: vadim\n> > > \n> > > Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n> > > from hub.org:/home/projects/pgsql/tmp/cvs-serv62721/src/backend/access/transam\n> > > \n> > > Modified Files:\n> > > xlog.c \n> > > \n> > > ----------------------------- Log Message -----------------------------\n> > > \n> > > Init ShmemVariableCache in BootStrapXLOG()\n> > > (should fix OID bootstraping).\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 
75044-6749\n",
"msg_date": "Tue, 21 Nov 2000 00:03:05 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": "> Nope. Still fails...\n\nI know, but looks better, eh? -:)\n\n> \n> *** ./expected/opr_sanity.out Tue Nov 14 13:32:58 2000\n> --- ./results/opr_sanity.out Mon Nov 20 20:27:46 2000\n> ***************\n> *** 482,489 ****\n> (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n> oid | aggname | oid | proname \n> -------+---------+-----+-------------\n> ! 16998 | max | 768 | int4larger\n> ! 17012 | min | 769 | int4smaller\n> (2 rows)\n> \n> -- Cross-check finalfn (if present) against its entry in pg_proc.\n> --- 482,489 ----\n> (p2.pronargs = 1 AND p1.aggbasetype = 0)));\n> oid | aggname | oid | proname \n> -------+---------+-----+-------------\n> ! 16997 | max | 768 | int4larger\n> ! 17011 | min | 769 | int4smaller\n> (2 rows)\n> \n> -- Cross-check finalfn (if present) against its entry in pg_proc.\n> \n> ======================================================================\n> \n> * [email protected] <[email protected]> [001120 20:11]:\n> > Date: Monday, November 20, 2000 @ 21:11:06\n> > Author: vadim\n> > \n> > Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n> > from hub.org:/home/projects/pgsql/tmp/cvs-serv62721/src/backend/access/transam\n> > \n> > Modified Files:\n> > xlog.c \n> > \n> > ----------------------------- Log Message -----------------------------\n> > \n> > Init ShmemVariableCache in BootStrapXLOG()\n> > (should fix OID bootstraping).\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n",
"msg_date": "Mon, 20 Nov 2000 22:09:00 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c)"
},
{
"msg_contents": "> >> Larry Rosenman <[email protected]> writes:\n> >>>> Nope. Still fails...\n> \n> You should've said that the OIDs are now just off-by-one from where they\n> were before, instead of off by several thousand. That I'm willing to\n> accept as an implementation change ;-) I've updated the expected file.\n\nActually, pg_shadow' oid for DBA inserted by initdb is 2 now - I'm fixing\nthis now...\n\nVadim\n\n\n",
"msg_date": "Mon, 20 Nov 2000 22:18:25 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c) "
}
]
|
[
{
"msg_contents": "\nGot the following errors when using very recent CVS build:\n\nOn client (psql):\n\n1: psql:./indexes_1.sql:12: pqReadData() -- backend closed the channel\nunexpectedly.\n2: This probably means the backend terminated abnormally\n3: before or while processing the request.\n4: psql:./indexes_1.sql:12: connection to server was lost\n\non backend:\n\nDEBUG: MoveOfflineLogs: skip 0000000000000002\nDEBUG: MoveOfflineLogs: skip 0000000000000001\nDEBUG: --Relation pg_type--\nDEBUG: Pages 2: Changed 1, reaped 0, Empty 0, New 0; Tup 136: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 106, MaxLen 109; Re-us\ning: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nDEBUG: Index pg_type_oid_index: Pages 2; Tuples 136. CPU 0.00s/0.00u sec.\nDEBUG: Index pg_type_typname_index: Pages 2; Tuples 136. CPU 0.00s/0.00u sec.\nDEBUG: Analyzing...\nDEBUG: --Relation pg_attribute--\nDEBUG: Pages 11: Changed 4, reaped 3, Empty 0, New 0; Tup 796: Vac 54,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 98, MaxLen 98; Re-us\ning: Free/Avail. Space 6412/3720; EndEmpty/Avail. Pages 0/2. CPU\n0.00s/0.00u sec.\nDEBUG: Index pg_attribute_relid_attnam_index: Pages 12; Tuples 796:\nDeleted 54. CPU 0.00s/0.01u sec.\nDEBUG: Index pg_attribute_relid_attnum_index: Pages 6; Tuples 796: Deleted\n54. 
CPU 0.00s/0.00u sec.\nTRAP: Failed Assertion(\"!(((file) > 0 && (file) < (int) SizeVfdCache &&\nVfdCache[file].fileName != ((void *)0))):\", File: \"fd.c\", Li\nne: 967)\n!(((file) > 0 && (file) < (int) SizeVfdCache && VfdCache[file].fileName !=\n((void *)0))) (0)\nServer process (pid 7187) exited with status 6 at Tue Nov 21 13:44:27 2000\nTerminating any active server processes...\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to\nterminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nServer processes were terminated at Tue Nov 21 13:44:27 2000\nReinitializing shared memory and semaphores\nDEBUG: Data Base System is starting up at Tue Nov 21 13:44:28 2000\nDEBUG: Data Base System was interrupted being in production at Tue Nov 21\n13:44:07 2000\nDEBUG: CheckPoint record at (0, 31491092)\nDEBUG: Redo record at (0, 31470036); Undo record at (0, 23533448);\nShutdown FALSE\nDEBUG: NextTransactionId: 771; NextOid: 280037\nDEBUG: The DataBase system was not properly shut down\n Automatic recovery is in progress...\nDEBUG: Redo starts at (0, 31470036)\nDEBUG: Redo done at (0, 31636584)\nDEBUG: MoveOfflineLogs: skip 0000000000000002\nDEBUG: MoveOfflineLogs: skip 0000000000000001\nDEBUG: Data Base System is in production state at Tue Nov 21 13:44:32 2000\n\n\nNow, when I go into psql and type \\d, I get:\n\nbirds_load=# \\d\nERROR: cannot open pg_proc_proname_narg_type_index: No such file or directory\n\n\nI can run the backend in debug to get a backtrace, if that would help, but\nI won;t have a chance for a few hours.\n\nFWIW, the task being performed was to load about 80000 records using\nDBD::Pg via INSERT statements in a *single* transaction, then commit and\ndefine 5 indexes. The final step was to run a 'vacuum analyze'. 
From the\nlog looks like it was in the vacuum step.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 13:54:25 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assert Failure with current CVS"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> TRAP: Failed Assertion(\"!(((file) > 0 && (file) < (int) SizeVfdCache &&\n> VfdCache[file].fileName != ((void *)0))):\", File: \"fd.c\", Line: 967)\n> !(((file) > 0 && (file) < (int) SizeVfdCache && VfdCache[file].fileName !=\n> ((void *)0))) (0)\n> Server process (pid 7187) exited with status 6 at Tue Nov 21 13:44:27 2000\n\nThere should be a core file from this --- backtrace please?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 22:38:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert Failure with current CVS "
},
{
"msg_contents": "At 22:38 20/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> TRAP: Failed Assertion(\"!(((file) > 0 && (file) < (int) SizeVfdCache &&\n>> VfdCache[file].fileName != ((void *)0))):\", File: \"fd.c\", Line: 967)\n>> !(((file) > 0 && (file) < (int) SizeVfdCache && VfdCache[file].fileName !=\n>> ((void *)0))) (0)\n>> Server process (pid 7187) exited with status 6 at Tue Nov 21 13:44:27 2000\n>\n>There should be a core file from this --- backtrace please?\n>\n\n#0 0x400da0d1 in __kill () at soinit.c:27\n#1 0x400d9eff in raise (sig=6) at ../sysdeps/posix/raise.c:27\n#2 0x400db19b in abort () at ../sysdeps/generic/abort.c:83\n#3 0x8155068 in ExcAbort () at excabort.c:27\n#4 0x8154fc7 in ExcUnCaught (excP=0x81c3d58, detail=0, data=0x0,\n message=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache &&\nVfdCache[file].fileName != ((void *)0)))\") at exc.c:178\n#5 0x815501a in ExcRaise (excP=0x81c3d58, detail=0, data=0x0,\n message=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache &&\nVfdCache[file].fileName != ((void *)0)))\") at exc.c:195\n#6 0x81540ef in ExceptionalCondition (\n conditionName=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache\n&& VfdCache[file].fileName != ((void *)0)))\",\n exceptionP=0x81c3d58, detail=0x0, fileName=0x81a3e87 \"fd.c\",\nlineNumber=967) at assert.c:73\n#7 0x810afeb in FileSync (file=31) at fd.c:967\n#8 0x81136e0 in mdcommit () at md.c:818\n#9 0x8114510 in smgrcommit () at smgr.c:519\n#10 0x8107fed in BufmgrCommit () at xlog_bufmgr.c:1071\n#11 0x808c7f6 in RecordTransactionCommit () at xact.c:688\n#12 0x80bd245 in repair_frag (vacrelstats=0x8234d44, onerel=0x8210344,\nvacuum_pages=0xbfffeeac, fraged_pages=0xbfffee9c,\n nindices=2, Irel=0x8234dbc) at vacuum.c:1790\n#13 0x80ba26b in vacuum_rel (relid=1249, analyze=1, is_toastrel=0 '\\000')\nat vacuum.c:477\n#14 0x80b9cf3 in vac_vacuum (VacRelP=0x0, analyze=1 '\\001', anal_cols2=0x0)\nat vacuum.c:245\n#15 0x80b9c6c in vacuum 
(vacrel=0x0, verbose=0, analyze=1 '\\001',\nanal_cols=0x0) at vacuum.c:163\n#16 0x8117a25 in ProcessUtility (parsetree=0x824529c, dest=Remote) at\nutility.c:690\n#17 0x8115775 in pg_exec_query_string (query_string=0x8244f50 \"vacuum\nanalyze;\", dest=Remote, parse_context=0x81fbe58)\n at postgres.c:786\n#18 0x8116802 in PostgresMain (argc=4, argv=0xbffff148, real_argc=3,\nreal_argv=0xbffffa34, username=0x8208f99 \"pjw\")\n at postgres.c:1826\n#19 0x80fd6ef in DoBackend (port=0x8208d30) at postmaster.c:2060\n#20 0x80fd28a in BackendStartup (port=0x8208d30) at postmaster.c:1837\n#21 0x80fc556 in ServerLoop () at postmaster.c:1027\n#22 0x80fbf3d in PostmasterMain (argc=3, argv=0xbffffa34) at postmaster.c:700\n#23 0x80dc095 in main (argc=3, argv=0xbffffa34) at main.c:112\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 16:01:49 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assert Failure with current CVS "
},
{
"msg_contents": "At 22:38 20/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> TRAP: Failed Assertion(\"!(((file) > 0 && (file) < (int) SizeVfdCache &&\n>> VfdCache[file].fileName != ((void *)0))):\", File: \"fd.c\", Line: 967)\n>> !(((file) > 0 && (file) < (int) SizeVfdCache && VfdCache[file].fileName !=\n>> ((void *)0))) (0)\n>> Server process (pid 7187) exited with status 6 at Tue Nov 21 13:44:27 2000\n>\n>There should be a core file from this --- backtrace please?\n>\n\n#0 0x400da0d1 in __kill () at soinit.c:27\n#1 0x400d9eff in raise (sig=6) at ../sysdeps/posix/raise.c:27\n#2 0x400db19b in abort () at ../sysdeps/generic/abort.c:83\n#3 0x8155068 in ExcAbort () at excabort.c:27\n#4 0x8154fc7 in ExcUnCaught (excP=0x81c3d58, detail=0, data=0x0,\n message=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache &&\nVfdCache[file].fileName != ((void *)0)))\") at exc.c:178\n#5 0x815501a in ExcRaise (excP=0x81c3d58, detail=0, data=0x0,\n message=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache &&\nVfdCache[file].fileName != ((void *)0)))\") at exc.c:195\n#6 0x81540ef in ExceptionalCondition (\n conditionName=0x81a3fa0 \"!(((file) > 0 && (file) < (int) SizeVfdCache\n&& VfdCache[file].fileName != ((void *)0)))\",\n exceptionP=0x81c3d58, detail=0x0, fileName=0x81a3e87 \"fd.c\",\nlineNumber=967) at assert.c:73\n#7 0x810afeb in FileSync (file=31) at fd.c:967\n#8 0x81136e0 in mdcommit () at md.c:818\n#9 0x8114510 in smgrcommit () at smgr.c:519\n#10 0x8107fed in BufmgrCommit () at xlog_bufmgr.c:1071\n#11 0x808c7f6 in RecordTransactionCommit () at xact.c:688\n#12 0x80bd245 in repair_frag (vacrelstats=0x8234d44, onerel=0x8210344,\nvacuum_pages=0xbfffeeac, fraged_pages=0xbfffee9c,\n nindices=2, Irel=0x8234dbc) at vacuum.c:1790\n#13 0x80ba26b in vacuum_rel (relid=1249, analyze=1, is_toastrel=0 '\\000')\nat vacuum.c:477\n#14 0x80b9cf3 in vac_vacuum (VacRelP=0x0, analyze=1 '\\001', anal_cols2=0x0)\nat vacuum.c:245\n#15 0x80b9c6c in vacuum 
(vacrel=0x0, verbose=0, analyze=1 '\\001',\nanal_cols=0x0) at vacuum.c:163\n#16 0x8117a25 in ProcessUtility (parsetree=0x824529c, dest=Remote) at\nutility.c:690\n#17 0x8115775 in pg_exec_query_string (query_string=0x8244f50 \"vacuum\nanalyze;\", dest=Remote, parse_context=0x81fbe58)\n at postgres.c:786\n#18 0x8116802 in PostgresMain (argc=4, argv=0xbffff148, real_argc=3,\nreal_argv=0xbffffa34, username=0x8208f99 \"pjw\")\n at postgres.c:1826\n#19 0x80fd6ef in DoBackend (port=0x8208d30) at postmaster.c:2060\n#20 0x80fd28a in BackendStartup (port=0x8208d30) at postmaster.c:1837\n#21 0x80fc556 in ServerLoop () at postmaster.c:1027\n#22 0x80fbf3d in PostmasterMain (argc=3, argv=0xbffffa34) at postmaster.c:700\n#23 0x80dc095 in main (argc=3, argv=0xbffffa34) at main.c:112\n\n\nFWIW, this is quite reproducible.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Nov 2000 16:02:10 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assert Failure with current CVS "
}
]
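The assertion that fires in FileSync() in the thread above is the fd.c validity check quoted verbatim in the trap message. A minimal standalone sketch of that check (the names and cache size below are hypothetical, not the actual fd.c layout) shows why a handle whose cache slot has already been cleared — e.g. a virtual file descriptor released before the commit-time sync — trips it:

```c
#include <stddef.h>

#define SIZE_VFD_CACHE 64                    /* hypothetical cache size */
static const char *VfdName[SIZE_VFD_CACHE];  /* NULL = slot not in use */

/* Mirrors the quoted condition: file > 0 && file < SizeVfdCache &&
 * VfdCache[file].fileName != NULL.  A handle is valid only while its
 * cache slot still carries a file name. */
static int file_is_valid(int file)
{
    return file > 0 && file < SIZE_VFD_CACHE && VfdName[file] != NULL;
}
```

Slot 31 in the backtrace fails this test, i.e. FileSync() was handed a descriptor whose cache entry had already been released by the time mdcommit() tried to sync it during VACUUM's internal commit.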
|
[
{
"msg_contents": " Date: Monday, November 20, 2000 @ 22:23:19\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n from hub.org:/home/projects/pgsql/tmp/cvs-serv86997/src/backend/utils/adt\n\nModified Files:\n\toid.c \n\n----------------------------- Log Message -----------------------------\n\nMake oidin/oidout produce and consume unsigned representation of Oid,\nrather than just being aliases for int4in/int4out. Give type Oid a\nfull set of comparison operators that do proper unsigned comparison,\ninstead of reusing the int4 comparators. Since pg_dump is now doing\nunsigned comparisons of OIDs, it is now *necessary* that we play by\nthe rules here. In fact, given that btoidcmp() has been doing unsigned\ncomparison for quite some time, it seems likely that we have index-\ncorruption problems in 7.0 and before once the Oid counter goes past\n2G. Fixing these operators is a necessary step before we can think\nabout 8-byte Oid, too.\n",
"msg_date": "Mon, 20 Nov 2000 22:23:19 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/utils/adt (oid.c)"
},
{
"msg_contents": "Missing an #include of <errno.h>:\n\ncc -O -K inline -I/usr/local/include -I../../../../src/include -c -o numeric.o numeric.c\nUX:acomp: WARNING: \"numeric.c\", line 1953: end-of-loop code not reached\nUX:acomp: WARNING: \"numeric.c\", line 1991: end-of-loop code not reached\nUX:acomp: WARNING: \"numeric.c\", line 2058: end-of-loop code not reached\nUX:acomp: WARNING: \"numeric.c\", line 2118: end-of-loop code not reached\nUX:acomp: WARNING: \"numeric.c\", line 2147: end-of-loop code not reached\nUX:acomp: WARNING: \"numeric.c\", line 2176: end-of-loop code not reached\ncc -O -K inline -I/usr/local/include -I../../../../src/include -c -o numutils.o numutils.c\ncc -O -K inline -I/usr/local/include -I../../../../src/include -c -o oid.o oid.c\nUX:acomp: ERROR: \"oid.c\", line 98: undefined symbol: errno\nUX:acomp: ERROR: \"oid.c\", line 108: undefined symbol: EINVAL\ngmake[4]: *** [oid.o] Error 1\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/adt'\ngmake[3]: *** [adt-recursive] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n* [email protected] <[email protected]> [001120 21:26]:\n> Date: Monday, November 20, 2000 @ 22:23:19\n> Author: tgl\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv86997/src/backend/utils/adt\n> \n> Modified Files:\n> \toid.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> Make oidin/oidout produce and consume unsigned representation of Oid,\n> rather than just being aliases for int4in/int4out. Give type Oid a\n> full set of comparison operators that do proper unsigned comparison,\n> instead of reusing the int4 comparators. 
Since pg_dump is now doing\n> unsigned comparisons of OIDs, it is now *necessary* that we play by\n> the rules here. In fact, given that btoidcmp() has been doing unsigned\n> comparison for quite some time, it seems likely that we have index-\n> corruption problems in 7.0 and before once the Oid counter goes past\n> 2G. Fixing these operators is a necessary step before we can think\n> about 8-byte Oid, too.\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 20 Nov 2000 22:13:22 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/utils/adt (oid.c)"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> Missing an #include of <errno.h>:\n\nOoops, sorry about that --- <errno.h> gets included by some other\nstandard header on my system, so I tend to miss that omission :-(\nWill fix shortly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 23:24:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/utils/adt (oid.c) "
},
{
"msg_contents": "Can I assume this TODO item is now done?\n\n\t* Make oid use unsigned int more reliably, pg_atoi()\n\n> Date: Monday, November 20, 2000 @ 22:23:19\n> Author: tgl\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv86997/src/backend/utils/adt\n> \n> Modified Files:\n> \toid.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> Make oidin/oidout produce and consume unsigned representation of Oid,\n> rather than just being aliases for int4in/int4out. Give type Oid a\n> full set of comparison operators that do proper unsigned comparison,\n> instead of reusing the int4 comparators. Since pg_dump is now doing\n> unsigned comparisons of OIDs, it is now *necessary* that we play by\n> the rules here. In fact, given that btoidcmp() has been doing unsigned\n> comparison for quite some time, it seems likely that we have index-\n> corruption problems in 7.0 and before once the Oid counter goes past\n> 2G. Fixing these operators is a necessary step before we can think\n> about 8-byte Oid, too.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Nov 2000 22:59:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/utils/adt (oid.c)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can I assume this TODO item is now done?\n> \t* Make oid use unsigned int more reliably, pg_atoi()\n\nNo. I cleaned up the LO-related contrib modules today, but I wouldn't\ncare to assert that OID is now handled correctly everywhere :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 00:07:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/utils/adt (oid.c) "
}
]
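The correctness issue the commit message describes — signed int4 comparators mis-ordering OIDs once the counter passes 2G — can be seen in a few lines of standalone C. The comparators below are written in the style of btoidcmp(), not copied from the backend source:

```c
#include <stdint.h>

typedef uint32_t Oid;

/* Unsigned three-way comparison, btoidcmp()-style. */
static int oid_cmp(Oid a, Oid b)
{
    return (a < b) ? -1 : (a > b) ? 1 : 0;
}

/* What reusing a signed int4 comparator computes instead. */
static int int4_cmp(int32_t a, int32_t b)
{
    return (a < b) ? -1 : (a > b) ? 1 : 0;
}
```

For an OID past the 2G boundary, say 3000000000, the signed comparator sees a negative number and sorts it *before* OID 100 — exactly the index-ordering corruption the commit message warns about — while the unsigned comparator orders it correctly.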
|
[
{
"msg_contents": " Date: Monday, November 20, 2000 @ 23:01:10\nAuthor: inoue\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n from hub.org:/tmp/cvs-serv1659/utils/adt\n\nModified Files:\n\tri_triggers.c \n\n----------------------------- Log Message -----------------------------\n\nkeep relations open until they are no longer needed.\n\n",
"msg_date": "Mon, 20 Nov 2000 23:01:10 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/utils/adt (ri_triggers.c)"
},
{
"msg_contents": "[email protected] writes:\n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n> Modified Files:\n> \tri_triggers.c \n> keep relations open until they are no longer needed.\n\nSomething that's been bothering me for a good while about ri_triggers\nis that it opens the relations without any lock to begin with.\nThat can't possibly be safe, can it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Nov 2000 23:37:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/adt (ri_triggers.c) "
},
{
"msg_contents": "At 11:37 PM 11/20/00 -0500, Tom Lane wrote:\n>[email protected] writes:\n>> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n>> Modified Files:\n>> \tri_triggers.c \n>> keep relations open until they are no longer needed.\n>\n>Something that's been bothering me for a good while about ri_triggers\n>is that it opens the relations without any lock to begin with.\n>That can't possibly be safe, can it?\n\nHmmm...I only worked within the structure Jan built (to fix/implement\nsemantics) but there are efforts to lock things down with \"select for\nupdate\" where Jan felt it was necessary. Whether or not that's sufficient\nis another question, but he obviously gave it *some* thought.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 20 Nov 2000 21:09:26 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [COMMITTERS] pgsql/src/backend/utils/adt\n (ri_triggers.c)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] writes:\n> > Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt\n> > Modified Files:\n> > ri_triggers.c\n> > keep relations open until they are no longer needed.\n>\n> Something that's been bothering me for a good while about ri_triggers\n> is that it opens the relations without any lock to begin with.\n> That can't possibly be safe, can it?\n\nOpening relations with no lock seems illegal to me.\nThough I have no evidence that it does wrong thing\nin ri_triggers.c,it seems that we had better acquire\nan AccessShareLock on trial.\nI sometimes see SEGV error around ri stuff and\nI've doubted opening relations with no lock.\nHowever the cause was different from it.\n\nHiroshi Inoue\n\n",
"msg_date": "Tue, 21 Nov 2000 18:54:24 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [COMMITTERS] pgsql/src/backend/utils/adt\n (ri_triggers.c)"
}
]
|
[
{
"msg_contents": "I sent this to the general list and got no response so I figure I can take\nit to the people who actually make the decisions.\n\nIs this a security bug or is it by design?\n\n----- Original Message -----\nFrom: \"Dan Wilson\" <[email protected]>\nTo: \"pgsql general\" <[email protected]>\nSent: Sunday, November 19, 2000 9:33 AM\nSubject: DB and Table Permissions\n\n\n> Is there a reason why _any_ user can create a table on a database? Even if\n> they do not own or have any permissions to it?\n>\n> I don't think that should happen. Is there a specific reason why it does?\n>\n> -Dan Wilson\n>\n\n",
"msg_date": "Mon, 20 Nov 2000 23:27:48 -0800",
"msg_from": "\"Dan Wilson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: DB and Table Permissions"
},
{
"msg_contents": "> > ----- Original Message -----\n> > From: \"Dan Wilson\" <[email protected]>\n> > To: \"pgsql general\" <[email protected]>\n> > Sent: Sunday, November 19, 2000 9:33 AM\n> > Subject: DB and Table Permissions\n> >\n> > > Is there a reason why _any_ user can create a table on a database?\nEven if\n> > > they do not own or have any permissions to it?\n> > >\n> > > I don't think that should happen. Is there a specific reason why it\ndoes?\n>\n> Well, you should be able to do \"GRANT ...\" statements against the pg_...\n> tables to control this if you want to.\n>\n> Cheers,\n> Andrew.\n\n\nUsing GRANT and REVOKE statements doesn't help because the permissions are\nattached to the table, not the database. So any user can create a new table\nwithin a database even if they are not the owner. I think this needs to be\ncorrected somehow.\n-Dan\n\n",
"msg_date": "Fri, 24 Nov 2000 09:31:39 -0800",
"msg_from": "\"Dan Wilson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fw: DB and Table Permissions"
}
]
|
[
{
"msg_contents": "> > >>>> Nope. Still fails...\n> > \n> > You should've said that the OIDs are now just off-by-one from where they\n> > were before, instead of off by several thousand. That I'm willing to\n> > accept as an implementation change ;-) I've updated the expected file.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nI hope for it -:)\n\n> Actually, pg_shadow' oid for DBA inserted by initdb is 2 now - I'm fixing\n> this now...\n\nFixed.\n\nVadim\n\n\n",
"msg_date": "Tue, 21 Nov 2000 02:01:55 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/backend/access/transam (xlog.c) "
}
]
|
[
{
"msg_contents": "\nFix some english issues...\nI also note some \"interesting\" (from an English perspective) #define \nnames that mayhaps need to be looked at. \n\n\nIndex: xlog.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam/xlog.c,v\nretrieving revision 1.31\ndiff -c -r1.31 xlog.c\n*** xlog.c\t2000/11/21 10:17:57\t1.31\n--- xlog.c\t2000/11/21 13:12:49\n***************\n*** 1426,1432 ****\n \t\t\t ControlFile->catalog_version_no, CATALOG_VERSION_NO);\n \n \tif (ControlFile->state == DB_SHUTDOWNED)\n! \t\telog(LOG, \"Data Base System was shutted down at %s\",\n \t\t\t str_time(ControlFile->time));\n \telse if (ControlFile->state == DB_SHUTDOWNING)\n \t\telog(LOG, \"Data Base System was interrupted when shutting down at %s\",\n--- 1426,1432 ----\n \t\t\t ControlFile->catalog_version_no, CATALOG_VERSION_NO);\n \n \tif (ControlFile->state == DB_SHUTDOWNED)\n! \t\telog(LOG, \"Data Base System was shutdown at %s\",\n \t\t\t str_time(ControlFile->time));\n \telse if (ControlFile->state == DB_SHUTDOWNING)\n \t\telog(LOG, \"Data Base System was interrupted when shutting down at %s\",\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 21 Nov 2000 07:14:34 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "quick english patch"
},
{
"msg_contents": "Want me to do a new patch, or will you fix mine? \n\nLER\n\n* Peter Eisentraut <[email protected]> [001121 11:51]:\n> Larry Rosenman writes:\n> \n> > --- 1426,1432 ----\n> > \t\t\t ControlFile->catalog_version_no, CATALOG_VERSION_NO);\n> > \n> > \tif (ControlFile->state == DB_SHUTDOWNED)\n> > ! \t\telog(LOG, \"Data Base System was shutdown at %s\",\n> \n> shut down (two words)\n> \n> > \t\t\t str_time(ControlFile->time));\n> > \telse if (ControlFile->state == DB_SHUTDOWNING)\n> > \t\telog(LOG, \"Data Base System was interrupted when shutting down at %s\",\n> > \n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 21 Nov 2000 11:54:11 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: quick english patch"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> --- 1426,1432 ----\n> \t\t\t ControlFile->catalog_version_no, CATALOG_VERSION_NO);\n> \n> \tif (ControlFile->state == DB_SHUTDOWNED)\n> ! \t\telog(LOG, \"Data Base System was shutdown at %s\",\n\nshut down (two words)\n\n> \t\t\t str_time(ControlFile->time));\n> \telse if (ControlFile->state == DB_SHUTDOWNING)\n> \t\telog(LOG, \"Data Base System was interrupted when shutting down at %s\",\n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 21 Nov 2000 18:56:57 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quick english patch"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Want me to do a new patch, or will you fix mine? \n\nI'll fix all these things. I'm also somewhat annoyed that these messages\nshow up during initdb now. Anyone know why exactly? I couldn't trace it\ndown.\n\n\n> \n> LER\n> \n> * Peter Eisentraut <[email protected]> [001121 11:51]:\n> > Larry Rosenman writes:\n> > \n> > > --- 1426,1432 ----\n> > > \t\t\t ControlFile->catalog_version_no, CATALOG_VERSION_NO);\n> > > \n> > > \tif (ControlFile->state == DB_SHUTDOWNED)\n> > > ! \t\telog(LOG, \"Data Base System was shutdown at %s\",\n> > \n> > shut down (two words)\n> > \n> > > \t\t\t str_time(ControlFile->time));\n> > > \telse if (ControlFile->state == DB_SHUTDOWNING)\n> > > \t\telog(LOG, \"Data Base System was interrupted when shutting down at %s\",\n> > > \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 21 Nov 2000 19:28:36 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quick english patch"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'm also somewhat annoyed that these messages show up during initdb\n> now. Anyone know why exactly? I couldn't trace it down.\n\nI assume you're talking about this DEBUG stuff:\n\n...\nCreating directory /home/postgres/testversion/data/pg_xlog\nCreating template1 database in /home/postgres/testversion/data/base/1\nDEBUG: starting up\nDEBUG: database system was shut down at 2000-11-22 14:38:01\nDEBUG: CheckPoint record at (0, 8)\nDEBUG: Redo record at (0, 8); Undo record at (0, 8); Shutdown TRUE\nDEBUG: NextTransactionId: 514; NextOid: 16384\nDEBUG: database system is in production state\nCreating global relations in /home/postgres/testversion/data/global\nDEBUG: starting up\nDEBUG: database system was shut down at 2000-11-22 14:38:09\nDEBUG: CheckPoint record at (0, 96)\nDEBUG: Redo record at (0, 96); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 514; NextOid: 17199\nDEBUG: database system is in production state\nInitializing pg_shadow.\nEnabling unlimited row width for system tables.\n...\n\nAFAICT, it's always been true that elog(DEBUG) will write to stderr,\nand initdb does not redirect the backend's stderr. The change is that\nwith XLOG enabled, there is now code that will do elog(DEBUG) in the\ndefault path of control during initdb's bootstrap processing.\nSpecifically, all this chatter is coming out of StartupXLOG() in xlog.c.\nEvidently, up to now there were no elog(DEBUG) calls encountered during\na normal bootstrap run.\n\nNot sure whether we should change any code or not. I don't much like\nthe idea of having initdb send stderr to /dev/null, for example.\nPerhaps StartupXLOG could be made a little less chatty, however?\n\n\nBTW, Vadim, what is the reasoning for your having invented aliases\nSTOP and LOG for elog levels REALLYFATAL and DEBUG? 
I think it's\nconfusing to have more than one name for the same severity level.\nIf we're going to open up the issue of renaming the elog levels to\nsomething saner, there are a whole bunch of changes to be undertaken,\nand these aren't the names I'd choose anyway ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 15:11:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Talkative initdb, elog message levels"
}
]
|
[
{
"msg_contents": "I've subscribed and un-subscribed to the HACKERS-DIGEST list several\ntimes now. Each time I seem to be getting EVERY message sent to the list\nrather than a DIGEST.\n\nCan someone tell me if it is still possible to get a DIGEST of the list?\nIs the list administrator aware of the problem?\n\nThanks.\n-Tony\n\n\n",
"msg_date": "Tue, 21 Nov 2000 09:45:02 -0800",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Still having problems with DIGEST"
}
]
|
[
{
"msg_contents": "> This snippet in xlog.c makes we wonder...\n> \n> \telse if (ControlFile->state == DB_IN_RECOVERY)\n> \t{\n> \t\telog(LOG, \"Data Base System was interrupted \n> being in recovery at %s\\n\"\n> \t\t\t \"\\tThis propably means that some data \n> blocks are corrupted\\n\"\n> \t\t\t \"\\tAnd you will have to use last \n> backup for recovery\",\n> \t\t\t str_time(ControlFile->time));\n> \t}\n> \n> I thought this was going to be crash safe.\n\nWAL doesn't protect against disk block corruption what\ncould be reason of crash (or elog(STOP)) during recovery\nin most cases. Apart from disk corruption recovery is\n(or should be -:)) crash safe.\n\nVadim\n",
"msg_date": "Tue, 21 Nov 2000 14:01:07 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Crash during WAL recovery?"
}
]
|
[
{
"msg_contents": "This snippet in xlog.c makes we wonder...\n\n\telse if (ControlFile->state == DB_IN_RECOVERY)\n\t{\n\t\telog(LOG, \"Data Base System was interrupted being in recovery at %s\\n\"\n\t\t\t \"\\tThis propably means that some data blocks are corrupted\\n\"\n\t\t\t \"\\tAnd you will have to use last backup for recovery\",\n\t\t\t str_time(ControlFile->time));\n\t}\n\nI thought this was going to be crash safe.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 21 Nov 2000 23:03:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Crash during WAL recovery?"
},
{
"msg_contents": "Is there any particular reason the spelling and punctuation in the code\nsnippet below is so bad?\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Peter Eisentraut\n> Sent: Wednesday, November 22, 2000 6:04 AM\n> To: PostgreSQL Development\n> Subject: [HACKERS] Crash during WAL recovery?\n>\n>\n> This snippet in xlog.c makes we wonder...\n>\n> \telse if (ControlFile->state == DB_IN_RECOVERY)\n> \t{\n> \t\telog(LOG, \"Data Base System was interrupted being\n> in recovery at %s\\n\"\n> \t\t\t \"\\tThis propably means that some data\n> blocks are corrupted\\n\"\n> \t\t\t \"\\tAnd you will have to use last backup\n> for recovery\",\n> \t\t\t str_time(ControlFile->time));\n> \t}\n>\n> I thought this was going to be crash safe.\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n\n",
"msg_date": "Wed, 22 Nov 2000 09:14:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Crash during WAL recovery?"
},
{
"msg_contents": "At 09:14 AM 11/22/00 +0800, Christopher Kings-Lynne wrote:\n>Is there any particular reason the spelling and punctuation in the code\n>snippet below is so bad?\n\nVadim's Russian. This impacts his english but not his ability to implement\ncomplex features like MVCC and WAL :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 21 Nov 2000 17:27:07 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Crash during WAL recovery?"
},
{
"msg_contents": ">> Is there any particular reason the spelling and punctuation in the code\n>> snippet below is so bad?\n\n> Vadim's Russian. This impacts his english but not his ability to implement\n> complex features like MVCC and WAL :)\n\nAs someone who can't speak anything but English worth a damn (even\nthough I was raised in Spanish-speaking countries, so you'd think\nI'd have acquired at least one clue), I have long since learned not\nto criticize the English of non-native speakers. Many of the\nparticipants in this project are doing far better than I would if\nthe tables were turned. So, I fix grammatical and spelling errors\nif I have another reason to be editing some piece of documentation,\nbut I never hold it against the original author.\n\nMore generally, a lot of the PG documentation could use the attention\nof a professional copy editor --- and I'm sad to say that the parts\ncontributed by native English speakers aren't necessarily any cleaner\nthan the parts contributed by those who are not. If you have the\ntime and energy to submit corrections, please fall to!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 00:29:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery? "
},
{
"msg_contents": "> As someone who can't speak anything but English worth a damn (even\n> though I was raised in Spanish-speaking countries, so you'd think\n> I'd have acquired at least one clue), I have long since learned not\n> to criticize the English of non-native speakers. Many of the\n> participants in this project are doing far better than I would if\n> the tables were turned. So, I fix grammatical and spelling errors\n> if I have another reason to be editing some piece of documentation,\n> but I never hold it against the original author.\n> \n> More generally, a lot of the PG documentation could use the attention\n> of a professional copy editor --- and I'm sad to say that the parts\n> contributed by native English speakers aren't necessarily any cleaner\n> than the parts contributed by those who are not. If you have the\n> time and energy to submit corrections, please fall to!\n\nI did have AW's copyeditor go through the refence manual. Would be nice\nif they had done the other manuals too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Nov 2000 00:35:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "At 12:29 AM 11/22/00 -0500, Tom Lane wrote:\n>>> Is there any particular reason the spelling and punctuation in the code\n>>> snippet below is so bad?\n>\n>> Vadim's Russian. This impacts his english but not his ability to implement\n>> complex features like MVCC and WAL :)\n>\n>As someone who can't speak anything but English worth a damn (even\n>though I was raised in Spanish-speaking countries, so you'd think\n>I'd have acquired at least one clue), I have long since learned not\n>to criticize the English of non-native speakers.\n\nI think it's certain that the original poster didn't realize Vadim is not\na native English speaker, which is why I made my comment (to clue him in).\nVadim didn't take my comment as criticism, as his follow-on post made clear\n(he got the joke). I don't know from your post if you thought I was adding\nto the criticism or not, but I can say with certainty I wasn't. In my\nprevious life as the founder of a company specializing in optimizing\ncompilers for minicomputers, I employed Dutch (who spoke and wrote English\nthan I or anyone here), Polish, Vietmanese and other nationals who were\nexcellent hackers and who all spoke better English than I spoke their \nlanguage - or cooked their cuisine or even followed their table customs,\nfor that matter.\n\n>More generally, a lot of the PG documentation could use the attention\n>of a professional copy editor --- and I'm sad to say that the parts\n>contributed by native English speakers aren't necessarily any cleaner\n>than the parts contributed by those who are not. If you have the\n>time and energy to submit corrections, please fall to!\n\nThis is very much true. PG needs some good documentation volunteers.\nI'm not denigrating the current efforts, because PG documention's pretty \ngood all things considered. 
But some volunteers devoted to improving\nthe docs could accomplish a lot.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 21 Nov 2000 21:36:19 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery? "
},
{
"msg_contents": "> I don't know from your post if you thought I was adding\n> to the criticism or not, but I can say with certainty I wasn't.\n\nNo, I saw that you understood perfectly, I just wanted to add another\ntwo cents...\n\n> I'm not denigrating the current efforts, because PG documention's pretty \n> good all things considered. But some volunteers devoted to improving\n> the docs could accomplish a lot.\n\nYup. Anyone out there with the time and interest?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 00:54:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery? "
},
{
"msg_contents": "> I think it's certain that the original poster didn't realize Vadim is not\n> a native English speaker, which is why I made my comment (to clue him in).\n> Vadim didn't take my comment as criticism, as his follow-on post\n> made clear\n> (he got the joke). I don't know from your post if you thought I\n> was adding\n> to the criticism or not, but I can say with certainty I wasn't. In my\n> previous life as the founder of a company specializing in optimizing\n> compilers for minicomputers, I employed Dutch (who spoke and wrote English\n> than I or anyone here), Polish, Vietmanese and other nationals who were\n> excellent hackers and who all spoke better English than I spoke their\n> language - or cooked their cuisine or even followed their table customs,\n> for that matter.\n\nJust for the record, I apologise for criticising Valim's grammar. I didn't\nrealise that he was a non-native speaker - nor that it was even his code. I\njust thought I should point out that spelling error (propably) given that\nthere was a thread going on about spelling in some error messages...\n\nChris\n\n",
"msg_date": "Wed, 22 Nov 2000 14:40:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Crash during WAL recovery? "
},
{
"msg_contents": "On Wednesday 22 November 2000 00:54, Tom Lane wrote:\n> > I don't know from your post if you thought I was adding\n> > to the criticism or not, but I can say with certainty I wasn't.\n>\n> No, I saw that you understood perfectly, I just wanted to add another\n> two cents...\n>\n> > I'm not denigrating the current efforts, because PG documention's pretty\n> > good all things considered. But some volunteers devoted to improving\n> > the docs could accomplish a lot.\n>\n> Yup. Anyone out there with the time and interest?\n>\n> \t\t\tregards, tom lane\n\nI might be interested in helping with it. Whats involved (DocBook, SGML)?\n\n-- \n-------- Robert B. Easter [email protected] ---------\n- CompTechNews Message Board http://www.comptechnews.com/ -\n- CompTechServ Tech Services http://www.comptechserv.com/ -\n---------- http://www.comptechnews.com/~reaster/ ------------\n",
"msg_date": "Wed, 22 Nov 2000 04:21:09 -0500",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "> I might be interested in helping with it. Whats involved (DocBook, SGML)?\n\nYup. The PostgreSQL source tree has a docs directory with all of the\nsources for the docs. I use emacs for editing, and several other options\nare discussed in the appendix on documentation in the doc set.\n\n - Thomas\n",
"msg_date": "Wed, 22 Nov 2000 15:31:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> More generally, a lot of the PG documentation could use the attention\n> of a professional copy editor --- and I'm sad to say that the parts\n> contributed by native English speakers aren't necessarily any cleaner\n> than the parts contributed by those who are not. \n\nThe difference between native English speaker and English writer is that \nwriter usually does not mix up dye and die ;)\n\nBut afaik there is no such language as Englis, so first we would need to \nagree on which of the many Englishes the docs will be in.\n\nI guess they are currently in \"International\" English which is quite\nfree \nabout grammar, spelling and punctuation.\n\nI would hate if we all started to write in some more rigid dialect. \n\nI've heard that some of these even make you put the full stop at the end\nof a \nsentence before closing parenthesiss (like this.)\n\nThey claim it is for \"typographical aesthetics\" ;)\n\n----------\nHannu\n",
"msg_date": "Wed, 22 Nov 2000 18:08:41 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "On Wednesday 22 November 2000 02:36, Don Baccus wrote:\n>\n> >More generally, a lot of the PG documentation could use the attention\n> >of a professional copy editor --- and I'm sad to say that the parts\n> >contributed by native English speakers aren't necessarily any cleaner\n> >than the parts contributed by those who are not. If you have the\n> >time and energy to submit corrections, please fall to!\n>\n> This is very much true. PG needs some good documentation volunteers.\n> I'm not denigrating the current efforts, because PG documention's pretty\n> good all things considered. But some volunteers devoted to improving\n> the docs could accomplish a lot.\n\nIt would be a pleasure to help with the spanish docs, if any help is needed.\n\nSaludos... :-)\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: [email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 22 Nov 2000 16:07:44 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "> It would be a pleasure to help with the spanish docs, if any help is needed.\n\nThere is a documentation translation effort hosted in Spain, and I'm\nsure that they would welcome help to stay current (I believe that a\nsubstantial portion of docs are already done for a recent, but perhaps\nnot current, set of docs). There should be a link to this from the\npostgresql.org site.\n\n - Thomas\n",
"msg_date": "Wed, 22 Nov 2000 21:48:23 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
},
{
"msg_contents": "Hello,\n\nBefore the Thanksgiving holiday here in the US I had been following with \ngreat interest the thread regarding Vadim's English and the postgres docs. \nSince this was posted about 200 messages ago, I replied as a new thread... I \nhope you don't mind!\n\nI am interested in volunteering some time to helping with the documentation \nif the developers feel that I could be of service. I am not a C coder, \nalthough I do a lot of CGI programming in PHP and Perl. Mostly I am a \ndatabase and Unix systems administrator for Combimatrix, a biotech company \nnear Seattle, Washington. Although I'm not a technical writer, I have some \nbackground in writing, having been an English composition instructor at the \nUniversity of Connecticut and a Spanish and Linguistics major in college \nbefore that. \n\nI'm fairly new to Postgres, but for the last two months I have been helping \ndevelop applications in Java and PHP that rely on it, and have become by and \nlarge comfortable with it. I had used MySQL for most of my work over the last \ntwo years and now find myself wondering how I ever got anything done.\n\nPlease, no one should take this the wrong way, but despite its lack of \nimportant features relative to Postgres, I very much enjoyed working with \nMySQL in large part because of its nicely organized and constantly updated \ndocumentation. Quite honestly this is the one area where Postgres still needs \nto catch up, and if there's any way at all I can help make that happen I \nwould like to be involved.\n\nSo, if you think I can be of any service, please let me know.\n\nBest regards,\n\nNorm\n\n> >More generally, a lot of the PG documentation could use the attention\n> >of a professional copy editor --- and I'm sad to say that the parts\n> >contributed by native English speakers aren't necessarily any cleaner\n> >than the parts contributed by those who are not. If you have the\n> >time and energy to submit corrections, please fall to!\n",
"msg_date": "Mon, 27 Nov 2000 13:58:45 -0800",
"msg_from": "Norman Clarke <[email protected]>",
"msg_from_op": false,
"msg_subject": "postgres docs (was Re: Crash during WAL recovery?)"
},
{
"msg_contents": "> So, if you think I can be of any service, please let me know.\n\nWe already know that you can be of service :)\n\nThere are two ways to go about this:\n\n1) pick something in the docs to fix. A topic, or organization, or\nwhatever you think is a good candidate for improvement or inclusion. Run\nbig changes by the -hackers or -docs mailing list to make sure you have\na consensus that the change is desirable, then go do it! Small changes\nsuch as wording fixes can just be done and submitted as patches without\nneeding a peer review or consensus imho.\n\n2) fix something that someone else thinks should be fixed. Same process\nas before, and you might end up solving something bugging the rest of us\nfor a long time. But maybe less satisfying for you than (1) might be.\n\nEither works. This list is never short of suggestions if you want to try\n(2).\n\nWelcome!\n\n - Thomas\n",
"msg_date": "Mon, 04 Dec 2000 06:26:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres docs (was Re: Crash during WAL recovery?)"
},
{
"msg_contents": "Norman Clarke writes:\n\n> I am interested in volunteering some time to helping with the documentation\n\nGood. Not sure exactly what you want to do, but we need help in just\nabout every area, including proof-reading/copy-editing sort of stuff,\nmarkup/consistency improvements, verification of examples, trying out the\noutlined procedures from the point of view of a naïve user, rewriting old\nstuff, documenting new stuff, etc.\n\nSince we're going beta any minute now the primary focus would currently be\non getting everything completed and updated, rather than undertaking major\nrewrites.\n\nThe Developer's Guide which should be found at or near\nwww.postgresql.org/devel-corner/docs has an appendix that explains how the\ndocumentation is handled. Contributions are accepted even if you don't\ncompletely understand DocBook or don't want to bother installing the\ntools. (OTOH, it's very rewarding to have installed the tools and to have\nunderstood DocBook. :-))\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 4 Dec 2000 22:21:37 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres docs (was Re: Crash during WAL recovery?)"
}
]
|
[
{
"msg_contents": "At 02:01 PM 11/21/00 -0800, Mikheev, Vadim wrote:\n>> This snippet in xlog.c makes we wonder...\n>> \n>> \telse if (ControlFile->state == DB_IN_RECOVERY)\n>> \t{\n>> \t\telog(LOG, \"Data Base System was interrupted \n>> being in recovery at %s\\n\"\n>> \t\t\t \"\\tThis propably means that some data \n>> blocks are corrupted\\n\"\n>> \t\t\t \"\\tAnd you will have to use last \n>> backup for recovery\",\n>> \t\t\t str_time(ControlFile->time));\n>> \t}\n>> \n>> I thought this was going to be crash safe.\n>\n>WAL doesn't protect against disk block corruption what\n>could be reason of crash (or elog(STOP)) during recovery\n>in most cases. Apart from disk corruption recovery is\n>(or should be -:)) crash safe.\n\nWhich is why we'll still need BAR tools later.\n\nThe WAL log can be used to recover from a crash if the database\nitself isn't corrupted (disk corruption, whatever), but not\notherwise because it applies logged data to the database itself.\n\nThe WAL log doesn't include changes caused by renegade disk\ncontrollers, etc :)\n\nBAR tools will allow recovery via archives of WAL logs applied\nto an archive of the database, to recreate the database in the\ncase where the existing database has been corrupted.\n\nIn Oracle parlance, \"WAL\" log == \"REDO\" log, and the BAR tool\nbuilds \"Archive\" logs.\n\nUhhh...I think, anyway.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 21 Nov 2000 14:21:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Crash during WAL recovery?"
}
]
|
[
{
"msg_contents": " Date: Tuesday, November 21, 2000 @ 19:00:55\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/contrib/pg_dumplo\n from hub.org:/home/projects/pgsql/tmp/cvs-serv39905\n\nModified Files:\n\tREADME.pg_dumplo lo_export.c lo_import.c main.c pg_dumplo.h \n\tutils.c \n\n----------------------------- Log Message -----------------------------\n\nCode review: minor cleanups, make the world safe for unsigned OIDs.\nImprove documentation, too.\n",
"msg_date": "Tue, 21 Nov 2000 19:00:55 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pgsql/contrib/pg_dumplo (README.pg_dumplo lo_export.c lo_import.c\n\tmain.c pg_dumplo.h utils.c)"
},
{
"msg_contents": "[email protected] writes:\n\n> Date: Tuesday, November 21, 2000 @ 19:00:55\n> Author: tgl\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/contrib/pg_dumplo\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv39905\n> \n> Modified Files:\n> \tREADME.pg_dumplo lo_export.c lo_import.c main.c pg_dumplo.h \n> \tutils.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> Code review: minor cleanups, make the world safe for unsigned OIDs.\n> Improve documentation, too.\n\nDoesn't pg_dump handle large objects these days?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 22 Nov 2000 02:09:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/contrib/pg_dumplo (README.pg_dumplo\n lo_export.c\n\tlo_import.c main.c pg_dumplo.h utils.c)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> [email protected] writes:\n>> Modified Files:\n>> README.pg_dumplo lo_export.c lo_import.c main.c pg_dumplo.h \n>> utils.c \n>>\n>> Code review: minor cleanups, make the world safe for unsigned OIDs.\n>> Improve documentation, too.\n\n> Doesn't pg_dump handle large objects these days?\n\nIt does, so pg_dumplo is probably dead code --- for people running 7.1\nor later. The reason I'm taking an interest in it is that Great Bridge\nwants to make it available to people running 6.5.* or 7.0.*, so that\nthey can get their large objects into newer versions in the first place.\n\nAlso, it's possible that someone using pg_dumplo would not want to\nchange (though I'm not sure why not). So we probably oughta leave it\nin the distro for a version or three anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Nov 2000 20:14:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/contrib/pg_dumplo (README.pg_dumplo\n\tlo_export.c lo_import.c main.c pg_dumplo.h utils.c)"
}
]
|
[
{
"msg_contents": "> >Is there any particular reason the spelling and punctuation \n> in the code\n> >snippet below is so bad?\n> \n> Vadim's Russian. This impacts his english but not his \n> ability to implement complex features like MVCC and WAL :)\n\nYes, sorry guys. C lang is much easier -:))\n\nVadim\n",
"msg_date": "Tue, 21 Nov 2000 17:37:25 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Crash during WAL recovery?"
},
{
"msg_contents": "Just speaking Russian and English both (to any degree) is absolutely\namazing, put that on top of MVCC and WAL and we have Vadim, the smartest\nperson alive! *grin*\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Mikheev, Vadim\" <[email protected]>\nTo: \"'Don Baccus'\" <[email protected]>; \"Christopher Kings-Lynne\"\n<[email protected]>; \"PostgreSQL Development\"\n<[email protected]>\nSent: Tuesday, November 21, 2000 5:37 PM\nSubject: RE: [HACKERS] Crash during WAL recovery?\n\n\n> > >Is there any particular reason the spelling and punctuation\n> > in the code\n> > >snippet below is so bad?\n> >\n> > Vadim's Russian. This impacts his english but not his\n> > ability to implement complex features like MVCC and WAL :)\n>\n> Yes, sorry guys. C lang is much easier -:))\n>\n> Vadim\n>\n\n",
"msg_date": "Tue, 21 Nov 2000 19:12:33 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash during WAL recovery?"
}
]
|
[
{
"msg_contents": "Hi,\n\nit's obviously there is a query plan optimizer bug, if int2 type used in fields,\nthe plan generator just use sequence scan, it's stupid, i am using PG7.03,\nthis is my log file:\n---------\nstock# drop table a;\nDROP\nstock# create table a(i int2, j int);\nCREATE\nstock# create unique index idx_a on a(i, j);\nCREATE\nstock# explain select * from a where i=1 and j=0;\npsql:test.sql:4: NOTICE: QUERY PLAN:\n\nSeq Scan on a (cost=0.00..25.00 rows=1 width=6)\n\nEXPLAIN\nstock# drop table a;\ncreate table a(i int, j int);\nCREATE\nstock# create unique index idx_a on a(i, j);\nCREATE\nstock# explain select * from a where i=1 and j=0;\npsql:test.sql:8: NOTICE: QUERY PLAN:\n\nIndex Scan using idx_a on a (cost=0.00..2.02 rows=1 width=8)\n\nEXPLAIN\n-----------\n",
"msg_date": "Wed, 22 Nov 2000 10:46:49 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan optimizer bug"
},
{
"msg_contents": "At 10:46 AM 11/22/00 +0800, xuyifeng wrote:\n>Hi,\n>\n>it's obviously there is a query plan optimizer bug, if int2 type used in\nfields,\n>the plan generator just use sequence scan, it's stupid\n\nHave you checked this with real data after doing a VACUUM ANALYZE?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 21 Nov 2000 18:51:07 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan optimizer bug"
},
{
"msg_contents": "I did VACUUM ANALYZE, there is no effect.\n\nXuYifeng\n\n----- Original Message ----- \nFrom: Don Baccus <[email protected]>\nTo: xuyifeng <[email protected]>; <[email protected]>\nSent: Wednesday, November 22, 2000 10:51 AM\nSubject: Re: [HACKERS] query plan optimizer bug\n\n\n> At 10:46 AM 11/22/00 +0800, xuyifeng wrote:\n> >Hi,\n> >\n> >it's obviously there is a query plan optimizer bug, if int2 type used in\n> fields,\n> >the plan generator just use sequence scan, it's stupid\n> \n> Have you checked this with real data after doing a VACUUM ANALYZE?\n> \n> \n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n",
"msg_date": "Wed, 22 Nov 2000 13:47:15 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan optimizer bug"
},
{
"msg_contents": "\"xuyifeng\" <[email protected]> writes:\n> stock# create table a(i int2, j int);\n> stock# create unique index idx_a on a(i, j);\n> stock# explain select * from a where i=1 and j=0;\n> psql:test.sql:4: NOTICE: QUERY PLAN:\n\n> Seq Scan on a (cost=0.00..25.00 rows=1 width=6)\n\nThe constant \"1\" is implicitly type int4, and our planner isn't\npresently very smart about optimizing cross-data-type comparisons\ninto indexscans. You could make it work with something like\n\n\tselect * from a where i = 1::int2 and j = 0;\n\nor just bite the bullet and declare column i as int4 (== \"int\").\nMaking i int2 isn't saving any storage space in the above example\nanyhow, because of alignment restrictions.\n\nTo be smarter about this, the system needs to recognize that \"1\"\ncould be typed as int2 instead of int4 in this case --- but not \"0\",\nelse that part of the index wouldn't apply.\n\nThat opens up a whole raft of numeric type hierarchy issues,\nwhich you can find discussed at length in the pghackers archives.\nWe do intend to fix this, but doing it without breaking other\nuseful cases is trickier than you might think...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 01:36:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan optimizer bug "
}
]
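[Editor's note: Tom Lane's workaround in the thread above can be reproduced directly. The following is a minimal SQL sketch of the behavior as described for PostgreSQL 7.0.x in this thread; later releases handle the cross-type comparison automatically, so the explicit cast may be unnecessary there.]

```sql
-- Same toy table as in the bug report.
CREATE TABLE a (i int2, j int);
CREATE UNIQUE INDEX idx_a ON a (i, j);

-- Seq scan: the constant 1 is implicitly typed int4, which does not
-- match the int2 column, so the planner cannot use idx_a.
EXPLAIN SELECT * FROM a WHERE i = 1 AND j = 0;

-- Index scan: casting the constant to int2 makes the comparison
-- same-type, and idx_a becomes usable.
EXPLAIN SELECT * FROM a WHERE i = 1::int2 AND j = 0;
```

The alternative Tom suggests, declaring the column as plain `int`, avoids the cast entirely and (per his note on alignment padding) saves no storage by staying with `int2` in this layout anyway.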
|
[
{
"msg_contents": "I've been examining the pg_dump source and output, and I've come to the\nconclusion that I can modify it so that UNIQUE constraints appear as part of\nthe CREATE TABLE statement, rather than as a separate CREATE INDEX. I know\nit is possible because phpPgAdmin does it!\n\nThis change should also be in line with what we have been discussing\nearlier, and could be a precursor to getting FOREIGN KEY constraints\nappearing as part of CREATE TABLE as well...\n\nIs there any problem with me working on this?\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Wed, 22 Nov 2000 15:50:34 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump / Unique constraints"
},
{
"msg_contents": "At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:\n>I've been examining the pg_dump source and output, and I've come to the\n>conclusion that I can modify it so that UNIQUE constraints appear as part of\n>the CREATE TABLE statement, rather than as a separate CREATE INDEX.\n...\n>Is there any problem with me working on this?\n\nI actually don't think it's a good idea to force things to work that way. \n\nPerhaps as an *option*, but even then I'd be inclined to append them as a\nseries of 'ALTER TABLE ADD CONSTRAINT...' statements.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 22 Nov 2000 19:26:45 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints"
},
{
"msg_contents": "At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:\n> >I've been examining the pg_dump source and output, and I've come to the\n> >conclusion that I can modify it so that UNIQUE constraints\n> appear as part of\n> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.\n> ...\n> >Is there any problem with me working on this?\n>\n> I actually don't think it's a good idea to force things to work that way.\n\nWhy, exactly?\n\nWhat's the difference between this:\n\n--\ncreate table test (\n a int4,\n constraint \"name\" unique (a)\n)\n--\n\nand this:\n\n--\ncreate table test (\n a int4\n)\ncreate unique index \"name\" on \"test\" using btree ( \"a\" \"int4_ops\" );\n--\n\nI note that when a table is dropped, any unique constraints (in fact all\nindices) associated with it are also dropped...\n\n> Perhaps as an *option*, but even then I'd be inclined to append them as a\n> series of 'ALTER TABLE ADD CONSTRAINT...' statements.\n\nAs far as I can tell, Postgres 7.0.3 only supports adding fk constraints.\nThe CVS version seems to support adding CHECK constraints, but other than\nthat, it has to be added as an index. If you're a database user, it's\nconceptually better to see right in your table that you've added a named (or\nnot) unique constraint, rather than noticing at the bottom of the file that\nthere's some unique index on one of your columns (IMHO).\n\nChris\n\n",
"msg_date": "Wed, 22 Nov 2000 16:33:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_dump / Unique constraints"
},
{
"msg_contents": "At 16:33 22/11/00 +0800, Christopher Kings-Lynne wrote:\n>At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:\n>> >I've been examining the pg_dump source and I've come to the\n>> >conclusion that I can modify it so that UNIQUE constraints\n>> appear as part of\n>> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.\n>> ...\n>> >Is there any problem with me working on this?\n>>\n>> I actually don't think it's a good idea to force things to work that way.\n>\n>Why, exactly?\n\nHaving now looked at the code and seen that PK constraints are already\ndumped in the table definition, I guess doing unique constraints in the\nsame way is no worse.\n\nMy main concern is that I'd like pg_dump to be able to separate out the\nvarious parts of the schema, and this includes constraints. The ability to\nadd/drop constraints at any part of the restoration process would be very\nnice. The latest incarnations of pg_dump/pg_restore allow people (and\npg_dump/restore) to choose what to restore, and even to define an ordering\nfor them - and having the constraimts as separate items would be a great\nbenefit. One example of the problems that I'd like to avoid is in loading\ndata via INSERT statements - doing:\n\n Create Table...\n Insert many rows...\n Add Uniqueness Constraint\n\nis *substantially* faster than INSERTs on a table with constraints already\ndefined.\n\nAt the current time we don't even have a working 'ALTER TABLE...' that\nworks with all constraint types, so my hopes are probably in vain. I don't\nsuppose you feel like working on 'ALTER TABLE...ADD/DROP CONSTRAINT...' do\nyou????\n\n\n>What's the difference between this:\n>\n>--\n>create table test (\n> a int4,\n> constraint \"name\" unique (a)\n>)\n>--\n>\n>and this:\n>\n>--\n>create table test (\n> a int4\n>)\n>create unique index \"name\" on \"test\" using btree ( \"a\" \"int4_ops\" );\n\nThe fact that pg_dump/restore will be able to create the index at the end\nof the data load.\n\n\n>\n>As far as I can tell, Postgres 7.0.3 only supports adding fk constraints.\n>The CVS version seems to support adding CHECK constraints, but other than\n>that, it has to be added as an index.\n\nSounds like a good thing to work on ;-}\n\n\n>If you're a database user, it's\n>conceptually better to see right in your table that you've added a named (or\n>not) unique constraint, rather than noticing at the bottom of the file that\n>there's some unique index on one of your columns (IMHO).\n\nThis is a good argument for modifying the output of '\\d' in psql. It is\nalso probably a valid argument for a new option on pg_dump to specify if\nconstraints should be kept separate from table definitions. Then we could\nalso move FK constraints to the end.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 22 Nov 2000 19:58:11 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_dump / Unique constraints"
},
{
"msg_contents": "> At 16:33 22/11/00 +0800, Christopher Kings-Lynne wrote:\n> >At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:\n> >> >I've been examining the pg_dump source and output, and I've come to the\n> >> >conclusion that I can modify it so that UNIQUE constraints\n> >> appear as part of\n> >> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.\n> >> ...\n> >> >Is there any problem with me working on this?\n> >>\n> >> I actually don't think it's a good idea to force things to work that way.\n> >\n> >Why, exactly?\n> \n> Having now looked at the code and seen that PK constraints are already\n> dumped in the table definition, I guess doing unique constraints in the\n> same way is no worse.\n\nI have a good reason not to use UNIQUE. As I remember, pg_dump creates\nthe tables, copies in the data, then creates the indexes. This is much\nfaster than doing the copy with the indexes already created.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Nov 2000 10:50:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have a good reason not to use UNIQUE. As I remember, pg_dump creates\n> the tables, copies in the data, then creates the indexes. This is much\n> faster than doing the copy with the indexes already created.\n\nRight, that's the real implementation reason for doing it in two steps.\n\nThere's also a more abstract concern: ideally, pg_dump's schema output\nshould be the same as what the user originally entered. Converting a\ntable and separate index declaration into one statement is not any more\ncorrect than doing the reverse. Thus the real problem here is to know\nwhich way the index got created to begin with. Currently we do not\nknow that, because (you guessed it) we have not got a declarative\nrepresentation for the UNIQUE constraint, only the execution-oriented\nfact that the unique index exists.\n\nMy feeling is that there should be a stored indication someplace\nallowing us to deduce exactly what caused the index to be created.\nAn ad-hoc way is to add another field to pg_index, but it might be\ncleaner to create a new system catalog that covers all types of\nconstraint.\n\nThe next question is what pg_dump should emit, considering that it has\ntwo conflicting goals: it wants to restore the original state of the\nconstraint catalog *but* also be efficient about loading data. ALTER\nTABLE ADD CONSTRAINT seems to be an essential requirement there.\nBut it seems to me that it'd be really whizzy if there were two\ndifferent styles of output, one for a full dump (CREATE, load data,\nadd constraints) and one for schema-only dumps that tries to reproduce\nthe original table declaration with embedded constraint specs. That\nwould be nicer for documentation and editing purposes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 11:15:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints "
},
{
"msg_contents": "I said:\n> But it seems to me that it'd be really whizzy if there were two\n> different styles of output, one for a full dump (CREATE, load data,\n> add constraints) and one for schema-only dumps that tries to reproduce\n> the original table declaration with embedded constraint specs. That\n> would be nicer for documentation and editing purposes.\n\nI just had an idea about this, based on the hackery that pg_dump\ncurrently does with triggers: what if there were an ALTER command that\nallows disabling and re-enabling constraint checking and index building?\nThen the dump script could look like\n\n\tfull CREATE TABLE with all constraints shown\n\n\tALTER TABLE DISABLE CONSTRAINTS\n\n\tCOPY data in\n\n\tALTER TABLE ENABLE CONSTRAINTS\n\nand there wouldn't have to be any difference between schema and full\ndump output for CREATE TABLE. If we were really brave (foolish?)\nthe last step could be something like\n\n\tALTER TABLE ENABLE CONSTRAINTS NOCHECK\n\nwhich'd suppress the scan for constraint violations that a normal\nALTER ADD CONSTRAINT would want to do.\n\nIt also occurs to me that we should not consider pg_dump as the only\narea that needs work to fix this. Why shouldn't pg_dump simply do\n\n\tfull CREATE TABLE with all constraints shown\n\tCREATE all indexes too\n\n\t-- if not schema dump then:\n\tCOPY data in\n\nThe answer to that of course is that cross-table constraints (like\nREFERENCES clauses) must be disabled while loading the data, or the\nintermediate states where only some tables have been loaded are likely\nto fail. So we do need some kind of DISABLE CONSTRAINT mode to make\nthis work. But it's silly that pg_dump has to go out of its way to\ncreate the indexes last --- if COPY has a performance problem there,\nwe should be fixing COPY, not requiring pg_dump to contort itself.\nWhy can't COPY recognize for itself that rebuilding the indexes after\nloading data is a better strategy than incremental index update?\n(The simplest implementation would restrict this to happen only if the\ntable is empty when COPY starts, which'd be sufficient for pg_dump.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 11:34:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints "
},
{
"msg_contents": "> The answer to that of course is that cross-table constraints (like\n> REFERENCES clauses) must be disabled while loading the data, or the\n> intermediate states where only some tables have been loaded are likely\n> to fail. So we do need some kind of DISABLE CONSTRAINT mode to make\n> this work. But it's silly that pg_dump has to go out of its way to\n> create the indexes last --- if COPY has a performance problem there,\n> we should be fixing COPY, not requiring pg_dump to contort itself.\n> Why can't COPY recognize for itself that rebuilding the indexes after\n> loading data is a better strategy than incremental index update?\n> (The simplest implementation would restrict this to happen only if the\n> table is empty when COPY starts, which'd be sufficient for pg_dump.)\n\nCOPY would have to check to see if the table is already empty. You can\nCOPY into a table that already has data.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Nov 2000 11:37:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Why can't COPY recognize for itself that rebuilding the indexes after\n>> loading data is a better strategy than incremental index update?\n>> (The simplest implementation would restrict this to happen only if the\n>> table is empty when COPY starts, which'd be sufficient for pg_dump.)\n\n> COPY would have to check to see if the table is already empty.\n\nThat's what I said ... or intended to say, anyway. If there's already\ndata then the tradeoff between incremental update and index rebuild is\nnot so obvious, and the easiest first implementation would just be to\nalways do incremental update in that case. Or we could add an option\nto the COPY command to tell it which to do, and let the user do the\nguessing ;-)\n\nThere'd also be a locking issue, now that I think about it: to do an\nindex rebuild, we'd have to be sure that no other transaction is adding\ndata to the table at the same time. So we'd need to get a stronger lock\nthan a plain write lock to do it that way. A COPY option is sounding\nbetter and better...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 11:53:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints "
},
{
"msg_contents": "My feeling is \"Let's walk before we run.\" We need psql \\dt to show\nprimary/foreign keys and SERIAL first.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> Why can't COPY recognize for itself that rebuilding the indexes after\n> >> loading data is a better strategy than incremental index update?\n> >> (The simplest implementation would restrict this to happen only if the\n> >> table is empty when COPY starts, which'd be sufficient for pg_dump.)\n> \n> > COPY would have to check to see if the table is already empty.\n> \n> That's what I said ... or intended to say, anyway. If there's already\n> data then the tradeoff between incremental update and index rebuild is\n> not so obvious, and the easiest first implementation would just be to\n> always do incremental update in that case. Or we could add an option\n> to the COPY command to tell it which to do, and let the user do the\n> guessing ;-)\n> \n> There'd also be a locking issue, now that I think about it: to do an\n> index rebuild, we'd have to be sure that no other transaction is adding\n> data to the table at the same time. So we'd need to get a stronger lock\n> than a plain write lock to do it that way. A COPY option is sounding\n> better and better...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Nov 2000 14:51:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints"
},
{
"msg_contents": "At 11:34 22/11/00 -0500, Tom Lane wrote:\n>\n>\tfull CREATE TABLE with all constraints shown\n>\n>\tALTER TABLE DISABLE CONSTRAINTS\n\nI think you need something more like:\n\n SET ALL CONSTRAINTS DISABLED/OFF\n\nsince disabling one table's constraints won't work when we have\nsubselect-in-check (or if it does, then ALTER TABLE <table-name> DISABLE\nCONSTRAINTS will be a misleading name). Also, I think FK constraints on\nanother table that is already loaded will fail until the primary table is\nloaded.\n\n\n>\n>and there wouldn't have to be any difference between schema and full\n>dump output for CREATE TABLE.\n\nI still see a great deal of value in being able to get a list of 'ALTER\nTABLE ADD CONSTRAINT...' statements from pg_dump/restore. \n\n\n>If we were really brave (foolish?)\n>the last step could be something like\n>\n>\tALTER TABLE ENABLE CONSTRAINTS NOCHECK\n\nEek. Won't work for index-based constraints, since they are created anyway.\nIt *might* be a good idea for huge DBs.\n\n\n>But it's silly that pg_dump has to go out of its way to\n>create the indexes last --- if COPY has a performance problem there,\n>we should be fixing COPY, not requiring pg_dump to contort itself.\n\nThis is fine for COPY, but doesn't work for data-as-INSERTS.\n\n\n>Why can't COPY recognize for itself that rebuilding the indexes after\n>loading data is a better strategy than incremental index update?\n\nThe other aspect of COPY that needs fixing is the ability to specify column\norder (I think); from memory that's the reason the regression DB can't be\ndumped & loaded. It'd also be nice to be able to specify a subset of columns.\n\n\n>(The simplest implementation would restrict this to happen only if the\n>table is empty when COPY starts, which'd be sufficient for pg_dump.)\n\nDoes this approach have any implications for recovery/reliability; adding a\nrow but not updating indexes seems a little dangerous. Or is the plan to\ndrop the indexes, add the data, and create the indexes?\n\n\nStepping back from the discussion for a moment, I am beginning to have\ndoubts about the approach: having pg_dump put the indexes (and constraints)\nat the end of the dump is simple and works in all cases. The only issue,\nAFAICT, is generating a single complete table defn for easy-reading. The\nsuggested solution seems a little extreme (a pg_dump specific hack to COPY,\nwhen there are other more general problems with COPY that more urgently\nrequire attention).\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 11:59:26 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump / Unique constraints "
},
{
"msg_contents": "Just a quick question regarding the pg_dump program:\n\nI notice that PRIMARY KEY constraints are currently dumped as:\n\nPRIMARY KEY (\"field\")\n\nWhereas (to be in line with all the other constraints), it should be dumped\nas:\n\nCONSTRAINT \"name\" PRIMARY KEY (\"field\")\n\nOtherwise, some poor bugger who went to the trouble of giving his primary\nkeys custom names will lose them with a dump/restore???\n\nAlso, if they have defined a function or trigger that refers to that primary\nkey by name, won't it fail after a dump/restore? (If the name has changed?)\n\nI'm just asking, because I'm still trying to find something small and\nself-contained I can work on!\n\nChris\n\n",
"msg_date": "Thu, 23 Nov 2000 10:21:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_dump / Unique constraints "
},
{
"msg_contents": "> Is anybody working on:\n>\n> alter table <table> add constraint <name> primary key(column,...);\n>\n> or\n>\n> alter table <table> add constraint <name> unique(column,...);\n>\n> or\n>\n> alter table drop constraint\n\nI'd be more than happy to work on either of the above in the current\nimplementation, however - I'm not sure it'd be worth it, given that the\nconstraints system might be up for a reimplementation.\n\n> I guess this is not really a small task as it relates to unifying\n> constraint handling, but for the PK & unique constraints at least, we must\n> already have code that does the work - all(?) that has to happen\n> is to make\n> sure the ALTER command calls it...is that right?\n\nThat is a thought - can someone point me to the C file that handles CREATE\nTABLE so I can see how it's done? I can't for the life of me find that bit\nof code!\n\nChris\n\n",
"msg_date": "Thu, 23 Nov 2000 11:07:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: ALTER TABLE...ADD CONSTRAINT? "
},
{
"msg_contents": "At 10:21 23/11/00 +0800, Christopher Kings-Lynne wrote:\n>\n>I'm just asking, because I'm still trying to find something small and\n>self-contained I can work on!\n>\n\nIs anybody working on:\n\n alter table <table> add constraint <name> primary key(column,...);\n\nor\n\n alter table <table> add constraint <name> unique(column,...);\n\nor\n\n alter table drop constraint\n\nI guess this is not really a small task as it relates to unifying\nconstraint handling, but for the PK & unique constraints at least, we must\nalready have code that does the work - all(?) that has to happen is to make\nsure the ALTER command calls it...is that right?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 14:12:38 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "ALTER TABLE...ADD CONSTRAINT? "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> can someone point me to the C file that handles CREATE\n> TABLE so I can see how it's done?\n\nbackend/parser/analyze.c has the preprocessing (see\ntransformCreateStmt). Actual execution starts in\nbackend/commands/creatinh.c, and there's also important code in\nbackend/catalog/heap.c.\n\nPlus subroutines scattered here, there, and everywhere :-(.\n\nYou really won't get far in reading the PG sources until you have\na tool that will quickly find the definition (and optionally all uses)\nof any particular symbol you are interested in. I'm partial to glimpse,\nbut you could also use ctags/etags or some other indexing program.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 22:44:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: ALTER TABLE...ADD CONSTRAINT? "
}
]
|
[
{
"msg_contents": "I ran the src/test/regressplans.sh script, which runs the regression tests\nunder exclusion of various join and scan types. Without merge joins (-fm)\nI get an assertion failure in opr_sanity.\n\nThe query is:\n\n SELECT p1.oid, p1.aggname\n FROM pg_aggregate as p1\n WHERE p1.aggfinalfn = 0 AND p1.aggfinaltype != p1.aggtranstype;\n\n(The plan for this query is a seq scan on pg_aggregate.)\n\nThe backtrace is:\n\n#0 0x4012b131 in __kill () from /lib/libc.so.6\n#1 0x4012aead in raise (sig=6) at ../sysdeps/posix/raise.c:27\n#2 0x4012c534 in abort () at ../sysdeps/generic/abort.c:88\n#3 0x8149b98 in ExceptionalCondition (\n conditionName=0x81988a0 \"!(((file) > 0 && (file) < (int) SizeVfdCache\n&& VfdCache[file].fileName != ((void *)0)))\", exceptionP=0x81b93c8,\ndetail=0x0,\n fileName=0x8198787 \"fd.c\", lineNumber=851) at assert.c:70\n#4 0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851\n#5 0x810e692 in _mdnblocks (file=33, blcksz=8192) at md.c:1095\n#6 0x810de9b in mdnblocks (reln=0x403a35f4) at md.c:667\n#7 0x810ec80 in smgrnblocks (which=0, reln=0x403a35f4) at smgr.c:441\n#8 0x8103303 in RelationGetNumberOfBlocks (relation=0x403a35f4)\n at xlog_bufmgr.c:1161\n#9 0x8072b04 in initscan (scan=0x822af94, relation=0x403a35f4, atend=0,\n nkeys=0, key=0x0) at heapam.c:128\n#10 0x8073fa0 in heap_beginscan (relation=0x403a35f4, atend=0,\n snapshot=0x822b438, nkeys=0, key=0x0) at heapam.c:811\n#11 0x80c69e4 in ExecBeginScan (relation=0x403a35f4, nkeys=0, skeys=0x0,\n isindex=0, dir=ForwardScanDirection, snapshot=0x822b438) at\nexecAmi.c:156\n#12 0x80c6986 in ExecOpenScanR (relOid=16960, nkeys=0, skeys=0x0,\n isindex=0 '\\000', dir=ForwardScanDirection, snapshot=0x822b438,\n returnRelation=0xbffff074, returnScanDesc=0xbffff078) at execAmi.c:104\n#13 0x80d098c in InitScanRelation (node=0x822ae60, estate=0x822aeec,\n scanstate=0x822b084) at nodeSeqscan.c:172\n#14 0x80d0a62 in ExecInitSeqScan (node=0x822ae60, estate=0x822aeec,\nparent=0x0)\n at nodeSeqscan.c:242\n#15 0x80c917f in ExecInitNode (node=0x822ae60, estate=0x822aeec,\nparent=0x0)\n at execProcnode.c:152\n#16 0x80c7be9 in InitPlan (operation=CMD_SELECT, parseTree=0x823b108,\n plan=0x822ae60, estate=0x822aeec) at execMain.c:621\n#17 0x80c765b in ExecutorStart (queryDesc=0x822b41c, estate=0x822aeec)\n at execMain.c:135\n#18 0x8111439 in ProcessQuery (parsetree=0x823b108, plan=0x822ae60,\n dest=Remote) at pquery.c:263\n#19 0x810ffea in pg_exec_query_string (\n query_string=0x823a548 \"SELECT p1.oid, p1.aggname\\nFROM pg_aggregate\nas p1\\nWHERE p1.aggfinalfn = 0 AND p1.aggfinaltype != p1.aggtranstype;\",\ndest=Remote,\n parse_context=0x81f13b0) at postgres.c:818\n<snipped>\n\nThis failure is completely reproducible by running\n\nsrc/test/regress$ PGOPTIONS=-fm ./pg_regress opr_sanity\n\nThe problem also happens with the setting '-fn -fm', but *not* with the\nsetting '-fm -fh'. (Adding or removing -fs or -fi doesn't affect the\noutcome.)\n\n\nThe only other two failures are the join test when both merge and hash\njoins are disabled and alter_table without index scans. Both seem\nharmless; see attached diffs.\n\nThe former is related to outer joins apparently not working with nest\nloops. The latter is a missing ORDER BY, which I'm inclined to fix.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/",
"msg_date": "Wed, 22 Nov 2000 17:44:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "regressplans failures"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> #3 0x8149b98 in ExceptionalCondition (\n> conditionName=0x81988a0 \"!(((file) > 0 && (file) < (int) SizeVfdCache\n> && VfdCache[file].fileName != ((void *)0)))\", exceptionP=0x81b93c8,\n> detail=0x0,\n> fileName=0x8198787 \"fd.c\", lineNumber=851) at assert.c:70\n> #4 0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851\n\nI'm guessing this is a variant of the problem Philip Warner reported\nyesterday. Probably WAL-related. Vadim?\n\n> The only other two failures are the join test when both merge and hash\n> joins are disabled and alter_table without index scans. Both seem\n> harmless; see attached diffs.\n> The former is related to outer joins apparently not working with nest\n> loops. The latter is a missing ORDER BY, which I'm inclined to fix.\n\nFULL JOIN currently is only implementable by mergejoin (if you can\nfigure out how to do it with a nest or hash join, I'm all ears...).\nI guess it's a bug that the planner honors enable_mergejoin = OFF\neven when given a FULL JOIN query. (At least the failure detection\ncode works, though ;-).) I'll see what I can do about that.\n\nI'd be inclined *not* to add ORDER BYs just to make regressplans produce\nzero diffs in all cases. The presence of an ORDER BY may cause the\nplanner to prefer presorted-output plans, thus defeating the purpose\nof testing all plan types...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 12:12:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regressplans failures "
}
]
|
[
{
"msg_contents": "Hi,\n\n I'd like make some changes on the 7.1 (to be) libpgtcl.\n\n 1. Make the large object access null-byte safe, when\n libpgtcl is compiled against a 8.0 or higher version of\n Tcl.\n\n This would cause that a libpgtcl.so built on a system\n with Tcl 8.0 or higher will not any longer load into a\n pre 8.0 Tcl interpreter. Since Tcl's actual version is\n 8.3, I think it's long enough for backward compatibility.\n\n 2. Add a \"pg_execute\" command, that behaves almost like the\n \"spi_exec\" of PL/Tcl.\n\n Any objections?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 22 Nov 2000 12:26:40 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changes to libpgtcl"
},
{
"msg_contents": "Jan Wieck wrote:\n> Hi,\n>\n> I'd like make some changes on the 7.1 (to be) libpgtcl.\n>\n> 1. Make the large object access null-byte safe, when\n> libpgtcl is compiled against a 8.0 or higher version of\n> Tcl.\n>\n> This would cause that a libpgtcl.so built on a system\n> with Tcl 8.0 or higher will not any longer load into a\n> pre 8.0 Tcl interpreter. Since Tcl's actual version is\n> 8.3, I think it's long enough for backward compatibility.\n>\n> 2. Add a \"pg_execute\" command, that behaves almost like the\n> \"spi_exec\" of PL/Tcl.\n>\n> Any objections?\n\n O.K., the changes are committed. pg_lo_read and pg_lo_write\n are now able to handle binary data (actually I'm hearing an\n MP3 pumped by a Tcl script directly from the DB into xaudio\n :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 27 Nov 2000 07:47:32 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Changes to libpgtcl"
}
]
|
[
{
"msg_contents": "Hi,\n\nI have a question about the performance of the planner in 7.1. I've been\ntesting the 11/21 snapshot of the database just to get an idea of how it\nwill work for me when I upgrade from 7.02. I've noticed that some queries \nare taking much longer and I've narrowed it down (I think) to the planner.\n\nI've run an identical query against 7.02 and 7.1. Both databases have the exact\nsame data, and both databases have been vacuum'd. As you can see from below,\nthe 7.1 snapshot is spending 97% of the total time planning the query, where\nthe 7.0.2 version is spending only 27% of the total time planning the query.\n\nIf anyone is interested in this, I'll be happy to supply you with information\nthat would help track this down.\n\n\nThanks.\n\n7.1-snapshot\nPLANNER STATISTICS\n! system usage stats:\n!\t7.748602 elapsed 5.020000 user 0.200000 system sec\n!\t[5.090000 user 0.210000 sys total]\n!\t0/0 [0/0] filesystem blocks in/out\n!\t47/1246 [349/1515] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n!\tShared blocks: 20 read, 0 written, buffer hit rate = 99.94%\n!\tLocal blocks: 0 read, 0 written, buffer hit rate = 0.00%\n!\tDirect blocks: 0 read, 0 written\nEXECUTOR STATISTICS\n! system usage stats:\n!\t0.317000 elapsed 0.160000 user 0.010000 system sec\n!\t[5.250000 user 0.220000 sys total]\n!\t0/0 [0/0] filesystem blocks in/out\n!\t328/364 [677/1879] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n!\tShared blocks: 160 read, 0 written, buffer hit rate = 97.73%\n!\tLocal blocks: 0 read, 0 written, buffer hit rate = 0.00%\n!\tDirect blocks: 0 read, 0 written\n\n\n7.0.2\n! Planner Stats:\n! system usage stats:\n!\t0.051438 elapsed 0.050000 user 0.000000 system sec\n!\t[0.330000 user 0.050000 sys total]\n!\t0/0 [0/0] filesystem blocks in/out\n!\t0/51 [680/837] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n!\tShared blocks: 0 read, 0 written, buffer hit rate = 100.00%\n!\tLocal blocks: 0 read, 0 written, buffer hit rate = 0.00%\n!\tDirect blocks: 0 read, 0 written\n! Executor Stats:\n! system usage stats:\n!\t0.136506 elapsed 0.130000 user 0.000000 system sec\n!\t[0.460000 user 0.050000 sys total]\n!\t0/0 [0/0] filesystem blocks in/out\n!\t0/6 [680/843] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n!\tShared blocks: 98 read, 0 written, buffer hit rate = 98.98%\n!\tLocal blocks: 0 read, 0 written, buffer hit rate = 0.00%\n!\tDirect blocks: 0 read, 0 written\n\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n",
"msg_date": "Wed, 22 Nov 2000 11:01:31 -0700",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about performance of planner"
},
{
"msg_contents": "Brian Hirt <[email protected]> writes:\n> I have a question about the performance of the planner in 7.1. I've been\n> testing the 11/21 snapshot of the database just to get an idea of how it\n> will work for me when I upgrade from 7.02 I've noticed that some queries \n> are taking much longer and I've narrowed it down (i think) to the planner.\n\nDoes EXPLAIN show the same query plan in both cases?\n\n> If anyone is interested in this, I'll be happy to supply you with information\n> that would help track this down.\n\nSure. Let's see the query and the database schema ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 14:08:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about performance of planner "
}
]
|
[
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > #3 0x8149b98 in ExceptionalCondition (\n> > conditionName=0x81988a0 \"!(((file) > 0 && (file) < \n> (int) SizeVfdCache\n> > && VfdCache[file].fileName != ((void *)0)))\", exceptionP=0x81b93c8,\n> > detail=0x0,\n> > fileName=0x8198787 \"fd.c\", lineNumber=851) at assert.c:70\n> > #4 0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851\n> \n> I'm guessing this is a variant of the problem Philip Warner reported\n> yesterday. Probably WAL-related. Vadim?\n\nProbably, though I don't understand how WAL is related to execution plans.\nOk, it's easy to reproduce - I'll take a look.\n\nVadim\n",
"msg_date": "Wed, 22 Nov 2000 10:42:21 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: regressplans failures "
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> I'm guessing this is a variant of the problem Philip Warner reported\n>> yesterday. Probably WAL-related. Vadim?\n\n> Probably, though I don't understand how WAL is related to execution plans.\n> Ok, it's easy to reproduce - I'll take a look.\n\nCould just be a question of a different pattern of table accesses?\nLet me know if you want help looking...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 14:03:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regressplans failures "
}
]
|
[
{
"msg_contents": "Just playing with the syslog functionality on 7.1devel, and the\nexplain output looks weird to me:\n\nNov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG: MoveOfflineLogs:\nskip 0000000000000006\nNov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG: MoveOfflineLogs:\nskip 0000000000000005\nNov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE: QUERY PLAN:\nNov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on\nupslog (cost=0.00..88.65 rows=165 width=28)\n\nseems like it should be better. \n\nThe output at the client looks fine:\nler=# explain select * from upslog where upslogdate >='2000-11-01';\nNOTICE: QUERY PLAN:\n\nIndex Scan using upslog_index on upslog (cost=0.00..88.65 rows=165\nwidth=28)\n\nEXPLAIN\nler=# \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 22 Nov 2000 15:02:04 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "syslog output from explain looks weird..."
},
{
"msg_contents": "* Larry Rosenman <[email protected]> [001122 15:03]:\n> Just playing with the syslog functionality on 7.1devel, and the\n> explain output looks weird to me:\n> \n> Nov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG: MoveOfflineLogs:\n> skip 0000000000000006\n> Nov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG: MoveOfflineLogs:\n> skip 0000000000000005\n> Nov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE: QUERY PLAN:\n> Nov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on\n> upslog (cost=0.00..88.65 rows=165 width=28)\n> \n> seems like it should be better. \n> \n> The output at the client looks fine:\n> ler=# explain select * from upslog where upslogdate >='2000-11-01';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using upslog_index on upslog (cost=0.00..88.65 rows=165\n> width=28)\n> \n> EXPLAIN\n> ler=# \nAnd here is a fix. What appears to piss off my syslogd is the no\ncharacter lines. So, I added spaces to the output. The new client\noutput looks like:\nler=# explain select * from upslog where upslogdate>='2000-11-01';\nNOTICE: QUERY PLAN:\n \n Index Scan using upslog_index on upslog (cost=0.00..88.65 rows=165\nwidth=28)\n\nEXPLAIN\nler=# \\q\n$ \n\nand the syslog looks like:\nNov 22 15:22:56 lerami pg-test[8299]: [2] NOTICE: QUERY PLAN:\nNov 22 15:22:56 lerami \nNov 22 15:22:56 lerami Index Scan using upslog_index on upslog\n(cost=0.00..88.65 rows=165 width=28)\n\nAnd the patch is:\n\nIndex: src/backend/commands/explain.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/explain.c,v\nretrieving revision 1.62\ndiff -c -r1.62 explain.c\n*** src/backend/commands/explain.c\t2000/11/12 00:36:56\t1.62\n--- src/backend/commands/explain.c\t2000/11/22 21:16:47\n***************\n*** 120,126 ****\n \t\ts = Explain_PlanToString(plan, es);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY PLAN:\\n\\n%s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n--- 120,126 ----\n \t\ts = Explain_PlanToString(plan, es);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY PLAN:\\n \\n %s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 22 Nov 2000 15:24:50 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "* Larry Rosenman <[email protected]> [001122 15:25]:\n> * Larry Rosenman <[email protected]> [001122 15:03]:\n> > Just playing with the syslog functionality on 7.1devel, and the\n> > explain output looks weird to me:\n> > \n> > Nov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG: MoveOfflineLogs:\n> > skip 0000000000000006\n> > Nov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG: MoveOfflineLogs:\n> > skip 0000000000000005\n> > Nov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE: QUERY PLAN:\n> > Nov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on\n> > upslog (cost=0.00..88.65 rows=165 width=28)\n> > \n> > seems like it should be better. \n> > \n> > The output at the client looks fine:\n> > ler=# explain select * from upslog where upslogdate >='2000-11-01';\n> > NOTICE: QUERY PLAN:\n> > \n> > Index Scan using upslog_index on upslog (cost=0.00..88.65 rows=165\n> > width=28)\n> > \n> > EXPLAIN\n> > ler=# \n> And here is a fix. What appears to piss off my syslogd is the no\n> character lines. So, I added spaces to the output. The new client\n> output looks like:\n> ler=# explain select * from upslog where upslogdate>='2000-11-01';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using upslog_index on upslog (cost=0.00..88.65 rows=165\n> width=28)\n> \n> EXPLAIN\n> ler=# \\q\n> $ \n> \n> and the syslog looks like:\n> Nov 22 15:22:56 lerami pg-test[8299]: [2] NOTICE: QUERY PLAN:\n> Nov 22 15:22:56 lerami \n> Nov 22 15:22:56 lerami Index Scan using upslog_index on upslog\n> (cost=0.00..88.65 rows=165 width=28)\n> \nLooking some more, I found some other places that need a space (I\nsuspect...), so here is an updated patch.\n\nIndex: src/backend/commands/explain.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/explain.c,v\nretrieving revision 1.62\ndiff -c -r1.62 explain.c\n*** src/backend/commands/explain.c\t2000/11/12 00:36:56\t1.62\n--- src/backend/commands/explain.c\t2000/11/22 22:52:39\n***************\n*** 110,116 ****\n \t\ts = nodeToString(plan);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY DUMP:\\n\\n%s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n--- 110,116 ----\n \t\ts = nodeToString(plan);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY DUMP:\\n \\n %s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n***************\n*** 120,126 ****\n \t\ts = Explain_PlanToString(plan, es);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY PLAN:\\n\\n%s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n--- 120,126 ----\n \t\ts = Explain_PlanToString(plan, es);\n \t\tif (s)\n \t\t{\n! \t\t\telog(NOTICE, \"QUERY PLAN:\\n \\n %s\", s);\n \t\t\tpfree(s);\n \t\t}\n \t}\n***************\n*** 149,155 ****\n \n \tif (plan == NULL)\n \t{\n! 
\t\tappendStringInfo(str, \"\\n \");\n \t\treturn;\n \t}\n \n***************\n*** 283,289 ****\n \t\t\t\t\t\t plan->startup_cost, plan->total_cost,\n \t\t\t\t\t\t plan->plan_rows, plan->plan_width);\n \t}\n! \tappendStringInfo(str, \"\\n\");\n \n \t/* initPlan-s */\n \tif (plan->initPlan)\n--- 283,289 ----\n \t\t\t\t\t\t plan->startup_cost, plan->total_cost,\n \t\t\t\t\t\t plan->plan_rows, plan->plan_width);\n \t}\n! \tappendStringInfo(str, \"\\n \");\n \n \t/* initPlan-s */\n \tif (plan->initPlan)\n***************\n*** 293,299 ****\n \n \t\tfor (i = 0; i < indent; i++)\n \t\t\tappendStringInfo(str, \" \");\n! \t\tappendStringInfo(str, \" InitPlan\\n\");\n \t\tforeach(lst, plan->initPlan)\n \t\t{\n \t\t\tes->rtable = ((SubPlan *) lfirst(lst))->rtable;\n--- 293,299 ----\n \n \t\tfor (i = 0; i < indent; i++)\n \t\t\tappendStringInfo(str, \" \");\n! \t\tappendStringInfo(str, \" InitPlan\\n \");\n \t\tforeach(lst, plan->initPlan)\n \t\t{\n \t\t\tes->rtable = ((SubPlan *) lfirst(lst))->rtable;\n***************\n*** 369,375 ****\n \n \t\tfor (i = 0; i < indent; i++)\n \t\t\tappendStringInfo(str, \" \");\n! \t\tappendStringInfo(str, \" SubPlan\\n\");\n \t\tforeach(lst, plan->subPlan)\n \t\t{\n \t\t\tes->rtable = ((SubPlan *) lfirst(lst))->rtable;\n--- 369,375 ----\n \n \t\tfor (i = 0; i < indent; i++)\n \t\t\tappendStringInfo(str, \" \");\n! \t\tappendStringInfo(str, \" SubPlan\\n \");\n \t\tforeach(lst, plan->subPlan)\n \t\t{\n \t\t\tes->rtable = ((SubPlan *) lfirst(lst))->rtable;\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 22 Nov 2000 16:54:21 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> Looking some more, I found some other places that need a space (I\n> suspect...), so here is an updated patch.\n\nThis seems like the wrong way to go about it, because anytime anyone\nchanges any elog output anywhere, we'll risk another failure. If\nsyslog can't cope with empty lines, I think the right fix is for the\noutput-to-syslog routine to change the data just before sending ---\nthen there is only one place to fix. See the syslog output routine in\nsrc/backend/utils/error/elog.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 23:44:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: syslog output from explain looks weird... "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001122 22:44]:\n> Larry Rosenman <[email protected]> writes:\n> > Looking some more, I found some other places that need a space (I\n> > suspect...), so here is an updated patch.\n> \n> This seems like the wrong way to go about it, because anytime anyone\n> changes any elog output anywhere, we'll risk another failure. If\n> syslog can't cope with empty lines, I think the right fix is for the\n> output-to-syslog routine to change the data just before sending ---\n> then there is only one place to fix. See the syslog output routine in\n> src/backend/utils/error/elog.c.\nMakes sense. Here's a new patch, now the output even looks better:\nNov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\nNov 23 00:58:04 lerami pg-test[9914]: [2-2] \nNov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n(cost=0.00..2766.62 rows=2308 width=48)\nNov 23 00:58:04 lerami pg-test[9914]: [2-4] \n\n\nIndex: src/backend/utils/error/elog.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/error/elog.c,v\nretrieving revision 1.67\ndiff -c -r1.67 elog.c\n*** src/backend/utils/error/elog.c\t2000/11/14 19:13:27\t1.67\n--- src/backend/utils/error/elog.c\t2000/11/23 06:58:23\n***************\n*** 657,663 ****\n \tseq++;\n \n \t/* divide into multiple syslog() calls if message is too long */\n! \tif (len > PG_SYSLOG_LIMIT)\n \t{\n \t\tstatic char\tbuf[PG_SYSLOG_LIMIT+1];\n \t\tint chunk_nr = 0;\n--- 657,664 ----\n \tseq++;\n \n \t/* divide into multiple syslog() calls if message is too long */\n! \t/* or if the message contains embedded NewLine(s) '\\n' */\n! 
\tif (len > PG_SYSLOG_LIMIT || strchr(line,'\\n') != NULL )\n \t{\n \t\tstatic char\tbuf[PG_SYSLOG_LIMIT+1];\n \t\tint chunk_nr = 0;\n***************\n*** 667,675 ****\n--- 668,684 ----\n \t\t{\n \t\t\tint l;\n \t\t\tint i;\n+ \t\t\t/* if we start at a newline, move ahead one char */\n+ \t\t\tif (line[0] == '\\n')\n+ \t\t\t{\n+ \t\t\t\tline++;\n+ \t\t\t\tlen--;\n+ \t\t\t}\n \n \t\t\tstrncpy(buf, line, PG_SYSLOG_LIMIT);\n \t\t\tbuf[PG_SYSLOG_LIMIT] = '\\0';\n+ \t\t\tif (strchr(buf,'\\n') != NULL) \n+ \t\t\t\t*strchr(buf,'\\n') = '\\0';\n \n \t\t\tl = strlen(buf);\n #ifdef MULTIBYTE\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 23 Nov 2000 01:01:05 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "* Larry Rosenman <[email protected]> [001123 01:10]:\n> * Tom Lane <[email protected]> [001122 22:44]:\n> Makes sense. Here's a new patch, now the output even looks better:\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-2] \n> Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n> (cost=0.00..2766.62 rows=2308 width=48)\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-4] \n> \n> \n[snip]\nAny comments from the committers crowd? (I can't commit it...) \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 23 Nov 2000 19:02:17 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "Applied.\n\n\n> * Tom Lane <[email protected]> [001122 22:44]:\n> > Larry Rosenman <[email protected]> writes:\n> > > Looking some more, I found some other places that need a space (I\n> > > suspect...), so here is an updated patch.\n> > \n> > This seems like the wrong way to go about it, because anytime anyone\n> > changes any elog output anywhere, we'll risk another failure. If\n> > syslog can't cope with empty lines, I think the right fix is for the\n> > output-to-syslog routine to change the data just before sending ---\n> > then there is only one place to fix. See the syslog output routine in\n> > src/backend/utils/error/elog.c.\n> Makes sense. Here's a new patch, now the output even looks better:\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-2] \n> Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n> (cost=0.00..2766.62 rows=2308 width=48)\n> Nov 23 00:58:04 lerami pg-test[9914]: [2-4] \n> \n> \n> Index: src/backend/utils/error/elog.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/error/elog.c,v\n> retrieving revision 1.67\n> diff -c -r1.67 elog.c\n> *** src/backend/utils/error/elog.c\t2000/11/14 19:13:27\t1.67\n> --- src/backend/utils/error/elog.c\t2000/11/23 06:58:23\n> ***************\n> *** 657,663 ****\n> \tseq++;\n> \n> \t/* divide into multiple syslog() calls if message is too long */\n> ! \tif (len > PG_SYSLOG_LIMIT)\n> \t{\n> \t\tstatic char\tbuf[PG_SYSLOG_LIMIT+1];\n> \t\tint chunk_nr = 0;\n> --- 657,664 ----\n> \tseq++;\n> \n> \t/* divide into multiple syslog() calls if message is too long */\n> ! \t/* or if the message contains embedded NewLine(s) '\\n' */\n> ! 
\tif (len > PG_SYSLOG_LIMIT || strchr(line,'\\n') != NULL )\n> \t{\n> \t\tstatic char\tbuf[PG_SYSLOG_LIMIT+1];\n> \t\tint chunk_nr = 0;\n> ***************\n> *** 667,675 ****\n> --- 668,684 ----\n> \t\t{\n> \t\t\tint l;\n> \t\t\tint i;\n> + \t\t\t/* if we start at a newline, move ahead one char */\n> + \t\t\tif (line[0] == '\\n')\n> + \t\t\t{\n> + \t\t\t\tline++;\n> + \t\t\t\tlen--;\n> + \t\t\t}\n> \n> \t\t\tstrncpy(buf, line, PG_SYSLOG_LIMIT);\n> \t\t\tbuf[PG_SYSLOG_LIMIT] = '\\0';\n> + \t\t\tif (strchr(buf,'\\n') != NULL) \n> + \t\t\t\t*strchr(buf,'\\n') = '\\0';\n> \n> \t\t\tl = strlen(buf);\n> #ifdef MULTIBYTE\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Nov 2000 23:37:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "Applied. :-)\n\n> * Larry Rosenman <[email protected]> [001123 01:10]:\n> > * Tom Lane <[email protected]> [001122 22:44]:\n> > Makes sense. Here's a new patch, now the output even looks better:\n> > Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\n> > Nov 23 00:58:04 lerami pg-test[9914]: [2-2] \n> > Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n> > (cost=0.00..2766.62 rows=2308 width=48)\n> > Nov 23 00:58:04 lerami pg-test[9914]: [2-4] \n> > \n> > \n> [snip]\n> Any comments from the committers crowd? (I can't commit it...) \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Nov 2000 23:38:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "Someone ought to backpatch to REL_7_0_PATCHES, as it's syslog also\nlooks bad...\n\nLER\n\n* Bruce Momjian <[email protected]> [001124 22:38]:\n> Applied. :-)\n> \n> > * Larry Rosenman <[email protected]> [001123 01:10]:\n> > > * Tom Lane <[email protected]> [001122 22:44]:\n> > > Makes sense. Here's a new patch, now the output even looks better:\n> > > Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\n> > > Nov 23 00:58:04 lerami pg-test[9914]: [2-2] \n> > > Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n> > > (cost=0.00..2766.62 rows=2308 width=48)\n> > > Nov 23 00:58:04 lerami pg-test[9914]: [2-4] \n> > > \n> > > \n> > [snip]\n> > Any comments from the committers crowd? (I can't commit it...) \n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 24 Nov 2000 22:51:50 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "> Someone ought to backpatch to REL_7_0_PATCHES, as it's syslog also\n> looks bad...\n\nNot sure if we will have a 7.0.4, and I can't see it as a major bug\nproblem anyway.\n\n\n> \n> LER\n> \n> * Bruce Momjian <[email protected]> [001124 22:38]:\n> > Applied. :-)\n> > \n> > > * Larry Rosenman <[email protected]> [001123 01:10]:\n> > > > * Tom Lane <[email protected]> [001122 22:44]:\n> > > > Makes sense. Here's a new patch, now the output even looks better:\n> > > > Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE: QUERY PLAN:\n> > > > Nov 23 00:58:04 lerami pg-test[9914]: [2-2] \n> > > > Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata\n> > > > (cost=0.00..2766.62 rows=2308 width=48)\n> > > > Nov 23 00:58:04 lerami pg-test[9914]: [2-4] \n> > > > \n> > > > \n> > > [snip]\n> > > Any comments from the committers crowd? (I can't commit it...) \n> > > \n> > > -- \n> > > Larry Rosenman http://www.lerctr.org/~ler\n> > > Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Nov 2000 23:58:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "> > Someone ought to backpatch to REL_7_0_PATCHES, as it's syslog also\n> > looks bad...\n> \n> Not sure if we will have a 7.0.4, and I can't see it as a major bug\n> problem anyway.\n\nThinking about 7.0.3 has a new option to enable syslog, we might have\nmore often complaints from users than before, no? I think it's worth\nto make back patches...\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 26 Nov 2000 20:49:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: syslog output from explain looks weird..."
},
{
"msg_contents": "Hi,\n\ncan anyone tell me when Postgresql 7.1 will be released?\n\nthanks,\nXuYifeng\n\n\n",
"msg_date": "Sun, 26 Nov 2000 20:18:54 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "when will PostgreSQL 7.1?"
},
{
"msg_contents": "On Sun, 26 Nov 2000, xuyifeng wrote:\n\n> Hi,\n> \n> can anyone tell me when Postgresql 7.1 will be released?\n\nabout a month after it goes beta ... which should be over the next couple\nof weeks ...\n\n\n",
"msg_date": "Sun, 26 Nov 2000 16:19:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when will PostgreSQL 7.1?"
}
]
|
[
{
"msg_contents": "> I assume you're talking about this DEBUG stuff:\n> \n> ...\n> Creating directory /home/postgres/testversion/data/pg_xlog\n> Creating template1 database in /home/postgres/testversion/data/base/1\n> DEBUG: starting up\n> DEBUG: database system was shut down at 2000-11-22 14:38:01\n\nI had to add StartupXLOG call when bootstraping to handle OIDs\ncorrectly.\n\n> Not sure whether we should change any code or not. I don't much like\n> the idea of having initdb send stderr to /dev/null, for example.\n> Perhaps StartupXLOG could be made a little less chatty, however?\n\nI considered messages during database system startup/shutdown as having\nhigher interest/priority than regular debug messages. Some if()\nwouldn't be bad, probably.\n\n> BTW, Vadim, what is the reasoning for your having invented aliases\n> STOP and LOG for elog levels REALLYFATAL and DEBUG? I think it's\n> confusing to have more than one name for the same severity level.\n> If we're going to open up the issue of renaming the elog levels to\n> something saner, there are a whole bunch of changes to be undertaken,\n> and these aren't the names I'd choose anyway ...\n\nWell, as stated above I would think about XLOG (maybe some others?)\nmessages as about something different from debug ones. Look at syslog -\nthere are NOTICE & INFO logging levels, not just DEBUG.\n\nAs for STOP - there was no REALLYFATAL at the time I started XLOG codding\n(> year ago)... Anyway, I like STOP more than REALLYFATAL -:) But wouldn't\ninsist on this name.\n\nVadim\n",
"msg_date": "Wed, 22 Nov 2000 14:06:31 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Talkative initdb, elog message levels"
}
]
|
[
{
"msg_contents": "> \n> On Tue, 21 Nov 2000, Christopher Kings-Lynne wrote:\n> \n> > > > Problem is that there are 5 difference types of constraints,\n> > > implemented in\n> > > > 5 different ways. Do you want a unifed, central catalog of\n> > > constraints, or\n> > > > just for some of them, or what?\n> > >\n> > > Dunno. Maybe a unified representation would make more sense, or maybe\n> > > it's OK to treat them separately. The existing implementations of the\n> > > different types of constraints were done at different times, and perhaps\n> > > are different \"just because\" rather than for any good reason. We need\n> > > investigation before we can come up with a reasonable proposal.\n> > \n> > It strikes me that having a catalog (so to speak) of all contraints, with\n> > flags in the tables where the contraints are implemented would allow a\n> > separation of presentation and implementation.\n> \n> Yeah, the hard part is storing enough information to recover the\n> constraint in an easy way without going to the implementation details,\n> strings aren't sufficient by themselves because that gets really difficult\n> to maintain as table/columns change or are dropped. Maybe a central\n> catalog like the above and a backend function that takes care of\n> formatting to text would work. Or keeping track of the dependent objects\n> and re-figuring the text form (or drop constraint, or whatever) when those\n> objects are changed/dropped.\n> \n> I think that combining different constraints is good to some extent\n> because there are alot of problems with many constraints (the RI ones have\n> problems, check constraints are currently not deferrable AFAIK,\n> the unique constraint doesn't actually have the correct semantics) and\n> maybe thinking about the whole set of them at the same time would be a\n> good idea.\n> \n> > > > I assume that column contraints implicitly become table\n> > > constraints. 
This\n> > > > will also make it easy to have global unique contraint names.\n> > > Actually -\n> > > > are the constraint names currently unique for an entire database?\n> > >\n> > > No, and they shouldn't be --- only per-table, I think.\n> > \n> > Oops - correct. Wasn't paying attention. I forgot that the table name is\n> > specified as part of the ALTER statement.\n> \n> I'm not sure actually, it seems to say in the syntax rules for the\n> constraint name definition that the qualified identifier of a constraint\n> needs to be different from any other qualified identifier for any other\n> constraint in the same schema, so Christopher may have been right the\n> first time (given we don't have schema).\n\ntom and i spoke of this problem at the Open Source Database\nSummit awhile back. \n\nin a nutshell, postgres doesn't maintain explicit \nrelationships between tables. my experience says\nthat foreign/primary keys fall under the\ncategory of extended domains, not rules, and, hence,\npostgres is a bit out of the loop.\n\nmy vote is for storing the relationships in\nthe system tables, as most commercial DBs do.\notherwise, an entire class of DDL applications\nwon't be possible under postgres.\n\njohn\n\n-\n\nJohn Scott\nSenior Partner\nAugust Associates\n\n web: http://www.august.com/~john\n\n....................................\nGet your own free email account from\nhttp://www.popmail.com\n\n",
"msg_date": "Wed, 22 Nov 2000 17:59:08 -0600 (CST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Table/Column Constraints "
}
]
|
[
{
"msg_contents": "> >> I'm guessing this is a variant of the problem Philip \n> >> Warner reported yesterday. Probably WAL-related. Vadim?\n> \n> > Probably, though I don't understand how WAL is related to \n> > execution plans. Ok, it's easy to reproduce - I'll take a look.\n> \n> Could just be a question of a different pattern of table accesses?\n> Let me know if you want help looking...\n\nFixed - fdstate was not properly setted in fd.c:fileNameOpenFile\nwith WAL enabled, sorry.\n\nPhilip, please try to reproduce crash.\n\nVadim\n",
"msg_date": "Wed, 22 Nov 2000 16:58:53 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: regressplans failures "
}
]
|
[
{
"msg_contents": ">\n>Fixed - fdstate was not properly setted in fd.c:fileNameOpenFile\n>with WAL enabled, sorry.\n>\n>Philip, please try to reproduce crash.\n>\n\nSeems to have fixed the crash for me as well. Thanks.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 13:06:40 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: regressplans failures "
}
]
|
[
{
"msg_contents": "After Tom's bug fix, I can now load the data model with no\nproblem.\n\nVery cool, I'm pumped!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 22 Nov 2000 18:56:42 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "OpenACS datamodel vs. current PG 7.1 sources"
}
]
|
[
{
"msg_contents": "Hello, pgsql-hackers\n\nCan I change postgresql's source to make the following plpgsql works ?\nIf could, would you please tell me where can i change the source?\nI want to try it.\n\n-------------------------------------------------------\nCREATE FUNCTION users_select_by_id(@id int4)\nRETURNS SETOF users_set\nAS '\n\ndeclare rec record;\n\nbegin\n\n\tfor rec in\n\t\tselect * from users where id = @id\n\tloop\n\t\treturn next rec;\n\tend loop;\n\treturn;\n\nend; 'LANGUAGE plpgsql;\n-------------------------------------------------------\n\n\nThanks & Regards\n\nArnold.Zhu\n2000-11-23\n\n\n\n\n",
"msg_date": "Thu, 23 Nov 2000 11:59:58 +0800",
"msg_from": "\"Arnold.Zhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to make @id or $id as parameter name in plpgsql, is it available?"
},
{
"msg_contents": "On Thu, Nov 23, 2000 at 11:59:58AM +0800, Arnold.Zhu wrote:\n\n> Can I change postgresql's source to make the following plpgsql works ?\n> If could, would you please tell me where can i change the source?\n> I want to try it.\n\nNo need -- PostgreSQL 8.0 (currently in beta) already supports\nargument names in a function's argument list, although I think\nonly PL/pgSQL currently does anything with them.\n\n> CREATE FUNCTION users_select_by_id(@id int4)\n\nChange @id to be a valid identifier name and it should work.\nYou can keep using @id if you double-quote it as \"@id\".\n\nIf that's not what you meant then please be more specific.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Tue, 23 Nov 2004 22:46:54 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to make @id or $id as parameter name in plpgsql,\n\tis it available?"
},
{
"msg_contents": "On Tue, Nov 23, 2004 at 10:46:54PM -0700, Michael Fuhr wrote:\n\n> On Thu, Nov 23, 2000 at 11:59:58AM +0800, Arnold.Zhu wrote:\n ^^^^\nUmmm...did you know your clock was four years behind?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Tue, 23 Nov 2004 22:52:05 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to make @id or $id as parameter name in plpgsql,\n\tis it available?"
}
]
|
[
{
"msg_contents": "\nThere is a minor breakage of existing apps that occurs with current CVS.\n\nIn 7.0 doing the following:\n\n create table tsc(f1 int4 , f2 int4);\n insert into tsc values(1,4);\n select sum(f1)/sum(f2) from tsc;\n\nwould actually result in zero, since it worked with integers throughout. As\na result, I adopted the following strategy:\n\n select cast(sum(f1) as float8)/sum(f2) from tsc;\n\nwhich produced the expected results.\n\nNow in 7.1 this breaks with:\n\nERROR: Unable to identify an operator '/' for types 'float8' and 'numeric'\n You will have to retype this query using an explicit cast\n\nIs there a reason why it doesn't promote float8 to numeric?\n\n \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 15:00:26 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Breaking of existing apps with CVS version"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> select cast(sum(f1) as float8)/sum(f2) from tsc;\n\n> Now in 7.1 this breaks with:\n\n> ERROR: Unable to identify an operator '/' for types 'float8' and 'numeric'\n> You will have to retype this query using an explicit cast\n\n> Is there a reason why it doesn't promote float8 to numeric?\n\nActually, if we were to do any automatic coercion in this case,\nI think that the SQL spec requires casting in the other direction,\nnumeric to float8. Mixing exact and inexact numerics (to use the\nspec's terminology) can hardly be expected to produce an exact result.\n\nThe reason for the change in behavior is that sum(int4) now produces\nnumeric, not int4, to avoid overflow problems. I believe this change\nis for the better both in practical terms and in terms of closer\nadherence to the intent of the SQL spec. However, it may indeed cause\npeople to run into the numeric-vs-float8 ambiguity.\n\nI'd prefer that we not try to solve this issue for 7.1. We've gone\naround on the question of changing the numeric-type promotion hierarchy\na couple of times, without reaching any clear resolution of what to do\n--- so I doubt that a quick hack in the waning days of the 7.1 cycle\nwill prove satisfactory. Let's leave it be until we have a real\nsolution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Nov 2000 23:27:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Breaking of existing apps with CVS version "
},
{
"msg_contents": "At 23:27 22/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>\n>> Is there a reason why it doesn't promote float8 to numeric?\n>\n>Mixing exact and inexact numerics (to use the\n>spec's terminology) can hardly be expected to produce an exact result.\n\nI suppose it's a question of working in the most accurate representation\nfor each number to minimize inaccuracy, then representing the result as\naccurately as possible. Since numeric is more accurate for calculation, I\nassumes we'd use it if we had to choose. How we represent the result may be\nup to the SQL standard.\n\nAll that aside, I was more worried that when people start upgrading to 7.1\nwe might be a flood of \"my application doesn't work any more\" bug reports. \n\n\n>However, it may indeed cause\n>people to run into the numeric-vs-float8 ambiguity.\n\nIt's a little more than an ambiguity; anyone that mixes floats with sums\nwill get a crash in their application.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 15:51:45 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Breaking of existing apps with CVS version "
}
]
|
[
{
"msg_contents": "Hello,\nI've looked at the resources available through the web page to CVS and other\nstuff,\nhowever I cant find a statement of whats likely to be in 7.1 and what is planned\nfor later.\n\nReason: I want to know if any of these features are scheduled.\n\n1. Calculated fields in table definitions . eg.\n\n Create table test (\n A Integer,\n B integer,\n the_sum As (A+B),\n);\n\nThis is like MSSQL\n\n2. Any parameterised triggers\n\n3. Any parameterised stored procedures that return a result set.\n\n\nThese are _extraordinarily_ useful for application development.\n\nIf anyone has a way of bolting on any of these to 7.0, I'd be keen to hear from\nyou.\n\nRegards\n\nJohn\n\n\n",
"msg_date": "Thu, 23 Nov 2000 18:00:34 +1300",
"msg_from": "\"John Huttley\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please advise features in 7.1"
},
{
"msg_contents": "\"John Huttley\" <[email protected]> writes:\n> Reason: I want to know if any of these features are scheduled.\n\n> 1. Calculated fields in table definitions . eg.\n\n> Create table test (\n> A Integer,\n> B integer,\n> the_sum As (A+B),\n> );\n\nYou can do that now (and for many versions past) with a trigger.\nIt's not quite as convenient as it ought to be, but it's possible.\nAFAIK there's no change in that situation for 7.1.\n\n> 2. Any parameterised triggers\n\nWe've had parameterized triggers for years. Maybe you attach some\nmeaning to that term beyond what I do?\n\n> 3. Any parameterised stored procedures that return a result set.\n\nThere is some support (dating back to Berkeley Postquel) for functions\nreturning sets, but it's pretty ugly and limited. Proper support might\nhappen in 7.2 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2000 01:05:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 "
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"John Huttley\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, 23 November 2000 19:05\nSubject: Re: [HACKERS] Please advise features in 7.1\n\n\n> \"John Huttley\" <[email protected]> writes:\n> > Reason: I want to know if any of these features are scheduled.\n>\n> > 1. Calculated fields in table definitions . eg.\n>\n> > Create table test (\n> > A Integer,\n> > B integer,\n> > the_sum As (A+B),\n> > );\n>\n> You can do that now (and for many versions past) with a trigger.\n> It's not quite as convenient as it ought to be, but it's possible.\n> AFAIK there's no change in that situation for 7.1.\n>\n\n\nYes, Perhaps defining the table with a dummy field and setting up a\n'before'\ntrigger which replaced that field with a calculated value?\n\nMessy but feasible.\n\n\n> > 2. Any parameterised triggers\n>\n> We've had parameterized triggers for years. Maybe you attach some\n> meaning to that term beyond what I do?\n\nI'm referring to the manual that says functions used for triggers must have\nno parameters\nand return a type Opaque. And indeed it is impossible to create a trigger\nfrom a plSQL function that takes any parameters.\n\nThus if we have a lot of triggers which are very similar, we cannot just use\none function\nand pass an identifying parameter or two to it. We must create an\nindividual function for each trigger.\n\nIts irritating more than fatal.\n\n> > 3. Any parameterised stored procedures that return a result set.\n>\n> There is some support (dating back to Berkeley Postquel) for functions\n> returning sets, but it's pretty ugly and limited. Proper support might\n> happen in 7.2 ...\n\nSomething to look forward to! Meanwhile I'll have a play and see if its\npossible to use a read trigger\nto populate a temporary table. hmm, that might require a statement level\ntrigger. 
Another thing for 7.2,\ni guess.\n\nThe application programming we are doing now utilises stored procedures\nreturning record sets\n(MSSQL) and the lack is showstopper in our migration plans. Sigh.\n\n\nThanks Tom\n\nRegards\n\n\nJohn\n\n\n",
"msg_date": "Thu, 23 Nov 2000 20:44:04 +1300",
"msg_from": "\"john huttley\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 "
},
{
"msg_contents": "At 18:00 23/11/00 +1300, John Huttley wrote:\n>\n>1. Calculated fields in table definitions . eg.\n>\n\nCan't really do this - you might want to consider a view with an insert &\nupdate rule. I'm not sure how flexible rules are and you may not be able to\nwrite rules to make views functions like tables, but that is at least part\nof their purpose I think.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 22:49:53 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1"
},
{
"msg_contents": "At 06:00 PM 11/23/00 +1300, John Huttley wrote:\n\n>1. Calculated fields in table definitions . eg.\n>\n> Create table test (\n> A Integer,\n> B integer,\n> the_sum As (A+B),\n>);\n\n...\n\n>These are _extraordinarily_ useful for application development.\n>\n>If anyone has a way of bolting on any of these to 7.0, I'd be keen to hear\nfrom\n>you.\n\nCreate a trigger on insert/update for this case...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 23 Nov 2000 06:07:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1"
},
{
"msg_contents": "\"john huttley\" <[email protected]> writes:\n>> We've had parameterized triggers for years. Maybe you attach some\n>> meaning to that term beyond what I do?\n\n> I'm referring to the manual that says functions used for triggers must\n> have no parameters and return a type Opaque.\n\nThe function has to be declared that way, but you can actually pass a\nset of string parameters to it from the CREATE TRIGGER command. The\nstrings show up in some special variable or other inside the function.\n(No, I don't know why it was done in that ugly way...) See the manual's\ndiscussion of trigger programming.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2000 10:30:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 "
},
{
"msg_contents": "Thanks for your help, everyone.\n\nThis is a summary of replies.\n\n1. Calculated fields in table definitions . eg.\n\n Create table test (\n A Integer,\n B integer,\n the_sum As (A+B),\n);\n\nThis functionality can be achieved through the use of views.\nImplementing the create table syntax may not be too hard,\nbut not in 7.1...\n\n2 Parameterised Triggers\n\nFunctionality is there, just that the documentation gave the wrong implication.\nAn user manual example of using parameterised triggers to implement referential\nintegrity\nwould be welcome.\n\n3. Stored Procedures returning a record set.\n\nDream on!\n\n\nRegards\n\nJohn\n\n\n",
"msg_date": "Tue, 28 Nov 2000 14:04:01 +1300",
"msg_from": "\"John Huttley\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "\nOn Tue, 28 Nov 2000, John Huttley wrote:\n\n> 3. Stored Procedures returning a record set.\n> \n> Dream on!\n\nThis is something I would be really interested to see working. What are the\nissues? my understanding is that it is technically feasible but too\ncomplicated to add to PL/PGsql? it seems to me a basic service that needs\nto be implemented soon, even if its just returning multiple rows of one\ncolumn...\n\n\n- Andrew\n\n\n",
"msg_date": "Tue, 28 Nov 2000 14:44:05 +1100 (EST)",
"msg_from": "Andrew Snow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "Hi,\n\n how long is PG7.1 already in beta testing? can it be released before Christmas day?\n can PG7.1 will recover database from system crash?\n\n Thanks,\n\nXuYifeng\n\n",
"msg_date": "Tue, 28 Nov 2000 16:17:21 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "beta testing version"
},
{
"msg_contents": "At 04:17 PM 11/28/00 +0800, xuyifeng wrote:\n>Hi,\n>\n> how long is PG7.1 already in beta testing? can it be released before Christmas day?\n> can PG7.1 will recover database from system crash?\n\nThis guy's a troll from the PHP Builder's site (at least, Tim Perdue and I suspect this\ndue to some posts he made in regard to Tim's SourceForge/Postgres article).\n\nSince he's read Tim's article, and at least some of the follow-up posts (given that\nhe's posted responses himself), he should know by now that PG 7.1 is still in a pre-beta\nstate and won't be released before Christmas day. I also posted a fairly long answer\nto a question Tim's posted at phpbuilder.com regarding recoverability and this guy's\nundoubtably read it, too.\n\nHave I forgotten anything, xuyifeng?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 28 Nov 2000 06:37:52 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "no doubt, I have touched some problems PG has, right? if PG is so good, \nis there any necessary for the team to improve PG again?\n\nRegards,\nXuYifeng\n\n----- Original Message ----- \nFrom: Don Baccus <[email protected]>\nTo: xuyifeng <[email protected]>; <[email protected]>\nSent: Tuesday, November 28, 2000 10:37 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> At 04:17 PM 11/28/00 +0800, xuyifeng wrote:\n> >Hi,\n> >\n> > how long is PG7.1 already in beta testing? can it be released before Christmas day?\n> > can PG7.1 will recover database from system crash?\n> \n> This guy's a troll from the PHP Builder's site (at least, Tim Perdue and I suspect this\n> due to some posts he made in regard to Tim's SourceForge/Postgres article).\n> \n> Since he's read Tim's article, and at least some of the follow-up posts (given that\n> he's posted responses himself), he should know by now that PG 7.1 is still in a pre-beta\n> state and won't be released before Christmas day. I also posted a fairly long answer\n> to a question Tim's posted at phpbuilder.com regarding recoverability and this guy's\n> undoubtably read it, too.\n> \n> Have I forgotten anything, xuyifeng?\n> \n> \n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n",
"msg_date": "Tue, 28 Nov 2000 23:15:23 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 11:15 PM 11/28/00 +0800, xuyifeng wrote:\n>no doubt, I have touched some problems PG has, right? if PG is so good, \n>is there any necessary for the team to improve PG again?\n\nSee? Troll...\n\nThe guy worships MySQL, just in case folks haven't made the connection.\n\nI'm going to ignore him from now on, suggest others do the same, I'm sure\nhe'll go away eventually.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 28 Nov 2000 07:16:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "you are complete wrong, if I don't like PG, I'll never go here or talk anything about PG, I don't care it.\nI just want PG can be improved quickly, for me crash recover is very urgent problem,\notherewise PG is forced to stay on my desktop machine, We'll dare not move it to our Server,\nI always see myself as a customer, customer is always right.\n\nRegards,\nXuYifeng\n\n\n----- Original Message ----- \nFrom: Don Baccus <[email protected]>\nTo: xuyifeng <[email protected]>; <[email protected]>\nSent: Tuesday, November 28, 2000 11:16 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> At 11:15 PM 11/28/00 +0800, xuyifeng wrote:\n> >no doubt, I have touched some problems PG has, right? if PG is so good, \n> >is there any necessary for the team to improve PG again?\n> \n> See? Troll...\n> \n> The guy worships MySQL, just in case folks haven't made the connection.\n> \n> I'm going to ignore him from now on, suggest others do the same, I'm sure\n> he'll go away eventually.\n> \n> \n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n",
"msg_date": "Tue, 28 Nov 2000 23:26:00 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Tue, Nov 28, 2000 at 02:04:01PM +1300, John Huttley wrote:\n> Thanks for your help, everyone.\n> \n> This is a summary of replies.\n> \n> 1. Calculated fields in table definitions . eg.\n> \n> Create table test (\n> A Integer,\n> B integer,\n> the_sum As (A+B),\n> );\n> \n> This functionality can be achieved through the use of views.\n\nUsing a view for this isn't quite the same functionality as a computed\nfield, from what I understand, since the calculation will be done at\nSELECT time, rather than INSERT/UPDATE.\n\nThis can also be done with a trigger, which, while more cumbersome to\nwrite, would be capable of doing the math at modification time.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Tue, 28 Nov 2000 09:43:24 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "> no doubt, I have touched some problems PG has, right? if PG is so good,\n> is there any necessary for the team to improve PG again?\n\n*rofl*\n\nGood call Don :)\n\n - Thomas\n",
"msg_date": "Tue, 28 Nov 2000 15:49:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Tue, 28 Nov 2000, xuyifeng wrote:\n\n> no doubt, I have touched some problems PG has, right? if PG is so good, \n> is there any necessary for the team to improve PG again?\n\nThere is always room for improvements for any software package ... whether\nit be PgSQL, Linux, FreeBSD or PHPBuilder ... as ppl learn more,\nunderstand more and come up with new techniques, things tend to get better\n...\n\n > > Regards,\n> XuYifeng\n> \n> ----- Original Message ----- \n> From: Don Baccus <[email protected]>\n> To: xuyifeng <[email protected]>; <[email protected]>\n> Sent: Tuesday, November 28, 2000 10:37 PM\n> Subject: Re: [HACKERS] beta testing version\n> \n> \n> > At 04:17 PM 11/28/00 +0800, xuyifeng wrote:\n> > >Hi,\n> > >\n> > > how long is PG7.1 already in beta testing? can it be released before Christmas day?\n> > > can PG7.1 will recover database from system crash?\n> > \n> > This guy's a troll from the PHP Builder's site (at least, Tim Perdue and I suspect this\n> > due to some posts he made in regard to Tim's SourceForge/Postgres article).\n> > \n> > Since he's read Tim's article, and at least some of the follow-up posts (given that\n> > he's posted responses himself), he should know by now that PG 7.1 is still in a pre-beta\n> > state and won't be released before Christmas day. I also posted a fairly long answer\n> > to a question Tim's posted at phpbuilder.com regarding recoverability and this guy's\n> > undoubtably read it, too.\n> > \n> > Have I forgotten anything, xuyifeng?\n> > \n> > \n> > \n> > - Don Baccus, Portland OR <[email protected]>\n> > Nature photos, on-line guides, Pacific Northwest\n> > Rare Bird Alert Service and other goodies at\n> > http://donb.photo.net.\n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 28 Nov 2000 12:47:39 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Tue, 28 Nov 2000, xuyifeng wrote:\n\n> you are complete wrong, if I don't like PG, I'll never go here or talk\n> anything about PG, I don't care it. I just want PG can be improved\n> quickly, for me crash recover is very urgent problem, otherewise PG is\n> forced to stay on my desktop machine, We'll dare not move it to our\n> Server, I always see myself as a customer, customer is always right.\n\nexcept when they are wrong ...\n\n... but, as for crash recover, the plan right now is that on Thursday, Dec\n1st, 7.1 goes beta ... if you are so keen on the crash recovery stuff,\nwhat I'd recommend is grab the snapshot, and work with that on your\nmachine, get used to the features that it presents and report any bugs you\nfind. Between beta and release, there will be bug fixes, but no features\nadded, so it makes for a relatively safe starting point. I wouldn't use\nit in production (or, rather, I personally would, but it isn't something\nI'd recommend for the faint of heart), but it will give you a base to\nstart from ...\n\nrelease will be shortly into the new year, depending on what sorts of bugs\nppl report and how quickly they can be fixed ... if all goes well, Jan 1st\nwill be release date, but, from experience, we're looking at closer to jan\n15th :)\n\n > > Regards,\n> XuYifeng\n> \n> \n> ----- Original Message ----- \n> From: Don Baccus <[email protected]>\n> To: xuyifeng <[email protected]>; <[email protected]>\n> Sent: Tuesday, November 28, 2000 11:16 PM\n> Subject: Re: [HACKERS] beta testing version\n> \n> \n> > At 11:15 PM 11/28/00 +0800, xuyifeng wrote:\n> > >no doubt, I have touched some problems PG has, right? if PG is so good, \n> > >is there any necessary for the team to improve PG again?\n> > \n> > See? Troll...\n> > \n> > The guy worships MySQL, just in case folks haven't made the connection.\n> > \n> > I'm going to ignore him from now on, suggest others do the same, I'm sure\n> > he'll go away eventually.\n> > \n> > \n> > \n> > - Don Baccus, Portland OR <[email protected]>\n> > Nature photos, on-line guides, Pacific Northwest\n> > Rare Bird Alert Service and other goodies at\n> > http://donb.photo.net.\n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 28 Nov 2000 12:51:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Tue, 28 Nov 2000, Hannu Krosing wrote:\n\n> xuyifeng wrote:\n> > \n> \n> I just noticed this conversation so I have not followed all of it, \n> but you seem to have strange priorities\n> \n> > I just want PG can be improved quickly, for me crash recover is very urgent problem,\n> \n> Crash avoidance is usually much more urgent, at least on production\n> servers.\n\nGood call, but I kinda jumped to the conclusion that since PgSQL itself\nisn't that crash prone, its his OS or his hardware that was the problem :0\n\n\n",
"msg_date": "Tue, 28 Nov 2000 12:53:06 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "This is one of the not-so-stomped boxes running PostgreSQL -- I've never\nrestarted PostgreSQL on it since it was installed.\n\n12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n\nI had some index corruption problems in 6.5.3 but since 7.0.X I haven't\nheard so much as a peep from any PostgreSQL backend. It's superbly stable on\nall my machines..\n\nDamn good work guys.\n\n-Mitch\n\n----- Original Message -----\nFrom: \"The Hermit Hacker\" <[email protected]>\nTo: \"Hannu Krosing\" <[email protected]>\nCc: \"xuyifeng\" <[email protected]>; <[email protected]>;\n\"Don Baccus\" <[email protected]>\nSent: Tuesday, November 28, 2000 8:53 AM\nSubject: Re: [HACKERS] beta testing version\n\n\n> On Tue, 28 Nov 2000, Hannu Krosing wrote:\n>\n> > xuyifeng wrote:\n> > >\n> >\n> > I just noticed this conversation so I have not followed all of it,\n> > but you seem to have strange priorities\n> >\n> > > I just want PG can be improved quickly, for me crash recover is very\nurgent problem,\n> >\n> > Crash avoidance is usually much more urgent, at least on production\n> > servers.\n>\n> Good call, but I kinda jumped to the conclusion that since PgSQL itself\n> isn't that crash prone, its his OS or his hardware that was the problem :0\n>\n>\n>\n\n",
"msg_date": "Tue, 28 Nov 2000 10:12:27 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "xuyifeng wrote:\n> \n\nI just noticed this conversation so I have not followed all of it, \nbut you seem to have strange priorities\n\n> I just want PG can be improved quickly, for me crash recover is very urgent problem,\n\nCrash avoidance is usually much more urgent, at least on production\nservers.\n\n> otherewise PG is forced to stay on my desktop machine, We'll dare not move it to our Server,\n\nWhy do you keep crashing your server ?\n\nIf your desktop crashes less often than your server you might exchange\nthem, no?\n\n> I always see myself as a customer, customer is always right.\n\nI'd like to see myself as being always right too ;)\n\n-------------------\nHannu\n",
"msg_date": "Tue, 28 Nov 2000 18:38:34 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "Mitch Vincent wrote:\n> \n> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n> restarted PostgreSQL on it since it was installed.\n> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n> heard so much as a peep from any PostgreSQL backend. It's superbly stable on\n> all my machines..\n\nI have a 6.5.x box at 328 days of active use.\n\nCrash \"recovery\" seems silly to me. :-)\n\n-Bop\n\n--\nBrought to you from boop!, the dual boot Linux/Win95 Compaq Presario 1625\nlaptop, currently running RedHat 6.1. Your bopping may vary.\n",
"msg_date": "Tue, 28 Nov 2000 15:25:05 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 03:25 PM 11/28/00 -0700, Ron Chmara wrote:\n>Mitch Vincent wrote:\n>> \n>> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n>> restarted PostgreSQL on it since it was installed.\n>> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n>> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n>> heard so much as a peep from any PostgreSQL backend. It's superbly stable on\n>> all my machines..\n>\n>I have a 6.5.x box at 328 days of active use.\n>\n>Crash \"recovery\" seems silly to me. :-)\n\nWell, not really ... but since our troll is a devoted MySQL user, it's a bit\nof a red-herring anyway, at least as regards his own server.\n\nYou know, the one he's afraid to put Postgres on, but sleeps soundly at\nnight knowing the mighty bullet-proof MySQL with its full transaction\nsemantics, archive logging and recovery from REDO logs and all that\nwill save him? :)\n\nAgain ... he's a troll, not even a very entertaining one.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 28 Nov 2000 14:58:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Tue, 28 Nov 2000, Don Baccus wrote:\n\n> At 03:25 PM 11/28/00 -0700, Ron Chmara wrote:\n> >Mitch Vincent wrote:\n> >> \n> >> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n> >> restarted PostgreSQL on it since it was installed.\n> >> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n> >> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n> >> heard so much as a peep from any PostgreSQL backend. It's superbly stable on\n> >> all my machines..\n> >\n> >I have a 6.5.x box at 328 days of active use.\n> >\n> >Crash \"recovery\" seems silly to me. :-)\n> \n> Well, not really ... but since our troll is a devoted MySQL user, it's a bit\n> of a red-herring anyway, at least as regards his own server.\n> \n> You know, the one he's afraid to put Postgres on, but sleeps soundly at\n> night knowing the mighty bullet-proof MySQL with its full transaction\n> semantics, archive logging and recovery from REDO logs and all that\n> will save him? :)\n> \n> Again ... he's a troll, not even a very entertaining one.\n\nOr informed?\n\n\n",
"msg_date": "Tue, 28 Nov 2000 19:10:58 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "NO, I just tested how solid PgSQL is, I run a program busy inserting record into PG table, when I \nsuddenly pulled out power from my machine and restarted PG, I can not insert any record into database\ntable, all backends are dead without any respone (not core dump), note that I am using FreeBSD 4.2, \nit's rock solid, it's not OS crash, it just losted power. We use WindowsNT and MSSQL on our production\nserver, before we accept MSSQL, we use this method to test if MSSQL can endure this kind of strik,\nit's OK, all databases are safely recovered, we can continue our work. we are a stock exchange company,\nour server are storing millilion $ finance number, we don't hope there are any problems in this case, \nwe are using UPS, but UPS is not everything, it you bet everything on UPS, you must be idiot. \nI know you must be an avocation of PG, but we are professional customer, corporation user, we store critical\ndata into database, not your garbage data.\n\nRegards,\nXuYifeng\n\n----- Original Message ----- \nFrom: Don Baccus <[email protected]>\nTo: Ron Chmara <[email protected]>; Mitch Vincent <[email protected]>; <[email protected]>\nSent: Wednesday, November 29, 2000 6:58 AM\nSubject: Re: [HACKERS] beta testing version\n\n\n> At 03:25 PM 11/28/00 -0700, Ron Chmara wrote:\n> >Mitch Vincent wrote:\n> >> \n> >> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n> >> restarted PostgreSQL on it since it was installed.\n> >> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n> >> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n> >> heard so much as a peep from any PostgreSQL backend. It's superbly stable on\n> >> all my machines..\n> >\n> >I have a 6.5.x box at 328 days of active use.\n> >\n> >Crash \"recovery\" seems silly to me. :-)\n> \n> Well, not really ... but since our troll is a devoted MySQL user, it's a bit\n> of a red-herring anyway, at least as regards his own server.\n> \n> You know, the one he's afraid to put Postgres on, but sleeps soundly at\n> night knowing the mighty bullet-proof MySQL with its full transaction\n> semantics, archive logging and recovery from REDO logs and all that\n> will save him? :)\n> \n> Again ... he's a troll, not even a very entertaining one.\n> \n> \n> \n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n",
"msg_date": "Wed, 29 Nov 2000 09:59:34 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Wed, Nov 29, 2000 at 09:59:34AM +0800, xuyifeng wrote:\n> NO, I just tested how solid PgSQL is, I run a program busy inserting\n> record into PG table, when I suddenly pulled out power from my machine ...\n\nNobody claims PostgreSQL is proof against power failures.\n\n> ... We use WindowsNT and MSSQL on our production server,\n> before we accept MSSQL, we use this method to test if MSSQL can endure\n> this kind of strike, it's OK, all databases are safely recovered, we\n> can continue our work. \n\nYou got lucky. Period. MSSQL is not proof against power failures,\nand neither is NTFS. In particular, that the database accepted \ntransactions afterward is far from proof that its files were not \ncorrupted.\n\nIncompetent testers produce invalid tests. Invalid tests lead to \nmeaningless conclusions. Incompetent testers' employers suffer\nfrom false confidence, and poor decision-making.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Tue, 28 Nov 2000 18:26:30 -0800",
"msg_from": "Nathan Myers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "> server, before we accept MSSQL, we use this method to test if MSSQL can\nendure this kind of strik,\n> it's OK, all databases are safely recovered, we can continue our work. we\nare a stock exchange company,\n\nAnd how exactly did you test the integrity of your data? Unless every single\nrecord has got at least a CRC stored somewhere, you won't be able AT ALL to\ncheck for database integrity. The reports from NTFS and MSSQL internal\nchecking are meaningless for your data integrity.\n\nWe are doing this checksumming in our project, and already got a few nasty\nsurprises when the \"CRC daemon\" stumbled over a few corrupted records we\nnever would have discovered otherwise. Exactly this checksumming weeded out\nour server alternatives; at present only PostgreSQL is left, was the most\nreliable of all.\n\nHorst\n\n",
"msg_date": "Wed, 29 Nov 2000 19:51:43 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "xuyifeng wrote:\n> \n> NO, I just tested how solid PgSQL is, I run a program busy inserting record into PG table, when I\n> suddenly pulled out power from my machine and restarted PG, I can not insert any record into database\n> table, all backends are dead without any respone (not core dump), note that I am using FreeBSD 4.2,\n> it's rock solid, it's not OS crash, it just losted power. We use WindowsNT and MSSQL on our production\n> server, before we accept MSSQL, we use this method to test if MSSQL can endure this kind of strik,\n> it's OK, all databases are safely recovered, we can continue our work.\n\nThe only way to safely recover them after a major crash would be\nmanual/supervised recovery from backups + logs\n\nAs not even NTFS is safe from power failures (I have lost an NTFS file\nsystem a few times due to not \nhaving an UPS) it is irrelevant if MSSQL is. Even if MSSQL is \"crash\nproof\" (tm), how can you _prove_ \nyour customers/superiors that the last N minutes of transactions were\nnot lost ? \n\nIf the DB is able to \"continue your work\" after the crash, you can of\ncourse cover up the fact that the \ncrash even happened and blame the lost transactions on someone else when\nthey surface at the next audit ;)\n\nOr just claim thet computer technology is so complicated that losing a\nfew transactions is normal - but \nyou could go on working ;) :~) ;-p\n\nWhat you want for mission-critical data is replicated databases or at\nleast off-site logging, not \"crash \nrecovery\" at some arbitrarily chosen layer. You will need to recover\nfrom the crash even if it destroys \nthe whole computer.\n\nMay I suggest another test for your NT/MSSQL setup - dont pull the plug\nbut change the input voltage \nto 10 000 VAC, if this goes well, test vith 100 000 VAC ;)\nThis is also a scenario much less likely to be protected by an UPS than\npower loss.\n\n> we are a stock exchange company,\n> our server are storing millilion $ finance number, we don't hope there are any problems in this case,\n> we are using UPS, but UPS is not everything, it you bet everything on UPS, you must be idiot.\n\nSo are you, if you bet everything on hoping that DB will do crash\nrecovery from any type of crash.\n\nA common case of \"crash\" that may need to be recovered from is also a\nhuman error , like typing drop database \nat the wrong console;\n\n> I know you must be an avocation of PG, but we are professional customer, corporation user, we store critical\n> data into database, not your garbage data.\n\nThen you'd better have a crash recovery infrastructure/procedures in\nplace and not hope that DB server \nwill do that automatically for you\n\n--------------------\nHannu\n",
"msg_date": "Wed, 29 Nov 2000 12:23:30 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "I don't have the same luck, sorry to say!\n\nI am running Mandrake linux with OpenWall patched 2.2.17 kernel, dual p3\n550Mhz, 1gb memory.\nIt's a really busy webserver that constantly is running with 10 in load.\nSometime it spikes to ~40-50 in load (the most we had was 114(!)).\nI am running postgresql 7.0.2 (from the Mandrake rpm's).\n\nOne problem i have is that in one database we rapidly insert/delete in some\ntables, and to maintain a good performance on that db, i have to run a\nvacuum every hour(!).\nI think that db has excessive indexes all over the place (if that could have\nanything to do with it?).\n\nAnother other problem that is more severe is that the database \"crashes\"\n(read: stops working), if i run psql and do a select it says\n\"001129.07:04:15.688 [25474] FATAL 1: Memory exhausted in AllocSetAlloc()\"\nand fails.\nI have a cron script that watches postgres, and restarts it if it cant get a\nselect right.\nIt fails this way maybe once a day or two days.\nI've searched the mailinglist archives for this problem, but it allways\nseems that my problem doesn't fit the descriptions of the other ppl's\nproblem generating this error message.\n\nI have not found the right time to upgrade to 7.0.3 yet, and i don't know if\nthat would solve anything.\n\nAnother problem i have is that i get \"001128.12:58:01.248 [23444] FATAL 1:\nSocket command type unknown\" in my logs. I don't know if i get that from\nthe unix odbc driver, the remote windows odbc driver, or in unix standard db\nconnections.\n\nI get \"pq_recvbuf: unexpected EOF on client connection\" alot too, but that i\nthink only indicates that the socket was closed in a not-so-nice way, and\nthat it is no \"real\" error.\nIt seems that the psql windows odbc driver is generating this.\n\nThe postmaster is running with these parameters: \"-N 512 -B 1024 -i -o -S\n4096\"\n\nBut as a happy note i can tell you that we have a Linux box here (pentium\n100, kernel 2.0.3x) that has near 1000 days uptime, and runs postgres 6.5.x.\nIt has never failed, not even a single time :)\n\nMagnus Naeslund\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n----- Original Message -----\nFrom: \"Mitch Vincent\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, November 28, 2000 19:12\nSubject: Re: [HACKERS] beta testing version\n\n\n> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n> restarted PostgreSQL on it since it was installed.\n>\n> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n>\n> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n> heard so much as a peep from any PostgreSQL backend. It's superbly stable\non\n> all my machines..\n>\n> Damn good work guys.\n>\n> -Mitch\n>\n> ----- Original Message -----\n> From: \"The Hermit Hacker\" <[email protected]>\n> To: \"Hannu Krosing\" <[email protected]>\n> Cc: \"xuyifeng\" <[email protected]>; <[email protected]>;\n> \"Don Baccus\" <[email protected]>\n> Sent: Tuesday, November 28, 2000 8:53 AM\n> Subject: Re: [HACKERS] beta testing version\n>\n>\n> > On Tue, 28 Nov 2000, Hannu Krosing wrote:\n> >\n> > > xuyifeng wrote:\n> > > >\n> > >\n> > > I just noticed this conversation so I have not followed all of it,\n> > > but you seem to have strange priorities\n> > >\n> > > > I just want PG can be improved quickly, for me crash recover is very\n> urgent problem,\n> > >\n> > > Crash avoidance is usually much more urgent, at least on production\n> > > servers.\n> >\n> > Good call, but I kinda jumped to the conclusion that since PgSQL itself\n> > isn't that crash prone, its his OS or his hardware that was the problem\n:0\n> >\n> >\n> >\n>\n>\n",
"msg_date": "Wed, 29 Nov 2000 13:08:00 +0100",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "\"Magnus Naeslund\\(f\\)\" <[email protected]> writes:\n> Another other problem that is more severe is that the database \"crashes\"\n> (read: stops working), if i run psql and do a select it says\n> \"001129.07:04:15.688 [25474] FATAL 1: Memory exhausted in AllocSetAlloc()\"\n> and fails.\n\nThat's odd. Does any select at all --- even, say, \"SELECT 2+2\" --- fail\nlike that, or just ones referencing a particular table, or maybe you\nmeant just one specific query?\n\n> Another problem i have is that i get \"001128.12:58:01.248 [23444] FATAL 1:\n> Socket command type unknown\" in my logs. I don't know if i get that from\n> the unix odbc driver, the remote windows odbc driver, or in unix standard db\n> connections.\n\nDo any of your client applications complain that they're being\ndisconnected on? This might come from something not doing disconnection\ncleanly, in which case the client probably wouldn't notice anything wrong.\n\n> I get \"pq_recvbuf: unexpected EOF on client connection\" alot too, but that i\n> think only indicates that the socket was closed in a not-so-nice way, and\n> that it is no \"real\" error.\n> It seems that the psql windows odbc driver is generating this.\n\nThat message is quite harmless AFAIK, although it'd be nice to clean up\nthe ODBC driver so that it disconnects in the approved fashion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Nov 2000 21:25:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
},
{
"msg_contents": "Is \"if\" clause support in PG? \nfor example:\n\"drop table aa if exist\"\n\"insert into aa values(1) if not exists select * from aa where i=1\"\n\nI would like PG support it.\n---\nXuYifeng\n\n----- Original Message ----- \nFrom: John Huttley <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, November 28, 2000 9:04 AM\nSubject: [HACKERS] Please advise features in 7.1 (SUMMARY)\n\n\n> Thanks for your help, everyone.\n> \n> This is a summary of replies.\n> \n> 1. Calculated fields in table definitions . eg.\n> \n> Create table test (\n> A Integer,\n> B integer,\n> the_sum As (A+B),\n> );\n> \n> This functionality can be achieved through the use of views.\n> Implementing the create table syntax may not be too hard,\n> but not in 7.1...\n> \n> 2 Parameterised Triggers\n> \n> Functionality is there, just that the documentation gave the wrong implication.\n> An user manual example of using parameterised triggers to implement referential\n> integrity\n> would be welcome.\n> \n> 3. Stored Procedures returning a record set.\n> \n> Dream on!\n> \n> \n> Regards\n> \n> John\n> \n> \n> \n",
"msg_date": "Thu, 30 Nov 2000 10:50:32 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "> our server alternatives; at present only PostgreSQL is left, was the most\n> reliable of all.\n\nMind if I ask on which platform (operating system) you did your test? I'm\nmostly used to Linux, but after I pay off my computer (still 5 months\nremaining), I want to get a used SGI box from a reputable source and put NetBSD\nas well as PostgreSQL on it (and maybe AOLserver too, depending on the\nthreading model of NetBSD).\n\nAlain Toussaint\n\n",
"msg_date": "Wed, 29 Nov 2000 23:28:50 -0500 (EST)",
"msg_from": "Alain Toussaint <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "\nOn Thu, 30 Nov 2000, Thomas Lockhart wrote:\n\n> > Is \"if\" clause support in PG?\n> > for example:\n> > \"drop table aa if exist\"\n> > \"insert into aa values(1) if not exists select * from aa where i=1\"\n> \n> No. afaict it is not in any SQL standard, so is unlikely to get much\n> attention from developers.\n\nPlus, for that second one can't you just do:\n\nINSERT INTO aa SELECT 1 WHERE NOT EXISTS (SELECT * FROM aa WHERE i=1);\n\n\n- Andrew\n\n\n",
"msg_date": "Thu, 30 Nov 2000 16:15:14 +1100 (EST)",
"msg_from": "Andrew Snow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
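Andrew's standard-SQL rewrite of the proposed "insert ... if not exists" can be tried outside PostgreSQL as well. The sketch below is illustrative only, using Python's built-in sqlite3 module as a stand-in engine; the table `aa` and column `i` come from the example in the thread, everything else is assumed. (The conditional `drop table` half of the request has no such portable rewrite.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE aa (i INTEGER)")

# Standard SQL has no "INSERT ... IF NOT EXISTS", but the SELECT below
# produces a row only when no matching row exists yet, so running the
# statement twice still inserts only once.
insert_if_absent = (
    "INSERT INTO aa "
    "SELECT 1 WHERE NOT EXISTS (SELECT * FROM aa WHERE i = 1)"
)
cur.execute(insert_if_absent)   # inserts the row
cur.execute(insert_if_absent)   # no-op: the row is already there
conn.commit()

cur.execute("SELECT count(*) FROM aa")
print(cur.fetchone()[0])        # -> 1
```

The same `INSERT ... SELECT ... WHERE NOT EXISTS` statement should run unchanged on PostgreSQL.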
{
"msg_contents": "> Is \"if\" clause support in PG?\n> for example:\n> \"drop table aa if exist\"\n> \"insert into aa values(1) if not exists select * from aa where i=1\"\n\nNo. afaict it is not in any SQL standard, so is unlikely to get much\nattention from developers.\n\n - Thomas\n",
"msg_date": "Thu, 30 Nov 2000 05:24:14 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "At 05:24 AM 11/30/00 +0000, Thomas Lockhart wrote:\n>> Is \"if\" clause support in PG?\n>> for example:\n>> \"drop table aa if exist\"\n>> \"insert into aa values(1) if not exists select * from aa where i=1\"\n>\n>No. afaict it is not in any SQL standard, so is unlikely to get much\n>attention from developers.\n\nThe insert, at least, can be written in standard SQL anyway...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 30 Nov 2000 06:55:21 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please advise features in 7.1 (SUMMARY)"
},
{
"msg_contents": "\nv7.1 should improve crash recovery for situations like this ... you'll\nstill have to do a recovery of the data on corruption of this magnitude,\nbut at least with the WAL stuff that Vadim is producing, you'll be able to\nrecover up until the point that the power cable was pulled out of the wall\n...\n\n\nOn Wed, 29 Nov 2000, xuyifeng wrote:\n\n> NO, I just tested how solid PgSQL is, I run a program busy inserting record into PG table, when I \n> suddenly pulled out power from my machine and restarted PG, I can not insert any record into database\n> table, all backends are dead without any respone (not core dump), note that I am using FreeBSD 4.2, \n> it's rock solid, it's not OS crash, it just losted power. We use WindowsNT and MSSQL on our production\n> server, before we accept MSSQL, we use this method to test if MSSQL can endure this kind of strik,\n> it's OK, all databases are safely recovered, we can continue our work. we are a stock exchange company,\n> our server are storing millilion $ finance number, we don't hope there are any problems in this case, \n> we are using UPS, but UPS is not everything, it you bet everything on UPS, you must be idiot.
\n> I know you must be an avocation of PG, but we are professional customer, corporation user, we store critical\n> data into database, not your garbage data.\n> \n> Regards,\n> XuYifeng\n> \n> ----- Original Message ----- \n> From: Don Baccus <[email protected]>\n> To: Ron Chmara <[email protected]>; Mitch Vincent <[email protected]>; <[email protected]>\n> Sent: Wednesday, November 29, 2000 6:58 AM\n> Subject: Re: [HACKERS] beta testing version\n> \n> \n> > At 03:25 PM 11/28/00 -0700, Ron Chmara wrote:\n> > >Mitch Vincent wrote:\n> > >> \n> > >> This is one of the not-so-stomped boxes running PostgreSQL -- I've never\n> > >> restarted PostgreSQL on it since it was installed.\n> > >> 12:03pm up 122 days, 7:54, 1 user, load average: 0.08, 0.11, 0.09\n> > >> I had some index corruption problems in 6.5.3 but since 7.0.X I haven't\n> > >> heard so much as a peep from any PostgreSQL backend. It's superbly stable on\n> > >> all my machines..\n> > >\n> > >I have a 6.5.x box at 328 days of active use.\n> > >\n> > >Crash \"recovery\" seems silly to me. :-)\n> > \n> > Well, not really ... but since our troll is a devoted MySQL user, it's a bit\n> > of a red-herring anyway, at least as regards his own server.\n> > \n> > You know, the one he's afraid to put Postgres on, but sleeps soundly at\n> > night knowing the mighty bullet-proof MySQL with its full transaction\n> > semantics, archive logging and recovery from REDO logs and all that\n> > will save him? :)\n> > \n> > Again ... he's a troll, not even a very entertaining one.\n> > \n> > \n> > \n> > \n> > - Don Baccus, Portland OR <[email protected]>\n> > Nature photos, on-line guides, Pacific Northwest\n> > Rare Bird Alert Service and other goodies at\n> > http://donb.photo.net.\n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 30 Nov 2000 19:02:01 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:\n>\n>v7.1 should improve crash recovery for situations like this ... you'll\n>still have to do a recovery of the data on corruption of this magnitude,\n>but at least with the WAL stuff that Vadim is producing, you'll be able to\n>recover up until the point that the power cable was pulled out of the wall\n\nNo, WAL won't help if an actual database file is corrupted, say by a\ndisk drive hosing a block or portion thereof with zeros. WAL-based\nrecovery at startup works on an intact database.\n\nStill, in the general case you need real backup and recovery tools.\nThen you can apply archives of REDOs to a backup made of a snapshot\nand rebuild up to the last transaction. As opposed to your last\npg_dump.\n\nSo what about mirroring (RAID 1)? As the docs tell ya, that protects\nyou against one drive failing but not against power failure, which can\ncause bad data to be written to both mirrors if both are actively \nwriting when the plug is pulled.\n\nPower failures are evil, face it! :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 30 Nov 2000 15:35:54 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:\n> \n> v7.1 should improve crash recovery ...\n> ... with the WAL stuff that Vadim is producing, you'll be able to\n> recover up until the point that the power cable was pulled out of \n> the wall.\n\nPlease do not propagate falsehoods like the above. It creates\nunsatisfiable expectations, and leads people to fail to take\nproper precautions and recovery procedures. \n\nAfter a power outage on an active database, you may have corruption\nat low levels of the system, and unless you have enormous redundancy\n(and actually use it to verify everything) the corruption may go \nundetected and result in (subtly) wrong answers at any future time.\n\nThe logging in 7.1 protects transactions against many sources of \ndatabase crash, but not necessarily against OS crash, and certainly\nnot against power failure. (You might get lucky, or you might just \nthink you were lucky.) This is the same as for most databases; an\nembedded database that talks directly to the hardware might be able\nto do better. \n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 30 Nov 2000 15:35:59 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, 30 Nov 2000, Don Baccus wrote:\n\n> At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:\n> >\n> >v7.1 should improve crash recovery for situations like this ... you'll\n> >still have to do a recovery of the data on corruption of this magnitude,\n> >but at least with the WAL stuff that Vadim is producing, you'll be able to\n> >recover up until the point that the power cable was pulled out of the wall\n> \n> No, WAL won't help if an actual database file is corrupted, say by a\n> disk drive hosing a block or portion thereof with zeros. WAL-based\n> recovery at startup works on an intact database.\n\nNo, WAL does help, cause you can then pull in your last dump and recover\nup to the moment that power cable was pulled out of the wall ...\n\n\n",
"msg_date": "Thu, 30 Nov 2000 19:47:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, Nov 30, 2000 at 07:47:08PM -0400, The Hermit Hacker wrote:\n> On Thu, 30 Nov 2000, Don Baccus wrote:\n> > At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:\n> > >\n> > >v7.1 should improve crash recovery for situations like this ... you'll\n> > >still have to do a recovery of the data on corruption of this magnitude,\n> > >but at least with the WAL stuff that Vadim is producing, you'll be able to\n> > >recover up until the point that the power cable was pulled out of the wall\n> > \n> > No, WAL won't help if an actual database file is corrupted, say by a\n> > disk drive hosing a block or portion thereof with zeros. WAL-based\n> > recovery at startup works on an intact database.\n> \n> No, WAL does help, cause you can then pull in your last dump and recover\n> up to the moment that power cable was pulled out of the wall ...\n\nFalse, on so many counts I can't list them all.\n\nNathan Myers\nncm\n\n",
"msg_date": "Thu, 30 Nov 2000 16:10:24 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n> On Thu, Nov 30, 2000 at 07:47:08PM -0400, The Hermit Hacker wrote:\n> > On Thu, 30 Nov 2000, Don Baccus wrote:\n> > > At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:\n> > > >\n> > > >v7.1 should improve crash recovery for situations like this ... you'll\n> > > >still have to do a recovery of the data on corruption of this magnitude,\n> > > >but at least with the WAL stuff that Vadim is producing, you'll be able to\n> > > >recover up until the point that the power cable was pulled out of the wall\n> > > \n> > > No, WAL won't help if an actual database file is corrupted, say by a\n> > > disk drive hosing a block or portion thereof with zeros. WAL-based\n> > > recovery at startup works on an intact database.\n> > \n> > No, WAL does help, cause you can then pull in your last dump and recover\n> > up to the moment that power cable was pulled out of the wall ...\n> \n> False, on so many counts I can't list them all.\n\nwould love to hear them ... I'm always open to having my\nmisunderstandings corrected ...\n\n",
"msg_date": "Thu, 30 Nov 2000 20:34:41 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n> On Thu, Nov 30, 2000 at 07:47:08PM -0400, The Hermit Hacker wrote:\n> > On Thu, 30 Nov 2000, Don Baccus wrote:\n> > > At 07:02 PM 11/30/00 -0400, The Hermit Hacker wrote:\n> > > >\n> > > >v7.1 should improve crash recovery for situations like this ... you'll\n> > > >still have to do a recovery of the data on corruption of this magnitude,\n> > > >but at least with the WAL stuff that Vadim is producing, you'll be able to\n> > > >recover up until the point that the power cable was pulled out of the wall\n> > >\n> > > No, WAL won't help if an actual database file is corrupted, say by a\n> > > disk drive hosing a block or portion thereof with zeros. WAL-based\n> > > recovery at startup works on an intact database.\n> >\n> > No, WAL does help, cause you can then pull in your last dump and recover\n> > up to the moment that power cable was pulled out of the wall ...\n>\n> False, on so many counts I can't list them all.\n\n*YAWN*\n\n\n\n",
"msg_date": "Thu, 30 Nov 2000 19:44:53 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, Nov 30, 2000 at 05:37:58PM -0800, Mitch Vincent wrote:\n> > > No, WAL does help, cause you can then pull in your last dump and recover\n> > > up to the moment that power cable was pulled out of the wall ...\n> >\n> > False, on so many counts I can't list them all.\n> \n> Why? If we're not talking hardware damage and you have a dump made\n> sometime previous to the crash, why wouldn't that work to restore the\n> database? I've had to restore a corrupted database from a dump before,\n> there wasn't any hardware damage, the database (more specifically the\n> indexes) were corrupted. Of course WAL wasn't around but I don't see\n> why this wouldn't work...\n\nI posted a more detailed explanation a few minutes ago, but\nit appears to have been eaten by the mailing list server.\n\nI won't re-post the explanations that you all have seen over the \nlast two days, about disk behavior during a power outage; they're \nin the archives (I assume -- when last I checked, web access to it \ndidn't work). Suffice to say that if you pull the plug, there is \njust too much about the state of the disks that is unknown.\n\nAs for replaying logs against a restored snapshot dump... AIUI, a \ndump records tuples by OID, but the WAL refers to TIDs. Therefore, \nthe WAL won't work as a re-do log to recover your transactions \nbecause the TIDs of the restored tables are all different. \n\nTo get replaying we need an \"update log\", something that might be\nin 7.2 if somebody does a lot of work.\n\n> Note I'm not saying you're wrong, just asking that you explain your\n> comment a little more. If WAL can't be used to help recover from\n> crashes where database corruption occurs, what good is it?\n\nThe WAL is a performance optimization for the current recovery\ncapabilities, which assume uncorrupted table files. It protects\nagainst those database server crashes that happen not to corrupt \nthe table files (i.e. most).
It doesn't protect against corruption \nof the tables, by bugs in PG or in the OS or from \"hardware events\". \nIt also doesn't protect against OS crashes that result in \nwrite-buffered sectors not having been written before the crash. \nPractically, this means that WAL file entries older than a few \nseconds are not useful for much.\n\nIn general, it's foolish to expect a single system to store very\nvaluable data with much confidence. To get full recoverability, \nyou need a \"hot failover\" system duplicating your transactions in \nreal time. (Even then, you're vulnerable to application-level \nmistakes.)\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Thu, 30 Nov 2000 17:15:29 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
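Nathan's distinction between a physical redo log and a logical dump can be illustrated with a toy model. None of this is PostgreSQL code; a Python list index merely plays the role of a TID (a physical row address), and all names are made up:

```python
# Toy model: a "table" is a list of row slots; the redo log records
# (slot, new_value) pairs -- physical addressing, as with TIDs.
table = ["alice", "bob"]
redo_log = []

def update(slot, value):
    redo_log.append((slot, value))   # write-ahead: log first, then apply
    table[slot] = value

update(0, "alice-v2")                # physically: "slot 0 becomes alice-v2"

# Replaying against a byte-identical base image works:
base = ["alice", "bob"]
for slot, value in redo_log:
    base[slot] = value
assert base == ["alice-v2", "bob"]

# A logical dump/restore keeps the same rows but may lay them out in
# different slots, so physical replay hits the wrong row:
restored = ["bob", "alice"]          # same rows, different physical order
for slot, value in redo_log:
    restored[slot] = value
print(restored)                      # -> ['alice-v2', 'alice']: bob's row clobbered
```

A log addressed by logical key (the "update log" Nathan mentions) would replay correctly against either base, which is why log-based backup and recovery needs more than the physical WAL alone.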
{
"msg_contents": "> > No, WAL does help, cause you can then pull in your last dump and recover\n> > up to the moment that power cable was pulled out of the wall ...\n>\n> False, on so many counts I can't list them all.\n\nWhy? If we're not talking hardware damage and you have a dump made sometime\nprevious to the crash, why wouldn't that work to restore the database? I've\nhad to restore a corrupted database from a dump before, there wasn't any\nhardware damage, the database (more specifically the indexes) were\ncorrupted. Of course WAL wasn't around but I don't see why this wouldn't\nwork...\n\nNote I'm not saying you're wrong, just asking that you explain your comment\na little more. If WAL can't be used to help recover from crashes where\ndatabase corruption occurs, what good is it?\n\n -Mitch\n\n",
"msg_date": "Thu, 30 Nov 2000 17:37:58 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 05:15 PM 11/30/00 -0800, Nathan Myers wrote:\n\n>As for replaying logs against a restored snapshot dump... AIUI, a \n>dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n>the WAL won't work as a re-do log to recover your transactions \n>because the TIDs of the restored tables are all different. \n\nActually, the dump doesn't record tuple OIDs (unless you specifically\nask for them), it just dumps source sql. When this gets reloaded\nyou get an equivalent database, but not the same database, that you\nstarted out with.\n\nThat's why I've presumed you can't run the WAL against it.\n\nIf you and I are wrong I'd love to be surprised!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 30 Nov 2000 17:55:21 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n> On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:\n> > \n> > v7.1 should improve crash recovery ...\n> > ... with the WAL stuff that Vadim is producing, you'll be able to\n> > recover up until the point that the power cable was pulled out of \n> > the wall.\n> \n> Please do not propagate falsehoods like the above. It creates\n> unsatisfiable expectations, and leads people to fail to take\n> proper precautions and recovery procedures. \n> \n> After a power outage on an active database, you may have corruption\n> at low levels of the system, and unless you have enormous redundancy\n> (and actually use it to verify everything) the corruption may go \n> undetected and result in (subtly) wrong answers at any future time.\n> \n> The logging in 7.1 protects transactions against many sources of \n> database crash, but not necessarily against OS crash, and certainly\n> not against power failure. (You might get lucky, or you might just \n> think you were lucky.) This is the same as for most databases; an\n> embedded database that talks directly to the hardware might be able\n> to do better. \n\nWe're talking about transaction logging here ... nothing gets written to\nit until completed ... if I take a \"known to be clean\" backup from the\nnight before, restore that and then run through the transaction logs, my\ndata should be clean, unless my tape itself is corrupt. If the power goes\noff half way through a write to the log, then that transaction wouldn't be\nmarked as completed and won't roll into the restore ...\n\nif a disk goes corrupt, I'd expect that the redo log would possibly have a\nproblem with corruption .. but if I pull the plug, unless I've somehow\ndamaged the disk, I would expect my redo log to be clean *and*, unless\nVadim totally messed something up, if there is any corruption in the redo\nlog, I'd expect that restoring from it would generate some red flags ...\n\n\n\n",
"msg_date": "Fri, 1 Dec 2000 00:00:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 03:35 PM 11/30/00 -0800, Nathan Myers wrote:\n>On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:\n>> \n>> v7.1 should improve crash recovery ...\n>> ... with the WAL stuff that Vadim is producing, you'll be able to\n>> recover up until the point that the power cable was pulled out of \n>> the wall.\n>\n>Please do not propagate falsehoods like the above. It creates\n>unsatisfiable expectations, and leads people to fail to take\n>proper precautions and recovery procedures. \n\nYeah, I posted similar stuff to the PHPbuilder forum in regard to\nPG.\n\n>The logging in 7.1 protects transactions against many sources of \n>database crash, but not necessarily against OS crash, and certainly\n>not against power failure. (You might get lucky, or you might just \n>think you were lucky.) This is the same as for most databases; an\n>embedded database that talks directly to the hardware might be able\n>to do better. \n\nLet's put it this way ... Oracle, a transaction-safe DB with REDO\nlogging, has for a very long time implemented disk mirroring. Now,\nwhy would they do that if you could pull the plug on the processor\nand depend on REDO logging to save you?\n\nAnd even then you're expected to provide adequate power backup to\nenable clean shutdown.\n\nThe real safety you get is that your battery sez \"we need to shut\ndown!\" but has enough power to let you. Transactions in progress\naren't logged, but everything else can tank cleanly, and your DB is\nin a consistent state.
\n\nMirroring protects you against (some) disk drive failures (but not\nthose that are transparent to the RAID controller/driver - if your\ndrive writes crap to the primary side of the mirror and no errors\nare returned to the hardware/driver, the other side of the mirror\ncan faithfully reproduce them on the mirror!)\n\nBut since drives contain bearings and such that are much more likely\nto fail than electronics (good electronics and good designs, at least),\nmechanical failure's more likely and will be known to whatever is driving\nthe drive. And you're OK then...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 30 Nov 2000 21:39:14 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n> After a power outage on an active database, you may have corruption\n> at low levels of the system, and unless you have enormous redundancy\n> (and actually use it to verify everything) the corruption may go \n> undetected and result in (subtly) wrong answers at any future time.\nNathan, why are you so hostile against postgres? Is there an ax to grind?\n\nThe conditions under which WAL will completely recover your database:\n1) OS guarantees complete ordering of fsync()'d writes. (i.e. having two\nblocks A and B, A is fsync'd before B, it could NOT happen that B is on\ndisk but A is not).\n2) on boot recovery, OS must not corrupt anything that was fsync'd.\n\nRule 1) is met by all unixish OSes in existence. Rule 2 is met by some\nfilesystems, such as reiserfs, tux2, and softupdates. \n\n> The logging in 7.1 protects transactions against many sources of \n> database crash, but not necessarily against OS crash, and certainly\n> not against power failure. (You might get lucky, or you might just \n> think you were lucky.) This is the same as for most databases; an\n> embedded database that talks directly to the hardware might be able\n> to do better. \n\n",
"msg_date": "Fri, 1 Dec 2000 01:54:23 -0500 (EST)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
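The ordering property Alex relies on is exactly what write-ahead logging asks of the OS: the log record must reach stable storage before the transaction is acknowledged. Here is a minimal, assumption-laden sketch of that discipline (not PostgreSQL code; the file name and record format are invented), with Nathan's caveat noted where it applies:

```python
import os
import tempfile

# A commit is durable only after the log record is fsync'd; data pages
# may then be written lazily, because the log can redo them.
log_path = os.path.join(tempfile.mkdtemp(), "wal.log")

def commit(record: str) -> None:
    with open(log_path, "a") as log:
        log.write(record + "\n")
        log.flush()              # push from userspace buffers to the kernel
        os.fsync(log.fileno())   # ask the OS to push it to stable storage
    # Only now may the client be told "committed".  Per Nathan's caveat,
    # fsync() may only reach a drive's write-back cache, so on a power
    # failure even this ordering guarantee can be broken by the hardware.

commit("INSERT INTO aa VALUES (1)")
with open(log_path) as log:
    print(log.read().strip())    # -> INSERT INTO aa VALUES (1)
```

The point of the discipline is ordering, not speed: losing the tail of the log loses only unacknowledged transactions, while losing a log record for an acknowledged one breaks durability.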
{
"msg_contents": "> As for replaying logs against a restored snapshot dump... AIUI, a \n> dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n> the WAL won't work as a re-do log to recover your transactions \n> because the TIDs of the restored tables are all different. \n\nTrue for current way of backing up - ie saving data in \"external\"\n(sql) format. But there is another way - saving data files in their\nnatural (binary) format. WAL records may be applyed to\nsuch dump, right?\n\n> To get replaying we need an \"update log\", something that might be\n\nWhat did you mean by \"update log\"?\nAre you sure that WAL is not \"update log\" ? -:)\n\n> in 7.2 if somebody does a lot of work.\n> \n> > Note I'm not saying you're wrong, just asking that you explain your\n> > comment a little more. If WAL can't be used to help recover from\n> > crashes where database corruption occurs, what good is it?\n> \n> The WAL is a performance optimization for the current recovery\n> capabilities, which assume uncorrupted table files. It protects\n> against those database server crashes that happen not to corrupt \n> the table files (i.e. most). It doesn't protect against corruption \n> of the tables, by bugs in PG or in the OS or from \"hardware events\". \n> It also doesn't protect against OS crashes that result in \n> write-buffered sectors not having been written before the crash. \n> Practically, this means that WAL file entries older than a few \n> seconds are not useful for much.\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nEven now, without BAR, WAL entries become unuseful only after checkpoints\n(and I wouldn't recomend to create them each few seconds -:)). WAL based\nBAR will require archiving of log records.\n\nVadim\n\n\n",
"msg_date": "Thu, 30 Nov 2000 23:06:31 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Fri, Dec 01, 2000 at 12:00:12AM -0400, The Hermit Hacker wrote:\n> On Thu, 30 Nov 2000, Nathan Myers wrote:\n> > On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:\n> > > v7.1 should improve crash recovery ...\n> > > ... with the WAL stuff that Vadim is producing, you'll be able to\n> > > recover up until the point that the power cable was pulled out of \n> > > the wall.\n> > \n> > Please do not propagate falsehoods like the above. It creates\n> > unsatisfiable expectations, and leads people to fail to take\n> > proper precautions and recovery procedures. \n> > \n> > After a power outage on an active database, you may have corruption\n> > at low levels of the system, and unless you have enormous redundancy\n> > (and actually use it to verify everything) the corruption may go \n> > undetected and result in (subtly) wrong answers at any future time.\n> > \n> > The logging in 7.1 protects transactions against many sources of \n> > database crash, but not necessarily against OS crash, and certainly\n> > not against power failure. (You might get lucky, or you might just \n> > think you were lucky.) This is the same as for most databases; an\n> > embedded database that talks directly to the hardware might be able\n> > to do better. \n> \n> We're talking about transaction logging here ... nothing gets written\n> to it until completed ... if I take a \"known to be clean\" backup from\n> the night before, restore that and then run through the transaction\n> logs, my data should be clean, unless my tape itself is corrupt. If\n> the power goes off half way through a write to the log, then that\n> transaction wouldn't be marked as completed and won't roll into the\n> restore ...\n\nSorry, wrong. First, the only way that your backups could have any\nrelationship with the transaction logs is if they are copies of the\nraw table files with the database shut down, rather than the normal \n\"snapshot\" backup.
\n\nSecond, the transaction log is not, as has been noted far too frequently\nfor Vince's comfort, really written atomically. The OS has promised\nto write it atomically, and given the opportunity, it will. If you pull \nthe plug, all promises are broken.\n\n> if a disk goes corrupt, I'd expect that the redo log would possibly\n> have a problem with corruption .. but if I pull the plug, unless I've\n> somehow damaged the disk, I would expect my redo log to be clean\n> *and*, unless Vadim totally messed something up, if there is any\n> corruption in the redo log, I'd expect that restoring from it would\n> generate from red flags ...\n\nYou have great expectations, but nobody has done the work to satisfy\nthem, so when you pull the plug, I'd expect that you will be left \nin the dark, alone and helpless.\n\nVadim has done an excellent job on what he set out to do: optimize\ntransaction processing. Designing and implementing a factor-of-twenty \nspeed improvement on a professional-quality database engine demanded\ngreat effort and expertise. To complain that he hasn't also done \na lot of other stuff would be petty.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Thu, 30 Nov 2000 23:30:32 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Fri, Dec 01, 2000 at 01:54:23AM -0500, Alex Pilosov wrote:\n> On Thu, 30 Nov 2000, Nathan Myers wrote:\n> > After a power outage on an active database, you may have corruption\n> > at low levels of the system, and unless you have enormous redundancy\n> > (and actually use it to verify everything) the corruption may go \n> > undetected and result in (subtly) wrong answers at any future time.\n>\n> Nathan, why are you so hostile against postgres? Is there an ax to grind?\n\nAlex, please don't invent enemies. It's clear what important features\nPostgreSQL still lacks; over the next several releases these features\nwill be implemented, at great expense. PostgreSQL is useful and usable\nnow, given reasonable precautions and expectations. In the future it\nwill satisfy greater (albeit still reasonable) expectations.\n\n> The conditions under which WAL will completely recover your database:\n>\n> 1) OS guarantees complete ordering of fsync()'d writes. (i.e. having two\n> blocks A and B, A is fsync'd before B, it could NOT happen that B is on\n> disk but A is not).\n> 2) on boot recovery, OS must not corrupt anything that was fsync'd.\n> \n> Rule 1) is met by all unixish OSes in existance. Rule 2 is met by some\n> filesystems, such as reiserfs, tux2, and softupdates. \n\nNo. The OS asks the disk to write blocks in a certain order, but \ndisks normally reorder writes. Not only that; as noted earlier, \ntypical disks report the write completed long before the blocks \nactually hit the disk.\n\nA logging file system protects against the simpler forms of OS crash,\nwhere the OS data-structure corruption is noticed before any more disk\nwrites are scheduled. It can't (by itself) protect against disk\nerrors. For critical applications, you must supply that protection\nyourself, with (e.g.) 
battery-backed mirroring.\n\n> > The logging in 7.1 protects transactions against many sources of \n> > database crash, but not necessarily against OS crash, and certainly\n> > not against power failure. (You might get lucky, or you might just \n> > think you were lucky.) This is the same as for most databases; an\n> > embedded database that talks directly to the hardware might be able\n> > to do better. \n\nThe best possible database code can't overcome a broken OS or a broken \ndisk. It would be unreasonable to expect otherwise.\n\nNathan Myers\[email protected] \n",
"msg_date": "Fri, 1 Dec 2000 00:21:26 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": " Date: Fri, 1 Dec 2000 01:54:23 -0500 (EST)\n From: Alex Pilosov <[email protected]>\n\n On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n > After a power outage on an active database, you may have corruption\n > at low levels of the system, and unless you have enormous redundancy\n > (and actually use it to verify everything) the corruption may go \n > undetected and result in (subtly) wrong answers at any future time.\n Nathan, why are you so hostile against postgres? Is there an ax to grind?\n\nI don't think he is being hostile (I work with him, so I know that he\nis generally pro-postgres).\n\n The conditions under which WAL will completely recover your database:\n 1) OS guarantees complete ordering of fsync()'d writes. (i.e. having two\n blocks A and B, A is fsync'd before B, it could NOT happen that B is on\n disk but A is not).\n 2) on boot recovery, OS must not corrupt anything that was fsync'd.\n\n Rule 1) is met by all unixish OSes in existance. Rule 2 is met by some\n filesystems, such as reiserfs, tux2, and softupdates. \n\nI think you are missing his main point, which he stated before, which\nis that modern disk hardware is both smarter and stupider than most\npeople realize.\n\nSome disks cleverly accept writes into a RAM cache, and return a\ncompletion signal as soon as they have done that. They then feel free\nto reorder the writes to magnetic media as they see fit. This\nsignificantly helps performance. However, it means that all bets off\non a sudden power loss.\n\nYour rule 1 is met at the OS level, but it is not met at the physical\ndrive level. The fact that the OS guarantees ordering of fsync()'d\nwrites means little since the drive is capable of reordering writes\nbehind the back of the OS.\n\nAt least with IDE, it is possible to tell the drive to disable this\nsort of caching and reordering. However, GNU/Linux, at least, does\nnot do this. 
After all, doing it would hurt performance, and would\nmove us back to the old days when operating systems had to care a\ngreat deal about disk geometry.\n\nI expect that careful attention to the physical disks you purchase can\nhelp you avoid these problems. For example, I would hope that EMC\ndisk systems handle power loss gracefully. But if you buy ordinary\noff the shelf PC hardware, you really do need to arrange for a UPS,\nand some sort of automatic shutdown if the UPS is running low.\nOtherwise, although the odds are certainly with you, there is no 100%\nguarantee that a busy database will survive a sudden power outage.\n\nIan\n",
"msg_date": "1 Dec 2000 00:30:57 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Thu, Nov 30, 2000 at 11:06:31PM -0800, Vadim Mikheev wrote:\n> > As for replaying logs against a restored snapshot dump... AIUI, a \n> > dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n> > the WAL won't work as a re-do log to recover your transactions \n> > because the TIDs of the restored tables are all different. \n> \n> True for current way of backing up - ie saving data in \"external\"\n> (sql) format. But there is another way - saving data files in their\n> natural (binary) format. WAL records may be applyed to\n> such dump, right?\n\nBut (AIUI) you can only safely/usefully copy those files when the \ndatabase is shut down.\n\nMany people hope to run PostgreSQL 24x7x365. With vacuuming, you \nmight just as well shut down afterward; but when that goes away \n(in 7.2?), when will you get the chance to take your backups? \nClearly we need either another form of snapshot backup that can \nbe taken with the database running, and compatible with the \ncurrent WAL (or some variation on it); or, we need another kind \nof log, in addition to the WAL.\n\n> > To get replaying we need an \"update log\", something that might be\n> > in 7.2 if somebody does a lot of work.\n> \n> What did you mean by \"update log\"?\n> Are you sure that WAL is not \"update log\" ? -:)\n\nNo, I'm not sure. I think it's possible that a new backup utility \ncould be written to make a hot backup which could be restored and \nthen replayed using the current WAL format. It might be easier to\nadd another log which could be replayed against the existing form\nof backups. That last is what I called the \"update log\".\n\nThe point is, WAL now does one job superbly: maintain a consistent\non-disk database image. Asking it to do something else, such as \nsupporting hot BAR, could interfere with it doing its main job. \nOf course, only the person who implements hot BAR can say.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 1 Dec 2000 00:55:21 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 00:55 1/12/00 -0800, Nathan Myers wrote:\n>On Thu, Nov 30, 2000 at 11:06:31PM -0800, Vadim Mikheev wrote:\n>> > As for replaying logs against a restored snapshot dump... AIUI, a \n>> > dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n>> > the WAL won't work as a re-do log to recover your transactions \n>> > because the TIDs of the restored tables are all different. \n>> \n>> True for current way of backing up - ie saving data in \"external\"\n>> (sql) format. But there is another way - saving data files in their\n>> natural (binary) format. WAL records may be applyed to\n>> such dump, right?\n>\n>But (AIUI) you can only safely/usefully copy those files when the \n>database is shut down.\n>\n\nThis is not true; the way Vadim has implemented WAL is to write a series of\nfiles of fixed size. When all transactions that have records in one file\nhave completed, that file is (currently) deleted. When BAR is going, the\nfiles will be archived.\n\nThe only circumstance in which this strategy will fail is if there are a\nlarge number of intensive long-standing single transactions - which is\nunlikely (not to mention bad practice).\n\nAs a result of this, BAR will just need to take a snapshot of the database\nand apply the logs (basically like a very extended recovery process).\n\nYou have raised some interesting issues regarding write-order etc. Can we\nassume that when fsync *returns*, all records are written - though not\nnecessarily in the order that the IO's were executed?\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 01 Dec 2000 21:13:28 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 11:06 PM 11/30/00 -0800, Vadim Mikheev wrote:\n>> As for replaying logs against a restored snapshot dump... AIUI, a \n>> dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n>> the WAL won't work as a re-do log to recover your transactions \n>> because the TIDs of the restored tables are all different. \n>\n>True for current way of backing up - ie saving data in \"external\"\n>(sql) format. But there is another way - saving data files in their\n>natural (binary) format. WAL records may be applyed to\n>such dump, right?\n\nRight. That's what's missing in PG 7.1, the existence of tools to\nmake such backups. \n\nProbably the best answer to the \"what does WAL get us, if it doesn't\nget us full recoverability\" questions is to simply say \"it's a prerequisite\nto getting full recoverability, PG 7.1 sets the foundation and later\nwork will get us there\".\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 01 Dec 2000 06:39:57 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 12:30 AM 12/1/00 -0800, Ian Lance Taylor wrote:\n>For example, I would hope that EMC\n>disk systems handle power loss gracefully.\n\nThey must, their marketing literature says so :)\n\n> But if you buy ordinary\n>off the shelf PC hardware, you really do need to arrange for a UPS,\n>and some sort of automatic shutdown if the UPS is running low.\n\nWhich is what disk subsystems like those from EMC do for you. They've\ngot build-in battery backup that lets them guarantee (assuming the\nhardware's working right) that in the case of a power outage, all blocks\nthe operating system thinks have been written will in actuality be written\nbefore the disk subsystem powers itself down.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 01 Dec 2000 06:46:57 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 12:55 AM 12/1/00 -0800, Nathan Myers wrote:\n\n>Many people hope to run PostgreSQL 24x7x365. With vacuuming, you \n>might just as well shut down afterward; but when that goes away \n>(in 7.2?), when will you get the chance to take your backups? \n>Clearly we need either another form of snapshot backup that can \n>be taken with the database running, and compatible with the \n>current WAL (or some variation on it); or, we need another kind \n>of log, in addition to the WAL.\n\nVadim's not ignorant of such matters, when he says \"make a copy\nof the files\" he's not talking about using tar on a running\ndatabase. BAR tools are needed, as Vadim has pointed out here in\nthe past.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 01 Dec 2000 06:51:31 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "> > > As for replaying logs against a restored snapshot dump... AIUI, a \n> > > dump records tuples by OID, but the WAL refers to TIDs. Therefore, \n> > > the WAL won't work as a re-do log to recover your transactions \n> > > because the TIDs of the restored tables are all different. \n> > \n> > True for current way of backing up - ie saving data in \"external\"\n> > (sql) format. But there is another way - saving data files in their\n> > natural (binary) format. WAL records may be applyed to\n> > such dump, right?\n> \n> But (AIUI) you can only safely/usefully copy those files when the \n> database is shut down.\n\nNo. You can read/save datafiles at any time. But block reads must be\n\"atomic\" - no one should be able to change any part of a block while\nwe read it. Cp & tar are probably not suitable for this, but an internal\nBACKUP command could do this.\n\nRestoring from such a backup will be like recovering after pg_ctl -m i stop: all\ndata blocks are consistent and WAL records may be applied to them.\n\n> Many people hope to run PostgreSQL 24x7x365. With vacuuming, you \n> might just as well shut down afterward; but when that goes away \n> (in 7.2?), when will you get the chance to take your backups? \n\nThe ability to shut down 7.2 will be preserved -:))\nBut it's not required for backup.\n\n> > > To get replaying we need an \"update log\", something that might be\n> > > in 7.2 if somebody does a lot of work.\n> > \n> > What did you mean by \"update log\"?\n> > Are you sure that WAL is not \"update log\" ? -:)\n> \n> No, I'm not sure. I think it's possible that a new backup utility \n> could be written to make a hot backup which could be restored and \n> then replayed using the current WAL format. It might be easier to\n> add another log which could be replayed against the existing form\n> of backups. 
That last is what I called the \"update log\".\n\nConsistent read of data blocks is easier to implement, sure.\n\n> The point is, WAL now does one job superbly: maintain a consistent\n> on-disk database image. Asking it to do something else, such as \n> supporting hot BAR, could interfere with it doing its main job. \n> Of course, only the person who implements hot BAR can say.\n\nThere will be no interference because BAR will not ask WAL to do\nanything other than what it does right now - redo-ing changes.\n\nVadim\n\n\n",
"msg_date": "Fri, 1 Dec 2000 08:10:40 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Fri, Dec 01, 2000 at 06:39:57AM -0800, Don Baccus wrote:\n> \n> Probably the best answer to the \"what does WAL get us, if it doesn't\n> get us full recoverability\" questions is to simply say \"it's a \n> prerequisite to getting full recoverability, PG 7.1 sets the foundation \n> and later work will get us there\".\n\nNot to quibble, but for most of us, the answer to Don's question is:\n\"It gives a ~20x speedup over 7.0.\" That's pretty valuable to some of us.\nIf it turns out to be useful for other stuff, that's gravy.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 1 Dec 2000 11:02:38 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 11:02 AM 12/1/00 -0800, Nathan Myers wrote:\n>On Fri, Dec 01, 2000 at 06:39:57AM -0800, Don Baccus wrote:\n>> \n>> Probably the best answer to the \"what does WAL get us, if it doesn't\n>> get us full recoverability\" questions is to simply say \"it's a \n>> prerequisite to getting full recoverability, PG 7.1 sets the foundation \n>> and later work will get us there\".\n>\n>Not to quibble, but for most of us, the answer to Don's question is:\n>\"It gives a ~20x speedup over 7.0.\" That's pretty valuable to some of us.\n>If it turns out to be useful for other stuff, that's gravy.\n\nOh, but given that power failures eat disks anyway, you can just run PG 7.0\nwith -F and be just as fast as PG 7.1, eh? With no theoretical loss in\nsafety? Where's your faith in all that doom and gloom you've been \nspreading? :) :)\n\nYou're right, of course, we'll get roughly -F performance while maintaining\na much more comfortable level of risk than you get with -F.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 01 Dec 2000 11:15:36 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "On Fri, Dec 01, 2000 at 09:13:28PM +1100, Philip Warner wrote:\n> \n> You have raised some interesting issues regrading write-order etc. Can we\n> assume that when fsync *returns*, all records are written - though not\n> necessarily in the order that the IO's were executed?\n\nNot with ordinary disks. With a battery-backed disk server, yes.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Fri, 1 Dec 2000 11:23:59 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> On Fri, Dec 01, 2000 at 09:13:28PM +1100, Philip Warner wrote:\n>> You have raised some interesting issues regrading write-order etc. Can we\n>> assume that when fsync *returns*, all records are written - though not\n>> necessarily in the order that the IO's were executed?\n\n> Not with ordinary disks. With a battery-backed disk server, yes.\n\nI think the real point of this discussion is that there's no such thing\nas an ironclad guarantee. That's why people make backups.\n\nAll we can do is the best we can ;-). In that light, I think it's\nreasonable for Postgres to proceed on the assumption that fsync does\nwhat it claims to do, ie, all blocks are written when it returns.\nWe can't realistically expect to persuade a disk controller that\nreorders writes to stop doing so. We can, however, expect that we've\nminimized the probability of failures induced by anything other than\ndisk hardware failure or power failure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Dec 2000 14:47:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
},
{
"msg_contents": "On Fri, Dec 01, 2000 at 08:10:40AM -0800, Vadim Mikheev wrote:\n> \n> > ... a new backup utility \n> > could be written to make a hot backup which could be restored and \n> > then replayed using the current WAL format. It might be easier to\n> > add another log which could be replayed against the existing form\n> > of backups. That last is what I called the \"update log\".\n> \n> Consistent read of data blocks is easier to implement, sure.\n> \n> > The point is, WAL now does one job superbly: maintain a consistent\n> > on-disk database image. Asking it to do something else, such as \n> > supporting hot BAR, could interfere with it doing its main job. \n> > Of course, only the person who implements hot BAR can say.\n> \n> There will be no interference because of BAR will not ask WAL to do\n> anything else it does right now - redo-ing changes.\n\nThe interference I meant is that the current WAL file format is designed \nfor its current job. For BAR, you would be better-served by a more \ncompact format, so you need not archive your logs so frequently. \n(The size of the WAL doesn't matter much because you can rotate them \nvery quickly.) A more compact format is also better as a basis for \nreplication, to minimize network traffic. To compress the WAL would \nhurt performance -- but adding performance was the point of the WAL.\n\nA log encoded at a much higher semantic level could be much more \ncompact, but wouldn't be useful as a WAL because it describes \ndifferences from a snapshot backup, not from the current table \nfile contents.\n\nThus, I'm not saying that you can't implement both WAL and hot BAR\nusing the same log; rather, it's just not _obviously_ the best way to \ndo it. \n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 1 Dec 2000 12:27:47 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "Ok, this has piqued my interest in learning exactly what WAL is and what it\ndoes... I don't see any in-depth explanation of WAL on the postgresql.org\nsite, can someone point me to some documentation? (if any exists, that is).\n\nThanks!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Nathan Myers\" <[email protected]>\nTo: <[email protected]>\nSent: Friday, December 01, 2000 11:02 AM\nSubject: Re: [HACKERS] beta testing version\n\n\n> On Fri, Dec 01, 2000 at 06:39:57AM -0800, Don Baccus wrote:\n> >\n> > Probably the best answer to the \"what does WAL get us, if it doesn't\n> > get us full recoverability\" questions is to simply say \"it's a\n> > prerequisite to getting full recoverability, PG 7.1 sets the foundation\n> > and later work will get us there\".\n>\n> Not to quibble, but for most of us, the answer to Don's question is:\n> \"It gives a ~20x speedup over 7.0.\" That's pretty valuable to some of us.\n> If it turns out to be useful for other stuff, that's gravy.\n>\n> Nathan Myers\n> [email protected]\n>\n\n",
"msg_date": "Fri, 1 Dec 2000 12:29:34 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "WAL information"
},
{
"msg_contents": "From: \"Nathan Myers\" <[email protected]>\n> On Thu, Nov 30, 2000 at 07:02:01PM -0400, The Hermit Hacker wrote:\n> >\n[snip]\n> The logging in 7.1 protects transactions against many sources of\n> database crash, but not necessarily against OS crash, and certainly\n> not against power failure. (You might get lucky, or you might just\n> think you were lucky.) This is the same as for most databases; an\n> embedded database that talks directly to the hardware might be able\n> to do better.\n>\n\nIf PG had a type of tree-based logging filesystem that it handles itself,\nwouldn't that be almost perfectly safe? I mean that you might lose some data\nin a transaction, but the client never gets an OK anyways...\nLike a combination of raw block io and a tux2-like fs.\nDoesn't Oracle do its own block io, no?\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n\n\n",
"msg_date": "Sat, 2 Dec 2000 19:35:54 +0100",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
},
{
"msg_contents": "At 02:47 PM 12/1/00 -0500, Tom Lane wrote:\n\n>All we can do is the best we can ;-). In that light, I think it's\n>reasonable for Postgres to proceed on the assumption that fsync does\n>what it claims to do, ie, all blocks are written when it returns.\n>We can't realistically expect to persuade a disk controller that\n>reorders writes to stop doing so. We can, however, expect that we've\n>minimized the probability of failures induced by anything other than\n>disk hardware failure or power failure.\n\nRight. This is very much the guarantee that RAID (non-zero) makes, \nexcept \"other than disk hardware failure\" is replaced by \"other than\nthe failure of two drives\". RAID gives you that (very, very substantial\nboost which is why it is so popular for DB servers). It doesn't give\nyou power failure assurance for much the same reason that PG (or Oracle,\netc) can.\n\nIf transaction processing alone could give you protection against a \nsingle disk hardware failure, Oracle wouldn't've bothered implementing\nmirroring in the past before software (and even reasonable hardware)\nRAID was available.\n\nLikewise, if mirroring + transaction processing could protect against\ndisks hosing themselves in power failure situations Oracle wouldn't \nsuggest that enterprise level customers invest in external disk\nsubsystems with battery backup sufficient to guarantee everything\nthe db server believes has been written really is written.\n\nOf course, Oracle license fees are high enough that proper hardware\nsupport tends to look cheap in comparison...\n\nVadim's WAL code is excellent, and the fact that we run in essence\nwith -F performance and also less write activity to the disk both\nincreases performance, and tends to lessen the probability that the\ndisk will actually be writing a block when the power goes off. 
The\ndice aren't quite so loaded against the server with this lowered\ndisk activity...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 03 Dec 2000 22:32:47 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
},
{
"msg_contents": "On Thu, 30 Nov 2000, Nathan Myers wrote:\n\n> Second, the transaction log is not, as has been noted far too frequently\n> for Vince's comfort, really written atomically. The OS has promised\n> to write it atomically, and given the opportunity, it will. If you pull\n> the plug, all promises are broken.\n\nSay what?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 4 Dec 2000 06:37:23 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
}
]
|
[
{
"msg_contents": "\nxuyifeng (<[email protected]>) wrote:\n\n> it's obviously there is a query plan optimizer bug, if int2 type used\n> in fields, the plan generator just use sequence scan, it's stupid, i\n> am using PG7.03, this is my log file:\n> \n> ---------\n> stock# drop table a;\n> DROP\n> stock# create table a(i int2, j int);\n> CREATE\n> stock# create unique index idx_a on a(i, j);\n> CREATE\n> stock# explain select * from a where i=1 and j=0;\n> psql:test.sql:4: NOTICE: QUERY PLAN:\n> \n> Seq Scan on a (cost=0.00..25.00 rows=1 width=6)\n> \n> EXPLAIN\n> stock# drop table a;\n> create table a(i int, j int);\n> CREATE\n> stock# create unique index idx_a on a(i, j);\n> CREATE\n> stock# explain select * from a where i=1 and j=0;\n> psql:test.sql:8: NOTICE: QUERY PLAN:\n> \n> Index Scan using idx_a on a (cost=0.00..2.02 rows=1 width=8)\n> \n> EXPLAIN\n> -----------\n\n\nThis actually appears to be a bug in the auto-casting mechanism (or\nthe parser, or something):\n\nkevin=# explain select * from a where i = 1 and j = 0;\nNOTICE: QUERY PLAN:\n\nSeq Scan on a (cost=0.00..25.00 rows=1 width=6)\n\nEXPLAIN\nkevin=# explain select * from a where i = '1' and j = '0';\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_a on a (cost=0.00..2.02 rows=1 width=6)\n\nEXPLAIN\n\n\n\nThis behavior appears to happen for int8 as well.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n It's really hard to define what \"anomalous behavior\" means when you're\n talking about Windows.\n",
"msg_date": "Wed, 22 Nov 2000 21:29:24 -0800",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan optimizer bug"
}
]
|
[
{
"msg_contents": "\n> Reason: I want to know if any of these features are scheduled.\n> \n> 1. Calculated fields in table definitions . eg.\n> \n> Create table test (\n> A Integer,\n> B integer,\n> the_sum As (A+B),\n> );\n\nThis is currently easily done with a procedure that takes a tabletype parameter\nwith the name the_sum returning the sum of a + b.\n\n Create table test (\n A Integer,\n B integer\n );\n\ncreate function the_sum (test) returns integer as\n'\n\tbegin;\n\t\treturn ($1.a + $1.b);\n\tend;\n' language 'plpgsql';\n\nA select * won't return the_sum, but a \n\tselect t.a, t.b, t.the_sum from test t; \nwill do what you want.\n\nUnfortunately it only works if you qualify the column the_sum with a tablename or alias.\n(But I heard you mention the Micro$oft word, and they tend to always use aliases anyway)\nMaybe we could even extend the column search in the unqualified case ?\n\nAndreas\n",
"msg_date": "Thu, 23 Nov 2000 12:28:35 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Please advise features in 7.1"
}
]
|
[
{
"msg_contents": "\nWe lack a syntax that would enable us to write an on update/delete do instead rule\nthat would efficiently map an update/delete to a table that is referenced by a view.\n\nCurrently the only rule you can implement is one that uses a primary key.\nThis has the disadvantage of needing a self join to find the appropriate rows.\n\nAndreas\n",
"msg_date": "Thu, 23 Nov 2000 13:22:26 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "deficiency on delete and update instead rules for views"
}
]
|
[
{
"msg_contents": "At 13:22 23/11/00 +0100, Zeugswetter Andreas SB wrote:\n>\n>We lack a syntax that would enable us to write an on update/delete do\ninstead rule\n>that would efficiently map an update/delete to a table that is referenced\nby a view.\n>\n>Currently the only rule you can implement is one that uses a primary key.\n>This has the disadvantage of needing a self join to find the appropriate\nrows.\n>\n\nOne of the concepts used in other DBs is to have views with row\nOIDs/DBKeys: ie. views that have one primary table (but maybe have column\nselects, calculations and/or function calls) can still have a real row\nunderlying each row. This then allows insert, update & delete to work more\neasily. Doesn't really help now, but it might be useful in a future release.\n\n \n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 23 Nov 2000 23:38:27 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: deficiency on delete and update instead rules for views"
}
]
|
[
{
"msg_contents": "\n> >We lack a syntax that would enable us to write an on update/delete do instead rule\n> >that would efficiently map an update/delete to a table that is referenced by a view.\n> >\n> >Currently the only rule you can implement is one that uses a primary key.\n> >This has the disadvantage of needing a self join to find the appropriate rows.\n> >\n> \n> One of the concepts used in other DBs is to have views with row\n> OIDs/DBKeys: ie. views that have one primary table (but maybe have column\n> selects, calculations and/or function calls) can still have a real row\n> underlying each row. This then allows insert, update & delete to work more\n> easily. Doesn't really help now, but it might be useful in a \n> future release.\n\nImho the functionality inside the backend is probably there since old Postgres 4\ncould do such rules. That is why I said that syntax is missing.\n\nBtw, the insert is not a problem, the on insert do instead rules are straightforward\nto write, at least in the cases where other db's allow an insert on a view. \n(e.g. on insert to test1 do instead insert into test (a,b) values (new.a, new.b); \nwhere test1 has a few extra calculated columns)\n\nAndreas\n",
"msg_date": "Thu, 23 Nov 2000 14:03:47 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: deficiency on delete and update instead rules for v\n\tiews"
}
]
|
[
{
"msg_contents": "At 12:28 PM 11/23/00 +0100, Zeugswetter Andreas SB wrote:\n>\n>> Reason: I want to know if any of these features are scheduled.\n>> \n>> 1. Calculated fields in table definitions . eg.\n>> \n>> Create table test (\n>> A Integer,\n>> B integer,\n>> the_sum As (A+B),\n>> );\n>\n>This is currently easily done with a procedure that takes a tabletype\nparameter\n>with the name the_sum returning the sum of a + b.\n>\n> Create table test (\n> A Integer,\n> B integer\n> );\n>\n>create function the_sum (test) returns integer as\n>'\n>\tbegin;\n>\t\treturn ($1.a + $1.b);\n>\tend;\n>' language 'plpgsql';\n>\n>A select * won't return the_sum\n\ncreate view test2 as select A, B, A+B as the_sum from test;\n\nwill, though.\n\nSee, lots of ways to do it!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 23 Nov 2000 06:11:02 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Please advise features in 7.1"
}
]
|
[
{
"msg_contents": "\nI've been reading with interest the comments about the transaction log\nmanagement.\n\nFirst, I'm pretty new to PostgreSQL so please forgive any blatant\nerrors or misunderstanding on my part.\n\nWe want access to the log to be serialized and similarly we don't want\nfsync()s to happen in parallel nor do we want them to occur more than\nnecessary. Hence the discussion that has been happening.\n\nIt seems to me that, if any locking is to occur, it should be done\nusing a semaphore mechanism of some kind (fcntl() locking will do)\nthat is managed by the kernel. The reason is that as a DBA, I want to\nbe able to kill off backend processes (with SIGKILL if necessary)\nwithout hanging the rest of the PostgreSQL system. Any setup where\none backend process must actively signal the rest in order to wake\nthem up is one that is vulnerable to this scenario. Much better to\nhave them agree to attempt to acquire a lock on a file or a semaphore,\nin other words something managed by the system, so that when a process\nholding the lock dies the others can continue about their business.\n\nI realize that there are pitfalls with this approach: killing one of\nthe backend processes can leave the database in an inconsistent\nstate. But that seems a bit better than the alternative, which is\nthat I'd have to kill ALL the backend processes, and have the database\nend up in the same state anyway.\n\nThoughts?\n\n\nGuess it's time for me to subscriber to pgsql-hackers... :-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n It's really hard to define what \"anomalous behavior\" means when you're\n talking about Windows.\n",
"msg_date": "Thu, 23 Nov 2000 15:55:32 -0800",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n xlog.c)"
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> The reason is that as a DBA, I want to\n> be able to kill off backend processes (with SIGKILL if necessary)\n> without hanging the rest of the PostgreSQL system.\n\nThat has never been safe (or even possible, given how the postmaster\nwill respond).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2000 20:32:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c\n\txlog.c)"
}
]
|
[
{
"msg_contents": "Hi,\n\nWhat is the current way of getting the last built-in oid?\n\nI looked at the source of pg_dump, and it does this:\n\nSELECT datlastsysoid from pg_database where datname = 'dbname'\n\nBut as far as I can tell, the datlastsysoid field does not exist in\npg_database.\n\nWhat gives?\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Fri, 24 Nov 2000 11:27:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "last built-in oid"
},
{
"msg_contents": "Ah. What about in 7.0.3 and below?\n\nBasically this because I am attempting to select all built-in functions. I\nwould like it to be backwards compatible, if at all possible.\n\nChris\n\n> -----Original Message-----\n> From: Philip Warner [mailto:[email protected]]\n> Sent: Friday, November 24, 2000 11:47 AM\n> To: Christopher Kings-Lynne; Pgsql-Hackers\n> Subject: Re: [HACKERS] last built-in oid\n>\n>\n> At 11:27 24/11/00 +0800, Christopher Kings-Lynne wrote:\n> >\n> >SELECT datlastsysoid from pg_database where datname = 'dbname'\n> >\n> >But as far as I can tell, the datlastsysoid field does not exist in\n> >pg_database.\n> >\n>\n> If you build from CVS and do an initdb, you will find datlastsysoid should\n> exist...\n>\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n>\n\n",
"msg_date": "Fri, 24 Nov 2000 11:41:34 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: last built-in oid"
},
{
"msg_contents": "At 11:27 24/11/00 +0800, Christopher Kings-Lynne wrote:\n>\n>SELECT datlastsysoid from pg_database where datname = 'dbname'\n>\n>But as far as I can tell, the datlastsysoid field does not exist in\n>pg_database.\n>\n\nIf you build from CVS and do an initdb, you will find datlastsysoid should\nexist...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 24 Nov 2000 14:47:04 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: last built-in oid"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Ah. What about in 7.0.3 and below?\n\nThere is no good way --- if there were, we'd not have bothered to invent\ndatlastsysoid. pg_dump used to use the OID of the template1 database\nas an estimate of the last built-in OID. This was wrong to begin with,\nand is completely untenable in 7.1 (template1's OID is now 1).\n\n> Basically this because I am attempting to select all built-in\n> functions.\n\nIf you only care about functions then it's probably possible to\nhard-wire an assumption that system functions have OIDs < 16384.\nRight now all built-in functions have manually-assigned OIDs,\nso that works. But I wouldn't want to promise that it'll work\nforever. It already doesn't work for aggregates, for example\n(were you including aggregates in \"functions\"?).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2000 23:54:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: last built-in oid "
}
]
|
[
{
"msg_contents": "Howdy,\n\n> It turns out that the number of max_persistent \n> is linked to the httpd processes in some \n> difficult-to-describe way.\n\nIt's not that hard to describe. The max_persistent/max_links values are\nper Apache process.\n\nThus if you have:\n\npgsql.max_persistent = 2\n\nand \n\nMaxClients 300\n\nYou could potentially reach a state where you are maintaining 600\npersistant connections to the database (if your PHP scripts actually\npg_pconnect() with two different connect strings).\n\nI think that if you are using persistant connections you may as well set\nMaxClients to the same number of database backends you are allowing (or\npossibly a bit less if you need other connections to the database).\nThere's no real point in allowing more Maxclients as they'll just start\nhitting connect errors anyway.\n\nObviously this isn't the most efficient use of backends because a fair\namount of the time the Apache processes won't be using them at all\n(they'll be sitting there doing nothing, or serving images or other\nstatic content).\n\nIf your application is big enough you may benefit from serving static\ncontent (images etc) from a different server, so the Apache processes\nwith persistant connection to backends are being used more heavily for\ndatabase work.\n\nIe if you normally have 100 Apache processes running on your webserver\nand you are using one persistant connection per process you will need\n100 backends. However if at any one time those 60%\nof those processes are serving images then you could have an Apache\nserver on one machine serving those images and only need 40 Apache\nprocesses and therefore 40 backends on the Apache server that serves the\nPHP script. You'll be tuning each machine for a more specific task,\nrather than having one machine doing all sorts of different stuff.\n\n> I do not know what will happen with PHP when there are more than one \n> different (i.e. 
different username, database) persistent connections.\n> I suppose they would be affected by the max_persistent. (?).\n\nIf you want persistant connections to two different database/username\npairs then you need to have max_persistant=2, one for each different\nconnection string.\n\nIf you have one database that is used a lot and one that isn't, you may\nwish to set max_persistant to 1 and max_clients to 2. Use pg_pconnect()\nfor the one accessed a lot and pg_connect() for the other. Set Apache\nMaxClients to X and the max number of PG backends to X + Y, where Y\nallows for the load required by the short lived pg_connect()s.\n\nAs you've probably noticed, balancing all this is a rather manual\nprocess.\nPerhaps Apache 2.0 will make way for some connection pooling.\n\nI hope that wasn't too confusing.\n\nOh, and if you are using pg_close() I don't think it works\nin any currently released PHP4 versions. See:\nhttp://bugs.php.net/bugs.php?id=7007\n>From the changelog:\nhttp://cvs.php.net/viewcvs.cgi/~checkout~/php4/ChangeLog?rev=1.541&conte\nnt-type=text/plain\nit seems a fix went in to CVS on 2000-11-03.\n\n\n--\nPaul McGarry mailto:[email protected] \nSystems Integrator http://www.opentec.com.au \nOpentec Pty Ltd http://www.iebusiness.com.au\n6 Lyon Park Road Phone: (02) 9878 1744 \nNorth Ryde NSW 2113 Fax: (02) 9878 1755755\n",
"msg_date": "Fri, 24 Nov 2000 15:17:59 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: re : PHP and persistent connections"
},
{
"msg_contents": "On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth: \n> Howdy,\n> \n> > It turns out that the number of max_persistent \n> > is linked to the httpd processes in some \n> > difficult-to-describe way.\n> \n> It's not that hard to describe. The max_persistent/max_links values are\n> per Apache process.\n\nIt was difficult to describe because I was not recieving consistent\nresults in experiments due to a number of factors. It makes sense now.\n\n> \n> Thus if you have:\n> \n> pgsql.max_persistent = 2\n> \n> and \n> \n> MaxClients 300\n> \n> You could potentially reach a state where you are maintaining 600\n> persistant connections to the database (if your PHP scripts actually\n> pg_pconnect() with two different connect strings).\n> \n> I think that if you are using persistant connections you may as well set\n> MaxClients to the same number of database backends you are allowing (or\n> possibly a bit less if you need other connections to the database).\n> There's no real point in allowing more Maxclients as they'll just start\n> hitting connect errors anyway.\n\nWell, see, the thing is, we do webhosting for a number of different\ndomains from the same server so the number of MaxClients needs to be\nhigh. I think that 300 is obscene, as the server is not powerful enough\nto handle 300 apache processes without dumping a large number of them\ninto swap space, not to mention the processing, but no matter what, we\nwould have several extra postgres backends just hanging around wasting\nram. \nOnly a few unique persistent connections would be in use at any given \ntime as only a few domains use the database.\n\nThis has made me realize just how completely braindead our server setup\nis. ;-) It seems that we would to bring up a seperate database\nserver, very soon. 
\n\n> \n> Obviously this isn't the most efficient use of backends because a fair\n> amount of the time the Apache processes won't be using them at all\n> (they'll be sitting there doing nothing, or serving images or other\n> static content).\n\nJust what I was thinking. Connection pooling would avoid that, correct?\n\n> \n> If your application is big enough you may benefit from serving static\n> content (images etc) from a different server, so the Apache processes\n> with persistant connection to backends are being used more heavily for\n> database work.\n\nTrue, but in this case probably moving the database to a different server\nwould make more sense because most of the backends would be serving\ncontent that is completely unrelated to the database.\n\n> \n> Ie if you normally have 100 Apache processes running on your webserver\n> and you are using one persistant connection per process you will need\n> 100 backends. However if at any one time those 60%\n> of those processes are serving images then you could have an Apache\n> server on one machine serving those images and only need 40 Apache\n> processes and therefore 40 backends on the Apache server that serves the\n> PHP script. You'll be tuning each machine for a more specific task,\n> rather than having one machine doing all sorts of different stuff.\n> \n> > I do not know what will happen with PHP when there are more than one \n> > different (i.e. different username, database) persistent connections.\n> > I suppose they would be affected by the max_persistent. (?).\n> \n> If you want persistant connections to two different database/username\n> pairs then you need to have max_persistant=2, one for each different\n> connection string.\n> \n> If you have one database that is used a lot and one that isn't, you may\n> wish to set max_persistant to 1 and max_clients to 2. Use pg_pconnect()\n> for the one accessed a lot and pg_connect() for the other. 
Set Apache\n> MaxClients to X and the max number of PG backends to X + Y, where Y\n> allows for the load required by the short lived pg_connect()s.\n> \n> As you've probably noticed, balancing all this is a rather manual\n> process.\n> Perhaps Apache 2.0 will make way for some connection pooling.\n> \n> I hope that wasn't too confusing.\n\nYour explanation makes perfect sense. A Zen sort of understanding has \ncome to me through experimenting with different settings. \n\nHow would persistent connections fit into a dual-server setup where one\nserver is handling all of the webserving and the other simply handles the\ndatabase data-serving? \nThe number of backends on the database server would be independent\nof the number of Apache processes on the webserver inasmuch as there\ncould be 75 Apache processes but only 25 are connected to backends on the\ndatabase server, correct?\nThere would not necessarily be any Apache stuff on the database server?\n\n> \n> Oh, and if you are using pg_close() I don't think it works\n> in any currently released PHP4 versions. See:\n\nThis seems to be true. I ran into some fun link errors while\nconnecting and disconnecting more than once in a script.\n\nThanks again, and again.\n\ngh\n\n> http://bugs.php.net/bugs.php?id=7007\n> >From the changelog:\n> http://cvs.php.net/viewcvs.cgi/~checkout~/php4/ChangeLog?rev=1.541&conte\n> nt-type=text/plain\n> it seems a fix went in to CVS on 2000-11-03.\n> \n> \n> --\n> Paul McGarry mailto:[email protected] \n> Systems Integrator http://www.opentec.com.au \n> Opentec Pty Ltd http://www.iebusiness.com.au\n> 6 Lyon Park Road Phone: (02) 9878 1744 \n> North Ryde NSW 2113 Fax: (02) 9878 1755755\n",
"msg_date": "Thu, 23 Nov 2000 22:47:40 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "\nAt 12:47 PM 11/24/00, GH wrote:\n>On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > Oh, and if you are using pg_close() I don't think it works\n> > in any currently released PHP4 versions. See:\n>\n>This seems to be true. I ran into some fun link errors while\n>connecting and disconnecting more than once in a script.\n\n This sounds disturbing!\n\n How then should I go about closing persistent connections? Can I close \nthem at all?\n\n Would pg_close() work if I used it on non-persistent connections?\n\n Thanks in advance,\n\nMikah\n\n",
"msg_date": "Fri, 24 Nov 2000 14:48:18 +0800",
"msg_from": "jmcazurin <[email protected]>",
"msg_from_op": false,
"msg_subject": "re: PHP and persistent connections"
},
{
"msg_contents": "GH wrote:\n> On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > Howdy,\n> > > It turns out that the number of max_persistent\n> > > is linked to the httpd processes in some\n> > > difficult-to-describe way.\n> > It's not that hard to describe. The max_persistent/max_links values are\n> > per Apache process.\n> It was difficult to describe because I was not recieving consistent\n> results in experiments due to a number of factors. It makes sense now.\n\nI've copied this email exchange over to my PHP folder.. I see what I can do\ndo to improve the online documentation. :-)\n\n> Well, see, the thing is, we do webhosting for a number of different\n> domains from the same server so the number of MaxClients needs to be\n> high. I think that 300 is obscene, as the server is not powerful enough\n> to handle 300 apache processes without dumping a large number of them\n> into swap space, not to mention the processing, but no matter what, we\n> would have several extra postgres backends just hanging around wasting\n> ram.\n> Only a few unique persistent connections would be in use at any given\n> time as only a few domains use the database.\n\nGive them their own apache? You can set up two apache instances on one box,\nset up one with lots of backends, set up the other to match the applicable\ndb usage...\nYou could make a postgres+apache box for these few clients...\n\n> This has made me realize just how completely braindead our server setup\n> is. ;-) It seems that we would to bring up a seperate database\n> server, very soon.\n\nDepends on the load. I'm serving 429 domains off of PHP/PostgreSQL,\nusing non-persistant connections (even though almost every page has\na select or two), and it's working just fine. My biggest selects only\nreturn a few hundred rows, my small inserts/updates are done in PHP,\nthe big ones (4,000+ rows) are just parsed into files that a Perl/cron job\ntakes care of them. 
It also depends, obviously, on how you write your\ncode for all of this, how good the hardware is, etc.\n(PII/500, 512Mb of RAM, RH 6.2 for the above)\n\n> > Obviously this isn't the most efficient use of backends because a fair\n> > amount of the time the Apache processes won't be using them at all\n> > (they'll be sitting there doing nothing, or serving images or other\n> > static content).\n> Just what I was thinking. Connection pooling would avoid that, correct?\n> > If your application is big enough you may benefit from serving static\n> > content (images etc) from a different server, so the Apache processes\n> > with persistant connection to backends are being used more heavily for\n> > database work.\n> True, but in this case probably moving the database to a different server\n> would make more sense because most of the backends would be serving\n> content that is completely unrelated to the database.\n\nWell, here's the problem:\n\n1 apache/php/postgres thread = 1 possible persistant postgres connection\n\nif you run up 200 threads on _any_ server instance, that means you need\n200 waiting backends, if that server is also doing postgres content\nwith persistant connections anywhere in that server.\n\nI think the idea being referred to works like this:\nIn a big, mega-hit app, you put your simple content on a simple server,\nso the web pages reference GIF's/frames/whatever stored there, rather\nthan on a heavy-use box. This means that the clients go to *another*\nweb server for that non-dynamic content.\n\n> How would persistent connections fit into a dual-server setup where one\n> server is handling all of the webserving and the other simply handles the\n> database data-serving?\n\nEr... 
well, if you db load was really heavy, this would make sense,\nbut your problem is about having all of the webserving in one place.\n\n> The number of backends on the database server would be independent\n> of the number of Apache processes on the webserver inasmuch as there\n> could be 75 Apache processes but only 25 are connected to backends on the\n> database server, correct?\n\nAll 75 Apache processes might eventually try to serve up the db pages.\n\nSo all 75 *could* eventually want persistant connections. You can't control\nwhich process gets which page.\n\n> There would not necessarily be any Apache stuff on the database server?\n\nNot if you don't want it, no. Keep in mind that using _non_ persistant\nconnections on this setup will be even slower, as well.\n\n-Ron\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Fri, 24 Nov 2000 00:52:34 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "On Fri, Nov 24, 2000 at 02:48:18PM +0800, some SMTP stream spewed forth: \n> \n> At 12:47 PM 11/24/00, GH wrote:\n> >On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > > Oh, and if you are using pg_close() I don't think it works\n> > > in any currently released PHP4 versions. See:\n> >\n> >This seems to be true. I ran into some fun link errors while\n> >connecting and disconnecting more than once in a script.\n> \n> This sounds disturbing!\n\nMaybe it should, I thought it was. Who knows.\n> \n\n> How then should I go about closing persistent connections? Can I close \n> them at all?\n\nYou cannot, by design and purpose, close persistent connections.\nYou could kill the postgres backend, but that is not quite the same. ;-)\n\n> Would pg_close() work if I used it on non-persistent connections?\n\nMy experience has caused me to believe that no, it will not.\nThat is not final, as I do not have true proof.\n\n> \n> Thanks in advance,\n\nNo prob, we are here to benefit each other.\n\nIt seems like PHP would open other new connections using pg_connect(), \nbut would not close them. Has anyone had experiences other than this?\n\n\ngh\n\n> \n> Mikah\n> \n",
"msg_date": "Fri, 24 Nov 2000 03:00:20 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP and persistent connections"
},
{
"msg_contents": "On Fri, Nov 24, 2000 at 12:52:34AM -0700, some SMTP stream spewed forth: \n> GH wrote:\n> > On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > > Howdy,\n> > > > It turns out that the number of max_persistent\n> > > > is linked to the httpd processes in some\n> > > > difficult-to-describe way.\n> > > It's not that hard to describe. The max_persistent/max_links values are\n> > > per Apache process.\n> > It was difficult to describe because I was not recieving consistent\n> > results in experiments due to a number of factors. It makes sense now.\n> \n> I've copied this email exchange over to my PHP folder.. I see what I can do\n> do to improve the online documentation. :-)\n\nGreat. Thanks.\n\n> \n> > Well, see, the thing is, we do webhosting for a number of different\n> > domains from the same server so the number of MaxClients needs to be\n> > high. I think that 300 is obscene, as the server is not powerful enough\n> > to handle 300 apache processes without dumping a large number of them\n> > into swap space, not to mention the processing, but no matter what, we\n> > would have several extra postgres backends just hanging around wasting\n> > ram.\n> > Only a few unique persistent connections would be in use at any given\n> > time as only a few domains use the database.\n> \n> Give them their own apache? You can set up two apache instances on one box,\n> set up one with lots of backends, set up the other to match the applicable\n> db usage...\n> You could make a postgres+apache box for these few clients...\n\nEr, I think I missed something.\nYou mean give them their own Apache instance using a seperate ip?\n\nIs it /possible/ to have a group of httpd processes (Apache) share a \ngroup of Postgres backends without having one backend to one httpd?\nThat would be connection pooling, correct? Which is not yet possible?\n\n> \n> > This has made me realize just how completely braindead our server setup\n> > is. 
;-) It seems that we would to bring up a seperate database\n> > server, very soon.\n> \n> Depends on the load. I'm serving 429 domains off of PHP/PostgreSQL,\n> using non-persistant connections (even though almost every page has\n> a select or two), and it's working just fine. My biggest selects only\n> return a few hundred rows, my small inserts/updates are done in PHP,\n> the big ones (4,000+ rows) are just parsed into files that a Perl/cron job\n> takes care of them. It also depends, obviously, on how you write your\n> code for all of this, how good the hardware is, etc.\n> (PII/500, 512Mb of RAM, RH 6.2 for the above)\n\nThat makes sense. The only reason I am so zealous about persistent \nconnections is that I have seen them be 3 times as fast as regular\nconnections.\n\n> \n> > > Obviously this isn't the most efficient use of backends because a fair\n> > > amount of the time the Apache processes won't be using them at all\n\nMy main question now is, how can I avoid this?\nI would have to go to non-persistent connections, correct?\nI think I further understand things now.\n\n\nSo, persistent connections create a one-to-one ratio of \ndb-using Apache processes and Postgres backends, no matter what?\nThe only way to avoid such a one-to-one setup would be to \nuse non-persistent connections or do connection pooling?\n\nSo, even if the database were running on a seperate server, \neach apache procees on the main server would require one backend process\non the db server?\n\n> \n> -Ron\n> \n> --\n> Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\n> which is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Fri, 24 Nov 2000 03:16:36 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "I have a couple of other questions that I believe are not answered in \nthe docs anywhere.\n\nDo the \"persistent-connected\" Postgres backends ever timeout or die?\nIs it possible to set something like a timeout for persistent connections?\n(Er, would that be something that someone would want \n\tto do? A Bad Thing?)\n\nWhat happens when the httpd process that held a persistent connection\ndies? Does \"its\" postgres process drop the connection and wait for\nothers? When the spare apache processes die, the postgres processes\nremain.\n\nThanks.\n\ngh\n\n",
"msg_date": "Fri, 24 Nov 2000 03:36:37 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "We're in quote hell.\nYay. \n\nGH wrote:\n> \n> On Fri, Nov 24, 2000 at 12:52:34AM -0700, some SMTP stream spewed forth:\n> > GH wrote:\n> > > On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > > > Howdy,\n> > Give them their own apache? You can set up two apache instances on one box,\n> > set up one with lots of backends, set up the other to match the applicable\n> > db usage...\n> > You could make a postgres+apache box for these few clients...\n> Er, I think I missed something.\n> You mean give them their own Apache instance using a seperate ip?\n\nYes.\n\nApache one, httpd, serves 14 domains, conf files in /usr/local/apache/conf.\npgsql.max_persistent = 1\nMaxClients 8\n\nApache two, httpd2, serves 327 domains, conf files in /usr/local/apache2/conf.\nMax clients 150 (no postgres backends, no PHP)\n\n> Is it /possible/ to have a group of httpd processes (Apache) share a\n> group of Postgres backends without having one backend to one httpd?\n> That would be connection pooling, correct? Which is not yet possible?\n\nApache's process management, AFAIK, makes this fairly difficult. As in:\n\"I've never seen it, and I can't find docs on on, maybe v.2 will\nhave better support for children sharing common resources\".\n\n> > Depends on the load. I'm serving 429 domains off of PHP/PostgreSQL,\n> > using non-persistant connections (even though almost every page has\n> > a select or two), and it's working just fine. My biggest selects only\n> > return a few hundred rows, my small inserts/updates are done in PHP,\n> > the big ones (4,000+ rows) are just parsed into files that a Perl/cron job\n> > takes care of them. It also depends, obviously, on how you write your\n> > code for all of this, how good the hardware is, etc.\n> > (PII/500, 512Mb of RAM, RH 6.2 for the above)\n> That makes sense. 
The only reason I am so zealous about persistent\n> connections is that I have seen them be 3 times as fast as regular\n> connections.\n\nHm.\n\nI haven't. In PHP, one connection for the duration of a single\npage (pg_connect()) takes as much time as a new persistant connection\n(pg_pconnect()). Since you're often only creating one connection per page,\nand running a single transaction on it, the main difference\nwould be in your connection setup... how did you test this? (I'm just\ncurious). Is it a usage test (real, live, use) or a bench test (push\nto a limit that won't be reached in actual use.) I have one horribly\nwritten app, that does maybe 50 _different_ selects on one page,\nand it's still under two seconds per user....\n\n> > > > Obviously this isn't the most efficient use of backends because a fair\n> > > > amount of the time the Apache processes won't be using them at all\n> My main question now is, how can I avoid this?\n\nServe the postgres pages from a different server instance, on the same\nmachine, or a different one.\n\n> I would have to go to non-persistent connections, correct?\n\nYou could use persistant connections on a different server/instance,\nor use non-persistant and lose ~10ms per page, less time than your average\n10K GIF takes up on a 56K download.\n\nYou see, persistant PHP connections offer *no other value*, at all. None.\n(it's a common error for new PHP folks to think that a web server\nwill somehow track their connections.) All it does is reduce setup time on a\npage. No \"session\", no \"tracking\", nada. It reduces your connection\ntime for the page, but not significantly enough for users to know,\nor care (IME). In web-page uses, the time is pretty much irrelevant,\nbecause you only need one or two connections per page to get most\nof your data out. Persistant connections are an interesting idea,\nbut they don't offer much. 
See:\nhttp://www.php.net/manual/features.persistent-connections.php\n\n> So, persistent connections create a one-to-one ratio of\n> db-using Apache processes and Postgres backends, no matter what?\n\nAlmost. You can have more persistant connections for each apache\nchild, but each child may look for one. So it may be 5 apache\nto 5 postgres, or 5 apache to 50 postgres, if needed (of course,\nif you had that many connections, you may want to re-architect anyways)\n\n> The only way to avoid such a one-to-one setup would be to\n> use non-persistent connections or do connection pooling?\n\nI'm still not following you on the \"pooling\". Apache doesn't, AFAICT,\noffer this in each child. Each child is its own application, its own\napache+php+postgres. Postgres doesn't care. PHP doesn't care. Apache\ncares. If you give each child piece 5 postgres connections, and have\n10 children, you need up to 50 backends.\n\n> So, even if the database were running on a seperate server,\n> each apache procees on the main server would require one backend process\n> on the db server?\n\nYup. If it was going to pull a postgres+PHP page, it would. You see,\napache doesn't work in a space where one apache process can crash the\nwhole thing. Each piece is isolated. This means that each piece needs\nits own resources. Compare this to other engines, where a single\ncrash on one serving instance takes down the _entire_ server, and\nit makes sense (if the pool is down, it all goes down, a la IIS).\n\n\"It scales, but not that way\". :-(\n\n-Ronabop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Fri, 24 Nov 2000 04:52:27 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "On Fri, Nov 24, 2000 at 04:52:27AM -0700, some SMTP stream spewed forth: \n> We're in quote hell.\n> Yay. \n\nAh, but now the hell thickens. ;-)\n\n> \n> GH wrote:\n> > \n> > On Fri, Nov 24, 2000 at 12:52:34AM -0700, some SMTP stream spewed forth:\n> > > GH wrote:\n> > > > On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > > > > Howdy,\n> > > Give them their own apache? You can set up two apache instances on one box,\n> > > set up one with lots of backends, set up the other to match the applicable\n> > > db usage...\n> > > You could make a postgres+apache box for these few clients...\n> > Er, I think I missed something.\n> > You mean give them their own Apache instance using a seperate ip?\n> \n> Yes.\n> \n> Apache one, httpd, serves 14 domains, conf files in /usr/local/apache/conf.\n> pgsql.max_persistent = 1\n> MaxClients 8\n> \n> Apache two, httpd2, serves 327 domains, conf files in /usr/local/apache2/conf.\n> Max clients 150 (no postgres backends, no PHP)\n\nI see. \n\n> \n> > Is it /possible/ to have a group of httpd processes (Apache) share a\n> > group of Postgres backends without having one backend to one httpd?\n> > That would be connection pooling, correct? Which is not yet possible?\n> \n> Apache's process management, AFAIK, makes this fairly difficult. As in:\n> \"I've never seen it, and I can't find docs on on, maybe v.2 will\n> have better support for children sharing common resources\".\n> \n\nJust checking. I had heard (and expected) that it did not -- for the same\nreason.\n\n> > > Depends on the load. I'm serving 429 domains off of PHP/PostgreSQL,\n> > > using non-persistant connections (even though almost every page has\n> > > a select or two), and it's working just fine. My biggest selects only\n> > > return a few hundred rows, my small inserts/updates are done in PHP,\n> > > the big ones (4,000+ rows) are just parsed into files that a Perl/cron job\n> > > takes care of them. 
It also depends, obviously, on how you write your\n> > > code for all of this, how good the hardware is, etc.\n> > > (PII/500, 512Mb of RAM, RH 6.2 for the above)\n> > That makes sense. The only reason I am so zealous about persistent\n> > connections is that I have seen them be 3 times as fast as regular\n> > connections.\n> \n> Hm.\n> \n> I havn't. In PHP, one connection for the duration of a single\n> page (pg_connect()) takes as much time as a new persistant connection\n> (pg_pconnect()). Since you're often only creating one connection per page,\n> and running a single transaction on it, the main difference\n> would be in your connection setup... how did you test this? (I'm just\n> curious). Is it a usage test (real, live, use) or a bench test (push\n> to a limit that won't be reached in actual use.) I have one horribly\n> written app, that does maybe 50 _different_ selects on one page,\n> and it's still under two seconds per user....\n\n\"Test\" is a strong word. ;-) I have a timer set on a page.\nThe overall exec time is less than 1-tenth of a second using persistent \nconnections, so long as a connection exists. Using regular connections,\nthe exec time soars (;-)) to a whopping 3-tenths or so.\nSo, no big fat deal. 
The exec time is low enough that the effects of the\nconnections shine, but in general are insignificant.\nIf the script in discussion did anything worthwhile, I doubt that I would\nnotice anything even close to 3x.\n\n> \n> > > > > Obviously this isn't the most efficient use of backends because a fair\n> > > > > amount of the time the Apache processes won't be using them at all\n> > My main question now is, how can I avoid this?\n> \n> Serve the postgres pages from a different server instance, on the same\n> machine, or a different one.\n> \n> > I would have to go to non-persistent connections, correct?\n> \n> You could use persistant connections on a different server/instance,\n> or use non-persistant and loose ~10ms per page, less time than your average\n> 10K GIF takes up on a 56K download.\n> \n> You see, persistant PHP connections offer *no other value*, at all. None.\n> (it's a common error for new PHP folks to think that a web server\n> will somehow track their connections.) All it does is reduce setup time on a\n> page. No \"session\", no \"tracking\", nada. It reduces your connection\n> time for the page, but not significanly enough for users to know,\n> or care (IME). In web-page uses, the time is pretty much irrelevant,\n> because you only need one or two connections per page to get most\n> of your data out. Persistant connections are an interesting idea,\n> but they don't offer much. See:\n> http://www.php.net/manual/features.persistent-connections.php\n\nI have read it (note: the phrasing seems to be a bit \"messy\"), but \nfor some reason I must have missed what it was saying. I \"get it\" now.\n\n> \n> > So, persistent connections create a one-to-one ratio of\n> > db-using Apache processes and Postgres backends, no matter what?\n> \n> Almost. You can have more persistant connections for each apache\n> child, but each child may look for one. 
So it may be 5 apache\n> to 5 postgres, or 5 apache to 50 postgres, if needed (of course,\n> if you had that many conections, you may want to re-architect anyways)\n> \n> > The only way to avoid such a one-to-one setup would be to\n> > use non-persistent connections or do connection pooling?\n> \n> I'm still not following you on the \"pooling\". Apache doesn't, AFAICT,\n\nI almost knew that it did not. But I was trying to re-affirm my grasp of \njust what \"pooling\" would do.\n\n> offer this in each child. Each child is its own application, it's own\n> apache+php+postgres. Postgres doesn't care. PHP doesn't care. Apache\n> cares. If you give each child piece 5 postgres connections, and have\n> 10 children, you need up to 50 backends.\n> \n> > So, even if the database were running on a seperate server,\n> > each apache procees on the main server would require one backend process\n> > on the db server?\n> \n> Yup. If it was going to pull a postgres+PHP page, it would. You see,\n> apache doesn't work in a space where one apache process can crash the\n> whole thing. Each piece is isolated. This means that each piece needs\n> it's own resources. Compare this to other engines, where a single\n> crash on one serving instance takes down the _entire_ server, and\n> it makes sense (if the pool is down, it all goes down, a la IIS).\n> \n> \"It scales, but not that way\". :-(\n\nGot it. Maybe this thread will finally pass away now. ;-)\n\nThanks again.\n\ngh\n\n> \n> -Ronabop\n> \n> --\n> Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\n> which is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Fri, 24 Nov 2000 06:18:04 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "GH <[email protected]> writes:\n> Do the \"persistent-connected\" Postgres backends ever timeout or die?\n\nNo. A backend will sit patiently for the client to send it another\nquery or close the connection.\n\n(Barely on topic: in recent releases, the backend does set TCP\n\"keepalive\" mode on the client socket. On a cross-machine connection,\nthis causes the kernel to ping every so often on an idle connection, to\nmake sure that the peer machine is still alive and still believes the\nconnection is open. However, this does not guard against a client\nprocess that is holding connections open without any intention of using\nthem again soon --- it only protects against half-open connections left\nover after a system crash at the client end. In any case, I believe the\ntotal time delay before declaring the connection lost has to be an hour\nor more in a spec-compliant TCP implementation.)\n\n> Is it possible to set something like a timeout for persistent connctions?\n> (Er, would that be something that someone would want \n> \tto do? A Bad Thing?)\n\nThis has been suggested before, but I don't think any of the core\ndevelopers consider it a good idea. Having the backend arbitrarily\ndisconnect on an active client would be a Bad Thing for sure. Hence,\nany workable timeout would have to be quite large (order of an\nhour, maybe? not milliseconds anyway). And that means that it's not\nan effective solution for the problem. Under load, a webserver that\nwastes backend connections will run out of available backends long\nbefore a safe timeout would start to clean up after it.\n\nTo my mind, a client app that wants to use persistent connections\nhas got to implement some form of connection pooling, so that it\nrecycles idle connections back to a \"pool\" for allocation to task\nthreads that want to make a new query. 
And the threads have to release\nconnections back to the pool as soon as they're done with a transaction.\nActively releasing an idle connection is essential, rather than\ndepending on a timeout.\n\nI haven't studied PHP at all, but from this conversation I gather that\nit's only halfway there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 12:02:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections "
},
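Tom Lane's pooling description above — recycle idle connections back to a pool, and have each task actively release its connection the moment a transaction finishes rather than relying on a timeout — can be sketched in a few lines. This is a hypothetical illustration in Python, not code from PostgreSQL or PHP; the stub `connect` callable stands in for opening a real backend connection.

```python
import queue

class ConnectionPool:
    """Fixed-size pool: a bounded set of backends is shared by many tasks."""

    def __init__(self, connect, size):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(connect())  # open every backend up front

    def acquire(self):
        # Blocks until a connection is free, so bursts of demand queue up
        # in the client instead of exhausting backends.
        return self._idle.get()

    def release(self, conn):
        # Called as soon as the transaction is done -- active release,
        # not a timeout, is what keeps the pool from draining.
        self._idle.put(conn)

# Demo with stub "connections"; a real pool would open database sockets.
pool = ConnectionPool(connect=object, size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)      # c1's backend is immediately reusable
c3 = pool.acquire()   # served from the pool; no third backend is created
assert c3 is c1
```

With only two "backends," a third acquire succeeds because a finished task gave its connection back — exactly the recycling behavior described in the message.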
{
"msg_contents": "Note: CC'd to Hackers, as this has wandered into deeper feature issues.\n\nTom Lane wrote:\n> GH <[email protected]> writes:\n> > Do the \"persistent-connected\" Postgres backends ever timeout or die?\n> No. A backend will sit patiently for the client to send it another\n> query or close the connection.\n\nThis does have an unfortunate denial-of-service implication, where\nan attack can effectively suck up all available backends, and there's\nno throttle, no timeout, no way of automatically dropping these....\n\nHowever, the more likely possibility is similar to the problem that\nwe see in PHP's persistent connections.... a normally benign connection\nis inactive, and yet it isn't dropped. If you have two of these created\nevery day, and you only have 16 backends, after 8 days you have a lockout.\n\nOn a busy web site or another busy application, you can, of course,\nexhaust 64 backends in a matter of minutes.\n\n> > Is it possible to set something like a timeout for persistent connctions?\n> > (Er, would that be something that someone would want\n> > to do? A Bad Thing?)\n> This has been suggested before, but I don't think any of the core\n> developers consider it a good idea. Having the backend arbitrarily\n> disconnect on an active client would be a Bad Thing for sure.\n\nRight.... but I don't think anybody has suggested disconnecting an *active*\nclient, just inactive ones.\n\n> Hence,\n> any workable timeout would have to be quite large (order of an\n> hour, maybe? not milliseconds anyway). \n\nThe mySQL disconnect starts at around 24 hours. It prevents a slow\naccumulation of unused backends, but does nothing for a rapid\naccumulation. It can be cranked down to a few minutes AFAIK.\n\n> And that means that it's not\n> an effective solution for the problem. 
Under load, a webserver that\n> wastes backend connections will run out of available backends long\n> before a safe timeout would start to clean up after it.\n\nDepends on how it's set up... you see, this isn't uncharted territory,\nother web/db solutions have already fought with this issue. Much\nlike the number of backends set up for pgsql must be static, a timeout\nmay wind up being the same way. The critical thing to realize is\nthat you are timing out _inactive_ connections, not connections\nin general. So provided that a connection provided information\nabout when it was last used, or usage set a counter somewhere, it\ncould easily be checked.\n\n> To my mind, a client app that wants to use persistent connections\n> has got to implement some form of connection pooling, so that it\n> recycles idle connections back to a \"pool\" for allocation to task\n> threads that want to make a new query. And the threads have to release\n> connections back to the pool as soon as they're done with a transaction.\n> Actively releasing an idle connection is essential, rather than\n> depending on a timeout.\n> \n> I haven't studied PHP at all, but from this conversation I gather that\n> it's only halfway there...\n\nWell...... This is exactly how apache and PHP serve pages. The\nproblem is that apache children aren't threads, they are separate copies\nof the application itself. So a single apache thread will re-use the\nsame connection, over and over again, and give that connection over to\nother connections on that apache thread.. so in your above model, it's\nnot really one client application in the first place.\n\nIt's a dynamic number of client applications, between one and hundreds\nor so.\n\nSo to turn the feature request the other way 'round:\n\"I have all sorts of client apps, connecting in different ways, to\nmy server. Some of the clients are leaving their connections open,\nbut unused. 
How can I prevent running out of backends, and boot\nthe inactive users off?\"\n\n-Ronabop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Sat, 25 Nov 2000 17:26:42 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Re: re : PHP and persistent connections"
},
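Ron's proposal — time out only *inactive* connections, "provided that a connection provided information about when it was last used" — reduces to tracking a last-used timestamp per connection and reaping against a threshold. A minimal sketch of that idea (hypothetical; the postmaster does not actually do this):

```python
import time

class TrackedConnection:
    """A connection wrapper that records when it was last active."""

    def __init__(self, name):
        self.name = name
        self.last_used = time.monotonic()

    def query(self, sql):
        self.last_used = time.monotonic()  # any activity resets the idle clock
        # ... execute sql against the backend here ...

def reap_idle(connections, max_idle_seconds):
    """Split connections into (kept, reaped): only those idle longer than
    the threshold are reaped, so active clients are never disconnected."""
    now = time.monotonic()
    kept, reaped = [], []
    for conn in connections:
        if now - conn.last_used > max_idle_seconds:
            reaped.append(conn)   # a real reaper would close the backend here
        else:
            kept.append(conn)
    return kept, reaped

# Demo: one busy connection, one abandoned for an hour.
busy = TrackedConnection("busy")
abandoned = TrackedConnection("abandoned")
abandoned.last_used -= 3600        # simulate an hour of silence
kept, reaped = reap_idle([busy, abandoned], max_idle_seconds=1800)
assert [c.name for c in reaped] == ["abandoned"]
assert [c.name for c in kept] == ["busy"]
```

The key property from the thread is visible in the demo: an active client is untouched no matter how aggressive the threshold, while an abandoned connection is reclaimed.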
{
"msg_contents": "At 05:26 PM 11/25/00 -0700, Ron Chmara wrote:\n>Note: CC'd to Hackers, as this has wandered into deeper feature issues.\n>\n>Tom Lane wrote:\n>> GH <[email protected]> writes:\n>> > Do the \"persistent-connected\" Postgres backends ever timeout or die?\n>> No. A backend will sit patiently for the client to send it another\n>> query or close the connection.\n>\n>This does have an unfortunate denial-of-service implication, where\n>an attack can effectively suck up all available backends, and there's\n>no throttle, no timeout, no way of automatically dropping these....\n>\n>However, the more likely possibility is similar to the problem that\n>we see in PHP's persistant connections.... a normally benign connection\n>is inactive, and yet it isn't dropped. If you have two of these created\n>every day, and you only have 16 backends, after 8 days you have a lockout.\n>\n>On a busy web site or another busy application, you can, of course,\n>exhaust 64 backends in a matter of minutes.\n\nUgh...the more I read stuff like this the more I appreciate AOlserver's\nbuilt-in database API which protects the application from any such\nproblems altogether. The particular problem being described simply\ncan't occur in this environment.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 25 Nov 2000 18:54:21 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent\n connections"
},
{
"msg_contents": "> \"I have all sorts of client apps, connecting in different ways, to\n> my server. Some of the clients are leaving their connections open,\n> but unused. How can I prevent running out of backends, and boot\n> the inactive users off?\"\n\nhow about having a middle man between apache (or aolserver or any other\nclients...) and PostgreSQL ??\n\nthat middleman could be configured to have 16 persistent connections,every\nclient would deal with the middleman instead of going direct to the\ndatabase,this would be an advantage where multiple PostgreSQL server are\nused...\n\n240 apache process are running on a box and there's 60 PostgreSQL instance\nrunning on the machine or another machine:\n\n240 apache process --> middleman --> 60 PostgreSQL process\n\nnow if there's multiple Database server:\n\n240 apache process --> middleman --> 12 PostgreSQL for each server (5\nservers in this case)\n\nin this case,the middleman could be a shared library which the clients\nlink to..\n\nwhat do you think about that ??\n\nAlain Toussaint\n\n",
"msg_date": "Sun, 26 Nov 2000 00:07:46 -0500 (EST)",
"msg_from": "Alain Toussaint <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
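The redirect step in Alain's later example ("reconfigure the middleman so it redirects all requests from database4 to database6", then SIGHUP it to reload the config) is essentially a reloadable routing table in front of the servers. A hypothetical sketch of just that piece — the server names come from his example, not from any real middleman implementation:

```python
class Middleman:
    """Maps logical server names to physical targets; the map is replaced
    atomically when the config file is re-read (e.g. on SIGHUP)."""

    def __init__(self, redirects=None):
        self.redirects = dict(redirects or {})

    def reload(self, redirects):
        # What a SIGHUP handler re-reading /etc/middleman.conf would do.
        self.redirects = dict(redirects)

    def resolve(self, server):
        # Follow redirects until we reach a server with no redirect;
        # clients only ever see the logical name.
        while server in self.redirects:
            server = self.redirects[server]
        return server

mm = Middleman()
assert mm.resolve("database4.example.net") == "database4.example.net"

# database4 starts its dump; database6 takes over its traffic.
mm.reload({"database4.example.net": "database6.example.net"})
assert mm.resolve("database4.example.net") == "database6.example.net"
assert mm.resolve("database1.example.net") == "database1.example.net"
```

Because clients address the logical name, taking database4 offline for maintenance requires no client changes — only a config edit and a reload, which is the "location independence" Alain argues for below.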
{
"msg_contents": "At 10:00 PM 11/25/00 -0800, Mitch Vincent wrote:\n> I've tried quite a bit to use persistent connections with PHP (for over\n>a year) and always the scripts that I try to use them with behave crazy...\n>The last time I tried there were problems all over the place with PHP,\n>variables getting overwritten, certain functions just totally breaking\n>(date() to name one) and so on.. I know I'm not being specific but my point\n>is that I think there are some other outstanding PHP issues that play into\n>this problem as the behavior that I've seen isn't directly related to\n>PostgreSQL but only happens when I use persistent connections.. \n\nI've heard rumors that PHP isn't thoroughly threadsafe, could this be a\nsource of your problems?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 25 Nov 2000 21:18:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent\n connections"
},
{
"msg_contents": "At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:\n\n>how about having a middle man between apache (or aolserver or any other\n>clients...) and PosgreSQL ??\n>\n>that middleman could be configured to have 16 persistant connections,every\n>clients would deal with the middleman instead of going direct to the\n>database,this would be an advantage where multiple PostgreSQL server are\n>used...\n\nWell, this is sort of what AOLserver does for you without any need for\nmiddlemen. \n\nAgain, reading stuff like this makes me think \"ugh!\"\n\nThis stuff is really pretty easy, it's amazing to me that the Apache/db\nworld talks about such kludges when they're clearly not necessary.\n\nMy first experience running a website (donb.photo.net) was with Apache\non Linux on an old P100 system in 1996 when few folks had personal photo\nsites with >1000 photos on them getting thousands of hits a day. I have\nfond memories of those days, and Apache served me (or more properly webserved\nmy website) well. This site is largely responsible for my reputation that\nlets me freelance nature photography to the national media market pretty\nmuch at will. Thus my fondness.\n\nBut ... for database stuff the release of AOLserver as first Free Beer,\nand now Free Speech software has caused me to abandon Apache and suggestions\nlike the above just make me cringe.\n\nIt shouldn't be that hard, folks.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 25 Nov 2000 21:24:22 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent\n connections"
},
{
"msg_contents": " I've tried quite a bit to use persistent connections with PHP (for over\na year) and always the scripts that I try to use them with behave crazy...\nThe last time I tried there were problems all over the place with PHP,\nvariables getting overwritten, certain functions just totally breaking\n(date() to name one) and so on.. I know I'm not being specific but my point\nis that I think there are some other outstanding PHP issues that play into\nthis problem as the behavior that I've seen isn't directly related to\nPostgreSQL but only happens when I use persistent connections.. I've been\ntrying to corner the problem for quite some time, it's an elusive one for\nsure.. I spoke with the PHP developers 9 or so months ago about the problems\nand they didn't seem to pay any attention to it, the thread on the mailing\nlist was short with the bug report collecting dust at the bottom of the\nto-do list I'm sure (as that was back before PHP 4 was even released and\nobviously the problem remains)..\n\nJust my $0.02 worth.\n\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Ron Chmara\" <[email protected]>\nTo: \"Tom Lane\" <[email protected]>; \"PostgreSQL Hackers List\"\n<[email protected]>\nCc: \"GH\" <[email protected]>; <[email protected]>\nSent: Saturday, November 25, 2000 4:26 PM\nSubject: [HACKERS] Re: [NOVICE] Re: re : PHP and persistent connections\n\n\n> Note: CC'd to Hackers, as this has wandered into deeper feature issues.\n>\n> Tom Lane wrote:\n> > GH <[email protected]> writes:\n> > > Do the \"persistent-connected\" Postgres backends ever timeout or die?\n> > No. 
A backend will sit patiently for the client to send it another\n> > query or close the connection.\n>\n> This does have an unfortunate denial-of-service implication, where\n> an attack can effectively suck up all available backends, and there's\n> no throttle, no timeout, no way of automatically dropping these....\n>\n> However, the more likely possibility is similar to the problem that\n> we see in PHP's persistant connections.... a normally benign connection\n> is inactive, and yet it isn't dropped. If you have two of these created\n> every day, and you only have 16 backends, after 8 days you have a lockout.\n>\n> On a busy web site or another busy application, you can, of course,\n> exhaust 64 backends in a matter of minutes.\n>\n> > > Is it possible to set something like a timeout for persistent\nconnctions?\n> > > (Er, would that be something that someone would want\n> > > to do? A Bad Thing?)\n> > This has been suggested before, but I don't think any of the core\n> > developers consider it a good idea. Having the backend arbitrarily\n> > disconnect on an active client would be a Bad Thing for sure.\n>\n> Right.... but I don't think anybody has suggested disconnecting an\n*active*\n> client, just inactive ones.\n>\n> > Hence,\n> > any workable timeout would have to be quite large (order of an\n> > hour, maybe? not milliseconds anyway).\n>\n> The mySQL disconnect starts at around 24 hours. It prevents a slow\n> accumulation of unused backends, but does nothing for a rapid\n> accumulation. It can be cranked down to a few minutes AFAIK.\n>\n> > And that means that it's not\n> > an effective solution for the problem. Under load, a webserver that\n> > wastes backend connections will run out of available backends long\n> > before a safe timeout would start to clean up after it.\n>\n> Depends on how it's set up... you see, this isn't uncharted territory,\n> other web/db solutions have already fought with this issue. 
Much\n> like the number of backends set up for pgsql must be static, a timeout\n> may wind up being the same way. The critical thing to realize is\n> that you are timing out _inactive_ connections, not connections\n> in general. So provided that a connection provided information\n> about when it was last used, or usage set a counter somewhere, it\n> could easily be checked.\n>\n> > To my mind, a client app that wants to use persistent connections\n> > has got to implement some form of connection pooling, so that it\n> > recycles idle connections back to a \"pool\" for allocation to task\n> > threads that want to make a new query. And the threads have to release\n> > connections back to the pool as soon as they're done with a transaction.\n> > Actively releasing an idle connection is essential, rather than\n> > depending on a timeout.\n> >\n> > I haven't studied PHP at all, but from this conversation I gather that\n> > it's only halfway there...\n>\n> Well...... This is exactly how apache and PHP serve pages. The\n> problem is that apache children aren't threads, they are separate copies\n> of the application itself. So a single apache thread will re-use the\n> same connection, over and over again, and give that conection over to\n> other connections on that apache thread.. so in your above model, it's\n> not really one client application in the first place.\n>\n> It's a dynamic number of client applications, between one and hundreds\n> or so.\n>\n> So to turn the feature request the other way 'round:\n> \"I have all sorts of client apps, connecting in different ways, to\n> my server. Some of the clients are leaving their connections open,\n> but unused. How can I prevent running out of backends, and boot\n> the inactive users off?\"\n>\n> -Ronabop\n>\n> --\n> Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC\nmachine,\n> which is currently in MacOS land. Your bopping may vary.\n>\n\n",
"msg_date": "Sat, 25 Nov 2000 22:00:27 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "> Well, this is sort of what AOLserver does for you without any need for\n> middlemen.\n\ni agree that AOLserver is good karma,i've been reading various docs on\nAOLserver since Philip Greenspun talked about it on linuxworld and i'm glad\nthat there's some java support being coded for it (in my opinion,it's the only\nadvantage that Apache had over AOLserver for me).\n\n> Again, reading stuff like this makes me think \"ugh!\"\n>\n> This stuff is really pretty easy, it's amazing to me that the Apache/db\n> world talks about such kludges when they're clearly not necessary.\n\nwell...i was using Apache as an example due to its DB model but the stuff i\nwas talking would work quite well in the case of multiple DB server\nhosting different tables and you want to maintain location\nindependence,here's an example:\n\nyou have 7 Database server,5 are online and the other 2 are for\nmaintenance and/or development purpose,for simplicity,we'll name the\nserver database1.example.net to\ndatabase7.example.net,database4.example.net is currently doing a dump and\ndatabase6.example.net is loading the dump from database4,then,you\nreconfigure the middleman so it redirects all requests from database4 to\ndatabase6:\n\nvim /etc/middleman.conf\n\nand then a sighup to the middleman so it rereads its config file:\n\nkillall -HUP middleman\n\nthis would update the middleman's shared lib with the new configuration\ninfo (and BTW,i just extended my idea from a single shared lib to a\ndaemon/shared lib combo).\n\nnow i'm off to get the dog out for a walk and then,take a nap,see ya !!\n\nAlain Toussaint\n\n",
"msg_date": "Sun, 26 Nov 2000 02:50:36 -0500 (EST)",
"msg_from": "Alain Toussaint <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "I'm sure that this, if true, could certainly be the source of the problems\nI've seen... I can't comment on if PHP is completely threadsafe, I know that\nsome of the modules (for lack of a better word) aren't, possibly the ClibPDF\nlibrary I'm using. I'll check into it.\n\nThanks!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Don Baccus\" <[email protected]>\nTo: \"Mitch Vincent\" <[email protected]>; \"PostgreSQL Hackers List\"\n<[email protected]>\nCc: <[email protected]>\nSent: Saturday, November 25, 2000 9:18 PM\nSubject: Re: [HACKERS] Re: [NOVICE] Re: re : PHP and persistent connections\n\n\n> At 10:00 PM 11/25/00 -0800, Mitch Vincent wrote:\n> > I've tried quite a bit to use persistent connections with PHP (for\nover\n> >a year) and always the scripts that I try to use them with behave\ncrazy...\n> >The last time I tried there were problems all over the place with PHP,\n> >variables getting overwritten, certain functions just totally breaking\n> >(date() to name one) and so on.. I know I'm not being specific but my\npoint\n> >is that I think there are some other outstanding PHP issues that play\ninto\n> >this problem as the behavior that I've seen isn't directly related to\n> >PostgreSQL but only happens when I use persistent connections..\n>\n> I've heard rumors that PHP isn't thoroughly threadsafe, could this be a\n> source of your problems?\n>\n>\n>\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n>\n\n",
"msg_date": "Sun, 26 Nov 2000 00:02:59 -0800",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "\nOn Sun, 26 Nov 2000, Alain Toussaint wrote:\n\n> > \"I have all sorts of client apps, connecting in different ways, to\n> > my server. Some of the clients are leaving their connections open,\n> > but unused. How can I prevent running out of backends, and boot\n> > the inactive users off?\"\n> \n> how about having a middle man between apache (or aolserver or any other\n> clients...) and PosgreSQL ??\n\n I don't see it solving anything. You just move the connection\nmanagement problem from the database to the middleman (in the industry\nsuch a thing would be called a query multiplexor). Multiplexors have\noften been used in the past to solve this problem, because the database\ncould not be extended or protected.\n\n Besides, if you are an n-tier developer, this isn't a problem as your\nmiddle tier not only does connection management, but some logic as well. At\nthe end of the day, PHP/Apache is just not suitable for complex\napplications. \n\nTom\n\n",
"msg_date": "Sun, 26 Nov 2000 10:58:34 -0800 (PST)",
"msg_from": "Tom Samplonius <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "Don Baccus wrote:\n> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:\n> >how about having a middle man between apache (or aolserver or any other\n> >clients...) and PosgreSQL ??\n> >that middleman could be configured to have 16 persistant connections,every\n> >clients would deal with the middleman instead of going direct to the\n> >database,this would be an advantage where multiple PostgreSQL server are\n> >used...\n> Well, this is sort of what AOLserver does for you without any need for\n> middlemen.\n\nWhat if you have a server farm of 8 AOL servers, and 12 perl clients, and\n3 MS Access connections, leaving things open? Is AOLserver parsing the\nPerl DBD/DBI, connects, too? So you're using AOLserver as (cough) a\nmiddleman? <g>\n\n> Again, reading stuff like this makes me think \"ugh!\"\n> This stuff is really pretty easy, it's amazing to me that the Apache/db\n> world talks about such kludges when they're clearly not necessary.\n\nHow does AOL server time out access clients, ODBC connections, Perl\nclients? I thought it was mainly web-server stuff.\n\nApache/PHP isn't the only problem. The problem isn't solved by\ntelling others to fix their software, either... is this something\nthat can be done _within_ postmaster?\n\n-Bop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Mon, 27 Nov 2000 00:38:46 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "Tom Samplonius wrote:\n> On Sun, 26 Nov 2000, Alain Toussaint wrote:\n> > > \"I have all sorts of client apps, connecting in different ways, to\n> > > my server. Some of the clients are leaving their connections open,\n> > > but unused. How can I prevent running out of backends, and boot\n> > > the inactive users off?\"\n> > how about having a middle man between apache (or aolserver or any other\n> > clients...) and PosgreSQL ??\n> I don't see it solving anything. You just move the connection\n> management problem from the database to the middleman (in the industry\n> such a thing would be called a query multiplexor). Multiplexors have\n> often been used in the past to solve this problem, because the database\n> could not be extended or protected.\n\nAnd I'm requesting protection. Because the database isn't capable of dynamically\ndestroying temporary backends. (Which would be another solution to this\nproblem)\n\n> Besides, if you are an n-tier developer, this isn't a problem as your\n> middle tier not does connection management, but some logic as well. At\n> the end of the day, PHP/Apache is just not suitable for complex\n> applications.\n\nIs it dump on PHP day?\n\nOkay, pretend the problem is left-open Perl connections. Slam that for\na while. Move over to left open Access connections. Bag on that for\na few posts. Errant C code for a few days. Still have a problem. :-) \n\nHow does a db admin close connections that are idle, and unwanted, without\nshutting the postmaster down?\n\n-Bop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n",
"msg_date": "Mon, 27 Nov 2000 00:56:03 -0700",
"msg_from": "Ron Chmara <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "At 12:38 AM 11/27/00 -0700, Ron Chmara wrote:\n>Don Baccus wrote:\n>> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:\n>> >how about having a middle man between apache (or aolserver or any other\n>> >clients...) and PosgreSQL ??\n>> >that middleman could be configured to have 16 persistant connections,every\n>> >clients would deal with the middleman instead of going direct to the\n>> >database,this would be an advantage where multiple PostgreSQL server are\n>> >used...\n>> Well, this is sort of what AOLserver does for you without any need for\n>> middlemen.\n>\n>What if you have a server farm of 8 AOL servers, and 12 perl clients, and\n>3 MS Access connections, leaving things open? Is AOLserver parsing the\n>Perl DBD/DBI, connects, too? So you're using AOLserver as (cough) a\n>middleman? <g>\n\nWell, no - we'd use the built-in Tcl, Python or nsjava (still in infancy)\nmodules which interface natively to AOLserver's built-in database API.\n\nYou don't NEED the various connection implementations buried in various\nlanguages because they're provided directly in the server.  That's the\npoint.  That's the main reason people use it.\n\nIf you're going to run CGI/Perl scripts using its database connectivity\nstuff, don't use AOLserver.  They'll run since AOLserver supports CGI,\nbut they'll run no better than under Apache and probably worse, since\nno one doing serious AOLserver work uses CGI and therefore the code which\nimplements it has languished - there's no motivation to improve something\nthat no one uses.\n\nIf you're willing to use a language module which exposes the AOLserver\nAPI to your application, then AOLserver's a great choice.\n\n>> Again, reading stuff like this makes me think \"ugh!\"\n>> This stuff is really pretty easy, it's amazing to me that the Apache/db\n>> world talks about such kludges when they're clearly not necessary.\n>\n>How does AOL server time out access clients, ODBC connections, Perl\n>clients? I thought it was mainly web-server stuff.\n\nWell, for starters one normally wouldn't use ODBC since AOLserver\nincludes drivers for PostgreSQL, Oracle and Sybase.  There's one for\nSolid, too, but no one seems to use Solid since they raised their\nprices drastically a couple of years ago (if you're going to spend\nlots of money on a database, Oracle and Sybase are more than willing\nto help you).  Nor does nsjava use JDBC, it encapsulates the AOLserver\nAPI into a database API class(es?).\n\nAOLserver manages the database pools in about the same way it manages\nthreads, i.e. if a thread can't get the handles it needs (usually only\none, sometimes two, more than that usually indicates poorly written\ncode) it blocks until another thread releases a handle.  When a thread\nends (returns a page) any allocated handles are released.  Transactions\nthat haven't been properly committed are rolled back as well (lesser of\ntwo evils - the event's logged since it indicates a bug). \n\nFor each pool you provide the name of the driver (which of course serves\nto select which RDMBS that pool will use - you can use as many different\nRDBMSs as you have, and have drivers for), a datasource, the maximum \nnumber of connections to open for that pool, minimum and maximum lifetimes\nfor connections, etc. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n  Nature photos, on-line guides, Pacific Northwest\n  Rare Bird Alert Service and other goodies at\n  http://donb.photo.net.\n",
"msg_date": "Mon, 27 Nov 2000 07:18:48 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent\n connections"
},
{
"msg_contents": "Uh, Don?\nNot all the world's a web page, you know. That kind of thinking is _so_\nmid 90's ;-) Dedicated apps that talk directly to the user seem to be making\na comeback, due to a number of factors. They can have much cleaner user\ninterfaces, for example.\n\nWhich brings us back around to the point of why this is on Hackers:\nPostgreSQL currently has no clean method for dropping idle connections.\nYes, some apps handle this themselves, but not all. A number of people\nseem to feel there is a need for this feature. How hard would it be to\nimplement? \n\nProbably not too hard: we've already got an 'idle' state, during which we \nblock on the input. Add a timeout to that, and we're pretty much there.\n\n<goes and looks at code for a bit> \n\nHmm, we're down in the bowels of libpq, doing a recv() on the socket\nto the frontend, about 4 layers down from backend's blocking call to\nReadCommand(). I seem to recall someone working on creating an async\nversion of the libpq API, but Tom not being happy with the approach.\nSo, it's not a simple change.\n\nRoss\n\nOn Mon, Nov 27, 2000 at 07:18:48AM -0800, Don Baccus wrote:\n> At 12:38 AM 11/27/00 -0700, Ron Chmara wrote:\n> >Don Baccus wrote:\n> >> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:\n> >> >how about having a middle man between apache (or aolserver or any other\n> >> >clients...) and PosgreSQL ??\n> >> >that middleman could be configured to have 16 persistant connections,every\n> >> >clients would deal with the middleman instead of going direct to the\n> >> >database,this would be an advantage where multiple PostgreSQL server are\n> >> >used...\n> >> Well, this is sort of what AOLserver does for you without any need for\n> >> middlemen.\n> >\n> >What if you have a server farm of 8 AOL servers, and 12 perl clients, and\n> >3 MS Access connections, leaving things open? Is AOLserver parsing the\n> >Perl DBD/DBI, connects, too? So you're using AOLserver as (cough) a\n> >middleman? <g>\n\nNote that only the AOL servers here are web client/servers, the rest are\ndedicated apps.\n\n<snip Don missing the point>\n\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Mon, 27 Nov 2000 10:46:08 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "You could set MaxRequestsPerChild in apache's httpd.conf. This controls\nhow many requests each apache process is allowed to serve. After it\nserves this many the process dies which should close the postgres process\nas well (if it isn't, you have other problems). \n\nI know that for a long time Apache recommened setting this fairly low on\nSolaris due to a memory leak in solaris...ideally you'd want to set this\nreally high, but setting it low will make the processes die...\n\n-philip\n\nOn Fri, 24 Nov 2000, jmcazurin wrote:\n\n> \n> At 12:47 PM 11/24/00, GH wrote:\n> >On Fri, Nov 24, 2000 at 03:17:59PM +1100, some SMTP stream spewed forth:\n> > > Oh, and if you are using pg_close() I don't think it works\n> > > in any currently released PHP4 versions. See:\n> >\n> >This seems to be true. I ran into some fun link errors while\n> >connecting and disconnecting more than once in a script.\n> \n> This sounds disturbing!\n> \n> How then should I go about closing persistent connections? Can I close \n> them at all?\n> \n> Would pg_close() work if I used it on non-persistent connections?\n> \n> Thanks in advance,\n> \n> Mikah\n> \n\n",
"msg_date": "Mon, 27 Nov 2000 08:55:58 -0800 (PST)",
"msg_from": "Philip Hallstrom <[email protected]>",
"msg_from_op": false,
"msg_subject": "re: PHP and persistent connections"
},
{
"msg_contents": "> Is it possible to set something like a timeout for persistent connctions?\n> (Er, would that be something that someone would want \n> \tto do? A Bad Thing?)\n\nsee my other email about apache's MaxRequestsPerChild...\n\n> What happens when the httpd process that held a persistent connection\n> dies? Does \"its\" postgres process drop the connection and wait for\n> others? When the spare apache processes die, the postgres processes\n> remain.\n\nOn my server (freebsd 4.x, php 4.0.2, postgresl 7.0.3) when I kill the\nhttpd processes the postgres processes die as well...\n\n",
"msg_date": "Mon, 27 Nov 2000 08:57:23 -0800 (PST)",
"msg_from": "Philip Hallstrom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: re : PHP and persistent connections"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Which brings us back around to the point of why this is on Hackers:\n> PostgreSQL currently has no clean method for dropping idle connections.\n> Yes, some apps handle this themselves, but not all. A number of people\n> seem to feel there is a need for this feature.\n\nI'm still not following exactly what people think would happen if we did\nhave such a \"feature\". OK, the backend times out after some interval\nof seeing no activity, and disconnects. How is the client going to\nreact to that, exactly, and why would it not conclude that something's\ngone fatally wrong with the database?\n\nSeems to me that you still end up having to fix the client, and that\nin the last analysis this is a client issue, not something for the\nbackend to hack around.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Nov 2000 12:09:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections "
},
{
"msg_contents": "On Mon, Nov 27, 2000 at 12:09:00PM -0500, Tom Lane wrote:\n> \n> I'm still not following exactly what people think would happen if we did\n> have such a \"feature\".  OK, the backend times out after some interval\n> of seeing no activity, and disconnects.  How is the client going to\n> react to that, exactly, and why would it not conclude that something's\n> gone fatally wrong with the database?\n\nBecause a lot of commercial (and other) databases have this \"feature\",\na lot of well behaved apps (and middleware packages) already know how\nto deal with it: i.e. try to reconnect, and continue. If that fails,\nthrow an error.\n\n> Seems to me that you still end up having to fix the client, and that\n> in the last analysis this is a client issue, not something for the\n> backend to hack around.\n\nIt's already fixed, see above. In addition, you're assuming the same\nadministrative entity has control over the clients and the backend.\nThis is not always the case. For example, in a web hosting environment.\nThen, the DBA has the responsibility to ensure minimal interference\nbetween different customers.\n\nAs it stands, the client that causes the problem sees no problem to\nfix: other clients get 'that damn PostgreSQL backend quits accepting\nconnections', and yell at the DBA. So, the DBA wants a way to propagate\nthe 'problem' to the clients that cause it, by timing out the idle\nconnections. Then, those clients _will_ fix their code, if it doesn't\nalready do it for them, as per above.\n\nBasically, PostgreSQL is being too polite: it's in the client's interest to\nkeep the connection open, since it minimizes response time, regardless\nof how this might affect other backends. It's cooperative vs. hard\nmultitasking, all over again.\n\nClients and servers optimize for different parameters: the client wants\nminimum response time for its requests. The backend wants minimum\n_average_ response time, over all requests.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Mon, 27 Nov 2000 12:10:39 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent connections"
},
{
"msg_contents": "At 10:46 AM 11/27/00 -0600, Ross J. Reedstrom wrote:\n>Uh, Don?\n>Not all the world's a web page, you know. Thatkind of thinking is _so_\n>mid 90's ;-) Dedicated apps that talk directly the user seem to be making\n>a comeback, due to a number of factors. They can have much cleaner user\n>interfaces, for example.\n\nOf course. But the question's been raised in the context of a web server,\nand I've answered in context.\n\nI've been trying to move the discussion offline to avoid clogging\nthe hackers list with this stuff but some of the messages have escaped\nmy machine with my forgetting to remove pg_hackers from the distribution\nlist. I'll try to be more diligent if the discussion continues.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 27 Nov 2000 10:30:27 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [NOVICE] Re: re : PHP and persistent\n connections"
}
]
|
[
{
"msg_contents": "There's bound to be a better way, but in the NT resource kit there was a\ntool you can use to make any .exe a service.\n\nI have a bash script running under Cygwin as a service here using it.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n\n-----Original Message-----\nFrom: Luis Magaña [mailto:[email protected]]\nSent: Monday, November 20, 2000 5:24 PM\nTo: [email protected]\nSubject: [HACKERS] PostgreSQL as windows 2000 service\n\n\nHi: \n \nWonder if any of you know how to setup a postgreSQL server as a windows 2000\nservice or have a URL or document on how to do it. \n \nThank you \n\n--\nLuis Magaña\nGnovus Networks & Software\nwww.gnovus.com\nTel. +52 (7) 4422425\[email protected]\n",
"msg_date": "Fri, 24 Nov 2000 07:49:57 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PostgreSQL as windows 2000 service"
}
]
|
[
{
"msg_contents": "At 07:49 24/11/00 -0000, Peter Mount wrote:\n>There's bound to be a better way, but in the NT resource kit there was a\n>tool you can use to make any .exe a service.\n\nWithout modifying the postmaster, this is probably the best solution. An NT\nservice has to handle and respond to various events (START, STOP, PAUSE,\nRESUME) as well as be able to install & deinstall itself. This is what the\nNT Res Kit stuff does for you.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 24 Nov 2000 19:34:23 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PostgreSQL as windows 2000 service"
}
]
|
[
{
"msg_contents": "Hi,\n\nI had a problem porting applications from mySQL.\nI can't find info on this in the docs... so mailed the list, sorry for \nmy english.\n\nI create the field names with first letter uppercase, I need this way, \nbecause the result set must have the field names with the correct case \nin PHP.\n\nI would like to be able to do a select which is not case sensitive on \nthe field name.\n\nfor example:\n\nCreate table test (\"CodUtente\" int);\n\nselect CodUtente from test;\nselect codutente from test;\n\non the current tree I got errors on the two selects,\nonly \nselect \"CodUtente\" from test\nwill work.\n\n\nHow can I solve this problem??\nI can't rewrite all the queries.\nIs there a patch somewhere to make pgsql really case insensitive (also \nin this strange case)?\n\n\nthanks in advance\n\nGiuseppe Tanzilli\n\n\n",
"msg_date": "Fri, 24 Nov 2000 16:49:05 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fields Case problem"
}
]
|
[
{
"msg_contents": "<[email protected]> writes:\n> Strnage isn't it????\n\nNo. That's the intended and documented behavior. See the manual, eg,\nhttp://www.postgresql.org/users-lounge/docs/7.0/postgres/syntax525.htm\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 12:05:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug with CREATE TABLE ..... "
},
{
"msg_contents": "Hi,\n\nFirst excuse me for my bad english,\n\nI use postgresql V7.0.2 with linux and I found a strange\nresult with create table.\n\n\nCREATE TABLE \"UTILISATEURS\" (\n\t..\n);\n\nOk no problem, and when I use \\\\dt under psql I see this\nname. But when I write select * from UTILISATEURS, it doesn't\nwork. If I create a second table\nCREATE TABLE \"utilisateurs\" (\n...\n);\nand if I write select * from UTILISATEURS it works but postgresql\nrefers to table \"utilisateurs\". And if I type\nselect * from \"UTILISATEURS\" it works and refers to\ntable UTILISATEURS\n\nStrange isn't it????\n\nThanks,\n\nBest regards\n",
"msg_date": "Fri, 24 Nov 2000 18:45:48 +0100 (CET)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Bug with CREATE TABLE ....."
}
]
|
[
{
"msg_contents": "... and I am not going to allow 7.1 to go out without a fix for this\nclass of problems. I'm fed up ;-)\n\nAs near as I can tell from the setlocale() man page, the only locale\ncategories that are really hazardous for us are LC_COLLATE and LC_CTYPE;\nthe other categories like LC_MONETARY affect only I/O routines, not\nsort ordering, and so cannot result in corrupt indices.\n\nI propose, therefore, that in an --enable-locale installation, initdb\nshould save its values for LC_COLLATE and LC_CTYPE in pg_control, and\nbackend startup should restore these settings from pg_control. Other\nlocale categories will continue to be acquired from the postmaster\nenvironment. This will eliminate the class of bugs associated with\nindex corruption from not always starting the postmaster with the same\nlocale settings, while not forcing people to do an initdb to change\nharmless settings.\n\nAlso, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\non recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\nif it finds that setting. (Are there any platforms where there are\nnon-bogus differences between the two?)\n\nFinally, until we have a really bulletproof solution for LIKE indexing\noptimization, I will disable that optimization if --enable-locale is\ncompiled *and* LC_COLLATE is not C. Better to get \"LIKE is slow\" bug\nreports than \"LIKE gives wrong answers\" bug reports.\n\nComments? Anyone think that initdb should lock down more categories\nthan just these two?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 16:20:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Tom Lane writes:\n\n> I propose, therefore, that in an --enable-locale installation, initdb\n> should save its values for LC_COLLATE and LC_CTYPE in pg_control, and\n> backend startup should restore these settings from pg_control.\n\nNote that when these are unset there might still be a \"catch-all\" locale\nvalue coming from the LANG env. var. (or LC_ALL on some systems).\n\n> Also, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\n> on recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\n> if it finds that setting. (Are there any platforms where there are\n> non-bogus differences between the two?)\n\nThere *should* be differences and it is definitely not okay to mix them\nup.\n\n> Finally, until we have a really bulletproof solution for LIKE indexing\n> optimization, I will disable that optimization if --enable-locale is\n> compiled *and* LC_COLLATE is not C. Better to get \"LIKE is slow\" bug\n> reports than \"LIKE gives wrong answers\" bug reports.\n\n(C or POSIX)\n\nI have a question about that optimization: If you have X LIKE 'foo%',\nwouldn't it be enough to use X >= 'foo' (which certainly works for any\nlocale I've ever heard of)? Why do you need the X <= 'foo???' at all?\n\n> Comments? Anyone think that initdb should lock down more categories\n> than just these two?\n\nNot sure whether LC_CTYPE is necessary.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 24 Nov 2000 23:13:44 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I propose, therefore, that in an --enable-locale installation, initdb\n>> should save its values for LC_COLLATE and LC_CTYPE in pg_control, and\n>> backend startup should restore these settings from pg_control.\n\n> Note that when these are unset there might still be a \"catch-all\" locale\n> value coming from the LANG env. var. (or LC_ALL on some systems).\n\nActually, what I intend to do while writing pg_control is read the\ncurrent effective values via \"setlocale(category, NULL)\" --- then it\nshouldn't matter where they came from, no?\n\nThis brings up a question I had just come across while doing further\nresearch: backend/main/main.c does \n\n#ifdef USE_LOCALE\n    setlocale(LC_CTYPE, \"\");    /* take locale information from an\n                                 * environment */\n    setlocale(LC_COLLATE, \"\");\n    setlocale(LC_MONETARY, \"\");\n#endif\n\nwhich seems a little odd --- why not setlocale(LC_ALL, \"\") ?  Karel\nZak said in a thread around 8/15/00 that this is deliberate, but\nI don't quite see why.\n\n>> Also, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\n>> on recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\n>> if it finds that setting.  (Are there any platforms where there are\n>> non-bogus differences between the two?)\n\n> There *should* be differences and it is definitely not okay to mix them\n> up.\n\nI have now received positive proof that en_US sort order on RedHat is\nbroken.  For example, it asserts\n\t'/root/' < '/root0'\nbut\n\t'/root/t' > '/root0'\nI defy you to find anyone in the US who will say that that is a\nreasonable definition of string collation. \n\nOf course, if you prefer the notion of disabling LIKE optimization\non a default RedHat installation, we can go ahead and accept en_US.\nBut I say it's broken and we shouldn't use it.\n\n>> Finally, until we have a really bulletproof solution for LIKE indexing\n>> optimization, I will disable that optimization if --enable-locale is\n>> compiled *and* LC_COLLATE is not C.  Better to get \"LIKE is slow\" bug\n>> reports than \"LIKE gives wrong answers\" bug reports.\n\n> (C or POSIX)\n\nDo you think there are cases where setlocale(,NULL) will give back\n\"POSIX\" rather than \"C\"?  We can certainly test for either.\n\n> I have a question about that optimization: If you have X LIKE 'foo%',\n> wouldn't it be enough to use X >= 'foo' (which certainly works for any\n> locale I've ever heard of)?  Why do you need the X <= 'foo???' at all?\n\nBecause you need a two-sided index constraint, not a one-sided one.\nOtherwise you're probably better off doing a sequential scan ---\nscanning 50% of the table (on average) via an index will be slower\nthan sequential.\n\n>> Comments?  Anyone think that initdb should lock down more categories\n>> than just these two?\n\n> Not sure whether LC_CTYPE is necessary.\n\nI'm not either, but I'm afraid to leave it float...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 17:31:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane writes:\n\n> >> Also, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\n> >> on recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\n> >> if it finds that setting.  (Are there any platforms where there are\n> >> non-bogus differences between the two?)\n> \n> > There *should* be differences and it is definitely not okay to mix them\n> > up.\n> \n> I have now received positive proof that en_US sort order on RedHat is\n> broken.  For example, it asserts\n> \t'/root/' < '/root0'\n> but\n> \t'/root/t' > '/root0'\n> I defy you to find anyone in the US who will say that that is a\n> reasonable definition of string collation. \n\nThat's certainly very odd, but Unixware does this too, so it's probably\nsome sort of standard.  And a few other European/Latin locales I tried\nalso do this.\n\nBut here's another example of why C and en_US are different.\n\npeter ~$ cat foo\nDelta\nécrire\nBeta\nalpha\ngamma\npeter ~$ LC_COLLATE=C sort foo\nBeta\nDelta\nalpha\ngamma\nécrire\npeter ~$ LC_COLLATE=en_US sort foo\nalpha\nBeta\nDelta\nécrire\ngamma\n\nThe C locale sorts strictly by character code.  But in the en_US locale\nthe accented letter is put into a \"natural\" position, and the upper and\nlower case letters are grouped together.  Intuitively, the en_US order is\nin which you'd look up things in a dictionary.\n\nThis also explains (to me at least) the example you have above: When you\nlook up words in a dictionary you ignore \"funny characters\".  My American\nHeritage Dictionary explains:\n\n: Entries are listed in alphabetical order without taking into account\n: spaces or hyphens.\n\nSo at least this concept isn't that far out.\n\n\n> Do you think there are cases where setlocale(,NULL) will give back\n> \"POSIX\" rather than \"C\"?  We can certainly test for either.\n\nI know there are (old) systems that reject LANG=C as invalid locale, but I\ndon't know what setlocale returns there.\n\n-- \nPeter Eisentraut      [email protected]       http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 00:18:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> I have now received positive proof that en_US sort order on RedHat is\n>> broken.  For example, it asserts\n>> '/root/' < '/root0'\n>> but\n>> '/root/t' > '/root0'\n>> I defy you to find anyone in the US who will say that that is a\n>> reasonable definition of string collation. \n\n> That's certainly very odd, but Unixware does this too, so it's probably\n> some sort of standard.  And a few other European/Latin locales I tried\n> also do this.\n\nI don't have very many platforms to try, but HPUX does not think that\nen_US sorts that way.  It may well be standard in some European locales,\nbut there's a reason why C locale acts the way it does: that behavior is\nthe accepted one on this side of the pond.  Sufficiently well accepted\nthat it was quite a few years before American programmers noticed there\nwas any reason to behave differently ;-)\n\n> This also explains (to me at least) the example you have above: When you\n> look up words in a dictionary you ignore \"funny characters\".  My American\n> Heritage Dictionary explains:\n> : Entries are listed in alphabetical order without taking into account\n> : spaces or hyphens.\n\nThat's workable for an English dictionary, where symbols other than\nletters are (a) rare and (b) usually irrelevant to the meaning.  Do\nyou think anyone would tolerate treating \"/\" as a noise character in a\nlisting of Unix filenames, to take one counterexample?  Unfortunately,\nen_US does so.\n\nThis'd be less of a problem if we had support for per-column charset\nand locale specifications.  There'd be no objection to sorting a column\nthat contains only (or mostly) words like that. But I've got strong\ndoubts that the average user of a default RedHat installation expects\n*all* data to get sorted that way, or that he wants us to honor a\ndefault that he didn't ask for to the extent of disabling LIKE\noptimization to make it work.\n\nI suppose we could do it that way and add a FAQ entry:\n\n\tQ. Why are my LIKE queries so slow?\n\n\tA. Change your locale to C, then dump, initdb, reload.\n\nBut somehow I don't think that'll go over well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 18:45:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> that contains only (or mostly) words like that. But I've got strong\n> doubts that the average user of a default RedHat installation expects\n> *all* data to get sorted that way, or that he wants us to honor a\n> default that he didn't ask for to the extent of disabling LIKE\n> optimization to make it work.\n\nThe change in collation for RedHat >6.0 is deliberate -- and conforms to\nISO standards. There was noise in an unmentionable list at an\nunmentionable time about why it was this way -- and the result was a\nseesaw -- it was almost turned back to 'conventional' collation, but was\nthen put back into ISO-conforming shape.\n\nAsk Trond ([email protected]) about it.\n \n> I suppose we could do it that way and add a FAQ entry:\n> \n> Q. Why are my LIKE queries so slow?\n> \n> A. Change your locale to C, then dump, initdb, reload.\n> \n> But somehow I don't think that'll go over well...\n\nMethinks you are very right. Very right.\n\nI am not at all happy about the 'broken' RedHat locale -- the quick and\ndirty solution is to remove or rename '/etc/sysconfig/i18n' -- but that\ndoesn't cure the root issue.\n\nOh, and to make matters that much worse, on a RedHat system it doesn't\nmatter if you build with or without --enable-locale -- locale support is\nin the libc used, and locale support gets used regardless of what you\nselect on the configure line :-(. Been there; distributed that in the\n6.5.x 'nl' RPM series.\n\nBut it sounds to me like you're on the right track, Tom.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 24 Nov 2000 19:07:06 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Oh, and to make matters that much worse, on a RedHat system it doesn't\n> matter if you build with or without --enable-locale -- locale support is\n> in the libc used, and locale support gets used regardless of what you\n> select on the configure line :-(.\n\nI don't follow. Of course locale support is in libc; where else would\nit be? But without --enable-locale, we will never call setlocale().\nSurely even RedHat is not so broken that they default to non-C locale\nin a program that has not called setlocale()? That directly contravenes\nthe letter of the ISO C standard, IIRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 19:14:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> I am not at all happy about the 'broken' RedHat locale -- the quick and\n> dirty solution is to remove or rename '/etc/sysconfig/i18n' -- but that\n> doesn't cure the root issue.\n\nActually, that suggestion points out that just nailing down LC_COLLATE\nat initdb time isn't sufficient, at least not on systems where libc's\nlocale behavior depends on user-alterable external files. Even with\nmy proposed initdb change in place, a user could still corrupt his\nindices by removing or replacing /etc/sysconfig/i18n. Ugh. Not sure\nI see a way around this, though, short of dumping libc and bringing\nalong our own locale support.\n\nOf course, we might end up doing that anyway to support column-specific\nlocales. I suspect setlocale() is far too slow on many implementations\nto be executed again for every string comparison :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 19:20:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Possible compromise: let initdb accept en_US, but have it spit out a\nwarning message:\n\nNOTICE: initializing database with en_US collation order.\nIf you're not certain that's what you want, then it's probably not what\nyou want. We recommend you set LC_COLLATE to \"C\" and re-initdb.\nFor more information see <appropriate place in admin guide>\n\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 19:32:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Oh, and to make matters that much worse, on a RedHat system it doesn't\n> > matter if you build with or without --enable-locale -- locale support is\n> > in the libc used, and locale support gets used regardless of what you\n> > select on the configure line :-(.\n \n> But without --enable-locale, we will never call setlocale().\n> Surely even RedHat is not so broken that they default to non-C locale\n> in a program that has not called setlocale()? That directly contravenes\n> the letter of the ISO C standard, IIRC.\n\nI just know this -- regression tests failed the same way with the 'nl'\nnon-locale RPM's as they did (and do) with the regular locale-enabled\nRPM's. Collation was the same, regardless of the --enable-locale\nsetting. I got lots of 'bug' reports about the RPM's failing\nregression, giving an unexpected sort order (see the archives -- the\nbest model thread's start post is:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-12/msg00587.html). \nI was pretty ignorant back then of some of these issues :-).\n\nApparently RedHat is _that_ broken in that respect (among others). \nThankfully some of RedHat's more egregious faults have been fixed in\n7.....\n\nBut then again what Unix isn't broken in some respect :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 24 Nov 2000 19:41:46 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Collation was the same, regardless of the --enable-locale\n> setting. I got lots of 'bug' reports about the RPM's failing\n> regression, giving an unexpected sort order (see the archives -- the\n> best model thread's start post is:\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1999-12/msg00587.html). \n\nHmm. I reviewed that thread and found this comment from you:\n\n: > Any differences in the environment variables maybe?\n: \n: In a nutshell, yes. /etc/sysconfig/i18n on the fresh install sets LANG,\n: LC_ALL, and LINGUAS all to be \"en_US\". The upgraded machine at home doesn't\n: have an /etc/sysconfig/i18n -- nor does the RH 6.0 box.\n\nThat makes it sounds like /etc/sysconfig/i18n is not what I'd assumed\n(namely, a data file read at runtime by libc) but only a bit of shell\nscript that sets exported environment variables during bootup. I don't\nhave that file here, so could you enlighten me as to exactly what it\nis/does?\n\nIf it is just setting some default environment variables for the system,\nthen it isn't anything we can't deal with by forcing setlocale() at\npostmaster start. That'd make me feel a lot better ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 20:27:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Possible compromise: let initdb accept en_US, but have it spit out a\n>> warning message:\n\n> I certainly don't like treating en_US specially, when in fact all locales\n> are affected by this.\n\nWell, my thought was that another locale, say en_FR, would be far more\nlikely to be something that the system's user had explicitly chosen to\nuse at some point, and thus there's less reason to suppose that he\ndoesn't know what he's getting into. However, I have no objection to\nprinting such a complaint whenever the locale is one that will defeat\nLIKE optimization --- how does that sound?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 20:36:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane writes:\n\n> Possible compromise: let initdb accept en_US, but have it spit out a\n> warning message:\n> \n> NOTICE: initializing database with en_US collation order.\n> If you're not certain that's what you want, then it's probably not what\n> you want. We recommend you set LC_COLLATE to \"C\" and re-initdb.\n> For more information see <appropriate place in admin guide>\n\nI certainly don't like treating en_US specially, when in fact all locales\nare affected by this. You could print a general notice that the database\nsystem will be initialized with a (non-C, non-POSIX) locale and that this\nmay/will affect the performance in certain cases. Maybe a\n--disable-locale switch to initdb as well?\n\nBut IMHO we're not in the business of nitpicking or telling people how to\nwrite, install, or use their operating systems when the issue is not a\nshow-stopper type, but really an aesthetics/convenience issue.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 02:36:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Collation was the same, regardless of the --enable-locale\n> > setting. I got lots of 'bug' reports about the RPM's failing\n \n> Hmm. I reviewed that thread and found this comment from you:\n \n> : In a nutshell, yes. /etc/sysconfig/i18n on the fresh install sets LANG,\n> : LC_ALL, and LINGUAS all to be \"en_US\". The upgraded machine at home doesn't\n> : have an /etc/sysconfig/i18n -- nor does the RH 6.0 box.\n \n> That makes it sounds like /etc/sysconfig/i18n is not what I'd assumed\n> (namely, a data file read at runtime by libc) but only a bit of shell\n> script that sets exported environment variables during bootup. I don't\n> have that file here, so could you enlighten me as to exactly what it\n> is/does?\n\nOh, yes, sorry -- /etc/sysconfig/i18n is read during sysinit,\nimmediately before starting swap (IOW, it's only read the once). On my\nRH 6.2 box, it is the following line:\n\n----- /etc/sysconfig/i18n -------\nLANG=\"en_US\"\n------------- EOF ---------------\n\nIt's the same on a fresh RedHat 7.0 install.\n \n> If it is just setting some default environment variables for the system,\n> then it isn't anything we can't deal with by forcing setlocale() at\n> postmaster start. That'd make me feel a lot better ;-)\n\nThen you need to feel alot better :-).....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 24 Nov 2000 20:55:16 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "At 07:32 PM 11/24/00 -0500, Tom Lane wrote:\n>Possible compromise: let initdb accept en_US, but have it spit out a\n>warning message:\n>\n>NOTICE: initializing database with en_US collation order.\n>If you're not certain that's what you want, then it's probably not what\n>you want. We recommend you set LC_COLLATE to \"C\" and re-initdb.\n>For more information see <appropriate place in admin guide>\n>\n>Thoughts?\n\nAre you SURE you want to use en_US collation? [no]\n\n(ask the question, default to no?)\n\nYes, a question in initdb is ugly, this whole thing is ugly.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 24 Nov 2000 18:51:36 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Are you SURE you want to use en_US collation? [no]\n> (ask the question, default to no?)\n\n> Yes, a question in initdb is ugly, this whole thing is ugly.\n\nA question in initdb won't fly for RPM installations, since the RPMs\ntry to do initdb themselves (or am I wrong about that?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 22:07:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> Don Baccus <[email protected]> writes:\n> > Are you SURE you want to use en_US collation? [no]\n> > (ask the question, default to no?)\n \n> > Yes, a question in initdb is ugly, this whole thing is ugly.\n \n> A question in initdb won't fly for RPM installations, since the RPMs\n> try to do initdb themselves (or am I wrong about that?)\n\nThe RPMset initdb's the first time the initscript is run to start\npostmaster, not at installation time.\n\nA command-line argument to initdb would suffice to override -- maybe a\n'--initlocale' parameter?? Now, what sort of default for\n--initlocale.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 24 Nov 2000 22:34:42 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> A command-line argument to initdb would suffice to override -- maybe a\n> '--initlocale' parameter??\n\nHardly need one, when setting LANG or LC_ALL will do just as well.\n\n> Now, what sort of default for --initlocale.....\n\nI think your complaints about RedHat's default are right back in your\nlap ;-). Do you want to ignore their default, or not?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 22:44:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> I think your complaints about RedHat's default are right back in your\n> lap ;-). Do you want to ignore their default, or not?\n\nYes, I want to ignore their default. This problem is more than just\ncosmetic, thanks to the bugs that sparked this thread.\n\nI can do things in the initscript if necessary. That only helps the\nRPM's, though, not those from-source RedHat installations.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 24 Nov 2000 23:07:58 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Lamar Owen writes:\n\n> Yes, I want to ignore their default.\n\nIf you want to do that then the infinitely better solution is to compile\nwithout locale support in the first place. (Make the locale-enabled\nserver a separate package.) Alternatively, set the locale of the postgres\nuser to POSIX.\n\n> I can do things in the initscript if necessary. That only helps the\n> RPM's, though, not those from-source RedHat installations.\n\nThe subject of this whole discussion was IIRC the \"default Red Hat\ninstallation\". Those who compile from source can always make more\ninformed decisions about what features to enable.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 16:43:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Tom Lane writes:\n\n> > I certainly don't like treating en_US specially, when in fact all locales\n> > are affected by this.\n> \n> Well, my thought was that another locale, say en_FR, would be far more\n> likely to be something that the system's user had explicitly chosen to\n> use at some point,\n\nIIRC, the default locale is chosen during the installation process of Red\nHat, so any locale is explicitly chosen. If Red Hat does not provide a\nmeans to set the C locale as the default, that is Red Hat's fault. But\nthen it should also be Red Hat's job (and Red Hat's decision) to install\nPostgreSQL in a certain way or other to account for that. Compiles from\nsource don't count here, those users enabled locale explicitly anyway.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 16:49:25 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > Yes, I want to ignore their default.\n \n> If you want to do that then the infinitely better solution is to compile\n> without locale support in the first place. (Make the locale-enabled\n> server a separate package.) Alternatively, set the locale of the postgres\n> user to POSIX.\n\nOk, let me repeat -- the '--enable-locale' setting will not affect the\ncollation sequence problem on RedHat. If you set PostgreSQL to use\nlocale, it uses it. If you configure PostgreSQL to not use locale, the\ncollation set by LANG, LC_ALL, or LC_COLLATE is _STILL_ honored, thanks\nto the libc used.\n\nDuring the 6.5.x cycle, I built, for performance reasons, RPM's without\nlocale/multibyte support. These were referred to as the 'nl' RPM's. \nPlease see the thread I referred to to see how running with the\n'non-locale' RPM's did not in the least solve the problem or change the\nsymptoms.\n\nSetting the locale environment for the postmaster process is a\npossibility, but I'll have to do some testing to see if there are any\ninteraction problems. And this still only helps RPM users, as my\ninitscript is not part of the canonical tarball.\n \n> > I can do things in the initscript if necessary. That only helps the\n> > RPM's, though, not those from-source RedHat installations.\n \n> The subject of this whole discussion was IIRC the \"default Red Hat\n> installation\". Those who compile from source can always make more\n> informed decisions about what features to enable.\n\nThose who compile from source and configure for no locale support will\nget a nasty surprise on RedHat 6.1 and later.\n\nEven though a different library function is used to do the comparison\nfor sorts and orderings, libc (in particular, glibc 2.1) _still_ uses\nthe LC_ALL, LANG, or LC_COLLATE setting to determine collation. For the\n--enable-locale case, the function used is strcoll(); if not,\nstrncmp(). See varstr_cmp() in src/backend/utils/adt/varlena.c.\n\nIOW, it is advisable to always enable locale on RedHat, as then you can\nat least know what to expect. And you then will still get unexpected\nresults unless you do some locale work -- and, unfortunately, RedHat\n6.x's locale documentation was sketchy at best; nonexistent at worst. I\nhaven't seen RedHat 7's printed documentation yet, so I can't comment on\nit.\n\nFor reference on this issue, please see the archives, in particular the\nfollowing messages:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-12/msg00678.html\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-12/msg00685.html\n(where I got the function names above....)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 25 Nov 2000 14:34:00 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Lamar Owen writes:\n\n> Ok, let me repeat -- the '--enable-locale' setting will not affect the\n> collation sequence problem on RedHat. If you set PostgreSQL to use\n> locale, it uses it. If you configure PostgreSQL to not use locale, the\n> collation set by LANG, LC_ALL, or LC_COLLATE is _STILL_ honored, thanks\n> to the libc used.\n\nWell, I'm looking at Red Hat 7.0 here and the locale variables are most\ncertainly getting ignored in the default compile. Moreover, at no point\ndid strncmp() in glibc behave as you claim. You can look at it yourself\nhere:\n\nhttp://subversions.gnu.org/cgi-bin/cvsweb/glibc/sysdeps/generic/strncmp.c\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 25 Nov 2000 22:44:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Lamar Owen writes:\n>> Ok, let me repeat -- the '--enable-locale' setting will not affect the\n>> collation sequence problem on RedHat. If you set PostgreSQL to use\n>> locale, it uses it. If you configure PostgreSQL to not use locale, the\n>> collation set by LANG, LC_ALL, or LC_COLLATE is _STILL_ honored, thanks\n>> to the libc used.\n\n> Well, I'm looking at Red Hat 7.0 here and the locale variables are most\n> certainly getting ignored in the default compile. Moreover, at no point\n> did strncmp() in glibc behave as you claim.\n\nI'm having a hard time believing Lamar's recollection, also. I wonder\nif there could have been some other factor involved? One possible line\nof thought: a non-locale-enabled compilation, installed to replace a\nlocale-enabled one, would behave rather inconsistently if run on the\nsame database used by the locale-enabled version (since indexes will\nstill be in locale order). Depending on what tests you did, you might\nwell think that it was still running locale-enabled.\n\nBTW: as of my commits of an hour ago, the above failure mode is no\nlonger possible, since a non-locale-enabled Postgres will now refuse to\nstart up in a database that shows any locale other than 'C' in pg_control.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 17:14:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "> I'm having a hard time believing Lamar's recollection, also. I wonder\n> if there could have been some other factor involved? One possible line\n> of thought: a non-locale-enabled compilation, installed to replace a\n> locale-enabled one, would behave rather inconsistently if run on the\n> same database used by the locale-enabled version (since indexes will\n> still be in locale order). Depending on what tests you did, you might\n> well think that it was still running locale-enabled.\n> \n> BTW: as of my commits of an hour ago, the above failure mode is no\n> longer possible, since a non-locale-enabled Postgres will now refuse to\n> start up in a database that shows any locale other than 'C' in pg_control.\n\nDo locale-enabled compiles have the LIKE optimization disabled always?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 25 Nov 2000 18:03:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Do locale-enabled compiles have the LIKE optimization disabled always?\n\nNo. They do a run-time check to see what locale is active.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 18:13:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Also, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\n> on recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\n> if it finds that setting. \n\nIt does not misbehave in glibc (it's not Red Hat specific).\nBasically, glibc is the old\n\n\n From a discussion on a semi-internal list, written by Alan Cox:\n\n************************************************************************\nI read the ISO doc (god Its boring)\n\nOk\n\nUlrich is right for the spec. Its the official correct filing order\nfor more\nthan just in computing\n\nI think the right answer maybe this\n\nDefault to ISOblah including sort remaining sorting AbBb..\nDocument this and also how to switch just the collation series to Unix\nstyle\nin the README files and docs that come with the release (like we\ndocumented\nhow to turn off color ls\n\n\nUltimately this comes down to:\n \nUnix behaviour since 197x versus librarians and others since\nconsiderably\nearlier. We are breaking Unix behaviour but I can now sort of\nappreciate\nthe thinking behind this. \n\n************************************************************************\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "Sun, 26 Nov 2000 16:12:00 -0500 (EST)",
"msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "[email protected] (Trond Eivind Glomsrød) writes:\n\n> Tom Lane <[email protected]> writes:\n> \n> > Also, since \"LC_COLLATE=en_US\" seems to misbehave rather spectacularly\n> > on recent RedHat releases, I propose that initdb change \"en_US\" to \"C\"\n> > if it finds that setting. \n> \n> It does not misbehave in glibc (it's not Red Hat specific).\n> Basically, glibc is the old\n\nOops, here's the rest:\n\nglibc with the C/POSIX locale will make things work the old computer\nway:\nAB...Zab..z\n\nWith en_US, it works the iso way:\nA/a B/b ... Z/z \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "Sun, 26 Nov 2000 16:21:07 -0500 (EST)",
"msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
},
{
"msg_contents": "\nOn Fri, 24 Nov 2000, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> I propose, therefore, that in an --enable-locale installation, initdb\n> >> should save its values for LC_COLLATE and LC_CTYPE in pg_control, and\n> >> backend startup should restore these settings from pg_control.\n> \n> > Note that when these are unset there might still be a \"catch-all\" locale\n> > value coming from the LANG env. var. (or LC_ALL on some systems).\n> \n> Actually, what I intend to do while writing pg_control is read the\n> current effective values via \"setlocale(category, NULL)\" --- then it\n> shouldn't matter where they came from, no?\n> \n> This brings up a question I had just come across while doing further\n> research: backend/main/main.c does \n> \n> #ifdef USE_LOCALE\n> setlocale(LC_CTYPE, \"\"); /* take locale information from an\n> * environment */\n> setlocale(LC_COLLATE, \"\");\n> setlocale(LC_MONETARY, \"\");\n> #endif\n> \n> which seems a little odd --- why not setlocale(LC_ALL, \"\") ? Karel\n> Zak said in a thread around 8/15/00 that this is deliberate, but\n> I don't quite see why.\n\n LC_ALL set too:\n\n\tLC_NUMERIC and LC_TIME\n\n we in backend use some locale sensitive routines like strftime() and\nsprintf() (and more?).\n\n The timeofday() make output via strftime() if you set LC_ALL, a query \nlike:\n\tselect timeofday()::timestamp;\n\nwill (IMHO) crashed.\n\n With float numbers and decimal point I not sure. If *all* numbers will\nlike locale-setting and all routines and utils will expect correct\nlocale-like decimal point we probably not see some problem. But what\nwill happen in client program if this FE not will known anything about\ncurrent BE setting? BE send locale decimal point (czech) \"123,456\" and\nFE is set to \"en_US\" - event of client's atod() is \"123.000\"....\n\n And etc...etc...\n\n We need *robust* BE<->FE correct and columns specific locale support, \nwithout this we can use locale sensitive to_char() for numbers and pray \nand hope that everything in the PG is right :-)\n\n we need (TODO?):\n\n\t- columns specific locale setting\n\t- FE routine for obtain column locale setting, like\n\t\tPQflocale(const PGresult *res, int field_index);\n\t- on-the-fly numbers (and date/time?!) recoding if BE and\n\t FE use different locale\n\t- be-build index for new locale setting\n\t- fast locale information for date/time and support for\n\t locale-sensitive date/time parsing (IMHO almost impossible\n\t write this)\n\t\n\t... etc.\n\n too much long way to LC_ALL.\n\n\t\t\t\t\tKarel\n\nPS. IMHO current PG locale setting is not bad. I know bigger problems\n an example not-existing error codes and thread ignorant FE lib. With\n these problems is not possible write good large and robust FE. \n\n\n\n\n\n\n \n\n\n\n",
"msg_date": "Mon, 27 Nov 2000 11:09:30 +0100 (CET)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Hi,\n\n...\n> LC_NUMERIC and LC_TIME\n...\n> The timeofday() make output via strftime() if you set LC_ALL, a query\n> like: select timeofday()::timestamp;\n\nActually *I would* expect it to return a localized string. But then again I\nalways expect BE to use '.' as decimal point ( I must be damaged :-/ ).\n\n...\n> We need *robust* BE<->FE correct and columns specific locale support,\n\nI agree :-) And the easiest (and only robust) way would be to define which\nchar is decimal point, how a date/time must be formatted to be accepted on an\nINSERT or SELECT. And leave the job of localization to the FE. (I do not\nknow what SQL9_ says about this, and frankly I do not care.)\n\nAnd then to sorting (and compare) of strings. PostgreSQL should decide on\none charset (UTF8, UTF16) and expect the clients (FE) to enforce that. Yes\nsome sorting would be wrong but in most cases it would be correct.\nPostgreSQL will never be able to do correct indexing in a mixed locale\nenvironment if it does not have one index tree (hash or whatever) per locale.\nBut with UTF8 it could do a good (if not perfect) job.\n\nSomething like this for sorting:\n\tnoise-chars-in-any-order..0..1..A..a..e..�..E..�..U..�..u..�..Z..z..�..�\nAnd as time/date/timestamp format:\n\t2000.11.27 12:55.01.000000\nwould be a good compromise.\n\nThis maybe feels like moving the trouble from BE to FE, but *I think* this\nis the only solution that would always work (if not perfectly...). And this\nwould remove all the problems with the \"--enable-locale which locale to use\"\nproblem. Also if someone would want to connect with a new unknown locale it\nwould work without changes on the BE side.\n\nAnd to the erroneous results from \"SELECT * FROM myTable where strString >\n'abc'\". This suggestion would not solve all of those, but it would solve\nmost of them. And *I think* any compare but = and != on a string is prone to\nerrors (even as an optimization of LIKE).\n\n// Jarmo\n\n",
"msg_date": "Mon, 27 Nov 2000 13:24:06 +0100",
"msg_from": "\"Jarmo Paavilainen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "SV: OK, that's one LOCALE bug report too many... "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Lamar Owen writes:\n> >> Ok, let me repeat -- the '--enable-locale' setting will not affect the\n> >> collation sequence problem on RedHat. If you set PostgreSQL to use\n> >> locale, it uses it. If you configure PostgreSQL to not use locale, the\n> >> collation set by LANG, LC_ALL, or LC_COLLATE is _STILL_ honored, thanks\n> >> to the libc used.\n \n> > Well, I'm looking at Red Hat 7.0 here and the locale variables are most\n> > certainly getting ignored in the default compile. Moreover, at no point\n> > did strncmp() in glibc behave as you claim.\n\nTry on RH 6.x. It is possible RH 7 has this behavior fixed -- I have\nnot built _any_ no-locale RPM's since 6.5.3 -- and the last OS I built\nthat on was RH 6.2. Amend my statement above to read 'collation\nsequence problem on RedHat 6.x, where x>0.'\n \n> I'm having a hard time believing Lamar's recollection, also.\n\nIt's in the archives. Not just my (often bad) recollections..... :-)\n\nOf course, RH 7.0's behavior and RH 6.1's behavior (which was the\nversion I reported having the problem in the archive message thread) may\nnot be congruent.\n\n> I wonder\n> if there could have been some other factor involved? One possible line\n> of thought: a non-locale-enabled compilation, installed to replace a\n> locale-enabled one, would behave rather inconsistently if run on the\n> same database used by the locale-enabled version (since indexes will\n> still be in locale order). Depending on what tests you did, you might\n> well think that it was still running locale-enabled.\n\nNo index was involved. The simple test script referred to in that\nthread was all that was used. I even went through an initdb cycle for\nit. However, I am willing to test again with freshly built 'no-locale'\nRPM's on RH 6.2 and RH7 to see if there is need.\n\nAll I need to do now is to make sure that the initscript starts\npostmaster with the 'C' locale if the locale is set to 'en_US'. Or is\nthat _really_ what we want, here?\n \n> BTW: as of my commits of an hour ago, the above failure mode is no\n> longer possible, since a non-locale-enabled Postgres will now refuse to\n> start up in a database that shows any locale other than 'C' in pg_control.\n\nGood.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 27 Nov 2000 13:35:02 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, that's one LOCALE bug report too many..."
}
]
|
[
{
"msg_contents": "This code in psql/command.c allows *any* system user to place a\npredictably named symbolic link in /tmp and use it to alter/destroy\nfiles owned by the user running psql. (tested - postgresql 7.0.2).\n\nAll the information a potential attacker would need are available via a\nsimple 'ps'.\n\nIt might (untested) also allow an another user to exploit the race\nbetween the closing of the file by the editor and the re-reading of its\ncontents to execute arbitrary SQL commands.\n\nIMHO these files, if they must be created in /tmp should at least be\ncreated O_EXCL, but there are still editor vulnerabilities with opening\nany files in a world writeable directory (see recent joe Vulnerability:\nhttp://lwn.net/2000/1123/a/sec-joe.php3)\n\nMy system is RedHat 6.2 on an i686, with Postgresql 7.0.2 but the same\ncode currently exists in CVS (or at least CVS-web).\n\nI am not subscribed to this list, so please CC me for replies. (Also\ntell me if there is a more appropriate forum for this, but\nwww.postgresql.org doesn't have a listed security issue address).\n-- \nAndrew Bartlett\[email protected]\n",
"msg_date": "Sat, 25 Nov 2000 11:28:42 +1100",
"msg_from": "Andrew Bartlett <[email protected]>",
"msg_from_op": true,
"msg_subject": "SECURITY: psql allows symlink games in /tmp "
},
{
"msg_contents": "Andrew Bartlett wrote:\n> \n> This code in psql/command.c allows *any* system user to place a\n> predictably named symbolic link in /tmp and use it to alter/destroy\n> files owned by the user running psql. (tested - postgresql 7.0.2).\n> \n> All the information a potential attacker would need are available via a\n> simple 'ps'.\n> \n> It might (untested) also allow an another user to exploit the race\n> between the closing of the file by the editor and the re-reading of its\n> contents to execute arbitrary SQL commands.\n> \n> IMHO these files, if they must be created in /tmp should at least be\n> created O_EXCL, but there are still editor vulnerabilities with opening\n> any files in a world writeable directory (see recent joe Vulnerability:\n> http://lwn.net/2000/1123/a/sec-joe.php3)\n> \n> My system is RedHat 6.2 on an i686, with Postgresql 7.0.2 but the same\n> code currently exists in CVS (or at least CVS-web).\n> \n> I am not subscribed to this list, so please CC me for replies. (Also\n> tell me if there is a more appropriate forum for this, but\n> www.postgresql.org doesn't have a listed security issue address).\n> --\n> Andrew Bartlett\n> [email protected]\n\nSorry, forgot to inlude the offending code....\n\n(This is part of do_edit, called from edit_file and the \\e query buffer\nediting fuction)\n\n if (filename_arg)\n fname = filename_arg;\n\n else\n {\n /* make a temp file to edit */\n#ifndef WIN32\n mode_t oldumask;\n const char *tmpdirenv = getenv(\"TMPDIR\");\n\n sprintf(fnametmp, \"%s/psql.edit.%ld.%ld\",\n tmpdirenv ? tmpdirenv : \"/tmp\",\n (long) geteuid(), (long) getpid());\n#else\n GetTempFileName(\".\", \"psql\", 0, fnametmp);\n#endif\n fname = (const char *) fnametmp;\n\n#ifndef WIN32\n oldumask = umask(0177);\n#endif\n stream = fopen(fname, \"w\");\n#ifndef WIN32\n umask(oldumask);\n#endif\n\n \n-- \nAndrew Bartlett\[email protected]\n",
"msg_date": "Sat, 25 Nov 2000 11:42:02 +1100",
"msg_from": "Andrew Bartlett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SECURITY: psql allows symlink games in /tmp"
},
{
"msg_contents": "Thanks for the pointer. Here is a diff to fix the problem. How does it\nlook to you?\n\n> This code in psql/command.c allows *any* system user to place a\n> predictably named symbolic link in /tmp and use it to alter/destroy\n> files owned by the user running psql. (tested - postgresql 7.0.2).\n> \n> All the information a potential attacker would need are available via a\n> simple 'ps'.\n> \n> It might (untested) also allow an another user to exploit the race\n> between the closing of the file by the editor and the re-reading of its\n> contents to execute arbitrary SQL commands.\n> \n> IMHO these files, if they must be created in /tmp should at least be\n> created O_EXCL, but there are still editor vulnerabilities with opening\n> any files in a world writeable directory (see recent joe Vulnerability:\n> http://lwn.net/2000/1123/a/sec-joe.php3)\n> \n> My system is RedHat 6.2 on an i686, with Postgresql 7.0.2 but the same\n> code currently exists in CVS (or at least CVS-web).\n> \n> I am not subscribed to this list, so please CC me for replies. (Also\n> tell me if there is a more appropriate forum for this, but\n> www.postgresql.org doesn't have a listed security issue address).\n> -- \n> Andrew Bartlett\n> [email protected]\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? GNUmakefile\n? src/Makefile.custom\n? src/GNUmakefile\n? src/Makefile.global\n? src/log\n? src/crtags\n? src/backend/postgres\n? src/backend/catalog/global.description\n? src/backend/catalog/global.bki\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? 
src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pgaccess/pgaccess\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2.1\n? src/interfaces/libpgtcl/libpgtcl.so.2.1\n? src/interfaces/libpq/libpq.so.2.1\n? src/interfaces/perl5/blib\n? src/interfaces/perl5/Makefile\n? src/interfaces/perl5/pm_to_blib\n? src/interfaces/perl5/Pg.c\n? src/interfaces/perl5/Pg.bs\n? src/pl/plperl/blib\n? src/pl/plperl/Makefile\n? src/pl/plperl/pm_to_blib\n? src/pl/plperl/SPI.c\n? src/pl/plperl/plperl.bs\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/bin/psql/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/command.c,v\nretrieving revision 1.38\ndiff -c -r1.38 command.c\n*** src/bin/psql/command.c\t2000/11/13 23:37:53\t1.38\n--- src/bin/psql/command.c\t2000/11/25 06:18:33\n***************\n*** 13,19 ****\n #include <ctype.h>\n #ifndef WIN32\n #include <sys/types.h>\t\t\t/* for umask() */\n! #include <sys/stat.h>\t\t\t/* for umask(), stat() */\n #include <unistd.h>\t\t\t\t/* for geteuid(), getpid(), stat() */\n #else\n #include <win32.h>\n--- 13,20 ----\n #include <ctype.h>\n #ifndef WIN32\n #include <sys/types.h>\t\t\t/* for umask() */\n! #include <sys/stat.h>\t\t\t/* for stat() */\n! #include <fcntl.h>\t\t\t\t/* open() flags */\n #include <unistd.h>\t\t\t\t/* for geteuid(), getpid(), stat() */\n #else\n #include <win32.h>\n***************\n*** 1397,1403 ****\n \tFILE\t *stream;\n \tconst char *fname;\n \tbool\t\terror = false;\n! 
\n #ifndef WIN32\n \tstruct stat before,\n \t\t\t\tafter;\n--- 1398,1405 ----\n \tFILE\t *stream;\n \tconst char *fname;\n \tbool\t\terror = false;\n! \tint\t\t\tfd;\n! \t\n #ifndef WIN32\n \tstruct stat before,\n \t\t\t\tafter;\n***************\n*** 1411,1417 ****\n \t{\n \t\t/* make a temp file to edit */\n #ifndef WIN32\n- \t\tmode_t\t\toldumask;\n \t\tconst char *tmpdirenv = getenv(\"TMPDIR\");\n \n \t\tsprintf(fnametmp, \"%s/psql.edit.%ld.%ld\",\n--- 1413,1418 ----\n***************\n*** 1422,1436 ****\n #endif\n \t\tfname = (const char *) fnametmp;\n \n! #ifndef WIN32\n! \t\toldumask = umask(0177);\n! #endif\n! \t\tstream = fopen(fname, \"w\");\n! #ifndef WIN32\n! \t\tumask(oldumask);\n! #endif\n \n! \t\tif (!stream)\n \t\t{\n \t\t\tpsql_error(\"couldn't open temp file %s: %s\\n\", fname, strerror(errno));\n \t\t\terror = true;\n--- 1423,1433 ----\n #endif\n \t\tfname = (const char *) fnametmp;\n \n! \t\tfd = open(fname, O_WRONLY|O_CREAT|O_EXCL, 0600);\n! \t\tif (fd != -1)\n! \t\t\tstream = fdopen(fd, \"w\");\n \n! \t\tif (fd == -1 || !stream)\n \t\t{\n \t\t\tpsql_error(\"couldn't open temp file %s: %s\\n\", fname, strerror(errno));\n \t\t\terror = true;",
"msg_date": "Sat, 25 Nov 2000 01:19:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SECURITY: psql allows symlink games in /tmp"
},
{
"msg_contents": "Looks like what I would have done if I knew C.\n\nThe only issue remaining is a policy issue as to if psql should call an\neditor in /tmp at all, considering the issues raised by the recent joe\nvulnerability, ie can we trust the editor not to do a crazy thing, like\nnot creating a similarly predictable backup-file name etc. It should at\nleast be documented so that a more paranoid sys-admin can make sure that\nusers use a private TMPDIR.\n\nThanks for the quick response,\n\nAndrew Bartlett\[email protected]\n\nBruce Momjian wrote:\n> \n> Thanks for the pointer. Here is a diff to fix the problem. How does it\n> look to you?\n> \n> > This code in psql/command.c allows *any* system user to place a\n> > predictably named symbolic link in /tmp and use it to alter/destroy\n> > files owned by the user running psql. (tested - postgresql 7.0.2).\n> >\n> > All the information a potential attacker would need are available via a\n> > simple 'ps'.\n> >\n> > It might (untested) also allow an another user to exploit the race\n> > between the closing of the file by the editor and the re-reading of its\n> > contents to execute arbitrary SQL commands.\n> >\n> > IMHO these files, if they must be created in /tmp should at least be\n> > created O_EXCL, but there are still editor vulnerabilities with opening\n> > any files in a world writeable directory (see recent joe Vulnerability:\n> > http://lwn.net/2000/1123/a/sec-joe.php3)\n> >\n> > My system is RedHat 6.2 on an i686, with Postgresql 7.0.2 but the same\n> > code currently exists in CVS (or at least CVS-web).\n> >\n> > I am not subscribed to this list, so please CC me for replies. 
(Also\n> > tell me if there is a more appropriate forum for this, but\n> > www.postgresql.org doesn't have a listed security issue address).\n> > --\n> > Andrew Bartlett\n> > [email protected]\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ------------------------------------------------------------------------\n> ? config.log\n> ? config.cache\n> ? config.status\n> ? GNUmakefile\n> ? src/Makefile.custom\n> ? src/GNUmakefile\n> ? src/Makefile.global\n> ? src/log\n> ? src/crtags\n> ? src/backend/postgres\n> ? src/backend/catalog/global.description\n> ? src/backend/catalog/global.bki\n> ? src/backend/catalog/template1.bki\n> ? src/backend/catalog/template1.description\n> ? src/backend/port/Makefile\n> ? src/bin/initdb/initdb\n> ? src/bin/initlocation/initlocation\n> ? src/bin/ipcclean/ipcclean\n> ? src/bin/pg_config/pg_config\n> ? src/bin/pg_ctl/pg_ctl\n> ? src/bin/pg_dump/pg_dump\n> ? src/bin/pg_dump/pg_restore\n> ? src/bin/pg_dump/pg_dumpall\n> ? src/bin/pg_id/pg_id\n> ? src/bin/pg_passwd/pg_passwd\n> ? src/bin/pgaccess/pgaccess\n> ? src/bin/pgtclsh/Makefile.tkdefs\n> ? src/bin/pgtclsh/Makefile.tcldefs\n> ? src/bin/pgtclsh/pgtclsh\n> ? src/bin/pgtclsh/pgtksh\n> ? src/bin/psql/psql\n> ? src/bin/scripts/createlang\n> ? src/include/config.h\n> ? src/include/stamp-h\n> ? src/interfaces/ecpg/lib/libecpg.so.3.2.0\n> ? src/interfaces/ecpg/preproc/ecpg\n> ? src/interfaces/libpgeasy/libpgeasy.so.2.1\n> ? src/interfaces/libpgtcl/libpgtcl.so.2.1\n> ? src/interfaces/libpq/libpq.so.2.1\n> ? src/interfaces/perl5/blib\n> ? src/interfaces/perl5/Makefile\n> ? src/interfaces/perl5/pm_to_blib\n> ? src/interfaces/perl5/Pg.c\n> ? src/interfaces/perl5/Pg.bs\n> ? src/pl/plperl/blib\n> ? src/pl/plperl/Makefile\n> ? src/pl/plperl/pm_to_blib\n> ? src/pl/plperl/SPI.c\n> ? src/pl/plperl/plperl.bs\n> ? 
src/pl/plpgsql/src/libplpgsql.so.1.0\n> ? src/pl/tcl/Makefile.tcldefs\n> Index: src/bin/psql/command.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/command.c,v\n> retrieving revision 1.38\n> diff -c -r1.38 command.c\n> *** src/bin/psql/command.c 2000/11/13 23:37:53 1.38\n> --- src/bin/psql/command.c 2000/11/25 06:18:33\n> ***************\n> *** 13,19 ****\n> #include <ctype.h>\n> #ifndef WIN32\n> #include <sys/types.h> /* for umask() */\n> ! #include <sys/stat.h> /* for umask(), stat() */\n> #include <unistd.h> /* for geteuid(), getpid(), stat() */\n> #else\n> #include <win32.h>\n> --- 13,20 ----\n> #include <ctype.h>\n> #ifndef WIN32\n> #include <sys/types.h> /* for umask() */\n> ! #include <sys/stat.h> /* for stat() */\n> ! #include <fcntl.h> /* open() flags */\n> #include <unistd.h> /* for geteuid(), getpid(), stat() */\n> #else\n> #include <win32.h>\n> ***************\n> *** 1397,1403 ****\n> FILE *stream;\n> const char *fname;\n> bool error = false;\n> !\n> #ifndef WIN32\n> struct stat before,\n> after;\n> --- 1398,1405 ----\n> FILE *stream;\n> const char *fname;\n> bool error = false;\n> ! int fd;\n> !\n> #ifndef WIN32\n> struct stat before,\n> after;\n> ***************\n> *** 1411,1417 ****\n> {\n> /* make a temp file to edit */\n> #ifndef WIN32\n> - mode_t oldumask;\n> const char *tmpdirenv = getenv(\"TMPDIR\");\n> \n> sprintf(fnametmp, \"%s/psql.edit.%ld.%ld\",\n> --- 1413,1418 ----\n> ***************\n> *** 1422,1436 ****\n> #endif\n> fname = (const char *) fnametmp;\n> \n> ! #ifndef WIN32\n> ! oldumask = umask(0177);\n> ! #endif\n> ! stream = fopen(fname, \"w\");\n> ! #ifndef WIN32\n> ! umask(oldumask);\n> ! #endif\n> \n> ! if (!stream)\n> {\n> psql_error(\"couldn't open temp file %s: %s\\n\", fname, strerror(errno));\n> error = true;\n> --- 1423,1433 ----\n> #endif\n> fname = (const char *) fnametmp;\n> \n> ! fd = open(fname, O_WRONLY|O_CREAT|O_EXCL, 0600);\n> ! 
if (fd != -1)\n> ! stream = fdopen(fd, \"w\");\n> \n> ! if (fd == -1 || !stream)\n> {\n> psql_error(\"couldn't open temp file %s: %s\\n\", fname, strerror(errno));\n> error = true;\n\n-- \nAndrew Bartlett\[email protected]\n",
"msg_date": "Sat, 25 Nov 2000 17:46:05 +1100",
"msg_from": "Andrew Bartlett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SECURITY: psql allows symlink games in /tmp"
},
{
"msg_contents": "> Looks like what I would have done if I knew C.\n> \n> The only issue remaining is a policy issue as to if psql should call an\n> editor in /tmp at all, considering the issues raised by the recent joe\n> vulnerability, ie can we trust the editor not to do a crazy thing, like\n> not creating a similarly predictable backup-file name etc. It should at\n> least be documented so that a more paranoid sys-admin can make sure that\n> users use a private TMPDIR.\n\nNot sure it is worth the addition.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 25 Nov 2000 09:28:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SECURITY: psql allows symlink games in /tmp"
}
]
|
[
{
"msg_contents": "Vadim,\n\nIn xlog.c, the declaration of struct ControlFileData says:\n\n /*\n * MORE DATA FOLLOWS AT THE END OF THIS STRUCTURE - locations of data\n * dirs\n */\n\nIs this comment accurate? I don't see any sign in the code of placing\nextra data after the declared structure. If you're planning to do it\nin future, I think it would be a bad idea. I'd prefer to see all the\ndata in that file declared as a fixed-size structure, for two reasons:\n\n1. I'd like to change the statement that reads pg_control into memory\nfrom\n\n if (read(fd, ControlFile, BLCKSZ) != BLCKSZ)\n elog(STOP, \"read(\\\"%s\\\") failed: %m\", ControlFilePath);\n\nto\n\n if (read(fd, ControlFile, sizeof(ControlFileData)) != sizeof(ControlFileData))\n elog(STOP, \"read(\\\"%s\\\") failed: %m\", ControlFilePath);\n\nWith the existing code, if one recompiles with a larger BLCKSZ and then\ntries to restart without initdb, one gets an unhelpful message about\nread() failed --- with no relevant error condition, since early EOF\ndoesn't set errno --- rather than the helpful complaint about BLCKSZ\nmismatch that would come out if we got as far as checking the contents\nof the struct. We've already had one user complaint about this, so it's\nfar from hypothetical. If the protocol were to write BLCKSZ amount of\ndata but only read sizeof(ControlFileData) worth, then we'd have a\nbetter shot at issuing useful rather than useless error messages for\npg_control mismatches.\n\n2. If there is any large amount of data appended to the struct, I'm\na little worried about the data overrunning the BLCKSZ space allocated\nfor it. Especially if someone were to reduce BLCKSZ because they were\nworried about 8K page writes not being atomic. I'd prefer to place all\nthe expected data in the struct and have an explicit test for\nsizeof(ControlFileData) <= BLCKSZ. \n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2000 20:15:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are pg_control contents really variable-length?"
},
{
"msg_contents": "> In xlog.c, the declaration of struct ControlFileData says:\n> \n> /*\n> * MORE DATA FOLLOWS AT THE END OF THIS STRUCTURE - locations of data\n> * dirs\n> */\n> \n> Is this comment accurate? I don't see any sign in the code of placing\n> extra data after the declared structure. If you're planning to do it\n> in future, I think it would be a bad idea. I'd prefer to see all the\n\nThat was my thought but as you see nothing was done for it, so\nfeel free to change anything you want there.\n\nVadim\n\n\n",
"msg_date": "Sat, 25 Nov 2000 10:56:51 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are pg_control contents really variable-length?"
}
]
|
[
{
"msg_contents": "Hello, Francisco Figueiredo Jr.\n\nplpgsql did not support @id as a parameter (it needs double-quoting), so somebody suggested I use vid\nas the parameter name. When vid is used as a parameter for plpgsql, in the C# program we use @vid, and Npgsql\nwill delete the @, then pass vid to plpgsql. So I want to change Npgsql not to delete the @ in the program.\n\n\nThanks & Regards!\n\t\t\t \nArnold.Zhu\n2000-11-25\n\n\n\n",
"msg_date": "Sat, 25 Nov 2000 10:14:44 +0800",
"msg_from": "\"Arnold.Zhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to make @id or $id as parameter name in plpgsql,\n\tis it available?"
}
]
|
[
{
"msg_contents": "This is part question, part short, sad tale.\n\nWorking on my database, I had a view that would lock up the \nmachine (eats all available memory, soon goes belly-up.) Turned out \nto be a recursive view: view A asked a question of view B that \nasked view A. [is it possible for pgsql to detect this? I worry about \nmy users doing this.] [and, yes, I should use kernel-level controls to \nmake sure that the postmaster process can't use all available \nresources; but hey, it's a development machine. ]\n\nAnyway, as I was tracking down this problem, I couldn't restart \nPostgreSQL if the machine had crashed and I had a /tmp/.PGSQL.* \nfile in the temp directory; it assumed that the socket was in use. \nSo, I began restarting pgsql w/a line like\n\nrm -f /tmp/.PGSQL.* && postmaster -i >log 2>log &\n\nWhich works great. Except that I *kept* using this for two weeks \nafter the view problem (damn that bash up-arrow laziness!), and \nyesterday, used it to restart PostgreSQL except (oops!) it was \nalready running.\n\nResults: no database at all. All classes (tables/views/etc) returned \n0 records (meaning that no tables showed up in psql's \\d, since \npg_class returned nothing.)\n\nI don't know enough about why -- the /tmp files appear to have a \nlength of 0, but pgsql seems to care a great deal about them.\n\n[ I did have a very fresh pg_dumpall file--thank you, anacron--so I \nlost about 30 minutes worth of work, but it would have been \neverything if I never backed up. ]\n\nMy advice:\n\n1) Use pg_dumpall.\n2) Don't delete those /tmp files until you're *sure* you're out of Pg\n\nAnyone know what *happened* and *why*? Was there anything I \ncould have done?\n\nThanks!\n\n[ I do read these lists, but always appreciate a cc on responses so I \ndon't accidentally miss them. TIA. ]\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Sat, 25 Nov 2000 16:41:38 -0500",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "\"Joel Burton\" <[email protected]> writes:\n> Working on my database, I had a view that would lock up the \n> machine (eats all available memory, soon goes belly-up.) Turned out \n> to be a recursive view: view A asked a question of view B that \n> asked view A. [is it possible for pgsql to detect this?\n\nIt should have been detected --- there is a check in the rewriter that's\nsupposed to error out after ten recursive rewrite calls. Maybe that\nlogic is broken, or misses certain cases. Could you exhibit the views\nthat caused this behavior for you?\n\n> So, I began restarting pgsql w/a line like\n\n> rm -f /tmp/.PGSQL.* && postmaster -i >log 2>log &\n\n> Which works great. Except that I *kept* using this for two weeks \n> after the view problem (damn that bash up-arrow laziness!), and \n> yesterday, used it to restart PostgreSQL except (oops!) it was \n> already running.\n\n> Results: no database at all. All classes (tables/views/etc) returned \n> 0 records (meaning that no tables showed up in psql's \\d, since \n> pg_class returned nothing.)\n\nUgh. The reason that removing the socket file allowed a second\npostmaster to start up is that we use an advisory lock on the socket\nfile as the interlock that prevents two PMs on the same port number.\nRemove the socket file, poof no interlock.\n\n*However*, there is a second line of defense to prevent two postmasters\nin the same directory, and I don't understand why that didn't trigger.\nUnless you are running a version old enough to not have it. What PG\nversion is this, anyway?\n\nAssuming you got past both interlocks, the second postmaster would have\nreinitialized Postgres' shared memory block for that database, which\nwould have been a Bad Thing(tm) ... but it would not have led to any\nimmediate damage to your on-disk files, AFAICS. Was the database still\nhosed after you stopped both postmasters and started a fresh one? 
(Did\nyou even try that?)\n\nThis story does indicate that we need a less fragile interlock against\nstarting two postmasters on one database. I have to admit that it\nhadn't occurred to me that you could break the port-number interlock\nso easily as that :-(. But obviously you can, so we need a different\nway of representing the interlock. Hackers, any thoughts?\n\nNote: I've narrowed followups to just pghackers, since that seems like\nthe right forum for discussing a better interlock mechanism.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 17:35:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001125 16:37]:\n> \"Joel Burton\" <[email protected]> writes:\n> \n> This story does indicate that we need a less fragile interlock against\n> starting two postmasters on one database. I have to admit that it\n> hadn't occurred to me that you could break the port-number interlock\n> so easily as that :-(. But obviously you can, so we need a different\n> way of representing the interlock. Hackers, any thoughts?\nhow about a .pid/.port/.??? file in the /data directory, and a lock on that? \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 25 Nov 2000 16:40:43 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "On Sat, Nov 25, 2000 at 05:35:13PM -0500, some SMTP stream spewed forth: \n*snip*\n> \n> > So, I began restarting pgsql w/a line like\n> \n> > rm -f /tmp/.PGSQL.* && postmaster -i >log 2>log &\n> \n> > Which works great. Except that I *kept* using this for two weeks \n> > after the view problem (damn that bash up-arrow laziness!), and \n> > yesterday, used it to restart PostgreSQL except (oops!) it was \n> > already running.\n> \n> > Results: no database at all. All classes (tables/views/etc) returned \n> > 0 records (meaning that no tables showed up in psql's \\d, since \n> > pg_class returned nothing.)\n> \n\n*snip Tom's reply*\nI have a situation vaguely related to this.\nAt some point Postgres was not shut down properly and now every time at \nstartup the error log gets something like:\n\n---------\nroot% tail -f errlog\nWaiting for postmaster starting up...DEBUG: Data Base System is starting\nup at Sat Nov 25 16:53:10 2000\nDEBUG: Data Base System was interrupted being in production at Sat Nov\n25 16:35:27 2000\nDEBUG: Data Base System is in production state at Sat Nov 25 16:53:10\n2000\nFATAL 1: ReleaseLruFile: No open files available to be closed\n............................................................pg_ctl:\npostmaster does not start up\n---------\n\nAfter that, all postgres processes die and the cycle begins again on\nsubsequent attempts to start postgres.\nAt one point I would receive some \"Too many open files\" (or similar)\nerror with postgres holding more than 750 file descriptors -- almost\nentirely consisting of socket streams.\nWhat is the significance of \"ReleaseLruFile\" and how can I repair this?\n\nThis is using FreeBSD 4.1-RELEASE and Postgres 7.0.2.\n\nThanks\n\ngh\n\n",
"msg_date": "Sat, 25 Nov 2000 17:03:27 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> * Tom Lane <[email protected]> [001125 16:37]:\n>> This story does indicate that we need a less fragile interlock against\n>> starting two postmasters on one database. I have to admit that it\n>> hadn't occurred to me that you could break the port-number interlock\n>> so easily as that :-(. But obviously you can, so we need a different\n>> way of representing the interlock. Hackers, any thoughts?\n\n> how about a .pid/.port/.??? file in the /data directory, and a lock on that?\n\nNope, 'cause it wouldn't protect you against two postmasters in\ndifferent data directories trying to use the same port number.\nThe port-number lock has to use a system-wide mechanism.\n\nYou may want to go back and review the previous threads that have\ndiscussed interlock issues. We have really three independent resources\nthat we have to ensure only one postmaster is using at a time:\n\n1. Port number (for Unix socket, IP address, etc)\n\n2. Data directory (database files)\n\n3. Shared memory.\n\nUp to now shared memory has been protected more or less implicitly\nby the port-number lock, since the shared memory IPC key is derived\nfrom the port number. However, the \"virtual host\" patch that we\nrecently accepted (way prematurely IMHO) breaks that correspondence.\nI suspect that we really ought to try to have an independent interlock\non the shared memory block itself. There was a thread around 4/30/00\nconcerning changing the way that shmem IPC keys are generated, and\nmaybe that would address this concern.\n\nIf we weren't relying on port number to protect shared memory, I think\nthe existing interlocks on port would be sufficient. The kernel\nenforces an interlock on listening to the same IP address, so that's\nOK, and an advisory lock on the socket file is OK for preventing two\npostmasters from listening to the same socket file. 
(There's no real\nreason to prevent postmasters from using similarly-socket-numbered\nsocket files in different directories, other than the shmem key issue,\nso a lock on the socket file is really just what we want for that\nspecific resource.)\n\nThere is a related issue on my todo list, though --- didn't we find out\nawhile back that some older Linux kernels crash and burn if one attempts\nto get an advisory lock on a socket file? (See thread 7/6/00) Were we\ngoing to fix that, and if so how? Or will we just tell people that they\nhave to update their kernel to run Postgres? The current configure\nscript \"works around\" this by disabling the advisory lock on *all*\nversions of Linux, which I regard as a completely unacceptable\nsolution...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 18:10:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "GH <[email protected]> writes:\n> FATAL 1: ReleaseLruFile: No open files available to be closed\n> ............................................................pg_ctl:\n> postmaster does not start up\n\n> After that, all postgres processes die and the cycle begins again on\n> subsequent attempts to start postgres.\n> At one point I would receive some \"Too many open files\" (or similar)\n> error with postgres holding more than 750 file descriptors -- almost\n> entirely consisting of socket streams.\n> What is the significance of \"ReleaseLruFile\" and how can I repair this?\n\n> This is using FreeBSD 4.1-RELEASE and Postgres 7.0.2.\n\n7.0.3 will probably help --- the message is coming out of some\ninappropriate error recovery code that we fixed in 7.0.3.\n\nThe underlying problem, however, is that you are running out of kernel\nfile table slots (ENFILE or EMFILE error return from open()). Not\nenough info here to tell why that's happening.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 18:40:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "Tom Lane writes:\n\n> There is a related issue on my todo list, though --- didn't we find out\n> awhile back that some older Linux kernels crash and burn if one attempts\n> to get an advisory lock on a socket file? (See thread 7/6/00) Were we\n> going to fix that, and if so how? Or will we just tell people that they\n> have to update their kernel to run Postgres? The current configure\n> script \"works around\" this by disabling the advisory lock on *all*\n> versions of Linux, which I regard as a completely unacceptable\n> solution...\n\nFirstly, AFAIK there's no official production kernel that fixes this. \nWhen and if it gets fixed we can change that logic.\n\nI have simple test program that exhibits the problem (taken from the\nkernel mailing list), but\n\na) You shouldn't run test programs in configure.\n\nb) You really shouldn't run test programs in configure that set up\n networking connections.\n\nc) You definitely shouldn't run test programs in configure that provoke\n kernel exceptions.\n\nWe could use flock() on Linux, though.\n\n\nMaybe we could name the socket file .s.PGSQL.port.pid and make\n.s.PGSQL.port a symlink. Then you can find out whether the postmaster\nthat created the file is still running. (You could even put the actual\nsocket file into the data directory, although that would require\nre-thinking the file permissions on the latter.)\n\nActually, this turns out to be similar to what you wrote in\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1998-08/msg00835.html\n\n\nBut we really should be fixing the IPC interlock with IPC_EXCL, but the\ncode changes look to be non-trivial.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 26 Nov 2000 01:28:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.*\n files"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Maybe we could name the socket file .s.PGSQL.port.pid and make\n> .s.PGSQL.port a symlink. Then you can find out whether the postmaster\n> that created the file is still running.\n\nOr just create a lockfile /tmp/.s.PGSQL.port#.lock, ie, same name as\nsocket file with \".lock\" added (containing postmaster's PID). Then we\ncould share code with the data-directory-lockfile case.\n\n> Actually, this turns out to be similar to what you wrote in\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1998-08/msg00835.html\n\nWell, we've talked before about moving the socket files to someplace\nsafer than /tmp. The problem is to find another place that's not\nplatform-dependent --- else you've got a major configuration headache.\n\n> But we really should be fixing the IPC interlock with IPC_EXCL, but the\n> code changes look to be non-trivial.\n\nAFAIR the previous thread, it wasn't that bad, it was just a matter of\nsomeone taking the time to do it. Maybe I'll have a go at it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 19:41:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "On Sat, Nov 25, 2000 at 06:40:12PM -0500, some SMTP stream spewed forth: \n> GH <[email protected]> writes:\n> > FATAL 1: ReleaseLruFile: No open files available to be closed\n> > ............................................................pg_ctl:\n> > postmaster does not start up\n> \n> > After that, all postgres processes die and the cycle begins again on\n> > subsequent attempts to start postgres.\n> > At one point I would receive some \"Too many open files\" (or similar)\n> > error with postgres holding more than 750 file descriptors -- almost\n> > entirely consisting of socket streams.\n> > What is the significance of \"ReleaseLruFile\" and how can I repair this?\n> \n> > This is using FreeBSD 4.1-RELEASE and Postgres 7.0.2.\n> \n> 7.0.3 will probably help --- the message is coming out of some\n> inappropriate error recovery code that we fixed in 7.0.3.\n> \n> The underlying problem, however, is that you are running out of kernel\n> file table slots (ENFILE or EMFILE error return from open()). Not\n> enough info here to tell why that's happening.\n\nWell, through some research of my own I have discovered that the file\nissue is somehow related to our startup script:\n/usr/local/etc/rc.d/pgsql.sh.\nI am not sure how familiar you are with FreeBSD's startup process, but \nit will suffice to say that this script expects one of three arguments:\nstart, stop, or status -- apparently corresponding to the options of\npg_ctl. \n\nWhen I start the postgres server manually, it runs relatively fine.\ni.e. \n# su -l pgsql /usr/local/pgsql/bin/pg_ctl -w start > /usr/local/pgsql/errlog 2>&1 &\n\nHere is pgsql.sh:\n\n#!/bin/sh\n\n# $FreeBSD: ports/databases/postgresql7/files/pgsql.sh.tmpl,v 1.8\n2000/05/25 09:35:25 andreas Exp $\n#\n# For postmaster startup options, edit $PGDATA/postmaster.opts.default\n# Preinstalled options are -i -o \"-F\"\n\ncase $1 in\nstart)\n [ -d /usr/local/pgsql/lib ] && /sbin/ldconfig -m /usr/local/pgsql/lib\n# Clean up by Matt\n# This is a really bad idea, unless we are absolutely certain that there\n# are no postgres processes running or that we feel like restoring\n# from a recent backup. ;-) gh\n rm -f /tmp/.s.PGSQL*\n [ -x /usr/local/pgsql/bin/pg_ctl ] && {\n su -l pgsql \\\n /usr/local/pgsql/bin/pg_ctl -w start >\n/usr/local/pgsql/errlog 2>&1 &\n# /usr/local/pgsql/bin/pg_ctl -w start -o \"-B 64 -N 32\" start >\n/usr/local/pgsql/errlog 2>&1 &\n echo -n ' pgsql'\n }\n ;;\n\nstop)\n [ -x /usr/local/pgsql/bin/pg_ctl ] && {\n su -l pgsql -c 'exec /usr/local/pgsql/bin/pg_ctl -w -m fast stop'\n }\n ;;\n\nstatus)\n [ -x /usr/local/pgsql/bin/pg_ctl ] && {\n su -l pgsql -c 'exec /usr/local/pgsql/bin/pg_ctl status'\n }\n ;;\n\n*)\n echo \"usage: `basename $0` {start|stop|status}\" >&2\n exit 64\n ;;\nesac\n\nEOF\n\nrunning this script with \"start\" causes the postgres server to start, \nrun out of files, and then shutdown. Postgres is useable until it runs\nout of files and shuts down.\n\n\nThanks.\n\ngh\n\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Nov 2000 18:59:32 -0600",
"msg_from": "GH <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "On Sat, Nov 25, 2000 at 07:41:52PM -0500, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Actually, this turns out to be similar to what you wrote in\n> > http://www.postgresql.org/mhonarc/pgsql-hackers/1998-08/msg00835.html\n> \n> Well, we've talked before about moving the socket files to someplace\n> safer than /tmp. The problem is to find another place that's not\n> platform-dependent --- else you've got a major configuration headache.\n\nCould this be described in e.g. /etc/postgresql/pg_client.conf?\na la the dbname idea?\n\nI cant remember the exact terminology, but there is a\nconfiguration file for clients, set at compile time where are\nset the connection params for clients.\n\n---------\n\n[db_foo]\ntype=inet\nhost=srv3.devel.net\nport=1234\n# there should be a way of specifing dbname later too\ndatabase=asdf\n\n[db_baz]\ntype=unix\nsocket=/var/lib/postgres/comm/db_baz\n\n--------\n\nAlso there should be possible to give another configuration file\nwith env vars or command-line parameters.\n\nWell, just a idea.\n\n-- \nmarko\n\n",
"msg_date": "Mon, 27 Nov 2000 15:21:17 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "Marko Kreen <[email protected]> writes:\n>> Well, we've talked before about moving the socket files to someplace\n>> safer than /tmp. The problem is to find another place that's not\n>> platform-dependent --- else you've got a major configuration headache.\n\n> Could this be described in e.g. /etc/postgresql/pg_client.conf?\n\nThe major objection to that is that if we rely on such a config file,\nthen you *cannot* install postgres without root permission (to make\nthe config file). Currently it's possible to fire up a test postmaster\nwithout any special privileges whatever, and that's a nice feature.\n\nA related objection is that such a file will itself become a source of\ncontention among multiple postmasters. Suppose I'm setting up a test\ninstallation of a new version, while still running the prior release\nas my main database. OK, I fire up the test postmaster on a different\nport, and now I want to launch some of my usual clients for testing.\nOops, they connect to the old postmaster because that's what it says\nto do in /etc/postgresql/pg_client.conf. I can't get them to connect\nto the new postmaster unless I change /etc/postgresql/pg_client.conf,\nwhich I *don't* want to do at this stage --- it'll break non-test\ninstances of these same clients.\n\nI see some value in the pg_client.conf idea as a *per user* address\nbook, to shortcut full specification of all the databases that user\nmight want to connect to. As a system-wide configuration file, I think\nit's a terrible idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Nov 2000 11:05:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "On Mon, Nov 27, 2000 at 11:05:40AM -0500, Tom Lane wrote:\n> Marko Kreen <[email protected]> writes:\n> >> Well, we've talked before about moving the socket files to someplace\n> >> safer than /tmp. The problem is to find another place that's not\n> >> platform-dependent --- else you've got a major configuration headache.\n> \n> > Could this be described in e.g. /etc/postgresql/pg_client.conf?\n> \n> The major objection to that is that if we rely on such a config file,\n> then you *cannot* install postgres without root permission (to make\n> the config file). Currently it's possible to fire up a test postmaster\n> without any special privileges whatever, and that's a nice feature.\n\nI do not see this much of a problem tho'.\n\n[ I use the words XCONFIG and XNAME because I have no good idea\nwhat they should be called. ]\n\nserver startup precedence:\n\n1) postmaster --xconfig ./foo.cfg\n2) PG_XCONFIG=./foo.cfg\n3) /etc/postgresql/pg_xconfig (compile time spec)\n\nthere is also a thing 'xname' which is the section of config\nfile to use:\n\n1) --xname foodb\n2) PG_XNAME=foodb\n3) default_xname specified in config.\n\nso, client (libpq (psql)) startup:\n\n1) psql --xconfig ./xxx\n2) PG_XCONFIG=./xxx\n3) ~/.pg_xconfig\n4) /etc/postgresql/pg_xconfig\n\nand xname as in server.\n\n\nIt may be better if server config is in separate file because we\nmay want to give more options to server (ipc keys, data dirs,\netc). But I guess its sipler when they read the same file and\nclient simply ignores server directives. And server ignores\nremote servers.\n\nAlso it should be possible to put all directives into commend\nline too, for both client and server.\n\n> \n> A related objection is that such a file will itself become a source of\n> contention among multiple postmasters. Suppose I'm setting up a test\n> installation of a new version, while still running the prior release\n> as my main database. OK, I fire up the test postmaster on a different\n> port, and now I want to launch some of my usual clients for testing.\n> Oops, they connect to the old postmaster because that's what it says\n> to do in /etc/postgresql/pg_client.conf. I can't get them to connect\n> to the new postmaster unless I change /etc/postgresql/pg_client.conf,\n> which I *don't* want to do at this stage --- it'll break non-test\n> instances of these same clients.\n\npostmaster --xconfig ./test.cfg --xname testdb &\npsql --xconfig ./test.cfg --xname testdb\n\n> \n> I see some value in the pg_client.conf idea as a *per user* address\n> book, to shortcut full specification of all the databases that user\n> might want to connect to. As a system-wide configuration file, I think\n> it's a terrible idea.\n\nSo what you think of the above idea?\n\n-- \nmarko\n\n",
"msg_date": "Mon, 27 Nov 2000 19:16:51 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files"
},
{
"msg_contents": "\n\nOn 25 Nov 2000, at 17:35, Tom Lane wrote:\n\n> > So, I began restarting pgsql w/a line like\n> \n> > rm -f /tmp/.PGSQL.* && postmaster -i >log 2>log &\n> \n> > Which works great. Except that I *kept* using this for two weeks\n> > after the view problem (damn that bash up-arrow laziness!), and\n> > yesterday, used it to restart PostgreSQL except (oops!) it was\n> > already running.\n> \n> > Results: no database at all. All classes (tables/views/etc) returned\n> > 0 records (meaning that no tables showed up in psql's \\d, since\n> > pg_class returned nothing.)\n> \n> Ugh. The reason that removing the socket file allowed a second\n> postmaster to start up is that we use an advisory lock on the socket\n> file as the interlock that prevents two PMs on the same port number.\n> Remove the socket file, poof no interlock.\n> \n> *However*, there is a second line of defense to prevent two\n> postmasters in the same directory, and I don't understand why that\n> didn't trigger. Unless you are running a version old enough to not\n> have it. What PG version is this, anyway?\n\n7.1devel, from about 1 week ago.\n \n> Assuming you got past both interlocks, the second postmaster would\n> have reinitialized Postgres' shared memory block for that database,\n> which would have been a Bad Thing(tm) ... but it would not have led to\n> any immediate damage to your on-disk files, AFAICS. Was the database\n> still hosed after you stopped both postmasters and started a fresh\n> one? (Did you even try that?)\n\nYes, I stopped both, rebooted machine, restarted postmaster. \nRebooted machine, used just postgres, tried to vacuum, tried to \ndump, etc. Always the same story.\n \n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Tue, 28 Nov 2000 13:14:21 -0500",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "\"Joel Burton\" <[email protected]> writes:\n> On 25 Nov 2000, at 17:35, Tom Lane wrote:\n>> Ugh. The reason that removing the socket file allowed a second\n>> postmaster to start up is that we use an advisory lock on the socket\n>> file as the interlock that prevents two PMs on the same port number.\n>> Remove the socket file, poof no interlock.\n>> \n>> *However*, there is a second line of defense to prevent two\n>> postmasters in the same directory, and I don't understand why that\n>> didn't trigger. Unless you are running a version old enough to not\n>> have it. What PG version is this, anyway?\n\n> 7.1devel, from about 1 week ago.\n\nAh, I see why the data-directory interlock file wasn't helping: it\nwasn't checked until *after* shared memory was set up (read clobbered\n:-(). This was not a very bright choice. I'm still surprised that\nthe shared-memory reset should've trashed your database so thoroughly,\nthough.\n\nOver the past two days I've committed changes that should make the data\ndirectory, socket file, and shared memory interlocks considerably more\nrobust. In particular, mechanically doing \"rm -f /tmp/.s.PGSQL.5432\"\nshould never be necessary anymore.\n\nSorry about your trouble...\n\nBTW, your original message mentioned something about a recursive view\ndefinition that wasn't being recognized as such. Could you provide\ndetails on that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Nov 2000 17:09:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "GH <[email protected]> writes:\n> running this script with \"start\" causes the postgres server to start, \n> run out of files, and then shutdown. Postgres is useable until it runs\n> out of files and shuts down.\n\nContinuing on that line of thought --- it seems like this must be an\nindication of a file-descriptor leak somewhere. That is, some bit of\ncode forgets to close a file it opened. Cycle through that bit of code\nenough times, and the kernel stops being willing to give you more file\ndescriptors.\n\nIf this is correct, we could probably identify the leak by knowing what\nfile is being opened multiple times. Can you run 'lsof' or some similar\ntool to check for duplicate descriptors being held open by the\npostmaster?\n\nI recall that we have fixed one or two leaks of this kind in the past,\nbut I don't recall details, nor which versions the fixes first appeared\nin.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Nov 2000 17:27:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "> Ah, I see why the data-directory interlock file wasn't helping: it\n> wasn't checked until *after* shared memory was set up (read clobbered\n> :-(). This was not a very bright choice. I'm still surprised that\n> the shared-memory reset should've trashed your database so thoroughly,\n> though.\n> \n> Over the past two days I've committed changes that should make the\n> data directory, socket file, and shared memory interlocks considerably\n> more robust. In particular, mechanically doing \"rm -f\n> /tmp/.s.PGSQL.5432\" should never be necessary anymore.\n\nThat's fantastic. Thanks for the quick fix. \n\n> BTW, your original message mentioned something about a recursive view\n> definition that wasn't being recognized as such. Could you provide\n> details on that?\n\nI can't. It's a few weeks ago, the database has been in furious \ndevelopment, and, of course, I didn't bother to save all those views \nthat crashed my server. I keep trying to re-create it, but can't \nfigure it out. I'm sorry.\n\nI think it wasn't just two views pointing at each other (it would, of \ncourse, be next to impossible to even create those, unless you hand \ntweaked the system tables), but I think was a view-relies-on-a-\nfunction-relies-on-a-view kind of problem. If I ever see it again, I'll \nsave it.\n\nThanks!\n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Wed, 29 Nov 2000 18:49:07 -0500",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
},
{
"msg_contents": "\"Joel Burton\" <[email protected]> writes:\n> I think it wasn't just two views pointing at each other (it would, of \n> course, be next to impossible to even create those, unless you hand \n> tweaked the system tables), but I think was a view-relies-on-a-\n> function-relies-on-a-view kind of problem.\n\nOh, OK. I wouldn't expect the rewriter to realize that that sort of\nsituation is recursive. Depending on what your function is doing, it\nmight or might not be an infinite recursion, so I don't think I'd want\nthe system arbitrarily preventing you from doing this sort of thing.\n\nPerhaps there should be an upper bound on function-call recursion depth\nenforced someplace? Not sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Nov 2000 19:25:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Warning: Don't delete those /tmp/.PGSQL.* files "
}
]